
AI law: Council Presidency calls on EU states to reach a compromise (EURACTIV Germany)

The Spanish presidency of the EU Council has asked Member States for flexibility in the sensitive area of law enforcement ahead of a crucial political meeting on the AI law.

The AI Act is a flagship law to regulate artificial intelligence based on its ability to cause harm. It is currently in the final stage of the legislative process, in which the EU Commission, the Council and the Parliament negotiate the final provisions in so-called trilogues.

EU institutions aim to reach a final agreement in the December 6 trilogue. Before this crucial date, the Spanish Presidency, which is negotiating on behalf of European governments, needs a revised negotiating mandate.

On Friday (November 24), the Presidency circulated the first half of the negotiating mandate, in which it asked for flexibility and pointed to possible landing zones in the area of law enforcement. The mandate is due to reach the table of the Permanent Representatives Committee (COREPER) on Wednesday.

The second half of the mandate deals with foundation models, governance, access to source code, the sanctions regime, the entry into force of the regulation and secondary legislation. It will be discussed at COREPER level on Friday (1 December).


Prohibitions

MEPs have significantly expanded the list of prohibited practices – AI applications deemed to pose an unacceptable risk.

The Presidency proposes accepting a ban on the untargeted scraping of facial images, emotion recognition in the workplace and in educational institutions, biometric categorisation to infer sensitive data such as sexual orientation and religious beliefs, and predictive policing of individuals.

Furthermore, “in a spirit of compromise”, the Presidency proposes adding to the list of high-risk use cases those bans proposed by the European Parliament that were not accepted, namely all other biometric categorisation and emotion recognition applications.

Regarding remote biometric identification, parliamentarians agreed to abandon the total ban on real-time use and, in exchange, to narrow the exceptions for its use and include additional safeguards. The Council Presidency classifies the retrospective use of this technology as “high risk”.


Exceptions for law enforcement

The Council’s mandate contains several exemptions for the use of AI tools by law enforcement authorities. The Presidency notes that it managed to “take on almost all of them”.

These include making the text more flexible for police forces with respect to the obligations on human oversight, the reporting of high-risk systems, post-market surveillance, and confidentiality measures to prevent the disclosure of sensitive operational data.

The Presidency also intends for law enforcement authorities to be able to use emotion recognition and biometric categorization software without informing those affected.

The European Parliament, on the other hand, secured the requirement that law enforcement authorities register high-risk systems in the EU database, albeit in a non-public section. The deadline for large-scale IT systems to comply with AI Act obligations has been set at 2030.


National security exception

France has pushed for a comprehensive national security exemption in the AI law. At the same time, the Presidency noted that the EU Parliament has not shown flexibility towards adopting the wording of the Council’s mandate.

Spain proposes to divide this provision into two paragraphs. The first states that the regulation does not apply to areas not covered by EU law and that it does not in any way affect the competences of Member States in the field of national security or of bodies entrusted with tasks in this field.

The second paragraph states that the AI Act does not apply to systems placed on the market or used for defence and military activities.

Impact assessment on fundamental rights

Socialist MEPs introduced a fundamental rights impact assessment as a new obligation for users of high-risk systems before deployment. For the Presidency, including this obligation is an “absolute necessity” in order to reach an agreement with Parliament.

A critical point was the scope: MEPs called for all users to be covered, while EU states pushed for the obligation to be limited to public bodies. The compromise consists of covering public bodies and only those private actors that provide services of general interest.

Furthermore, the fundamental rights impact assessment would have to cover aspects not already covered by other legal obligations, in order to avoid duplication.

When it comes to risk management, data governance and transparency obligations, users only need to verify that the high-risk system provider has fulfilled these obligations.

For the Presidency, the obligation on public bodies to carry out a six-week consultation should also be dropped and replaced by a simple notification to the competent national authority.


EU negotiations on AI law have stalled

A technical meeting on the EU law on artificial intelligence failed on Friday (10 November). Among other things, Germany called for the proposed approach to foundation models to be withdrawn.

Testing in real-world conditions

A point of contention in the negotiations was the possibility, introduced by the Council, of testing high-risk AI systems outside the legal framework. According to the Presidency’s note, several safeguards were included to make the measure acceptable to Parliament.

The text indicates that people undergoing the test must give informed consent. It also states that in law enforcement cases where it is not possible to ask for consent, the tests and their results must not have any negative impact on the people concerned.

Exemption from conformity assessment

The Council also introduced an emergency procedure allowing law enforcement authorities, in urgent cases, to use a high-risk AI tool that has not yet gone through the conformity assessment procedure.

MEPs want this procedure to be subject to judicial approval, a point that the Presidency considers unacceptable for EU states. As a compromise, the Spanish Presidency proposed the reintroduction of the mechanism that allows the Commission to review the decision.

[Edited by Kjeld Neubert]