European Commission reaches Deal on Artificial Intelligence Act, Parliament still has to Ratify

Saturday, December 9, 2023

The European Commission welcomes the political agreement reached between the European Parliament and the Council on the Artificial Intelligence Act (AI Act), proposed by the Commission in April 2021.

Ursula von der Leyen, President of the European Commission, said: “Artificial intelligence is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act. The EU's AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, today's agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.”

The new rules will be applied directly in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach:

Minimal risk: The vast majority of AI systems fall into the category of minimal risk. Minimal-risk applications such as AI-enabled recommender systems or spam filters will benefit from a free pass and face no obligations, as these systems present only minimal or no risk to citizens' rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.

High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.

Examples of such high-risk AI systems include certain critical infrastructures, for instance in the fields of water, gas and electricity; medical devices; systems used to determine access to educational institutions or to recruit people; and certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk.

Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will, such as toys using voice assistance that encourage dangerous behaviour in minors, systems that allow 'social scoring' by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).

Specific transparency risk: When employing AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI-generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems in a way that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated. Companies not complying with the rules will be fined.
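
The Act does not prescribe any particular marking technique, but as a purely illustrative sketch, such machine-readable labelling could take the form of a provenance record attached to each generated file. The short Python example below (a hypothetical mark_as_synthetic helper, using only the standard library) writes such a record as a sidecar file; real deployments would more likely rely on in-band watermarking or an industry standard such as C2PA content credentials.

    import hashlib
    import json
    from datetime import datetime, timezone

    def mark_as_synthetic(file_path: str, generator: str) -> dict:
        # Illustrative only: the AI Act requires that synthetic content be
        # detectable as artificially generated, but does not mandate this
        # (or any specific) marking scheme.
        with open(file_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()

        record = {
            "content_sha256": digest,        # ties the label to the exact file
            "artificially_generated": True,  # the core machine-readable flag
            "generator": generator,          # which model or system produced it
            "created_utc": datetime.now(timezone.utc).isoformat(),
        }
        # Write a sidecar file next to the content so downstream tools can read it.
        with open(file_path + ".provenance.json", "w") as f:
            json.dump(record, f, indent=2)
        return record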

The AI Act introduces dedicated rules for general purpose AI models that will ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks, monitoring serious incidents, and performing model evaluation and adversarial testing. These new obligations will be operationalised through codes of practice developed by industry, the scientific community, civil society and other stakeholders together with the Commission.

In terms of governance, national competent market surveillance authorities will supervise the implementation of the new rules at national level, while the creation of a new European AI Office within the European Commission will ensure coordination at European level. The new AI Office will also supervise the implementation and enforcement of the new rules on general purpose AI models. Along with the national market surveillance authorities, the AI Office will be the first body globally that enforces binding rules on AI and is therefore expected to become an international reference point. For general purpose models, a scientific panel of independent experts will play a central role by issuing alerts on systemic risks and contributing to classifying and testing the models.

The political agreement is now subject to formal approval by the European Parliament and the Council.

Once the AI Act is adopted, there will be a transitional period before the Regulation becomes applicable. To bridge this time, the Commission will be launching an AI Pact. It will convene AI developers from Europe and around the world who commit on a voluntary basis to implementing key obligations of the AI Act ahead of the legal deadlines.

To promote rules on trustworthy AI at international level, the European Union will continue to work in fora such as the G7, the OECD, the Council of Europe, the G20 and the UN.  


Stephanie Cime
