
Looking ahead to the EU AI Act

Introduction

On 21 May 2024, the Council of the European Union adopted the "Regulation laying down harmonised rules on artificial intelligence" (the so-called AI Act). As the world's first comprehensive law to regulate artificial intelligence, the AI Act aims to establish uniform requirements for the development and use of artificial intelligence in the European Union.

Following the European Parliament's adoption of the draft on 13 March 2024, the AI Act has now been formally adopted. Once signed by the Presidents of the European Parliament and the Council, the Regulation will be published in the Official Journal of the EU and will enter into force twenty days after its publication.

With this adoption of the world’s most significant legislation on Artificial Intelligence, the EU is solidifying its position as a pioneer among global legislators. This initiative aims to establish and reinforce the EU’s role as a premier hub for AI while ensuring that AI development remains focused on human-centered and trustworthy principles.

After a long and complex journey that began in 2021 with the European Commission’s proposal of a draft AI Act, this new regulation is expected to be passed into law in June 2024.

The AI Act aims to ensure that the marketing and use of AI systems and their outputs in the EU are consistent with fundamental rights under EU law, such as privacy, democracy, the rule of law and environmental sustainability. Adopting a dual approach, it outright prohibits AI systems deemed to pose unacceptable risks while imposing regulatory obligations on other AI systems and their outputs.

The new regulation also aims to strike a fair balance between innovation and the protection of individuals. It not only makes Europe a world leader in the regulation of this new technology, but also endeavours to create a legal framework with which users of AI technologies can comply in order to make the most of this significant development opportunity.

In this article we provide a first overview of the key points contained in the text of the AI Act that companies should be aware of in order to prepare for the Regulation's implementation.

Setting the scene

As the world's first comprehensive horizontal legal framework for AI regulation, the AI Act underscores the EU's commitment to defining and managing the risks, and unlocking the opportunities, inherent in AI technology.

The purpose of establishing an EU regulation is to ensure unified action among all Member States, positioning the EU as a major player worldwide. As indicated in the preamble to the regulation, differing national rules could lead to fragmentation of the internal market, threatening the EU's competitive advantage, and would also reduce legal certainty for operators that develop, import or use AI systems within the EU.

This fits within the EU strategy 'A Europe fit for the digital age', which seeks to strengthen the EU's digital sovereignty and to take the regulatory lead rather than follow others. Drawing inspiration from past successes of the "Brussels effect", such as the GDPR, the AI Act is intended to serve as a pioneering endeavour that sets standards for third countries.

According to the European Commissioner for Internal Market, Thierry Breton, the result is “a balanced and futureproof text, promoting trust and innovation in trustworthy AI”. Indeed, the rationale behind this regulation is to provide confidence to end users while still fostering innovation.

Definition of “AI system” - Scope of the AI Act: which AI systems does the AI Act apply to?

The AI Act defines an AI system as any “machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Art. 3(1)).  This definition is strongly inspired by the (revised) definition of “AI System” in the OECD’s AI Principles.

For an AI system to come within the scope of the AI Act, the following key elements of the definition need to be considered:

  • Autonomy: for a system to qualify as an AI system it should, at least to some degree, be capable of operating without human intervention (i.e. have a degree of independence from human actions).
  • Adaptiveness: an AI system may have self-learning capabilities that allow the system to change while in use; note that, since the definition states that a system "may exhibit adaptiveness after deployment", this element is not a strict requirement.
  • Objective(s): objectives may, for example, be the assessment of a person's creditworthiness (in the case of a credit scoring system) or travelling from one place to another (in the case of an assistant for visually impaired people).
  • Capability to infer: this means being capable of obtaining the outputs (e.g. predictions, content, recommendations, or decisions) and of deriving models and/or algorithms from inputs/data. This goes beyond basic data processing and implies learning, reasoning or modelling (Recital 6).
  • Input: input may be machine-based (e.g. in the case of a credit scoring system, the input may be historical data on people's profiles and on whether they repaid loans). Input may also be human-based (e.g. a set of rules).
  • Outputs: outputs may include predictions, content, recommendations, or decisions. For example, a credit scoring system generates a credit score, while an assistant for visually impaired people provides an audio description of the objects detected on the street.
  • Influence environments: an environment is the context in which the AI system operates. For example, a credit scoring system influences its environment by helping to decide whether (or not) someone will be granted a loan. Another example: an assistant for visually impaired people influences its environment by helping visually impaired persons avoid obstacles or cross a street.

General provisions

(Chapter I, Art. 1-4)

Article 1 opens the AI Act by outlining the objectives of the Regulation, revealing the ambitious purpose of the text. While improving the functioning of the internal market once again stands out as a fundamental EU objective, the true emphasis of the Article appears to lie in ensuring responsible and ethical AI development and deployment. This involves a multi-faceted approach focused on:

  • Prioritising Human-Centric AI: the AI Act calls for the development and use of human-centric and trustworthy AI that respects human values.
  • Balancing Innovation with Safety and Ethics: innovation in the AI field is encouraged while recognising the potential risks inherent in such powerful technologies. The AI Act outlines measures to mitigate these risks by imposing specific requirements for high-risk AI systems, laying down harmonised transparency rules for certain AI systems and outright prohibiting certain artificial intelligence practices.
  • Mitigating Harmful Effects: the AI Act also acknowledges the potential harms associated with AI misuse, such as bias, discrimination, and privacy violations. It implements safeguards to address these concerns, including transparency requirements that allow individuals to understand how AI systems work and make decisions, along with rules on market monitoring, market surveillance, governance and enforcement.

Article 2 clarifies the scope of the AI Act and identifies who and what it regulates. Specifically, the AI Act applies to three main categories:

  1. providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI (GPAI) models, in the Union, irrespective of whether they are based in the Union or not;
  2. deployers of AI systems that are established or located in the Union; and
  3. providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the Union.

In addition to these three categories of entities regulated by the AI Act, the provisions of this regulation also apply to importers and distributors of AI systems, product manufacturers who integrate AI systems into their products under their own name or trademark, and authorised representatives of providers not established in the Union. The AI Act also applies to affected persons located within the Union.

Article 2 also identifies activities to which the AI Act applies only in a limited way, referring to systems classified as high-risk under Article 6 that are already covered by certain EU harmonisation legislation listed in Annex II, as well as those to which the AI Act does not apply at all, such as:

  • AI systems used solely for scientific research and development, recognising the importance of fostering innovation in this critical field; and
  • Activities outside the scope of EU law, including military, defence, and national security concerns, regardless of who conducts them.

The AI Act is designed to complement, rather than interfere with, existing legislation such as the EU General Data Protection Regulation (GDPR) and the EU e-Privacy Directive. This ensures that the powers of supervisory authorities and the rights of data subjects under these laws are preserved.

Article 3 is dedicated to defining the key terms used throughout the AI Act. A clear understanding of these definitions is essential for stakeholders to navigate the regulatory landscape and implement compliant AI practices.

Authors

Martina Gavalec, Senior Associate, AI Law expert, Bratislava
Italo de Feo, Partner, Rome
Tom De Cordier, Partner, Brussels
Javier Torre de Silva, Partner, Global Co-Head of Communications, TMC, Madrid
Ian Stevens, Partner, London
Dr. Björn Herbers, M.B.L., Partner, Rechtsanwalt, Brussels - EU Law Office
Dr. Markus Kaulartz, Partner, Rechtsanwalt | Co-Head of the CMS Crypto, Digital Assets and FinTech International Focus Group, Munich
Charles Kerrigan, Partner, London
João Leitão Figueiredo, Partner, Lawyer, Lisbon
María González Gordon, Managing Partner, Madrid
Gianfabio Florio, Counsel, Rome
Katalin Horváth, Partner, Budapest
Márton Domokos, Co-ordinator of the CEE Data Protection Practice, CMNO, Budapest
Daniel Gallagher, Senior Associate, London
Erica Stanford, London
Veronica Mazzaferro, Senior Associate, Rome
Andrea Afferni, Associate, Rome
Beatriz Alegre Villarroya, Abogada, Madrid
Celya Amsellem, Associate, Paris
David Rappenglück, Brussels - EU Law Office
Gabriela Karaivanova, Associate, Sofia
Katharina Hirzle, Senior Associate, Rechtsanwältin, Munich
Neža Vončina, Lawyer, Ljubljana
Ricardo Pintão, Associate, Lawyer, Lisbon
Tom Marshall, Senior Associate, London
Beatriz Dias, Associate, Lawyer, Lisbon
Gabriele Cattaneo, Trainee, Rome