The EU AI Act is a significant regulatory framework that harmonises rules on the development, deployment, and use of AI within the EU. This comprehensive regulation, which entered into force on 1 August 2024, seeks to ensure safety, protect fundamental rights, and promote innovation while preventing market fragmentation.
The AI Act covers a broad range of AI applications across sectors, including healthcare, finance, insurance, transportation, and education. It applies to providers and deployers of AI systems within the EU, as well as to those established outside the EU whose AI systems are placed on the EU market or whose outputs are used in the EU. Exceptions include AI systems used exclusively for military, defence, or national security purposes, and those developed and put into service solely for scientific research and development.
“AI system” is defined as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The EU Commission has published guidelines on the definition of an AI system to explain the practical application of this legal concept.
AI literacy
The Act emphasises the importance of AI literacy and requires providers and deployers to ensure that their staff (and other relevant individuals) have the skills and understanding needed to engage with AI technologies responsibly. This obligation includes ongoing training and education tailored to specific sectors and use cases.
Risk-based approach
In order to introduce a proportionate and effective set of binding rules for AI systems, the AI Act takes a pre-defined risk-based approach, which tailors the type and content of those rules to the intensity and scope of the risks that AI systems can generate. The Act prohibits certain unacceptable AI practices, sets out requirements for high-risk AI systems and general-purpose AI models, and imposes different obligations on the various operators. In addition, it introduces transparency obligations for certain AI systems.
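As an illustration of this tiered structure, the following sketch maps each risk tier to the kind of obligation attached to it. The four tiers and the cited provisions reflect the Act itself; the enum names, the mapping, and the sample output are purely illustrative and are no substitute for a legal assessment of a concrete system.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the AI Act's risk-based approach (illustrative labels)."""
    UNACCEPTABLE = "prohibited practice (Article 5)"
    HIGH = "high-risk AI system (Chapter III)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "no specific obligations"

# Hypothetical mapping for illustration only; the actual legal assessment
# depends on the system's intended purpose and context of use.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "may not be placed on the EU market or put into service",
    RiskTier.HIGH: "risk management, data governance, documentation, human oversight, ...",
    RiskTier.LIMITED: "disclose AI use, mark synthetic content",
    RiskTier.MINIMAL: "voluntary codes of conduct",
}

print(OBLIGATIONS[RiskTier.HIGH])
```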
Prohibited AI practices
The AI Act prohibits certain AI practices deemed to pose unacceptable risks to fundamental rights, safety, and public interests. These include:
- AI systems using subliminal techniques to manipulate behaviour;
- Exploiting vulnerabilities of specific groups, such as children or individuals with disabilities;
- Social scoring based on personal characteristics leading to discriminatory outcomes;
- Predicting criminal behaviour based solely on profiling;
- Untargeted scraping for facial recognition databases;
- Emotion recognition in workplaces and educational institutions, except for medical or safety reasons;
- Biometric categorisation to infer sensitive attributes, except for lawful law enforcement purposes; and
- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrowly defined exceptions.
Where an AI practice infringes other EU law, the prohibitions under that law are unaffected by the AI Act and continue to apply. Market surveillance authorities are required to report annually to the EU Commission and the relevant national competition authorities on the prohibited practices that occurred during that year and on the measures taken in response. The EU Commission has published guidelines on the prohibited AI practices defined by the AI Act.
The obligations relating to AI literacy and prohibited AI practices apply from 2 February 2025.
High-risk AI systems
In order to ensure consistent and high-level protection of public interests related to health, safety and fundamental rights (including the right to non-discrimination, data protection and privacy), the AI Act establishes common rules for high-risk AI systems, which include the following:
- Establishing a risk management system;
- Ensuring data quality and governance;
- Maintaining technical documentation and logging capabilities;
- Providing transparent information and human oversight;
- Ensuring accuracy, robustness, and cybersecurity; and
- Implementing a quality management system.
General-purpose AI models
The Act includes specific rules for general-purpose AI models, particularly those posing systemic risks. Providers must notify the EU Commission if their models meet the high-impact capability thresholds, and must prepare comprehensive technical documentation, ensure compliance with EU copyright law, and provide summaries of the content used for training.
“High-impact capabilities” are those that match or exceed the capabilities recorded in the most advanced general-purpose AI models; under the Act, a model is presumed to have them when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations. Annex XIII of the AI Act sets out specific criteria for the designation of general-purpose AI models with systemic risk, such as the number of parameters of the model; the quality or size of the data set; the amount of computation used for training the model; the input and output modalities of the model; and the benchmarks and evaluations of the model's capabilities.
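As a minimal sketch of how the compute-based presumption works: the 10^25 FLOP threshold below comes from the Act, while the constant name, the function, and the sample values are purely illustrative.

```python
# Presumption threshold set by the AI Act: cumulative training compute
# greater than 10**25 floating-point operations.
HIGH_IMPACT_FLOP_THRESHOLD = 10**25

def presumed_high_impact(training_flops: float) -> bool:
    """Illustrative check only: designation as a model with systemic risk
    can also follow from the Annex XIII criteria or a Commission decision."""
    return training_flops > HIGH_IMPACT_FLOP_THRESHOLD

print(presumed_high_impact(3e25))  # True -> provider must notify the Commission
print(presumed_high_impact(1e24))  # False under the compute presumption alone
```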
Governance, compliance and regulatory aspects
The Act mandates transparency to ensure public trust and prevent misuse of AI technologies. Providers and deployers must inform individuals when they are interacting with AI systems, maintain detailed documentation, and adhere to logging and record-keeping practices. High-risk AI systems are subject to stricter transparency requirements, while certain other systems, such as those generating synthetic content, must mark their output as artificially generated to prevent misinformation and ensure transparency about AI use and AI-assisted decision-making.
The AI Act promotes ethical AI development through regulatory sandboxes, providing a controlled environment for testing AI technologies. These sandboxes support cooperation among stakeholders, remove barriers for SMEs, and accelerate market access.
Furthermore, the Act encourages the development of codes of conduct and guidelines to facilitate compliance. These may cover voluntary application of requirements, ethical guidelines, environmental sustainability, AI literacy, and inclusive design.
Effective AI governance involves setting up AI use policies, AI literacy programmes, centralised risk assessment frameworks, governance committees, and operational controls. These measures help ensure ethical AI use, compliance with regulations, and continuous improvement in AI risk management.
Penalties
The AI Act imposes significant penalties for non-compliance. Fines for prohibited practices can reach EUR 35 million or 7% of total worldwide annual turnover in the preceding financial year, whichever is higher. Other infringements can incur fines of up to EUR 15 million or 3% of total worldwide annual turnover in the preceding financial year, again whichever is higher. Penalties are intended to be effective, proportionate and dissuasive, and must take into account the interests of small and medium-sized enterprises (SMEs), including startups: for SMEs, each fine is capped at the lower of the amounts or percentages referred to above.
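The "whichever is higher" versus "whichever is lower" logic can be made concrete with a short sketch. The flat caps and percentages come from the Act; the function name, its parameters, and the sample turnover figures are purely illustrative.

```python
def fine_cap_eur(flat_cap_eur: float, turnover_pct: float,
                 worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative calculation of the maximum fine under the AI Act.

    For most companies the cap is the HIGHER of the flat amount and the
    turnover percentage; for SMEs (including startups) it is the LOWER.
    """
    pct_amount = turnover_pct * worldwide_turnover_eur
    return min(flat_cap_eur, pct_amount) if is_sme else max(flat_cap_eur, pct_amount)

# Prohibited-practice infringement, EUR 2bn worldwide turnover:
print(fine_cap_eur(35_000_000, 0.07, 2_000_000_000))            # 140,000,000
# Same infringement by an SME with EUR 10m turnover:
print(fine_cap_eur(35_000_000, 0.07, 10_000_000, is_sme=True))  # 700,000
```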
Conclusion
The EU AI Act seeks to create a trustworthy and human-centric AI ecosystem by balancing innovation with the protection of fundamental rights and public interests. By adhering to the Act's requirements, businesses can help ensure the safe and ethical development and deployment of AI technologies.
For a detailed analysis, refer to the full article included in the PDF file.
Visit CMS’s AI Insight pages for more insights on responsible AI use.
If you have any questions about the AI Act or want to learn more about the specific AI Act obligations of your organisation, contact your CMS client partner or this CMS expert: