
Prohibited Artificial Intelligence practices and high-risk AI systems

Prohibited Artificial Intelligence practices

(Chapter II, Art. 5)

1. Introduction to the unacceptable risk category

Article 5 categorises certain AI technologies as posing an “unacceptable risk” (Unacceptable Risk). Unlike other risk categories outlined in the AI Act, the use of AI technologies that fall within this category is strictly prohibited ("Prohibited AI Systems"). It is therefore necessary to distinguish between:

  1. those technologies that are clearly prohibited; and
  2. those AI applications that are not clearly prohibited but may involve similar risks.

The most challenging problem in practice is to ensure that activities which are not prohibited do not escalate into Unacceptable Risk activities and therefore become prohibited.

2. Unacceptable Risk: Prohibited AI practices

Article 5 explicitly bans four categories of harmful AI practices.

The first prohibition under Article 5 addresses systems that manipulate individuals or exploit their vulnerabilities, leading to physical or psychological harm. Accordingly, it would be prohibited to place on the market, put into service or use in the EU:

  • AI systems designed to deceive, coerce or influence human behaviour in harmful ways; and
  • AI tools that prey on an individual’s weaknesses, exacerbating their vulnerabilities.

The second prohibition covers AI systems that exploit these vulnerabilities, even if harm is not immediate. Examples include:

  • AI tools that compromise user privacy by collecting sensitive data without consent; and
  • AI algorithms that perpetuate bias or discrimination against certain groups.

The third prohibition focuses on the use of AI for social scoring. Social scoring systems assign scores to individuals based on their behaviour, affecting access to services, employment or other opportunities. Prohibited practices include:

  • AI-driven scoring mechanisms that lack transparency, fairness or accountability; and
  • Systems that discriminate based on protected characteristics (e.g. race, gender, religion).

The fourth prohibition covers the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes. This includes:

  • AI systems that identify individuals without their knowledge or consent; and
  • Continuous monitoring of people’s movements using biometric data.

3. Clearly listed: Best practices and compliance

Transparency and accountability are essential in complying with the prohibitions under Article 5. Firms using AI must design and continuously test their systems, be transparent about their intentions and avoid manipulative practices. They should also disclose their AI systems’ functionality, data usage and decision-making processes.

Companies should conduct thorough impact assessments to identify unintended vulnerabilities and implement specific safeguards to prevent exploitation. These assessments should form part of a broader evaluation of AI systems’ impact on individuals and society.

Companies should develop clear guidelines for scoring systems to prevent them from developing social scoring characteristics, and should prioritise ethical design, fairness and non-discrimination.

Privacy impact assessments should be carried out to ensure compliance with the various prohibitions. In particular, firms should exercise great caution when using any real-time identification systems.

In all cases, companies should maintain comprehensive records of AI system design, training, and deployment. Any critical decision made by AI systems should be overseen by a human.
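
As a purely illustrative aid, the record-keeping and human oversight points above can be pictured as a simple review gate combined with an audit trail. The Python sketch below uses hypothetical names (Decision, requires_human_review, audit_log) and thresholds; the AI Act does not prescribe any particular implementation.

```python
# Illustrative sketch only: a human-in-the-loop gate for "critical" AI decisions,
# combined with an audit trail. All names and thresholds here are hypothetical.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

@dataclass
class Decision:
    subject_id: str        # pseudonymised identifier of the affected person
    outcome: str           # e.g. "approve" or "reject"
    confidence: float      # model confidence score in the range [0, 1]
    critical: bool         # e.g. hiring, credit, access to essential services
    model_version: str     # links the decision to documented design/training records

def requires_human_review(decision: Decision, confidence_floor: float = 0.9) -> bool:
    """Route critical or low-confidence decisions to a human reviewer."""
    return decision.critical or decision.confidence < confidence_floor

def record_and_route(decision: Decision) -> str:
    """Write a comprehensive audit record, then return who finalises the decision."""
    record = {"timestamp": datetime.now(timezone.utc).isoformat(), **asdict(decision)}
    audit_log.info(json.dumps(record))
    return "human_reviewer" if requires_human_review(decision) else "automated"

if __name__ == "__main__":
    d = Decision("candidate-042", "reject", 0.72, critical=True, model_version="hr-model-1.3")
    print(record_and_route(d))  # -> "human_reviewer"
```

In practice, the confidence threshold, the definition of a "critical" decision and the retention period for audit records would be set by the firm's own risk management and documentation policies.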

4. Not clearly listed: Categorisation

Unacceptable Risk AI systems are those deemed inherently harmful and considered a threat to human safety, livelihoods and rights. In contrast, high-risk AI systems are designed for specific use cases, such as using AI for hiring and recruitment, that may cause harm but are not inherently harmful. High-risk AI systems are legal, but subject to important requirements under the AI Act. It is therefore crucial to determine the difference between high-risk and Unacceptable Risk AI systems.

In essence, any high-risk activity can escalate to Unacceptable Risk under the following circumstances:

  • Bias and Discrimination: if AI perpetuates bias or discriminates against protected groups.
  • Privacy Violations: when AI systems compromise user privacy or misuse sensitive data.
  • Psychological Harm: if AI manipulates individuals, causing psychological distress.

AI systems that are able to perform generally applicable functions and can serve multiple intended and unintended purposes (General Purpose AI models) are not inherently prohibited under the AI Act, but must be used with care, since in certain scenarios they can lead to Unacceptable Risk activities. To assess whether a General Purpose AI model poses an Unacceptable Risk, it is necessary to consider the context in which the model operates. If it influences critical decisions (e.g. hiring, credit scoring), perpetuates bias or discrimination, or compromises user privacy (e.g. by collecting sensitive data without consent), the risk increases, and the model may need to be adapted.

5. Best practice and compliance

While the AI Act sets out examples of explicit prohibitions, it cannot cover every possible situation, as the technology is by its nature constantly evolving through updated versions.

As a guide, legal and compliance teams should ask the following questions when considering high-risk AI systems (a simplified triage sketch follows the questions):

Risk assessment:

  • What evidence supports categorising the AI application as minimal, limited, high or Unacceptable Risk?
  • Does the application in any circumstances use or act on sensitive data or influence critical decisions?

Contextual analysis:

  • Does the application operate in a sector that has a presumption of increased risk, for example, (a) financial services, or (b) healthcare?
  • In what ways does the deployment of the application impact (a) individuals, and (b) society?

Specific criteria:

  • Can any decisions of the application be considered to give rise to manipulation, exploitation, discriminatory scoring, or biometric identification?
  • Does the application operate or have access to data that could give rise to the exploitation of subliminal techniques or vulnerabilities related to protected characteristics, such as age or disability?

Transparency and Documentation:

  • In what ways is the AI system transparent about its inherent functioning and decision-making?
  • In what ways does the documentation of the application’s design, training and deployment demonstrate compliance with the applicable rules?
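
As a purely illustrative aid, the questions above can be condensed into a first-pass triage routine. The Python sketch below uses hypothetical field names and deliberately simplified rules; it is no substitute for the actual legal tests in Article 5, Article 6 and Annex III of the AI Act.

```python
# Illustrative sketch only: a first-pass triage of the questions above into a
# provisional risk tier. Field names and rules are simplified and hypothetical;
# the authoritative tests are in Art. 5, Art. 6 and Annex III of the AI Act.
from dataclasses import dataclass

@dataclass
class Assessment:
    manipulates_or_exploits: bool        # subliminal techniques or exploiting vulnerabilities
    social_scoring: bool                 # discriminatory or opaque social scoring
    realtime_biometric_id: bool          # real-time biometric identification in public spaces
    uses_sensitive_data: bool            # acts on special categories of personal data
    influences_critical_decisions: bool  # e.g. hiring, credit scoring, essential services
    regulated_sector: bool               # e.g. financial services, healthcare
    interacts_with_individuals: bool     # chatbots, generated content, etc.

def provisional_tier(a: Assessment) -> str:
    """Return a provisional tier to prioritise legal review, not a legal conclusion."""
    if a.manipulates_or_exploits or a.social_scoring or a.realtime_biometric_id:
        return "Unacceptable Risk (potentially prohibited - Art. 5)"
    if a.uses_sensitive_data or a.influences_critical_decisions or a.regulated_sector:
        return "high-risk (requirements of Art. 6 and Annex III)"
    if a.interacts_with_individuals:
        return "limited risk (transparency obligations - Art. 50)"
    return "minimal risk"

if __name__ == "__main__":
    hiring_tool = Assessment(False, False, False, True, True, False, True)
    print(provisional_tier(hiring_tool))  # -> high-risk
```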

6. Conclusion

Unacceptable Risk AI activities are those practices that pose inherent harm to people and are strictly forbidden under the AI Act. The potential for reputational damage and regulatory sanctions serves as a strong deterrent against breaching these provisions of the AI Act. It is essential for companies to take proactive measures to ensure compliance and prevent harm to individuals and society.

High-risk AI systems: classification of AI systems as high-risk; requirements for high-risk AI systems

(Chapter III, Sections 1-2: Art. 6-15)

High-risk AI systems are a category of AI systems that involve significant risks of harm to the health, safety, or fundamental rights of people or the environment.

In accordance with the AI Act, these systems are subject to strict requirements and obligations before they can be placed on the market or put into service in the EU. The AI Act aims to ensure that they are trustworthy, lawful and ethical, and that they respect the fundamental rights and values of the EU.

The AI Act defines high-risk AI systems in two ways. First, high-risk AI systems are systems that are intended to be used as safety components of products, or that are themselves products, covered by EU harmonisation legislation and required to undergo a third-party conformity assessment. Second, AI systems that perform profiling of natural persons, and those used in specific areas listed in the AI Act, such as biometrics, critical infrastructure, law enforcement or democratic processes, are also considered high-risk.

Providers of high-risk AI systems must ensure that their systems comply with certain obligations set out in the AI Act, of which the most important are the following:

  • A risk management system must be established and maintained for each high-risk AI system, covering its entire lifecycle, and identifying and mitigating the relevant risks of AI systems on health, safety, and fundamental rights.
  • High-risk AI systems using techniques involving the training of models with data must be developed based on quality data sets that are relevant, representative, free of errors, and complete.
  • Providers must create and maintain technical documentation for each high-risk AI system before it is placed on the market or put into service, detailing elements such as the system’s intended purpose, design, or development process.
  • High-risk AI systems must have logging capabilities to automatically record events over their lifetime (e.g., the system’s activation, operation, and modification). A minimal logging sketch follows this list.
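
As a purely illustrative aid, the logging capability described in the last point can be pictured as structured, timestamped event records kept over the system’s lifetime. The Python sketch below uses hypothetical event names and fields; the AI Act requires automatic logging but does not prescribe a particular format.

```python
# Illustrative sketch only: automatic, timestamped event logging over a high-risk
# AI system's lifetime. Event names and fields are hypothetical; the AI Act requires
# logging capabilities but does not prescribe this particular format.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO, format="%(message)s")
event_log = logging.getLogger("ai_events")

def log_event(event_type: str, **details) -> None:
    """Append one machine-readable event record to the system's log."""
    event_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,  # e.g. "activation", "inference", "modification"
        **details,
    }))

# Example lifecycle events
log_event("activation", system="recruitment-screener", version="2.1.0")
log_event("inference", input_ref="application-9876", output="shortlisted")
log_event("modification", change="decision threshold updated from 0.80 to 0.85", operator="ops-team")
```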

High-risk AI systems: obligations of providers and deployers of high-risk AI systems and other parties

(Chapter III, Section 3: Art. 16-27)

The regulatory focus of the AI Act is on the operators along the value chain of a high-risk AI system. It establishes various obligations for providers, deployers, importers and distributors, as well as other third parties along the AI value chain.

The provider, for example a company that develops a system for corporate training programmes that uses AI for personalised learning recommendations and makes it available to companies within the EU, is primarily required to ensure that the extensive requirements for high-risk AI systems set out above are met throughout the lifecycle of the AI system and that all relevant technical and other documentation is available to the competent authorities. Further obligations include the control of automatically generated logs, corrective actions to ensure the continuous conformity of the AI system, and information and cooperation obligations. To enable enforcement of the AI Act against providers established outside the EU, for example where the developer of the AI-powered corporate training programmes is based in China, such a provider must appoint an authorised representative established in the EU as a contact person.

For example, an importer who purchases an AI-based HR tool for assessing employee promotions from a US-based provider and offers it to companies in the EU, and a distributor who purchases an AI-based credit scoring system from a provider and sells it to banks and financial institutions, are both primarily obliged to ensure that the high-risk AI system bears the required CE conformity marking and is accompanied by a copy of the EU declaration of conformity and instructions for use. They must also verify that the provider and, as applicable, the importer have fulfilled their respective obligations.

A deployer, such as a company using an AI-based system to process and decide on job applications, must ensure that the high-risk AI system is used in accordance with its instructions for use. Further obligations include human oversight as well as monitoring of the operation of the high-risk AI system and of its automatically generated logs. In addition, certain deployers, in particular public bodies and providers of public services, must perform a fundamental rights impact assessment before putting a high-risk AI system into use.

Provider obligations can apply to distributors, importers, deployers and other third parties if:

  1. they place a high-risk AI system on the market or put it into service under their own name or trademark;
  2. they make a substantial modification to a high-risk AI system on the market; or
  3. they modify the intended purpose of an AI system on the market and it becomes a high-risk AI system as a result.

High-risk AI systems: notifying authorities and notified bodies

(Chapter III, Section 4: Art. 28-39)

Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation, and notification of conformity assessment bodies and for their monitoring. The AI Act includes provisions on the application process and requirements for conformity assessment bodies.

High-risk AI systems: standards, conformity assessment, certificates, registration

(Chapter III, Section 5: Art. 40-49)

The AI Act contains specific rules and presumptions on conformity of high-risk AI systems and General Purpose AI models.

If the high-risk AI system or General Purpose AI model is in conformity with a harmonised EU standard, or with a common specification issued by the Commission, it must be presumed to be in conformity with the requirements of high-risk AI systems stipulated in the AI Act. The European Commission will issue standardisation requests for this purpose. If the presumption can apply, the provider of an AI system can use a simplified conformity assessment procedure.

Where providers of high-risk AI systems do not comply with the common specifications and/or the above EU harmonised standards, they are required to justify that they have adopted appropriate equivalent technical solutions and they must conduct one of the conformity assessment procedures in the Annexes of the AI Act, depending on the type of high-risk AI system.

High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure whenever they are substantially modified.

As a result of the conformity assessment procedure, the notified body will issue a certificate that is valid for no more than four or five years, depending on the type of high-risk AI system, and that can be extended for further periods of four or five years provided that the high-risk AI system is re-assessed.

If the high-risk AI system no longer meets the requirements set out in the AI Act, the notified body is entitled to suspend or withdraw the certificate issued, or impose any restrictions on it.

The AI Act contains exceptions where the authorities may grant a temporary derogation from the above conformity assessment procedures in particularly justified cases.

High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to the EU Cybersecurity Act must be presumed to be in compliance with the cybersecurity requirements set out in the AI Act.

The provider of a high-risk AI system must draw up an EU declaration of conformity for each high-risk AI system and keep it for 10 years after the high-risk AI system has been placed on the market or put into service. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in the AI Act. The provider shall keep the EU declaration of conformity up to date as appropriate. Providers must also affix the CE marking of conformity, for which the AI Act contains detailed AI-specific rules.

Before placing a high-risk AI system on the market or putting it into service, with the exception of high-risk AI systems in critical infrastructure, the provider or, where applicable, the authorised representative, as well as deployers that are public authorities, agencies or bodies, or persons acting on their behalf, must register themselves and their system in the EU database for high-risk AI systems. High-risk AI systems in critical infrastructure must instead be registered at national level.

Transparency obligations for providers and deployers of certain AI systems

(Chapter IV, Art. 50)

The four primary transparency obligations in Article 50 of the AI Act apply to AI systems that interact with individuals, AI-generated synthetic content, emotion recognition or biometric categorisation systems, and the creation or manipulation of deep fake content. EU or national law may lay down additional transparency obligations.

  1. AI systems engaging with individuals. Providers of AI systems that directly interact with people must ensure that individuals are informed when they are interacting with an AI system, unless it is already obvious to a reasonably well-informed person. Such AI systems may include, for example, virtual assistants (like a chatbot or voice-activated system), AI-driven chatbots handling customer inquiries, or social media accounts which respond to users automatically. There is an exception for AI systems authorised by law for criminal offense detection, investigation and prosecution, provided there are safeguards for the rights of third parties, unless these systems are publicly accessible for reporting criminal offenses.
  2. AI-generated synthetic content. Providers of AI systems (including General Purpose AI models) must ensure that AI-generated synthetic content is marked in a machine-readable format to indicate its artificial origin (see the sketch after this list). Such content includes, for example, realistic-looking images of non-existent people, a video that simulates a person saying things they never said in real life, news articles or stories that may be mistaken for human-written content, or an audio recording which mimics a specific person’s voice to generate speech for various purposes. The technical solutions for marking should be effective, interoperable, robust and reliable within technological feasibility, taking into account content types, implementation costs, and state-of-the-art standards. This marking obligation is waived for AI systems that assist with standard editing without significantly altering input data, or when legally authorised for criminal offense-related activities.
  3. Emotion recognition or biometric systems. Operators of emotion recognition or biometric categorisation systems must inform individuals about the system’s operation. Such an operator may be, for example, a retail store analysing the facial expressions of customers to learn product preferences, or an office building using facial features for access control. This obligation does not apply to systems if they are legally authorised for detecting, preventing, and investigating criminal offenses, provided appropriate safeguards for the rights of third parties are in place.
  4. Deep fake content. Deployers of AI systems generating or manipulating image, audio or video deep fake content must disclose its artificial nature, except when authorised by law for criminal investigations. For example, law enforcement agencies may create deep fake videos to gather intelligence for criminal investigations without disclosing their artificial nature. For content that is evidently artistic, creative, satirical, fictional or analogous, transparency obligations are limited to appropriately disclosing the existence of such manipulated content. For example, this would apply where a film production studio uses AI to create deep fake scenes for a science fiction movie. Regarding text generation for public information, deployers must disclose if the content is artificially generated, unless authorised by law for criminal investigations or if the content undergoes human review or editorial control.
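
As a purely illustrative aid to the marking obligation in point 2 above, the Python sketch below embeds a machine-readable provenance marker in a PNG image using the Pillow imaging library. The metadata keys are hypothetical; the AI Act requires marking that is effective, interoperable, robust and reliable, but does not mandate this particular mechanism (dedicated provenance standards such as C2PA take a more comprehensive approach).

```python
# Illustrative sketch only: embedding a machine-readable provenance marker in a PNG
# image via the Pillow library. The metadata keys are hypothetical and this is not
# the mechanism mandated by the AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Re-save a PNG with embedded text metadata declaring its artificial origin."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical key
    metadata.add_text("generator", generator)  # e.g. the model or tool that produced it
    image.save(dst_path, pnginfo=metadata)

def is_marked(path: str) -> bool:
    """Check whether a PNG carries the (hypothetical) provenance marker."""
    return Image.open(path).text.get("ai_generated") == "true"

# Usage, assuming a locally generated file named "synthetic.png":
# mark_as_ai_generated("synthetic.png", "synthetic_marked.png", "example-image-model")
# print(is_marked("synthetic_marked.png"))  # -> True
```

A plain text chunk like this is easily stripped when the file is re-encoded, which is why more robust approaches, such as cryptographically signed provenance manifests or watermarking, are generally preferred in practice.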

EU database for high-risk AI systems listed in Annex III

(Chapter VIII, Art. 71)

In the rapidly evolving landscape of Artificial Intelligence, the European Union has taken proactive steps to ensure the safety, transparency, and accountability of high-risk AI systems through the establishment of the EU database. Considering the AI Act’s reliance on self-assessment by AI system providers and deployers, the EU database serves as a tool to promote accountability through public oversight. This mechanism potentially enhances transparency of the AI systems, marking an initial move towards the responsible deployment of AI.

The EU database shall serve as a central repository for detailed information about high-risk AI systems that fall under the scope of Article 6(2). Accessibility and transparency are cornerstones of the EU database. It is designed to make the bulk of its contents readily available to the public in a user-friendly and machine-readable format. However, certain information is reserved for market surveillance authorities and the Commission. Providers of high-risk AI systems, along with their authorised representatives, bear the responsibility of inputting essential data into the database, as specified in Annex VIII, Section A. Similarly, deployers acting on behalf of public entities are required to submit information detailed in Annex VIII, Section B, reinforcing the collaborative nature of this regulatory mechanism.

The content of the database focuses on details provided by high-risk AI system providers as defined by the AI Act, with comparatively little information about how and where those systems are actually deployed. The real impact of AI on individuals and society is linked to its usage context, making this omission significant. Without this critical information, the public remains uninformed about the specifics of high-risk AI system deployment, including the whereabouts of their use, potentially diminishing the database’s intended effectiveness. On the other hand, businesses have voiced several criticisms regarding the EU’s database for high-risk AI systems, highlighting concerns about increased regulatory burdens. Additionally, companies are worried about the privacy and confidentiality of sensitive information, the ambiguity in high-risk AI classification, and the consistency of enforcement across the EU.

While the EU database for high-risk AI systems is a significant step towards ensuring the responsible deployment of AI systems, its current focus on provider-supplied details may not fully capture the broader implications of AI use in society. This limitation, coupled with the business community’s concerns over privacy, classification ambiguity, and enforcement consistency, underscores the need for a more nuanced approach. Balancing the goals of transparency and accountability with the imperatives of innovation and privacy will be crucial in refining the database to effectively serve its intended purpose of safeguarding public interests in the evolving AI landscape.