
What are the risk categories and the corresponding compliance obligations?

The AI Act categorises AI systems into four levels of risk:

  1. Unacceptable risk
  2. High risk
  3. Limited risk
  4. Minimal or no risk

The higher the risk, the stricter the requirements

The compliance obligations that companies must meet will vary based on the level of risk posed by the AI system. Each risk level entails different requirements, with high-risk AI systems having to comply with stricter rules on transparency, human oversight and robustness.

While most compliance rules will affect high-risk AI systems and their providers, there are also provisions that will apply to providers and users of limited-risk or minimal-risk AI systems. The criteria for determining whether an AI system is high-risk can also be expected to evolve over time.

Unacceptable risk:

  • Which AI systems fall into this risk category?
    A very limited set of detrimental uses of AI that contravene the EU’s values will be considered to pose an unacceptable risk and will therefore be prohibited (e.g. cognitive behavioural manipulation of people or specific vulnerable groups, social scoring, and real-time remote biometric identification systems).
  • What are the compliance obligations? 
    AI systems falling into the category of unacceptable risk will be prohibited. Therefore, there are no compliance requirements.

High risk:

  • Which AI systems fall into this risk category?
    A limited number of AI systems with the potential for adverse impacts on people’s safety or fundamental rights. High-risk AI systems are divided into two categories:
    1. High-risk AI systems that are listed in the annex to the AI Act. The list is not final and may be expanded in the future to reflect technological advancements (i.e. it is future-proofed).
    2. AI systems subject to third-party conformity assessment under the EU’s product safety legislation.
  • What are the compliance obligations?
    Mandatory requirements are proposed for all high-risk AI systems in order to ensure trust and a consistently high level of protection of safety and fundamental rights. These requirements cover:
    i) the quality of the data sets used,
    ii) technical documentation,
    iii) record keeping,
    iv) transparency and the provision of information to users,
    v) human oversight,
    vi) robustness, accuracy and cybersecurity.

In the event of a breach, the requirements will enable national authorities to access the information needed to investigate whether the use of the AI system complied with the law.

Limited risk:

  • Which AI systems fall into this risk category?
    AI systems that do not fall into the unacceptable-risk or high-risk categories and thus do not represent a significant threat to people’s safety and fundamental rights. These include AI systems that generate or manipulate images, audio or video content (e.g. deepfakes).
  • What are the compliance obligations?
    Specific transparency requirements are imposed: users must be informed that they are interacting with an AI system and should be able to decide whether they want to continue using it.

Minimal or no risk:

  • Which AI systems fall into this risk category?
    All other AI systems, which can be developed and used under existing legislation without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category.
  • What are the compliance obligations? 
    There are no additional compliance obligations. Providers of these systems may choose to voluntarily apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.
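
For teams building internal compliance tooling, the tiered structure described above lends itself to a simple lookup from risk level to headline obligations. The following Python sketch is purely illustrative: the RiskLevel enum, the OBLIGATIONS table and the obligations_for helper are hypothetical names, and the obligation summaries merely paraphrase the guidance above; they are no substitute for the text of the AI Act itself.

from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal or no risk"

# Headline obligations per tier, paraphrased from the guidance above
# (hypothetical summary, not legal text).
OBLIGATIONS: dict[RiskLevel, list[str]] = {
    RiskLevel.UNACCEPTABLE: [
        "prohibited: the system may not be placed on the EU market",
    ],
    RiskLevel.HIGH: [
        "quality of the data sets used",
        "technical documentation",
        "record keeping",
        "transparency and the provision of information to users",
        "human oversight",
        "robustness, accuracy and cybersecurity",
    ],
    RiskLevel.LIMITED: [
        "transparency: inform users that they are interacting with an AI system",
    ],
    RiskLevel.MINIMAL: [
        "no additional obligations; voluntary codes of conduct may be adopted",
    ],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the headline compliance obligations for a given risk tier."""
    return OBLIGATIONS[level]

# Example: print the checklist for a high-risk system.
for item in obligations_for(RiskLevel.HIGH):
    print(f"- {item}")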