High-risk AI systems: classification of AI systems as high-risk; requirements for high-risk AI systems
(Chapter III, Section 1-2: Art. 6-15)
High-risk AI systems are a category of AI systems that involve significant risks of harm to the health, safety, or fundamental rights of people or the environment.
In accordance with the AI Act, these systems are subject to strict requirements and obligations before they can be placed on the market or put into service in the EU. These requirements aim to ensure that such systems are trustworthy, lawful, and ethical, and that they respect the fundamental rights and values of the EU.
The AI Act defines high-risk AI systems in two ways. First, AI systems are high-risk if they are intended to be used as safety components of products, or are themselves products, that are covered by EU harmonisation legislation and require a third-party conformity assessment. Second, AI systems that perform profiling of natural persons, and those used in specific areas listed in the AI Act, such as biometrics, critical infrastructure, law enforcement, or democratic processes, are also considered high-risk.
Providers of high-risk AI systems must ensure that their systems comply with certain obligations set out in the AI Act, of which the most important are the following:
- A risk management system must be established and maintained for each high-risk AI system, covering its entire lifecycle, and identifying and mitigating the relevant risks of AI systems on health, safety, and fundamental rights.
- High-risk AI systems using techniques involving the training of models with data must be developed based on quality data sets that are relevant, representative, free of errors, and complete.
- Providers must create and maintain technical documentation for each high-risk AI system before it is placed on the market or put into service, detailing elements such as the system’s intended purpose, design, or development process.
- High-risk AI systems must have logging capabilities to automatically record events over their lifetime (e.g., the system’s activation, operation, and modification).
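The AI Act does not prescribe any particular technical format or API for these logs. Purely as an illustrative sketch (all names and event categories here are the author's assumptions, not requirements of the Act), an append-only event log covering activation, operation, and modification might look like:

```python
import json
from datetime import datetime, timezone

class AuditLog:
    """Minimal append-only event log. Illustrative only: the AI Act
    requires automatic recording of events but does not mandate this
    structure, these event names, or this serialisation format."""

    def __init__(self):
        self._events = []

    def record(self, event_type, detail):
        # Each entry captures what happened and when, so the system's
        # behaviour can be traced over its lifetime.
        self._events.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event_type,
            "detail": detail,
        })

    def export(self):
        # Serialise the log for retention or hand-over on request.
        return json.dumps(self._events, indent=2)

log = AuditLog()
log.record("activation", "system started")
log.record("operation", "recommendation generated for input batch 1")
log.record("modification", "model updated to version 2.1")
print(log.export())
```

In practice a provider would pair such logging with tamper-evident storage and a retention policy aligned with the documentation-keeping periods described below; those design choices are outside the Act's text and left open here.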
High-risk AI systems: obligations of providers and deployers of high-risk AI systems and other parties
(Chapter III, Section 3: Art. 16-27)
The regulatory focus of the AI Act is on the operators along the value chain of a high-risk AI system. It establishes various obligations for providers, deployers, and distributors, as well as for other third parties along the AI value chain.
The provider, for example a company that develops a system for corporate training programs that uses AI for personalised learning recommendations and makes it available to companies within the EU, is primarily required to ensure that the extensive requirements for high-risk AI systems set out above are met throughout the lifecycle of the AI system, and that all relevant technical and other documentation is available to the competent authorities. Further obligations include the control of automatically generated logs, corrective actions to ensure the continuous conformity of the AI system, and information and cooperation obligations. To enable enforcement of the AI Act vis-à-vis providers established outside the EU, e.g., if the developer of the AI-powered corporate training programs is based in China, such a provider must appoint an authorised representative established in the EU as a contact person.
For example, an importer who purchases an AI-based HR tool for assessing employee promotions from a US-based provider and offers it to companies in the EU, and a distributor who purchases an AI-based credit scoring system from a provider and sells it to banks and financial institutions, are both primarily obligated to ensure that the high-risk AI system bears the required CE conformity marking and is accompanied by a copy of the EU declaration of conformity and instructions for use. They must also verify that the provider and the importer, as applicable, have fulfilled their respective obligations.
A deployer, such as a company using an AI-based system to process and decide on job applications, must ensure that the high-risk AI system is used in accordance with its instructions for use. Further obligations include human oversight as well as monitoring of the operation of the high-risk AI system and its automatically generated logs. In addition, deployers must perform a fundamental rights impact assessment for high-risk AI systems.
Provider obligations can apply to distributors, importers, deployers and other third parties if:
- they place a high-risk AI system on the market or put it into service under their own name or trademark;
- they make a substantial modification to a high-risk AI system on the market; or
- they modify the intended purpose of an AI system on the market and it becomes a high-risk AI system as a result.
High-risk AI systems: notifying authorities and notified bodies
(Chapter III, Section 4: Art. 28-39)
Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation, and notification of conformity assessment bodies and for their monitoring. The AI Act includes provisions on the application process and requirements for conformity assessment bodies.
High-risk AI systems: standards, conformity assessment, certificates, registration
(Chapter III, Section 5: Art. 40-49)
The AI Act contains specific rules and presumptions on conformity of high-risk AI systems and General Purpose AI models.
If the high-risk AI system or General Purpose AI model is in conformity with a harmonised EU standard, or with a common specification issued by the Commission, it must be presumed to be in conformity with the requirements of high-risk AI systems stipulated in the AI Act. The European Commission will issue standardisation requests for this purpose. If the presumption can apply, the provider of an AI system can use a simplified conformity assessment procedure.
Where providers of high-risk AI systems do not comply with the common specifications and/or the above EU harmonised standards, they are required to justify that they have adopted appropriate equivalent technical solutions and they must conduct one of the conformity assessment procedures in the Annexes of the AI Act, depending on the type of high-risk AI system.
High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure whenever they are substantially modified.
As a result of the conformity assessment procedure, the notified body will issue a certificate that is valid for no more than four or five years, depending on the type of high-risk AI system. The certificate can be extended for further periods of four or five years, provided that the high-risk AI system is re-assessed.
If the high-risk AI system no longer meets the requirements set out in the AI Act, the notified body is entitled to suspend or withdraw the certificate issued, or impose any restrictions on it.
The AI Act contains exceptions where the authorities may grant a temporary derogation from the above conformity assessment procedures in particularly justified cases.
High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to the EU Cybersecurity Act must be presumed to be in compliance with the cybersecurity requirements set out in the AI Act.
The provider of a high-risk AI system must draw up an EU declaration of conformity for each high-risk AI system and keep it for 10 years after the high-risk AI system has been placed on the market or put into service. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request. The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in the AI Act, and the provider shall keep it up to date as appropriate. Providers must also affix the CE marking of conformity, for which the AI Act contains detailed AI-specific rules.
Before placing a high-risk AI system on the market or putting it into service, with the exception of high-risk AI systems in critical infrastructure, the provider or, where applicable, the authorised representative, as well as deployers that are public authorities, agencies, or bodies, or persons acting on their behalf, must register themselves and their system in the EU AI system database. High-risk AI systems in critical infrastructure must be registered at national level.