The EU AI Act introduces a sophisticated product safety regime built around four categories of risk:
1. Unacceptable risk AI systems
AI systems deemed to pose an unacceptable risk because they contravene Union values. These comprise:
- Subliminal techniques: These refer to techniques that deploy subliminal methods to materially distort a person’s behaviour in a way that causes or is likely to cause physical or psychological harm.
- Manipulation: This involves exploiting vulnerabilities within a specific group of individuals to materially distort the behaviour of a person belonging to that group, in a manner that is likely to cause physical or psychological harm.
- Social scoring: This pertains to AI-based social behaviour scoring, for example a system that assesses families as being at high risk of child neglect and abuse.
- Biometrics: This applies specifically to real-time remote biometric identification systems used by law enforcement in publicly accessible spaces.
2. High-risk AI systems
AI systems subject to a detailed certification regime:
- AI systems that are intended to be used as a safety component of a product, or that are themselves products, already regulated under the New Legislative Framework (NLF) (e.g. machinery, toys, medical devices) or under other harmonised EU law (e.g. boats, rail and motor vehicles, aircraft)
- A list of eight new categories of high-risk AI systems annexed to the AI Act:
a) Critical infrastructures (e.g. transport)
b) Biometric ID systems (excluding those posing an unacceptable risk)
c) Educational and vocational training (e.g. automated scoring of exams)
d) Employment, workforce management, and access to self-employment (e.g. automated hiring and CV triage software)
e) Essential private and public services (e.g. automated welfare benefit systems)
f) Law enforcement systems that may interfere with people’s fundamental rights (e.g. pre-crime detection, automated risk scoring for bail, etc.)
g) Migration, asylum, and border control management (e.g. verification of travel documents, visa processing)
h) Administration of justice and democratic processes (e.g. automated sentencing assistance)
3. Limited risk AI systems
AI systems subject to transparency requirements:
- Chatbots
- Emotion recognition and biometric categorisation AI systems
- Systems generating deepfakes or synthetic content
4. Minimal or no risk AI systems
AI systems subject to voluntary codes of conduct, e.g. spam filters or AI-enabled video games.
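The four-tier structure above can be sketched, purely for illustration, as a simple lookup. The tier names, obligation labels, and example mappings below are a simplification for readability, not text from the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, mapped to their headline consequence."""
    UNACCEPTABLE = "prohibited"
    HIGH = "detailed certification regime"
    LIMITED = "transparency requirements"
    MINIMAL = "voluntary codes of conduct"

# Illustrative (non-authoritative) mapping of example systems to tiers,
# following the categories described above.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "automated CV triage software": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Return the headline regulatory consequence for an example system."""
    return EXAMPLES[system].value

print(obligations("automated CV triage software"))  # -> detailed certification regime
```

Note that in practice classification depends on the system's intended purpose and context of use, not on the product type alone; the mapping above captures only the typical case for each example.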