AI Act - Transparency obligations, rights for affected persons, AI Office and sanctions
Whether the AI Act will be adopted in its current version, which now also provides for explicit rights of affected persons, will become clear at the end of the year.
The scope of the draft Artificial Intelligence Act (AI Regulation, AI Act) is broad, and generative AI is covered in principle. While general principles are established for the development and use of all AI systems, depending on their risk classification, AI systems may be prohibited altogether as prohibited practices or may be subject to extensive requirements as high-risk AI systems, resulting in obligations for providers, deployers, importers, distributors, and other third parties in the AI value chain, including providers of foundation models.
Transparency obligations can apply regardless of whether an AI system is classified as a high-risk AI system. Providers and deployers of other AI systems can join codes of conduct on a voluntary basis.
Transparency obligations for certain AI systems
For certain AI systems – irrespective of a classification as a high-risk AI system – transparency obligations apply to providers and users (Art. 52); these obligations are specified and extended in parliament's negotiating position.
Transparency obligations for AI systems interacting with humans or recognising emotions and characteristics
When humans interact with AI systems (e.g. chatbots or personal assistants) or when emotions or characteristics are detected by automated means, the natural persons concerned must be informed of this in a timely, clear, and intelligible manner.
Where appropriate and relevant, such interacting AI systems should also indicate the functions performed by the AI, the individual responsible for the decision-making process, and the legal remedies available to affected persons under EU and national law against the use of such systems, including the right of affected persons to an explanation.
Transparency obligations do not apply if the respective AI system is authorised by law for the detection, prevention, investigation and prosecution of criminal offences.
Transparency obligations for generative AI systems now cover systems like AI chatbots
Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful, and that depicts people appearing to say or do things they did not say or do without their consent (e.g. deep fakes), must disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. The content must be labelled, taking into account the generally accepted state of the art and relevant harmonised standards and specifications, in a manner that informs the recipient that the content is not authentic and that is clearly visible to the recipient.
While the proposed regulation covered generative AI only with regard to audio or visual content, the current version now also explicitly mentions text as generated or manipulated content. The transparency obligation therefore now also applies to texts generated with AI chatbots or comparable applications.
The transparency obligations are limited where generative AI is authorised by law for the detection, prevention, investigation and prosecution of criminal offences or is necessary for the exercise of fundamental rights such as the right to freedom of expression and the freedom of art and science. In the current version, the transparency obligation for generative AI content that is part of creative, satirical or artistic works or programmes, or of video games, is limited to indicating the existence of the generated or manipulated content.
In each case, the information must be provided no later than the time of the first interaction or the first engagement with the content. For vulnerable persons such as children or persons with disabilities, intervention or labelling processes may need to be implemented.
Voluntary codes of conduct for providers and deployers of other AI systems
The AI Act also provides a basis for voluntary codes of conduct, under which providers and deployers of AI systems that do not pose a high risk can voluntarily apply the mandatory requirements for high-risk AI systems. Such codes may also include voluntary commitments to, for example, environmental sustainability, accessibility for persons with disabilities, stakeholder participation in the design and development of AI systems, and diversity of the development team.
No obligations for open source developers
According to parliament's negotiating position, the AI Act does not cover AI model components made available as open source unless they are placed on the market or put into service by a provider as part of a high-risk AI system or a prohibited AI system (Recitals 12a, 12b and 12c, Art. 2(5e)). Neither the collaborative development of open source AI components nor their public provision constitutes placing on the market or putting into service in this context, unless the provision is monetised (e.g. through remuneration for providing the component, for technical support services, or for providing a software platform through which other remunerated services are offered) or the component is offered only in exchange for the provision of personal data. Developers of open source AI components are thus not subject to the obligations that the AI Act imposes on providers or other third parties in the value chain.
Rights for affected persons
As the rights and freedoms of natural or legal persons and groups of natural persons may be seriously undermined by AI systems, parliament has added notification and redress mechanisms (Recital 84a, Art. 68a and 68b) and a right to explanation (Recital 84b, Art. 68c) to the Commission's proposal.
Affected persons who believe that an AI system violates the AI Act in relation to them can report this to the competent authority and file a complaint against the providers or users of the AI system (Art. 68a). Affected persons must also have a further right of appeal against decisions of the authority, to be decided by the courts.
In addition, affected persons about whom a deployer has made a decision based on the results of a high-risk AI system, and who consider that their health, safety, fundamental rights, socio-economic well-being or other rights under the AI Act have been adversely affected, have a right to an explanation from the deployer (Art. 68c). The deployer must explain the role the AI system played in the decision-making process, taking into account the expertise and level of knowledge of an average consumer.
Sanctions for violations of the AI Act
The member states are to lay down detailed rules on penalties for operators (i.e. providers, deployers, authorised representatives, importers and distributors, see Art. 3(8)) for violations of the provisions of the AI Act (Art. 71). Special consideration must be given to the interests of small providers and startups and to their economic survival.
Regarding fines, the maximum fine has been increased in the current version of the AI Act, but overall the level of penalties within the individual categories has been reduced and the classification of violations has changed:
- Violation of prohibited practices: Violations of Art. 5 can now be sanctioned with fines of up to EUR 40 million or 7% of a company's annual worldwide turnover in the preceding year (previously EUR 30 million or 6%).
- Violation of data quality and transparency requirements for high-risk AI systems: Non-compliance with the data quality requirements of Art. 10 is now no longer sanctioned on an equal footing with violations of prohibited practices; instead, together with violations of the transparency requirements of Art. 13, it is subject to fines of up to EUR 20 million or 4% of a company's annual global turnover in the preceding year.
- Violations of other obligations and requirements of the AI Act: If AI systems or foundation models violate requirements or obligations of the AI Act other than those of Art. 5, 10 and 13, fines of up to EUR 10 million or 2% of a company's global annual turnover in the preceding year are to be imposed. This includes, for example, violations of the transparency obligations of Art. 52 (previously EUR 20 million or 4%).
- Incorrect, incomplete or misleading information: If false, incomplete or misleading information is provided to authorities, fines of up to EUR 5 million or 1% of a company's annual worldwide turnover in the preceding year may be imposed (previously EUR 10 million or 2%).
Establishment of a European Artificial Intelligence Office
The AI Act provides for the establishment of a European Artificial Intelligence Office. According to parliament's negotiating position, the AI Office is to be an independent EU institution with legal personality, headquartered in Brussels (Art. 56 et seq.). Its tasks will include monitoring and ensuring the application of the AI Act, coordinating and consulting with the member states and competent authorities, issuing annual reports on the implementation of the AI Act, and making recommendations to the Commission regarding prohibited practices and high-risk AI systems.
Conclusion
Given the broad scope of the AI Act and its complex and extensive requirements and obligations, it could lead to considerable burdens, especially for startups and smaller companies. It is feared that this will weaken the EU's innovative strength and impair its competitiveness compared with other locations. There is also criticism that, as a result of the AI Act, European companies will be disadvantaged in global competition compared with countries that have more flexible or less stringent regulatory requirements.
It remains to be seen whether the establishment of regulatory sandboxes by the member states (Art. 53) and further measures for small and medium-sized enterprises and startups (Art. 55) will achieve their goal of promoting innovation. For example, small and medium-sized enterprises and startups are to be granted priority access to AI sandboxes if they meet the relevant requirements; communication channels are to be established for guidance and questions regarding the AI Act; lower fees are to be set for conformity assessments (Art. 55); and the interests of small providers and startups, as well as their economic survival, are to be taken into account in sanctions (Art. 71).
With the current version under negotiation in parliament, the AI Act creates more clarity on some points. Whether this version will prevail, and what the AI Act and its requirements and obligations will look like in the final version, will likely become clear at the end of the year.