
Codes of conduct, confidentiality and penalties, delegation of power and committee procedure, final provisions

Codes of conduct

(Chapter X, Art. 95)

In order to foster ethical and reliable AI systems and to increase AI literacy among those involved in the development, operation and use of AI, the new AI Act mandates the AI Office and Member States to promote the development of codes of conduct for non-high-risk AI systems. These codes of conduct, which should take into account available technical solutions and industry best practices, would promote voluntary compliance with some or all of the mandatory requirements that apply to high-risk AI systems.

Such voluntary guidelines should be consistent with EU values and fundamental rights and address issues such as transparency, accountability, fairness, privacy and data governance, and human oversight. Furthermore, to be effective, such codes of conduct should be based on clear objectives and key performance indicators to measure the achievement of those objectives.

Codes of conduct may be drawn up by individual AI system providers, deployers or organisations representing them, and should be developed in an inclusive manner, involving relevant stakeholders such as business and civil society organisations and academia.

The European Commission will assess the impact and effectiveness of the codes of conduct within two years of the AI Act entering into application, and every three years thereafter. The aim is to encourage the application of the requirements for high-risk AI systems to non-high-risk AI systems, and possibly of additional requirements for such AI systems (including in relation to environmental sustainability).

Penalties

(Chapter XII, Art. 99-101)

Penalties (Art. 99, 100 and 101)

The Member States are to determine detailed penalties and other enforcement measures for infringements of the AI Act by operators (i.e. providers, deployers, authorised representatives, importers and distributors). The penalties provided for should be effective, proportionate and dissuasive and take into account the interests of small and medium-sized enterprises (SMEs), including startups, and their economic viability.

Fines can be levied as follows:

  • Violation of Prohibited Practices (Art. 99): Non-compliance with Article 5 (regarding prohibited AI practices) can be sanctioned with fines of up to EUR 35 million or 7% of a company’s annual worldwide turnover in the preceding year, whichever is higher.
  • Violations of other obligations and requirements of the AI Act (Art. 99): Non-compliance with any of the following provisions of the AI Act (other than Art. 5) can be sanctioned with fines of up to EUR 15 million or 3% of a company’s annual worldwide turnover in the preceding year, whichever is higher:
    • Obligations of providers of high-risk AI systems under Article 16.
    • Obligations of authorised representatives, importers, distributors and deployers under Articles 22, 23, 24 and 26 respectively.
    • The transparency obligations for providers and deployers under Article 50.
  • Supply of incorrect, incomplete or misleading information (Art. 99): Provision of incorrect, incomplete or misleading information to authorities can be sanctioned with fines of up to EUR 7.5 million or 1% of a company’s annual worldwide turnover in the preceding year, whichever is higher.
  • Lower thresholds for SMEs (Art. 99): In the case of SMEs, each of the fines above will be capped at the lower of the percentages or amounts stated (illustrated in the sketch following this list).
  • Fines for providers of General Purpose AI models (Art. 101): Providers of General Purpose AI models may be subject to fines imposed by the European Commission of up to EUR 15 million or 3% of their annual worldwide turnover in the preceding year, whichever is higher, where they intentionally or negligently:
    • Infringe the applicable provisions of the AI Act,
    • Fail to comply with a request for documentation or information, or supply incorrect, incomplete or misleading information,
    • Fail to comply with measures requested by the European Commission under Article 93, or
    • Fail to provide the European Commission with access to the General Purpose AI model for the purposes of an evaluation under Article 92.

Fines for providers of General Purpose AI models may only be imposed from one year after the relevant AI Act provisions come into effect, to allow providers time to adapt. Because Article 101 is an independent provision alongside Article 99, it can be assumed that, if the General Purpose AI system is also a high-risk AI system, these penalties may be imposed by the European Commission in addition to any penalties imposed under Article 99.

  • Fines for EU institutions, agencies and bodies (Art. 100): Fines for non-compliance with Article 5 (regarding prohibited AI practices) will be limited to EUR 1.5 million, and fines for non-compliance with any other requirements or obligations will be limited to EUR 750,000.
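How the fixed amounts and the turnover-based percentages interact can be shown with a short numerical sketch. The snippet below is purely illustrative, using the Art. 99 caps for prohibited-practice infringements (EUR 35 million or 7% of annual worldwide turnover) together with the SME rule; the function, its parameters and the example turnover figures are our own assumptions, not terms of the AI Act.

```python
# Illustrative sketch only: how the Art. 99 maximum-fine caps combine a fixed
# amount with a turnover percentage. The figures shown are the caps for
# prohibited-practice infringements; names and defaults are assumptions.

def max_fine_cap(turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_pct: float = 0.07,
                 is_sme: bool = False) -> float:
    """Upper limit of the fine for a given annual worldwide turnover."""
    pct_based = turnover_pct * turnover_eur
    # Non-SMEs: whichever of the two caps is higher; SMEs: whichever is lower.
    return min(fixed_cap_eur, pct_based) if is_sme else max(fixed_cap_eur, pct_based)

# A company with EUR 1 billion turnover faces a cap of EUR 70 million,
# while an SME with EUR 20 million turnover faces a cap of EUR 1.4 million.
print(max_fine_cap(1_000_000_000))            # 70000000.0
print(max_fine_cap(20_000_000, is_sme=True))  # 1400000.0
```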

Final provisions

(Chapter XIII, Art. 102-113)

The final provisions of the AI Act (Arts. 102-113) deal mainly with the amendment of other EU regulations (Arts. 102-110), the grandfathering clause (Art. 111), evaluation and review of the AI Act (Art. 112) and entry into force (Art. 113).

Article 111 outlines three grandfathering rules for AI systems and general-purpose AI models already on the market or in service: (i) for high-risk AI systems that are components of large-scale IT systems, (ii) for other high-risk AI systems, and (iii) for general-purpose AI models, with up to a 36-month compliance grace period depending on design or usage changes, while certain components must comply by the end of 2030.

Article 112 sets out comprehensive obligations for the European Commission to evaluate and review the AI Act.

The general transition periods are regulated in Article 113 of the AI Act. The AI Act will enter into force on the 20th day following the day of its publication in the Official Journal of the European Union. The date of entry into force serves as the starting point for the application of the transition periods, which are:

  • 6 months for prohibited AI systems.
  • 12 months for provisions concerning notifying authorities, governance, confidentiality, penalties and General Purpose AI obligations.
  • 36 months for compliance requirements in respect of high-risk AI systems relating to safety components covered by EU harmonisation legislation.
  • 24 months for all other obligations.
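As a rough illustration of how these periods translate into application dates, the sketch below adds each transition period to the entry-into-force date of 1 August 2024. This is an approximation for orientation only: Article 113 itself fixes the precise calendar dates from which each set of provisions applies, and the helper function and labels used here are our own assumptions.

```python
# Illustrative sketch: adding the transition periods to the entry-into-force
# date. Approximate only; Art. 113 fixes the authoritative application dates.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # 20th day after publication in the Official Journal

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (all dates used here fall on the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

transition_periods = {
    "Prohibited AI systems": 6,
    "Notifying authorities, governance, confidentiality, penalties, GPAI": 12,
    "All other obligations (general application)": 24,
    "High-risk safety components under EU harmonisation legislation": 36,
}

for scope, months in transition_periods.items():
    print(f"{scope}: applies from around {add_months(ENTRY_INTO_FORCE, months)}")
```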