Governance and post-market monitoring, information sharing, market surveillance

Governance

(Chapter VII, Art. 64-70)

The AI Act establishes a governance framework under Chapter VII, with the aim of coordinating and supporting its application at national level, building capabilities at Union level, and integrating stakeholders in the field of artificial intelligence. The measures related to governance will apply from 12 months after the entry into force of the AI Act.

To develop Union expertise and capabilities, an AI Office is established within the Commission, with strong links to the scientific community to support its work, which includes issuing guidance. Its establishment should not affect the powers and competences of national competent authorities, or of Union bodies, offices and agencies, in the supervision of AI systems.

The newly proposed AI governance structure also includes the establishment of the European AI Board (AI Board), composed of one representative per Member State, designated for a period of 3 years.

The AI Board's list of tasks has been extended and includes collecting and sharing technical and regulatory expertise and best practices among the Member States, contributing to their harmonisation, and assisting the AI Office in the establishment and development of regulatory sandboxes with national authorities.

  • Upon request of the Commission, the AI Board will issue recommendations and written opinions on any matter related to the implementation of the AI Act.
  • The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues related to market surveillance and notified bodies.

The final text of the AI Act also introduces two new advisory bodies.

  1. An advisory forum (Art. 67) will be established to provide stakeholder input to the European Commission and the AI Board by preparing opinions, recommendations and written contributions.
  2. A scientific panel of independent experts (Art. 68) selected by the European Commission will provide technical advice and input to the AI Office and market surveillance authorities.
    The scientific panel will also be able to alert the AI Office of possible systemic risks at Union level.
    Member States may call upon experts of the scientific panel to support their enforcement activities under the AI Act and may be required to pay fees for the advice and support by the experts.

Each Member State shall establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities for the purpose of the AI Act.

Member States shall ensure that national competent authorities are provided with adequate technical, financial and human resources, and with infrastructure, to fulfil their tasks effectively under the regulation, and that they maintain an adequate level of cybersecurity.

One market surveillance authority shall also be appointed by Member States to act as a single point of contact.

Post-market monitoring, information sharing, market surveillance

(Chapter IX, Art. 72 – 94)

Post-market monitoring (Art. 72)

Under the AI Act, a ‘post-market monitoring system’ refers to all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service, for the purpose of identifying any need to immediately apply any necessary corrective or preventive action.

Providers of high-risk AI systems will be required to implement an active post-market monitoring system which:

  1. collects, documents and analyses relevant data about the performance of the high-risk AI systems through their lifetime,
  2. allows the provider to evaluate the continuous compliance of the high-risk AI systems with the requirements of the AI Act,
  3. where relevant, includes analysis of the interaction with other AI systems, and
  4. is based on a post-market monitoring plan (the European Commission has an obligation under the AI Act to establish a template for such plan within 6 months from when the AI Act starts to apply).

Sharing of information on serious incidents (Art. 73)

Providers of high-risk AI systems will be required to report any serious incident to the market surveillance authorities (MSAs) in the Member States where the incident occurred, in accordance with the following:

  1. the report should be made immediately after the provider establishes the causal link between the high-risk AI system and the serious incident, but in any event within 15 days of becoming aware of the incident,
  2. in the event of a serious and irreversible disruption of critical infrastructure or a widespread infringement, the report should be made immediately but in any event within 2 days (under the AI Act, a ‘widespread infringement’ means any act or omission contrary to EU law that harms the interests of individuals in at least two Member States other than the one in which it originated, or that has common features, such as the same unlawful practice or the same interest being infringed, occurs concurrently, is committed by the same operator, and occurs in at least three Member States), and
  3. in the event of death, the report should be made immediately but in any event within 10 days.

Under the AI Act, a ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads to:

  1. Death or serious injury to a person;
  2. Serious damage to property or the environment;
  3. Serious and irreversible disruption of critical infrastructure; or
  4. Breach of EU law protecting fundamental freedoms.

Once a report is made under Article 73, the provider will be required to investigate the incident, including performing a risk assessment and taking any necessary corrective action.

The European Commission has an obligation to issue guidance on compliance with these reporting obligations within 12 months from the AI Act entering into force.

Market surveillance (Arts 74 – 94)

The AI Act brings AI systems within the scope of the market surveillance and product compliance regime established under Regulation (EU) 2019/1020. The effect of this is that operators under the AI Act will be required to comply with the obligations of ‘economic operators’ under Regulation (EU) 2019/1020. MSAs will have extensive powers under Regulation (EU) 2019/1020 and the AI Act to access information and documentation about AI systems (including access to training, validation and testing data sets and source code of high-risk AI systems, provided certain conditions are met), conduct investigations, evaluate compliance with the AI Act and require corrective actions or the withdrawal or recall of non-compliant AI systems from the market. MSAs can also require operators to address risks if they find that a compliant high-risk AI system presents a risk to health or safety, fundamental rights, or other public interests.

Further, MSAs can restrict or prohibit high-risk AI systems where they find that:

  1. markings and/or declarations of conformity are defective or absent;
  2. registration in the EU database has not occurred; 
  3. an authorised representative has not been appointed; and/or
  4. technical documents are not available.

Suspected wrongful classification of an AI system as non-high-risk by a provider can be investigated and evaluated, and can lead to re-classification. Fines are possible if the MSA establishes that the provider's misclassification was designed to circumvent the AI Act's requirements.

Anyone can lodge a complaint with the relevant MSA for alleged infringement of the AI Act.

Deployers must provide clear and meaningful explanations (on request) for decisions taken on the basis of the output of a high-risk AI system that an individual considers adversely impacts their health, safety or fundamental rights.

Monitoring in respect of GPAI models

The AI Office will have primary responsibility to supervise and enforce the AI Act’s provisions on General Purpose AI models. The AI Office will have broad powers to request information that is necessary for the purpose of assessing compliance, conduct evaluations to assess compliance and investigate systemic risks, and (where necessary and appropriate) request providers of General Purpose AI models to take measures to ensure compliance or mitigate systemic risks or restrict, withdraw or recall the model.

Confidentiality (Art. 78)

Although the AI Act includes a number of provisions that are intended to provide transparency to the bodies involved in the application of the AI Act (such as the European Commission and the MSAs), these bodies will be required to respect the confidentiality of information and data obtained in carrying out their tasks and activities in accordance with Article 78. This requirement applies, for example, to any information and documentation provided by providers of high-risk AI systems to national competent authorities to demonstrate conformity of their high-risk AI systems with the requirements of Chapter III, Section 2 (pursuant to Art. 21), and to any information and documentation (including trade secrets) made available by providers of General Purpose AI models, including those with systemic risk, such as documentation about the purpose, training and testing of the models (pursuant to Art. 53 and Art. 55).

Such bodies (including any individuals involved in the application of the AI Act) must:

  • only request data that is strictly necessary to carry out their compliance responsibilities;
  • have adequate cybersecurity measures to protect security and confidentiality of such information and data; and
  • delete data as soon as it is no longer needed for its purpose.

In addition, before sharing information on certain high-risk AI systems used by law enforcement, border control, immigration or asylum authorities with other national competent authorities or the European Commission, national competent authorities must first consult the provider if such sharing could jeopardise public and national security interests. Further, where such law enforcement, border control, immigration or asylum authorities are themselves the providers of such high-risk AI systems, only MSA personnel with the appropriate level of security clearance may access the relevant technical documentation, at the premises of those authorities. These restrictions are without prejudice to the exchange of information and alerts between the Commission, the Member States and their competent authorities and certification bodies, and to the obligations of these parties to provide information under the criminal law of the Member States. Notwithstanding the above, the European Commission and Member States may exchange confidential information with regulatory authorities of third countries, provided that such exchange is necessary and the information is covered by arrangements ensuring an adequate level of confidentiality.