Post-market monitoring, information sharing, market surveillance
(Chapter IX, Art. 72 – 94)
Post-market monitoring (Art. 72)
Under the AI Act, a ‘post-market monitoring system’ refers to all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service, for the purpose of identifying any need to immediately apply any necessary corrective or preventive action.
Providers of high-risk AI systems will be required to implement an active post-market monitoring system which:
- collects, documents and analyses relevant data about the performance of the high-risk AI systems throughout their lifetime,
- allows the provider to evaluate the continuous compliance of the high-risk AI systems with the requirements set out in the AI Act,
- where relevant, includes analysis of the interaction with other AI systems, and
- is based on a post-market monitoring plan (the European Commission has an obligation under the AI Act to establish a template for such a plan within 6 months from when the AI Act starts to apply).
Sharing of information on serious incidents (Art. 73)
Providers of high-risk AI systems will be required to report any serious incident to market surveillance authorities (MSAs) in the Member States where the incident has occurred, in accordance with the following:
- the report should be made immediately after the provider establishes the causal link between the high-risk AI system and the serious incident but in any event within 15 days from becoming aware of the incident,
- in the event of a serious and irreversible disruption of critical infrastructure or a widespread infringement (under the AI Act, ‘widespread infringement’ means any act or omission contrary to EU law protecting the interests of individuals which either harms the collective interests of individuals in at least two Member States other than the one in which it originated, or has common features, such as the same unlawful practice or the same interest being infringed, occurs concurrently and is committed by the same operator in at least three Member States), the report should be made immediately but in any event within 2 days, and
- in the event of death, the report should be made immediately but in any event within 10 days.
Under the AI Act, a ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads to:
- Death or serious injury to a person;
- Serious damage to property or the environment;
- Serious and irreversible disruption of critical infrastructure; or
- Breach of obligations under EU law intended to protect fundamental rights.
Once a report is made under Article 73, the provider will be required to carry out the necessary investigations, including a risk assessment of the incident, and to take corrective action.
The European Commission has an obligation to issue guidance on compliance with these reporting obligations within 12 months from the AI Act entering into force.
Market surveillance (Arts 74 – 94)
The AI Act brings AI systems within the scope of the market surveillance and product compliance regime established under Regulation (EU) 2019/1020. The effect of this is that operators under the AI Act will be required to comply with the obligations of ‘economic operators’ under Regulation (EU) 2019/1020. MSAs will have extensive powers under Regulation (EU) 2019/1020 and the AI Act to access information and documentation about AI systems (including access to training, validation and testing data sets and source code of high-risk AI systems, provided certain conditions are met), conduct investigations, evaluate compliance with the AI Act and require corrective actions or withdrawal or recall of non-compliant AI systems from the market. MSAs can also require operators to address risks if they find that a compliant high-risk AI system presents a risk to health or safety, fundamental rights, or other public interests.
Further, MSAs can restrict or prohibit high-risk AI systems where they find that:
- markings and/or declarations of conformity are defective or absent;
- registration in the EU database has not occurred;
- an authorised representative has not been appointed; and/or
- technical documents are not available.
Suspected wrongful classification of an AI system as non-high-risk by a provider can be investigated and evaluated, and can lead to re-classification. Fines are possible if the MSA establishes that the provider’s misclassification was designed to circumvent the AI Act’s requirements.
Anyone can lodge a complaint with the relevant MSA for alleged infringement of the AI Act.
Deployers must provide clear and meaningful explanations (on request) for decisions taken on the basis of the output from a high-risk AI system which an individual considers adversely impacts their health, safety or fundamental rights.
Monitoring in respect of GPAI models (Arts 88 – 94)
The AI Office will have primary responsibility to supervise and enforce the AI Act’s provisions on General Purpose AI models. The AI Office will have broad powers to request information that is necessary for the purpose of assessing compliance, conduct evaluations to assess compliance and investigate systemic risks, and (where necessary and appropriate) request providers of General Purpose AI models to take measures to ensure compliance or mitigate systemic risks or restrict, withdraw or recall the model.
Confidentiality (Art. 78)
Although the AI Act includes a number of provisions that are intended to provide transparency to the bodies involved in the application of the AI Act (such as the European Commission and the MSAs), these bodies will be required to respect the confidentiality of information and data obtained in carrying out their tasks and activities in accordance with Article 78. This requirement applies, for example, to any information and documentation provided by providers of high-risk AI systems to national competent authorities to demonstrate conformity of their high-risk AI systems with the requirements for high-risk AI systems (pursuant to Art. 21) and any information and documentation (including trade secrets) made available by providers of General Purpose AI models, including General Purpose AI models with systemic risk, such as documentation about the purpose, training and testing of the models (pursuant to Art. 53 and Art. 55).
Such bodies (including any individuals involved in the application of the AI Act) must:
- only request data that is strictly necessary to carry out their compliance responsibilities;
- have adequate cybersecurity measures to protect security and confidentiality of such information and data; and
- delete data as soon as it is no longer needed for its purpose.
In addition, before sharing information on certain high-risk AI systems used by law enforcement, border control, immigration or asylum authorities with other national competent authorities or the European Commission, national competent authorities must first consult the provider if such sharing could jeopardize public and national security interests. Where such law enforcement, border control, immigration or asylum authorities are themselves the providers of such high-risk AI systems, only MSA personnel with the appropriate level of security clearance may access the relevant technical documentation at the premises of those authorities. These restrictions are without prejudice to the exchange of information and alerts between the Commission, the Member States and their competent authorities and notified bodies, and to the obligations of these parties to provide information under the criminal law of the Member States. Notwithstanding the above, the European Commission and Member States may exchange confidential information with regulatory authorities of third countries, provided that such exchange is necessary and the information is covered by arrangements ensuring an adequate level of confidentiality.