General purpose AI models and measures in support of innovation
General purpose AI models
(Chapter V, Art. 51-56)
The AI Act is founded on a risk-based approach. This regulation, intended to be durable, was initially tied not to the characteristics of any particular model or system, but to the risk associated with its intended use. This was the approach when the proposal of the AI Act was drafted and adopted by the European Commission on 21 April 2021, and it remained the approach when the Council of the European Union adopted its general approach on 6 December 2022.
However, after the sweeping global success of generative AI tools in the months following the Commission’s proposal, regulating AI solely by reference to its intended use no longer seemed sufficient. In the European Parliament’s draft of 14 June 2023, the concept of “foundation models” (much broader than generative AI) was therefore introduced, together with associated obligations. During the negotiations in December 2023, additional proposals were put forward regarding “very capable foundation models” and “general purpose AI systems built on foundation models and used at scale”.
In the final version of the AI Act there is no reference to “foundation models”; instead, the concept of “general purpose AI models and systems” was adopted. General Purpose AI models (Arts. 51 to 56) are distinguished from general purpose AI systems (Arts. 25 and 75). General Purpose AI systems are based on General Purpose AI models: “when a general purpose AI model is integrated into or forms part of an AI system, this system should be considered a general purpose AI system” if it has the capability to serve a variety of purposes (Recital 100). And, of course, General Purpose AI models are themselves the result of the operation of the AI systems that created them.
“General purpose AI model” is defined in Article 3 (63) as “an AI model (…) that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”. The definition is somewhat circular (a model is “general purpose” if it “displays generality”), although Recital 98 helps clarify the concept: “generality” can be presumed when the model has at least a billion parameters and its training uses “a large amount of data using self-supervision at scale”. The definition also has a remarkable capacity for expansion. Large generative AI models are an example of General Purpose AI models (Recital 99).
The obligations imposed on providers of General Purpose AI models are limited, provided that the models do not present systemic risk. Such obligations include (Art. 53(1)): (i) drawing up and keeping up-to-date technical documentation (as described in Annex XI), available to the national competent authorities as well as to providers of AI systems who intend to integrate the General Purpose AI model into their AI systems; and (ii) taking certain measures to respect EU copyright legislation, namely putting in place a policy to identify reservations of rights and making publicly available a sufficiently detailed summary of the content used for training. Furthermore, providers should have an authorised representative in the EU (Art. 54).
The most important obligations are imposed by Article 55 on providers of General Purpose AI models with systemic risk. The classification of models with systemic risk is established in Article 51 in overly broad and unsatisfactory terms: “high impact capabilities”. Fortunately, there is a presumption in Article 51(2) that helps: a model is presumed to have such capabilities “when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10^25”.
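To illustrate the order of magnitude involved, the presumption can be checked with simple arithmetic. The sketch below is not part of the AI Act: the “6 × parameters × training tokens” estimate of training compute is a common heuristic from the machine-learning scaling literature, and the example model sizes are hypothetical.

```python
# Rough check of the AI Act's systemic-risk presumption:
# training compute greater than 10^25 FLOPs.
# The 6 * N * D estimate is a heuristic, not part of the Act.

SYSTEMIC_RISK_FLOPS = 1e25  # threshold stated in the AI Act


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer (heuristic)."""
    return 6 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated compute exceeds the AI Act presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_FLOPS


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)  # about 6.3e24 FLOPs, below the threshold
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))
```

On these assumed figures the estimate falls just under the 10^25 threshold, which shows how close today’s largest publicly described training runs sit to the presumption.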
The main additional obligations imposed on providers of General Purpose AI models with systemic risk are: (i) to perform model evaluation (including adversarial testing); (ii) to assess and mitigate systemic risks at EU level; (iii) to document and report serious incidents and corrective measures; and (iv) to ensure an adequate level of cybersecurity.
Finally, a “general purpose AI system” is “an AI system which is based on a General Purpose AI model, that has the capacity to serve a variety of purposes” (Art. 3 (66)). If a General Purpose AI system can be used directly by deployers for at least one purpose that is classified as high-risk (Art. 75), an evaluation of compliance will need to be carried out where there is sufficient reason to consider that the system is not compliant with the AI Act.
Measures in support of innovation
(Chapter VI, Art. 57-63)
Chapter VI of the AI Act establishes a robust framework for promoting AI innovation across companies of different sizes and sectors, in particular through AI regulatory sandboxes, which shall be operational within 24 months of entry into force. The sandboxes are designed to provide controlled environments that facilitate the development, testing and validation of innovative AI systems for a limited period before their placement on the market, offering a legal avenue to a safe and controlled space for experimentation while setting the frame for an innovation-friendly environment and future-proof regulation.
AI regulatory sandboxes may operate at both national and Union level under the coordinated supervision of national competent authorities or the European Data Protection Supervisor, thereby facilitating cross-border cooperation. National authorities are tasked with providing guidance and supervision throughout the sandbox lifecycle, identifying and mitigating potential risks, and reporting on sandbox activities, including results and insights; they may, at any time, temporarily or permanently suspend testing in the sandbox.
To promote harmonisation among Member States, the European Commission will issue implementing acts clarifying sandbox modalities, development, and operations, including eligibility criteria, application procedures, and participant terms. National authorities must also collaborate to maintain consistent practices across the EU by submitting annual reports to the European AI Office (AI Office) and the Board on sandbox implementation. At the Union level, the European Commission will also adopt a unified interface to facilitate interaction among Member States and stakeholders.
In practical terms, innovation support measures also reveal a modern legal landscape, as:
although AI regulatory sandboxes do not entail a general liability exemption for damages inflicted on third parties during the testing period, compliance with national authorities’ guidance rules out the application of administrative fines;
data protection provisions are established to enhance legal certainty and cooperation between participants and authorities;
providers of high-risk AI systems may benefit from a special regime for testing these systems under real-world conditions outside AI regulatory sandboxes; and
support of small-scale providers and start-ups within the EU is clear, as the AI Act prioritises and facilitates their access to AI regulatory sandboxes.
In summary, Chapter VI of the AI Act represents a pivotal step towards fostering responsible AI innovation within the EU, striking an adequate balance between promoting innovation and ensuring the safety and accountability of AI systems.