Obligations, standards and compliance
The AIA will have a profound impact on organisations developing, using, distributing and importing AI systems in the EU and across CEE, placing an escalating range of obligations on them depending on which systems they use.
The AIA defines an AI system as follows: “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Katalin Horváth, Partner in the Technology, Media and Telecommunications (TMT) team at CMS Budapest, says: “In preparing for the new AI Act, our clients have many questions. The basic starting point is: what is AI? The definition in the new Act is quite broad, and our clients can’t easily decide whether they have an AI system with a certain level of autonomy, or just machine learning, pure software or an application. Often, they just don't know whether they are using AI or not, because they buy a solution and it looks like software or an application, and they don't know whether there is any AI involved.
“So, the first area where we can help is analysing the definition of AI systems, to make a comparison, find a match: what kinds of software, applications, or AI models fall into the AI system category. This is the basic legal question on which we are advising companies in many different sectors. The second legal issue is whether an AI system is prohibited, high-risk or low-risk. If they can categorise the given AI system, we advise our clients about the obligations and deadlines based on the new Act.”
Under the AIA, systems designated as high-risk AI systems (HRAIS) will be subject to a broad range of significant obligations, particularly for providers. Distributors, importers and deployers (users) of HRAIS also face strict requirements of their own. Specific provisions will apply to general purpose AI (GPAI) models, which will be regulated regardless of how they are used. All other AI systems are considered low-risk and will only be subject to limited transparency obligations when they interact with individuals.
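The tiered structure described above can be pictured as a simple triage. The sketch below is purely illustrative, not legal advice: the function name, the boolean inputs and the decision order are assumptions for this example, and determining whether a real system is a prohibited practice or falls within a high-risk category requires legal analysis of the Act itself.

```python
# Hypothetical triage sketch of the AIA's risk tiers, as summarised in the text.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice: use is banned"
    HIGH_RISK = "HRAIS: significant obligations for providers; strict duties for others"
    GPAI = "general purpose AI model: regulated regardless of how it is used"
    LIMITED = "other AI systems: limited transparency obligations"

def classify(prohibited_practice: bool, high_risk: bool, gpai_model: bool) -> RiskTier:
    """Map the assessed characteristics of a system to the tier described above."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if high_risk:
        return RiskTier.HIGH_RISK
    if gpai_model:
        return RiskTier.GPAI
    return RiskTier.LIMITED
```

In this sketch the order of checks reflects the Act's logic as the text describes it: prohibition overrides everything, and only systems outside the prohibited and high-risk categories fall back to the limited transparency regime.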
The AIA prohibits the use of certain types of AI system, such as biometric categorisation and identification (including untargeted online scraping of facial data) and subliminal techniques that may exploit personal vulnerabilities or manipulate human behaviour, thereby circumventing fundamental rights or causing physical or psychological harm.
Timeline: EU AI Act
Critically, obligations will apply at staggered points across the three-year implementation timeline: both the prohibitions on certain AI systems and the requirements for AI literacy will apply from 2 February 2025, while GPAI requirements will apply from 2 August 2025. Almost the whole AIA, including the provisions on HRAIS, will apply from 2 August 2026, except for the provisions on AI systems that are safety components of certain products, which will apply from 2 August 2027. Financial penalties under the AIA will be very significant: up to €35m or 7% of global annual turnover for the previous financial year (whichever is higher), depending on the type and severity of the infringement and the size of the company.
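The “whichever is higher” cap on the top fine tier is simple arithmetic, and a short worked example makes it concrete. This is an illustrative sketch only: the function name is invented, and the actual fine in any case would be set by the regulator within this ceiling, not computed mechanically.

```python
# Illustrative sketch (not legal advice) of the AIA's top fine ceiling:
# the higher of EUR 35m or 7% of global annual turnover for the previous
# financial year.
def max_penalty_cap(global_annual_turnover_eur: float) -> float:
    """Return the upper bound of the top AIA fine tier for a given turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a company with EUR 1bn turnover, 7% (EUR 70m) exceeds the EUR 35m floor:
print(max_penalty_cap(1_000_000_000))  # 70000000.0
```

For smaller companies the fixed €35m figure dominates: 7% of €100m turnover is only €7m, so the ceiling stays at €35m.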
Ultimately, effective compliance with the AIA will depend upon maintaining the highest ethical standards. “You need to develop your internal processes and standards so that they’re going to be sufficiently high to be able to comply with all relevant regulation around the world, including the EU AI Act,” says Ivan Karpják, Legal Director, Central Region at Johnson & Johnson MedTech.
Compliance can often be seen as a burden. But Julia Bonder Le-Berre, Head of Global Privacy for Iron Mountain, thinks otherwise. “As in-house legal counsel, you always look at how regulation can enable business,” she says. “So, it's not all a compliance burden, but actually how compliance with this new AI Act can create a business competitive advantage for your organisation. It may be that advantage is created when you’re in a position where you can demonstrate compliance to your customers, where you can give them assurance, or maybe you can be the first on the market who created new services that are already compliant with these new rules.”
Digital Horizons – Responsible AI
Key Contacts

Dóra Petrányi
Partner, CEE Managing Director, Co-Head of the Technology, Media and Communications Group
Budapest, Hungary