Digital Horizons
A series of reports exploring CEE’s digital future
Data highlights - CEE companies
Responsible AI
Overview
As a fast-evolving technology, artificial intelligence (AI) is driving transformation across every sector. Its increasing complexity raises questions about how best to build fairness, privacy and safety into AI systems. Ultimately, this requires AI governance that is robust, safe and ethical: governance that mitigates and manages risk, and maintains public trust.
Focused primarily on these objectives, the EU’s pioneering AI Act (AIA), the world’s first comprehensive legal framework for AI regulation, entered into force on 1 August 2024. Its impact is far-reaching: if the output of an AI system is intended for use in the EU, the AIA applies extraterritorially to providers and deployers of that system, regardless of whether they are established in the EU or in a third country.
With some exceptions, the AIA will become fully applicable in August 2026, after a two-year implementation period, positioning Europe at the forefront of AI regulation. Guidance and standards will be published during this period to assist with compliance.
For companies using AI systems in CEE, the AIA creates a wide range of obligations proportionate to the level of risk posed by the AI they deploy. At a practical level, these cover data quality and governance, technical documentation, record-keeping, transparency, cybersecurity and human oversight. But for every company using AI, one overriding question remains: who takes responsibility?
Central to the EU’s approach to AI regulation is that responsibility for implementing responsible AI rests primarily with the companies using AI systems. Under this model of self-regulation, each company operating in the CEE region that uses AI will determine its own ethical standards. A prominent international example is Microsoft, whose Responsible AI Standard is based on six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Of these, safety, privacy and security, transparency, and accountability appear in the AIA as core requirements for AI systems.
Similar ethical principles are shared by many CEE companies, which are equally keen to use AI responsibly. Ethical issues such as fairness and inclusiveness are not directly codified in the AIA, because they are subjective and cannot easily be regulated. Instead, these core ethical principles remain the remit of each company’s governance bodies, such as an ethics committee.
“It’s not all a compliance burden, but how compliance with this new AI Act can create a business competitive advantage for your organisation.”
“Many companies are introducing specific themes for AI transition, and thinking how to make their processes more efficient, and deal with compliance.”
“When it comes to legal requirements, we’re ahead of the big implementation project for the new AI Act.”
“Education is a crucial part of the AI journey. We have to build an AI governance framework - not just because of the EU AI Act, although it's also a driver.”
“We’ve chosen to create a champion programme with a 1 to 10 ratio, meaning that each department needs to designate 10% of its staff as AI champions. You need to create that network effect with a high level of AI literacy.”
“You need to develop your internal processes and standards so that they’re going to be sufficiently high to be able to comply with all relevant regulation around the world, including the EU AI Act.”
“The first area where we can help is analysing the definition of AI systems, to make a comparison, find a match: what kinds of software, applications, or AI models fall into the AI system category.”
“Implementation challenges are mostly not legal, but business, putting pressure on the board.”
“Providers, who produce software and systems, started setting up AI governance procedures earlier: having people in place with better training and technical skills.”
“Every employer using or deploying AI will have to provide basic literacy education to their team members about how to use AI.”
“Datacentres are uniquely positioned to benefit from AI applications which are shaping the sustainable digital transformation.”
“Any deployment of new technology, and especially AI, comes with risks, cost concerns, and liability – not just liability for the banks and financial institutions, but also for the management”
“No matter how big a company using AI in life sciences and healthcare is, they are concerned, they are taking it seriously.”
Obligations, standards and compliance
Challenges
Training
Sectors
Previous Digital Horizons series
Key Contacts
Bulgaria | Czech Republic | Hungary | Poland | Romania | Slovakia | Ukraine | Austria and SEE