Challenges
According to respondents to our AI Survey of CEE companies, adoption of AI is increasingly widespread: 17% of respondents are heavy users, 43% use AI to some degree, 20% plan to do so in the future, and 20% do not use it. Their main challenges in seeking to use AI responsibly are privacy and security concerns (66%), data accuracy and quality (55%), and lack of expertise (38%).
Olga Belyakova, CMS Partner and Co-Head of TMT in CEE, identifies wider practical challenges. “First, companies have to decide which platform to use and when, and to agree internally, especially in big organisations, who will be responsible for what, and how they should implement systems so that everyone can use them consistently and compliantly,” she says. “Implementation challenges are mostly not legal, but business, putting pressure on the board.”
She adds: “Liability is an issue for both users and creators, because of the fine line between what you use and how you use it. Given there is no developed practice, liability questions cannot be answered automatically. Common sense rules apply, but common sense can be very subjective.”
According to Alžběta Solarczyk Krausová, a member of the former Expert Group on New Technologies and Liability at the European Commission, “most companies are interested in how to operationalise requirements, such as which documents they need to prepare, and what processes they need to introduce. Companies are also thinking about how to use AI responsibly in areas such as marketing.
“Many companies are introducing specific themes for AI transition, and thinking how to make their processes more efficient, and deal with compliance.” On generative AI, she notes that companies are looking at “how to introduce it, and how to adjust their ethical code of conduct to adapt to the new technology. A few companies are going far beyond what the Act requires them to do.”
Magyar Telekom Group Legal Director Dániel Szeszlér notes: “Transparency is key. We don't want to put anything on the market that is not entirely clear, both for customers who are making good use in their own businesses of the solutions we provide, and for end users who are impacted in any way by the use of AI. So, if something is not transparent in terms of what they encounter (how AI is impacting the output they receive), then it’s a no-go.”
He adds that “Human-centric approaches are key. When it comes to legal requirements, we are ahead of the big implementation project for the new AI Act.”