Sectors
Inevitably, AI adoption varies widely across sectors among CEE companies. According to our survey, the sectors with the highest levels of adoption are: Information Technology (74%); Telecoms and Media (55%); Banking & Finance (47%); Retail and E-commerce (40%); and Life Sciences & Healthcare (25%).
Among the most prominent IT operations using AI in CEE are data centres, where workloads have increased by more than 340% over the past decade and energy demand is forecast to grow more than 15-fold by 2030. Because data centres are so energy intensive, making the ongoing digital transformation more sustainable is paramount.
Eva Talmacsi, CMS global M&A and Corporate Transactions Partner and Co-Head of TMT in CEE, notes: “Datacentres are uniquely positioned to benefit from AI applications which are shaping the sustainable digital transformation. Training and delivering AI solutions requires enormous amounts of computing power and data storage, exponentially increasing the demand for datacentre capacity. AI and machine learning can unlock flexibility by forecasting supply and demand. Simultaneously, data centre operators have embraced AI to help streamline the daily running of services, reducing IT infrastructure inefficiencies.”
As elsewhere, many banks and financial services companies in CEE were early adopters of traditional AI systems. “They deployed AI in several ways,” says Cristina Reichmann, CMS Banking and Finance Partner in Romania. “But the scale and impact are really skyrocketing now. Any deployment of new technology, and especially AI, comes with risks, cost concerns, and liability – not just liability for the banks and financial institutions, but also for the management,” she says.
“Some are specific to them. They are heavily regulated, and to comply with specific regulations, they must have a proactive approach. Data privacy is a critical concern for them in relation to AI. To protect data, you need robust security measures: encryption, data storage solutions, regular security audits, audits with respect to third party providers, as well as integrating AI within existing legacy systems.”
At OTP, Schin says: “We want to implement stricter regulation than the EU AI Act because we believe that in the finance industry, trust is so important. Because our industry is built upon it, the last thing we want is to lose our clients’ trust. If you misuse the technology, it’s very easy to lose people’s trust – and AI could give plenty of room for losing trust, because you don’t know where the technology may not work as expected. So, you can’t make a mistake, or miscommunicate. It’s not just about trust, it’s also about being accurate, being clear on what we are doing and being transparent on how we are working.”
Danevych notes: “No matter how big a company using AI in life sciences and healthcare is, they are concerned, they are taking it seriously. Big companies usually have internal ethics committees or business risk assessment committees, focused specifically on AI. But even small startups in CEE often begin by considering the key risks their idea or product will face in a more regulated environment. They’re trying to define how to frame the risks, to take them into account, looking for advice in jurisdictions they’re most focused on. They’re trying to predict whether there will be specific limits and restrictions relating to regulation.”
At Johnson & Johnson, Karpják adds: “Once you incorporate any AI element into your products, just as with any other EU legislation around data, it brings an additional complexity you need to consider when building your products.”