
Publications

Discover thought leadership and legal insights from our experts across CMS. In our Expert Guides, written by CMS lawyers from across the jurisdictions where we operate, we provide you with in-depth legal research and insights that can be read both online and offline. You can also find Law-Now articles with focused legal analysis, commentary and insights to help you anticipate future challenges, and much more.



26/11/2024
Technology Transformation: Drive innovation, mitigate the risks
Expectations and reality of tech-related risks
20/11/2024
Sectors
Inevitably, there is a wide range of AI adoption across diverse sectors in CEE companies. Those with the highest levels of adoption, according to our survey, are: Information Technology (74%); Telecoms and Media (55%); Banking & Finance (47%); Retail and E-commerce (40%); and Life Sciences & Healthcare (25%).

Among the most prominent IT operations using AI in CEE are data centres, where workloads have increased by more than 340% in the past decade, with data centre energy demand forecast to increase more than 15-fold by 2030. Because data centres are so energy intensive, it is paramount to make the ongoing digital transformation more sustainable. Eva Talmacsi, CMS global M&A and Corporate Transactions Partner and Co-Head of TMT in CEE, notes: “Data centres are uniquely positioned to benefit from AI applications which are shaping the sustainable digital transformation. Training and delivering AI solutions requires enormous amounts of computing power and data storage, exponentially increasing the demand for data centre capacity. AI and machine learning can unlock flexibility by forecasting supply and demand. Simultaneously, data centre operators have embraced AI to help streamline the daily running of services, reducing IT infrastructure inefficiencies.”

As elsewhere, many banks and financial services companies in CEE were early adopters of traditional AI systems. “They deployed AI in several ways,” says Cristina Reichmann, CMS Banking and Finance Partner in Romania. “But the scale and impact are really skyrocketing now. Any deployment of new technology, and especially AI, comes with risks, cost concerns and liability – not just liability for the banks and financial institutions, but also for the management,” she says. “Some risks are specific to them. They are heavily regulated, and to comply with specific regulations, they must take a proactive approach. Data privacy is a critical concern for them in relation to AI. To protect data, you need robust security measures: encryption, data storage solutions, regular security audits, audits with respect to third-party providers, as well as integrating AI within existing legacy systems.”

At OTP, Schin says: “We want to implement stricter regulation than the EU AI Act because we believe that in the finance industry, trust is so important. Because our industry is built upon it, the last thing we want is to lose our clients’ trust. If you misuse the technology, it's very easy to lose people's trust – and AI could give plenty of room for losing trust, because you don't know where the technology may not work as expected. So, you can't make a mistake, or miscommunicate. It's not just about trust, it’s also about being accurate, being clear on what we are doing and being transparent on how we are working.”

Danevych notes: “No matter how big a company using AI in life sciences and healthcare is, they are concerned and taking it seriously. Big companies usually have internal ethics committees or business risk assessment committees focused specifically on AI. But even small startups in CEE often begin by considering the key risks their idea or product will face in a more regulated environment. They’re trying to define how to frame the risks and take them into account, looking for advice in the jurisdictions they’re most focused on. They’re trying to predict whether there will be specific limits and restrictions relating to regulation.”

At Johnson & Johnson, Karpják adds: “Once you incorporate any AI element into your products, just as with any other EU legislation around data, it brings an additional complexity you need to consider when building your products.”
20/11/2024
Training
When asked about current AI literacy (knowledge and skills) in their organisations, survey respondents were mixed: 12% said it was high, 39% moderate, 31% low and 8% very low, while 10% were unsure. Asked how well they understood AI systems, the results were comparably low: 8% had an in-depth understanding, 49% a functional understanding and 43% a basic understanding. Notably, ‘more training’ (40% of respondents) ranked top among factors affecting CEE companies as a result of the AIA, while 94% of respondents acknowledged that more knowledge and training was needed.

The reality is that AI knowledge is progressing at different speeds, according to Tomasz Koryzma, Head of IP/TMC at CMS Poland. “For providers, who produce software and systems, this is critical for their business,” he says. “They started setting up AI governance procedures earlier, having people in place with better training and technical skills. They are now progressing faster and are much more aware. Many deployers are less well advanced.”

According to Dóra Petrányi, CEE Managing Director at CMS and Global Co-Head of the Technology, Media and Communications Group: “Every employer using or deploying AI will have to provide basic literacy education to their team members about how to use AI: what the threats are, and how to use it in a smart and efficient way. What information do you feed it? What are the questions that you can or cannot ask? How do you handle the answers, always with a pinch of salt? So, we expect that AI literacy will be very much part of induction training going forward for any organisation. And that's a big change.”

AIA obligations on AI literacy apply from February 2025 and GPAI requirements from August 2025, so training should be happening already. Our clients or their staff are already using generative AI, which requires a lot of training. “Doing this the right way can unlock the further potential of the workforce, enabling businesses to maximise the impact of AI – you should work smarter, not harder,” says Petrányi.

Borys Danevych, Partner and CEE Head of Life Sciences & Healthcare at CMS, says: “Some companies are really focused on using AI in healthcare and training their teams. Training is something that we see as an essential, important element in the risk mitigation strategies of companies.”

Bonder Le-Berre describes the knowledge-sharing approach at Iron Mountain: “In order to ensure responsible use of AI, we've created an AI Center of Excellence (CoE): an AI governance structure which brings together experts from different functions – data, privacy, security, IT, legal. This group meets regularly and exchanges what we’ve learned about emerging AI regulations and best market practices. That way, we build knowledge that helps us determine next steps towards better compliance. Our broader group of employees also needs to have basic knowledge about AI: our AI CoE has prepared training courses for them. To increase engagement, employees earn points for completed training and are recognized for their AI literacy efforts.”

Lotár Schin, OTP Bank’s AI Center of Excellence Lead, develops the point. “Education is a crucial part of the AI journey,” he says. “We have to build an AI governance framework – not just because of the EU AI Act, although it's also a driver. To be responsible means that we need to increase people’s awareness. We call it transformative digital capability because, similar to the internet, it impacts nearly everybody in the organisation.”

OTP created an awareness training programme for every employee. “We believe they have to understand the potential risks and opportunities of AI technology,” says Schin. “We tell them about different types of AI, what different models can and cannot do. We’re in a good position now: there’s a common understanding of key concepts – Natural Language Processing, generative technology, natural language understanding, predictive statistical models – and whether logistic regression is machine learning or not.”

Ákos Janza, Managing Director, MSCI Inc., has significant training experience. “Even though you train people, they do not necessarily know how to use it,” he says. “You show them what generative AI is capable of, but they cannot make a very important distinction: how do I apply this in my day job? Because it needs to become a GPT (general purpose technology). That takes time, and you can only do that by creating some network effect.

“So, we have chosen to create a champion programme with a 1 to 10 ratio, meaning that each department needs to designate 10% of its staff as AI champions. Because you need to create that network effect with a high level of AI literacy: knowing how to create custom GPTs, knowing the difference between machine learning, deep learning and generative AI, because they are not necessarily the same. So, it's a combination of multiple things. Number one, proper video training. Number two, a network effect by AI champions. And number three, as many use cases as possible and workshops.”
20/11/2024
Challenges
According to the CEE company respondents to our AI Survey, adoption of AI is increasingly widespread: 17% of respondents are heavy users, 43% use AI to some degree and 20% plan to do so in the future, while 20% do not. Their main challenges when seeking to use AI responsibly are: privacy and security concerns (66%); data accuracy and quality (55%); and lack of expertise (38%).

Olga Belyakova, CMS Partner and Co-Head of TMT in CEE, identifies wider practical challenges. “First, companies have to decide which platform to use and when, and to agree internally, especially in big organisations, who will be responsible for what, and how they should implement systems so that everyone can use them consistently and compliantly,” she says. “Implementation challenges are mostly not legal, but business, putting pressure on the board.”

She adds: “Liability is an issue for both users and creators, because of the fine line between what you use and how you use it. Given there is no developed practice, liability questions cannot be answered automatically. Common sense rules apply, but common sense can be very subjective.”

According to Alžběta Solarczyk Krausová, a member of the former Expert Group on New Technologies and Liability at the European Commission: “Most companies are interested in how to operationalise requirements, such as which documents they need to prepare and what processes they need to introduce. Companies are also thinking about how to use AI responsibly in areas such as marketing. Many companies are introducing specific themes for AI transition, thinking about how to make their processes more efficient, and dealing with compliance.” On generative AI, she notes that companies are looking at “how to introduce it, and how to adjust their ethical code of conduct to adapt to the new technology. A few companies are going far beyond what the Act requires them to do.”

Magyar Telekom Group Legal Director Dániel Szeszlér notes: “Transparency is key. We don't want to put anything on the market that is not entirely clear, both for customers who are making good use of the solutions we provide in their own businesses, and for end users who are impacted in any way by the use of AI. So, if something is not transparent in terms of what they encounter – how AI is impacting the output they receive – then it’s a no-go.” He adds that “human-centric approaches are key. When it comes to legal requirements, we are ahead of the big implementation project for the new AI Act.”
18/11/2024
Data and the deal
If you’re buying or selling a business, don’t forget to protect the personal data involved.
18/11/2024
New laws on data
Two significant pieces of EU legislation are making big changes to the EU’s data regulation regime with an impact that extends beyond businesses in EU member states.
18/11/2024
Data | Bandwidth
Data has become a very valuable asset – in many ways, the defining asset of a modern business. It informs decisions in all areas of management. It is vital in processes ranging from sales and marketing to risk mitigation. It is the foundation of the AI revolution and of developments like big data analytics and Data-as-a-Service. New ways of refining, using and monetising data are constantly evolving. In the UK, the new government has classed data centres as Critical National Infrastructure, describing them as “the engines of modern life”.

But data is not just vital for keeping the wheels turning and the lights on. It is also key to innovation. No significant business development or transformation is likely to succeed without high-quality data. There are plenty of good reasons for businesses to press ahead with transforming themselves into data-driven organisations and riding the wave of tech-based disruption. Many are rightly concerned about surrendering a competitive advantage and fear the risks of facing today’s commercial world with yesterday’s technology. No-one wants to be a data dodo.

But as businesses amass ever-larger amounts of data at ever-faster speeds from ever-more-various sources, business leaders need a firm handle on how they collect, process, store, manage and use it. Inadequate oversight can easily trigger regulatory problems. Deceptively simple choices may cause major difficulties further down the line. The wrongful use of data can be a massive corporate headache.

In this section of Bandwidth, we look at just a few of the key data issues that business leaders should be aware of. If you want to know more about these areas, or any aspects of data protection or management, please get in touch with one of our data experts listed below or your usual CMS contact.
13/11/2024
Digital Horizons - A series of reports exploring CEE’s digital future
Responsible AI
13/11/2024
Obligations, standards and compliance
The AIA will have a profound impact on organisations that are developing, using, distributing and importing AI systems in the EU and across CEE, placing an escalating range of obligations on them, dependent upon which systems are used. The AIA defines an AI system as follows: “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Katalin Horváth, Partner in the Technology, Media and Telecommunications (TMT) team at CMS Budapest, says: “In preparing for the new AI Act, our clients have many questions. The basic starting point is: what is AI? The definition in the new Act is quite broad, and our clients can’t easily decide whether they have an AI system with a certain level of autonomy, or just machine learning, pure software or an application. Often, they just don't know whether they are using AI or not, because they buy a solution and it looks like software or an application, and they don't know whether there is any AI involved.

“So, the first area where we can help is analysing the definition of AI systems, to make a comparison and find a match: what kinds of software, applications or AI models fall into the AI system category. This is the basic legal question on which we are advising companies in many different sectors. The second legal issue is whether an AI system is prohibited, high-risk or low-risk. If they can categorise the given AI system, we advise our clients about the obligations and deadlines based on the new Act.”

Under the AIA, systems designated as high-risk AI systems (HRAIS) will be subject to a broad range of significant obligations, particularly for providers. Distributors, importers and deployers (users) of HRAIS also face assorted strict requirements.
Specific provisions will apply to general-purpose AI (GPAI) models, which will be regulated regardless of how they are used. All other AI systems are considered low risk and will only be subject to limited transparency obligations when they interact with individuals. The AIA prohibits the use of certain types of AI system, such as biometric categorisation and identification (including untargeted online scraping of facial data) and subliminal techniques that may exploit personal vulnerabilities or manipulate human behaviour, thereby circumventing fundamental rights or causing physical or psychological harm.
11/11/2024
Artificial Intelligence and Copyright Case Tracker
Executive summary

The rapid growth of generative AI, including popular models such as ChatGPT, Microsoft’s Copilot and Google’s Gemini (formerly Bard), has brought to the fore many new legal issues relating to the ownership, subsistence and infringement of copyright. At the heart of the discourse is the reinterpretation of copyright principles in the age of AI-generated content. Long-established principles about subsistence, authorship and ownership of content are being re-examined in the wake of the explosion of AI-generated work being created, and the demand from businesses that want to secure the profitability gains of using generative AI without losing control of any IP rights associated with content which may be created, in whole or in part, using AI tools.

It is unsurprising that the US courts have led the way in AI and copyright litigation to date, given that most of the leading AI technology providers are based there, but we are starting to see similar claims being issued before UK and European courts. Some of the questions UK and European courts are already grappling with include:

Is there a valid infringement claim where the entirety of a copyright work is copied for the purpose of training a large language model, but the “allegedly infringing” output seen by consumers only contains “infinitesimally small” reproductions of the original work, as in the UK case Getty Images (US) Inc & Ors v Stability AI Ltd?

What if the defendant argues that the dataset used for training uses only transient copies of the original work, as in the German case Robert Kneschke v LAION e.V.?

And how do we define the authorship of work created by AI, particularly where the law may define the author as a “natural person”, as considered in the Czech Republic in S. Š. (individual) v TAUBEL LEGAL, advokátní kancelář s.r.o., Case No. 13/2023?

These are some of the issues emerging from the adoption and use of generative AI, an area likely to be further complicated by newer and more advanced technologies and the implementation of the EU AI Act. Working with our CMS colleagues across the UK and Europe, we have built a copyright and AI case tracker providing accessible, easily digestible case summaries, updated regularly, covering the most topical case law in the area of AI and copyright. Each case summary in the tracker provides details of the relevant litigation, including summaries of each party’s arguments and the final judgment, in order to develop an understanding of this new and developing landscape of case law.

There are multiple ongoing copyright claims being litigated in the US, including between NBC Universal and Anthropic, the New York Times and OpenAI, Dow Jones and Perplexity, and Getty and Stability AI. All of these cases concern similar issues, namely the lawfulness of training AI models on datasets containing third-party copyright works, as well as liability for the outputs produced. If you would like to know more about these cases, or how these issues might impact your business, please get in touch. We have a network of ‘best friend’ firms across the US who can assist further if required. If you become aware of a case which we have not yet summarised, please let us know by emailing copyrightAItracker@cms-cmno.com
08/11/2024
Time to put COP28 to the test: COP28 reflections and COP29 expectations
With record levels of participation from a diverse range of stakeholders, COP28 in Dubai, UAE, put forward a bold agenda and secured notable commitments on the Global Stocktake, climate finance and the energy transition. The real test of success will depend on concrete actions at COP29 in Baku, Azerbaijan. Billed as the ‘Finance COP’, COP29 is expected to address critical gaps in the green transition, and our insights and key takeaways are summarised below.
06/11/2024
COP29
Welcome to CMS’s COP29 Hub, home to our COP29 delegates’ firsthand experience of the conference and our climate change experts’ analysis of COP29’s themes and pledges. We will post updates here throughout COP29 and look forward to discussing the implications of the negotiations with you.