Publication 26 Nov 2024 · Kenya

Section 2 – AI risks in focus


In recent years, technical progress in AI technologies has accelerated rapidly, driving digital transformation. The use of AI opens a wide range of opportunities for companies to optimise their processes and tap into new areas of business.

Areas of application of AI


  • In the Consumer & Retail sector, chatbots based on generative AI are used to provide efficient customer service with personalised and targeted responses. Retailers are implementing AI-based predictive analytics to understand future market opportunities and customer behaviour.
  • In the Healthcare sector, AI systems are used in detection and diagnostics to detect diseases more accurately, reliably and earlier.
  • In the Financial Services sector, AI is helping to develop new investment strategies and is playing an increasingly important role in asset management.
  • In the Media sector, AI is used to generate content and create targeted advertising campaigns.
  • In the Energy sector, AI is used to create more precise forecasts for energy generation and consumption. This is particularly important for the integration of renewable energies such as wind and solar power, whose availability fluctuates.

It is therefore not surprising that more and more companies are seeking to reap the benefits and opportunities provided by AI. A total of 75% of the respondents to our survey said that they expect their organisation to make greater use of new technologies such as AI in the next three years (up from 69% in 2022).

The potential drivers of AI disputes

Whilst 27% of respondents ranked disputes arising from the use of AI in their top three sources of technology disputes in the past three years, this number will only rise as more companies adopt the technology. A total of 52% of respondents agreed with the statement that the use of AI technologies will lead to risks and disputes that cannot be foreseen, and 30% said that the use of AI technologies is likely to lead to more disputes.

The development and use of AI not only offers opportunities, but also creates significant risks for companies that develop, distribute or use AI. AI-related disputes may concern the following:

  • Violation of third-party IP rights by training/use of AI:
    The training of generative AI requires the use of very large data sets. If copyrighted works are incorporated into those data sets without the consent of the respective rights holders, there is a risk of claims by the rights holders. Whether training with copyrighted works constitutes infringement is unclear and could be answered differently from jurisdiction to jurisdiction. In the USA, several prominent court cases against producers of generative AI are pending on this issue.
  • Liability for products using AI:
    If damage occurs when using an AI product because the AI system does not function as intended, the question arises as to who is liable for the damage caused. Several parties may be involved in the development, distribution and use of the product and the underlying AI system. Further difficulties arise if the error or defect is due to decisions made by the AI system itself based on machine learning processes.
  • Data protection disputes:
    A large amount of data is processed using AI. There is a risk that training data may contain personal information and business secrets, the processing of which could constitute a violation of the relevant data protection laws and laws on the protection of business secrets. These laws must also be observed when AI is subsequently used by company employees. If a violation is suspected, the question arises as to who is responsible for the violation. Companies must also comply with any data protection information requirements.
  • Disputes arising from the regulatory status of AI (e.g. EU AI Act):
    Liability issues may arise from AI-specific laws. In July 2024, the EU established a major set of regulatory ground rules for dealing with AI-controlled systems to avoid discrimination, surveillance and other potentially harmful effects, especially in areas relevant to fundamental rights. The regulations will be applied gradually from 2025. Under the EU AI Act, heavy fines can be imposed for non-compliance.
  • Disputes arising from the use of AI in the workspace:
    To the extent that AI is used by employers for recruitment, promotion or performance evaluation, they will need to manage the risk of discrimination or data protection violation. If these risks are not mitigated, disputes may ensue. In addition, the introduction of an AI system may lead to disputes with an organisation’s works council, trade union or other industry body. In Germany, for example, a works council has a right of co-determination if the employer wants to introduce technical equipment that is suitable for monitoring the behaviour or performance of employees. In principle, this also applies to AI systems. On the other hand, the use of AI tools by employees carries the risk of data protection violations and the infringement of trade secrets.
  • Ownership of AI output:
    The question whether the user of generative AI can claim copyright in the AI-generated content based on their ideas and specifications and, to that extent, whether the unauthorised use of this content can constitute a copyright infringement, has not yet been resolved.

AI-generated output will be utilised with increasing prevalence by the creative industries. Consequently, it is inevitable that output material to a specific project will be appropriated without permission, obliging the Courts to grapple with issues of authorship and subsistence. In the majority of instances, it seems unlikely that output generated wholly by an AI system will be protectable as a copyright work, as the output will not result from personal, creative choices, but originates instead from statistical calculations based on the relationships between specific words and letters. The resulting judgment(s) will clarify whether such content is protectable and will subsequently have a material impact on how and when we use AI.

Ben Hitchens, Partner, TMC

While respondents are in principle aware that the use of AI is associated with considerable risks, it seems that they are unsure which areas of dispute (relating to the use of AI) are most likely to increase or decrease.

In 2022, 56% of respondents expected AI-related disputes to increase. This led us to examine the concerns of respondents regarding AI-related disputes in more detail in the 2024 survey.

Respondents to the 2024 survey indicated that the violation of IP rights in the training or use of AI is the area where the risk of disputes is most likely to increase. A total of 69% of respondents said that they expect the number of disputes in this area to increase slightly or significantly. This is followed by liability for products that use AI and data protection disputes, each with 68%.

Disputes arising from the regulatory status of AI (e.g. the EU AI Act) came fourth (65%), even though AI is still only marginally regulated in large parts of the world.

Q: Specifically in relation to AI technologies, do you expect to see an increase in disputes for your organisation in the following areas over the next three years?

With the EU AI Act due to be passed at the time of the survey, it is not surprising that more respondents in Europe (63%) than APAC (60%) expect disputes to increase in the next three years. In the US and Canada, the figure is as high as 71%.

Q: Specifically in relation to AI technologies, do you expect to see an increase in disputes for your organisation in the following areas over the next three years?

*Europe, not EMEA, used in this chart to show potential impact of EU AI Act.

It is noteworthy that at least 60% of respondents expect an increase in almost all areas in which AI-related disputes may arise in the next three years. At the same time, the percentage differences between the individual areas of future AI disputes are small. Clearly, respondents recognise the broad nature of disputes associated with the use of AI.

However, the fact that the survey results are fairly homogeneous also suggests that there is uncertainty where the risks lie with the use of AI and which of those risks may ultimately lead to disputes.

The IP issue of ownership of AI output is the least concerning for survey participants: 50% stated that they anticipate an increase in disputes in this area in the near future.

The most likely counterparties in AI disputes

Regulators are considered to be the most likely counterparties to AI-related disputes, with 48% of respondents ranking them in the top three.

Ranked second are developers of the underlying AI model, at 42%. This is consistent with the violation of third-party IP rights through the training or use of AI being the area in which the largest share of respondents expect an increase in disputes. Added to this are the liability and recourse disputes concerning products that use AI.

According to the survey participants, insurers will also very likely be party to an AI dispute (41%).

Surprisingly, the owners of IP rights are at the bottom of the list of potential AI counterparties, notwithstanding the fact that IP violation is considered a growing area of AI-related risks. This strongly indicates considerable uncertainty about the risks associated with the use of AI.

Q: Which of the following are most likely to be the counterparties in AI disputes your organisation might be involved in during the next three years? Answers ranked in the top 3.

Companies may seek cover under a range of different insurance policies when AI-related disputes arise. For example, a professional services firm may seek cover under its professional indemnity policy in relation to advice provided to a client where AI was used to assist. Equally, we can envisage AI related claims in relation to cyber; crime; directors and officers; and general liability policies. We have also seen new AI specific insurance products for those operating in the AI space, e.g. developers. It is important that companies across all sectors who use AI consider the risk mitigation steps they have in place in relation to AI use, carefully check the terms of all potentially relevant policies, and seek advice from their insurance brokers to ensure that they have adequate coverage in place.

Luke Gething, Senior Associate, Dispute Resolution

AI-related disputes call for new approaches

Advances in technology are opening up new possibilities, whilst at the same time legal risks must be carefully considered. The potential drivers of AI disputes are many and varied, and the continued spread of AI promises an increase in AI-related disputes. It will be interesting to see whether the resolution of these disputes will follow the same principles as the resolution of disputes in connection with non-AI technologies.

Only 28% of our survey respondents agreed with the statement that disputes arising from AI technologies will be resolved according to the same principles as disputes arising from non-AI technologies. In fact, 57% of survey respondents believe that new forms of dispute resolution should be used to resolve disputes related to new technologies.
