Publication 20 Oct 2025 · United Kingdom

Introducing AI: Understand and mitigate the risks

Bandwidth: Enabling AI-driven success

An accurate assessment of AI risk – informing early-stage decisions on data use, safeguards, documentation and oversight – is critical to ensuring appropriate risk mitigation and compliance, as well as developing trust among stakeholders.

It’s now common to run small-scale evaluations of AI before committing to it – not just to assess the AI, but to see how the workforce react to it and interact with it. 

You can give people dummy data sets to play with, or invite them to workshops to experiment in a controlled setting.

But many businesses opt to start with a sandbox – whether that’s an actual sandbox unconnected to other systems, or a trial system with additional guardrails and precautions in place. 

Sandboxes can be particularly useful in heavily regulated sectors, as testing in a safe environment enables the regulator to keep an eye on what’s happening and provide input where needed.

In evaluating a trial of AI, you need to think in advance about your KPIs. (A rough sketch of how such metrics might be captured follows the list below.)

  • What is it feasible to measure? 
  • How can you gauge effectiveness, or compare the time taken with and without AI? 
  • How will you assess accuracy and quality? 
  • How will you take account of the learning curve and engagement of your testing cohort? 
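
By way of illustration only, here is a minimal sketch in Python of how answers to some of these questions might be captured during a trial. Every name and figure in it is a hypothetical placeholder; the right metrics will depend on your business and the tasks being tested.

```python
from statistics import mean

# Hypothetical records from a pilot: each entry is one task completed
# during the trial, with and without the AI tool. All figures are
# illustrative placeholders, not benchmarks.
baseline_tasks = [
    {"minutes": 42, "accuracy": 0.91},
    {"minutes": 38, "accuracy": 0.95},
    {"minutes": 51, "accuracy": 0.88},
]
ai_assisted_tasks = [
    {"minutes": 25, "accuracy": 0.90, "week": 1},
    {"minutes": 19, "accuracy": 0.93, "week": 2},
    {"minutes": 16, "accuracy": 0.94, "week": 3},
]

# Effectiveness: compare average time taken with and without AI.
time_saved = mean(t["minutes"] for t in baseline_tasks) - mean(
    t["minutes"] for t in ai_assisted_tasks
)

# Accuracy and quality: compare average scores from (say) expert review.
accuracy_delta = mean(t["accuracy"] for t in ai_assisted_tasks) - mean(
    t["accuracy"] for t in baseline_tasks
)

# Learning curve: track how per-task time changes week by week, so early
# unfamiliarity is not mistaken for a failing of the tool itself.
weekly_minutes = {t["week"]: t["minutes"] for t in ai_assisted_tasks}

print(f"Average minutes saved per task: {time_saved:.1f}")
print(f"Change in average accuracy: {accuracy_delta:+.2f}")
print(f"Minutes per task by week: {weekly_minutes}")
```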

In certain cases you might want to run parallel trials of competing products, capturing the same metrics for each. This facilitates like-for-like comparisons – and may surface things you’d otherwise miss. 

Also, don’t neglect the people who are part of your trial. They may have anxieties or concerns about being judged. Some may be afraid of putting themselves out of a job. 

You need to communicate with them in a way that alleviates such worries, and set in place a clear and robust policy – albeit one that’s flexible enough to evolve as your use of AI evolves. 

You want them to feel there is a structure they can understand, and feel safe within, when they trial, explore or discuss AI. And you want them to understand that a trial can be a success even if the product is not ultimately adopted. 

When any technology that is new to an organisation is deployed for the first time, risks can arise without adequate oversight and an understanding of how the technology actually works. This is no different for AI. 

We have worked with clients to establish oversight structures for their use of AI, including appointing individuals with key expertise to spearhead governance within their organisation. For many of our clients, this also means developing new policies and procedures or updating existing ones, and delivering training and education to make sure that everybody is comfortable with the organisation’s approach to the adoption and use of AI – and to make sure they understand the risks AI may pose to the business. 

As with any technology, understanding its capabilities and limitations is critical for using it appropriately and for putting in place processes and procedures that can help to avoid or mitigate any missteps, and fully realise all of the benefits that AI can bring. 

AI technologies are evolving rapidly and the regulatory landscape is also undergoing rapid change, making it challenging to keep up with the latest developments and ensure that oversight mechanisms are adequate. 

In the space of just a few years we have moved from a position where many organisations had no specific AI oversight and governance structures to one where governance and AI literacy are requirements under the EU AI Act, and where organisations recognise the importance of overseeing AI technologies at the highest levels of corporate governance. 

Continuous learning and adaptation are therefore necessary to manage the risks associated with AI deployment. And we’ve already helped clients through multiple iterations of their oversight structures, to keep pace with these changes and their maturing adoption of AI tools.

AI governance and risk are now clearly understood to be critical, board-level issues. Even just a couple of years ago, that wasn’t always the case. But AI’s rapid development – and the growing awareness of its enormous power – have put it centre stage in the boardroom.  

Governance covers a huge range of issues. And it can be tough to get right. Naturally it has to reflect the company’s overall risk appetite and risk tolerance. But if it’s too relaxed it exposes the business to unnecessary and possibly serious risks, while if it’s too stringent it kills enterprise and innovation. 

And in purely practical terms, it has to facilitate – and support – the myriad and often highly nuanced decisions that businesses have to make every day. 

An appropriate risk assessment is the bedrock of good governance. It enables your business to innovate at scale, because you’ve identified the right guardrails. And it fosters trust as well as growth – not only among your own people, but with customers, regulators and investors. 

However, AI risk assessment can be complex – and there’s no shortage of standards and guidelines that cover it to varying degrees. They typically stress the breadth of the assessment that’s needed and the necessity of updating it frequently. Many emphasise risks that are very specific to AI. 

Traditional tech systems, for example, don’t display emergent behaviours, or evolve by themselves, or engage in ‘black box’ reasoning. And there are some very specific considerations that relate to the developing landscape of AI regulation. But also, as AI permeates every aspect of the business world, the nature of the associated risks can change. 

In some cases the most serious risks may not be technical failings or regulatory breaches, because some AI risks aren’t really AI risks at all. They manifest when the use of AI exposes a pre-existing weakness or makes it worse. So risk assessment has to be holistic, and has to go well beyond the AI itself. 

Above all, though, an AI risk assessment has to be practical. As in the management of any risk, there are going to be trade-offs. And it’s the effective identification, expert analysis and prioritisation of risks that enables those trade-offs to be well-considered – and enables your business to thrive. 

If you’d like to discuss these issues in more depth, please do give me a call.  

Or get in touch with me – or any of my colleagues at CMS – to talk more about how businesses can make the most of the incredible opportunities that AI offers.

AI systems are subject to regulatory transparency requirements under the EU AI Act and, where they use personal data, under the GDPR and other data protection laws.  

Transparency requirements go beyond requiring a description of how AI technology works. The law requires that explanations must provide certain prescribed information, in a way that is clear, easy to understand and accessible. In an AI context, this involves being open and honest with people about how and why an AI-assisted decision will be made about them. It also means individuals must be made aware if their information will be used to train and test AI. 

In practice, existing privacy notices will usually need to be updated prior to the roll-out of AI systems. The use of layered information, icons, dashboards and diagrams can all help to break down the complexities in a way that meets regulatory expectations. To check if you have achieved your transparency objectives, consider testing your draft privacy notice with a trusted focus group. Honest feedback can be documented to support compliance, and drive improvements where needed.  
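
To make the idea of layering concrete, here is a purely hypothetical sketch in Python of how a layered notice might be structured: a short, plain-language first layer, with further detail a reader can expand. It illustrates the structure only – it is not template wording, and every field in it is a placeholder to be settled with your advisers.

```python
# A hypothetical layered privacy notice, sketched as a simple data
# structure: a short, plain-language first layer, with deeper layers
# a reader can expand if they want the detail. All wording is placeholder.
privacy_notice = {
    "summary": (
        "We use an AI tool to help review your application. "
        "A member of staff checks every decision."
    ),
    "layers": [
        {
            "heading": "How the AI-assisted decision is made",
            "body": "The tool compares your application against ...",
        },
        {
            "heading": "Whether your data trains the AI",
            "body": "Your information will / will not be used to train or test ...",
        },
        {
            "heading": "Your rights and who to contact",
            "body": "You can ask for a human review, or contact ...",
        },
    ],
}

# Render the first layer up front, with the expandable detail beneath.
print(privacy_notice["summary"])
for layer in privacy_notice["layers"]:
    print(f"\n{layer['heading']}")
    print(f"  {layer['body']}")
```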

In terms of the benefits, as well as facilitating legal compliance, transparency helps to build trust, inspire confidence in the technology and guard against complaints and reputational damage. In this sense transparent AI systems uphold the reliability and credibility of businesses, as well as safeguarding individuals. 

The team here at CMS and I are experts on this topic, and would be happy to help you with this – so feel free to reach out if you’d like to find out more. 

The pace at which AI technology is being developed is unprecedented, as providers race to build and release their solutions as quickly as possible.   

Whilst some of these providers are amongst the world’s largest technology companies, a number have only recently been established, which raises important questions about the security controls that may or may not have been put in place to safeguard any use of their solutions. 

In particular, poor security controls over systems can lead to cyber breach incidents, which may result in the loss or theft of data. That in turn can lead to customer complaints, regulatory investigations, fines and reputational damage.   

AI also gives bad actors new opportunities to infiltrate your systems, whether by attempting to circumvent your security controls or by committing fraudulent activity such as creating fake IDs. 

So how do you address these risks? Many organisations will already have sophisticated information security teams that are tasked with preventing cyber incidents and guarding against other security risks.   

Processes will also already be in place to assess the suitability of new providers when procuring systems.  Utilising these resources will therefore be important, recognising that AI brings new risks that have not been seen before. 

You also need to consider your contractual position with AI providers. Do you have sufficient audit rights, so that you can assess the suitability of their security measures? And do you require them to agree to a documented information security policy that regulates their security measures for the system? 

In short, you should ensure that you coordinate your efforts here and take proactive steps to identify and address security issues with AI.  

We know there’s a lot to consider here. So do reach out to me or my fellow experts at CMS if you’d like to know more. 
