AI and…regulation
Bandwidth: Enabling AI-driven success
AI regulation is a work in progress around the world. On the one hand, you’ve got people who want less red tape and more freedom to innovate.
On the other, you’ve got those who are terrified that the machines will take over the world and we’ll all be living in a Terminator movie.
And you have the perennial problem of legislators and regulators struggling to keep up with the rate of technological change.
In the UK, both general principles and specific legal frameworks and regulations – some of them sector-specific – govern the use of AI in areas such as data protection and privacy. And there are significant risks and liabilities associated with using AI in these areas. Fortunately, a lot of businesses are already on top of privacy and data protection issues.
There’s also quite a lot of AI-specific guidance around topics like explainability, accountability and how to deploy AI in a way that complies with UK data protection rules.
However, a business in the UK – or anywhere else – may also be subject to any rules that apply in other jurisdictions where it does business, or even where its providers process data. And because AI is developing so rapidly, there are many issues that are not clear-cut.
There is also the big, related question of how any specific iteration of AI will be governed.
Not all AI is the same, and it will become increasingly clear that different AI systems come with significantly different risks. There will certainly be situations in which AI doesn’t change anything fundamental.
If you’re just using it, say, to speed up your workflow by drafting responses to customers, there shouldn’t be significant extra regulatory risk – provided access to customer data continues to be appropriately controlled.
It’s still a human being who reviews the content, signs it off and presses send. But if you take the human out of the loop, leaving the machine to review a customer issue and then draft and send a response automatically, the commercial situation remains the same but the regulatory situation becomes very different.
So does any situation in which your processes start being run by a third party using AI.
Is the UK falling behind in the global race on AI?
The EU AI Act is described as the world’s first comprehensive AI law, positioning the EU to play a leading role globally. It’s already in force, and its first provisions started to apply this February.
In contrast, instead of introducing new legislation, the UK has so far adopted a pro-innovation approach.
Hoping to encourage investment and growth, the UK is regulating AI largely through existing legislation, with existing regulators working on AI issues as they arise across different sectors.
Regulators have, in general, been encouraged by the government not to be too heavy handed and to try to promote innovation.
However, in practice there are always going to be limits on how permissive some regulators will be.
While the UK government has endorsed most of the AI Opportunities Action Plan it commissioned from Matt Clifford, it is yet to take many of the steps set out in the plan.
Meanwhile, the UK government’s draft AI legislation has not been published, and arguments are continuing about issues such as the use of copyright content for training AI models.
The plan is for the UK to become an AI superpower. In practice, it still feels like it’s a case of watching this space to see if the UK can catch up or if other jurisdictions will pull ahead in the AI race.
The pace of AI regulation – especially in the EU – has picked up rapidly, reshaping the legal and compliance landscape for businesses deploying or developing AI.
At the centre of this shift is the EU AI Act, which aims to establish a comprehensive framework for AI across Europe.
At CMS, we’ve been closely involved from the start, supporting clients with policy advocacy and their engagement with legislators and regulators, helping shape the emerging law, standards, guidance, and model contract clauses.
That’s included explaining both the risks and benefits of different AI technologies, clarifying how the rules would apply, and ensuring the appropriate allocation of responsibility across the AI supply chain.
Now, as the provisions of the AI Act gradually become applicable, we’re working with clients to unpack what it really means for them in practice.
Although many of the detailed regulations and guidance are still to come, businesses can’t afford to wait – they need to operationalise their AI as a priority.
So, we’re helping them identify, categorise and understand the regulatory requirements that apply – or may in future apply – to:
- the AI systems they are developing and aim to commercialise or use in their own business
- those they license in
- those forming component parts of the third-party products and services they sell
- and those their service providers propose to use.
For many organisations, this means thinking internationally.
Local rules often vary – even within the EU, Member States are introducing their own additional obligations. And at EU level, there’s also the Product Liability Directive to consider.
The bottom line? Businesses that want to unlock the full potential of AI need compliance programmes and governance arrangements that work across borders – and that evolve as the regulatory picture continues to develop.
This is a topic we’re regularly advising clients on, so if you would like to know more, please do contact me or my colleagues at CMS.
As AI adoption accelerates, regulatory expectations are evolving too.
And some businesses have been surprised to discover that the EU’s new regime for reporting AI-related incidents may apply to them.
Under the EU AI Act, the providers of high-risk AI systems are required to report serious incidents to the relevant authorities in EU member states.
But a wide variety of systems are categorised as high-risk, and in many cases the business that deploys a high-risk system may count as a provider.
Under the new regime, a serious incident will include any AI malfunction or outcome — direct or indirect — that results in:
- Death or serious injury
- Significant property or environmental damage
- Irreversible disruption to critical infrastructure, or
- A breach of EU fundamental rights protections
Providers are expected to report as soon as they establish a link — or even the reasonable likelihood of a link — between the AI and the incident, and in any case to report within a certain timeframe.
Where death occurs, for example, a report should be made as soon as there is even a suspicion that the AI is involved – if necessary as a preliminary report with more detail later.
Once an incident is reported, the provider must carry out an investigation – which will include a risk assessment and corrective action – and fully cooperate with the relevant authorities.
We’re actively supporting clients in understanding these new obligations — from scoping their reporting responsibilities to incorporating appropriate procedures in the quality management systems that the Act obliges them to create.
Because even though aspects of the regime should be clarified by further guidance from the EU, businesses will still need to make many nuanced judgments about risk, liability and reputational exposure.
At CMS we believe AI offers enormous potential — and that regulation, if approached proactively, can be a catalyst for trust and long-term success.
We aim to help our clients integrate compliance into their innovation strategies – and turn it into a competitive advantage.
If you’d like to know more, feel free to reach out to me or my fellow experts here.
We may, much sooner than many people believe, get to the stage where it is negligent not to use AI. It’s already clear that, in certain areas, AI can outperform a human.
So in those areas, if you don’t use AI to supplement, accelerate and improve your output – for example, as a doctor… or even as a lawyer – it’s going to get increasingly hard to argue that you’ve exercised due skill and care.
And in all probability, a court – and possibly a regulator – could deem it negligent not to have used AI in certain areas.
Obviously that means some people will have to get used to using AI and using it effectively, whether they like it or not.
If you have staff who should be using it but don’t, despite your best efforts to train them and convince them to do so, it will ultimately be a risk for you to keep on employing them.
There’s also inevitably going to be a perpetual back-and-forth between the push for more regulation and the resistance to that.
It’s always a game of catch-up, and people often don’t quite know where the law sits – partly because it typically lags some way behind tech and market developments. This may mean that we see some interesting litigation about AI – quite possibly in unexpected areas.
To use an analogy, I don’t think most people a few years ago would have envisaged quite how complex and wide-ranging litigation about data protection would become. It initially felt like quite an anodyne – albeit complicated – set of rules.
Yet over the years, people pushed the boundaries of the data protection regime, which led to unanticipated litigation and a fluctuating legal position.
AI offers even more scope for disputes – and thus a complex landscape of litigation to develop.
AI providers – especially smaller ones – may face a dilemma over customer data.
Do they want to use it to help their AI evolve?
If so, they may be categorised as data controllers under data protection law – which carries far more legal ramifications than simply processing data on a client’s behalf. The key question here is whether the customer data includes any personal data – in other words, information that identifies individuals.
The largest AI providers have already had to address this. But many others haven’t. And in the long term, the commercial reality is that if they want to keep improving their products, it’s likely that a number of these providers will need to accept they are data controllers if they’re using personal data to train their AI model.
So that will make the whole issue of customer data more complex for them – and some of the businesses whose data they’re using may well start looking for additional safeguards.
This can throw up some knotty issues and difficult conversations, and potentially change the balance of a relationship. And these things will have to be discussed on a case-by-case basis.
But we’re here to support you with this – and you can uncover more in our Bandwidth AI series. So if you’d like to know how we can help, do reach out to me or any member of our team at CMS.
AI is going to make it possible to determine many customer disputes much more quickly and cheaply.
And I think regulators will be generally accepting of this, because at the moment so many complaints get resolved slowly and at a disproportionate cost.
In an area such as retail banking, where you’ve got customers making complaints at the level of disputing a particular charge on a credit card, it’s going to be something of a no-brainer, because in many ways that’s such a perfect use case for AI.
There’s a large volume of complaints, but they’re mostly of a repeat nature and so easily understood and evaluated by AI. The relatively small number of complex ones can be dealt with by a different process.
However, where AI is used there will have to be some disclosure and transparency. People will need to understand that their dispute has been determined by a machine.
And there will have to be the possibility of appeal, to say: look, if you’re not happy with this outcome that the machine has delivered, here are some bases on which you can appeal it so that a human can assess it.
But you’ll have to define those bases fairly precisely – and maybe even have appeals reviewed by AI for eligibility – because otherwise everyone will appeal whenever the computer says no, and your workload will go up rather than down.