AI and…liability
Bandwidth: Enabling AI-driven success
New EU rules – not just the Product Liability Directive but things like the AI Act – mean that businesses have a lot more to think about when assessing product liability.
In the context of AI, there’s now a need to look at many issues of categorisation and potential liability – for instance:
- Does a product count as an AI system?
- If so, is it high-risk?
- Does it involve a safety component?
- Is certification required before a product can be marketed?
- Are you having to deal with multiple regulators?
Some of this obviously has to be considered at the very earliest stages of product development and design. From the point of view of liability, compliance is going to be very important.
If something goes wrong and there are injuries, or if recoverable losses arise, the question of whether you breached any obligations in the product safety legislation becomes terribly relevant.
Because if you can’t show that you met all those intensified product safety obligations, you may face a de facto presumption that the product caused the injury. And then you’ll be paying out compensation.
And you may also be exposed to enforcement under the EU AI Act, which can carry very significant penalties.
So – a lot of companies are now realising that they need more clarity on their obligations, stronger internal governance on the introduction of products, better compliance and monitoring infrastructure, and more resources devoted to risk assessment.
And all that is going to be particularly challenging for smaller companies and start-ups – as are things like insurance costs.
I think we’ll see more and more small companies licensing their products and IP to bigger businesses that are better able to shoulder those costs and risks.
Which means that, in the longer term, the new rules are going to change the shape of the market.
As the EU presses on with its regulation of AI, we’ve been working with a growing number of businesses who are thinking about the impact of that regulation.
For some, the issue is not just understanding how they may be regulated, but also getting their heads around the various roles they may be playing under the EU AI Act – whether it’s as a provider or deployer or importer or distributor.
They’re starting to ask –
- How are we most likely to be categorised?
- What does that mean for us as an organisation?
- And how should we then prepare ourselves to meet our responsibilities in relation to AI that falls within the scope of the Act?
And once they do that analysis, lots of actions flow from it, such as –
- Building out new contractual clauses so that they are ready to use; and
- Having a policy framework in place to design and develop AI systems in ways that comply with the requirements set out in the EU AI Act.
Businesses want to know if their use of AI will infringe copyright. And what about the content they create with AI – will that be protected by copyright?
When it comes to copyright infringement, if your use of AI produces something that looks sufficiently similar to a copyright work, the rights holder may have a claim if they can show or infer that copying has taken place.
You may be able to avoid this by showing that your work was created independently, and that the relevant AI tool did not have any access to the copyright work.
But given the ‘black box’ nature of many of the current AI tools, that might be difficult to do in practice.
The question of infringement might also depend on the details of the particular process.
Even if the AI tool was not trained on a particular copyright work, someone might prompt it to create something very similar to that work. It’s not yet clear whether the output resulting from that prompt would infringe the copyright in that work.
For many businesses, the key step may be ensuring that staff are properly trained to use AI tools and can easily seek advice if there are any concerns.
Some major AI providers have offered indemnities, promising to defend customers using their tools against copyright claims. And that’s certainly provided some comfort to those customers. But those indemnities are not unconditional.
If customers use input they don’t have the right to use, modify the AI tool, fail to use it in accordance with their licensing terms, or use the output in an inappropriate way, they are probably not going to be protected.
The question of whether the output generated by AI tools can be protected by copyright is even more problematic.
In the UK, copyright law says that works generated by computers without a human author can have copyright protection – although for a shorter time than other copyright works and without moral rights. But – a computer cannot be an author.
Under UK copyright law, the author of a work that’s generated by a computer is “the person by whom the arrangements necessary for the creation of the work are undertaken”.
And in the context of AI, it’s not clear who that is – is it the creator of the AI tool, its trainer, its operator, or the writer of the prompt that resulted in the output?
The UK government is considering removing protection altogether for computer-generated works. Until the UK government confirms its plan, it may be prudent to assume that AI output can only rely on copyright protection where there is significant and original human involvement – perhaps through really detailed prompts or in developing or refining what the AI tool has produced.
That’s also going to be true in the majority of other jurisdictions, where copyright generally does require a human author.
The ability of generative AI to produce content is a real game-changer for many businesses.
But what happens if your AI produces content which is defamatory of an individual – maybe by hallucinating something or mismatching material from different sources?
Critical questions for these purposes are, first, whether the content is actually defamatory. And secondly, who has responsibility for the publication of the content.
An author, editor or publisher can be liable for defamation in England. But in this case, who would the author or originator of the statement be? Is that the AI?
Probably not, as the AI is a machine rather than a “person” – that is, a company or an individual. Could it be the AI provider?
Whether the AI provider is an author, editor or publisher is an open question, at least as far as English law is concerned.
It is also an open question whether – if they are not author, editor or publisher – they can still be liable or if they can benefit from the defence of “innocent dissemination”.
There’s more certainty if defamatory content that’s produced by AI is repeated. In that case, the liability can rest with whoever repeats it – although the original author or publisher may be liable too.
So while you may not be liable if you prompt the AI to create the defamatory output, you may very well be if you disseminate it.
If you or your business have been defamed in AI-generated content, you may wish to make a civil claim.
In practice, though, getting the defamatory material withdrawn or taken down is likely to be a more immediate priority than going off to court to make new law about it.
If it’s circulating online, you can ask any platforms that might be hosting it to take it down, possibly resorting to legal tools if they’re not responsive.
We’ve already seen the courts get creative in helping people take action against anonymous online content, and we’d expect a similar approach here – even if the material has been produced by AI rather than by an individual.
There’s lots to discuss here, so if you’d like to know more, feel free to reach out to me or my colleagues at CMS.
One issue currently getting a lot of attention is deepfakes. If AI is used to create an image or a video of a real person, apparently doing or saying something that they haven’t actually done or said, what can that person do about it?
Image rights (sometimes called personality or publicity rights) do exist in some jurisdictions but, in the UK, there is no such thing as a standalone image right.
Instead, if individuals in the UK want to control the use of their image, they are left to rely on a patchwork of different causes of action.
For example, if the deepfake uses a photograph or video of an individual, that might count as copyright infringement – but often, the individual is not the copyright owner, which complicates things.
For celebrities, if the deepfake makes it look like they are endorsing something when they’re not, they may be able to rely on a passing off action.
If, instead, a deepfake is used to commit fraud, the priority will probably be tackling the fraud.
Or, if it’s one of the growing number of deepfakes showing a politician saying something they didn’t actually say, the politician may prefer to avoid the courts and decide that the best solution is to do nothing, especially where issues such as freedom of expression may be relevant.
However, often the issue with a deepfake is not political, commercial or fraudulent. Most deepfakes are pornographic or sexually explicit images or videos of people. This obviously raises significant privacy concerns, as well as being hugely upsetting for the individual shown in the deepfake.
The UK government has already made it a criminal offence to share intimate images, including sexually explicit deepfakes, and is in the process of introducing additional criminal offences in relation to creating sexually explicit deepfakes without consent.
It feels like it’s only a matter of time before someone goes to court in the UK in order to get a deepfake taken down. And I would expect the courts to be pretty flexible about how they deal with that, just as they have been in addressing revenge porn.
Some campaigners also want the providers of the AI tools being used to create pornographic or sexually explicit deepfakes to be liable in some way.
On balance that seems unlikely to happen – but it’s a risk AI providers have to consider.