UKJT consultation: Liability for AI harms under English private law
The UK Jurisdiction Taskforce (UKJT) has launched a consultation on a legal statement on liability for AI harms under the private law of England and Wales, seeking stakeholder input on whether the draft Legal Statement (annexed to the consultation) sufficiently addresses the key areas of perceived uncertainty as to liability for AI harm.
For the purposes of the draft Legal Statement, AI is defined as “a technology that is autonomous”. That term is intended to capture:
- an unpredictable relationship between the input and output,
- opacity of reasoning, and
- a limited ability on the part of a user to control output.
The autonomous behaviour of AI raises novel issues in terms of perceived and actual legal uncertainty. The draft goes on to explain that in many circumstances the use of AI creates no special difficulty in predicting legal outcomes under current law; however, the lack of precedent regarding AI harm creates actual uncertainty as to how the courts will in fact address these issues.
Scope of the draft Legal Statement
The draft Legal Statement aims to clarify in what circumstances and on what legal basis English law will impose liability for loss that results from the use of AI, with a primary focus on non-deliberate harm caused by AI. Matters of criminal, competition, taxation, intellectual property, contract formation, regulatory and public law are all out of scope for the consultation.
The draft makes clear that AI has no legal personality and accordingly cannot itself be held legally responsible for harm. Instead, liability is likely to be governed either by the contractual agreements that exist between the parties or, to the extent there is no contractual relationship, on the basis of any non-contractual duty (primarily negligence).
The draft Legal Statement also considers questions around the generation of false statements by AI including the risk of claims for negligent misstatement and defamation.
Questions considered by the draft Legal Statement and its draft responses
- Does the principle of vicarious liability apply to loss caused by AI?
Vicarious liability applies only when one legal person is responsible for another legal person’s actions. Since AI is not a legal person, the user cannot generally be held vicariously liable for its actions or failures.
However, an employer could be vicariously liable for the acts of an employee who causes harm via negligent use of AI.
- In what circumstances can a professional be liable for using or failing to use AI in the provision of their services? If AI used in the provision of professional services produces erroneous output, is the professional liable for loss resulting from the error?
Courts will apply established negligence principles to AI (duty, breach, causation, remoteness), informed by standards, guidance, data selection and output validation, to assess whether reasonable care has been taken by developers and users.
The autonomous nature of AI may complicate causation; however, the ordinary “but for” and scope-of-duty analysis will often suffice.
For pure economic loss, recovery generally requires a “special relationship” involving an assumption of responsibility. Professionals must exercise reasonable skill and care in choosing, supervising and verifying AI output in order to comply with their contractual and concurrent common law duties. Whilst failing to disclose the use of AI will often in itself be a breach of duty, transparency does not excuse poor selection or oversight, and a failure to conduct proper due diligence on an AI system before use is likely to constitute a breach of duty. Conversely, a failure to use AI tools may also amount to a breach of duty where it results in the professional falling below the expected standard of care (i.e. where a reasonable professional would have used AI).
- Can a person ever be liable for harms caused by use of AI where there is no fault on their part?
Absent negligence or a result-based promise, losses usually lie where they fall.
The key exception is where an AI system is incorporated into a tangible product, in which case the Consumer Protection Act 1987 will apply. The Act imposes strict liability for defective products causing death, personal injury or certain property damage, subject to a safety-expectations analysis and statutory defences.
The Law Commission is currently reviewing the law related to product liability and it is anticipated that this will address the status of “pure software”. However, in the meantime such software, including LLM-based chatbots, will not fall within the remit of the Act.
- Does liability attach to false statements made by an AI chatbot?
A key situation where AI may cause harm is where it produces false statements and leads to the provision of false information or advice. In such circumstances, a claim may arise for (amongst other things) misrepresentation or defamation.
Negligent misrepresentation claims involving AI will likely turn on whether a legal person assumed responsibility and whether reliance on the information provided was reasonable in the context.
With regard to a claim in defamation, a potential claimant will need to show that the AI-generated statement caused serious harm (i.e. evidence is required of the impact of the statement). Whether people believe the AI output will be relevant to proving serious harm; if a tool is well known to hallucinate, this may therefore limit the scope for damages in respect of any harm.
The available defences to such a defamation claim are also likely to be reduced. For example, the defences of honest opinion and public interest are unlikely to be available where a legal person did not review the AI output.
Practical tips to reduce risk
Whilst the position on liability for AI harms remains under consideration, there are a number of steps parties can take to reduce risk exposure when using AI:
- Ensure that, to the extent there are contractual agreements in place, they contain provisions clearly allocating the risks arising from the use of AI between the parties. Ensure that any warranties, indemnities, exclusions and limitation of liability clauses are consistent both within an individual contract and across the AI supply chain as a whole.
- Adopt recognised standards and guidance, have in place robust testing and monitoring procedures, document whether and how AI is used, and ensure human oversight and verification of all AI output.
- Ensure all communications and marketing materials accurately describe any AI systems being used.
Engaging with the consultation
The consultation is open for responses by email until 13 February 2026.
A virtual public event is also being hosted on 27 January 2026 to gather feedback on the draft Legal Statement.
For further information, please email the authors or your usual CMS contact.
This article was prepared with the assistance of Taiya Cooper, Trainee Solicitor in CMS London.