Who pays when AI causes harm?

25 Mar 2026 United Kingdom

“It needs to be easier to legally prove what caused AI harm and who should pay for it.”

So says the Law Society’s chief executive, Ian Jeffery. He has welcomed an initiative by the UK Jurisdiction Taskforce (chaired by the Master of the Rolls and part of the LawtechUK initiative) to tackle current uncertainties and to cement the UK’s position as a jurisdiction of choice for digital disputes.

The Taskforce is due to publish a Legal Statement on Liability for AI Harms shortly, following consultation on a draft statement released in January this year. The draft statement addressed:

  • The definition of AI: AI is “a technology that is autonomous”, i.e. it generates outputs not determined or programmed in advance, characterised by an unpredictable relationship between input and output, opacity of reasoning, and limited user control over output.
  • Legal personality: AI cannot itself be held liable in English law. Liability for AI-caused harm must therefore be attributed to legal persons (individuals or corporate entities), using ordinary legal principles.
  • Professional liability: Professionals who use AI in the provision of services face liability for negligent use (e.g. failing to conduct due diligence, failing to exercise oversight of AI outputs, or breaching client confidentiality by using insecure AI tools) and potentially also for failing to use AI where a reasonable professional in their position would have done so.
  • Liability for AI-generated false statements: applying the established torts of negligent misstatement and defamation, liability for false or harmful statements generated by AI turns on fact-sensitive questions of adoption, duty of care, and reasonable reliance.