
Artificial Intelligence - Who is liable when AI fails to perform?


A driverless car runs over a pedestrian; a drone partially operated by a pilot crashes and causes damage; an AI diagnostic programme recommends the wrong medical treatment.

Who would be liable in each of these circumstances? The user of the car? The programmer of the medical software programme? The manufacturer or pilot of the drone? The developer of the AI system? The AI system itself?

This article addresses the crucial question: who is liable when AI fails?


The inevitability of AI disputes

As AI technology develops at such a fast pace and human decision making fades into the background, it is inevitable that some AI systems will fail to perform. Given the increasing use of AI technology in our daily lives and the potential damage caused by its failure, we will see an increase in AI-related disputes over the next decade. Issues of liability for autonomous systems and software-driven incidents are not new, however.

As far back as the 1980s, Therac-25, a radiation therapy machine developed by Atomic Energy of Canada Limited ("AECL"), delivered damaging doses of radiation to cancer patients due to faults in its software, with fatal results. Liability in this case is still debated today, as some hospitals had implemented their own upgrades to the systems, which arguably caused the overdoses.

The cause of an AI system’s failure to perform is the key element for establishing:

  • a breach of a duty of care in negligence claims;
  • a breach of an express or implied term in contractual claims; or
  • a causal link between the defect and the damage suffered in consumer protection claims.

Under each claim, the fault or defect must have caused the damage or loss.

Who is at fault when an AI system fails to perform?


As there are many parties involved in an AI system (the data provider, designer, manufacturer, programmer, developer, user and the AI system itself), liability is difficult to establish when something goes wrong. Many factors need to be taken into consideration, such as:

  • Was damage caused while the AI system was in use, and were the instructions followed? Was the AI system provided with any general or specific limitations, and were they communicated to the purchaser? If so, liability may lie with the user or owner.
  • Was the damage caused while the AI system was still learning? If so, liability may lie with the developer or data provider.
  • Was the AI system provided with open source software? If so, liability may lie with the programmer.
  • Can the damage be traced back to the design or production of the AI system, or was there an error in the implementation by its user? If so, liability may lie with the designer, manufacturer or user.

AI liability and current law

Specific rules are being formulated in certain sectors to deal with the risks posed by AI systems. For example, the UK is proposing to introduce rules under which the insurer will generally bear primary liability for accidents caused by autonomous vehicles. In the absence of legislation relating to AI, redress for victims who have suffered damage as a result of a failure of AI would most likely be sought under the tort of negligence.

The claimant would need to establish that the defendant (whoever that may be) owed a duty of care, breached that duty, and that the breach caused injury to the claimant. Ultimately, liability for negligence would lie with the person, persons or entities who caused the damage or defect, or who might have foreseen the product being used in the way it was. If the damage results from behaviour by the AI system that was wholly unforeseeable, this could be problematic for negligence claims, as a lack of foreseeability could result in nobody at all being liable.

As the law currently stands, the user of an AI system is less likely to be at fault than the manufacturer. Whether a manufacturer is liable will depend on the relevant industry standards of care and whether the specifications were appropriate in light of those standards. There may be further debates as to whether and to what extent fault may lie with the programmer, the designer or the expert who provided the knowledge to the AI system. Contributory negligence may also be a factor.

Where an AI system is fully autonomous or far removed from human decision making, it will become more difficult to establish proximity and foreseeability. Such cases are likely to involve complicated and competing expert evidence as to whether the AI system functioned as it should have done. The first examples of AI cases are beginning to appear: a class action was filed in early 2017 against Tesla over its vehicles' Autopilot system, claiming that it contains inoperative safety features and faulty enhancements. In our next article, we will consider whether the AI system itself can be liable in the event that it fails to perform.


Authors

Lee Gluyas
Partner
London
Stefanie Day