It may be difficult to establish who is responsible when an AI system fails to perform as intended, given the number of parties involved in creating, developing and operating the system. This makes it difficult for a consumer who suffers loss as a consequence of AI failure to determine the party against which to bring a claim for compensation. Further difficulties arise where the fault or defect stems from decisions the AI system has made itself based on machine learning principles.
Could this leave the person who has suffered loss or damage with no means of redress?
The benefits of a collective liability regime to consumers
One solution to this problem would be the statutory creation of a collective liability regime, similar to a strict liability regime. Under such an arrangement, a consumer who has suffered loss would not need to prove that a particular person or entity was at fault, but would need only to establish: (1) that he or she has suffered loss or damage; and (2) a limited causal link between the behaviour of the AI system and the loss or damage suffered. There would still be a burden of proof on the person advancing the claim, but it would be considerably lighter than that in contested legal proceedings, which carry potentially considerable cost and delay. Such a regime would obviously be most welcome to consumers and consumer groups and, as it removes the fault-based approach to establishing liability, it may also be welcome to manufacturers and programmers, since responsibility would be shared collectively rather than attributed to any single party when liability is established.
An approach of this nature is being considered by the European Parliament Committee on Legal Affairs. It is not a novel approach: it has a number of similarities with New Zealand's accident compensation scheme, which has almost eliminated personal injury litigation in that country. Strict liability regimes may be implemented as a matter of public policy to encourage the highest standards of care where protection of the public is paramount.
A strict liability regime would require funding. This could be achieved by the implementation of a levy on manufacturers of AI systems. The funds raised would be paid into a centralised pool and distributed as compensation to consumers who establish that they have suffered damage from the failure of an AI system covered by the regime. Compensation payments would be determined by reference to a scale, depending on the nature of the damage caused. If several consumers have suffered the same damage from an incident, then they should in principle all receive the same level of compensation. The system could discourage lengthy and expensive class action litigation, which has become commonplace in the US in relation to consumer claims, given that legal proceedings would not be necessary if compensation payments are fixed by reference to the nature of the injury suffered and there is no requirement to establish fault.
Whilst a strict liability regime clearly has its benefits for claimants, it would also give rise to a number of considerations, such as:
- Which types of AI systems would be subject to the regime? AI is a broadly used term, ranging from robots with almost full autonomy to single-process computer programmes which may be enhanced by machine learning but in reality diverge from their initial programming only to a very modest degree. The risk profile for consumers will differ greatly from one AI system to the next.
- How will the levies be determined? Presumably this will be dictated by the level of autonomy of the AI system and the potential damage which could be caused if it failed to operate as intended. However, the manner in which an AI system is deployed by a user may differ from the use intended by the manufacturer, and there may be circumstances where the levy paid is not proportionate to the damage which could be suffered.
- Could a strict liability regime lead to increased risk to consumers? A system of compensation which attaches no fault to AI failure could lead to a reduction in safety standards. This would need to be addressed through firm regulatory measures. The converse argument is that the risk of absolute liability could in fact lead to higher safety standards, but also make manufacturers of AI more risk-averse, potentially stifling innovation, especially amongst smaller businesses.
- Would the strict liability regime apply in all cases of damage to consumers or only those where fault cannot be established? If it can be easily established that the damage resulted from negligence in the manufacturing process and the manufacturer has the means to compensate the consumer, should the manufacturer be liable to pay damages rather than compensation coming from the centralised fund, thus avoiding unnecessary depletion of the fund?
- As is often the case with AI, there may be issues relating to jurisdiction. An AI system may be registered and the levy paid in one jurisdiction, but the damage may occur in another jurisdiction. Different regulatory standards may mean that a drone permitted in one jurisdiction is not permitted in a neighbouring jurisdiction. If it crashes and causes damage over the border, would compensation be payable? If the jurisdiction in which the damage occurs does not have a strict liability scheme, will the person who suffers damage be bound to the level of compensation payable for the type of damage suffered under the scheme of the country in which the drone is registered or will he or she be able to issue legal proceedings to claim a greater sum?
- Would a strict liability regime discourage companies from innovation? If the costs of contributing to a levy fund are high, this may discourage smaller companies from experimenting with AI technology and developing new AI products.
These and other issues will no doubt be subject to scrutiny by the authorities considering the implementation of a strict liability regime.
A properly managed strict liability regime would be an attractive way of addressing concerns about recoverability of loss or damage caused by the failure of an AI system, especially in circumstances where a number of consumers have suffered loss or damage from the same incident. However, such a system will have its limitations, and it is likely that many claims for damages arising from AI failure will still have to be conducted through the courts.