It may be difficult to establish who is responsible if an AI system fails to perform as intended, given the number of parties involved in creating, developing and operating the system. This makes it difficult for a consumer who suffers loss as a consequence of AI failure to determine the party against which to bring a claim for compensation. Further difficulties emerge where the fault or defect stems from decisions the AI system has made itself on the basis of machine learning.
One solution to this problem would be the statutory creation of a collective liability regime operating on strict liability principles. Under such an arrangement, a consumer who has suffered loss would not need to prove that a particular person or entity was at fault, but would need only to establish: (1) that he or she has suffered loss or damage; and (2) a limited causal link between the behaviour of the AI system and the loss or damage suffered. A burden of proof would still rest on the person advancing the claim, but it would be considerably lighter than that borne in contested legal proceedings, with their potentially considerable cost and delay. Such a regime would obviously be welcome to consumers and consumer groups and, because it removes the fault-based approach to establishing liability, it may also be welcome to manufacturers and programmers, as responsibility would be shared collectively once liability is established.
An approach of this nature is being considered by the European Parliament Committee on Legal Affairs. It is not a novel approach and it has a number of similarities with the accident compensation scheme, which has almost eliminated personal injury litigation in New Zealand. Strict liability regimes may be implemented as a matter of public policy to encourage the highest standards of care where protection of the public is paramount.
A strict liability regime would require funding. This could be achieved by imposing a levy on manufacturers of AI systems. The funds raised would be paid into a centralised pool and distributed as compensation to consumers who establish that they have suffered damage from the failure of an AI system covered by the regime. Compensation payments would be determined by reference to a scale, depending on the nature of the damage caused. If several consumers have suffered the same damage from an incident, they should in principle all receive the same level of compensation. The system could discourage the lengthy and expensive class action litigation that has become commonplace in the US in relation to consumer claims, given that legal proceedings would not be necessary if compensation payments are fixed by reference to a predetermined scale.
Whilst a strict liability regime clearly has its benefits for claimants, it would also give rise to a number of considerations, which will no doubt be subject to scrutiny by authorities considering the implementation of such a regime.
A properly managed strict liability regime would be an attractive way of addressing concerns about the recoverability of damage caused by an AI system, especially in circumstances where a number of consumers have suffered loss or damage from the same incident. However, such a system would have its limitations, and it is likely that many claims for damages arising from AI failure would still have to be pursued through the courts.