
Future Facing Disputes - Is there a future for robo-directors?

‘Robo-directors’ and ‘Robo-advisors’ - liability for bad decisions based on AI technologies

AI technologies are increasingly influencing the decisions we make in our private and professional lives. Internet shopping and content streaming sites recommend products and films based on past behaviour, steering consumers’ decisions. Social media sites present content determined by the user’s previous activity and interests, in many cases shaping the user’s outlook and opinions. In professional spheres, machine-learning technologies are used to assist decision-making: for example, by processing very large amounts of data, AI technologies assist in determining investment decisions and diagnosing medical conditions.

A future for ‘robo-directors’?


There has been some speculation that the use of AI technologies in a professional capacity could lead to AI programmes formally carrying out the role traditionally performed by company directors, i.e. making decisions relating to the management of a business as opposed to providing information which assists decision-making.

Indeed, in 2014 a Hong Kong venture capital firm, Deep Knowledge Ventures, appointed an AI programme named VITAL (Validating Investment Tool for Advanced Life Sciences) to assist the other (human) directors with investment decisions (whether in a voting or a purely advisory capacity is unclear). This ‘quasi’ appointment was considered revolutionary by some, a publicity stunt by others. Whatever the true position in that instance, we should not expect to see AI technologies appointed as directors in the UK (or indeed other jurisdictions) in the near future, as they lack the legal capacity necessary to act as directors. However, we can expect to see much greater use of AI technologies in assisting (human) decision-makers.

If a director makes a decision on behalf of a company in reliance on information or advice generated by an AI programme and that decision results in loss to a third party, the company may face a legal claim by that third party. For example, an asset management fund may incur losses to its investors’ portfolios if it carries out a transaction in reliance on inaccurate forecasts generated by an AI programme. The investors may seek to recover their losses from the company. There may also be instances where a director personally faces legal proceedings as a consequence of an incorrect decision, for example proceedings by shareholders if the decision damages the company’s market standing or is considered not to be in the company’s interests.

In those circumstances, could the company or the directors as users of the AI technology attempt to blame the technology and seek to pass on the liability?

Liability when AI fails to perform


This is a commonly asked question: who is liable when AI technology does not perform as expected? It is a question which remains open in the UK. Will liability lie with the individual or company that produced the AI technology, the individual or company that marketed it, or someone else? Will it depend on whether the technology was used in the manner the producer intended?

The position is further complicated by the fact that certain types of AI technology ‘learn’ when used, absorbing and distilling information and refining the decisions or advice they generate to take account of that information. After repeated use, a point may come when even the creator cannot explain why the technology has made a particular decision. In normal circumstances, the more a machine-learning technology is used and the more relevant information it ingests, the more accurate its output. However, if incorrect data is input, incorrect output will follow, especially during the early stages of use: at that point, even a small quantity of inaccurate data amounts to a large proportion of the overall volume and so has a disproportionate impact on the AI’s analysis. Could liability lie with the individuals or entities who input the incorrect data, for example where data entry is outsourced?
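The disproportionate effect of early-stage data errors can be made concrete with a toy calculation. The Python sketch below is purely illustrative and assumes nothing about any real AI system: it uses a trivial stand-in ‘model’ (the average of the data it has seen) to show how the same five bad data points distort the result far more when the dataset is small.

```python
import random

random.seed(0)

TRUE_VALUE = 100.0        # the quantity the toy 'model' is trying to learn
BAD_POINTS = [500.0] * 5  # a fixed handful of grossly incorrect inputs

for n_good in (10, 100, 10_000):
    # accurate observations, scattered closely around the true value
    good = [random.gauss(TRUE_VALUE, 5.0) for _ in range(n_good)]
    data = good + BAD_POINTS

    estimate = sum(data) / len(data)         # the 'model': a simple mean
    bad_share = len(BAD_POINTS) / len(data)  # bad data as a share of the whole

    print(f"{n_good:>6} good points | bad-data share {bad_share:6.2%} | "
          f"estimate {estimate:7.2f} (true value {TRUE_VALUE})")
```

With only ten accurate points, the five bad entries form a third of the dataset and drag the estimate far from the true value; with ten thousand, their influence all but vanishes. Real machine-learning systems are vastly more complex, but the underlying proportionality is the same.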

These questions are yet to be answered by the Courts. Of course, each situation will be heavily dependent on its facts, and there are likely to be instances where an incorrect decision made in reliance on an AI programme can be traced back to errors in the production or programming of the AI technology, or in the data input into it. However, it is not difficult to foresee circumstances where the levels of complexity make it impossible to determine why an AI programme steered a user into making a wrong decision. Furthermore, a Court may have limited sympathy for directors of a company who place unquestioning reliance on AI technology if decisions they make on behalf of the company are found to be imprudent. It is possible the Courts will take the view that those using AI technology do so at their peril, and that a degree of the risk (or the entirety of it) lies with them.

Mitigating risk


Whilst the founder of Deep Knowledge Ventures may be correct in his view that by 2027 AI technology will be capable of making many decisions without any human support, the reality is that human users will be answerable for adverse consequences of a decision based on AI technology, and they or the company may be liable for losses. This is particularly important in the case of company directors as they can in certain circumstances face personal liability for decisions made on behalf of a company, even if the use of the AI technology and interpretation of its output has been delegated to others. They cannot operate on the assumption that they will be able to pass on liability to the producer of the AI, let alone the AI itself.

Directors should be mindful of the potential negative impact of a decision they take in reliance on AI technology, if it transpires that the decision was wrong. They should take steps to manage the risk of liability and mitigate the impact.

— First, they should ensure they are able to demonstrate that a decision which may be called into question was in fact a well-founded and sensible decision when it was made. Board minutes which record that the output from the AI technology was properly considered and scrutinised will assist, as will a record of other information which was taken into account in addition to the output of the AI.


— Second, where possible, directors and other users should ensure they understand the data on which the AI programme’s output was based, and that the data was correct and properly input. Whilst it is unlikely that the directors themselves will input data, it is in their interests to take responsibility for its accuracy.


— Third, they should consider whether the company’s insurance policies (and their own D&O insurance) are adequate, and satisfy themselves that the use of AI technology would not increase the risk of a claim arising from an incorrect decision falling outside the scope of cover.

These steps and other prudent measures should reduce the risks associated with the use of AI technologies, both for the company and for the directors themselves.


You can also view other future facing disputes insights papers by visiting our Future Facing Disputes page.


Authors

Lee Gluyas, Partner, London
Ben Trust, Partner, London
Stephanie Cheung, Senior Associate, Manchester
Stephanie Woods, Senior Associate, London