Governments and regulatory bodies around the world are beginning to enact the legislation and regulatory measures needed to support the further development and deployment of AI systems. There is a clear appetite for national strategies on AI liability, and the countries that are first to implement coherent regulatory frameworks for the use of AI will be the first to benefit, both commercially and socially, from the development of AI systems and devices.
This article identifies developments around the globe relating to the introduction of legal systems and processes for determining liability when AI fails to perform. Perhaps unsurprisingly, given their high-profile nature, much of the focus to date has been on the legislation needed to enable the introduction of autonomous vehicles, but many countries are also turning their attention to the broader use of AI.
In Germany, Poland and the UK, established legal liability regimes currently determine liability for damage caused by malfunctioning AI devices. However, efforts are underway to create new laws and update existing laws to deal with rapid AI developments.
In Germany, the increasing use of AI is reflected in changes to existing laws. For example, in 2017, the German Road Traffic Act was expanded to include provisions for ‘highly automated’ and ‘fully automated’ vehicles. In addition, an Artificial Intelligence Enquiry Commission was convened in June 2018 and a Data Ethics Commission in September 2018, both of which will deal with legal considerations in respect of AI in the future.
In Poland, the Ministry of Investment and Development and the Ministry of Enterprise and Technology have established a Working Group, which aims to introduce a legal framework for the functioning of robots and other Internet of Things systems equipped with AI.
The UK government is keen to position the UK as a major player in the development of AI systems and recognises the importance of a legal and regulatory framework to facilitate this objective. The Automated and Electric Vehicles Act 2018 received Royal Assent in July 2018, paving the way for the introduction of insurance policies and procedures specifically designed to respond to liability issues arising from the use of autonomous vehicles. We can expect future measures relating to the deployment of AI in other sectors.
United States of America
In the US, there is a desire for a clear national strategy on AI. In the past two years, various bills have been introduced in the House of Representatives and considered by relevant subcommittees. The AV START Act is focused on the operation of driverless cars; the FUTURE of Artificial Intelligence Act directs the Department of Commerce to establish a Federal Advisory Committee on the development and implementation of AI; and the SELF DRIVE Act regulates the testing and deployment of automated vehicles.
There is no specific legislation governing the use of AI in Asia, but efforts are underway to better understand and address the legal and ethical issues as countries across the region boost their AI investment.
In Singapore, for example, the government has recently convened an Advisory Council on the Ethical Use of AI and Data, comprising experts in AI and big data from local and international companies, as well as academics and consumer advocates. Singapore’s Ministry of Transport has set up a Committee on Autonomous Road Transport, which will look into regulating the use of driverless cars in the near future. Singapore’s privacy regulator, the Personal Data Protection Commission, recently published a discussion paper proposing a potential governance framework for the use of AI and data across various industries.
In Japan, the Ministry of Economy, Trade and Industry is looking to issue guidelines this year to address issues such as legal liability and user rights.
In China, which has the stated objective of being the world leader in AI, promoting the AI sector has been elevated to a level of national importance and given top priority in the nation’s 13th Five-Year Plan (2016–2020). The legal framework governing the development and use of AI, and addressing issues of liability in the event of AI failure, will need to be considered as a priority.
Governments and regulatory bodies are beginning to address the issues of AI liability and to enact the legislation and regulatory measures required to facilitate the further development and implementation of AI systems. Governments will need to: (1) consider the impact AI will have on their economies and communities; (2) engage in the liability debate; and (3) move quickly to update their legal frameworks so that AI can develop safely in an innovative environment. The countries that respond to these challenges quickly and effectively will be the ones in which AI and AI developers thrive, enabling those countries to take the lead in the development of AI technology.