The function of an AI ethics board within an organization is to provide thought leadership and guidance on how the organization researches and exploits AI technology and associated data. This raises the fundamental question of whether a separate board is necessary, or whether an existing accountability framework, such as the compliance office, is enough.
Examples of AI ethics boards
Lucid AI, a small AI company based in Texas with around ten employees, provides a causal reasoning platform. It announced its internal AI ethics board as follows: “Each step forward for AI is a step into uncharted territory. That’s why we made it our mission to ask the complicated questions that don’t have easy answers. And give birth to something no AI company had ever created before — the Ethics Advisory Panel. So when we build something, we aren’t just asking if it’s great for our customers. We’re asking if it’s great for humanity.”
Microsoft has an internal committee AETHER (AI and ethics in engineering and research) which includes “senior leaders from across Microsoft’s engineering, research, consulting and legal organizations who focus on proactive formulation of internal policies and responses to specific issues as they arise. The AETHER committee considers and defines best practices, provides guiding principles to be used in the development and deployment of Microsoft’s AI products and solutions, and helps resolve questions related to ethical and societal implications stemming from its AI research, product and customer engagement efforts.”
According to Wikipedia, Google DeepMind has an internal AI research ethics board: “after Google’s acquisition the company established an artificial intelligence ethics board. The ethics board for AI research remains a mystery, with both Google and DeepMind declining to reveal who sits on the board.”
Google DeepMind Health has a separate AI ethics board which is public. According to the board’s website, it met for the first time in June 2016 and intends to meet four times a year, producing an annual statement outlining the board’s findings.
A suggested list of the duties of an AI ethics board:
- Establish policies and procedures regarding AI ethics
- Provide thought leadership regarding AI ethics
- Audit use and proposed use of AI and data
- Examine proposals for AI research
- Set up a training programme regarding AI ethics
- Handle any complaints regarding use of AI and/or associated data
- Make decisions about how AI is used and/or researched
- Maintain an inventory of the AI being used or researched by the organization
A key property of an AI ethics board is that it should be accountable to another body so that it can be challenged if there are any doubts regarding its behaviour. An internal AI ethics board can be configured to be accountable to the board of the overall organisation. An AI ethics board comprising external consultants, such as DeepMind Health’s ethics board, is arguably accountable to the public by virtue of the published annual reports.
Some suggested characteristics of the membership of an AI ethics board:
- a diverse membership, as one of its tasks is to check that AI is being used fairly and inclusively where appropriate.
- at least some members with technical understanding of AI who are able to understand how AI products and services work in detail and to explain this to other members of the board.
- a member who understands legal and regulatory aspects of AI and data.
- a member who understands human resource and education issues related to AI.
- representation of different AI stakeholders, such as customers, business partners, researchers, business leaders.
- a member who is good at communicating AI ethics policy to the public, customers, regulators and competitors.
- an administrative member.
We would also recommend that one or more members be well versed in fundamental aspects of ethics, such as the questions asked and answered by medical research ethics boards.
Since AI is a rapidly developing field, it will be very beneficial to collaborate, where appropriate, with others working in AI ethics, including competitors. Membership of cross-industry AI ethics groups, such as the Partnership on AI to Benefit People and Society, should be considered.
Pros and Cons

Pros:
- Develop public trust in AI products and services
- Develop employee trust within the enterprise with regard to the use of AI internally
- Participate in public consultations on AI
- Raise awareness within the organization about AI ethics
- Provide a central point for employees to go to with proposals for AI products and services and questions about AI ethics
- Prepare for possible future AI regulation
- Meets the recommendation in the Article 29 Data Protection Working Party Guidelines on Automated individual decision-making and profiling of 3 October 2017, which advises data controllers to “establish ethical review boards to assess the potential harms and benefits to society of particular applications for profiling” (see Annex 1, page 32 of that document)

Cons:
- Possible lack of public accountability
- Risk of shifting the burden of AI ethics away from others within the enterprise rather than sharing it
- Difficulty in finding appropriate members for the AI ethics board, with a good understanding of AI technology as well as of business, legal and human resource issues
- If the AI ethics board is to be publicly accountable, as in the case of DeepMind Health’s AI ethics board, confidential information will be made public, which increases security risks and gives information to competitors
- Introducing AI ethics board approval steps in product release pipelines will increase time to market, a significant problem given that AI is a rapidly developing field