Publication 09 Dec 2019 · United Kingdom

Explaining AI in six steps: The ICO consults on new draft guidance

CMS DigitalBytes
On 2 December 2019 the ICO published its first draft regulatory guidance on the use of AI, entitled 'Explaining decisions made with AI'. It was created by the ICO in conjunction with The Alan Turing Institute, and is open for consultation until 24 January 2020.

The ICO says that the aim of the guidance is to help organisations explain how AI-related decisions are made to those affected by them (the 'explainability' of AI systems has been the subject matter of Project ExplAIn, a collaboration between the ICO and The Alan Turing Institute).

The draft guidance is based on four key principles (which have their origins in the GDPR) that organisations should think about when developing AI systems:

  • Be transparent
  • Be accountable
  • Consider context
  • Reflect on impact

The guidance is not short (c.160 pages) and is divided into three parts:

  • The basics of explaining AI
  • Explaining AI in practice
  • What explaining AI means for your organisation

Part 1 (The basics of explaining AI) covers some of the basic concepts (e.g. what is AI? what is an AI-assisted decision?) and the legal framework (e.g. the GDPR and the Data Protection Act 2018). This part of the draft guidance proposes six 'main' types of explanation that the ICO and The Alan Turing Institute have identified for explaining AI decisions: rationale explanation, responsibility explanation, data explanation, fairness explanation, safety and performance explanation, and impact explanation.

Part 2 (Explaining AI in practice) - the lengthiest of the three parts - is practical and more technical in nature. It provides guidance on how you might go about providing meaningful information about the logic of your AI system, including examples that apply the six main types of explanation introduced in Part 1.

Part 3 (What explaining AI means for your organisation) focusses on the various roles, policies, procedures and documentation that organisations should consider implementing to ensure that they are in a position to provide meaningful explanations about their AI systems. This part of the draft guidance covers the role of the 'AI development team' (which includes the people involved with inputting data into the AI system, with building, training and optimising the models that will be deployed in the AI system, and with testing the AI system) as well as the Data Protection Officer (if one is designated) and other key decision makers within an organisation.

The ICO blog post announcing the opening of this consultation states that real-world applicability is at the centre of its guidance. It will be interesting to see what sort of feedback the ICO receives, in particular from those who are already deploying AI systems.

"We aim to help organisations explain how AI-related decisions are made to those affected by them."

ICO and The Alan Turing Institute open consultation on first piece of AI guidance

The content above was originally posted on CMS DigitalBytes - CMS lawyers sharing comment and commentary on all things tech.
