“National AI Strategy”: Step change for the AI economy in the UK
On 22 September 2021, the Secretary of State for Digital, Culture, Media and Sport published the National AI Strategy (the “Strategy”), setting out the UK’s ambition to become a world leader in AI and a strategic vision to guide AI investment, innovation and governance over the next 10 years. The Strategy follows a stream of government publications on the subject and is based on three AI pillars: long-term investment, benefits for all sectors and regions, and effective governance.
The road to the AI Strategy
In late 2020 the House of Lords’ Liaison Committee (the “HL Committee”) published a report, “AI in the UK: No Room for Complacency” – a summary of which can be found here. The HL Committee’s report was one of the first key publications to foreshadow the future of AI legislation in the UK. In January 2021, the AI Council published the AI Roadmap – a summary of which can be found here – setting out 16 recommendations to the government on the strategic direction for AI. The AI Council was set up in 2019 to engage with leadership in the AI ecosystem and connect with stakeholders across academia, industry and the public sector, and it ultimately played a central role in shaping the Strategy. The HL Committee report acknowledged the important role of industry bodies in taking the lead on AI governance and public trust. The Strategy reinforces this by committing to strengthening the role of the AI Council, ensuring that it continues to provide expert advice to the government and leadership in the AI ecosystem.
The UK has kept up the pace: within a matter of months, the Strategy has arrived. The Strategy feeds into the government’s intention to build “the most pro-innovation regulatory environment in the world”. In 2020 the UK ranked third worldwide for private investment into AI companies, behind only the USA and China. The government has invested more than £2.3 billion in AI since 2014, and the Strategy outlines the UK’s vision to build on this progress in the race to deliver economic and technological transformation. The Strategy reflects not just the UK’s ambition to use AI for regional prosperity and across sectors, but also to play a key part in addressing global challenges: achieving net zero, health resilience and environmental sustainability.
Step change for AI in the UK: The 10-year agenda
The government acknowledges in the Strategy both the unique challenges and the opportunities posed by AI. The Strategy notes that no definition of AI is suitable for every scenario, and ultimately settles on the following: “Machines that perform tasks normally requiring human intelligence, especially when the machines learn from data how to do those tasks”.
The key challenges
The Strategy notes a number of key challenges posed by AI for the next 10 years which it seeks to address, including the following:
- Regulatory: The autonomy of a developing AI system raises questions for liability, from risk and safety, to ownership of creative content. There are also questions of transparency and bias which arise from decisions made by AI.
- Infrastructure: Infrastructural requirements for AI services are greater than for cloud or software-as-a-service systems. Access to expensive high-performance computing and/or large data sets is required.
- Skills: The product journey of AI systems is longer-term and more layered than that of other technologies, starting with fundamental research and development (“R&D”).
- Diversity: As AI begins to play an increasingly executive function in society, there is a moral and social need to ensure both that the new AI economy allows people of all backgrounds to participate and that AI itself remains representative.
The Three Core Pillars
The agenda for the next ten years is shaped by the three core pillars of the Strategy, under which three key goals have been set:
The Goals:
- To experience significant growth in the number and type of AI systems discovered in the UK.
- To benefit from the highest amount of economic and productivity growth due to AI.
- To establish the most trusted and pro-innovation system for AI governance in the world.
The Three Core Pillars:
(1) Investing in the long-term needs of the AI ecosystem.
(2) Ensuring AI benefits all sectors and regions.
(3) Governing AI effectively.
Each pillar is divided into short-term (3 months) and long-term (3-12 months) outputs. The Strategy introduces numerous policies and commitments to be implemented, and we have summarised below a selection of those that anticipate tangible outputs in the next 3-12 months.
(1) Investing in the long-term needs of the AI ecosystem
The first pillar reflects the government’s recognition of the importance of investing in the critical inputs that underpin AI innovation, such “inputs” often being people. The Strategy highlights the gap between the demand for AI and the supply of new AI skills. For instance, last year there was a 16% increase in AI and Data Science job vacancies, but 69% of those were “hard to fill”. It appears that the root of the problem lies in training and recruitment for AI jobs. A survey conducted by the AI Council and The Alan Turing Institute revealed that 81% of respondents agreed that there were significant barriers to recruiting and retaining talent in their field. With two thirds of firms expecting demand for AI skills in their organisation to increase in the next 12 months, the gap will only widen. Further, the issue of recruitment is not just a question of numbers, but also of diversity: over half of firms said none of their AI employees were female, and 40% of firms had no employees from an ethnic minority background.
Short term (3 months):
Long term (3-12 months):
(2) Ensuring AI benefits all sectors and regions
The second pillar set out in the Strategy appears to build particularly on the AI Roadmap’s pragmatic approach to the national, cross-sector adoption of AI advances. The Strategy places a strong emphasis on ensuring that the transition to an AI-enabled economy extends across all sectors and regions in the UK. Research conducted by EY on the adoption and use of advanced technologies by UK organisations showed that AI remains an emerging technology for private and third sector organisations. Whilst 27% of UK organisations have implemented AI technologies into business processes, 33% of organisations have not adopted AI and are not planning to. The Strategy sets out to address this by recognising and supporting a wide breadth of AI businesses, better understanding how AI can be adopted in organisations, and leveraging the public sector to increase demand for AI across markets.
Short term (3 months):
Long term (3-12 months):
(3) Governing AI effectively
The strong cross-sector approach to AI in pillar 2 also features notably in pillar 3, which deals with the Strategy’s approach to the national governance of AI.
The Strategy notes the existing cross-sector regulations which already capture aspects of the development of AI across data protection, competition law and financial regulation. According to the Strategy, as the use of AI has increased, the UK has reviewed and adapted the regulatory environment. For example, “Data: A New Direction”, published earlier this month, invited views on the role of data protection within the broader context of AI governance. The Strategy also notes that, having embraced a strong sector-based approach since 2018, it is now time to decide whether there is a case for greater cross-cutting AI regulation or greater consistency across regulated sectors. Inconsistent approaches across sectors could introduce contradictory compliance requirements and uncertainty as to where responsibility lies. AI regulation could become framed narrowly around prominent existing frameworks (such as data protection). Further, the growing international activity addressing AI across sectors could overtake a national effort to build a consistent approach.
What is not entirely clear from the Strategy are the obligations (if any) that may be imposed on organisations that develop and use AI. However, considering the general approach adopted in the government’s July 2021 policy paper “Digital Regulation: Driving growth and unlocking innovation” and its recent proposals on reforming data protection law, it could be assumed that any future legislation will likely be principles-based, seek to avoid “box-ticking exercises”, and place the emphasis on companies to determine how they comply in practice. If this assumption is correct, there is the potential for significant divergence from the approach being taken in the EU in relation to the regulation of AI.
Short term (3 months):
Long term (3-12 months):
Comparison with national AI strategies of other countries
The Strategy clearly makes economic growth a priority. This focus on growth is consistent with other countries’ national strategies, with the possible exception of Canada, which has a very strong focus on the UN Sustainable Development Goals.
However, there are some features that differentiate the Strategy from the approaches adopted by other countries, such as its aim to develop the UK’s compute infrastructure for AI applications. Despite the significance of such infrastructure in supporting AI, the only other country to address compute capacity in its national AI strategy is the USA.
Another distinguishing feature of the UK approach is its focus on international technical standards as a mechanism for governing AI. While other jurisdictions are looking to develop AI legislation, the UK (along with Australia) appears to be more focused on influencing technical standards for AI systems, such as requirements for documentation and reliability testing.
Next steps
The Strategy sets out a long list of promises and intentions for the next 10 years, aimed at positioning the UK as a world leader in the AI economy. It appears the government will look very closely at introducing cross-cutting AI regulation. Whilst some of the policies are time-bound commitments, such as publishing a White Paper by early 2022 and reporting back on the state of funding for innovative businesses by Autumn 2022, the majority are not. To avoid the Strategy reading as a mere “to-do list”, the question becomes: what comes next to execute these commitments? For example, it is not clear from the Strategy what level of funding will be allocated to achieve its ambitious goals. The spending review, to be published on 27 October and to outline public spending plans for the next three years, may make this clearer. We can also expect a supplementary document later this year setting out how the government intends to use measurable plans to execute the first stage of the Strategy. That plan is expected to set out the mechanisms the government will put in place to monitor and assess progress over the next 10 years, including a set of qualitative indicators to measure the hard-to-define impacts AI will have on the economy and society.
The authors would like to thank Jessica Wilkinson, trainee solicitor, for her assistance in writing this article.