AI laws and regulation in the United Kingdom
Risk Rating
Medium (The sectoral approach means regulators will take jurisdiction and enforce, but there may be a lack of consistency in how or when they enforce.)
AI regulation in your jurisdiction
There is currently no dedicated AI law in force in the United Kingdom.
Existing Regulatory Frameworks Applicable to AI
Whilst there is no specific AI regulation, the UK’s existing legal framework applies to AI.
This includes, for example:
AI and data protection: The UK GDPR and the Data Protection Act 2018 govern the processing of personal data, which includes processing by or in connection with the development and use of AI systems and automated decision-making.
AI and financial services: Firms using AI in the financial services sector must comply with existing financial services regulation, including in respect of: consumer protection; operational resilience; data quality, governance, and security; AI model risk management, development, validation and governance; and overall governance (e.g. risk control, financial crime, outsourcing, audit and record-keeping, and accountability). In addition to financial services sector statutory law, relevant rules are set out in the Financial Conduct Authority (FCA) Handbook and the Prudential Regulation Authority (PRA) Rulebook.
AI and cybersecurity: Use of AI in IT systems operated by relevant digital service providers and operators of certain essential services in the UK will need to comply with the Network and Information Systems Regulations 2018 (SI 2018/506) (NIS Regulations). Additional sector-specific security laws may also apply. For example, deployment of AI in the telecoms sector may be subject to specific security requirements under the Communications Act 2003, and deployment in the financial services sector will need to take into account the applicable principles and rules set out in the FCA Handbook. Note that the UK government introduced a new Cyber Security and Resilience Bill to Parliament on 12 November 2025, which (if enacted) will reform and add to the NIS Regulations (particularly by expanding the scope of services and service providers that will be covered by UK cybersecurity rules). The National Cyber Security Centre provides guidance on AI and cyber security.
AI and employment: Use of AI in the workplace (for example, to make decisions that affect employees or to automate tasks) will need to comply with laws such as: the Equality Act 2010 (which protects against discrimination); the Employment Rights Act 1996 (which protects against unfair dismissal); common law rules on constructive dismissal; and the Health and Safety at Work etc. Act 1974 (which protects the health and safety of people at work).
AI and medical products: Use of AI in medical devices and other digital health products will need to comply with the Medical Devices Regulations 2002 (SI 2002/618).
AI and general product liability: Producers of products that are powered by AI will need to consider their potential liability for defects under the Consumer Protection Act 1987 and for negligence under the law of tort (which may arise where the producer owes a duty to a product user to take reasonable care that the user is not harmed).
AI and online content: Providers of online services (such as websites and apps, including social media, video-sharing platforms, online forums, dating services and instant messaging) will need to ensure that their use of AI is in compliance with the Online Safety Act 2023 (OSA 2023), which includes rules that apply to algorithms used for disseminating content that may be illegal or harmful, and introduces new offences, such as sending false communications and sharing or threatening to share intimate images without consent, which are relevant to AI-generated online content such as deepfakes.
AI and IP and data rights: Developers of AI systems will need to consider their compliance with existing laws establishing rights in data and intellectual property when developing and training AI models. For example, this includes the law of confidence and the Trade Secrets (Enforcement, etc.) Regulations 2018, the Copyright, Designs and Patents Act 1988, as well as UK data protection law (if personal data is processed).
Because to date the UK has taken a sectoral, regulator-led approach to AI regulation, it is essential for producers and deployers of AI to understand and consider their compliance with sector-specific guidance issued by UK regulators (for more about this, see our answers below).
Regulatory Oversight of AI
There is no single designated AI regulator or authority in the UK. To date, the government has continued with the approach set by the previous government, which is to require regulators to develop sector-specific regulatory guidance that applies their existing regulatory framework to the deployment of AI in their sector. This approach was first set out by the previous government in its AI white paper, published in March 2023 (the AI White Paper). The previous government opted not to introduce AI-specific, cross-sector legislation (like the EU AI Act) with a single, designated AI regulator, as it believed this would hold back AI innovation and reduce the ability to respond quickly and proportionately to future technological advances. Instead, it introduced five AI principles on a non-statutory basis, to be implemented by the UK’s existing regulators.
At a government level, the Department for Science, Innovation & Technology (DSIT) takes the lead for many aspects of the government’s AI strategy. The AI Security Institute is a research organisation that sits within DSIT and was established to equip governments with a scientific understanding of the risks posed by advanced AI (see, for example, its recent Frontier AI Trends Report, published on 18 December 2025, which looks at how the world’s most advanced AI systems are evolving).
DSIT also supports and funds the AI Standards Hub, a partnership between The Alan Turing Institute, the British Standards Institution (BSI), and the National Physical Laboratory (NPL). The Hub aims to advance trustworthy and responsible AI with a focus on the role that standards can play as governance tools and innovation mechanisms.
AI Guidance, Policies, and Strategic Frameworks
As set out above, to date the UK’s approach has been to adopt a principles-based approach and require existing regulators to develop their own sector-specific guidelines for AI development and deployment. The five principles set out in the AI White Paper are: (1) Safety, security and robustness; (2) Appropriate transparency and explainability; (3) Fairness; (4) Accountability and governance; and (5) Contestability and redress.
Although the new UK government has stated that it is considering introducing AI-specific law to regulate the most powerful AI models (for more on this, please see our answer to question no. 6 below), it has thus far continued this non-statutory, principles-based approach, believing that regulators are best placed to apply rules to the use of AI in their sectors.
Because of the UK’s sector-specific approach, it is essential for producers and deployers of AI to understand and consider the policy documents and guidance issued by UK regulators in sectors that may be relevant to their production and deployment of AI.
We have set out below a selection of the relevant policy documents and guidance that have been issued by the UK’s regulators:
AI and data protection: The UK’s data protection authority, the Information Commissioner’s Office (ICO), has issued detailed guidance on how to apply UK GDPR principles to the use of AI systems, explaining decisions made with AI, and biometric recognition.
AI and financial services: The Bank of England and the PRA have issued a joint update setting out their strategic approach to AI and machine learning. The FCA has also set out its approach to AI in UK financial markets in an AI update and, before that, set out its thinking on AI and machine learning in a feedback statement (FS23/6) issued jointly with the Bank of England and the PRA in response to discussion paper DP5/22 – Artificial Intelligence and Machine Learning, which the FCA published jointly with the Bank of England in October 2022.
AI and cybersecurity: The National Cyber Security Centre (NCSC) has published guidelines for secure AI system development. These guidelines are the result of an international initiative and were published by NCSC and the US Cybersecurity and Infrastructure Security Agency (CISA) as well as agencies in a number of other countries. They are non-binding recommendations for providers of AI systems and address four key areas: (1) Secure design; (2) Secure development; (3) Secure deployment; and (4) Secure operation and maintenance.
AI and employment: In March 2025, DSIT published guidance on Responsible AI in Recruitment, which explains areas for consideration and assurance actions to be taken by organisations looking to procure and deploy AI for use in their recruitment processes.
AI and communications: The Office of Communications (Ofcom), which regulates UK communications services (including online services, broadband, telephone and mobile services), has published its strategic approach to AI. This sets out how Ofcom is supporting the safe use of artificial intelligence across the sectors it regulates. Ofcom has also published an open letter to online service providers about how the OSA 2023 applies to generative AI and chatbots, and a guide on how AI chatbots are covered by the OSA 2023.
AI and medical products: The Medicines and Healthcare products Regulatory Agency (MHRA) (the UK’s regulator for medicines, medical devices and blood transfusion) set out how it is applying the AI principles in its update on the Impact of AI on the regulation of medical products. The MHRA has also published guidance on software and AI as a medical device.
AI and energy: The Office of Gas and Electricity Markets (Ofgem), which regulates energy in Great Britain, published guidance on the use of AI in the energy sector in May 2025.
AI and competition: The Competition and Markets Authority (CMA), the UK department responsible for promoting competition and protecting consumers, published an AI strategic update in April 2024 setting out how it is ensuring that consumers, businesses and the wider UK economy benefit from AI developments. Prior to this, following its initial review of AI foundation models, the CMA published a report on AI Foundation Models in September 2023 (updated in April 2024), in which it proposed principles to guide the development and deployment of foundation models towards positive outcomes for competition and consumer protection.
AI and health and safety: The Health and Safety Executive, the regulator for health and safety in the workplace in Great Britain, has published its regulatory approach to AI.
International AI Standards and Guidelines
The UK’s five AI principles set out in the AI White Paper build on the OECD AI Principles. Part 6 of the AI White Paper sets out the government’s intention to promote interoperability and ensure that international technical standards play a role in the wider regulatory ecosystem. The AI White Paper includes an Annex which sets out factors that the UK’s regulators may wish to consider when providing guidance on and implementing the five AI principles. This includes the role of available technical standards (for example, ISO/IEC standards) to clarify regulatory guidance and support the implementation of risk treatment measures.
Forthcoming AI Legislation
New AI-specific laws are under consideration.
Powerful AI models: Shortly after taking office, the current UK government set out its legislative agenda in the King’s Speech 2024, which included an intent to establish “the appropriate legislation” to place requirements on those working to develop the most powerful AI models. The government has subsequently repeated this intent to introduce such targeted AI legislation (whilst otherwise continuing the non-statutory, principles-based approach to AI regulation), although formal proposals are yet to emerge.
AI training and copyright: The government carried out a copyright and AI consultation between 17 December 2024 and 25 February 2025 on potential changes to UK copyright law, including reform of the area of copyright and AI training. The consultation sought views on the government’s policy options to address (what the government describes as) new challenges for the UK’s copyright framework presented by the widespread use of copyright material for training AI models, and the difficulty rights holders have found in exercising their rights in this context. The policy options included introducing an exception to copyright for all text and data mining purposes, and reforming copyright law to introduce statutory transparency measures in relation to AI training to support the licensing of copyright works. During the summer of 2025, as the government sought to enact the Data (Use and Access) Act 2025 (which introduced reforms to UK data law; see further below), it faced extensive lobbying and considerable Parliamentary pressure, principally from the House of Lords. In response, the government agreed to undertake an economic impact assessment of the policy options set out in the copyright consultation and to produce a report on the use of copyright works in the development of AI systems. The government has committed to presenting the full report and economic impact assessment to Parliament before 18 March 2026.
AI and data law reform: The Data (Use and Access) Act 2025 (DUA Act) has recently introduced reforms to UK data protection law that are widely seen as AI-friendly. These include expanding the circumstances in which decisions can be made based solely on automated processing of personal data, clarifying that ‘scientific research’ can include research carried out for commercial purposes and processing for technological development, and allowing broad consent to processing for the purpose of scientific research. These reforms commenced on 5 February 2026.
AI and deepfakes: The DUA Act amends the Sexual Offences Act 2003 (SOA 2003) to introduce new offences for creating, or requesting the creation of, a purported intimate image of an adult without their consent. This amendment came into effect on 6 February 2026. On 19 December 2025, the Home Office published its Action Plan for halving violence against women and girls within a decade, which sets out the government’s intention to ban nudification apps and other tools designed to create synthetic non-consensual intimate images. The Crime and Policing Bill, which is currently making its way through Parliament, proposes to amend the SOA 2003 to ban any program made for the purpose of creating indecent images of a child (so-called ‘AI image generators’), as well as the supply of tools (such as AI-powered deepfake generators) used to create non-consensual intimate images.
AI and online content: There are concerns that the OSA 2023 does not adequately regulate all forms of AI chatbot. On 3 December 2025, Liz Kendall MP, the government minister with overall responsibility for DSIT, appeared before the House of Commons Science, Innovation and Technology Committee and said that she had tasked officials with examining whether there are gaps in the legislation and that, if gaps are found, the government will introduce legislation to ensure that AI chatbots are covered.
AI and medical devices: The government is considering new proposals for regulating medical devices that use AI and, to support this goal, has established the National Commission into the Regulation of AI in Healthcare, a cross-sector body of experts that will examine how AI should be regulated and make recommendations to the MHRA, to be published in 2026. To inform the National Commission’s recommendations, the MHRA launched a call for evidence on the regulation of AI in the healthcare sector on 18 December 2025; the call for evidence closed on 2 February 2026.
Useful links
Thought leadership:
- UK Supreme Court Ruling on AI Patentability: Key Insights
- UK weighs up tighter rules for AI chatbots amid child safety concerns
- AI Assurance: Building Trust in Responsible AI Systems in the UK
- Key Takeaways from our AI in Financial Services Panel Event
- UK government publishes delayed update on AI policy
- AI in financial services – Autumn 2023 update
- UK Government sets out proposals for a new AI Rulebook
- Explaining AI decisions: The UK ICO publishes new guidance
- UK Regulators continue to scrutinise AI: The FCA and the ICO announce new AI initiatives
- Explaining AI in six steps: The ICO consults on new draft guidance
- Assessing personal data risks in AI systems