Publication 02 Dec 2025 · United Kingdom

UK proposes new law to combat AI-generated Child Sexual Abuse Material (CSAM)

2 min read

AI is powering incredible innovation, but it’s also enabling harm. The UK government has announced new legislation to tackle the growing threat of AI-generated child sexual abuse material (CSAM). Under the proposals, AI developers and government-designated child-protection organisations would be able to test and scrutinise AI models for CSAM-related vulnerabilities, ensuring child-safety protections are built in from the start.

Currently, creating or possessing CSAM is a criminal offence in the UK, regardless of intent. This new framework introduces proactive testing to prevent abuse before it happens. The news follows alarming findings from the Internet Watch Foundation: reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025, with a particularly sharp surge in depictions of infants aged 0–2 (rising from 5 to 92 cases). Experts warn these numbers barely scratch the surface, as “nudify” apps and deepfake tools make harmful content easier to produce than ever.

Commentary

The announcement marks a positive step forward in the battle against AI-generated CSAM. Testing should help flush out loopholes in existing safeguards.

However, the proposals fail to address the critical issue of identifying and removing CSAM from training datasets. For example, the LAION-5B dataset, used to train models such as Stable Diffusion, was found by Stanford researchers to contain “many hundreds of instances of known CSAM”. Ensuring CSAM is excluded from training data would reduce the risk of such material being generated, because AI models have great difficulty producing outputs that are not represented in their training data; this is why, until recently, image models could not produce a glass of red wine filled to the brim. Excluding CSAM before training is of course preferable, but recent technological developments in “unlearning” content from pre-trained models offer a potential practical route to preventing AI-generated CSAM from persisting in models that have already been trained on such material.
