UK proposes new law to combat AI-generated Child Sexual Abuse Material (CSAM)
AI is powering incredible innovation, but it’s also enabling harm. The UK government has announced new legislation to tackle the growing threat of AI-generated child sexual abuse material (CSAM). Under the proposals, AI developers and government-designated child-protection organisations would be able to test and scrutinise AI models for CSAM-related vulnerabilities, ensuring child-safety protections are built in from the start.
Creating or possessing CSAM is already a criminal offence in the UK, regardless of intent; the new framework adds proactive testing to prevent abuse before it happens. The announcement follows alarming findings from the Internet Watch Foundation: reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025, with a particularly sharp surge in depictions of infants aged 0–2 (rising from 5 to 92 cases). Experts warn that these figures barely scratch the surface, as “nudify” apps and deepfake tools make harmful content easier to produce than ever.
Commentary
The announcement marks a positive step forward in the battle against AI-generated CSAM. Proactive testing should help to flush out loopholes in existing safeguards.
However, the proposals fail to address the critical issue of identifying and removing CSAM from training datasets. For example, the LAION-5B dataset, used to train models such as Stable Diffusion, was found by Stanford researchers to contain “many hundreds of instances of known CSAM”. Excluding CSAM from training datasets would reduce the risk of CSAM being generated, because AI models have great difficulty producing outputs that are not represented in their training data. This is why, until recently, image-generation models were unable to produce a glass of red wine filled to the brim: such images are scarce in their training data. Of course, excluding CSAM before training is preferable, but recent technological developments in the “unlearning” of content from pre-trained models offer a potential practical solution for preventing AI-generated CSAM from persisting in models that have already been trained.