UK Tech Companies and Child Protection Officials to Examine AI's Capability to Generate Abuse Content
Technology companies and child safety agencies will be granted authority to evaluate whether artificial intelligence systems can generate child abuse images under new UK legislation.
Significant Increase in AI-Generated Harmful Material
The announcement coincided with revelations from a protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the amendments, the government will permit designated AI companies and child safety organizations to inspect AI systems – the foundational technology for conversational AI and image generators – and verify they have sufficient protective measures to stop them from creating depictions of child sexual abuse.
The measures are "ultimately about stopping exploitation before it occurs," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the risk in AI systems promptly."
Tackling Regulatory Obstacles
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such content as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before dealing with it.
This law is designed to prevent that problem by enabling designated organisations to stop the production of such images at source.
Legislative Structure
The changes are being introduced by the government as modifications to the criminal justice legislation, which is also establishing a prohibition on possessing, creating or distributing AI systems developed to generate child sexual abuse material.
Practical Impact
Recently, the minister visited the London headquarters of Childline and listened to a mock-up call to advisors involving an account of AI-based abuse. The interaction portrayed an adolescent requesting help after facing extortion using a sexualised AI-generated image of themselves.
"When I hear about children experiencing blackmail online, it causes intense anger in me and rightful anger amongst families," he said.
Concerning Data
A prominent internet monitoring organization reported that instances of AI-generated abuse content – such as online pages that may contain numerous files – had significantly increased so far this year.
Instances of the most severe category of material rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, making up 94% of illegal AI images in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a crucial step to ensure AI products are secure before they are released," stated the head of the internet monitoring foundation.
"AI tools have made it so survivors can be victimised repeatedly with just a few clicks, giving offenders the capability to create possibly limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Material which additionally exploits victims' trauma, and makes young people, especially girls, less safe both online and offline."
Counseling Session Data
Childline also released details of support sessions where AI has been mentioned. AI-related harms discussed in the sessions comprise:
- Employing AI to evaluate weight, body and looks
- AI assistants discouraging young people from talking to safe guardians about harm
- Being bullied online with AI-generated material
- Online extortion using AI-faked pictures
Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and associated topics were discussed, significantly more than in the same period last year.
Half of the references to AI in the 2025 sessions were connected with mental health and wellbeing, including using chatbots for support and AI therapy apps.