UK Tech Firms and Child Safety Officials to Test AI's Ability to Generate Abuse Images

Tech firms and child safety agencies will be granted authority to evaluate whether artificial intelligence systems can generate child exploitation images under new British legislation.

Substantial Rise in AI-Generated Harmful Material

The declaration coincided with findings from a protection monitoring body showing that reports of AI-generated CSAM have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the authorities will allow approved AI developers and child protection groups to inspect AI models – the foundational systems for chatbots and visual AI tools – and verify they have sufficient protective measures to prevent them from producing images of child exploitation.

"Fundamentally, this is about stopping exploitation before it occurs," said Kanishka Narayan, adding: "Experts, under strict protocols, can now detect the risk in AI models early."

Tackling Legal Challenges

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing process. Until now, authorities could act only after AI-generated CSAM had been published online.

This law is aimed at averting that issue by helping to stop the production of those images at source.

Legal Framework

The changes are being added by the government as revisions to the criminal justice legislation, which is also implementing a prohibition on owning, producing or distributing AI models designed to generate child sexual abuse material.

Real-World Impact

This week, the official toured the London base of Childline and heard a simulated call to counsellors involving an account of AI-based abuse. The interaction portrayed a teenager seeking help after being blackmailed with an explicit deepfake of himself, created using AI.

"When I learn about young people experiencing blackmail online, it fills me with intense anger, and it causes justified anger amongst families," he said.

Alarming Data

A leading internet monitoring foundation reported that cases of AI-generated abuse content – in the form of online pages, each of which may contain numerous images – had more than doubled so far this year.

Instances of the most severe category of material – the most serious form of exploitation imagery – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, accounting for 94% of prohibited AI images in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are launched," commented the chief executive of the internet monitoring organization.

"AI tools have made it possible for survivors to be targeted all over again with just a few clicks, giving offenders the ability to make potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which additionally commodifies victims' suffering, and renders young people, especially girls, more vulnerable both online and offline."

Counseling Session Information

Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Employing AI to rate weight, physique and looks
  • AI assistants discouraging children from consulting trusted adults about harm
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 support sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and of AI therapy apps.

Michael Martin