UK Technology Companies and Child Protection Agencies to Examine AI's Capability to Generate Exploitation Content

Tech firms and child protection organizations will be granted authority to evaluate whether artificial intelligence tools can produce child exploitation material under new UK legislation.

Significant Increase in AI-Generated Illegal Material

The declaration coincided with findings from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the authorities will permit approved AI developers and child protection organizations to inspect AI systems – the underlying technology for conversational AI and image generators – and ensure they have adequate safeguards to prevent them from producing images of child exploitation.

"This is ultimately about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now identify risks in AI models early."

Addressing Regulatory Obstacles

The changes have been implemented because it is against the law to create and possess CSAM, meaning that AI creators and others cannot generate such content as part of a testing process. Until now, authorities had to wait until AI-generated CSAM was published online before addressing it.

This legislation aims to avert that problem by making it possible to stop the creation of those images at the source.

Legislative Framework

The changes are being introduced by the government as modifications to the criminal justice legislation, which is also implementing a ban on owning, creating or sharing AI systems designed to create child sexual abuse material.

Real-World Consequences

Recently, the minister visited the London headquarters of Childline and listened to a mock-up of a conversation with advisers featuring a report of AI-based exploitation. The interaction depicted an adolescent seeking help after facing extortion using a sexualised deepfake of themselves, created with AI.

"When I learn about young people facing blackmail online, it fills me with extreme anger, and it rightly angers families too," he said.

Concerning Data

A prominent online safety foundation stated that instances of AI-generated exploitation material – such as online pages that may contain numerous files – had significantly increased so far this year.

Instances of category A material – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a vital step to ensure AI products are secure before they are released," commented the chief executive of the internet monitoring organization.

"Artificial intelligence systems have made it so victims can be targeted all over again with just a few simple actions, giving criminals the ability to make potentially endless amounts of sophisticated, lifelike exploitative content," she added. "Material which further commodifies victims' trauma, and makes children, particularly girls, less safe on and offline."

Support Session Data

The children's helpline also published details of support sessions in which AI was referenced. AI-related risks mentioned in the sessions include:

  • Using AI to evaluate weight, body and appearance
  • Chatbots discouraging children from consulting safe adults about abuse
  • Facing harassment online with AI-generated content
  • Digital extortion using AI-manipulated pictures

Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and associated terms were discussed, significantly more than in the equivalent timeframe last year.

Fifty percent of the mentions of AI in the 2025 sessions related to psychological wellbeing and wellness, including using AI assistants for support and AI therapy apps.

Cameron Fields

Tech enthusiast and gaming expert with over a decade of experience in PC hardware reviews and community building.