Tech firms and child protection organizations will be granted authority to evaluate whether artificial intelligence tools can produce child exploitation material under new UK legislation.
The announcement coincided with figures from a child protection watchdog showing that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, the authorities will permit approved AI developers and child protection organizations to inspect AI systems (the underlying technology for conversational AI and image generators) and ensure they have adequate safeguards to prevent them from producing images of child exploitation.
"Ultimately about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now identify the risk in AI models early."
The changes are needed because it is against the law to create or possess CSAM, which means AI developers and others cannot generate such content even as part of a testing process. Until now, authorities had to wait until AI-generated CSAM appeared online before they could address it. This legislation aims to avert that problem by making it possible to stop the creation of such images at the source.
The government is introducing the changes as amendments to criminal justice legislation, which also bans owning, creating or sharing AI systems designed to create child sexual abuse material.
Recently, the minister visited Childline's London headquarters and listened to a mock-up of a call to counsellors involving a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of themselves, created using AI.
"When I learn about young people facing blackmail online, it is a source of extreme anger in me and rightful anger amongst families," he said.
A prominent online safety foundation said that instances of AI-generated exploitation material (such as online pages that may contain numerous files) had increased significantly so far this year.
Instances of category A material, the most serious form of exploitation, increased from 2,621 images or videos to 3,086.
The law change could "constitute a vital step to ensure AI products are secure before they are released," commented the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it so victims can be targeted all over again with just a simple actions, giving criminals the ability to make possibly endless amounts of sophisticated, lifelike exploitative content," she added. "Material which further commodifies victims' trauma, and makes children, particularly girls, less safe on and off line."
The children's helpline also published details of counselling sessions in which AI was mentioned.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, conversational AI and related terms were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.