UK Tech Companies and Child Safety Agencies to Examine AI's Capability to Create Abuse Content
Technology companies and child protection agencies will receive authority to evaluate whether AI tools can generate child abuse material under new British laws.
Significant Increase in AI-Generated Illegal Material
The declaration came as a protection monitoring body revealed that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the government will allow designated AI companies and child safety organizations to inspect AI models – the foundational technology for conversational AI and visual AI tools – and verify they have adequate protective measures to prevent them from producing images of child exploitation.
"This is fundamentally about stopping exploitation before it happens," stated the minister for AI and online safety, adding: "Under strict protocols, specialists can now identify risks in AI systems promptly."
Addressing Regulatory Challenges
The changes have been introduced because producing and possessing CSAM is against the law, meaning that AI creators and others cannot generate such content even as part of an evaluation process. Until now, authorities could only act after AI-generated CSAM had been uploaded online.
This legislation is aimed at averting that issue by helping to stop the production of those images at source.
Legal Structure
The authorities are introducing the amendments as modifications to criminal justice legislation, which also implements a ban on owning, creating or distributing AI systems designed to generate exploitative content.
Practical Impact
Recently, the official toured the London base of a children's helpline and listened in on a simulated call to advisers featuring an account of AI-based abuse. The call depicted an adolescent requesting help after being blackmailed with a sexualised deepfake of himself, constructed using AI.
"When I hear about children facing blackmail online, it fills me with intense anger, and it causes justified concern amongst parents," he stated.
Alarming Data
A leading online safety organization reported that instances of AI-generated exploitation content – such as online pages that may include multiple images – had more than doubled so far this year.
Instances of category A material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, making up 94% of illegal AI images in 2025
- Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to ensure AI tools are secure before they are released," commented the head of the internet monitoring organization.
"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few simple actions, giving criminals the capability to create potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she added. "Material which further exploits survivors' suffering, and makes children, particularly girls, less safe both online and offline."
Counseling Interaction Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the conversations include:
- Employing AI to evaluate body size, physique and appearance
- Chatbots discouraging children from consulting safe adults about harm
- Facing harassment online with AI-generated material
- Digital extortion using AI-faked images
Between April and September this year, Childline delivered 367 support interactions in which AI, chatbots and associated terms were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.