UK Tech Firms and Child Safety Agencies to Test AI's Capability to Generate Exploitation Content

Tech firms and child protection agencies will be granted permission to assess whether artificial intelligence tools can generate child exploitation material under new UK laws.

Substantial Rise in AI-Generated Harmful Material

The announcement came as findings from a child protection monitoring body showed that reports of AI-generated CSAM have risen sharply in the last twelve months, from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will permit approved AI developers and child safety organizations to examine AI systems – the foundational systems behind conversational AI and image generators – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

"Ultimately this is about preventing abuse before it happens," said the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now detect the danger in AI models early."

Tackling Regulatory Challenges

The amendments have been introduced because it is illegal to create and possess CSAM, which means that AI developers and others cannot generate such content as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before acting. The legislation is designed to prevent that problem by allowing them to halt the production of such material at source.

Legislative Framework

The amendments are being introduced by the government as changes to the criminal justice legislation, which also implements a ban on possessing, producing or distributing AI models designed to create exploitative content.

Practical Impact

This week, the minister visited the London headquarters of Childline and heard a mock-up of a call to counsellors featuring an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.

"When I learn about young people facing extortion online, it is a source of intense frustration for me and rightful concern amongst parents," he said.

Alarming Statistics

A leading internet monitoring organization said that instances of AI-generated abuse material – such as webpages that may contain numerous images – had more than doubled so far this year. Its findings included:

- Cases of the most severe material – the most serious form of exploitation – increased from 2,621 visual files to 3,086
- Girls were overwhelmingly victimized, accounting for 94% of illegal AI images in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a crucial step to ensure AI tools are secure before they are launched," said the head of the online safety organization.

"AI tools have made it possible for survivors to be victimised all over again with just a few clicks, giving criminals the ability to create possibly endless quantities of advanced, photorealistic exploitative content," she continued. "Material which further commodifies victims' suffering, and makes children, especially girls, less safe on and offline."

Counseling Session Data

The children's helpline also released details of counselling sessions in which AI has been mentioned.
AI-related harms mentioned in the sessions include:

- Using AI to evaluate weight, physique and looks
- Chatbots dissuading young people from talking to trusted guardians about harm
- Facing harassment online with AI-generated content
- Digital blackmail using AI-faked images

Between April and September this year, Childline conducted 367 support sessions in which AI, conversational AI and related topics were discussed, four times as many as in the equivalent period last year. Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy applications.