Under a new legal framework, UK tech companies and child protection organizations will be given the power to test whether artificial intelligence (AI) tools can be made to generate child sexual abuse images. According to a safety watchdog, reports of AI-generated child sexual abuse material (CSAM) doubled in 2025 compared with 2024, rising from 199 to 426. The aim of the legal change is to let AI developers identify and address these risks before such images are ever generated.

Kanishka Narayan, the UK government's Minister for AI and Online Safety, said the measure is intended "to stop abuse before it happens." Under the new law, designated AI companies and child safety organizations will be allowed to examine AI models such as chatbots and image generators, ensuring these technologies have safeguards in place to prevent them from generating child sexual abuse images.
The change was introduced as an amendment to the Crime and Policing Act, which also bans possessing, creating, or distributing AI models designed to generate child sexual abuse material. During a visit to Childline, Minister Narayan listened to a simulated helpline call about AI-enabled abuse, in which a teenager sought help over sexual blackmail involving AI-generated images.
The Internet Watch Foundation, an online safety watchdog, noted that reports of AI-generated abuse material have doubled this year. Category A material, the most severe classification, rose from 2,621 images or videos in 2024 to 3,086 in 2025. Girls were depicted in 94% of the illegal AI imagery, and depictions of children aged zero to two increased from five in 2024 to 92 in 2025.
Childline also released figures on AI-related counseling, illustrating how AI is affecting teenagers' lives, from using AI to judge their weight and appearance to chatbots discouraging children from talking to trusted adults about abuse. Between April and September this year, Childline handled 367 AI-related inquiries, four times the number in the same period last year, with half of them concerning mental health and wellbeing.
Key points:
🛡️ The new law allows approved organizations to test whether AI tools can generate child sexual abuse images, so risks can be addressed before crimes occur.
📈 In 2025, reports of AI-generated child sexual abuse material rose to 426, double the 2024 figure.
👧 Girls are the main group depicted, appearing in 94% of the material, and depictions of the very youngest children (ages 0–2) have also risen sharply.
