British Tech Firms and Child Protection Officials to Test AI's Ability to Create Abuse Content

Technology companies and child protection agencies will be granted permission to evaluate whether artificial intelligence systems can generate child abuse images under recently introduced British laws.

Substantial Rise in AI-Generated Harmful Content

The announcement came as a protection watchdog revealed that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, the government will allow approved AI developers and child protection groups to inspect AI systems – the underlying technology for conversational AI and image generators – and verify they have sufficient protective measures to stop them from producing images of child exploitation.

The minister for AI and online safety said the measures were "ultimately about stopping exploitation before it happens", adding: "Specialists, under rigorous protocols, can now detect the danger in AI systems early."

Tackling Regulatory Challenges

The changes address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and other parties could not create such content as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM had been uploaded online before acting on it.

This law aims to prevent that problem by enabling experts to halt the creation of such material at source.

Legal Framework

The changes are being introduced by the authorities as modifications to the crime and policing bill, which is also implementing a prohibition on possessing, creating or sharing AI systems designed to create exploitative content.

Practical Impact

This week, the minister toured the London base of a children's helpline and listened to a mock-up call to advisors involving a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about young people experiencing blackmail online, it is a source of intense frustration for me and justified anger amongst parents," he said.

Alarming Data

A prominent internet monitoring organization reported that cases of AI-generated abuse material – such as online pages that may contain multiple images – had significantly increased so far this year.

Instances of category A content – the gravest form of abuse – increased from 2,621 visual files to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
  • Depictions of infants and toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a crucial step to ensure AI products are safe before they are launched," stated the head of the online safety organization.

"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few simple actions, giving offenders the ability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies victims' suffering, and makes young people, particularly girls, less safe both online and offline."

Counseling Session Data

The children's helpline also published details of support sessions where AI has been referenced. AI-related risks discussed in the conversations include:

  • Employing AI to rate weight, body and appearance
  • AI assistants dissuading children from consulting trusted adults about harm
  • Facing harassment online with AI-generated content
  • Online extortion using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.

Half of the AI references in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy applications.

Ana Noble

A financial strategist with over a decade of experience in wealth management and personal finance coaching.