February 2, 2025

Crackdown on AI tools used for child sexual abuse   

UK will be first country to bring in tough new laws to tackle the technology behind the creation of abusive material 


Britain is to become the first country to introduce laws tackling the use of AI tools to produce child sexual abuse images, amid warnings from law enforcement agencies of an alarming proliferation in such use of the technology. 

In an attempt to close a legal loophole that has been a major concern for police and online safety campaigners, it will become illegal to possess, create or distribute AI tools designed to generate child sexual abuse material. 

Those found guilty will face up to five years in prison. 

It will also become illegal for anyone to possess manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children, with a potential prison sentence of up to three years. 

A stringent new law targeting those who run or moderate websites designed for the sharing of images or advice to other offenders will be put in place. Extra powers will also be handed to the Border Force, which will be able to compel anyone who it suspects of posing a sexual risk to children to unlock their digital devices for inspection. 

The news follows warnings that the use of AI tools in the creation of child sexual abuse imagery has more than quadrupled in the space of a year. There were 245 confirmed reports of AI-generated child sexual abuse images last year, up from 51 in 2023, according to the Internet Watch Foundation (IWF). 

Over a 30-day period last year, it found 3,512 AI images on a single dark web site. It also identified an increasing proportion of “category A” images – the most severe kind. 

AI tools have been deployed in a variety of ways by those seeking to abuse children. It is understood that there have been cases of offenders using them to “nudify” images of real children, or to apply the faces of children to existing child sexual abuse images. 

The voices of real children and victims are also used. 

Newly generated images have been used to blackmail children and force them into more abusive situations, including the live streaming of abuse. 

AI tools are also helping perpetrators disguise their identity to help them groom and abuse their victims. 

Technology secretary Peter Kyle said the UK has “failed to keep up” with the malign applications of the AI revolution. Photograph: Wiktor Szymanowicz/Future Publishing/Getty Images 

Senior police figures say that there is now well-established evidence that those who view such images are likely to go on to abuse children in person, and they are concerned that the use of AI imagery could normalise the sexual abuse of children. 

The new laws will be brought in as part of the crime and policing bill, which has not yet come to parliament. Peter Kyle, the technology secretary, said that the state had “failed to keep up” with the malign applications of the AI revolution. 

Writing for the Observer, he said he would ensure that the safety of children “comes first”, even as he attempts to make the UK one of the world’s leading AI markets. 

“A 15-year-old girl rang the NSPCC recently,” he writes. “An online stranger had edited photos from her social media to make fake nude images. The images showed her face and, in the background, you could see her bedroom. The girl was terrified that someone would send them to her parents and, worse still, the pictures were so convincing that she was scared her parents wouldn’t believe that they were fake. 

“There are thousands of stories like this happening behind bedroom doors across Britain. Children being exploited. Parents who lack the knowledge or the power to stop it. Every one of them is evidence of the catastrophic social and legal failures of the past decade.” 

The new laws are among changes that experts have been demanding for some time. 

“There is certainly more to be done to prevent AI technology from being exploited, but we welcome [the] announcement, and believe these measures are a vital starting point,” said Derek Ray-Hill, the interim IWF chief executive. 

Rani Govender, policy manager for child safety online at the NSPCC, said the charity’s Childline service had heard from children about the impact AI-generated images could have. She called for more measures stopping the images being produced. “Wherever possible, these abhorrent harms must be prevented from happening in the first place,” she said. 

“To achieve this, we must see robust regulation of this technology to ensure children are protected and tech companies undertake thorough risk assessments before new AI products are rolled out.” 
