February 2, 2025

Crackdown on AI tools used for child sexual abuse   

UK will be first country to bring in tough new laws to tackle the technology behind the creation of abusive material 

 

Britain is to become the first country to introduce laws tackling the use of AI tools to produce child sexual abuse images, amid warnings from law enforcement agencies of an alarming proliferation in such use of the technology.

In an attempt to close a legal loophole that has been a major concern for police and online safety campaigners, it will become illegal to possess, create or distribute AI tools designed to generate child sexual abuse material.

Those found guilty will face up to five years in prison. 

It will also become illegal for anyone to possess manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children, with a potential prison sentence of up to three years.

A stringent new law will also target those who run or moderate websites designed for sharing abusive images or advice with other offenders. Extra powers will be handed to the Border Force, which will be able to compel anyone whom it suspects of posing a sexual risk to children to unlock their digital devices for inspection.

The news follows warnings that the use of AI tools in the creation of child sexual abuse imagery has more than quadrupled in the space of a year. There were 245 confirmed reports of AI-generated child sexual abuse images last year, up from 51 in 2023, according to the Internet Watch Foundation (IWF). 

Over a 30-day period last year, it found 3,512 AI images on a single dark web site. It also identified an increasing proportion of “category A” images – the most severe kind. 

AI tools have been deployed in a variety of ways by those seeking to abuse children. It is understood the technology has been used to “nudify” images of real children, or to apply the faces of children to existing child sexual abuse images.

The voices of real children, including victims, are also used.

Newly generated images have been used to blackmail children and force them into more abusive situations, including the live streaming of abuse. 

AI tools are also helping perpetrators disguise their identities as they groom and abuse their victims.

Technology secretary Peter Kyle said the UK has ‘failed to keep up’ with the malign applications of the AI revolution. Photograph: Wiktor Szymanowicz/Future Publishing/Getty Images 

Senior police figures say that there is now well-established evidence that those who view such images are likely to go on to abuse children in person, and they are concerned that the use of AI imagery could normalise the sexual abuse of children.

The new laws will be brought in as part of the crime and policing bill, which has not yet come to parliament. Peter Kyle, the technology secretary, said that the state had “failed to keep up” with the malign applications of the AI revolution.

Writing for the Observer, he said he would ensure that the safety of children “comes first”, even as he attempts to make the UK one of the world’s leading AI markets. 

“A 15-year-old girl rang the NSPCC recently,” he writes. “An online stranger had edited photos from her social media to make fake nude images. The images showed her face and, in the background, you could see her bedroom. The girl was terrified that someone would send them to her parents and, worse still, the pictures were so convincing that she was scared her parents wouldn’t believe that they were fake.

“There are thousands of stories like this happening behind bedroom doors across Britain. Children being exploited. Parents who lack the knowledge or the power to stop it. Every one of them is evidence of the catastrophic social and legal failures of the past decade.”

The new laws are among changes that experts have been demanding for some time. 

“There is certainly more to be done to prevent AI technology from being exploited, but we welcome [the] announcement, and believe these measures are a vital starting point,” said Derek Ray-Hill, the interim IWF chief executive. 

Rani Govender, policy manager for child safety online at the NSPCC, said the charity’s Childline service had heard from children about the impact AI-generated images could have. She called for more measures stopping the images being produced. “Wherever possible, these abhorrent harms must be prevented from happening in the first place,” she said. 

“To achieve this, we must see robust regulation of this technology to ensure children are protected and tech companies undertake thorough risk assessments before new AI products are rolled out.” 
