The UK is the first country to introduce laws targeting AI tools used to create child abuse material, with harsh penalties for offenders.
The UK government has announced a series of groundbreaking laws to combat the growing threat of AI-generated child sexual abuse material (CSAM). These new measures aim to make it illegal to create, possess, or distribute AI tools designed to generate such disturbing content, marking a significant step in protecting children from online abuse.
The Home Office has stated that the UK will be the first country in the world to enforce these laws, with offenders facing up to five years in prison. In addition, possessing “paedophile manuals” that teach how to use AI for sexual exploitation will be a criminal offence, carrying a sentence of up to three years.
Home Secretary Yvette Cooper spoke out against the alarming use of AI in the creation of CSAM, describing how the technology has “industrialised the scale” of online child abuse. She stressed that the government’s measures may need to go further in response to the rapidly evolving threat posed by AI.
As part of these new laws, running websites that share CSAM or provide advice on grooming children will also be made illegal, with a maximum sentence of ten years in prison. Furthermore, the Border Force will be granted powers to inspect the digital devices of individuals suspected of posing a sexual risk to children when entering the UK. Those found to be in possession of CSAM could face up to three years in prison, depending on the severity of the images.
AI-generated CSAM consists of images that are partly or entirely computer-generated, including output from software that alters real images of children to appear realistic. In some cases, the voices of real-life victims are also used, re-victimising survivors of abuse. These fake images are used not only for exploitation but also to blackmail children and coerce them into further abuse.
The National Crime Agency (NCA) revealed that there are over 800 arrests each month related to online child abuse, with 840,000 adults in the UK posing a potential threat to children, both online and offline. Cooper described how perpetrators are using AI to groom or blackmail children, distorting images and drawing young people into deeper levels of abuse.
While the government’s measures have been largely welcomed, some experts have pointed out significant gaps in the legislation. Professor Clare McGlynn, an expert in the legal regulation of online abuse, argued that the government should go further to address the “normalisation” of simulated child sexual abuse content found on mainstream pornography websites. She called for a ban on “nudify” apps, which digitally alter images to make adults appear childlike.
The Internet Watch Foundation (IWF) has warned that AI-generated CSAM is becoming more prevalent, with reports rising by 380% in 2024 compared to the previous year. The charity’s interim CEO, Derek Ray-Hill, stated that the availability of such content emboldens abusers and makes children less safe.
Lynn Perry, CEO of Barnardo’s, a children’s charity, also supported the new laws, highlighting the need for tech companies to implement stronger safeguards to protect children from abuse online.
These new laws will be introduced as part of the Crime and Policing Bill in Parliament in the coming weeks, representing a vital first step in curbing the rising threat of AI-powered child sexual exploitation.