Disaster Child predators are using AI to create sexual images of their favorite ‘stars’: ‘My body will never be mine again’

Since dark web browsers let users be anonymous or untraceable, child safety groups have few means of requesting images be removed or reporting users to law enforcement. Composite: The Guardian/Getty Images
Safety groups say they’re increasingly finding chats about creating images based on past child sexual abuse material

Predators active on the dark web are increasingly using artificial intelligence to create sexually explicit images of children, fixating especially on “star” victims, child safety experts warn.

Child safety groups tracking the activity of predators chatting in dark web forums say they are increasingly finding conversations about creating new images based on older child sexual abuse material (CSAM). Many of these predators obsess over child victims who are referred to as “stars” in predator communities because of the popularity of their images.

“The communities of people who trade this material get infatuated with individual children,” said Sarah Gardner, chief executive officer of the Heat Initiative, a Los Angeles non-profit focused on child protection. “They want more content of those children, which AI has now allowed them to do.”

These abuse survivors may now be grown adults, but AI has heightened the prospect that more people will view sexual content depicting them as children, according to experts and abuse survivors interviewed. They fear that images of them circulating on the internet or within their communities could threaten the lives and careers they’ve built since their abuse ended.


Megan, a survivor of CSAM, whose last name is being withheld because of past violent threats, says that the potential for AI to be used to manipulate her images has become an increasingly stressful prospect over the past 12 months, though her own abuse occurred a decade ago.

“AI gives perpetrators the chance to create even more situations of my abuse to feed their own fantasies and their own versions,” she said. “The way my images could be manipulated with AI could give the false impression it was not harmful or that I was enjoying the abuse.”

Since dark web browsers enable users to be anonymous or untraceable, child safety groups have few means of requesting these images be removed or reporting the users to law enforcement.

Advocates have called for legislation that goes beyond criminalization to prevent the production of CSAM, by AI and otherwise. They are pessimistic, though, that much can be done to enforce bans on the creation of new sexualized images of children now that the AI enabling it has become open source and can be run privately. Encrypted messaging services, now often default options, allow predators to communicate undetected, say advocates.

Creating new CSAM and reviving old CSAM with AI

The Guardian has viewed several excerpts of these dark web chat room conversations, with the names of victims redacted for safeguarding. The discussions take an amiable tone, and forum members are encouraged to create new images with AI to share in the groups. Many said they were thrilled at the prospect of new material made with AI; others were uninterested because the images do not depict real abuse.

One message from November 2023 reads: “Could you get the AI to recreate the beautiful images of former CP [child porn] stars [redacted victim name] and [redacted victim name] and get them in some scenes – like [redacted victim name] in a traditional catholic schoolgirl’s uniform at Elementary School, and [redacted victim name] in a cheerleader’s outfit at Junior High?”

In another chat room conversation, predators also discussed using AI to digitally remaster decades-old popular child exploitation material of low quality.

“Wow you are awesome,” one predator wrote to another in January. “I appreciate your effort keep going upscaling classical vids.”

While predators have used photo-editing software in the past, recent advances in AI models make it far easier to create more realistic abuse images of children.

Much of this activity focuses on so-called “stars”.

“In the same way there are celebrities in Hollywood, in these online communities on the dark web, there’s a celebrity-like ranking of some of the favourite victims,” said Jacques Marcoux, director of research and analytics at the Canadian Centre for Child Protection. “These offender groups know them all, and they catalogue them.”

“Offenders eventually exhaust all the material of a specific victim,” said Marcoux. “So they can take an image of a victim that they like, and they can make that victim do different poses or do different things. They can nudge it with an AI model to do different poses on a bed or be in different stages of undress.”

Data bears out the phenomenon of predators’ preoccupation with “stars”. In a 2020 assessment to the National Center for Missing and Exploited Children, Meta reported that just six videos accounted for half of all the child sexual abuse material being shared and re-shared on Facebook and Instagram. Roughly 90% of the abusive material Meta tracked in a two-month period was the same as previously reported content.

Real Hollywood celebrities are also potential targets for victimization with AI-generated CSAM. The Guardian reviewed dark web chatroom threads in which users asked predators proficient in AI to create child abuse images of celebrities, including teen idols from the 1990s who are now adults.

How child sexual abuse material made by AI spreads

Predators’ use of AI became prevalent at the end of 2022, child safety experts said. The same year that OpenAI debuted ChatGPT, the LAION-5B database, an open-source catalogue of more than 5bn images that anyone can use to train AI models, was launched by an eponymous non-profit.

A Stanford University report released in December 2023 revealed that hundreds of known images of child sexual abuse had been included in LAION-5B and used to train popular AI image-generation models, which can now be exploited to generate CSAM. Though the images were a small fraction of the whole database, they carry an outsize risk, experts said.

“As soon as these things were open sourced, that’s when the production of AI generative CSAM exploded,” said Dan Sexton, chief technology officer at the Internet Watch Foundation, a UK-based non-profit that focuses on preventing online child abuse.

The knowledge that real abuse images are used to train AI models has resulted in additional trauma for some survivors.

“Non-consensual images of me from when I was 14 years old can be resurrected to create new child sexual abuse images and videos of victims around the world,” said Leah Juliett, 27, a survivor of child sexual abuse material and activist. “To know my photos can still be weaponized without my consent to harm other young children, it’s a pain and a feeling of helplessness and injustice.”

“My body will never be mine again, and that’s something that many survivors have to grapple with,” they added.

Experts say they’ve seen a shift towards predators using encrypted private messaging services such as WhatsApp, Signal and Telegram to spread and access CSAM. A great deal of CSAM is still shared outside of mainstream channels on the dark web, though. In an October 2023 report, the Internet Watch Foundation (IWF) said it found more than 20,000 AI-generated sexual images of children posted to a single dark web forum over a one-month period in September.

“Images show the rape of babies and toddlers; famous pre-teen children being sexually abused; BDSM (bondage and discipline, dominance and submission, and sadomasochism); content featuring tweens and teenagers, and more,” the report states.

Over the last year, AI image generators have improved across the board, and their output has become increasingly realistic. Child safety experts said AI-generated still images are often indistinguishable from real-life photos.

“We’re seeing discussions happen where [offenders] are discussing how to fix problems, such as signs the image is fake like extra fingers. They’re coming up with solutions. The realism is getting better,” said Sexton. “There is a demand to create more images of existing victims using fine-tuned models.”

What effect will AI-generated CSAM have?

Experts say the impact of AI-generated CSAM is only starting to come into focus. In certain circumstances, viewing CSAM online can cause a predator’s behavior to escalate to committing contact offences with children, and it remains to be seen how AI plays into that dynamic.

“There are examples of men that I’ve worked with where their online behavior reinforced a sexual interest in children and led to a greater preoccupation of that sort of behavior,” said Tom Squire, head of clinical engagement at the Lucy Faithfull Foundation in the UK, a non-profit focused on preventing child sexual abuse. The organization operates an anonymous helpline for anyone with a concern about child sexual abuse, including their own thoughts or behaviors.

“They joined a group online where there was a currency to the sharing of images, and they wanted to contribute to that, then directly on from there they’ve gone on to sexually abuse children, and perhaps take images of that abuse and share it online,” said Squire.

Some predators mistakenly believe that viewing AI-generated CSAM may be more ethical than “real life” material, experts said.

“One of our concerns is the capacity for them to justify their behavior because these are somehow images of a victimless crime that doesn’t involve real-world harm,” said Squire. “Some of the people who call us make an argument to minimize the gravity of what they’re doing.”

What can be done to curb AI-generated sexualized images of children?

In many countries, including the US and UK, decades-old laws already criminalize any CSAM created using AI via prohibitions on any indecent or obscene visual depictions of children. Pornographic depictions of Taylor Swift made by AI and circulated early this year prompted the introduction of legislation in the US that would regulate such deepfakes.

In April, a 51-year-old US man was arrested in Florida on allegations he created CSAM using AI with the face of a child he’d taken pictures of in his neighborhood. On May 20, the US Department of Justice announced the arrest of a 42-year-old man in Wisconsin on criminal charges related to his alleged production, distribution and possession of more than 10,000 AI-generated images of minors engaged in sexually explicit conduct.

“We need legislative reform to ensure that abuse has no place to fester,” said Juliett. “But we also need cultural reform to stop abuse like this from happening in the first place.”

Child safety and tech experts interviewed were pessimistic about whether it is possible to prevent the production and distribution of AI-generated CSAM. They highlight that much of the production goes undetected by the authorities.

“Once it became open source, it was problematic,” said Michael Tunks, head of policy and public affairs at the Internet Watch Foundation. “Anybody can use text to image-based AI-generated tools to create any AI imagery they want.”

AI software is downloadable, which means these abusive and illegal activities can be taken offline.

“This means offenders can do it in the privacy of their own home, within the walls of their own network, therefore they’re not susceptible to getting caught doing this,” said Marcoux.

Article Link

Archive
 
AI software is downloadable, which means these abusive and illegal activities can be taken offline.

“This means offenders can do it in the privacy of their own home, within the walls of their own network, therefore they’re not susceptible to getting caught doing this,” said Marcoux.
This was always possible with image editing software; AI just makes it easier and faster.

“Once it became open source, it was problematic,” said Michael Tunks, head of policy and public affairs at the Internet Watch Foundation. “Anybody can use text to image-based AI-generated tools to create any AI imagery they want.”
the true purpose of the article -- tell people that don't understand technology that if they don't let their life be filtered through the government and large corporations then they're using tools of evil
 
The Guardian has viewed several excerpts of these dark web chat room conversations, with the names of victims redacted for safeguarding. The discussions take an amiable tone, and forum members are encouraged to create new images with AI to share in the groups. Many said they were thrilled at the prospect of new material made with AI; others were uninterested because the images do not depict real abuse.

One message from November 2023 reads: “Could you get the AI to recreate the beautiful images of former CP [child porn] stars [redacted victim name] and [redacted victim name] and get them in some scenes – like [redacted victim name] in a traditional catholic schoolgirl’s uniform at Elementary School, and [redacted victim name] in a cheerleader’s outfit at Junior High?”

People like this should be executed.
 
At what point do we draw the line with having machine learning tools in our homes?

The premise in this article is tough because on the one hand, pedophiles are still generating revenue even if the material is AI-generated. But on the other hand, the prospects for regulating or stopping the misuse of ML tools seem dire and akin to opening the door to the government.

How?
 
Anybody can use text to image-based AI-generated tools to create any AI imagery they want.”
I could do this before with Photoshop or a pen and paper. Are we hand-wringing over Unreal Engine? People are creating porn and exposing it to kids in Roblox ffs. The only thing that's changed is that this tech has become accessible to everyone almost overnight to generate realistic images. Upscaling was always a thing, it just took hours before to complete. This is just an excuse for them to tie personally identifiable information to AI usage so they can monitor and police what you create, then prosecute you for it. That's going to be the scary next step.
 
We don't have to outlaw AI to stop AI CSAM; you just ban AI CSAM, easy peasy.

You want your CSAM "art"? It can't pass the uncanny valley, sorry chomos.

This is pretty much what the USA is doing. The FBI said it would treat AI-generated CSAM as if it were real CSAM: since the end result is photorealistic and not really possible to distinguish from the real stuff, it is better to treat it the same than to allow real CSAM to squeeze by. A decent compromise.
 
There are examples of men that I’ve worked with where their online behavior reinforced a sexual interest in children and led to a greater preoccupation of that sort of behavior,”
This is why the arguments about drawn CP not being bad fall flat. It's addictive and the behaviour escalates.
This article is disgusting - it’s exploiting victims of terrible abuse to push the notion that encrypted messaging is bad, anonymity is bad and AI open source is bad.
Criminalising anonymity tools will do nothing to save children. It needs agents to infiltrate and destroy these rings and groups constantly - they're accessible if agents are looking. I know that's one hell of a job and I respect anyone who can do it and not crack, but removing everyone's privacy is not going to help a single child.
This is how internet anonymity will be stripped - for ‘the good of the children.’
 
“Once it became open source, it was problematic,” said Michael Tunks, head of policy and public affairs at the Internet Watch Foundation. “Anybody can use text to image-based AI-generated tools to create any AI imagery they want.”
This is the key right here, FAGMAN really wants to ban open source AI and "think of the children" is the easiest way to do it. There are trillions of dollars to be made in this field, and they want to make sure you have to rent their model and pay a monthly subscription fee to get diverse, inclusive, and safe slop.
 
This is pretty much what the USA is doing. The FBI said it would treat AI-generated CSAM as if it were real CSAM: since the end result is photorealistic and not really possible to distinguish from the real stuff, it is better to treat it the same than to allow real CSAM to squeeze by. A decent compromise.
There are still pretty trivial ways to identify AI-generated images, and detection tools are also improving quite rapidly. It seems inappropriate to treat images of an actual child being abused with the same severity as an image generated by a machine learning algorithm.

Isn't basing """art""" on actual CSAM already illegal, or is that only in certain countries?
It depends. Lots of countries have laws designed to prevent someone from slapping a digital filter over actual CSAM and claiming that it's art. Drawings are a gray area.
 
It's a no-brainer that AI CP should be illegal; there's no justified reason to generate AI photos of naked children.
 
First, go arrest the cucks actually molesting children.

Second, four-plus articles about privacy being violated in the name of "protecting children" in a week. I said it in the other thread, but why do people fall for it every single time? It's not about protecting children. It's about controlling what you can and can't do, while watching everything you do so you can't bypass their control over you - to protect the children, of course.
 
There is way too much shit that deserves legitimate outrage these days; people making fake cheese pizza is below my threshold. If we can get some of the serious shit under control, then maybe I can give a crap about fake cheese pizza.
 
There are still pretty trivial ways to identify AI-generated images, and detection tools are also improving quite rapidly. It seems inappropriate to treat images of an actual child being abused with the same severity as an image generated by a machine learning algorithm.

True, you can check it out and tell with some tools. But I think the idea is to put a massive roadblock on it spreading and to sabotage any attempt by nonces to get out of it with "akshually it's all AI generated", along with making it so people cannot get exposed to it and stop caring.
 