Law: Man in Florida arrested for creating child porn through AI prompts

A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, highlighting the danger and growing ubiquity of generative AI being used for nefarious purposes.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage as law enforcement led the uniformed McCorkle out of the theater in handcuffs.

The investigation kicked off after the Indian River County Sheriff's Office received tips that McCorkle was prompting an AI image generator to make child sexual imagery and distributing it via the social media app Kik.

McCorkle's arrest was part of a county operation targeting people possessing child pornography, but the generative AI wrinkle in this particular case shows how the technology is opening new avenues for crime and child abuse.
 
So then where does the issue of AI come in? Is it because AI images are just made up of thousands of bits of real-world images, so in a roundabout way you are still using real exploitation material?
Models that can generate nude images might be biased toward doing so, but they are trained on a basic model that can create most things, including children.

With targeted prompting, the use of IP adapters, and ControlNet, you could easily create realistic-looking CP with most models without there being a single CP image in the training data. I had the displeasure of seeing some sus images on civitai.

Anyone caught with a model trained on CP (I'm not tech-savvy enough to know if you could check that) would simply be charged with possession.
If someone gets caught making CP, they should face consequences.

The wrong approach would be to try to regulate image generation tools. That would be the same as policing Photoshop or Gimp because you could use them to create realistic-looking CP.
There are many tools that could create realistic-looking CP, including pen and paper.
 
The amount of autism in this post is fucking staggering. Apparently according to him, not wanting AI generated CSAM is anti white jewishness.

Guess ill start wearing a tiny hat and being a greedy fuck, oi vey, fuck hwite goyim
Fuck kids to save the white race, all while ignoring that for millennia within the Judeo-Christian world, fucking minors and large age gaps were seen as unhealthy even then. What was actually pushed was either marriage between mid-to-late teenagers or early twenty-somethings (let the young, strong, and horny get married and start a family while the grandparents and extended family are still fit enough to help raise the many children), or, where a large age gap occurred, adults marrying for financial reasons (Jimbob, despite being in his mid-40s while you are in your late twenties, is a well-established man, or his wife has died; by marrying him you ensure that you will not only have kids but be financially set for the rest of your life). Even the weird edge cases, where they weren't outright pedophilia, were exactly for financial reasons (even though you are 14 and Tim is 78, Tim is a Civil War vet on a pension; everyone knows Tim will die soon and his wife will be granted a stream of income for the rest of her life, and we the parents are very poor, so this pension will let you have some income and not be as poor as us) or for odd political reasons (keep-the-kingdoms-together-and-produce-an-heir nonsense).


I don't understand why these retards think they are in any of the situations I outlined and not an outright pedo.

The horrific question, then, is: could those non-CP-trained AIs be the sick loophole these creeps so desire?

Wouldn't it be simpler to say: OK, you can have A and you can have B, but if you mix A and B together to produce C, then you have committed a crime, because C is illegal?
 
AI is like a dumb autistic human at this point. Think a tranny or a nigger or a furry.

So these major companies have their own pet retards.

The absolutely last thing, besides making a kike feel slight discomfort, is their pet retard AI being a pedophile.

They bend over backwards to scrub even training data, which costs real money paying niggers and pajeets to label the data, and imagine the kike founders and operators at jewgle or (((openai))) having to *gasp* spend a nickel they didn’t have to.

I find it astounding that there are even self hosted models that wouldn’t “sorry I can’t do that Dave” to pedo prompts.
 
These image generators need to be holistic to connect language to the objects depicted.
Without limiting them to the point where they lose a lot of functionality, you can't police this at a technological level.

People need to see AI for what it is: a tool. It can be used for illegal and reprehensible things, but most people use it for nonsense and cool stuff.

The most reasonable approach is what was done in the article in this thread: go after the guy who made and distributed CP. That's how it should be done.

The only reason CP is still such a big problem is that there are people high up and in the agencies that should prosecute it who are pedos themselves.
 
Soooo ... maybe I'm late but has anyone asked the obvious question yet -- since this kind of AI generation has to be driven by existing material being fed to it for it to mimic, where the fuck did the AI model get its source data that it included CSAM? Please tell me this isn't/wasn't a publicly available model. I don't want to lose my last drop of hope for this world.

The models are typically trained via scraping the internet automatically, so it most likely sucked up some CSAM content off some forum or corner of the web that the feds weren't paying attention to/are running as a honeypot.

There are tons of legal and ethical problems with web-scraping, which is why the real "revolution" in AI isn't new algorithms so much as it is autistic faggots willing to do things that the buttoned-down blue-chip companies working on this stuff 15-20 years ago weren't.
 
Policing cartoons is a good use of resources, fuck eating and gas for the car

Bro. An ai image is a cartoon. It's computer generated. It's not real. It's pixels creating a picture of something that doesn't exist

Jesus christ no wonder they have you faggots wearing masks

The dude is a pedo, but...I find it odd that kiwifarm people would be advocating imprisonment for something victimless, like words. If this guy can be arrested, so can you. The pixel police won't make distinctions

It's a simulation of something that doesn't exist or events that never happened.

Guys, you're falling for this shit, it's embarrassing lol

No it isn't. It's made up. Photographs are trained on real content. Ai is a computer generated image. It's Pixar. It's not real

The same people who will arrest you for thinking about pedo shit are the same who will and want to arrest you for thinking about "racist" shit
I bet you cash this nigger posts in kotaku in action. I've seen the EXACT same arguments used in defence of child porn on that subreddit a million times.


 
A 2023 study from Stanford University also revealed that hundreds of child sex abuse images were found in widely-used generative AI image data sets.
Details on this:

LAION 5B was a database of links to 5.85 billion images. To avoid infringement and massive hosting costs, the images weren't scraped and collected for you; you had to use the links to scrape the images yourself. If you indiscriminately scrape practically everything accessible online, you are bound to accidentally end up with some CSAM due to sites not properly dealing with this content themselves, even mass-market sites like Reddit. The links were not manually reviewed. It would've been tremendously impractical to review 5.85 billion images to ensure not a single one was illegal.
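To put a number on how impractical that is, here's a back-of-the-envelope sketch; the team size and per-image pace are my own assumptions, not figures from the study:

```python
# Rough estimate of how long manually reviewing LAION 5B would take.
# Assumed numbers: 100 full-time reviewers, one second per image.
links = 5_850_000_000                 # 5.85 billion URLs in LAION 5B
seconds_per_image = 1                 # generous: one second per image
reviewers = 100                       # hypothetical review team

seconds_per_year = 60 * 60 * 24 * 365
years = (links * seconds_per_image) / (reviewers * seconds_per_year)
print(f"{years:.1f} years of round-the-clock reviewing")  # ~1.9 years
# At realistic 8-hour shifts that's roughly 5-6 calendar years,
# just to glance at each image once.
```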

From what I understand, many of the links to those images were already dead at the time the study was performed, since no shit, it's awful stuff that most hosts delete as soon as it's discovered.

The fact that these links were contained in the database does not mean that any AI was ever actively trained on them. You aren't somehow forced to train on all of LAION. All models that I've read about pruned LAION down by at least half, if not more, to focus on higher-quality, higher-resolution images.

Additionally, if an AI had been trained with a few hundred CSAM images among billions, that does not mean the contents of those images influence the model in a significant way. Any given image has a minuscule influence on the model overall and you generally cannot reproduce them to any significant extent except in the case of overtraining, when the model looks at copies and variations of the same image thousands of times (like the Mona Lisa).

There might be a dozen photos of Ethan Ralph that were trained on somewhere in the depths of Stable Diffusion. That doesn't mean you can make pictures of him using it out of the box.

AI models do not actually contain the images they train on; the images are not compressed or chopped up and stored inside the weights. Even compressed, billions of images couldn't fit into a few gigabytes.
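A quick sanity check on that storage math, using order-of-magnitude figures (the checkpoint and dataset sizes are rough approximations on my part, not exact numbers):

```python
# Order-of-magnitude check: can a checkpoint "contain" its dataset?
# Assumed figures: a ~2 GB Stable Diffusion 1.x-style checkpoint
# trained on roughly 2 billion captioned images.
model_bytes = 2 * 1024**3            # ~2 GB of weights
training_images = 2_000_000_000      # ~2 billion images

bytes_per_image = model_bytes / training_images
print(f"{bytes_per_image:.2f} bytes available per image")  # ~1.07
# Even a heavily compressed thumbnail needs thousands of bytes,
# so the weights mathematically cannot be an archive of the dataset.
```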

Indicting AI for the presence of mostly dead links to CSAM within LAION is like refusing to eat an apple because the tree it came from may have been watered with piss once.

The focus should be on arresting the people who misuse AI to create this shit, just like people who might misuse Photoshop to make it, or any other tool.
 
Can someone please explain to me how this shit is (rightfully) illegal but drawn pedo shit that Japan makes isn't? Is it some nonsense first amendment argument or has no one gotten the balls to actually arrest these people?

Personally, I think if you consume anything involving minors (drawn, real, or otherwise), or what a reasonable person would consider a minor (looking at you, "but she's 10,000 years old"), for the purpose of sexual exploitation/sexual use (including shit like Cuties), you should be worked to death/shot. I'm just wondering why the line was drawn where it was, or why we don't have a line at all for certain things.


Btw: Kill all pedos, their supporters and those who shield them.

Also I'm not for loosening the laws, I want them to be strengthened. Anime drawn shit and that crap on Netflix and what trannies make should be illegal
The very short answer is that it's a legal grey area due to the First Amendment and obscenity laws.

People have been convicted for possessing and distributing drawn lolicon, but under existing obscenity laws. Usually, it involves distribution or the person was otherwise extremely brazen about it. It doesn't get broadly enforced (except in extreme cases like the guy in OP's news article) because law enforcement has bigger fish to fry (e.g. actual pedophiles and producers of CSAM involving real children) and because the laws are not as uniform or strong. The fact that 4chan allows this stuff, despite being based in the US, is a good indicator of this. I'm sure that someone on the website with a serious legal background could give a more competent answer than someone like me who only has an armchair legal opinion.

I think that the case in OP's article is the first in the US where someone is being prosecuted for AI-generated content. AI-generated content is disturbing because it can (and clearly has been) used to generate extremely realistic content, and because it uses real pictures of real children (thus not only being highly fucked up, but also involving potentially countless victims). Lolicons could fall back on "iT's JuSt PiXeLs, MaN," but not so much with AI-generated content.

Also, if the guy in this Florida case presented his content as real, then he's extra uber fucked. It's sort of like laws where you get charged with having a real gun in a robbery even if it turns out to be a spray-painted toy.
 
Pictures of children, which people voluntarily post on the internet by the millions.

Fwiw, that's not really how ai works but even still, it's a depiction of something that isn't real either way

If I asked ai to create an image of a unicorn with a machine gun horn...that isn't trained on any such real thing either.

It's a simulation of something that doesn't exist or events that never happened.

Guys, you're falling for this shit, it's embarrassing lol
I don't know which is worse, the first sentence or the whole post having reddit spacing.
 
AI is trained on real content, then when requested spits out a blend of everything in the dataset, matching the requested output.

You're a retard that doesn't know how AI works.
That's something which has no bearing on how the law is applied.
Keyword: a blend of everything in the dataset. An AI is capable of picking apart subjects and mixing concepts. That's where almost all its utility comes from. If an AI could only generate things it has seen before then it would be pretty fucking pointless. "Real content" could mean anything.

Use of generative AI to satiate pedophiles is a disgusting use case, but the point being made here seems to be whether or not this legally counts as distribution of illicit material based on the fact it was generated and not photographed. If it is not trained off of child pornography then the law does not have an explicit rule for this.

Law enforcement understandably agrees that it should count as photography since it is convincing enough. But the law itself is not written with AI generated images in mind and it's something that will need to be challenged at the judicial level before we actually know.

TL;DR: Suel's point was never to justify AI pornography of children; the point was that an arrest like this is legally questionable because the law treats artwork and photography differently, and we don't know how this all works yet.
 
Fuck it, I feel like putting my reaction score to the negatives. You all are retarded niggers and @Suel Forrester didn't deserve his ban. This makes me disappointed in KF and disgusted by the tactics used here. It is no different than libtards on Twitter. For shame.

The man was a retard who didn't fucking know how anything worked but made good points anyhow, and instead of arguing his points at face value, you retarded faggots decided to call him a pedophile despite him claiming the contrary many times. I fully expected to read this thread and see the most retarded takes ever from this user but it was everyone else with them instead.

Should AI CP be legal? Fuck no, it is disgusting as shit, and I know this. I'm no pedophile, lolicon enjoyer, whatever fucking terms they slap on to cover what it really is, but the guy talked about things that I think should be discussed and you all slandered him with such. So let this be known here: Give me a double barrel and I'll happily blow the brains on any pedophile ever. Now do you think we should really be focusing our efforts on getting faggots who AI generate CP, or should we go after the sources of actual CP?
 
As already said somewhere in the thread, some of these big datasets used to train big models have billions of images scraped from the web, containing a small amount of CSAM, people's tax returns, x-rays, stuff like that.

It's possible that these AI models are combining clothed children and nude adults.

But what's a lot more likely is that people are training LoRAs on actual CSAM, child models, and other pornography. This kind of training is done on top of existing models and requires orders of magnitude fewer resources.
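For a sense of that scale difference, here's a minimal sketch of why LoRA training is so much cheaper than full training; the layer dimensions and rank are illustrative values I picked, not any real model's config:

```python
# LoRA trains two small low-rank matrices per layer instead of
# updating the full weight matrix, which is why it runs on
# consumer hardware. Illustrative dimensions, not a real config.
d_in, d_out, rank = 4096, 4096, 8

full_params = d_in * d_out              # trainable params, full fine-tune
lora_params = rank * (d_in + d_out)     # trainable params, LoRA update

print(f"{full_params / lora_params:.0f}x fewer trainable params")  # 256x
```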

The article has a clue. They "received tips that McCorkle was prompting an AI image generator to make child sexual imagery". Either someone had to be looking over his shoulder while he was doing that, or someone he was distributing the images to on Kik ratted him out, or he used some kind of online generator that logged his IP. Court docs could shed more light on this.
 
There are existing federal statutes under the PROTECT Act of 2003 which prohibit realistic, computer-generated content. While I'm not aware of any laws explicitly dealing with generative AI, there are applicable federal (and, in the case of this news article, state) laws.
The problem is that some AIs are trained on real CSAM. Even if they are trained on non-CSAM material, they still use pictures of real children, which is both fucked up and illegal.

This is not a hill worth dying on.
 
I have never said it wasn't a problem. Police should focus their efforts on the core sources of that CSAM and the pedos who train AI models on it. We don't know much about the AI model the guy in the article used, but it would be (while concerning) completely possible to generate CP with nothing but legal material.
 
A lot of you sound like Kamala explaining what the cloud is when trying to explain AI.

I also think the guy was a retard and his point didn’t come across. Unless I’m a huge retard (big possibility), he was onto some conspiracy that whites will be targeted for undefined crimes. Was he a crazy nigger? Yes. Was he defending pedophilia? Idk but from what I read it didn’t seem to be the case.

EDIT: I'm not well-versed in schizo talk. It looks like he did make some questionable arguments, mainly saying that it wasn't that bad and that it's ok to scrub child images from the internet. I take back my words.
 
I don't know the exact law off the top of my head, and I could be misremembering, but I believe that it is indeed illegal in the US to make pornography - drawn, 3D generated, or otherwise - of actual, existing minors. For example, Shadman got in serious trouble for drawing porn of Keemstar's daughter.

I really, really don't think you're correct on this one. Suel was not only defending AI-generated CP, but he was also refusing to acknowledge the aforementioned illegal aspects of it while being a tremendous douchebag about it.
 