Discord, big data, and pedophiles - The application of data science to Discord degeneracy

It might be worth it to start hoarding the messages that the more dangerous users are sending. The more data you can give the feds, the more likely they might be able to do something.

As much as I hate the glowies and the invasion of privacy this bot represents, it is worth it if online spaces are made safer and pedos get weeded out as a result.
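For what it's worth, here's a minimal sketch of what that hoarding could look like, assuming a discord.py bot; the watchlist IDs, database file, and table layout are all made up for illustration, not anything OP has described:

```python
# Minimal sketch: archive every message from manually flagged accounts into a local
# SQLite table so there's a durable record to hand over if it ever comes to that.
import sqlite3
import discord

WATCHLIST = {111111111111111111, 222222222222222222}  # hypothetical flagged account IDs

db = sqlite3.connect("flagged_messages.db")
db.execute(
    """CREATE TABLE IF NOT EXISTS messages (
           message_id INTEGER PRIMARY KEY,
           author_id  INTEGER,
           guild_id   INTEGER,
           channel_id INTEGER,
           sent_at    TEXT,
           content    TEXT
       )"""
)

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    if message.author.id not in WATCHLIST:
        return
    db.execute(
        "INSERT OR IGNORE INTO messages VALUES (?, ?, ?, ?, ?, ?)",
        (
            message.id,
            message.author.id,
            message.guild.id if message.guild else None,
            message.channel.id,
            message.created_at.isoformat(),
            message.content,
        ),
    )
    db.commit()

client.run("BOT_TOKEN")  # placeholder token
```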
 

To add to this: once someone has been manually identified as a legitimate pred/pedo, perhaps the bot could be set to specifically monitor them and segregate the data relating to those identified individuals.

Another thing that would be useful for analytical purposes is monitoring how the organic flow of degeneracy increases or decreases over time.
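For that trend-over-time idea, here's a rough sketch of one way it could be tracked, assuming the bot's per-message scores get exported with timestamps; the file name, column names, and 0.8 cutoff are assumptions, not OP's actual pipeline:

```python
# Bucket scored messages by week and plot the share that exceed a threshold.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("scored_messages.csv", parse_dates=["sent_at"])  # hypothetical export
grouped = df.set_index("sent_at").resample("W")["sexual_minors_score"]

weekly = pd.DataFrame({
    "total": grouped.count(),                          # messages seen that week
    "flagged": grouped.apply(lambda s: (s > 0.8).sum())  # messages over the cutoff
})
weekly["flagged_share"] = weekly["flagged"] / weekly["total"]

weekly["flagged_share"].plot(title="Share of messages scoring > 0.8, per week")
plt.ylabel("fraction of messages")
plt.show()
```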

OP is right. It really does look like a Petri dish. I wonder if the degeneracy spreads like bacteria in one.

I really hope you manage to turn this into something meaningful, both for yourself and for the safety of children. Godspeed, OP. You are truly the hero we needed.
 
I support the idea that you need to get this published academically, by the way; behavior patterns on the internet WERE a hot topic when I was there. Idk about now, but we will need this shit in the future if/when people start going after groomers again.
I find it somewhat unsurprising that the most openly degenerate of these people are fringe personalities with little interaction, but somewhat disturbing that some of the absolute worst of them appear to be pillars of the community, so to speak.
 
Considering that the prototype of this behavior involved BBS/IRC setups run by one person, I'm not really surprised. The clusters also looked like something I saw when tracking these people for honeypot reasons.

Honestly, if I still had that data I'd post it. The viewing history of the people who got bugged really showed the same patterns of clusters. Which I guess would make sense, considering the mental illness required to fuck children also involves laziness.

I just don't know how we got from fringe boards/chats to literal fucking mainstream grooming farms. I feel like I should blame Chris Hansen for some reason though. His method of ousting pedos really overhyped all this shit and made busting big pedo hives impossible. Though shitting on that senator was kino.

Oh, right, before I forget, I don't know how to explain this properly. But the modus operandi for this shit in the glowie sector was essentially "top to bottom", which is why they didn't care about individuals, btw OP. Now though? I doubt they actually care about pedos in general, since they'll get a paycheck anyway whether they do their job or not.

Speaking of: there's a new behavior pattern for groomers on Discord due to said mainstream appeal - subsuming actual Discords, like gamer Discords, ingratiating yourself by turning #general into your own personal blog, and then luring kids from said Discord into groomerinos.
 
I can see why it's tempting to believe that this is intentionally abusive behavior from Discord higher ups, even though I know the reality is that they just don't care.
I can kind of understand wanting to think this, but it's not a helpful point of view to espouse, given that >99% of discord's userbase just uses it to casually interact with their friends, even their grandma, for maybe an hour a day at most. It's not productive to tell someone "you're using that dang pedophile app" when they're just using it to tell grammy about their day, send her something they drew, or voice chat with their friends in fortnite.

Bad actors are present everywhere, as you stated in the OP, and have always been there. But now they're more visible, and sloppier operators have more ways to get caught. There are more paths for vulnerable people - not just children! - to be exposed to predators, but "discord bad" isn't a solution, unless you're willing to cut yourself or whoever you're trying to protect off from primary communications hubs related to a wide variety of hobbies and interests.

There's a rampant "x bad, kill x" attitude among certain people that doesn't do anything to address the systemic issues that allow predators to remain in power - whatever that scale of power may be. Whether they're a discord moderator, a priest, a movie director, a teacher, a sports team instructor, a scout leader... Whatever the case may be. You're not just going to be *safe* because you don't use discord... So when conversation revolves around specific applications being le bad, it pisses me off. Limiting access to applications for certain people, again, not just children, can reduce harm. Smart filters like what you've created can reduce harm. Conversations with people who you are concerned may be victims of abuse can reduce harm. "Le discord bad" is a flippant attitude adjacent to those types of parents who give their children free rein over the internet with no guidance whatsoever on how to behave there.

Which leads into the other issue of victims primarily being unparented children, in which case they aren't going to stop using discord. Nobody is going to make them do that. The onus is on discord to moderate their own services to a suitable degree if they wish to legally allow minors access to their platform... But that's every platform. People have been victimized through twitter, facebook, my space, tumblr, roblox, minecraft, discord, steam, xbox live, email, text, whatsapp, snapchat... Etc. You can make the argument that the structure of discord incentivizes a relatively large amount of vulnerable behavior, which is valid, but not the application's fault - except through their failure to implement completely feasible automated moderation.

This is getting rambly, but I think it's really important to recognize that you can't expect people to be free from victimization even if they avoid whatever platform you think is bad. There are people out there claiming that if they have kids, or with their currently infant children, that they won't be allowed access to the internet at all, or like, just be allowed to access wikipedia or whatever... And I hope they have nepotistic connections to get those children manual labor jobs if they ever follow through on those claims. You are setting your children up for failure if you cannot employ them in a technology-free environment yourself. You are setting them up for social failure unless you live in an enclave of people similarly technophobic. You are setting them up to rebel and seek use of technology through avenues not controlled by you if you live anywhere normal. And they're going to be even more vulnerable than they would have been otherwise under those circumstances, as they are already engaging in the taboo of disobeying their parents, and their access to such devices or services may be modified or withheld pending bribe-type behavior from their so-called "friends".

So what I'm saying is, acting like this feels adjacent to the non-parenting type of behavior that perpetuates this abuse in the first place. You need to talk to your children about how people may abuse them. As a homeschooled Catholic, the limit of my sexual knowledge until I was like, 14, and discovered it myself on the internet, was that God randomly gave babies to women via lottery and that my pernus felt funny when I saw big boobies; I unironically wondered if my wife would ever let me see her naked. If there'd been any bad actors in my life, I could've been groomed so hard. And if I'd gotten anything in exchange, I probably would have found it pretty hard to reveal them, depending on the exact situation. I dunno how you explain to your child how to avoid getting raped irl or on the internet if you want to avoid talking about sex with them entirely. I don't have all the answers, but I have a whole list of problems that go beyond discord existing!
 
I feel like I need a bath. The thread really blackpills me that major institutions and corpos really don't give two shits and are happy doing business with pedos as long as they aren't being too "problematic".

Thanks OP. There is a bot on Telegram too, known as the anti-pedo bot. It shows who these wicked people are.
[Screenshot of the Telegram anti-pedo bot]
 
I'd just like to add to the chorus here congratulating @grand larsony, but more to the point, encourage OP to compile and release a dataset of these messages to facilitate training of similar filtering solutions. I mean, there are plenty of datasets for training spam filters, but I don't think anyone's created a text dataset for training a pedo filter. You'd be doing the world a huge service.
 
I think it's worth stopping to define the terminology here a bit. When I call the nodes with strong sexual/minors scores pedophiles, I think I should emphasize that these people are very openly talking about children sexually. I view these people as especially dangerous to children they may come into contact with when compared to someone like, say, Lionmaker.
Off the bat, I want to say: excellent and informative post. I'll say the best part about all of this upfront - you have demonstrated that a solution to this is actually not beyond the wit of man. I'm not a programmer or techy sort, so I thought this was an almost intractable problem, and yet you have convinced me otherwise. Discord (and services with similar issues, like Roblox) have no real excuse for not doing more to protect kids from this. The recent lawsuit against Omegle actually sets some precedent that sites have a requirement to protect children, which Discord clearly isn't fulfilling.

I'll comment on a few things though. First, regarding the types of pedos: I once got a lecture from an expert on this - one of the first people to catch pedos in these mass CP groups online in the very early 2000s - and based on their responses to questions in a clinical setting, there are 7 types. I'll not bore you with all the details, but there is a difference between what you might call "preferential" predators and people who aren't specifically drawn to minors but have no qualms about taking advantage of them given the opportunity. Both are problems, but the former are more motivated than the latter to go after kids.
At the highest level, law enforcement isn't especially interested in these cases.
I think this is a little unfair; the issue isn't interest - it's capacity. Where I'm from, they think approximately 1 in 30 adults access child abuse material on a semi-regular basis. As a result, law enforcement only actively go after people who produce new material, or who they can identify as being in a high-risk position - for example, teachers or step-parents...

It's fucked, but at the same time you have to concentrate resources to do the best you can. How could the police function if they went after each individual case of this?

The numbers are just that frightening.
Modern children are exposed to dangers that weren't around when I was a kid.
It's interesting: we didn't have Discord, but AOL, Habbo Hotel, Omegle, and Chatroulette were certainly hotbeds of nonces when I was younger. I still had a laptop in my room and access to the internet without being monitored - my friends and I used to just laugh at the pedos. I suspect for most kids that's the same today. However, this wouldn't be a problem if all children reacted like this - it's the minority who don't that need protection.

I think the major change is that more kids have phones, which their parents can't check. A laptop is all well and good, but if you can just sit and talk with your Discord at all hours, then who knows what you get up to. Additionally, I feel like Discord provides outcasts a form of community they never got before - they used to be forced to sink or swim, but now they just seek out a community online that will accept them. Friends on Xbox Live or whatever never seemed to offer this huge community where you are exposed to new people at the same rate. The TL;DR is less parental oversight, and platforms that create more chances to meet and bond with new people.
 
I will say some of the sample messages feel like possible false positives?
A few are talking in the third person about someone else being a pedo, one is a potential victim talking about their experience, and a few are talking about a mommy-dom fetish which is kind of the opposite thing lol

AI is a tool that can be used by a human moderator to narrow down how much content they need to review, but it shouldn't be making the end decision.
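That workflow doesn't need to be anything fancy. A toy sketch of the triage step, with the score field and threshold assumed rather than taken from OP's bot:

```python
# Nothing gets actioned automatically; high-scoring messages just land in a
# review queue for a human moderator, worst first.
REVIEW_THRESHOLD = 0.8

def triage(scored_messages):
    """Return only the messages a human should review, highest score first."""
    queue = [m for m in scored_messages if m["sexual_minors_score"] >= REVIEW_THRESHOLD]
    return sorted(queue, key=lambda m: m["sexual_minors_score"], reverse=True)
```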
 
They could, but that would require the senders of these messages to out themselves as the ones doing pedophilic mommy breastfeeding roleplays on Discord. It’s possible that nobody who has access to these databases and could do the reverse lookup you’re suggesting will do so, so that the spotlight is never turned on them.
I'd like to agree, but don't underestimate their dumbness. Recently, when a bunch of pedo furfags got banned from Patreon, they literally wanted to go to court because they lost income, lol. They can pull the uno reverse card with "it was a roleplay between two consenting adults and this person is spying on us and posting our private messages on a nazi cyberstalking hate forum". And then in come the turbojannies (especially if one of these posters is also a furfag).
 
I'm wondering...

...how many of the bottom feeder 'obvious' ppl talking on the net about 'pedo' stuff might NOT actually be pedos?
1: They may themselves be FEDS? Possibly using bots to look for randos they can then send packages of images to, which will be tagged and easy to chase around the net?
2: They may be laying bait to trap pedos, as part of some 'unofficial' hunter/baiter type group? There's loads of pedohaters out there, looking for ppl they can hound and expose for brownie points.
3: They may be scammers, looking for pedos they can blackmail/enslave/extort? Because, shit, it's still considered a heinous crime in most circles, and most ppl who do this shit don't want to be exposed, which makes them easy pickings for scammers.

I'm just going to spout a load of random stuff here... some of it might be useful... or not.

As Mr E Grifter states... there are a few different types of pedo, and some of them are more dangerous than others. The majority of scum sniffing 'round kids on Discord servers will be, generally, small-time wanker types. Ugly and nasty, sure, but not that likely to actively do IRL meets, or be the kind of pedo who will kidnap and rape kids. If they find kids who will send them nudes, whatever, then they have what they need, but most online groomers do the same as offline groomers: they groom. They manipulate, they lie, they deceive. It's a process. Just coming out with it? Just laying it all out there? That's a risky strategy, really, isn't it? Most serious predators play the long game, and it's like the boiling frog: you don't notice the water getting warmer until...

There's been an explosion in CSAM online since the pandemic. We're talking, internationally, about millions of pedos and billions of images. The people who try to stem the tide of this ugly shit are swamped. The 'old' figure of about 1 in 32 men (on average) looking at CSAM? It's more like 1 in 20 now.

There's been a (perhaps deliberate?) grooming of society with this shit, and it's paying off. I suspect it's 'kompromat', myself, but it could be some org like PIE or NAMBLA pushing it. I also suspect that several high-profile 'tech people' are 'so inclined', and I wouldn't be surprised at all to see one of the big 'household name' tech people arrested within the next year for something so horrific that it will change how we view the whole internet, full stop.

When you're looking for pedos: be more suspicious. The ones who 'vent' and look for 'safe spaces' to talk about their 'trauma': that whole 'victim' card is often played as a 'way in'. The majority of ppl online already know that the net isn't a safe space, and know that they'd potentially be providing wank fodder for pedos with their tales of woe, so most ppl DON'T do it. Those that do? Especially if it's detail-oriented? I'd be viewing them with suspicion, myself.

I'd also be viewing with suspicion anything related to age play, or any 'roleplay' involving incest. Again, the 'it's just a fantasy/game/recreational' angle is often plausible deniability if/when the shit hits the fan, or ppl complain about 'content'.

Thirdly: you need to know the lingo. Searching for the 'obvious' keywords, etc., is all well and good, but most of these ugly mofos use codewords, and the code changes pretty regularly. The more intelligent, and therefore more dangerous, ones? They're not on Discord.
 
How many of these pedophiles have Nitro? My personal theory is that the reason Discord doesn't ban them is simply that they have a subscription to Nitro, and banning them would mean cutting off money. Of course, they could ban them and the pedos could make another account, since a ban is so damn easy to bypass, but the chances of them resubbing are very low.
Nitro has very little to do with it. $9.99/month is not worth the exposure from knowingly allowing a particular offending account to remain on your platform.

Discord's business model relies on having unpaid server jannies do 99% of the grunt work so they don't have to pay for a large team of human content moderators (similar to Reddit). But unpaid jannies don't have any powers beyond their particular servers, so degenerates simply move their degeneracy to another server when they do get banned, and there aren't enough corporate jannies to track and remove them from the Discord ecosystem overall.
 
This isn't even remotely surprising to me. I used to work professionally as a moderator for a few children's chat spaces in the first half of the 2000s, and the depraved shit we used to have to deal with back then was insane (the users I was taking care of were anywhere from 10 to 17 y/o, if you're curious). You have to remember that back in those days, people were much more willing to just outright share CSAM when they thought nobody was looking, because automatic detection software didn't really exist back then, and a lot of people were oblivious to what encryption even was, so the shit they'd say and do in open spaces was absolutely vile. I have some proper horror stories I could get into, especially second-hand ones from my colleagues who had to deal with an internal pedo problem at one of the companies we worked for. You know shit's fucked when people actually resigned over it and went to some British TV show in an attempt to whistleblow the whole thing.

Really good thread OP, a lot of this confirms what I've assumed for years. The issue still exists, but has grown tenfold with the growth of the Internet itself. I think the reason why a lot of this is more noticeable these days than it was back when I was dealing with these cunts is because there are more normal people online now, and due to technology catching up to them it's become much harder for pedos to get away with operating unnoticed in these communities like they once did. Keep up the good work, brother.
 
I'd just like to add to the chorus here congratulating @grand larsony, but more to the point, encourage OP to compile and release a dataset of these messages to facilitate training of similar filtering solutions. I mean, there are plenty of datasets for training spam filters, but I don't think anyone's created a text dataset for training a pedo filter. You'd be doing the world a huge service.
You don't really need to do that. Using embeddings would be simple enough.
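Something like this, as a rough illustration of the embeddings route; the model name, file, and label column are assumptions, not what OP actually runs:

```python
# Embed messages with an off-the-shelf sentence-transformer, then fit a small
# classifier on top of the embeddings.
import pandas as pd
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("labeled_messages.csv")           # hypothetical: columns "text" and "label"
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any general-purpose embedding model

X = encoder.encode(df["text"].tolist(), show_progress_bar=True)
X_train, X_test, y_train, y_test = train_test_split(X, df["label"], test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Scoring new text is then a single encode + predict_proba call.
print(clf.predict_proba(encoder.encode(["some new message to score"]))[:, 1])
```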
I believe Discord bought Sentropy (an AI company that fought against this) a while back, and one of the technologies they picked up was PhotoDNA, which scans images for CSAM. A while back this was also used to trick people into posting a CSAM image that looked "innocent" but actually got your account banned (referring to this YT vid).

This post also showed how a lot of this can be done easily on their platform.

Do you think you could also collect Discord IDs (i.e., of the users who post these messages)? I believe they do get banned, but maybe it isn't reported back to you or to the user. This also happens to "prevent" people from knowing their actual ban reason.
PhotoDNA is pretty terrible and is, for some reason, still used even though the hash can be reversed.
Unironically, this is why we haven't had tech pushed out that could detect CSAM and other abuse materials: because every tool that was ever conceived as a positive is then immediately used to fuck up the general population. The technology exists (Apple tried to implement it at scale, but the backlash was immense).

Another issue is that the tech can only detect historic CSAM.
Technology exists today with which you can make a CSAM filter without ever seeing CSAM.
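The usual way that works (not PhotoDNA itself, which is proprietary) is matching uploads against a hash list supplied by a clearinghouse, so the operator never touches the material. A toy sketch using the open-source imagehash library as a stand-in, with a hypothetical provider hash file:

```python
# Compare a perceptual hash of each upload against known-bad hashes supplied by a
# vetted provider; the operator only ever handles hashes, never the material itself.
from PIL import Image
import imagehash

KNOWN_BAD_HASHES = [imagehash.hex_to_hash(line.strip())
                    for line in open("provider_hash_list.txt") if line.strip()]

MAX_DISTANCE = 5  # small Hamming distance tolerates re-encoding and resizing

def is_flagged(path: str) -> bool:
    h = imagehash.phash(Image.open(path))
    return any(h - bad <= MAX_DISTANCE for bad in KNOWN_BAD_HASHES)

if is_flagged("upload.jpg"):
    print("hash-list match: escalate to human review / reporting, never action automatically")
```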
 
I don't know what's worse: Discord not caring, Discord actively not doing anything because it would hurt their bottom line, or Discord also being groomers.

It's ridiculous how everyone knows the situation is bad, but god forbid the government do something about a social media platform that is identified with the left.
 
While we're at it, here's an example of the types of content that scored highly lately:
This sample has a fair bit of stuff in it that looks like possible false positives to me, even though you say in a later post that posts above 0.8 are usually unambiguous. "Grown asian men jerk it to this" sounds a lot like "look at this degenerate anime thing, isn't this sick, people?". "How to seduce little children?" could, for all I know, be a reply like "[picture of hated guy with book]" - "what is he reading about" - "How to seduce little children?"; you may say that's contrived, but exchanges like these happen on the farms, and you literally only have the last line to go off of. One post is clearly just talking about a pedophile, which in all fairness is reasonable to catch, but it puts some of the conclusions into question.

Do you have some way to see the context? I know from experience that AI moderation is flaky about that - I've seen it let you threaten to murder people in graphic detail if you phrased it weirdly but autoban you for homophobia if you merely mentioned gays. Examples from an account that gets flagged very often could also be illustrative. Also, do all of the flagged messages get moderated? If not, can you sample only from messages that did?
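On the context point, one cheap approach would be to score each message together with the few lines that preceded it in the same channel instead of in isolation. This is just a sketch, not how OP's bot works; `score_text` stands in for whatever moderation model is being called:

```python
# Keep a short rolling history per channel and score "previous lines + current line",
# so a reply isn't judged on its last line alone.
from collections import defaultdict, deque

CONTEXT_LEN = 3
history = defaultdict(lambda: deque(maxlen=CONTEXT_LEN))  # channel_id -> recent lines

def score_with_context(channel_id, author, text, score_text):
    """Score a message along with the lines that preceded it in the same channel."""
    context = list(history[channel_id])
    line = f"{author}: {text}"
    history[channel_id].append(line)
    return score_text("\n".join(context + [line]))
```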

Regardless, I'll echo what people already said several times: There are researchers who'd be really interested in this.
 