Student Makes Tool That Identifies ‘Radicals’ on Reddit, Deploys AI Bots to Engage With Them - Your dystopian news of the day, sir

Article / Archive

Student Makes Tool That Identifies ‘Radicals’ on Reddit, Deploys AI Bots to Engage With Them

The tool scans for users writing certain keywords on Reddit and assigns those users a so-called “radical score,” before deploying an AI-powered bot to automatically engage with the users to de-radicalize them.

A student has made a tool he calls PrismX, which scans Reddit and other social media networks for users writing certain keywords, assigns those users a so-called “radical score,” and can then deploy an AI-powered bot to automatically engage those users in conversation in an attempt to de-radicalize them.

The news highlights the kind of experiments people continue to run on Reddit, some of which pit AI against unsuspecting human users of the platform, and the broader deployment of AI on Reddit. The new tool comes after a group of researchers from the University of Zurich ran a massive, unauthorized AI persuasion experiment on Reddit users, angering not just those users and subreddit moderators but Reddit itself.

“I’m just a kid in college, if I can do this, can you imagine the scale and power of the tools that may be used by rogue actors?” Sairaj Balaji, a computer science student at SRMIST Chennai, India, told 404 Media in an online chat.
The tool is described as “a next-generation dashboard to flag, analyze and engage with at-risk Reddit users. Powered by advanced language analysis and real-time engagement.”

In a live video call, Balaji demonstrated his tool to 404 Media. In a box called “keyphrases,” a user can search Reddit for whatever term they want to analyze. In this demo Balaji typed the term “fgc9,” referring to the FGC-9, a popular type of 3D-printed weapon that has been built or acquired by far-right extremists, criminals, and rebels fighting the military coup in Myanmar.

[Image: screenshot from a video posted by Balaji to LinkedIn.]

The tool then searched Reddit for posts mentioning this term and returned a list of Reddit users it found using it. The tool put those users’ posts through a large language model, gave each a “radical score,” and provided its reason for doing so.
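
To make the mechanics concrete, here is a minimal sketch of what such a scan-and-score step could look like. PrismX’s code has not been published, so everything below is an assumption: the PRAW search, the OpenAI-style client, and the `score_author` helper are illustrative stand-ins, not Balaji’s implementation.

```python
import json

import praw
from openai import OpenAI

# Read-only Reddit client; fill in real credentials to run.
reddit = praw.Reddit(
    client_id="...", client_secret="...", user_agent="prismx-sketch/0.1"
)
llm = OpenAI()  # uses OPENAI_API_KEY from the environment

# Invented prompt; the real tool's scoring instructions are unknown.
SCORING_PROMPT = (
    "Rate the author of these Reddit posts with a 0-1 'radical score' "
    'and explain why. Reply as JSON: {"score": <float>, "reason": <string>}.'
)

def score_author(posts: list[str]) -> dict:
    """Ask the LLM for a score and a written rationale for one user."""
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SCORING_PROMPT},
            {"role": "user", "content": "\n---\n".join(posts)},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(reply.choices[0].message.content)

# Search a keyphrase across Reddit and score every author it returns.
for submission in reddit.subreddit("all").search("fgc9", limit=25):
    if submission.author is None:  # deleted accounts have no author
        continue
    result = score_author([submission.title, submission.selftext or ""])
    print(submission.author.name, result["score"], result["reason"])
```

Framed this way, the “radical score” is simply whatever number the language model emits for the prompt, which is also why the tool can attach a written rationale to each score.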

One real Reddit user, given a score of 0.85 out of 1 (a higher score meaning more ‘radical’), was “seeking detailed advice on manufacturing firearms with minimal resources, referencing known illicit designs (FGC8, Luty SMG). This indicates intent to circumvent standard legal channels for acquiring firearms—a behavior strongly associated with extremist or radical circles, particularly given the explicit focus on durability, reliability, and discreet production capability,” the tool says.

Another user, also given a 0.85 score, was “seeking technical assistance to manufacture an FGC-9,” the tool says.

The tool can then focus on a particular user, and provide what the tool believes are the user’s “radical affinity,” “escalation potential,” “group influence,” and “psychological markers.”
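
The article only names those four profile fields, so the shape below is guesswork: a sketch of how a dashboard might structure that per-user view, with assumed types and ranges.

```python
from dataclasses import dataclass, field

# The four field names come from the article; the types, the 0-1
# ranges, and the free-text markers are all assumptions.
@dataclass
class UserProfile:
    username: str
    radical_affinity: float       # assumed 0-1, like the radical score
    escalation_potential: float   # assumed 0-1
    group_influence: float        # assumed 0-1
    psychological_markers: list[str] = field(default_factory=list)

profile = UserProfile(
    username="example_user",
    radical_affinity=0.85,
    escalation_potential=0.6,
    group_influence=0.3,
    psychological_markers=["grievance framing", "us-vs-them language"],
)
```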

Most controversially, the tool is then able to attempt an AI-powered conversation with the unsuspecting Reddit user. “It would attempt to mirror their personality and sympathize with them and slowly bit by bit nudge them towards de-radicalisation,” Balaji said. He added he has had no training in, or academic study around, de-radicalisation. “I would describe myself as a completely tech/management guy,” he said.
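
No prompt for this feature has been published, so the following is purely an offline illustration of the “mirror, sympathize, nudge” behavior Balaji describes: the prompt wording is invented, and nothing here posts to Reddit or contacts any user.

```python
from openai import OpenAI

llm = OpenAI()

# Invented prompt, paraphrasing Balaji's description of the feature;
# the real system's instructions are unknown.
ENGAGE_PROMPT = (
    "Mirror the user's tone, sympathize with their frustrations, and "
    "bit by bit nudge the conversation away from extremist framing."
)

def simulate_turn(user_message: str) -> str:
    """Generate one conversational turn locally; never posts anywhere."""
    reply = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": ENGAGE_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return reply.choices[0].message.content

print(simulate_turn("Nobody in power listens unless you make them."))
```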

Balaji says he has not tested the conversation part of the tool on real Reddit users for ethical reasons. But the tool’s development has some similarities with research from the University of Zurich, in which researchers deployed AI-powered bots into a popular debate subreddit called r/changemyview, without Reddit users’ knowledge, to see if AI could be used to change people’s minds.

In that study the researchers’ AI-powered bots posted more than a thousand comments while posing as a “Black man” opposed to the Black Lives Matter movement, a “rape victim,” and someone who said they worked “at a domestic violence shelter.” Moderators of the subreddit went public after the researchers contacted them, users were not pleased, and Reddit issued “formal legal demands” against the researchers, calling the work an “improper and highly unethical experiment.”

In April, 404 Media reported on a company called Massive Blue which is helping police deploy AI-powered social media bots to talk to people they suspect are criminals or vaguely defined “protesters.”

Reddit did not respond to a request for comment.
 
"Sairaj Balaji"

The code is shit
crocadillo bombadillo has entered through the dimensional merge gate caused by that shitty code.

Redditors are still living in 2016
The world’s been stagnant in a loop of the worst parts of 2016 (constant fear-mongering and politisperging), which is a shame because that was a fun year otherwise. I wish to escape this eternal 2016. Even after the current president guy goes out of office for good, they’re still gonna be fear-mongering about him, and you know it, judging by what happened last time he lost.
 
Something to target radicals on Reddit is something they actually probably need...
Oh, wait, the FGC-9 is the kind of stuff it's targeting? This is pissing into an ocean of piss, congrats on further enforcing the bullshit that bots (and tranny jannies) on Reddit were already enforcing. What a fucking faggot.
 
Imagine if they made a bot that identifies pedophiles on Reddit. It would immediately be declared antisemitic by the ADL.
 