reddit General

I can't believe nobody posted any of the comments from the AI yet. Redditors got all of the accounts deleted, but there are still some archives (basically just the first page of each profile). The archives are all from about a month ago, and I don't think there's any way to tell whether any of the comments were awarded a delta unless an archive of the actual page with the question exists, but I'm too lazy to go that far.
[Attached: 37 screenshots of the archived bot comments]
 
Almost certainly, if a university is admitting it ran this study, corporations and governments (spooks) have run studies without ever informing the public, likely with far larger sample sizes and over far longer timespans, producing far more accurate and useful results.

In all likelihood there is not a single mass platform that hasn’t been subjected to this. 4chan, Twitter, Facebook, Plebbit, and the rest.

On smaller fora like Kiwifarms, this doesn't work as well, since users build reputations and you'd basically need multiple bots working in an environment where manipulation is much harder to disguise (if not impossible).

Everyone here has almost certainly talked to, read, or otherwise engaged with a bot (or a human researcher/influencer trying to gauge your beliefs, manipulate them, or both).

If you don't want to be open to manipulation at all, you basically need to live as a hermit in the wilderness with no internet.
Oh, I am 100% sure there are attempts to influence opinions here as well. But yeah, much smaller scale, and caught much quicker.
 
A group from University of Zurich used /r/ChangeMyView to run a study which involved using large language model bots (archive)
View attachment 7282831
Redditors, angry that their comments in circlejerk-about-the-current-thing subreddit #9845734 could easily be replaced by robots, suddenly become experts in AI and research ethics or something.
By this logic, placebo groups should be told they're being given the placebo in medical research, because transparency 🧠
View attachment 7282839
Not the heckin' publicorino! No, I'm not mad that reddit and other companies do this same sort of testing all the time without anyone's knowledge or consent. You can't use gmail... because you just can't!
View attachment 7282840
The word "community" has become completely meaningless, example 105943022345
View attachment 7282841
No bad actors have ever thought to bot reddit before!
View attachment 7282842
WON'T SOMEONE THINK OF THE SUBREDDIT RULES + "I am not a lawyer" (trust me buddy, we all already knew)
View attachment 7282856
An LLM pretending to be PART OF A MARGINALIZED COMMUNITY (read: 85% of humanity). That's too far!
View attachment 7282860
We believe in science! We believe in the method!

No, NoT lIkE tHaT!!!!
 
Redditors are gullible-as-fuck retarded morons, but the bot is WAY better than I expected it to be, considering it's experimental too. I think governments/corporations might actually start deploying these for propaganda en masse.
 
The University of Zurich only disclosed that they were using bots after the fact. How many bots are operating that we don't know about? I would guess probably Dead Internet Theory levels.

Reddit responded:

[Attached: screenshot of Reddit's response]
 
A group from University of Zurich used /r/ChangeMyView to run a study which involved using large language model bots (archive)
Same reason they hate April Fools' with a burning passion and made it disappear from most media: they think so highly of themselves that falling for things like that is a huge, huge blow to their fragile egos.
 
You want to make AIs hate troons too?

Out of curiosity, I really want them to try. How many times more effective would it be on kiwifarms? Would it be more or less effective than it is on predditors? Time to place your bets, my fellow kiwis!
The thing that makes the study little more than a novelty to me is that they went to "Consensus Central: The Website", and then to "Consensus Central: The Subreddit". In essence, they taught a bot how to manipulate retards who are already predisposed to consensus (because they use reddit), in the consensus-finding subreddit, into finding consensus.

It does demonstrate that AI can indeed manipulate the retards of society, but those retards could have been manipulated without AI to begin with.

The most disagreement you can find on reddit is "I think that maybe we should limit 3-day-old babies to puberty blockers instead of full-blown surgery", and there's very little dissent allowed, so it's easy for a bot to hit the sweet spot.

Finding that sweet spot on wilder websites where you're actually allowed to disagree is gonna be considerably harder.
 
The thing that makes the study little more than a novelty to me is that they went to "Consensus Central: The Website", and then to "Consensus Central: The Subreddit" ...
Look up cognitive infiltration.

The goal in "disrupting extremist communities" is not so much convincing them they are wrong, but causing them to doubt their own convictions.

An influence op on Kiwifarms, for example, might aim to make most kiwis doubt that trannies actually are repulsive vermin, by arguing that this belief is irrational, not backed by science, or makes you stupid or a social outcast.

TL;DR: The goal is more often to confuse, demoralize, and disconcert.
 
One of the things I saw was how the bots actually observed the posting histories of the people they were trying to convince.

Personalized propaganda: imagine everything you've ever written online being absorbed into a profile that is then used to accurately gauge your beliefs, personality, and weaknesses, and to work out what would be most effective at changing your opinion.
That is the most insidious part of the study, and the most important one imo. Figuring out how to personalize propaganda to this degree is probably the next real step, and the biggest advancement in propaganda generation in human history.
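For anyone wondering how trivial that profiling step actually is, here's a minimal sketch in Python. The per-user comments.json endpoint is real; the function names and the prompt wording are invented for illustration, and whatever the Zurich team actually ran was presumably more elaborate:

```python
# Illustrative sketch only: pull a user's public comment history from
# reddit's JSON endpoint and flatten it into a "persona profile" prompt.
# The endpoint exists; the names and prompt text here are hypothetical.
import requests

def fetch_recent_comments(username: str, limit: int = 25) -> list[str]:
    """Grab a user's most recent public comments (the same data the archives show)."""
    resp = requests.get(
        f"https://www.reddit.com/user/{username}/comments.json",
        params={"limit": limit},
        headers={"User-Agent": "profile-sketch/0.1"},  # reddit rejects blank user agents
        timeout=10,
    )
    resp.raise_for_status()
    # Listing JSON: {"data": {"children": [{"data": {"body": ...}}, ...]}}
    return [c["data"]["body"] for c in resp.json()["data"]["children"]]

def build_profile_prompt(username: str, comments: list[str]) -> str:
    """Condense the history into a prompt a bot could use to tailor its reply."""
    history = "\n---\n".join(comments)
    return (
        f"Recent comments by reddit user {username}:\n{history}\n\n"
        "Infer their likely age, politics, and values, then draft a reply to "
        "their CMV post that argues from premises they already accept."
    )

if __name__ == "__main__":
    user = "spez"  # any account with a public history works
    print(build_profile_prompt(user, fetch_recent_comments(user))[:500])
```

That's the whole "insidious" part: a couple dozen lines gets you a per-target dossier, and the LLM does the rest.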
 
TL;DR: The goal is more often to confuse, demoralize, and disconcert.
No different from schizoid ramblings; at least schizos try to make sense, and that is something AIs can't do yet without being too on the nose.
The message from the plebbit admin is too funny, though: "oh we are going to take legal action..." For what, again?
They might get something out of it because of the usual boomer/politician hatred for AI (throw judges and lawyers in there too, since AI has been proven to do a better job than them). Time will tell.
 
As someone who is also doing a research project on AI, I had to fill out an ethics form that would be sent to the board of ethics to make sure I'm not performing some kind of horrid experiment. Mine didn't involve any humans, so it was pretty straightforward, but there were like four or five pages of nothing but "What is the scope of human involvement in your project?", "Can you guarantee that all human subjects are aware of the experiment, with written consent?", "Can you ensure that the data will not be used for harm, and if so, what measures are you taking to minimize harm?", "Is the research being performed in a public forum?" and stuff like that.
So what I'm wondering is, how the fuck did a research project like this get approved by any sane ethics committee?
I did some research in Psychology, where you often _can't_ fully inform your participants up-front, and sometimes do have to intentionally upset them. Lying to participants will be accepted by the ethics board if you can justify that the deceit would cause minimal distress/harm and you pinky pwomise to debrief your unwitting subjects afterward. Most commonly, you rope them in under false pretenses (i.e. lie about the purpose of the study, but still get consent from them to be part of "an experiment") rather than covertly fucking with John Q Public on the street. That said, you can be given permission to proceed without consent when risks are trivial and it's absolutely necessary that participants be unaware.

Further, some experiments are "messy" and require the researcher to cause some non-trivial amount of harm or distress (medical trials, studies on fear/stress responses, etc.); in those scenarios it's the job of the researcher to argue to the ethics board that the ends justify the means. The buck typically stops at causing "lasting damage" (e.g. breaking bones, inducing mental disorders, poisoning, etc. - anything likely to stick around for a while in spite of any standard debriefing), anything short of that is usually fair play if you've got a compelling hypothesis. It is perfectly ethical to knowingly harm some of your test subjects as long as you clean the mess up after, and you're up-front about it by warning your participants that they will get hurt/upset before they opt in.

The novelty here is they've *allegedly* done both, which is generally a no-no; you can harm participants, you can dupe participants, but you should never dupe a participant into harm. The obvious justification from the ethics board here would be that unhappiness at being "tricked" by an AI is trivial and not real harm (minimal risk), and consent can't be sought without invalidating the experiment. I.e. the risk is low enough and the deception is necessary for the experiment to work.

If it were me, my chief concern wouldn't have been the lying, but that they outright broke the subreddit rules. Not because redditors matter, but because it makes the researchers and the uni (and their entire field of study, to some degree) look like assholes to a big audience, which is a much more real issue than Timmy from Reddit being upset at being confronted by his own retardation because he fell for a bot's appeals to pathos.

I don't think this was 100% ethical conduct because of that reputation risk, but I don't care, and I think the results are still useful and worth it. Depending on how much this spreads in academia, we might not see any future experiments like it, because too many tards broke out in histrionics. Same as what happened with Milgram. Sad!
 
So what I'm wondering is, how the fuck did a research project like this get approved by any sane ethics committee?
Zurich Research Lab: I want to do an experiment
Ethics Board: Sure, what are you experimenting on?
Zurich Research Lab: Redditors
Ethics Board: Perfect. No Humans. You can proceed with the trials.
 