Are a few people ruining the internet for the rest of us? - The Guardian learns to touch grass

Archive

Are a few people ruining the internet for the rest of us?

Why does the online world seem so toxic compared with normal life? Our research shows that a small number of divisive accounts could be responsible – and offers a way out


When I scroll through social media, I often leave demoralized, with the sense that the entire world is on fire and people are inflamed with hatred towards one another. Yet, when I step outside into the streets of New York City to grab a coffee or meet a friend for lunch, it feels downright tranquil. The contrast between the online world and my daily reality has only gotten more jarring.

Since my own work is focused on topics such as intergroup conflict, misinformation, technology and climate change, I’m aware of the many challenges facing humanity. Yet it’s striking that people online seem just as furious about the finale of The White Lotus or the latest scandal involving a YouTuber. Everything is either the best thing ever or the absolute worst, no matter how trivial. Is that really what most of us are feeling? No, as it turns out. Our latest research suggests that what we’re seeing online is a warped image created by a very small group of highly active users.

In a paper I recently published with Claire Robertson and Kareena del Rosario, we found extensive evidence that social media is less like a neutral reflection of society and more like a funhouse mirror. It amplifies the loudest and most extreme voices while muting the moderate, the nuanced and the boringly reasonable. And much of that distortion, it turns out, can be traced back to a handful of hyperactive online voices. Just 10% of users produce roughly 97% of political tweets.
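
To make that kind of skew concrete, here is a minimal simulation sketch, assuming per-user posting activity follows a heavy-tailed Pareto distribution (the distribution and its exponent are illustrative assumptions, not figures from the paper):

```python
import numpy as np

# Illustration only, not the paper's methodology: draw each user's
# post count from a heavy-tailed Pareto distribution, a common
# stand-in for how unevenly online participation is spread.
rng = np.random.default_rng(seed=0)
posts_per_user = rng.pareto(a=1.1, size=1_000_000) + 1  # everyone posts at least once

sorted_counts = np.sort(posts_per_user)[::-1]           # most active users first
top_decile = sorted_counts[: sorted_counts.size // 10]

share = top_decile.sum() / sorted_counts.sum()
print(f"Top 10% of users produce {share:.0%} of all posts")
```

With a tail this heavy, the simulated top decile routinely accounts for the overwhelming majority of posts, the same qualitative pattern as the 10%/97% figure above.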

Let’s take Elon Musk’s own platform, X, as an example. Though it is home to hundreds of millions of users, a tiny fraction of them generate the vast majority of political content. For instance, Musk posted 1,494 times in his first 15 days of implementing government cuts for the so-called department of government efficiency (Doge) earlier this year. He was, essentially, writing non-stop. And many of his posts spread misinformation to his 221 million followers.

On 2 February he wrote, “Did you know that USAID, using YOUR tax dollars, funded bioweapon research, including Covid-19, that killed millions of people?” His behaviour fits the pattern of many misinformation super-spreaders. A mere 0.1% of users share 80% of fake news. Twelve accounts – known as the “disinformation dozen” – created most of the vaccine misinformation on Facebook during the pandemic. These few hyperactive users produced enough content to create the false perception that many people were vaccine hesitant.

Similar patterns can be observed across the internet. Only a small percentage of users engage in truly toxic behaviour, but they’re responsible for a disproportionate share of hostile or misleading content on nearly every platform, from Facebook to Reddit. Most people aren’t posting, arguing, or fuelling the outrage machine. But because the super-users are so active and visible, they dominate our collective impression of the internet.

The resulting problems don’t remain confined to this small cohort, because their output distorts how the rest of us make sense of the world. Humans create mental models about what other people think or do. It’s how we figure out social norms and navigate groups. But on social media, this shortcut backfires. We don’t get a representative sample of opinions. Instead, we see a flood of extreme, emotionally charged content.

In this way, many of us are led to believe that society is far more polarised, angry, and deluded than it really is. We think everyone on the other side of the generation gap, political spectrum, or fandom community is radical, malicious, or just plain dumb. Our information diet is shaped by a sliver of humanity whose job, identity, or obsession is to post constantly.

This distortion fuels pluralistic ignorance – when we misperceive what others believe or do – and can shift our own behaviour accordingly. Think of voters who see only the angriest hot takes about immigration or climate change and assume there’s no common ground to be found.

The problem isn’t just the individual extremists, of course – it’s the platform design and algorithms that amplify their content. These algorithms are built to maximise engagement, which means they privilege content that is surprising or divisive. The system is optimised to promote the very users who are most likely to distort our shared perception of reality.
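
As a toy illustration of that incentive (the scoring weights and example posts below are invented, not any platform’s actual ranking code), consider a feed ranked purely by raw engagement:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    shares: int

# Hypothetical engagement score: every interaction counts, and angry
# replies are worth as much as approving ones, so posts that provoke
# arguments rank highest.
def engagement_score(post: Post) -> float:
    return post.likes + 2 * post.replies + 3 * post.shares

feed = [
    Post("Careful 2,000-word policy explainer", likes=40, replies=2, shares=1),
    Post("EVERYONE on the other side is EVIL", likes=25, replies=90, shares=60),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>4.0f}  {post.text}")
```

Because the ranker cannot tell outrage from approval, the inflammatory post wins despite earning fewer likes, which is exactly the selection pressure described above.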

It gets worse. Imagine you’re sitting in a busy restaurant, having to speak a little louder just to be heard. Before long, everyone is shouting. These same dynamics happen online. People exaggerate their beliefs or repeat outrageous narratives to get attention and approval. In other words, even people who aren’t especially extreme may start acting that way online, because it gets rewarded.

Most of us aren’t spending time on our phones trolling our foes. We’re busy working, raising families, spending time with friends, or simply trying to find some harmless entertainment on the internet. Yet, our voices are drowned out. We have effectively handed over a megaphone to the most obnoxious people and let them tell us what to believe and how to act.

With over 5 billion people now on social media, this technology isn’t going away. But the toxic dynamic I’ve described doesn’t have to hold sway. The first step is to see through the illusion and understand that a silent majority often lurks behind each incendiary thread. And we, as users, can take back some control – by curating our feeds, resisting the outrage bait, and refusing to amplify the nonsense. Think of it like deciding to follow a healthier, less processed diet.

In a recent series of experiments, we paid people a few dollars to unfollow the most divisive political accounts on X. After a month, they reported feeling 23% less animosity towards other political groups. In fact, their experience was so positive that nearly half of them declined to refollow those hostile accounts after the study was over. And those who maintained their healthier newsfeeds reported less animosity a full 11 months after the study.

Platforms could easily redesign their algorithms to stop promoting the most outrageous voices and prioritise more representative or nuanced content. Indeed, this is what most people want. The internet is a powerful and often valuable tool. But if we keep letting it reflect only the funhouse-mirror world created by the most extreme users, we’ll all suffer the consequences.

Jay Van Bavel is a professor of psychology at New York University.
 
Yes. It's you. You are that "people" Mr. Jay Van Bavel

You can feel the smug liberalism coming off that picture. You just know this faggot is one of those "hate speech is not free speech". Also people who claim to study morality tend to be some of the least moral people ever, like the Antifa Bike Lock guy and the one NYU professor who argued one year olds could consent to sex.
 
Yes. It's you. You are that "people" Mr. Jay Van Bavel

Man is a sniper's dream
 
Yes. It's you. You are that "people" Mr. Jay Van Bavel
He's a leaf. Deport his ass.
 
Professor of psychology, so a very handsomely paid bullshit artist. Fuck off. You're losing the culture war, and Kameltoe's defeat was what finally drove it home for you. The harder you astroturfed her, the worse she looked. She didn't just lose, she was fucking terminated. So dilate harder, you dickless globohomo shill.
 
Platforms could easily redesign their algorithms to stop promoting the most outrageous voices and prioritise more representative or nuanced content. Indeed, this is what most people want.

There are a couple problems with that:

1) While everyone(sic) wants the "outrageous voices" silenced and "nuanced content" promoted, there is no consensus at all about what content should go in either category. What people tend to want is confirmation bias: a massive number of people see themselves - regardless of their own beliefs - as representative of some sort of common middle ground supported by all good people, even though that is invariably not true.

2) The algorithms never deliver nuance in any form. By their very nature, and given the scope of what they are trying to analyze, they can only do a crude sort of analysis of any content, and the results of that analysis tend to do as much harm as good, as the sketch below illustrates.
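
A deliberately crude sketch of that point (the blocklist and example posts are invented for illustration): a keyword-level filter has no access to context, so it misfires in both directions:

```python
# Toy moderation filter: a bare keyword blocklist, roughly the
# crudest content "analysis" a platform could run. Invented example.
BLOCKLIST = {"kill", "attack"}

def is_toxic(text: str) -> bool:
    return any(word in BLOCKLIST for word in text.lower().split())

print(is_toxic("new therapy could kill cancer cells early"))           # True: harmless post flagged
print(is_toxic("people like you are why this country is collapsing"))  # False: hostility missed
```

Anything subtler requires statistical models, and at platform scale those still trade one kind of error for the other.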
 
Giving every paki and poojab a budget smartphone really destroyed the internet
It didn't. Their code and their scams are the issue, not them using the internet.
They don't cry to the corpos about banning stuff.
 
He's not entirely wrong, but he is wrong in just the right way to make any actionable advice disastrous.

It's not just a very few prolific (right wing) extreme people who make most (right wing) extreme content on social media, it's a very few prolific [x] people who make most [x] content on social media.

In the world of web 2.0 content production, where the platform must receive engagement and content from users for the line to keep going up, tweaking things with the goal of merely eliminating the reach of (right wing) extremists just gets you another group of serial slop-pushing time-wasters bubbling to the top. But if all you care about is (right wing) extremism, then you can declare "mission accomplished!" as whichever website "solved the problem" goes to shit because of a different group of schizos.
 
Yes, the internet was far better when it was only used by educated white people and a smattering of Japs (and even then the Japs mostly hung out in their own corner of the net).
It wasn't so much education as how difficult it was to use and how much patience it required.

It's Steve Jobs' fault.
 