Opinion: Deplatforming hate forums doesn't work, British boffins warn
Industry intervention alone can't deal with harassment


Depriving online hate groups of network services - otherwise known as deplatforming - doesn't work very well, according to boffins based in the United Kingdom.

In a recently released preprint paper, Anh Vu, Alice Hutchings, and Ross Anderson, from the University of Cambridge and the University of Edinburgh, examine efforts to disrupt harassment forum Kiwi Farms and find that community and industry interventions have been largely ineffective.

Their study, undertaken as lawmakers around the world are considering policies that aspire to moderate unlawful or undesirable online behavior, reveals that deplatforming has only a modest impact and that those running harmful sites remain free to carry on harassing people through other services.

"Deplatforming users may reduce activity and toxicity levels of relevant actors on Twitter and Reddit, limit the spread of conspiratorial disinformation on Facebook, and minimize disinformation and extreme speech on YouTube," they write in their paper. "But deplatforming has often made hate groups and individuals even more extreme, toxic and radicalized."

As examples, they cite how Reddit's ban of r/incels in November 2017 led to the creation of two incel domains, which then grew rapidly. They also point to how users banned from Twitter and Reddit "exhibit higher levels of toxicity when migrating to Gab," among other similar situations.

The researchers focus on the deplatforming of Kiwi Farms, an online forum where users participate in efforts to harass prominent online figures. One such person was a Canadian transgender streamer known as @Keffals on Twitter and Twitch.

In early August last year, a Kiwi Farms forum member allegedly sent a malicious warning to police in London, Ontario, claiming that @Keffals had committed murder and was planning further violence, which resulted in her being "swatted" - a form of attack that has proved lethal in some cases.

Following further doxxing, threats, and harassment, @Keffals organized a successful campaign to pressure Cloudflare to stop providing Kiwi Farms with reverse proxy security protection, which helped the forum defend against denial-of-service attacks.

The research paper outlines the various interventions taken by internet companies against Kiwi Farms. After Cloudflare dropped Kiwi Farms on September 3 last year, DDoS-Guard did so two days later. The following day, the Internet Archive and hCaptcha severed ties.

On September 10, the kiwifarms.is domain stopped working. Five days later, security firm DiamWall suspended service for those operating the site.

On September 18, all the domains used by the forum became inaccessible, possibly related to an alleged data breach. But then, as the researchers observe, the Kiwi Farms dark web forum was back by September 29. There were further intermittent outages on October 9 and October 22, but since then Kiwi Farms has been active, apart from brief service interruptions.

"The disruption was more effective than previous DDoS attacks on the forum, as observed from our datasets. Yet the impact, although considerable, was short-lived." the researchers state.

"While part of the activity was shifted to Telegram, half of the core members returned quickly after the forum recovered. And while most casual users were shaken off, others turned up to replace them. Cutting forum activity and users by half might be a success if the goal of the campaign is just to hurt the forum, but if the objective was to 'drop the forum,' it has failed."

Hate is difficult to shift

One reason for the durability of such sites, the authors suggest, is that activists get bored and move on, while trolls are motivated to endure and survive. They argue that deplatforming doesn't look like a long-term solution because, while casual harassment forum participants may scatter, core members become more determined and can recruit replacements through the publicity arising from censorship.

Vu, Hutchings, and Anderson argue that deplatforming by itself is insufficient and needs to be done in the context of a legal regime that can enforce compliance. Unfortunately, they note, this framework doesn't currently exist.

"We believe the harms and threats associated with online hate communities may justify action despite the right to free speech," the authors conclude. "But within the framework of the EU and the Council of Europe which is based on the European Convention on Human Rights, such action will have to be justified as proportionate, necessary and in accordance with the law."

They also contend that police work needs to be paired with social work, specifically education and psycho-social support, to deprogram hate among participants in such forums.

"There are multiple research programs and field experiments on effective ways to detox young men from misogynistic attitudes, whether in youth clubs and other small groups, at the scale of schools, or even by gamifying the identification of propaganda that promotes hate," they argue. "But most countries still lack a unifying strategy for violence reduction." ®
 
I’d like it if people with stubbornness and morality, like Josh, held more political or power-wielding roles. Imagine if the people in charge actually had principles, valued freedom, and governed instead of ruled. However, I wouldn’t wish being a politician on anyone.
 
Amazing. They are just now figuring out what autists have been discussing since before 2016. Censoring discussion only drives it underground and the mere fact of the censorship adds legitimacy to the message.

The fact of the matter is, these dipshits are not operating with objective truth on their side, which is why their memes will always be cringe, their shills will always be ineffective and their agents will always stick out like a sore thumb.

Reminds me of the Streisand Effect, where trying to hide, cover up, or complain about something like it's a big deal just results in much more public curiosity about that thing and causes people to probe for it.

In either case the best tactic is to completely ignore the thing or movement you don't want people noticing. It's happened time and time again, yet no one seems to have figured this out.
 
Probably late since I haven't read all 15 pages, but I’ve been an El Reg loyalist since the late 90s. Simon/BOFH was still new, it was only register.co.uk, etc. Funny to see that their intentionally irreverent take on tech is starting to shift.
Starting? It's been on the way out for years. The Register essentially died with Lester. The final destruction of its former personality was the decision to enforce American English in all its articles.
 
Starting? It's been on the way out for years. The Register essentially died with Lester. The final destruction of its former personality was the decision to enforce American English in all its articles.
They actually cucked out to that degree while still having a .co.uk address? This is not the Register that I remember.

What gormless cucks.
 
Do these "people," if they can even be called that, realize how insane they are saying people who disagree with them should be literally rounded up into concentration camps and brainwashed, and obviously murdered if the brainwashing doesn't take?
 
I’d like it if people with stubbornness and morality, like Josh, held more political or power-wielding roles. Imagine if the people in charge actually had principles, valued freedom, and governed instead of ruled. However, I wouldn’t wish being a politician on anyone.

I see KF as largely Libertarian, at least in spirit. But this is how I see their approach currently turning out.

 
You'd get banned in minutes for expressing any overtly masculine thoughts. Not misogynistic. Not chauvinistic. Just male thoughts.
At least they are consistent: banning all biological men, both trans-identifying and normal.
so what you're saying is we need TAD to improve the signal-to-noise ratio
Found Pol Pot's account.
yeah, i just view those people, minus the tranny, as potential recruits. put them in a room full of kiwis and you get more kiwis.
Just imagine, we could finally get a Vietnamese guy to say the magic word.
[Attached image: No Vietnamese ever.jpg]
 
The title of the OP reminded me of this article I ran across last year:

Deplatforming Does Not Solve the Problem of Extremist Content Online
Centre for International Governance Innovation (archive.ph)
By Natasha Tusikov
2022-10-27 14:02:00 GMT
The Kiwi Farms case in August 2022 demonstrates the persistent, significant challenges to addressing online hate.
It has been five years since the violent events of August 2017 in Charlottesville, Virginia, where white supremacists rallied by torchlight in their effort to “Unite the Right” as part of the “Summer of Hate” organized by the neo-Nazi Daily Stormer website. Legal proceedings against rally organizers revealed they had used Facebook, Instagram and Discord to plan a violent riot, in which anti-racism activist Heather Heyer was killed. It’s a pertinent moment to reflect on the regulatory responsibilities of technology companies.

Despite regulatory efforts worldwide, major questions remain about when and how these platform companies — and governments — should act in relation to illegal and harmful conduct online. As regards online hate, we’re arguably in even greater need of meaningful action now than in 2017 — with the increased visible presence of white supremacist groups in Canada, the United States and elsewhere, and right-wing political parties courting extremists’ votes.

Following that march in Charlottesville, anti-racism activists called on companies to withdraw their services from the white supremacists. At the time, I wrote about Cloudflare’s termination of its security services to the neo-Nazi site The Daily Stormer, and reflected upon enforcement challenges. In the years since, those challenges have multiplied, as governments have been largely absent in this area (Germany’s online anti-hate legislation being a notable exception), leaving civil society groups to pressure tech companies to remove hate speech and ban offenders.

The Kiwi Farms case in August 2022 demonstrates the persistent, significant challenges to addressing online hate. Founded in 2013, Kiwi Farms is a far-right web forum whose members have a history of doxing, harassing and advocating violence against LGBTQ+ people, particularly trans people, including making death threats. LGBTQ+ activists called on Cloudflare, the company that provided its security services to Kiwi Farms to deter cyberattacks, to terminate its services. Cloudflare initially resisted, stating on August 31 that “voluntarily terminating access to services that protect against cyberattack is not the correct approach.”

After concerted pressure from activists who orchestrated popular social media campaigns against Cloudflare, it caved. On September 3, Cloudflare blocked Kiwi Farms from its services. Since then, Kiwi Farms has struggled to maintain services, similar to other entities that lost services because of their promotion of violent hate speech, such as the neo-Nazi Daily Stormer.

What can we learn from the Kiwi Farms case? First, we should not consider the deplatforming of such entities as a straightforward success, even as we applaud the outcome. Marginalizing Kiwi Farms to the murky edges of the web is undoubtedly a social good, but it operated unscathed for years, causing misery. It was only after considerable pressure that Cloudflare acted, a pattern similar to cases of hate speech I have written about in which online intermediaries tend to respond to public pressure and possible reputational damage.

Second, we should question why these cases require such significant labour from civil society activists, coupled with well-timed, successful social media campaigns, to compel action from often-reluctant companies. Activists’ multi-year efforts to take down Kiwi Farms garnered mainstream attention when Canadian Twitch streamer and trans advocate Clara Sorrenti, known as “Keffals” online, mounted a social media campaign against Cloudflare for its service provision to Kiwi Farms, which forced Cloudflare into action. Sorrenti began the campaign after Kiwi Farms harassed her, forcing her to temporarily leave Canada for her safety. Relying upon civil society groups, often from vulnerable, marginalized communities, to orchestrate global campaigns to convince big tech companies to take action against known bad actors is unsustainable and unfair to communities struggling to assert their human rights.

Third, and most controversial from the perspective of free-speech advocates, the state needs to play a direct role in regulating hate speech. We should ask why governments, including their law enforcement agencies, can be notably — and inexcusably — absent in addressing cases of online hate. In some cases, it’s about enforcing the laws already on the books. Kiwi Farms was a straightforward case of bad actors employing hate speech, inciting violence, stalking and harassing victims — all criminal offences with real-world manifestations.

In other cases, hate speech legislation may need to be amended. In Canada, for example, the federal government is in the process of strengthening legislation on hate speech. However, any legislative change must be accompanied by sufficient enforcement resources, and the institutional will, to address crimes.

Finally, we need to strongly counter tech companies’ claims of neutrality. Companies often contend that they are neutral providers of technical services and cannot effectively judge the legality of content on their networks. Cloudflare, for example, compared itself to a telephone company and a fire department in arguing that it should not distinguish among users, even though days later it terminated its services to Kiwi Farms.

Private companies supply the critical infrastructure making up the content layer of the internet. Their offerings range from the familiar services of payment, web hosting and marketplaces to the somewhat less commonly understood services of the domain name system and security providers that guard against cyberattacks. Many of these services have become essential as critical infrastructure, thereby leading civil society groups to regularly call upon the providers to act against problems such as hate groups.

The reality is that tech companies routinely do differentiate among users to serve their commercial interests. Social media companies, for example, amplify certain content and downrank other types. They and other intermediaries also block legal sexual content, in part because companies fear violating broadly worded, controversial US laws targeting sex trafficking. In short, companies already intervene to regulate speech and discipline their users, but they do so on their terms and according to their commercial preferences.

Five years from now we cannot be in the same position: taking a reactive, ad hoc approach to violent online hate speech and sitting back while activists push big tech firms into action. We need to resist companies’ self-serving claims of neutrality. We must also recognize companies’ commercial moderation practices for what they are — regulation, but designed to serve corporate interests rather than the public good.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the author:
Natasha Tusikov is an associate professor of criminology in the Department of Social Science at York University and a visiting fellow with the School of Regulation and Global Governance (RegNet) at the Australian National University.

Good luck rounding people up and locking them away and thinking that will stop shitposting. In American prisons, smuggled cellphones are practically ubiquitous.

On another note, let this be a reminder to keep your online identities compartmentalized from each other and your real life. Misdirection and obfuscating details about yourself are important, doubly so when you're a wrongthinker.
 
I’d like it if people with stubbornness and morality, like Josh, held more political or power-wielding roles. Imagine if the people in charge actually had principles, valued freedom, and governed instead of ruled. However, I wouldn’t wish being a politician on anyone.
To be honest I kinda wish I were one, just to Leeroy Jenkins any discussion about topics like free speech and so on: "Time's up, let's do this! DOWN WITH HATE SPEECH LAWS!"
 
Thank god we gave women the right to vote
Trannies, faggots, waves nigger and muslim migrants and now wrong think camps. We've really go everything

Editing my post to call this guy a retard:
Amazing. They are just now figuring out what autists have been discussing since before 2016. Censoring discussion only drives it underground and the mere fact of the censorship adds legitimacy to the message.

The fact of the matter is, these dipshits are not operating with objective truth on their side, which is why their memes will always be cringe, their shills will always be ineffective and their agents will always stick out like a sore thumb.
Imagine living in China or North Korea and thinking that the government cracking down on dissent somehow means "you're winning."
"People like me who are against the system are being thrown in camps? Awesome! The system is bound to fall any day now"
 
There are multiple research programs and field experiments on effective ways to detox young men from misogynistic attitudes, whether in youth clubs
Sorry, old man. By the time they start implementing this you will definitely not qualify.
Your kids though, different story. You can only imagine devilishness of a young lad's mind, who scores all the tolerance tests at school while still mocking trannies on this site.
If they were smart they would just recruit such kids to be leaders of controlled opposition. Maybe the idea of these camps is to serve as a resort offered as a bribe for them, with blackjack and hookers. Sadly, you are not invited. Only glowie-ops and social media life ruination for you.
 
It has been five years since the violent events of August 2017 in Charlottesville, Virginia, where white supremacists rallied by torchlight in their effort to “Unite the Right” as part of the “Summer of Hate” organized by the neo-Nazi Daily Stormer website. Legal proceedings against rally organizers revealed they had used Facebook, Instagram and Discord to plan a violent riot, in which anti-racism activist Heather Heyer was killed. It’s a pertinent moment to reflect on the regulatory responsibilities of technology companies.
These people have really created an alternative reality. Almost nothing in this one opening paragraph is true. No "violent riot" was planned; it was a protest against removing a statue, and the protesters were attacked. Yes, organizers talked to each other on social media, as people do. There was no broader "Summer of Hate" effort, and it was not Anglin's protest. And so on.

Yet the ominous last sentence is on point. Tech companies are censoring free thought, or will be subject to policies requiring them to censor, based on 1984-style rewriting of history like this.
 