Opinion: Deplatforming hate forums doesn't work, British boffins warn - Industry intervention alone can't deal with harassment

Depriving online hate groups of network services - otherwise known as deplatforming - doesn't work very well, according to boffins based in the United Kingdom.

In a recently released preprint paper, Anh Vu, Alice Hutchings, and Ross Anderson, from the University of Cambridge and the University of Edinburgh, examine efforts to disrupt harassment forum Kiwi Farms and find that community and industry interventions have been largely ineffective.

Their study, undertaken as lawmakers around the world are considering policies that aspire to moderate unlawful or undesirable online behavior, reveals that deplatforming has only a modest impact and those running harmful sites remain free to carry on harassing people through other services.

"Deplatforming users may reduce activity and toxicity levels of relevant actors on Twitter and Reddit, limit the spread of conspiratorial disinformation on Facebook, and minimize disinformation and extreme speech on YouTube," they write in their paper. "But deplatforming has often made hate groups and individuals even more extreme, toxic and radicalized."

As examples, they cite how Reddit's ban of r/incels in November 2017 led to the creation of two incel domains, which then grew rapidly. They also point to how users banned from Twitter and Reddit "exhibit higher levels of toxicity when migrating to Gab," among other similar situations.

The researchers focus on the deplatforming of Kiwi Farms, an online forum where users participate in efforts to harass prominent online figures. One such person was a Canadian transgender streamer known as @Keffals on Twitter and Twitch.

In early August last year, a Kiwi Farms forum member allegedly sent a malicious warning to police in London, Ontario, claiming that @Keffals had committed murder and was planning further violence, which resulted in her being "swatted" - a form of attack that has proved lethal in some cases.

Following further doxxing, threats, and harassment, @Keffals organized a successful campaign to pressure Cloudflare to stop providing Kiwi Farms with reverse proxy security protection, which helped the forum defend against denial-of-service attacks.
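
For readers unfamiliar with what Cloudflare was actually providing, here is a minimal sketch, in Go, of the reverse-proxy pattern at issue: visitors talk to the proxy, which forwards requests to a hidden origin server, so the origin's real address never has to appear in public DNS, and abusive traffic can be filtered before it reaches the origin. The origin address, port, and crude per-second request cap below are illustrative assumptions for the sketch, not details taken from the article; real services layer caching, TLS termination, and far more sophisticated DDoS mitigation on top of this basic idea.

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
	"time"
)

// windowLimiter is a toy stand-in for real DDoS mitigation: it allows at most
// maxReqs requests per one-second window across all clients.
type windowLimiter struct {
	mu      sync.Mutex
	window  time.Time
	count   int
	maxReqs int
}

func (l *windowLimiter) allow() bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	now := time.Now()
	if now.Sub(l.window) >= time.Second {
		l.window = now
		l.count = 0
	}
	l.count++
	return l.count <= l.maxReqs
}

func main() {
	// Hypothetical origin server whose real IP stays hidden behind the proxy.
	origin, err := url.Parse("http://10.0.0.5:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(origin)
	limiter := &windowLimiter{maxReqs: 100}

	// All public traffic hits this handler; only requests that pass the rate
	// check are forwarded to the origin.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if !limiter.allow() {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		proxy.ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}

When such a proxy service withdraws, the origin must either expose its own address (and absorb attacks directly) or find another provider, which is why the campaign targeted Cloudflare rather than the forum itself.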

The research paper outlines the various interventions taken by internet companies against Kiwi Farms. After Cloudflare dropped Kiwi Farms on September 3 last year, DDoS-Guard did so two days later. The following day, the Internet Archive and hCaptcha severed ties.

On September 10, the kiwifarms.is domain stopped working. Five days later, security firm DiamWall suspended service for those operating the site.

On September 18, all the domains used by the forum became inaccessible, possibly related to an alleged data breach. But then, as the researchers observe, the Kiwi Farms dark web forum was back by September 29. There were further intermittent outages on October 9 and October 22, but since then Kiwi Farms has been active, apart from brief service interruptions.

"The disruption was more effective than previous DDoS attacks on the forum, as observed from our datasets. Yet the impact, although considerable, was short-lived." the researchers state.

"While part of the activity was shifted to Telegram, half of the core members returned quickly after the forum recovered. And while most casual users were shaken off, others turned up to replace them. Cutting forum activity and users by half might be a success if the goal of the campaign is just to hurt the forum, but if the objective was to 'drop the forum,' it has failed."

Hate is difficult to shift

One reason for the durability of such sites, the authors suggest, is that activists get bored and move on, while trolls are motivated to endure and survive. They argue that deplatforming doesn't look like a long-term solution because, while casual harassment forum participants may scatter, core members become more determined and can recruit replacements through the publicity arising from censorship.

Vu, Hutchings, and Anderson argue that deplatforming by itself is insufficient and needs to be done in the context of a legal regime that can enforce compliance. Unfortunately, they note, this framework doesn't currently exist.

"We believe the harms and threats associated with online hate communities may justify action despite the right to free speech," the authors conclude. "But within the framework of the EU and the Council of Europe which is based on the European Convention on Human Rights, such action will have to be justified as proportionate, necessary and in accordance with the law."

They also contend that police work needs to be paired with social work, specifically education and psycho-social support, to deprogram hate among participants in such forums.

"There are multiple research programs and field experiments on effective ways to detox young men from misogynistic attitudes, whether in youth clubs and other small groups, at the scale of schools, or even by gamifying the identification of propaganda that promotes hate," they argue. "But most countries still lack a unifying strategy for violence reduction." ®
 
One reason for the durability of such sites, the authors suggest, is that activists get bored and move on, while trolls are motivated to endure and survive. They argue that deplatforming doesn't look like a long-term solution because, while casual harassment forum participants may scatter, core members become more determined and can recruit replacements through the publicity arising from censorship.
For actual hate groups, as opposed to gossip forums like this one and some of 4chan's old guard, the censorship itself is the reason those groups get so large and so strong. Isolating and silencing angry people does nothing but concentrate that hate until it goes off in very ugly ways:
[attached image]


Don't support pouring fuel on a fire.
 
There's an academic paper on the battle to take down that evil nazi hate speech forum, Kiwi Farms. It's way too long to copy here, so here's the PDF:

God that was enraging. The conclusion makes me want to throttle someone.

Online communities may not only act as a discussion place but provide mutual support for members who share common values. For some, it may be where they hang out; for others, it may become part of their identity. Legislators who propose to ban an online community might consider precedents such as Britain's ban on Provisional Sinn Féin from 1988–94 due to its support for the Provisional IRA during the Troubles, or the bans on the Muslim Brotherhood enacted by various Arab regimes. Declaring a community to be illegal and thus forcing it underground may foster paranoid worldviews, increase signals associated with toxicity and radicalisation [44], [36] and have many other unintended consequences. The KIWI FARMS disruption, which involved a substantial effort by the industry, is perhaps the best outcome that could be expected even if the censor were agile, competent and persistent. Yet this has demonstrated that merely trying to deplatform an active online community is not enough to deal effectively with online hate and harassment.

We believe the harms and threats associated with online hate communities may justify action despite the right to free speech. But within the framework of the EU and the Council of Europe which is based on the European Convention on Human Rights, such action will have to be justified as proportionate, necessary and in accordance with the law. It is unlikely that taking down a whole community because of a crime committed by a single member can be proportionate. For a takedown to be justified as necessary, it must also be effective, and this case study shows how high a bar that could be. For a takedown to be in accordance with the law, it cannot simply be a response to public pressure. There must be a law or regulation that determines predictably whether a specific piece of content is illegal, and a judge or other neutral finder of fact would have to be involved.

The last time a Labour government won power in Britain, it won on a promise to be ‘Tough on Crime, and Tough on the Causes of Crime’. Some scholars of online abuse are now coming to a similar conclusion that the issue may demand a more nuanced approach [3], [21]: as well as the targeted removal of content that passes an objective threshold of illegality, the private sector and governments should collaborate to combine takedowns with measures such as education and psycho-social support [89]. And where the illegality involves violence, it is even more vital to work with local police forces and social workers rather than just attacking the online symptoms [88].

There are multiple research programmes and field experiments on effective ways to detox young men from misogynistic attitudes, whether in youth clubs and other small groups, at the scale of schools, or even by gamifying the identification of propaganda that promotes hate. But most countries still lack a unifying strategy for violence reduction [90]. In both the US and the UK, for example, while incel-related violence against women falls under the formal definition of terrorism, it is excluded from police counterterrorism practice, and the politicisation of misogyny has made this a tussle space in which political leaders and police chiefs have difficulty in taking effective action. In turbulent debates, policymakers should first ask which tools are likely to work, and it is in this context that we offer the present case study.
Condoning the shit pulled on Null, still bitching about incels, encouraging ignoring people's rights to free speech, pretending the Labour government did anything about crime bar sowing the seeds for things to get worse. This is a propaganda piece, not a case study.
 
Condoning the shit pulled on Null, still bitching about incels, encouraging ignoring people's rights to free speech, pretending the Labour government did anything about crime bar sowing the seeds for things to get worse. This is a propaganda piece, not a case study.
Gee when you start conspiring against people they get this weird idea that you're out to get them or something. How confusing.
 
Gee when you start conspiring against people they get this weird idea that you're out to get them or something. How confusing.
To be fair to them, they do point out that this is a risk of deplatforming. Unfortunately their conclusion remains "so we need to get laws in place to let us do so within a clear legal framework and find justification to use them."

Bear in mind that, amongst recent humiliations, one of our counter-terror efforts, Prevent, was found to be focusing too much on right-wing terror that was not manifesting while ignoring Muslim terror risks, and another counter-terror effort turned out to be giving money to pro-Taliban groups. Seeing that shit occur and watching them learn nothing from it slowly moves opinions of people like these authors from "deliberately ignorant" to "actively malicious."

Note, by the way, that Alice Hutchings got a European Research Council grant for her work; to quote the article below:
The grant to Dr Hutchings, who is Deputy Director of our Cambridge Cybercrime Centre, will fund five years of interdisciplinary research into high-tech cybercrime activities – such as unauthorised access to computer systems, denial-of-service attacks and some types of online frauds.


From this case study we can conclude she considers DDoS attacks fine when they aim at the right targets. How unusual.

The case study also repeats the Near suicide claim unquestioningly and claims that a forum member was behind Keffals' swatting despite no evidence of the same. Wikipedia-quality research.
 