The Rise, Fall, and Return of Kiwi Farms After Deplatforming Efforts - Pseudo-intellectual article attempts to frame the farms' deplatforming as authoritatively retarded as possible

Authors:
(1) Anh V. Vu, University of Cambridge, Cambridge Cybercrime Centre (anh.vu@cl.cam.ac.uk);
(2) Alice Hutchings, University of Cambridge, Cambridge Cybercrime Centre (alice.hutchings@cl.cam.ac.uk);
(3) Ross Anderson, University of Cambridge, and University of Edinburgh (ross.anderson@cl.cam.ac.uk).

2.1. Related Work

Most studies assessing the impact of deplatforming have worked with data on social networks. Deplatforming users may reduce the activity and toxicity levels of relevant actors on Twitter [28] and Reddit [29], [30], limit the spread of conspiratorial disinformation on Facebook [31], reduce the engagement of peripheral members with hateful content [44], and minimise disinformation and extreme speech on YouTube [32]. But deplatforming has often made hate groups and individuals even more extreme, toxic and radicalised. They may view the disruption of their platform as an attack on their shared beliefs and values, and move to even more toxic places to continue spreading their message. There are many examples: the Reddit ban of r/incels in November 2017 led to the emergence of two standalone forums, incels.is and incels.net, which then grew rapidly; users banned from Twitter and Reddit exhibit higher levels of toxicity when migrating to Gab [33]; users who migrated to their own standalone website after being banned from r/The_Donald expressed higher levels of toxicity and radicalisation, even though their posting activity on the new platform decreased [45], [46]; the ‘Great Deplatforming’ directed users to other less regulated, more extreme platforms [47]; the activity of many right-wing users who moved to Telegram increased multi-fold after they were banned from major social media [34]; users banned from Twitter are more active on Gettr [48]; communities that migrated from Reddit to Voat can be more resilient [49]; and roughly half of QAnon users moved to Poal after the Voat shutdown [50]. Blocking can also be ineffective for technical and implementation reasons: removing Facebook content after a delay appears to have been ineffective and had limited impact due to the short cycle of users’ engagement [51].

The major limitation of focusing on social networks is that these platforms are often under the control of a single tech company, so content can be permanently removed without effective backup and recovery. We instead examine the deplatforming of a standalone website, which involved a concerted effort on a much wider scale by a series of tech companies, including some big entities that handle a large amount of Internet traffic. Such standalone communities, for instance, websites and forums, may be more resilient as the admin has control of all the content, facilitating easy backups and restores. While existing studies measure changes in posting activity and the behaviours of actors when their platform is disrupted, we also provide insights about other stakeholders, such as the forum operators, the community leading the campaign, and the tech firms that attempted the takedown.

Previous work has documented the impacts of law enforcement and industry interventions on online cybercrime marketplaces [20], cryptocurrency market prices [52], DDoS-for-hire services [14], [15], the Kelihos, Zeus, and Nitol botnets [53], and the well-known click-fraud network ZeroAccess [54]; yet how effective a concerted effort by several tech firms can be in deplatforming an extreme and radicalised community remains unstudied.

2.2. The Kiwi Farms Disruption

KIWI FARMS had been growing steadily for over a decade (see Figure 1) and had been under Cloudflare’s DDoS protection for some years.[2] Forum activity increased by roughly 50% during the COVID-19 lockdown starting in March 2020, presumably as people were spending more time online. Prior interventions resulted in the forum being banned from Google AdSense, and from Mastercard, Visa and PayPal in 2016; from hundreds of VPS providers between 2014 and 2019 [55]; and from selling merchandise on the print-on-demand marketplace Redbubble in 2016. XenForo, a closed-source forum platform, revoked its licence in late 2021 [56]. DreamHost dropped its domain registration in July 2021 after a software developer who had been harassed by the site’s users killed himself. This did not disrupt the forum, as it was given 14 days to find another registrar [57]. While these interventions may have had negative effects on its profit and loss account, they did not impact its activity overall. The only significant disruption in the forum’s history was between 22 January and 9 February 2017 (19 days), when the forum’s owner suspended it himself because his family was being harassed [58].[3]

The disruption studied in this work was started by the online community in 2022. On 5 August 2022, a forum member sent a malicious report to the police in London, Ontario, claiming that a Canadian trans activist had committed murders and was planning more, leading to her being swatted [23]. She and her family were then repeatedly tracked, doxxed, threatened, and generally harassed. In response, she launched a campaign on Twitter on 22 August 2022 under the hashtag #dropkiwifarms and planned a protest outside Cloudflare’s headquarters to pressure the company into deplatforming the site [59]. The campaign generated a great deal of attention and mainstream headlines, which ultimately resulted in several tech firms trying to shut down the forum. This was the first time the forum had been completely inaccessible for an extended period due to external action, with no activity anywhere online, including on the dark web. It attempted to recover twice, but even when it eventually returned online, overall activity was roughly halved.

The majority of actions taken to disrupt the forum occurred within the first two months of the campaign. Most were widely covered in the media and can be checked against public statements made by the industry and the forum admins’ announcements (see Figure 2). The forum came under a large DDoS attack on 23 August 2022, one day after the campaign started. It was then unavailable from 27 to 28 August 2022 due to ISP blackholing. Cloudflare terminated its DDoS prevention service on 3 September 2022 – just 12 days after the Twitter campaign started – due to an “unprecedented emergency and immediate threat to human life” [24]. The forum was still supported by DDoS-Guard (a Russian competitor to Cloudflare), but that firm also suspended service on 5 September 2022 [25]. The forum was still active on the dark web, but this .onion site soon became inaccessible too. On 6 September 2022, hCaptcha dropped support; the forum was removed from the Internet Archive on the same day [60]. This left it under DiamWall’s DDoS protection and hosted on VanwaTech – a hosting provider describing itself as neutral and non-censored [61]. On 15 September 2022, DiamWall terminated its protection [26] and the ‘.top’ domain provider also stopped support [27]. The forum was completely down from 19 to 26 September 2022 and from 23 to 29 October 2022. From 23 October 2022 onwards, several ISPs intermittently rejected announcements of, or blackholed, routes to the forum due to violations of their acceptable use policies, including Voxility and Tier-1 providers such as Lumen, Arelion, GTT and Zayo. This is remarkable as there are only about 15 Tier-1 ISPs in the world. The forum admin devoted extensive effort to maintaining the infrastructure, fixing bugs, and providing guidance to users in response to password breaches. Eventually, by routing through other ISPs, KIWI FARMS was able to get back online on the clearnet and remain stable, particularly following its second recovery in October 2022.

(Link/Archive)
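Side note for anyone who wants to sanity-check the routing claims above: the bit about ISPs rejecting announcements or blackholing routes is observable from public BGP data. Here is a minimal sketch, assuming RIPEstat's public routing-status endpoint; the endpoint is real, but the exact response fields are my reading of its docs, not anything from the paper:

```python
# Sketch only: estimate how globally visible a prefix currently is in BGP,
# using RIPEstat's public Data API. If Tier-1 ISPs reject or blackhole an
# announcement, the share of RIPE RIS peers seeing the route collapses.
import requests

def route_visibility(resource: str) -> float:
    """Fraction of RIPE RIS peers seeing a route for `resource` (an IPv4
    address or prefix). 0.0 means nobody sees it, i.e. the route has been
    withdrawn or filtered everywhere."""
    url = "https://stat.ripe.net/data/routing-status/data.json"
    resp = requests.get(url, params={"resource": resource}, timeout=10)
    resp.raise_for_status()
    # Field names below are assumptions based on the RIPEstat documentation.
    vis = resp.json()["data"]["visibility"]["v4"]
    return vis["ris_peers_seeing"] / max(vis["total_ris_peers"], 1)

# Polling this for the forum's prefix over, say, the 23-29 October 2022
# window would show visibility dropping as ISPs pulled the routes.
```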

----------------------------------------------------------------------------------------------------

Formatting was fucked from the website so I'm not transferring any of the images. If this has already been posted then help me figure out why I can't find it on the site.

Edit: Sorry if it wasn't clear. This is not the entire article. I just posted the part where they talk the most about the farms disruption. Check out the archive link to see the full autistic hate of the farms on display!
 
We believe the harm and threats associated with online hate communities may justify action despite the right to free speech. But within the framework of the EU and the Council of Europe, which is based on the European Convention on Human Rights, such action will have to be justified as proportionate, necessary and in accordance with the law. It is unlikely that taking down a whole community or arresting its maintainer because of a crime committed by a single member can be proportionate. For a takedown to be justified as necessary, it must also be effective, and this case study shows how high a bar that could be. For a takedown to be in accordance with the law, it cannot simply be a response to public pressure. There must be a law or regulation that determines predictably whether a specific piece of content is illegal, and a judge or other neutral finder of fact would have to be involved.
The researchers flat-out state that shutting down these online "hate" communities has had the unintended consequence of making their users worse, and they admit that the "best" deplatforming attempt on Kiwi Farms ultimately failed... so the solution is to do the same thing, but harder?

Oh Lord, make my enemies ridiculous. And God granted it.
 
CC: @Null

You dumb niggers are commenting/posting about an article is fucking really old and was just reposted under a different title: https://kiwifarms.st/threads/no-eas...forum-to-suppress-hate-and-harassment.202595/

This was the original title of the paper: https://arxiv.org/pdf/2304.07037

That thread was posted a year ago and even then @Null closed it and said this:

edit: where is my forum award for reading good and being able to search shit properly?
Wait, that wasn't even the first mention of the paper! It was the penultimate first mention of the paper; it looks like it first came up because Liz Fong Jones gave a guest talk about it at fucking Cambridge and gave feedback on it: https://kiwifarms.st/threads/liz-fong-jones-elliot-william-fong-lizthegrey.128419/post-17161355


The honorary penultimate reading, research and comprehension award for 2025, out of all Kiwifarms users, goes to @neger psykolog

TLDR: this paper is old as fucking shit, hackernoon is just rehashing it under a new title.
You niggas got clickbaited
 
What a shit paper; their datasets end in 2022, and we've gotten significantly more traffic and active members since then.
I bet you were one of those increasingly toxic newcomers (2 times higher toxicity rating, as judged by academic peers) they reference here.

Toxicity Levels


We further examine the toxicity of posts made by the surviving actors and newcomers, before and after the disruption. Figure 9 shows the average levels of toxicity, identity attack and threat of core survivors, casual survivors, and newcomers by day. In general, the toxicity, identity attack, and threat scores were rather low, as most postings are non-toxic (despite some having very high scores). There were small changes in the average scores of surviving actors; notably, peaks occurred 2 days after the campaign started on Twitter, with the average scores increasing significantly to around 30–50%, especially for toxicity and identity attack. However, these dropped quickly a couple of days later and retreated to normal levels.

Newcomers, on the other hand, expressed a significant increase in toxicity and identity attack during the first two weeks after the disruption took place (about 2–2.5 times higher), largely surpassing surviving actors. Their threat scores did not increase at that time but largely peaked after the forum first recovered on 27 September 2022, at around 2 times higher. This suggests that while the surviving members became more toxic when their community was under attack, new users became much more toxic for a few weeks after they joined the discussion, before declining gradually to the same levels as old users. This is in line with the recent finding that users moving to other platforms can become more toxic than before [33].
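The excerpt doesn't name the classifier, but "toxicity", "identity attack" and "threat" are exactly the attribute names offered by Google's Perspective API, so my guess is that's what the posts were scored with. A minimal sketch of scoring a post that way; the Perspective guess and the placeholder key are mine, not the paper's:

```python
# Sketch: score one post for the three attributes the paper reports, using
# Google's Perspective API (assumed, not confirmed by the excerpt).
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_PERSPECTIVE_KEY"  # placeholder; bring your own key

def score_post(text: str) -> dict:
    """Return summary scores in [0, 1] for TOXICITY, IDENTITY_ATTACK, THREAT."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}, "THREAT": {}},
    }
    resp = requests.post(API_URL, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return {name: s["summaryScore"]["value"] for name, s in scores.items()}

# The per-day curves described above would then just be daily means of these
# scores over each group's (survivors vs. newcomers) posts.
```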
 
Imagine putting all of that effort into collecting these stats when you could just spend a week on the site reading through the various subforums and realize that KF is a collection of people from all walks of life.
The second they saw a slur, they would have to stop for the day and decompress until they had the spoons to continue. They do not have the ability to read and consume raw inputs; they can only do "meta studies" that scan other people's work. They live off pre-assembled data sets (summarized for their convenience), generated graphs, and academic abstracts that lay out the accepted conclusions. They can't actually get out in the "field" and do "work" like learning how to use a forum. What do you think they are, filthy underpaid grad students?

These people are not trained to do critical analysis. They are further ideologically inoculated against the dangerous conclusions it might force on them. They know the lingo and the format of intellectual discourse, but not the source of the insights they should convey.

Modern academics are incapable of doing basic intellectual work.
 
There are multiple research programmes and field experiments to effectively detox young men from misogynistic attitudes, whether in youth clubs and other small groups, at the scale of schools, or even by gamifying the identification of propaganda that promotes hate. But most countries still lack a unifying strategy for violence reduction [113]. In both the US and the UK, for example, while incel-related violence against women falls under the formal definition of terrorism, it is excluded from police counterterrorism practice, and the politicisation of misogyny has made this a tussle space in which political leaders and police chiefs have difficulty in taking effective action. In turbulent debates, policymakers should first ask which tools are likely to work, and it is in this context that we offer the present case study.
Funny that the country with the world monopoly on importing child rapists is mostly concerned with people saying problematic shit online.
 

She is a professor of emergent harms.

Can someone please explain to me wtf is an emergent harm?
 
@neger psykolog Have you considered that maybe even someone terminally online doesn’t have time to read every thread or research every iteration of an article, and that recent exposure of previous hilarious ‘scientific’ wankery over the deadly KiwiFarms is still hilarious and deserves mocking even if it is old?
this forum is supposed to be the land of the terminally online.

i'm only familiar with this area because I find weird research papers about the farms, chris chan and other shit from time to time but i've always skipped this one over because it is very boring

here are some examples of bizarre/weird/interesting ones:
 
I’ve learned more here about the human condition than from any psych class I blew thousands of dollars on.
Shit, I think I’ve been more tolerant of people’s weird traits because now I know it could be much worse; they could be eating microwaved diapers or torturing animals, fucking their mothers or playing with feces.
 
Academics claim to be smarter than the rest of us, yet open that door not realizing how quickly it can turn on them.
This reminds me of my critical theory seminar in grad school. There were a few members of my cohort with clear Dark Triad traits, who all had plans for marketing or law graduate programs after completing our lit program. As we studied Foucault, Baudrillard, Deleuze and Guattari, etc., I observed that my peers were not using their theories as interpretive methods or tools, but as instruction manuals for cultural and social control. I reached my breaking point after an academic conference where I saw this manipulation of theory in full effect. There were dogmatic frameworks of interpretation structuring nearly every panel, and heterodox ideas were met with silence or hostility. Nearly ten years after dropping out, it’s wild seeing how this dogma has escaped the academy and governs so much popular discourse.
 
An excerpt from the older article

"They were actively maintaining infrastructure, fixing bugs, and giving instructions to users to deal with their passwords when the forum experienced a breach. The operators’ effort seemed to be competent and consistent."

>The operators’ effort seemed to be competent and consistent

:'( They misspelled ooperator!
 
I'm sure the academics don't want to call for the death of every website admin when they live in a shithole known as the UK. Perhaps the people there would be more inclined to kill bobbies than to go after mean admins.
 
i've always skipped this one over because it is very boring
I find it very interesting. Thanks for linking the older thread, it's much easier to read and has a very good transcript.

The study itself isn't as one-sided as I was expecting. They give credit where it is due and mostly focus on data analysis. They discuss methods by which a full shutdown of "unpleasant" sites could be achieved and offer a relatively competent analysis of the #dropkiwifarms timeline.

All three authors hold seminars fucking EVERYWHERE! And they have the ears of many academic institutions all over the world. For example, Alice recently held a seminar at The Alan Turing Institute. None of them are small fish, it seems.

It's very valuable to see how these circles think and observe things, because they will have the ears of people who might use their advice and insight in a more active and offensive role.
 