The Rise, Fall, and Return of Kiwi Farms After Deplatforming Efforts - Pseudo-intellectual article attempts to frame the farms' deplatforming as authoritatively retarded as possible

Authors:
(1) Anh V. Vu, University of Cambridge, Cambridge Cybercrime Centre (anh.vu@cl.cam.ac.uk);
(2) Alice Hutchings, University of Cambridge, Cambridge Cybercrime Centre (alice.hutchings@cl.cam.ac.uk);
(3) Ross Anderson, University of Cambridge, and University of Edinburgh (ross.anderson@cl.cam.ac.uk).


(Link/Archive)

----------------------------------------------------------------------------------------------------

Formatting was fucked from the website so I'm not transferring any of the images. If this has already been posted then help me figure out why I can't find it on the site.

Edit: Sorry if it wasn't clear. This is not the entire article. I just posted the part where they talk the most about the farms disruption. Check out the archive link to see the full autistic hate of the farms on display!
 
There are only two ways to stop these kinds of people. There's the Pol Pot method, which is a bit much for most of us, and then there's laughing at them.
Laughter and mockery are kryptonite to them. They cannot deal with it. This is why the farms is hated. Taking the piss is the most effective way of defusing an ideology.
After my experience, I came to fear the people who can avoid both laughter and reprisal, especially coming out of the humanities. They are either former humanities academics who moved onto administration, or people holding advanced humanities degrees that made lateral professional choices. They are both the silent replicators of the ideology laid bare in this paper. They are the actual architects of ideological dissemination, whether it’s determining which professors are hired at their institutions, or deciding which new flavor of identityslop you’ll be served today. You can’t laugh or scream at those people because they refuse to be seen or known.
 
David Kirk Ginder attempted to blackmail trap Null.
Null reacted the same way he reacts to all such games: he declined. Hell, he humored Ginder far longer than he probably should have. That thing with the money, Ginder was trying to set Null up for something ugly.

The only person responsible for a suicide is the dipshit who kills themselves, and wikipedia can go fuck themselves.

I do want to know my toxicity rating.
 
Legislators who propose to ban an online community might consider precedents such as Britain’s ban on Provisional Sinn Féin from 1988–94 due to its support for the Provisional IRA during the Troubles, or the bans on the Muslim Brotherhood enacted by various Arab regimes.
It's fucking hysterical that they are literally comparing kiwifarms, a meme forum full of jokes about people who have publicly posted their own information online and choose to be public figures, to actual literal real life terrorist organizations. I guess it's true, words do hurt. Words clearly and obviously hurt just as much as a car bomb. That's the conclusion they are drawing.
 
Not even that will take us down. They'll have to fucking kill us. Good luck, you worthless faggots.
It would explain why this "researcher" is advocating for null to be arrested or "incapacitated" for no reason. Even if they drop internet censorship laws just to target us, we can always operate out of a different country and we, the users, can use Tor and/or a VPN to get around it.
 
Ah yes, let us remember that tweet that aged so well
[tweet screenshot]
 
[screenshot]

Unbelievably British sentence from this "person". Literal bug mentality; no real person wants everything they say to be mediated by someone in the government. The only country on earth that benefits from being completely replaced by Indians.
[screenshot]
 

No Easy Way Out: the Effectiveness of Deplatforming an Extremist Forum to Suppress Hate and Harassment

Anh V. Vu
University of Cambridge
Cambridge Cybercrime Centre
anh.vu@cl.cam.ac.uk

Alice Hutchings
University of Cambridge
Cambridge Cybercrime Centre
alice.hutchings@cl.cam.ac.uk

Ross Anderson
University of Cambridge and University of Edinburgh
ross.anderson@cl.cam.ac.uk

Abstract - Legislators and policymakers worldwide are debating options for suppressing illegal, harmful and undesirable material online. Drawing on several quantitative data sources, we show that deplatforming an active community to suppress online hate and harassment, even with a substantial concerted effort involving several tech firms, can be hard. Our case study is the disruption of the largest and longest-running harassment forum Kiwi Farms in late 2022, which is probably the most extensive industry effort to date. Despite the active participation of a number of tech companies over several consecutive months, this campaign failed to shut down the forum and remove its objectionable content. While briefly raising public awareness, it led to rapid platform displacement and traffic fragmentation. Part of the activity decamped to Telegram, while traffic shifted from the primary domain to previously abandoned alternatives. The forum experienced intermittent outages for several weeks, after which the community leading the campaign lost interest, traffic was directed back to the main domain, users quickly returned, and the forum was back online and became even more connected. The forum members themselves stopped discussing the incident shortly thereafter, and the net effect was that forum activity, active users, threads, posts and traffic were all cut by about half. The disruption largely affected casual users (of whom roughly 87% left), while half the core members remained engaged. It also drew many newcomers, who exhibited increasing levels of toxicity during the first few weeks of participation. Deplatforming a community without a court order raises philosophical issues about censorship versus free speech; ethical and legal issues about the role of industry in online content moderation; and practical issues on the efficacy of private-sector versus government action. Deplatforming a dispersed community using a series of court orders against individual service providers appears unlikely to be very effective if the censor cannot incapacitate the key maintainers, whether by arresting them, enjoining them or otherwise deterring them.

1. Introduction

Online content is now prevalent, widely accessible, and influential in shaping public discourse. Yet while online places facilitate free speech, they do the same for hate speech [1], and the line between the two is often contested. Some cases of stalking, bullying, and doxxing such as Gamergate have had real-world consequences, including violent crime and political mobilisation [2]. Content moderation has become a critical function of tech companies, but also a political tussle space, since abusive accounts may affect online communities in significantly different ways [3]. Online social platforms employ various mechanisms, for example, artificial intelligence [4], to detect, moderate, and suppress objectionable content [5], including “hard” and “soft” techniques [6]. These range from reporting users of illegal content to the police, through deplatforming users breaking terms of service [7], to moderating legal but obnoxious content [8], which may involve actions such as flagging it with warnings, downranking it in recommendation algorithms [9], or preventing its being monetised through ads [10].

Deplatforming may mean blocking individual users, but sometimes the target is not a single bad actor, but a whole community, such as one involved in crime [11]. It can be undertaken by industry, as when Cloudflare, GoDaddy, Google and some other firms terminated service for the Daily Stormer after the Unite the Right rally in Virginia in 2017 [12] and for 8Chan in August 2019 [13]; or by law enforcement, as with the FBI taking down DDoS-for-hire services in 2018 [14, 15] and 2022 [16, 17], and seizing Raid Forums in 2022 [18]. Industry disruption has often been short-lived; both 8Chan and Daily Stormer re-emerged or relocated shortly after being disrupted. Police intervention is often slow and less effective, and its impact may also be temporary [11]. After the FBI terminated Silk Road [19], the online drug market fragmented among multiple smaller ones [20]. The seizure of Raid Forums [18] led to the emergence of its successors Breach Forums, Exposed Forums, and Onni Forums. Furthermore, the FBI takedowns of DDoS-for-hire services cut the attack volume significantly, yet the market recovered rapidly [14, 15].

Kiwi Farms is the largest and longest-running online harassment forum [21]. It is often associated with real-life trolling and doxxing campaigns against feminists, gay rights campaigners and minorities such as disabled, transgender, and autistic individuals; some have killed themselves after being harassed [22]. Despite being unpleasant and widely controversial, the forum has been online for a decade and had been shielded by Cloudflare’s DDoS protection for years. This came to an end following serious harassment by forum members of a Canadian trans activist, culminating in a swatting incident in August 2022. (This is when a harasser falsely reports a violent crime in progress at the victim’s home, resulting in the arrival of a special-weapons-and-tactics (SWAT) team to storm the premises, placing the victim and family at risk.) This resulted in a community-led campaign on Twitter to pressure Cloudflare and other tech firms to drop the forum [23]. This escalated quickly, generating significant social media attention and mainstream headlines. A series of tech firms then attempted to take the forum down; they included DDoS protection services, infrastructure providers, and even some Tier-1 networks [24, 25, 26, 27]. This extraordinary series of events lasted for a few months and was the most sustained effort to date to suppress an active online hate community. It is notable that tech firms gave in to public pressure in this case, while they have in the past resisted substantial pressure from governments.

Existing studies have investigated the efficacy of deplatforming social-media users [28, 29, 30, 31, 32, 33, 34], yet there has been limited research – both quantitative and qualitative – into the effectiveness of industry disruptions against standalone hate communities such as bulletin-board forums, which tend to be more resilient as the content can be fully backed up and restored by the admins. This paper investigates how well the industry – the entities offering digital infrastructure for online services such as hosting and domain providers, security and protection services, certificate authorities, and ISP networks – dealt with a hate and harassment site.

We outline the disruption landscape in §2, then describe our methods, datasets, and ethics in §3. Our ultimate goal is to evaluate the efficacy of the effort, and to understand the impacts and challenges of deplatforming as a means to suppress online hate and harassment. Our primary research questions are tackled in subsequent sections: the impact of deplatforming on the forum activity and traffic is assessed in §4; the changes in the behaviour of forum members when their gathering place is disrupted, as well as the effects on the forum operators and the community who started the campaign are examined in §5. We discuss the role of industry in tackling online harassment, censorship and content regulation, as well as legal, ethical, and policy implications of the incident in §6. Our data collection and analyses were approved by our institutional Ethics Review Board (ERB). Our data and scripts are available to academics on request.

Figure 1: Number of daily posts, threads, users; and the incidents affecting Kiwi Farms during its one-decade lifetime.

Figure 2: Major incidents disrupting Kiwi Farms from September to December 2022. Green stars indicate the forum recovery.

2. Deplatforming and the Impacts

There is a complex ecosystem of online abuse that has been evolving for decades [35], where toxic content, surveillance, and content leakage are growing threats [36, 37]. While the number of personally targeted victims is relatively low, an increasing number of individuals, including children, are being exposed to online hate speech [38]. There can be a large grey area between criminal behaviour and socially acceptable behaviour online, just as in real life. And just as a pub landlord will throw out rowdy customers, so platforms have acceptable-use policies backed by content moderation [39], to enhance the user experience and protect advertising revenue [40].

Deplatforming refers to blocking, excluding or restricting individuals or groups from using online services, on the grounds that their activities are unlawful, or that they do not comply with the platform's acceptable-use policy [7]. Various extremists and criminals have been exploiting online platforms for over thirty years, resulting in a complex ecosystem in which some harms are prohibited by the criminal law (such as terrorist radicalisation and child sex abuse material) while many others are blocked by platforms seeking to provide welcoming spaces for their users and advertisers. For a history and summary of current US legislative tussles and their possible side-effects, see Fishman [41]. The idea is that if a platform is used to disseminate abusive speech, removing the speech or indeed the speakers could restrict its spread, make it harder for hate groups to recruit, organise and coordinate, and ultimately protect individuals from mental and physical harm. Deplatforming can be done in various ways, ranging from limiting users' access and restricting their activity for a time period, to suspending an account, or even stopping an entire group of users from using one or more services. Even then, groups banned from major platforms can displace to other channels, whether smaller websites or messenger services [7].

Different countries draw the line between free speech and hate speech differently. For example, the USA allows the display of Nazi symbols while France and Germany do not [42]. Private firms offering ad-supported social networks generally operate much more restrictive rules, as their advertisers do not want their ads appearing alongside content that prospective customers are likely to find offensive. People wishing to generate and share such material therefore tend to congregate on smaller forums. Some argue that taking down such forums infringes on free speech and may lead to censorship of legitimate voices and dissenting opinions, especially if it is perceived as politically motivated. Others maintain that deplatforming is necessary to protect vulnerable communities from harm. Debates rage in multiple legislatures; as one example, the UK Online Safety Bill will enable the (politically-appointed) head of Ofcom, the UK broadcast regulator, to obtain court orders to shut down online places that are considered harmful [43]. This leads us to ask: how effective might such an order be?

2.1. Related Work

Most studies assessing the impact of deplatforming have worked with data on social networks. Deplatforming users may reduce activity and toxicity levels of relevant actors on Twitter [28] and Reddit [29, 30], limit the spread of conspiratorial disinformation on Facebook [31], reduce the engagement of peripheral members with hateful content [44], and minimise disinformation and extreme speech on YouTube [32]. But deplatforming has often made hate groups and individuals even more extreme, toxic and radicalised. They may view the disruption of their platform as an attack on their shared beliefs and values, and move to even more toxic places to continue spreading their message. There are many examples: the Reddit ban of r/incels in November 2017 led to the emergence of two standalone forums, incels.is and incels.net, which then grew rapidly; users banned from Twitter and Reddit exhibit higher levels of toxicity when migrating to Gab [33]; users who migrated to their own standalone websites after getting banned from r/The_Donald expressed higher levels of toxicity and radicalisation, even though their posting activity on the new platform decreased [45, 46]; the ‘Great Deplatforming’ directed users to other less regulated, more extreme platforms [47]; the activity of many right-wing users who moved to Telegram after being banned on major social media increased multi-fold [34]; users banned from Twitter are more active on Gettr [48]; communities that migrated to Voat from Reddit can be more resilient [49]; and roughly half of QAnon users moved to Poal after the Voat shutdown [50]. Blocking can also be ineffective for technical and implementation reasons: removing Facebook content after a delay appears to have been ineffective and had limited impact due to the short cycle of users’ engagement [51].

The major limitation of focusing on social networks is that these platforms are often under the control of a single tech company, so content can be permanently removed without effective backup and recovery. We instead examine the deplatforming of a standalone website, involving a concerted effort on a much wider scale by a series of tech companies, including some big entities that handle a large amount of Internet traffic. Such standalone communities, for instance websites and forums, may be more resilient as the admin has control of all the content, facilitating easy backups and restores. While existing studies measure changes in posting activity and the behaviours of actors when their place is disrupted, we also provide insights about other stakeholders such as the forum operators, the community leading the campaign, and the tech firms that attempted the takedown.

Previous work has documented the impacts of law enforcement and industry interventions on online cybercrime marketplaces [20], cryptocurrency market prices [52], DDoS-for-hire services [14, 15], the Kelihos, Zeus, and Nitol botnets [53], and the well-known click fraud network ZeroAccess [54]; yet how effective a concerted effort of several tech firms can be in deplatforming an extreme and radicalised community remains unstudied.

2.2. The Kiwi Farms Disruption

Kiwi Farms had been growing steadily over a decade (see Figure 1) and had been under Cloudflare’s DDoS protection for some years. (Cloudflare’s service tries to detect suspicious patterns and drop malicious ones, only letting legitimate requests through.) An increase of roughly 50% in forum activity happened during the Covid-19 lockdown starting in March 2020, presumably as people were spending more time online. Prior interventions have resulted in the forum getting banned from Google AdSense, and from Mastercard, Visa and PayPal in 2016; from hundreds of VPS providers between 2014 and 2019 [55]; and from selling merchandise on the print-on-demand marketplace Redbubble in 2016. XenForo, a closed-source forum platform, revoked the forum’s license in late 2021 [56]. DreamHost stopped its domain registration in July 2021 after a software developer who had been harassed by the site’s users killed himself. This did not disrupt the forum as it was given 14 days to seek another registrar [57]. While these interventions may have had negative effects on its profit and loss account, they did not impact its activity overall. The only significant disruption in the forum’s history was between 22 January and 9 February 2017 (19 days), when the forum’s owner suspended it himself due to his family being harassed [58]. (Minor suspensions observed in our forum dataset occurred on 2 Feb 2013, 24 Jan 2016, 29 Sep 2017, and 11 Jan 2021, though without any clear reasons.)

The disruption studied in this work was started by the online community in 2022. A malicious report was sent to the police in London, Ontario by a forum member on 5 August 2022, claiming that a Canadian trans activist had committed murders and was planning more, leading to her being swatted [23]. She and her family were then repeatedly tracked, doxxed, threatened, and generally harassed. In response, she launched a campaign on Twitter on 22 August 2022 under the hashtag #dropkiwifarms and planned a protest outside Cloudflare’s headquarters to pressure the company to deplatform the site [59]. This campaign generated lots of attention and mainstream headlines, which ultimately resulted in several tech firms trying to shut down the forum. This was the first time the forum had been completely inaccessible for an extended period due to external action, with no activity anywhere online, including the dark web. It attempted to recover twice, but even when it eventually returned online, the overall activity was roughly halved.

The majority of actions taken to disrupt the forum occurred within the first two months of the campaign. Most of them were widely covered in the media and can be checked against public statements made by the industry and the forum admins’ announcements (see Figure 2). The forum came under a large DDoS attack on 23 August 2022, one day after the campaign started. It was then unavailable from 27 to 28 August 2022 due to ISP blackholing. Cloudflare terminated their DDoS prevention service on 3 September 2022 – just 12 days after the Twitter campaign started – due to an “unprecedented emergency and immediate threat to human life” [24]. The forum was still supported by DDoS-Guard (a Russian competitor to Cloudflare), but that firm also suspended service on 5 September 2022 [25]. The forum was still active on the dark web but this .onion site soon became inaccessible too. On 6 September 2022, hCaptcha dropped support; the forum was removed from the Internet Archive on the same day [60]. This left it under DiamWall’s DDoS protection and hosted on VanwaTech – a hosting provider describing themselves as neutral and non-censored [61]. On 15 September 2022, DiamWall terminated their protection [26] and the ‘.top’ domain provider also stopped support [27]. The forum was completely down from 19 to 26 September 2022 and from 23 to 29 October 2022. From 23 October 2022 onwards, several ISPs intermittently rejected announcements or blackholed routes to the forum due to violations of their acceptable use policy, including Voxility and Tier-1 providers such as Lumen, Arelion, GTT and Zayo. This is remarkable as there are only about 15 Tier-1 ISPs in the world. The forum admin devoted extensive effort to maintaining the infrastructure, fixing bugs, and providing guidance to users in response to password breaches. Eventually, by routing through other ISPs, Kiwi Farms was able to get back online on the clearnet and remain stable, particularly following its second recovery in October 2022.

Table I: Complete snapshots of public posts on Kiwi Farms and its primary competitor Lolcow Farm until 31 Dec 2022.
Forums         No. posts    No. threads   No. active users
Kiwi Farms     10 149 700   48 256        59.2k
Lolcow Farm    4 593 076    10 029        Unavailable
Total          14 742 776   58 285        Unavailable

3. Methods, Datasets, and Ethics

Our primary method is data-driven, with findings supported by quantitative evidence derived from multiple longitudinal data sources, which we collect on a regular basis. Where quantitative measurements require enrichment – as when analysing relevant public statements of tech firms directly involved in the disruption, and announcements made by the forum operators – we use qualitative content analysis.

3.1. Forum and Imageboard Discussions

Besides common mainstream social media channels like Facebook and Twitter, independent platforms such as XenForo (The XenForo Platform: https://xenforo.com/) and Infinity (The Infinity Imageboard: https://github.com/ctrlcctrlv/infinity/) have gained popularity as tools for building online communities. Despite being less visible and requiring more upkeep, these can offer greater resistance against external intervention as the operators have full control over the content and databases, thereby allowing easy backup and redeployment in case of disruption. These platforms typically share a hierarchical data structure ranging from bulletin boards down to threads linked to specific topics, each containing several posts. While facilitating free speech, these also increasingly nurture and disseminate hate and abusive speech. We have been scraping the two most active forums associated with online harassment for years due to their increasingly toxic content, as part of the ExtremeBB dataset [62]: Kiwi Farms and Lolcow Farm.

Our collection includes not only posts but also associated metadata such as posting time, user profiles, reactions, and levels of toxicity, identity attack and threat measured by the Google Perspective API as of January 2023. (Google Perspective API: https://perspectiveapi.com/) Perspective API also offers other measures such as insult and profanity [63], but we exclude these due to lack of relevance to the aim of this paper. This API uses crowdsourced annotations for model training and substantially outperforms the alternatives [64]. We strive to ensure data completeness by designing our scrapers to visit all sub-forums, threads, and posts while keeping track of every single crawl’s progress to resume incrementally in case of any interruption. A summary of the forum discussion data is shown in Table I.
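
To illustrate the scoring step, here is a minimal sketch of how a single post can be scored with the Perspective API. The attribute names TOXICITY, IDENTITY_ATTACK, and THREAT are the API's published attributes; the API key, wrapper function, and everything else are illustrative assumptions, not the authors' actual pipeline.

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_KEY"  # hypothetical placeholder; a real key is required
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def score_post(text: str) -> dict:
    """Return Perspective summary scores (toxicity, identity attack, threat) for one post."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}, "THREAT": {}},
    }
    resp = requests.post(URL, json=payload, timeout=30)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    # Each attribute carries a summary score in [0, 1]
    return {attr.lower(): s["summaryScore"]["value"] for attr, s in scores.items()}

print(score_post("example forum post"))
```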

Kiwi Farms is built on XenForo, but the operators have been maintaining the forum by their own efforts since late 2021, when XenForo officially revoked their license. Our data covers the entire history of the forum from early January 2013 to the end of 2022 with 10.1M posts in 48.3k threads made by 59.2k active users, providing a full landscape through its evolution over time. While some extremist forums experienced fluctuating activity and rapid declines in recent years [62], Kiwi Farms showed stable growth until it was significantly disrupted in 2022 (see Figure 1). Our data precisely capture major reported suspensions, including those in 2017 and 2022.

The primary rival of Kiwi Farms is Lolcow Farm, an imageboard built on Infinity [65, 66]. While Kiwi Farms discussions are largely text-based, Lolcow Farm is centred on descriptive images. While Kiwi Farms users adopt pseudonyms, Lolcow Farm users mostly remain hidden under the unified ‘Anonymous’ handle. We gathered a complete snapshot of Lolcow Farm from its inception in June 2014 to the end of 2022, encompassing 4.6M posts made in 10.0k threads. Lolcow Farm has far fewer threads, but each typically contains lots of posts. This collection brings the total number of posts for both forums to 14.7M (and still growing). We exclude Lolcow, a smaller competitor to Kiwi Farms (also based on XenForo), as it vanished in mid-2022 and had fewer than 30k posts in total. As Lolcow Farm is now the largest competitor, analysing it lets us estimate platform displacement when Kiwi Farms was down.

3.2. Telegram Chats

During periods of inaccessibility, the activity level increased in the Telegram groups associated with Kiwi Farms. There are two channels: one is primarily used by the forum operators to disseminate announcements and updates, particularly about where and when the forum could be accessed; the other is used by forum members mainly for general discussion. Both channels permit public access, allowing people to join and view historical messages. We used Telethon (Telethon: https://telethon.dev/) to collect a snapshot of these channels during their entire lifespan until the end of 2022, encompassing 525k messages, 298k replies, and associated metadata such as view counts and 356k emoji reactions made by 2 502 active users. The data is likely complete as our scraper is running in near real time, and messages with metadata are fully captured through the use of official Telegram APIs. As the forum operators are highly incentivised to keep users quickly informed, their announcements provide a reliable incident and response timeline.
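
A minimal sketch of this kind of collection with Telethon is below. The channel name and API credentials are placeholders (not the actual Kiwi Farms channels), and the fields printed mirror the metadata described above; treat it as an illustration rather than the authors' scraper.

```python
from telethon.sync import TelegramClient

# Hypothetical credentials obtained from https://my.telegram.org/
API_ID, API_HASH = 12345, "0123456789abcdef0123456789abcdef"

with TelegramClient("session", API_ID, API_HASH) as client:
    # reverse=True iterates oldest-to-newest, so an interrupted crawl
    # can resume from the last stored message id.
    for msg in client.iter_messages("some_public_channel", reverse=True):
        reactions = msg.reactions.results if msg.reactions else []
        print(msg.id, msg.date, msg.views, len(reactions), (msg.text or "")[:60])
```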

3.3. Web Traffic and Search Trends Analytics

We found from announcements in the Telegram group that Kiwi Farms could be accessed through six major domains: the primary one is kiwifarms.net, four alternatives are kiwifarms.ru, kiwifarms.top, kiwifarms.is, and kiwifarms.st, and a Pleroma decentralised web version is at kiwifarms.cc. (Other domains include kiwifarms.tw and kiwifarms.hk; however, they are either new or insignificant, so their traffic data is trivial.) To investigate how users navigated across these domains when the forum experienced disruption, we analysed traffic analytics towards all six domains provided by Similarweb, a leading platform providing insights and intelligence into web traffic and performance (Similarweb: https://similarweb.com/; another popular web analytics platform is Semrush at https://semrush.com/, but it does not offer daily statistics). Their reports aggregate anonymous statistics from multiple inputs, including their own analytic services, data sharing from ISPs and other measurement companies, data crawled from billions of websites, and device traffic data (both website and app) such as plugins, add-ons and pixel tracking. Their algorithm then extrapolates the substantial aggregated data to the entire Internet space. Their estimates therefore may not be completely precise, but reliably reflect trends at both global and country levels. To test that reliability, we deployed our own infrastructure to collect over 19M ground-truth traffic records over six months, grouped them into 30-minute sessions, then compared these with Similarweb visits. We find that while Similarweb underestimates the amount of traffic, due to how repeat pageviews are counted, it captures trends with a strong positive linear relationship (Pearson correlation coefficient r = 0.83). Our analysis in the next section also suggests a high correlation between the traffic data and the forum activity.
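
The validation step can be sketched roughly as follows, assuming the ground-truth logs are one row per request with a timestamp and a visitor identifier (column and file names are hypothetical; the paper does not publish its schema). A new session starts when the gap to a visitor's previous request exceeds 30 minutes.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical ground-truth log: one row per request.
raw = pd.read_csv("ground_truth.csv", parse_dates=["timestamp"])

# Sessionise: a gap of more than 30 minutes starts a new session per visitor.
raw = raw.sort_values(["visitor_id", "timestamp"])
gap = raw.groupby("visitor_id")["timestamp"].diff() > pd.Timedelta(minutes=30)
raw["session_id"] = gap.groupby(raw["visitor_id"]).cumsum()

# One row per session (its first request), counted per day.
sessions = raw.drop_duplicates(["visitor_id", "session_id"])
daily_truth = sessions.set_index("timestamp").resample("D").size()

# Hypothetical export of Similarweb's daily visit estimates.
similarweb = pd.read_csv("similarweb_daily.csv", parse_dates=["date"],
                         index_col="date")["visits"]

aligned = pd.concat([daily_truth, similarweb], axis=1, join="inner").dropna()
r, p = pearsonr(aligned.iloc[:, 0], aligned.iloc[:, 1])
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```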

As Similarweb does not offer an academic license, we use a free trial account to access longitudinal web traffic and engagement data going back three months. (A business subscription offers 6 months of historical data, but neither it nor the free trial provides access to longitudinal country-based records.) This includes information about total visits, unique visitors, visit duration, pages per visit, bounce rate, and page views. It also provides figures on search activity, data for marketing such as visit sources (e.g., direct, search, email, social, referral, ads), and non-temporal insight into audience geography and demographics. These data, covering both desktop and mobile traffic, provide valuable perspectives. They span from July to December 2022, two months before and four months after the disruption; this time frame is sufficient as there was no significant industry intervention against the forum in the past (as shown in Figure 1), and the disruption campaign mostly ended after a few months (see §4). In addition, we also collected search trends by countries and territories over time from Google Trends, covering the entire lifetime of the forum. Both of these datasets are likely to be complete as they were gathered directly from Similarweb and Google.
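
For the Google Trends side, a sketch using the community-maintained pytrends client might look like the following. Pytrends is an unofficial library (not a Google API) and is subject to rate limits; the search term and parameters here are illustrative assumptions.

```python
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
# timeframe="all" covers Google Trends' full history, spanning the forum's lifetime.
pytrends.build_payload(["kiwi farms"], timeframe="all")

interest = pytrends.interest_over_time()   # normalised 0-100 interest over time
by_region = pytrends.interest_by_region()  # breakdown by country/territory
print(interest.tail())
print(by_region.sort_values("kiwi farms", ascending=False).head())
```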

3.4. Tweets Made by the Online Community

The disruption campaign started on Twitter on 22 August 2022 with tweets posted under the hashtag #dropkiwifarms. We gathered the main tweets plus associated metadata, such as posting time and reactions (e.g., replies, retweets, likes, and quotes) using Snscrape, an open-source Python framework for social network scrapers. (Snscrape: https://github.com/JustAnotherArchivist/snscrape/) As it uses Twitter APIs as the underlying method, the data are likely to be complete. We collected 11 076 tweets made by 3 886 users, spanning the entire campaign period. This data helps us understand the community reaction throughout the campaign, when the industry took action, and when the forum recovered. There might be more related tweets without the hashtag #dropkiwifarms of which we are unaware, but scanning the whole Twitter space is infeasible. It is likely that the trend measured by our collection is representative, as the campaign congregated around this hashtag.
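
A sketch of such a collection with snscrape's Python interface is below. Attribute names differ slightly across snscrape versions, and the Twitter backend has been unreliable since the 2023 API changes, so treat this as a 2022-era illustration rather than a working recipe.

```python
import snscrape.modules.twitter as sntwitter

tweets = []
# Iterate over all tweets matching the hashtag search.
for tweet in sntwitter.TwitterSearchScraper("#dropkiwifarms").get_items():
    tweets.append({
        "id": tweet.id,
        "date": tweet.date,
        "user": tweet.user.username,
        "replies": tweet.replyCount,
        "retweets": tweet.retweetCount,
        "likes": tweet.likeCount,
        "quotes": tweet.quoteCount,
    })

print(len(tweets), "tweets collected")
```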

3.5. Data Licensing

Our datasets and scripts for data collection and analysis are available to academics, as well as an interactive web portal to assist those who lack technical skills to access our data [67]. However, as both researchers and actors such as forum members might be exposed to risk and harm [68], we decline to make our data publicly accessible. It is our standard practice at the Cambridge Cybercrime Centre to require our licensees to sign an agreement to prevent misuse, to ensure the data will be handled appropriately, and to keep us informed about research outcomes [69]. We have a long history of sharing such sensitive data, and robust procedures carefully crafted in conjunction with legal academics, university lawyers and specialist external counsel to enable data sharing across multiple jurisdictions.

3.6. Ethical Considerations

Our work was formally approved by our institutional Ethics Review Board (ERB) for data collection and analysis. Our datasets are collected on publicly available forums and channels, which are accessible to all. We collected the forum when it was hosted in the US; according to a 2022 US court case, scraping public data is legal [70]. Our scraping method does not violate any regulations and does not cause negative consequences to the targeted websites, e.g., bandwidth congestion or denial of service. It would be impractical to send thousands of messages to gain consent from all forum and Telegram members; we assume they are aware that their activity in public online places will be widely accessible.

In contrast to some previous work on online forums, we name the investigated forums in this paper. Pseudonymising the forum name is pointless because of the high-profile campaign being studied. Thus, we avoid the pretence that the forum is not identifiable and shift the focus to accounting for the potential harms to both researchers and involved actors associated with our research. We designed our analysis to operate ethically and collectively by only presenting aggregated behaviours to avoid private and sensitive information of individuals being inferred. This is in accordance with the British Society of Criminology Statement on Ethics [71].

Researchers may be at risk and may experience various elevated digital threats when doing work on sensitive resources [68, 72]. Studying extremist forums may introduce a higher risk of retaliation than other forums, resulting in mental or physical harm. We have taken measures to minimise potential harm to researchers and involved actors when doing studies with human subjects and at-risk populations [73, 74]. For example, we consider options to anonymise authors’ names or use pseudonyms for any publication related to the project, including this paper, if necessary. We also refrain from directly looking at media, which may cause emotional harm; our scrapers thus only collect text while discarding images and videos. Although all datasets are widely accessible and can be gathered by the public, we refrain from scraping private and protected posts behind the login wall due to safety and legality concerns.

Figure 3: Normalised levels of global search and web traffic to Kiwi Farms. The red bubble indicates the Streisand effect.

Figure 4: Number of daily posts, threads, and active users on Kiwi Farms, its Telegram channels, and its primary competitor Lolcow Farm, as well as major disruptions and displacement between platforms. The red star indicates the Streisand effect.

4. The Impact on Forum Activity and Traffic

On 3 September 2022, Cloudflare discontinued its DDoS prevention service, which attracted major publicity. This intervention led to a sudden and significant increase in global search interest about Kiwi Farms with a seven-fold spike, along with the web traffic to the six major domains doubling on 4 September 2022 (see Figure 3). This phenomenon, known as the Streisand effect, might be caused by people’s curiosity about what happened to the platform; it is relatively rare, and mainly seen with ‘freedom of speech’ issues [11]. It suggests that attempts at censorship may end up being counterproductive [75]: a disruptive effort aiming to reduce user interactions instead led to the unintended consequence of increased attention, though this lasted for only a few days before declining sharply.

We examine in detail the impacts of the disruption and the forum recovery on Kiwi Farms within the 6 months from July to December 2022. This timeframe provides a sufficient understanding, as the campaign was mostly over by then and the forum was growing stably before the disruption. To assess the impacts, we separate the observed data points into the post-disruption period (first group) and the pre-disruption period (second group), split at 3 September 2022. We then use the Mann–Whitney U test (as the samples are not paired and the data does not follow a normal distribution) to compare the difference between mean ranks of the two populations. The effect size – indicating the magnitude of the observed difference – is assessed by Cliff’s Delta, which ranges over [-1, 1].
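
As an illustration with hypothetical numbers (not the paper's data), the test and effect size can be computed as follows, using the identity delta = 2U/(mn) - 1 that links Cliff's Delta to the U statistic of the first sample.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cliffs_delta(x, y):
    """Cliff's Delta via the Mann-Whitney U statistic: delta = 2U/(mn) - 1."""
    u = mannwhitneyu(x, y, alternative="two-sided").statistic
    return 2 * u / (len(x) * len(y)) - 1

# Illustrative daily post counts (made-up values, not the paper's data):
post = np.array([1200, 900, 0, 0, 450, 800, 1100])    # after 3 Sep 2022
pre = np.array([2300, 2200, 2400, 2150, 2500, 2350])  # before 3 Sep 2022

u, p = mannwhitneyu(post, pre, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}, Cliff's delta = {cliffs_delta(post, pre):+.2f}")
```

A delta near -1 means almost every post-disruption day ranks below every pre-disruption day, which is the pattern reported in Table II.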

4.1. The Impact of Major Disruptions

While some DDoS attacks were large enough to shut the forum down, their impact was temporary. For example, the DDoS attack on 23 August 2022 – which was probably associated with the Twitter campaign the previous day – led to a drop of roughly 35% in posting volume, yet the forum activity recovered the next day to a slightly higher level (see the first graph of Figure 4). The DDoS attack during Christmas 2022 was also short-lived. The ISP blackholing on 26 August 2022 was more critical, silencing the forum for two consecutive days, yet it again recovered quickly.

The most significant, long-lasting impact was caused by the substantial industry disruption that we analyse in this paper. While forum activity immediately dropped by around 20% after Cloudflare’s action on 3 September 2022, the forum was still online at kiwifarms.ru, hosting the same content. Activity did not degrade significantly until DDoS-Guard’s action on 5 September 2022, which took down the Russian domain. By 18 September 2022, all domains were unavailable, including .onion (presumably their hosting was identified); forum activity dropped to zero and stayed there for a week. The operator managed to get the forum back online for the first time on 27 September 2022, after which it ran stably on both the dark web and clear web for roughly one month until Zayo – a Tier-1 ISP – blocked it on 23 October 2022. This led to another silent week before the forum eventually recovered a second time on 30 October 2022. It has been stable since then without serious downtime except for the ISP blackholing on 22 December 2022 which led to a 70% drop in activity.

In general, although the forum is now stably back online, hosted on 1776 Solutions – a company also founded by the forum’s owner – it has (at the moment) failed to bounce back to the pre-disruption level, with the number of active users and posting volume roughly halved. The concerted effort we analysed was much more effective than previous DDoS attacks, yet still could not silence the forum for long.

Figure 5: Number of daily estimated visits to Kiwi Farms and the fragmentation to previously abandoned domains. We see non-zero traffic to the primary domain when the forum was down; presumably Similarweb counted unsuccessful attempts.

4.2. Platform Displacement

The natural behaviour of online communities when their usual gathering place becomes inaccessible is to seek alternative places or channels to continue their discussions. The second graph in Figure 4 illustrates an initial shift of forum activity to Telegram that occurred on 27 August 2022, right after the ISP blackholing. This was accompanied by thousands of emoji reactions on the admin’s announcements since commenting was not allowed at that time. Community reactions (e.g., replies, emojis) seem to have been consistent with the overall Telegram posting activity, which increased rapidly afterwards and even occasionally surpassed the forum’s activity, especially after the publicity given to the Cloudflare and DDoS-Guard actions. At some point, for instance in early October and November 2022, the total number of messages on Kiwi Farms and its Telegram channels significantly exceeded the pre-disruption posting volume on the forum. However, significant displacements only occurred when all domains were completely inaccessible on 18 September 2022, and again when Zayo blocked the second incarnation of the forum on 22 October 2022. The shift to the Telegram channels appears to be rapid yet rather temporary: Kiwi Farms users quickly returned to the primary forum when it became available, while discussion activity on the Telegram channels gradually declined.

There was no significant shift in activity from the forum to its primary competitor Lolcow Farm (see the third graph of Figure 4); however, there was an increase in posting on Lolcow Farm about the incident, indicating a minor change of discussion topic (see more in §5.4). It is unclear if these posting users migrated from Kiwi Farms, as Lolcow Farm does not use handles, making user counts unavailable. Lolcow Farm also experienced downtime on 17 and 18 September 2022 (the same days as Kiwi Farms), yet we have no reliable evidence from which to draw a convincing explanation. Another drop occurred around Christmas 2022 in sync with Kiwi Farms, perhaps because of the holiday. The activity of Lolcow Farm returned to its previous level quickly after these drops, suggesting that the campaign did not significantly impact Lolcow Farm or drive content between the rival ecosystems; the displacement we observed on Kiwi Farms was mostly ‘internal’ within its own ecosystem, rather than an ‘external’ shift to other forums. While the disruption impact on Kiwi Farms and its Telegram channels is highly significant with a very large effect size, it is not significant for Lolcow Farm (the effect size is small), see Table II.

Table II: The significance of the disruption in daily activities of Kiwi Farms, its Telegram channels, and Lolcow Farm.
Platforms            Variables   Mann-Whitney U tests     δ
Kiwi Farms           # posts     U = 418.00, p < .0001    -0.89
                     # threads   U = 416.50, p < .0001    -0.89
                     # users     U = 388.00, p < .0001    -0.90
Kiwi Farms Telegram  # messages  U = 7461.50, p < .0001   0.94
                     # replies   U = 6980.00, p < .0001   0.82
                     # emojis    U = 7201.00, p < .0001   0.88
                     # users     U = 7361.50, p < .0001   0.92
Lolcow Farm          # posts     U = 3349.00, p = .1540   -0.13
                     # threads   U = 3830.50, p = .9791   -0.00

Table III: The significance of the disruption in daily traffic to each individual domain of Kiwi Farms, and to all domains.
Platforms        Variables   Mann-Whitney U tests     δ
kiwifarms.net    # visits    U = 99.00, p < .0001     -0.97
kiwifarms.ru     # visits    U = 3310.50, p = .1228   -0.14
kiwifarms.top    # visits    U = 6125.50, p < .0001   0.60
kiwifarms.cc     # visits    U = 3670.00, p = .6221   -0.04
kiwifarms.is     # visits    U = 748.00, p < .0001    -0.81
kiwifarms.st     # visits    U = 6560.00, p < .0001   0.71
All domains      # visits    U = 545.00, p < .0001    -0.86

4.3. Traffic Fragmentation

Before Cloudflare’s action, traffic towards Kiwi Farms (measured by Similarweb) was relatively steady, mostly concentrated on the primary domain. However, we see the Streisand effect with an immediate peak in traffic of around 50% more visits and 85% more visitors once the site was disrupted. The publicity given by the takedown presumably boosted awareness and attracted people to visit both the primary and alternative domains. Traffic to the primary domain was then significantly fragmented across other previously abandoned domains, resulting in kiwifarms.net accounting for less than 50% of visits one day after Cloudflare’s intervention, as shown in Figure 5.

Following the unavailability of kiwifarms.net, most traffic was directed to kiwifarms.ru, which was under DDoS-Guard’s protection (accounting for around 60% of total traffic on 4 September 2022). DDoS-Guard’s action on 5 September 2022 reduced traffic towards kiwifarms.ru sharply, while traffic towards kiwifarms.top peaked. The suspension of kiwifarms.top on the following day led to increased traffic towards kiwifarms.cc (a Pleroma decentralised web instance), but this only lasted for a couple of days before traffic shifted again to kiwifarms.is. The seizure of kiwifarms.is later led to traffic shifting to kiwifarms.st, but that was also short-lived.

The forum recovery on 27 September 2022 gradually directed almost all traffic back to the primary domain, and by 22 October 2022, kiwifarms.net accounted for almost all traffic, albeit at about half the volume. This effect is highly consistent with what we found in our forum data, indicating a reliable pattern. Overall, our evidence suggests a clear traffic fragmentation across the different domains of Kiwi Farms, in which people attempted to visit surviving domains when one was disrupted. While the observed fragmentation is clear, the impact on two domains is not statistically significant when assessing the period as a whole. However, it is highly significant for the total traffic, notably the substantial drops of the primary domain kiwifarms.net (see Table III).

5. The Impacts on Relevant Stakeholders

We have looked at the impacts of the disruption on Kiwi Farms itself. This section examines the effects on relevant stakeholders, including the harassed victim, the community leading the campaign, the industry, the forum operators, and active forum users who posted at least once. As our ethics approval does not allow the study of individuals, all measurements are conducted collectively on subsets of users. Besides quantitative evidence, we also qualitatively look at statements made by tech firms about the incident.

Figure 6: Number of daily tweets and reactions made by the community about the campaign. Figure scales are different.

5.1. The Community that Started the Campaign

There were 3 886 users in the online community involved in starting the campaign. Of these, 1 670 users (42.97%) were responsible for around 80% of tweets. There was a sharp increase in tweets and reactions at the beginning (see Figure 6). The first peak was on 25 August 2022 with nearly 900 tweets by around 600 users. However, this dropped rapidly to less than 100 per day after a few weeks when Cloudflare and DDoS-Guard took action, and almost to zero two weeks later. The number of tweets specifically mentioning Cloudflare (such as their official account, as well as those for jobs, help, and developers) was around 200 in the beginning but decreased over time, and dropped to zero after they took action. This lasted for roughly one month until after the forum recovered: we see around 400 tweets mentioning Cloudflare, twice the previous peak, and accounting for almost all such tweets that day. However, these tweets appeared to be primarily associated with another campaign counted by the hashtag #stopdoghate, suggesting a short-lived outlier instead of a genuine peak.

The trans activist who launched the campaign was engaged at the beginning but then became much less active in posting new tweets, although she still replied to people. Her posting volume was, however, trivial compared to the overall numbers: she made only four tweets on the day the campaign started, the number then dropped quickly to only one on 4 September 2022 after Cloudflare took action, and zero thereafter. This suggests that although she sparked the campaign, she might not have been its primary maintainer.

We see no notable peak of tweets after the forum was completely shut down, suggesting a clear loss of interest in pursuing the campaign, both from people posting tweets and people reacting to tweets. The community seemed to get bored quickly after a few weeks when they appeared to have gotten what they wanted – ‘Kiwi Farms is dead, and I am moving on to the next campaign’, tweeted the activist.

5.2. The Industry Responses

Unlike forum activity, there is no quantitative data covering the impact on industry actors, so we switch to qualitative analysis of public statements made by those who directly attempted to terminate the forum. We first compiled a list of involved tech firms seen in the takedown incidents, then looked at their websites, news, and blogs to spot their official statements, where available. We repeated the search regularly; the final list consists of four firms: Cloudflare, DDoS-Guard, DiamWall, and Harica. We then took a deductive approach to understand (1) their hosting policy, (2) their perspective on Kiwi Farms, and (3) their reactions to the community pressure.

Cloudflare set out their abuse policies on 31 August 2022 without directly mentioning the Twitter campaign [76]. In summary, the firm offers traffic proxying and DDoS protection to a large number of (mostly non-paying) sites regardless of the content hosted, including Kiwi Farms. The firm maintained that abusive content alone is not an issue, and that the forum – while immoral – deserved the same protection as other customers, as long as it did not violate US law. Although Cloudflare are entitled to refuse business from Kiwi Farms, they initially took the view that doing so because of its content would create a bad precedent, leading to unintended consequences for content regulation and making things harder for Cloudflare. This could affect the whole Internet, as Cloudflare handles a large proportion of network traffic. They did not want to get involved in policing online content, but if they had to do it, they would rather do so in response to a court order than to popular opinion. The firm had previously dropped the neo-Nazi website Daily Stormer [12] and the extremist board 8Chan [13] because of their links with terrorist attacks and mass murders, and because of false claims that Cloudflare secretly supported them. They also argued that dropping service for Kiwi Farms would not remove the hateful content, but only slow it down for a while.

Nevertheless, Cloudflare did a U-turn a few days later, announcing on 3 September 2022 that they would terminate service for Kiwi Farms [24]. They explained that the escalating pressure campaign had made users more aggressive, which might lead to crime. They had reached out to law enforcement in multiple jurisdictions regarding potential criminal acts, but as the legal process was too slow compared to the escalating threat, they made the decision alone [24]. They still claimed that following a legal process would be the correct policy, and denied that the decision was a direct result of community pressure. Cloudflare’s action also inadvertently led to the termination of a neo-Nazi group in New Zealand, which was hosted by the same company as the forum [77].

DDoS-Guard’s statements about the incident told a similar story [25]. Although they can restrict access for customers who violate their acceptable use policy, content moderation is not their duty (except under a court order), so they do not need to determine whether every site they protect violates the law. DiamWall took the same line; they claimed that they are not responsible for, and unable to moderate, content hosted on websites [26]. They also maintained that terminating services in response to public pressure is not good policy, but that the case of Kiwi Farms was exceptional due to its ‘revolting’ content. They further noted that their actions could only delay things rather than fix the root cause, as the forum could find another provider. DiamWall’s statement was later removed without any clear explanation; it is now only accessible through online archives.

Unlike the three firms above, Harica – a Greek certificate authority issuing certificates for .onion sites – took a different line. They confirmed their support for freedom of speech and stated that they would not censor any website, but that they are obligated to investigate complaints about websites violating the law, their Certificate Policy (CP), or their Certificate Practice Statement (CPS). After a review process, on 15 May 2023 they announced they would revoke the .onion certificates issued to Kiwi Farms, due to concerns about harassment connected to suicides, and set a 3-day timeframe for Kiwi Farms to find a new authority. Their support team were then targeted with various threats and harassment; however, on the basis of one ‘kind and polite’ message pointing out that Harica is one of only two authorities issuing .onion certificates, they postponed the decision the following day, pending further law enforcement investigation, as Kiwi Farms had very limited alternatives for protecting its site [78].

Whether or not they blocked Kiwi Farms, it is understandable that infrastructure and certificate providers may not want to get involved in content regulation the way Facebook and Google have to, as moderation is complex, challenging, contentious and expensive [79].

5.3. The Forum Operators​

Figure 7: Number of public announcements posted daily by the forum operators since the Telegram channel was created.

The disruption of Kiwi Farms led to a cat-and-mouse game, in which tech firms tried to shut it down by various means while the forum operators tried to get it back up. We extract the forum operators’ messages from a Telegram channel activated after the Twitter campaign, where they posted 107 announcements during the period, mostly about when and where the forum would be back, ongoing issues (e.g., DDoS attacks, industry blocks), and their plans to fix them.

The admins were very active, for example sending seven consecutive messages on 23 August 2022, mostly concerning the large DDoS attack that day (see Figure 7). The second peak was on 6 September 2022, after Cloudflare and DDoS-Guard withdrew service, and mostly concerned forum availability. The number of announcements then gradually decreased, especially after the second recovery, with many days having no messages at all. A DDoS attack hitting the forum during Christmas 2022 caught the admins’ attention for a while. Their activity was inversely correlated with the forum’s stability: they were less active when the site was up and running stably or when there were no new incidents, while many announcements were posted in September, late October, and late December 2022, when the forum was under DDoS attacks and other disruptions, as shown in Figure 4.

We took a deductive approach based on the extracted announcements to understand the effort the forum operators made to restore service. Kiwi Farms needed DDoS protection to hide its origin IP address and withstand cyberattacks, so the operators first switched their third-party DDoS protection to DDoS-Guard, then to DiamWall, yet these firms also dropped them as a customer. They then attempted to build an anti-bot mechanism themselves based on HAProxy Protection – open-source software that uses proof-of-work challenges to stop bots, spam, and DDoS [80] – and claimed to be resilient to thousands of simultaneous connections. They also changed hosting providers, first to VanwaTech and eventually to their own firm 1776 Solutions, and attempted to route their traffic through other ISPs. They actively maintained infrastructure, fixed bugs, and gave users instructions for dealing with their passwords when the forum suffered a breach. The operators’ efforts appeared competent and consistent.
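
To illustrate the kind of defence involved, the sketch below implements a minimal proof-of-work challenge in the spirit of HAProxy Protection and KiwiFlare; it is our illustration rather than the forum’s actual code, and the hash construction and difficulty are assumptions.

```python
# Minimal proof-of-work challenge sketch (illustrative, not KiwiFlare's code).
import hashlib
import itertools
import os

DIFFICULTY_BITS = 20  # assumed difficulty: ~1M hashes expected per solve

def issue_challenge() -> str:
    """Server side: hand each visitor a random nonce."""
    return os.urandom(16).hex()

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(nonce: str) -> int:
    """Client side: brute-force a counter until the hash is hard enough.
    Cheap for one human visitor, expensive for thousands of bot connections."""
    for counter in itertools.count():
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return counter

def verify(nonce: str, counter: int) -> bool:
    """Server side: a single hash checks the client's work."""
    digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

nonce = issue_challenge()
answer = solve(nonce)  # takes a few seconds in pure Python at 20 bits
assert verify(nonce, answer)
```

At 20 leading zero bits, a legitimate visitor pays roughly a million hashes once per challenge, while an attacker must pay that price for every connection opened.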

5.4. The Forum Members

People sharing the same passion naturally coalesce into communities, in which some key actors may play a crucial role in influencing the ecosystem [81, 82, 83]. We separate the pre- and post-disruption periods at 3 September 2022, when Cloudflare took action. Kiwi Farms activity is highly skewed: around 80% of pre-disruption posts were made by the 8.96% most active users (5 159), while the remaining 20% were made by the 91.04% less active users (52 430); we use the 80/20 split following the Pareto principle [84]. There was around a 30% drop in the number of users after the disruption, as seen in Figure 4.
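
Concretely, such a split can be computed by sorting users by post count and walking down the cumulative distribution until the desired share of posts is covered; the sketch below uses toy numbers rather than the forum’s post table.

```python
from collections import Counter

# Hypothetical per-user post counts; the real analysis uses the forum's post table.
posts_per_user = Counter({"alice": 950, "bob": 600, "carol": 40, "dave": 25, "eve": 10})

def pareto_core(posts_per_user: Counter, share: float = 0.8) -> list:
    """Smallest set of top posters covering `share` of all posts."""
    total = sum(posts_per_user.values())
    core, covered = [], 0
    for user, n in posts_per_user.most_common():
        if covered >= share * total:
            break
        core.append(user)
        covered += n
    return core

core = pareto_core(posts_per_user)
print(core, f"{len(core) / len(posts_per_user):.0%} of users")  # ['alice', 'bob'] 40% of users
```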

There were 1 571 new usernames after the disruption, which could belong either to newcomers or to old members creating new accounts after losing access to their old ones. Multi-platform users tend to pick similar pseudonyms on different platforms [85]; we believe returning users are also likely to do so to preserve their reputation, so they can be detected if their usernames are both very similar and rare enough, although common handles are often picked by multiple individuals [86]. While the similarity of two usernames can be determined by the Levenshtein distance, we use an n-gram model trained on the Reuters corpus [87] to estimate the rarity of usernames, considering a username rare if the highest n-gram probability observed in it is no greater than 1%. We found 5.31% such users among the 1 571 new pseudonyms: 11 returning core actors (0.21% of core users) and 72 returning casual actors (0.14% of casual users), while the remaining 1 488 are newcomers. This estimation may overlook returning users who picked entirely new usernames, yet we believe their number is relatively small, as a mass password reset was mandated after the breach rather than account replacement.
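
The sketch below makes the matching procedure concrete under stated assumptions: plain Levenshtein distance for similarity, and a character-level trigram model estimated from NLTK’s copy of the Reuters corpus [87] for rarity, with the 1% threshold read as a cap on the most probable conditional trigram in the name. The distance cut-off and model details are illustrative choices rather than the exact parameters of our pipeline.

```python
# Returning-user detection sketch: flag new usernames that are both very
# similar to an old one and rare. Requires `pip install nltk` and a one-off
# nltk.download("reuters").
from collections import Counter
from nltk.corpus import reuters

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Character-level trigram model: P(c3 | c1 c2) estimated from corpus tokens.
tri, bi = Counter(), Counter()
for word in reuters.words():
    w = word.lower()
    tri.update(w[i:i + 3] for i in range(len(w) - 2))
    bi.update(w[i:i + 2] for i in range(len(w) - 1))

def is_rare(name: str, threshold: float = 0.01) -> bool:
    """Rare if even the most probable trigram in the name is <= threshold."""
    probs = [tri[name[i:i + 3]] / bi[name[i:i + 2]] if bi[name[i:i + 2]] else 0.0
             for i in range(len(name) - 2)]
    return bool(probs) and max(probs) <= threshold

def likely_returning(new_name: str, old_name: str, max_dist: int = 2) -> bool:
    """Similar *and* rare: the heuristic described above, with assumed cut-offs."""
    n, o = new_name.lower(), old_name.lower()
    return levenshtein(n, o) <= max_dist and is_rare(n)
```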

We analyse the behaviour of those active after the disruption, namely the ‘core survivors’ (2 529, returning actors included), ‘casual survivors’ (6 915, returning actors included), and ‘newcomers’ (1 488). Around half of core users (49.02%) remained engaged, while only 13.19% of casual users stayed (86.81% had left). On average, before the disruption, each ‘core survivor’ posted 22.3 times more than each ‘casual survivor’ (1 800.03 vs 80.82 posts), while their active period – between their first and last posts – was around 2.5 times longer (1 306.94 vs 516.84 days).

Figure 8: Number of average posts per day of survivors and newcomers, who posted at least once after the disruption.

Figure 9: Average toxicity, identity attack, and threat levels of posts made by survivors and newcomers after the event.

Posting Activity​

Before the takedown, each core survivor made about 3.5 posts per day on average, compared with around 3 afterwards – see Figure 8. The activity of the other survivors appears consistent with the pre-disruption period: their average was around 2 posts per day before the incident and almost unchanged afterwards. These figures suggest that the decreasing posting volume seen in Figure 4 was mainly due to users leaving the forum rather than to surviving users losing interest – the survivors re-engaged quickly after the forum recovered. Newcomers posted slightly less than casual survivors before the forum went completely down on 18 September 2022 (fewer than 2 posts per day), yet their average posting volume then increased quickly. This suggests that the disruption, besides removing a very large proportion of old casual users, drew in many new users who then became roughly as active as the core survivors.

Toxicity Levels​

We further examine the toxicity of posts made by the surviving actors and newcomers, before and after the disruption. Figure 9 shows the daily average levels of toxicity, identity attack, and threat for core survivors, casual survivors, and newcomers. In general, the scores were rather low, as most posts are non-toxic (despite some having very high scores). There were small changes in the average scores of surviving actors; notably, peaks occurred 2 days after the campaign sparked on Twitter, with average scores rising significantly to around 30–50%, especially for toxicity and identity attack. However, these dropped quickly a couple of days later, returning to normal levels.
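
For readers unfamiliar with such scoring, the sketch below shows how a single post can be scored with Google’s Perspective API [63] for the three attributes plotted in Figure 9. The endpoint and request shape follow the public API; the key is a placeholder, and the daily cohort averages are then plain means over each cohort’s posts.

```python
# Hedged sketch: scoring one post with the Perspective API [63].
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, obtained from Google Cloud
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def score_post(text: str) -> dict:
    """Return {attribute: score in [0, 1]} for a single post."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}, "IDENTITY_ATTACK": {}, "THREAT": {}},
    }
    resp = requests.post(URL, json=body, timeout=30)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return {attr: s["summaryScore"]["value"] for attr, s in scores.items()}

# Daily cohort averages (as in Figure 9) are then simple means over the
# scores of each cohort's posts for each day.
```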

Newcomers, on the other hand, showed a significant increase in toxicity and identity attack during the first two weeks after the disruption (about 2–2.5 times higher), largely surpassing the surviving actors. Their threat scores did not increase at that time, but peaked at around twice the previous level after the forum first recovered on 27 September 2022. This suggests that while the surviving members became more toxic when their community was under attack, new users became much more toxic for a few weeks after joining the discussion, before declining gradually to the same levels as old users. This is in line with the recent finding that users moving to other platforms can become more toxic than before [33].

Social Interactions​

To measure how these survivors interact with each other, we build a social interaction network among Kiwi Farms members over time. We consider each active user as a node, with an edge between two users if they posted in the same thread, weighted by the number of such interactions [88]. We then explore changes in that shared-interest engagement through the network structure, focusing on degree centrality, which indicates how well-connected a user is across the entire network [89]. In a healthy community, such engagement should grow steadily.
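
The construction is straightforward: the sketch below builds such a weighted co-posting graph with networkx from a toy post list (user and thread names are illustrative) and computes degree centrality for every node.

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

# Toy (user, thread) post list; the real input is the forum's post metadata.
posts = [("u1", "t1"), ("u2", "t1"), ("u3", "t1"), ("u1", "t2"), ("u2", "t2")]

thread_members = defaultdict(set)
for user, thread in posts:
    thread_members[thread].add(user)

G = nx.Graph()
for members in thread_members.values():
    for u, v in combinations(sorted(members), 2):
        if G.has_edge(u, v):
            G[u][v]["weight"] += 1  # one more shared thread
        else:
            G.add_edge(u, v, weight=1)

# Degree centrality: the fraction of other users each user shares a thread with.
centrality = nx.degree_centrality(G)
print(centrality)
```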

Figure 10: Number of nodes and edges in the social interaction network made by Kiwi Farms members over time.

Table IV: Number of posts mentioning the two major involved parties during the period, with proportions of the total posts.

Platform     | Mentioning Kiwi Farms | Mentioning Cloudflare | Mentioning both parties
Kiwi Farms   | 10 099 (1.45%)        | 1 515 (0.22%)         | 300 (0.04%)
Telegram     | 3 794 (0.72%)         | 286 (0.05%)           | 44 (0.01%)
Lolcow Farm  | 1 494 (0.31%)         | 197 (0.04%)           | 44 (0.01%)

Figure 11: The degree centrality of survivors and newcomers in the network over time. Figures are in different scales.

The network had developed stably before the disruption, with around 55.3k nodes and 131.3M edges on 1 July 2022, reaching around 57.2k nodes and 137.6M edges just before the Twitter campaign started (see Figure 10). There was a rapid increase in both nodes and edges shortly after the Twitter campaign, suggesting that the campaign drew more actors into interacting with others. The Cloudflare and DDoS-Guard actions paused the network’s growth for a few weeks, yet it resumed shortly after the forum’s recovery. As of 31 December 2022, the network had reached 59.1k nodes and 149.3M edges.

Core users are better connected than casual users, as shown in Figure 11. The Twitter campaign markedly boosted the centrality of both core and casual survivors. Before it, core survivors had been becoming more central over time while casual survivors were becoming less so; after the campaign, the centrality of both steadily increased. Newcomers came into play quickly afterwards, and the forum recovery made them more central too.

Discussion of the Incident​

We examine how users mention the two major involved parties (Kiwi Farms and Cloudflare) during the period by extracting posts containing the case-insensitive keywords ‘kiwifarm’, ‘kiwi farm’, ‘cloudflare’, and ‘cloud flare’ from Kiwi Farms, its Telegram channel, and Lolcow Farm. Table IV shows that discussion of the two parties was highly skewed and varied significantly across platforms. Telegram users discussed Kiwi Farms far more than Cloudflare (13.3 times as much), while the ratios were less skewed on Kiwi Farms and Lolcow Farm, at 6.7 and 7.6 respectively. These discussions were concentrated in a small number of conversations; for example, over 50% of posts mentioning Cloudflare on Kiwi Farms come from just 4 threads.
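
As a minimal sketch, the keyword matching can be expressed as one case-insensitive pattern that treats the space as optional; the pattern below is an illustration rather than the exact one used in our pipeline.

```python
import re

# 'kiwifarm' / 'kiwi farm' and 'cloudflare' / 'cloud flare' in one pattern.
MENTION = re.compile(r"(?P<kf>kiwi\s?farm)|(?P<cf>cloud\s?flare)", re.IGNORECASE)

def mentions(text: str) -> set:
    """Return which of the two parties a post mentions."""
    found = set()
    for m in MENTION.finditer(text):
        found.add("kiwifarms" if m.group("kf") else "cloudflare")
    return found

assert mentions("CloudFlare dropped Kiwi Farms") == {"kiwifarms", "cloudflare"}
```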

Although these posts were a trivial contribution to the total posting volume on all three platforms, as shown in Figure 4, almost all of them came after the Twitter campaign, with nearly no such discussion before. The topic was popular for a short period, as shown in Figure 12. Users on both forums started discussing the incident shortly after the campaign began on 22 August 2022. The topic was energised on both forums after Cloudflare’s action on 3 September 2022, peaking on 4 September 2022 on Kiwi Farms with over 400 and 600 posts about Kiwi Farms and Cloudflare respectively (around 5% and 7.5% of all posts that day). After Kiwi Farms activity was significantly reduced by DDoS-Guard’s action on 5 September 2022, posts mentioning Kiwi Farms and Cloudflare on Lolcow Farm peaked at around 80 and 20, respectively. (The numbers for Lolcow Farm are typically lower than for Kiwi Farms, as Lolcow Farm is smaller and centred on images rather than text. We do not collect images for safety and ethical reasons, but we believe the trends observed are indicative, if not fully reliable.) Telegram activity regarding the incident was somewhat different, as comments were only allowed after the forum went completely down; it followed the same trends as overall activity, with discussion of Kiwi Farms peaking largely while the forum was inaccessible, as part of the discussion had moved there.

Discussion mentioning Kiwi Farms greatly exceeded that mentioning Cloudflare until the day Cloudflare took action (see the first graph in Figure 12). The pattern seen on Lolcow Farm suggests that attention to the incident was reflected there, although the peak did not correlate with the overall volume observed in Figure 4, as this contribution is trivial compared to the total. There were almost no posts about Cloudflare after Kiwi Farms became completely inaccessible, but there were still around 20 posts about Kiwi Farms on Lolcow Farm during that week. While nothing changed on Kiwi Farms during the second recovery, there was an increase in posts about the incident on Lolcow Farm, presumably as people got the news.

Overall, attention on Kiwi Farms, its Telegram channels, and Lolcow Farm was directed to the incident by the Twitter campaign, with posting volume peaking after the industry action. We believe this shows a genuine effect, as none of the users there had discussed Cloudflare and Kiwi Farms before. However, the effect was temporary, dropping almost to the pre-disruption level after the second recovery: it lasted a few days on Kiwi Farms, around one week on Lolcow Farm (partly because many Kiwi Farms domains were down while Lolcow Farm was still active), and a few weeks on Telegram. Users’ interest was fleeting; they largely stopped talking about the incident after a few weeks.

Figure 12: Number of daily posts regarding the disruption on Kiwi Farms, its Telegram channels, and Lolcow Farm.

6. Tensions, Challenges, and Implications​

The disruption we analysed could be the first time a series of infrastructure firms were involved in a collective effort to terminate a website. While deplatforming can reduce the spread of abusive content and safeguard people’s mental and physical safety, and is already routine on social-media platforms like Facebook, doing so without due process raises a number of philosophical, ethical, legal, and practical issues. For this reason, Meta set up its own Oversight Board.

6.1. The Efficacy of the Disruption​

The disruption was more effective than previous DDoS attacks on the forum, as observed from our datasets. Yet the impact, although considerable, was short-lived. While part of the activity shifted to Telegram, half of the core members returned quickly after the forum recovered. And while most casual users were shaken off, others turned up to replace them. Cutting forum activity and users by half might count as a success if the campaign’s goal was just to hurt the forum; but if the objective was to ‘drop the forum’, it clearly failed. There is a lack of data on real-world harassment caused by forum members, such as online complaints or police reports, so we are unable to measure whether the campaign had any effect in mitigating the physical and mental harm inflicted on people offline.

Kiwi Farms suffered further DDoS attacks and interruptions after our study period, but it managed to recover quickly, at some points reaching the same activity level as before the disruption. It then moved primarily to the dark web in May–July 2023. The forum operator has shown commitment and persistence despite repeated disruption by DDoS attacks and infrastructure providers. He attempted to bring Kiwi Farms back online on the clearnet in late July 2023 under a new domain, kiwifarms.pl, protected by their in-house DDoS mitigation system KiwiFlare, but the clearnet version appears to be unstable.

One lesson is that while repeatedly disrupting digital infrastructure might significantly lessen the activity of online communities, it may just displace them, as has also been noted in previous work [90]. Campaigners can get bored after a few weeks, while the disrupted community is more determined to recover its gathering place. As with the re-emergence and relocation of extremist forums like 8Chan and Daily Stormer, Kiwi Farms is now back online. This supports the argument that truly disrupting active online platforms can be very challenging, much like the short-term impact of shutting down cybercrime marketplaces [20] and DDoS-for-hire services [14, 15], and of combined law-enforcement and industry action against threats such as botnets and fraudulent ad networks, where a capable botmaster can quickly deploy new modules to counteract the takedown [53, 54]. Deplatforming alone may be insufficient to disperse or suppress an unpleasant online community in the long term, even when concerted action is taken by a series of tech firms over several months. It may weaken a community for a while by fragmenting its traffic and activity, and scare away casual observers, but it may also make core group members even more determined, and recruit newcomers via the Streisand effect, whereby attempts at censorship can be self-defeating [11, 91].

6.2. Censorship versus Free Speech​

One key factor may be whether a community has capable and motivated defenders who can continue to fight back by restoring disrupted services, or whether those defenders can somehow be disabled, through arrest, deterrence or exhaustion. This holds whether the defenders are forum operators or distributed volunteers. So under what circumstances might law enforcement take decisive action to decapitate an online forum, as the FBI did, for example, with the notorious Raid Forums [18] and Breach Forums [92]?

If some of a forum’s members break the law, is it a dissident organisation with a few bad actors, or a terrorist group that should be hunted down? Many troublesome organisations attract hot-headed young members; groups ranging from animal-rights activists and climate-change protesters to trade-union organisers do occasionally fall foul of the law. But whether they are labelled terrorists or extremists is often a political matter. People may prefer censoring harmful misinformation [93], yet taking down a website on which a whole community relies will often be hard to defend as a proportionate and necessary law-enforcement action. The threat of legal action can be countered by the operator denouncing whatever specific crimes were complained of; in this case, the Kiwi Farms founder denounced SWAT attacks and other blatant criminality [55]. Indeed, a competent provocateur will stop just short of the point at which their actions would call down a vigorous police response.

The free speech protected by the US First Amendment [94] is in clear tension with the security of harassment victims. The Supreme Court has over time established tests to determine what speech is protected and what is not, including clear and present danger [95], a sole tendency to incite or cause illegal activity [96], preferred freedoms [97, 98], and compelling state interest [99]; however, the line drawn between them is not always clear-cut. Other countries are more restrictive, with France and Germany banning Nazi symbolism and Turkey banning material disrespectful of Mustafa Kemal Atatürk. In the debates over the Online Safety Bill currently before the UK Parliament, the Government at one point proposed to ban ‘legal but harmful’ speech online, while not making these speech acts unlawful face-to-face [43]. These proposals related to websites encouraging eating disorders or self-harm. Following the tragic suicide of a teenage girl [100], tech firms are under pressure to censor such material in the UK using their terms of service or by tweaking their recommendation algorithms.

There are additional implications in taking down platforms whose content is harmful but not explicitly illegal. Requiring firms to do this, as was proposed in the Online Safety Bill, will drastically expand online content regulation. The UK legislation hands the censor’s power to the head of Ofcom, the broadcast regulator, who is a political appointee. It will predictably lead to overblocking and invite abuse of power by government officials or big tech firms, who may suppress legitimate voices or dissenting opinions. There is an obvious risk of individuals or groups being unfairly targeted for political or ideological reasons.

6.3. The Role of Industry in Content Moderation​

The rapid growth of cybercrime-as-a-service throughout the 2010s has made attacks easier than ever. A teenager with as little as $10 can use a DDoS-for-hire service to knock your website offline [101], so controversial websites depend on the grace and favour of a large hosting company or a specialist DDoS-prevention contractor. This is just one aspect of a broader trend in tech: the Internet is becoming more centralised around a small number of big firms, from online social platforms and hosting companies to transit networks, service providers and exchange points [102]. While some provide moderation tools favoured by content creators [103], some claim to be committed to fighting hate, harassment, and abuse yet are disproportionately responsible for serving bad content online [90], and the effort they put into the fight is variable [104, 105]. Content moderation has recently shifted to the infrastructure layer [106]; now that activists have pressured infrastructure providers to act as content moderators, policymakers will be tempted too. Some providers may stand up to political or social pressure, because moderation is both expensive and difficult, but others may fold from time to time under pressure or legal compulsion. This would undermine the end-to-end principle of the Internet, as enshrined for example in s 230 of the Communications Decency Act in the USA and in the EU’s Net Neutrality Regulation [107].

Private companies must comply with court orders directing them to remove illegal content from their infrastructure. However, deplatforming Kiwi Farms or any other customer does not violate the principle of free speech; it is essentially a contractual matter, and firms have the right to cease supporting a website that violates their policies. Infrastructure providers may occasionally need to work expediently with law enforcement in the case of an imminent threat to life. Most providers have worked out ways of doing this, but the mechanisms can be too sluggish. Cloudflare attempted to collaborate with law enforcement to sort out the case of Kiwi Farms, yet the process could not keep up with the escalating threats, and the firm ended up taking unilateral action, relying on its terms of service [24]. In an ideal world, we would have an international legal framework for taking down websites that host illegal content or promote crime; unfortunately, no such framework exists.

The Budapest Convention [108] criminalises some material on which all states agree, such as child sex abuse images, but even there the boundaries are contested [109]. Online drug markets such as Silk Road and Hansa Market have been taken down because of other laws – drug laws – that also enjoy international standardisation and collaboration. Copyright infringement also gets the attention of international treaties and coordinated action by tech majors, though civil law plays a greater role here than criminal law. Then there is material about which some states feel strongly but others do not; ‘one man’s freedom fighter is another man’s terrorist’. And then there’s a vast swamp of fake news, animal cruelty, conspiracy theories, and other material that many find unpleasant or distressing, and which social networks moderate for the comfort of both their users and their advertisers. Legislators occasionally call for better policing of some of this content.

6.4. Policy Implications​

Content moderation has become a political, policy, and public concern [110, 111]. The UK Online Safety Bill proposes a new regulator who will be able to apply for a court order mandating that tech firms disrupt an objectionable online activity [43]. One might imagine Ofcom deciding to take down Kiwi Farms if the forum’s target had been a resident of Britain rather than Canada, and going to the various tech firms involved in the disruption we describe here, serving them one after another with orders signed by a judge in the High Court in London. Even if all the companies were to comply, rather than appealing or simply ignoring the court, it is hard to see how such an operation could be anything like as swift, coordinated or effective as the action we describe, which tech companies took on their own initiative. Where the censor’s OODA loop – the process by which it can observe, orient, decide and act – involves a government agency assessing the effects of each intervention and then going to court to order the next one, the time constant stretches from hours to months. And in any case, government interventions in this field are often significant but rather short-lived [14, 15].

One factor contributing to the resilience of Kiwi Farms is the technical competence of the forum owner. He has consistently and capably dealt with DDoS attacks on the forum, maintained its codebase after XenForo revoked its licence [56], upgraded server hardware and network capacity, and developed in-house DDoS protection mechanisms. Deplatforming can be more effective if the maintainer of a blatantly illegal website can be arrested and jailed (or otherwise incapacitated), as happened with Silk Road. With a forum like Kiwi Farms, whose operator has denounced criminal acts perpetrated via his infrastructure [55], the criminal-law option may simply not be available. The art of being a provocateur includes stopping just short of the point at which an aggressive criminal-law response would follow. This exposes the limits of civil-law remedies and voluntary action by platforms.

Previous work has also explored why governments are less able than private actors to take down bad sites [11]; that work analysed single websites with clearly illegal content, such as those hosting malware, phishing lures or sex-abuse images. This study shows why taking down an active community is likely to be even harder. Even when several tech firms roll their sleeves up and try to suppress a community, some of whose members have indulged in crime and against which there is an industry consensus, the net effect may be modest at best. Our case study may represent the best result that could be expected of online censorship, yet it only cut the users, posts, threads and traffic by about half. Our findings suggest that using content-moderation law to suppress an unpleasant online community may be very challenging.

6.5. Limitations and Future Work​

Measuring the link between physical harassment and Kiwi Farms, as well as the cost of the actual harm caused by forum members to real-world victims, would be a valuable contribution. However, we lack ground-truth data about real-life events, which cannot be observed from forum discussions alone. Investigating doxxing-related posts that share real victims’ information would be a good start, but the main challenge is validating data posted by untrusted users at scale, in the absence of a robust way to identify users. Our forum data is also limited for studying user migration from Kiwi Farms to its competitor Lolcow Farm, as pseudonyms are unavailable on Lolcow Farm, so it is unclear whether some Kiwi Farms members have shifted there.

Our data scrapers run in near real time, but there is still a chance of missing messages that are posted and then removed swiftly; we expect the number of such missing messages to be relatively small. More insights could be gleaned from private or protected posts, as people can be more extreme when posting in private; however, we choose not to analyse them due to potential harm and legal and ethical issues. Kiwi Farms is now back online, and may well succeed in maintaining its accessibility on the clearnet. We will continue to monitor it, and extend our measurements to the more recent incidents in a follow-up report.

7. Conclusion​

Online communities may not only act as a discussion place but provide mutual support for members who share common values. For some, it may be where they hang out; for others, it may become part of their identity. Legislators who propose to ban an online community might consider precedents such as Britain’s ban on Provisional Sinn Féin from 1988–94 due to its support for the Provisional IRA during the Troubles, or the bans on the Muslim Brotherhood enacted by various Arab regimes. (During the Sinn Féin ban, it was illegal to transmit the voice or image of their spokesmen in Britain, so the BBC and other TV stations employed actors to read the words of Gerry Adams and Martin McGuinness.) Declaring a community to be illegal and thus forcing it underground may foster paranoid worldviews, increase signals associated with toxicity and radicalisation [45, 33] and have many other unintended consequences. The Kiwi Farms disruption, which involved a substantial concerted effort by the industry, is perhaps the best outcome that could be expected even if the censor were agile, competent and persistent. Yet this has demonstrated that merely attempting to deplatform an active standalone online community is not enough to deal effectively with hate and harassment, especially as the attempt failed to arrest, exhaust, or otherwise incapacitate the forum’s maintainer.

We believe the harm and threats associated with online hate communities may justify action despite the right to free speech. But within the framework of the EU and the Council of Europe which is based on the European Convention on Human Rights, such action will have to be justified as proportionate, necessary and in accordance with the law. It is unlikely that taking down a whole community or arresting its maintainer because of a crime committed by a single member can be proportionate. For a takedown to be justified as necessary, it must also be effective, and this case study shows how high a bar that could be. For a takedown to be in accordance with the law, it cannot simply be a response to public pressure. There must be a law or regulation that determines predictably whether a specific piece of content is illegal, and a judge or other neutral finder of fact would have to be involved.

The last time a Labour government won power in Britain, it won on a promise to be ‘Tough on Crime, and Tough on the Causes of Crime’. Some scholars of online abuse are now coming to a similar conclusion that the issue may demand a more nuanced approach [3, 62]: as well as the targeted removal of content that passes an objective threshold of illegality, the private sector and governments should collaborate to combine takedowns with measures such as education and psycho-social support [112]. And where the illegality involves violence, it is even more vital to work with local police forces and social workers rather than just attacking the online symptoms [109].

There are multiple research programmes and field experiments to effectively detox young men from misogynistic attitudes, whether in youth clubs and other small groups, at the scale of schools, or even by gamifying the identification of propaganda that promotes hate. But most countries still lack a unifying strategy for violence reduction [113]. In both the US and the UK, for example, while incel-related violence against women falls under the formal definition of terrorism, it is excluded from police counterterrorism practice, and the politicisation of misogyny has made this a tussle space in which political leaders and police chiefs have difficulty in taking effective action. In turbulent debates, policymakers should first ask which tools are likely to work, and it is in this context that we offer the present case study.

Acknowledgments​

We thank the anonymous reviewers and the shepherd for their insightful and constructive feedback. We are grateful to Richard Clayton, Alastair R. Beresford, Yi Ting Chua, Ben Collier, Tina Marjanov, Konstantinos Ioannidis, Daniel R. Thomas, and Ilia Shumailov for their invaluable comments. This work is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 949127).

References​

  1. M. Mondal, L. A. Silva, and F. Benevenuto, “A Measurement Study of Hate Speech in Social Media,” in Proceedings of the ACM Conference on Hypertext and Social Media (HT), 2017.
  2. S. A. Aghazadeh, A. Burns, J. Chu, H. Feigenblatt, E. Laribee, L. Maynard, A. L. Meyers, J. L. O’Brien, and L. Rufus, “GamerGate: A Case Study in Online Harassment,” Online Harassment, 2018.
  3. D. Kumar, J. Hancock, K. Thomas, and Z. Durumeric, “Understanding the Behaviors of Toxic Accounts on Reddit,” in Proceedings of the ACM World Wide Web Conference (WWW), 2023.
  4. K. Gunton, “The Use of Artificial Intelligence in Content Moderation in Countering Violent Extremism on Social Media Platforms,” in Artificial Intelligence and National Security, 2022.
  5. M. Singhal, C. Ling, P. Paudel, P. Thota, N. Kumarswamy, G. Stringhini, and S. Nilizadeh, “SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice,” in Proceedings of the IEEE European Symposium on Security and Privacy (EuroS&P), 2023.
  6. E. de Keulenaar, A. Glyn Burton, and I. Kisjes, “Deplatforming, Demotion and Folk Theories of Big Tech Persecution,” Revista Fronteiras, 2021.
  7. R. Rogers, “Deplatforming: Following Extreme Internet Celebrities to Telegram and Alternative Social Media,” European Journal of Communication, 2020.
  8. H. Habib, M. B. Musa, F. Zaffar, and R. Nithyanand, “To Act or React: Investigating Proactive Strategies for Online Community Moderation,” arXiv:1906.11932, 2019.
  9. T. Gillespie, “Do Not Recommend? Reduction as a Form of Content Moderation,” Social Media + Society, 2022.
  10. I. Kayes, N. Kourtellis, D. Quercia, A. Iamnitchi, and F. Bonchi, “The Social World of Content Abusers in Community Question Answering,” in Proceedings of the ACM World Wide Web Conference (WWW), 2015.
  11. A. Hutchings, R. Clayton, and R. Anderson, “Taking Down Websites to Prevent Crime,” in Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), 2016.
  12. Cloudflare, “Why We Terminated Daily Stormer,” 2017.
  13. Cloudflare, “Terminating Service for 8Chan,” 2019.
  14. B. Collier, D. R. Thomas, R. Clayton, and A. Hutchings, “Booting the Booters: Evaluating the Effects of Police Interventions in the Market for Denial-of-service Attacks,” in Proceedings of the ACM Internet Measurement Conference (IMC), 2019.
  15. D. Kopp, M. Wichtlhuber, I. Poese, J. Santanna, O. Hohlfeld, and C. Dietzel, “DDoS Hide & Seek: On the Effectiveness of a Booter Services Takedown,” in Proceedings of the ACM Internet Measurement Conference (IMC), 2019.
  16. Bleeping Computer, “FBI Seized Domains Linked to 48 DDoS-for-hire Service Platforms,” 2022.
  17. Bleeping Computer, “FBI Seizes 13 More Domains Linked to DDoS-for-hire Services,” 2023.
  18. U.S. Department of Justice, “U.S. Leads Seizure of One of the World’s Largest Hacker Forums and Arrests Administrator,” 2022.
  19. U.S. District Court, “United States v. Ross William Ulbricht,” 2014.
  20. K. Soska and N. Christin, “Measuring the Longitudinal Evolution of the Online Anonymous Marketplace Ecosystem,” in Proceedings of the USENIX Security Symposium (USENIX Security), 2015.
  21. M. Pless, “Kiwi Farms, the Web’s Biggest Stalker Community,” 2016.
  22. S. Ambreen, “Kiwi Farms Linked to At Least 2 Murders and 4 Suicides,” 2019.
  23. Wired, “The End of Kiwi Farms, the Web’s Most Notorious Stalker Site,” 2022.
  24. Cloudflare, “Blocking Kiwifarms,” 2022.
  25. DDoS-Guard, “DDoS-Guard Terminating Services for Kiwi Farms,” 2022.
  26. DiamWall, “Service Continuation of Kiwi Farms,” 2022.
  27. Daily Dot, “Kiwi Farms Gets Booted from Another Major Domain,” 2022.
  28. S. Jhaver, C. Boylston, D. Yang, and A. Bruckman, “Evaluating the Effectiveness of Deplatforming as a Moderation Strategy on Twitter,” Proceedings of the ACM on Human-Computer Interaction (HCI), 2021.
  29. E. Chandrasekharan, U. Pavalanathan, A. Srinivasan, A. Glynn, J. Eisenstein, and E. Gilbert, “You Can’t Stay Here: the Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech,” Proceedings of the ACM on Human-Computer Interaction (HCI), 2017.
  30. H. M. Saleem and D. Ruths, “The Aftermath of Disbanding an Online Hateful Community,” arXiv:1804.07354, 2018.
  31. H. Innes and M. Innes, “De-platforming Disinformation: Conspiracy Theories and Their Control,” Information, Communication & Society, 2023.
  32. A. Rauchfleisch and J. Kaiser, “Deplatforming the Far-right: An Analysis of YouTube and BitChute,” The Social Science Research Network (SSRN), 2021.
  33. S. Ali, M. H. Saeed, E. Aldreabi, J. Blackburn, E. De Cristofaro, S. Zannettou, and G. Stringhini, “Understanding the Effect of Deplatforming on Social Networks,” in Proceedings of the ACM Web Science Conference (WebSci), 2021.
  34. K. Bryanov, D. Vasina, Y. Pankova, and V. Pakholkov, “The Other Side of Deplatforming: Right-Wing Telegram in the Wake of Trump’s Twitter Ouster,” in Proceedings of the International Conference on Digital Transformation and Global Society (DTGS), 2022.
  35. K. Thomas, D. Akhawe, M. Bailey, D. Boneh, E. Bursztein, S. Consolvo, N. Dell, Z. Durumeric, P. G. Kelley, D. Kumar et al., “SoK: Hate, Harassment, and the Changing Landscape of Online Abuse,” in Proceedings of the IEEE Symposium on Security and Privacy (S&P), 2021.
  36. M. Wei, S. Consolvo, P. G. Kelley, T. Kohno, F. Roesner, and K. Thomas, ““There’s So Much Responsibility on Users Right Now:” Expert Advice for Staying Safer from Hate and Harassment,” in Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), 2023.
  37. M. Aliapoulios, K. Take, P. Ramakrishna, D. Borkan, B. Goldberg, J. Sorensen, A. Turner, R. Greenstadt, T. Lauinger, and D. McCoy, “A Large-scale Characterization of Online Incitements to Harassment Across Platforms,” in Proceedings of the ACM Internet Measurement Conference (IMC), 2021.
  38. M. Williams, “Hatred Behind the Screens: A Report on the Rise of Online Hate Speech,” Cardiff University and Mishcon de Reya, Tech. Rep., 2019.
  39. J. A. Pater, M. K. Kim, E. D. Mynatt, and C. Fiesler, “Characterizations of Online Harassment: Comparing Policies Across Social Media Platforms,” in Proceedings of the ACM International Conference on Supporting Group Work (GROUP), 2016.
  40. R. Jiménez Durán, “The Economics of Content Moderation: Theory and Experimental Evidence from Hate Speech on Twitter,” The Social Science Research Network (SSRN), 2021.
  41. B. Fishman, “Dual-use Regulation: Managing Hate and Terrorism Online Before and After Section 230 Reform,” 2023.
  42. F. Schauer, “The Exceptional First Amendment,” The Social Science Research Network (SSRN), 2005.
  43. R. Anderson and S. Gilbert, “The Online Safety Bill,” Policy Brief, Bennett Institute for Public Policy, 2022.
  44. D. R. Thomas and L. A. Wahedi, “Disrupting Hate: the Effect of Deplatforming Hate Organizations on their Online Audience,” Proceedings of the National Academy of Sciences (PNAS), 2023.
  45. M. Horta Ribeiro, S. Jhaver, S. Zannettou, J. Blackburn, G. Stringhini, E. De Cristofaro, and R. West, “Do Platform Migrations Compromise Content Moderation? Evidence from r/the_donald and r/incels,” Proceedings of the ACM on Human-Computer Interaction (HCI), 2021.
  46. G. Russo, L. Verginer, M. H. Ribeiro, and G. Casiraghi, “Spillover of Antisocial Behavior from Fringe Platforms: the Unintended Consequences of Community Banning,” in Proceedings of the AAAI International Conference on Web and Social Media (ICWSM), 2023.
  47. C. Buntain, M. Innes, T. Mitts, and J. Shapiro, “Cross-platform Reactions to the Post-January 6 Deplatforming,” Journal of Quantitative Description, 2023.
  48. A. Mekacher, M. Falkenberg, and A. Baronchelli, “The Systemic Impact of Deplatforming on Social Media,” arXiv:2303.11147, 2023.
  49. C. Monti, M. Cinelli, C. Valensise, W. Quattrociocchi, and M. Starnini, “Online Conspiracy Communities are More Resilient to Deplatforming,” arXiv:2303.12115, 2023.
  50. A. Papasavva and E. Mariconti, “Waiting for Q: An Exploration of QAnon Users’ Online Migration to Poal in the Wake of Voat’s Demise,” arXiv:2302.01397, 2023.
  51. I. Goldstein, L. Edelson, D. McCoy, and T. Lauinger, “Understanding the (In) Effectiveness of Content Moderation: A Case Study of Facebook in the Context of the US Capitol Riot,” arXiv:2301.02737, 2023.
  52. S. Abramova and R. Böhme, “Out of the Dark: The Effect of Law Enforcement Actions on Cryptocurrency Market Prices,” in Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), 2021.
  53. Y. Nadji, M. Antonakakis, R. Perdisci, D. Dagon, and W. Lee, “Beheading Hydras: Performing Effective Botnet Takedowns,” in Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2013.
  54. P. Pearce, V. Dave, C. Grier, K. Levchenko, S. Guha, D. McCoy, V. Paxson, S. Savage, and G. M. Voelker, “Characterizing Large-scale Click Fraud in ZeroAccess,” in Proceedings of the ACM Conference on Computer and Communications Security (CCS), 2014.
  55. Kiwi Farms, “Principles of the Kiwi Farms,” 2022.
  56. Kiwi Farms, “XenForo Has Revoked Our License,” 2021.
  57. Vice Motherboard, “Notorious Website Kiwi Farms Loses Its Domain Registrar,” 2021.
  58. Heatst, “Notorious Forum Kiwi Farms Closed Following Alleged Harassment of Founder’s Family,” 2017.
  59. Keffals, “Keffals Led a Protest against Cloudflare to Drop the Kiwi Farms Forum,” 2022.
  60. The Verge, “Kiwi Farms Has Been Scrubbed from the Internet Archive,” 2022.
  61. Vice Motherboard, “QAnon’s Jim Watkins Tried to Save Kiwi Farms. Now His Site 8Kun Is Down.” 2022.
  62. A. V. Vu, L. Wilson, Y. T. Chua, I. Shumailov, and R. Anderson, “ExtremeBB: A Database for Large-Scale Research into Online Hate, Harassment, the Manosphere and Extremism,” in ACL Workshop on Online Abuse and Harms (WOAH), 2023.
  63. Perspective API, “Attributes and Languages,” 2023.
  64. S. Zannettou, M. ElSherief, E. Belding, S. Nilizadeh, and G. Stringhini, “Measuring and Characterizing Hate Speech on News Websites,” in Proceedings of the ACM Web Science Conference (WebSci), 2020.
  65. Similarweb, “Top Competitors of Kiwi Farms,” 2023.
  66. Semrush, “Top Competitors of Kiwi Farms,” 2023.
  67. I. Pete, J. Hughes, A. Caines, A. V. Vu, H. Gupta, A. Hutchings, R. Anderson, and P. Buttery, “PostCog: A Tool for Interdisciplinary Research into Underground Forums at Scale,” in Proceedings of the IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2022.
  68. P. Doerfler, A. Forte, E. De Cristofaro, G. Stringhini, J. Blackburn, and D. McCoy, ““I’m a Professor, Which isn’t Usually a Dangerous Job”: Internet-facilitated Harassment and Its Impact on Researchers,” Proceedings of the ACM on Human-Computer Interaction (HCI), 2021.
  69. L. Wilson, A. V. Vu, I. Pete, and Y. T. Chua, “Identifying and Collecting Public Domain Data for Tracking Cybercrime and Online Extremism,” in Open-Source Verification in the Age of Google, 2024.
  70. TechCrunch, “Web Scraping is Legal, U.S. Appeals Court Reaffirms,” 2022.
  71. British Society of Criminology, “Statement of Ethics,” 2015.
  72. N. Warford, T. Matthews, K. Yang, O. Akgul, S. Consolvo, P. G. Kelley, N. Malkin, M. L. Mazurek, M. Sleeper, and K. Thomas, “SoK: A Framework for Unifying At-risk User Research,” in Proceedings of the IEEE Symposium on Security and Privacy (S&P), 2022.
  73. A. E. Marwick, L. Blackwell, and K. Lo, “Best Practices for Conducting Risky Research and Protecting Yourself from Online Harassment,” Data & Society, 2016.
  74. R. Bhalerao, V. Hamilton, A. McDonald, E. M. Redmiles, and A. Strohmayer, “Ethical Practices for Security Research with At-Risk Populations,” in Proceedings of the IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2022.
  75. S. C. Jansen and B. Martin, “The Streisand Effect and Censorship Backfire,” International Journal of Communication, 2015.
  76. Cloudflare, “Cloudflare’s Abuse Policies & Approach,” 2022.
  77. Keffals, “Cloudflare Inadvertently De-platformed a Neo-nazi Group Based in New Zealand,” 2022.
  78. Harica, “Harica Announcement on Kiwi Farms,” 2023.
  79. D. Bromell, “Challenges in Regulating Online Content,” in Regulating Free Speech in a Digital Age: Hate, Harm and the Limits of Censorship, 2022.
  80. T. Lynch, “HAProxy Protection,” 2023.
  81. J. Hughes, B. Collier, and A. Hutchings, “From Playing Games to Committing Crimes: A Multi-technique Approach to Predicting Key Actors on an Online Gaming Forum,” in Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), 2019.
  82. S. G. van de Weijer, T. J. Holt, and E. R. Leukfeldt, “Heterogeneity in Trajectories of Cybercriminals: A Longitudinal Analyses of Web Defacements,” Computers in Human Behavior Reports, 2021.
  83. A. V. Vu, J. Hughes, I. Pete, B. Collier, Y. T. Chua, I. Shumailov, and A. Hutchings, “Turning Up the Dial: the Evolution of a Cybercrime Market Through Set-up, Stable, and Covid-19 Eras,” in Proceedings of the ACM Internet Measurement Conference (IMC), 2020.
  84. R. Sanders, “The Pareto Principle: its Use and Abuse,” Journal of Services Marketing, 1987.
  85. O. Goga, H. Lei, S. H. K. Parthasarathi, G. Friedland, R. Sommer, and R. Teixeira, “Exploiting Innocuous Activity for Correlating Users Across Sites,” in Proceedings of the ACM World Wide Web Conference (WWW), 2013.
  86. J. Liu, F. Zhang, X. Song, Y.-I. Song, C.-Y. Lin, and H.-W. Hon, “What’s in a Name? An Unsupervised Approach to Link Users Across Communities,” in Proceedings of the ACM International Conference on Web Search and Data Mining (WSDM), 2013.
  87. T. Russell-Rose, M. Stevenson, and M. Whitehead, “The Reuters Corpus Volume 1 - from Yesterday’s News to Tomorrow’s Language Resources,” in Proceedings of the International Conference on Language Resources and Evaluation (LREC), 2002.
  88. I. Pete, J. Hughes, Y. T. Chua, and M. Bada, “A Social Network Analysis and Comparison of Six Dark Web Forums,” in Proceedings of the IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2020.
  89. M. Newman, Networks. Oxford University Press, 2018.
  90. C. Han, D. Kumar, and Z. Durumeric, “On the Infrastructure Providers that Support Misinformation Websites,” in Proceedings of the AAAI International Conference on Web and Social Media (ICWSM), 2022.
  91. Y. T. Chua, S. Parkin, M. Edwards, D. Oliveira, S. Schiffner, G. Tyson, and A. Hutchings, “Identifying Unintended Harms of Cybersecurity Countermeasures,” in Proceedings of the APWG Symposium on Electronic Crime Research (eCrime), 2019.
  92. U.S. Department of Justice, “Justice Department Announces Arrest of the Founder of One of the World’s Largest Hacker Forums and Disruption of Forum’s Operation,” 2023.
  93. A. Kozyreva, S. M. Herzog, S. Lewandowsky, R. Hertwig, P. Lorenz-Spreen, M. Leiser, and J. Reifler, “Resolving Content Moderation Dilemmas Between Free Speech and Harmful Misinformation,” Proceedings of the National Academy of Sciences (PNAS), 2023.
  94. F. Schauer, “The First Amendment as Ideology,” William & Mary Law Review, 1991.
  95. U.S. Supreme Court, “Schenck v. United States, 249 U.S. 47,” 1919.
  96. U.S. Supreme Court, “Abrams v. United States, 250 U.S. 616,” 1919.
  97. U.S. Supreme Court, “Jones v. Opelika, 316 U.S. 584,” 1942.
  98. U.S. Supreme Court, “Jones v. Opelika, 319 U.S. 103,” 1943.
  99. U.S. Supreme Court, “Korematsu v. U.S., 323 U.S. 214,” 1944.
  100. Wired, “How A British Teen’s Death Changed Social Media,” 2022.
  101. A. Hutchings and R. Clayton, “Exploring the Provision of Online Booter Services,” Deviant Behavior, 2016.
  102. T. Mirrlees, “GAFAM and Hate Content Moderation: Deplatforming and Deleting the Alt-right,” in Media and Law: Between Free Speech and Censorship, 2021.
  103. K. Thomas, P. G. Kelley, S. Consolvo, P. Samermit, and E. Bursztein, ““It’s Common and a Part of Being a Content Creator”: Understanding How Creators Experience and Cope with Hate and Harassment Online,” in Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI), 2022.
  104. D. Konikoff, “Gatekeepers of Toxicity: Reconceptualizing Twitter’s Abuse and Hate Speech Policies,” Policy & Internet, 2021.
  105. D. G. Heslep and P. Berge, “Mapping Discord’s Darkside: Distributed Hate Networks on Disboard,” New Media & Society, 2021.
  106. C. Busch, “Regulating the Expanding Content Moderation Universe: A European Perspective on Infrastructure Moderation,” UCLA Journal of Law & Technology, 2022.
  107. Body of European Regulators for Electronic Communications, “All You Need to Know about Net Neutrality Rules in the EU,” 2022.
  108. A. Seger, “The Budapest Convention on Cybercrime: A Framework for Capacity Building,” 2016.
  109. R. Anderson, “Chat Control or Child Protection?” arXiv:2210.08958, 2022.
  110. T. Gillespie, P. Aufderheide, E. Carmi, Y. Gerrard, R. Gorwa, A. Matamoros Fernandez, S. T. Roberts, A. Sinnreich, and S. Myers West, “Expanding the Debate about Content Moderation: Scholarly Research Agendas for the Coming Policy Debates,” Internet Policy Review, 2020.
  111. M. Alizadeh, F. Gilardi, E. Hoes, K. J. Klüser, M. Kubli, and N. Marchal, “Content Moderation as a Political Issue: the Twitter Discourse Around Trump’s Ban,” Journal of Quantitative Description, 2022.
  112. C. Lally and R. Bermingham, “Online Extremism,” Research Briefing, U.K. Parliament, 2020.
  113. L. Bates, Men Who Hate Women: the Extremism Nobody is Talking About. Simon & Schuster, 2021.

Appendix A​

A.1. Summary​

This paper examines the impact of large-scale industry disruption on the online harassment forum Kiwi Farms, as well as its competitor Lolcow Farm. The authors use a variety of measurement techniques to show a net reduction of activity on the forum.

A.2. Scientific Contributions​

  • Independent confirmation of important results with limited prior research.
  • Provides a new data set for public use.
  • Provides a valuable step forward in an established field.

A.3. Reasons for Acceptance​

  1. This paper provides a valuable step forward in the field of harassment measurement and prevention by confirming important prior results. This paper examines deplatforming on an internet-wide scale, rather than focusing on one social network – a limitation of most prior work.
  2. This paper provides a new data set for public use. On request, the authors will provide a very detailed dataset of forum discussions with metadata, Telegram chats, web analytics, and relevant tweets, allowing independent confirmation and future research on harassment sites.

A.4. Noteworthy Concerns​

  1. There is a significant lack of information about how the qualitative analysis of public announcements and press releases was conducted, which makes evaluation of that analysis challenging – details about reliability and how the coding process was conducted would be useful.
  2. The discussion does a good job of describing why deplatforming is hard, but does not offer much in the way of suggestions for making this problem easier beyond arresting the people responsible.
 
What "digital threats"? Nigga like turn off the computer like walk away from the screen close your eyes.
To be fair you can’t just ‘walk away’ from the computer like you used to. It’s on your phone, on your watch, on your fridge if you’re really insufferable. It’s the main method of communication and education.
 
I bet you were one of those increasingly toxic newcomers (2 times higher toxicity rating, as judged by academic peers) they reference here.
My original account was made before the whole drop kiwi farms movement; lurked since 2019 and think I registered in 2021. My first account was under a burner email app, so when I changed phones and re-downloaded that app, it gave me a different email address, so there was no way for me to get my new password when the hack happened. So I had to email Null to get an invite back. This is a continuation from my first account. But I probably am one of the "toxic" users.
 
operator has denounced criminal acts perpetrated via his infrastructure [55], the criminal-law option may simply not be available. The art of being a provocateur includes stopping just short of the point at which an aggressive criminal-law response would follow. This exposes the limits of civil-law remedies and voluntary action by platforms.
"We can't arrest/'otherwise incapacitate' people who aren't breaking the law!"
 
Very funny how this gay keening over how the government couldn’t stop these evil nazi terrorists is posted shortly after a catfish vtuber groyper’s name and likeness were broadcasted sitewide for everyone’s amusement and derision. Josh Moon is the world’s first and only woke jewish nazi nigger-loving racist.
 
It would explain why this "researcher" is advocating for null to be arrested or "incapacitated" for no reason. Even if they drop internet censorship laws just to target us, we can always operate out of a different country and we, the users, can use TOR and or a VPN to get around it.
Some of us live places where we're allowed to shoot back.
 
The moral of the study is that you need to kill or arrest website administrators. The entire thesis is that industry censorship doesn't work and you need to go after the minds behind it. This is what they're laying groundwork for.
It is becoming disturbingly clear, isn't it?

"Will no one rid us of this troublesome webmaster"?

They used to have the balls to call these just-within-plausible-deniability calls to violence and even murder "stochastic terrorism".

This place is special; please never let them take it from you, from us.

Nice implication that poking fun at retards on the internet is "cybercrime" like I'm hacking someone's bank account or something.
It is ever thus with the authoritarian mind:

The problem is never people being retarded, no; the problem is daring to notice people being retarded.
 
I feel like the Right right now is pushing a bit too hard. Not that I disagree with it but it makes people want revenge.
Oh yeah.
I predicted this way back in the 2010s that, because of how invasive the left is being, when the pendulum swings, the right will do the exact same authoritarian censorious bullshit but they will give different reasons for it.
It's already starting to happen, but 5 years from now, you will really see them go after freedom of speech and various human rights.
They already have the template of how to do it from the woke lefties.
 
We thank the anonymous reviewers and the shepherd for their insightful and constructive feedback.
They thank absolute bullshit liars, then they engage in citogenesis, where a bunch of lying lunatics incestuously cite each other's absolute garbage as if just a bunch of morons quoting each other make their quotations of each other somehow magically true.

After all, their main argument is that anyone who disagrees with their crackpot bullshit should be forcibly shut down by the government, so their deranged beliefs are "on the right side of history."

Unfortunately, society is insane, so in the sane parts of society, we should be prepared to shoot back when these psychos get around to their whole idea of "incapacitating," i.e. murdering anyone who runs a website that allows people to disagree with their madness.

As old as this article is, it states the intentions of these evil freaks, which is to start murdering anyone who disagrees with them. They have realized this whole "deplatforming" thing hasn't worked. They think they need to get more radical.

This is from Cambridge, an organized university of vastly wealthy people. They pretend to be socialists, but they are actually enemies of all people.
 