
How Cloudflare got Kiwi Farms wrong
The Verge (archive.ph)
By Casey Newton
2022-09-06 20:00:00 GMT

Today let’s talk about Kiwi Farms, Cloudflare, and whether infrastructure providers ought to take more responsibility for content moderation than they have generally taken.

I.
Kiwi Farms is a nearly 10-year-old web forum, founded by a former administrator for the popular QAnon wasteland 8chan, that has become notorious for waging online harassment campaigns against LGBT people, women, and others. It came to popular attention in recent weeks after a well-known Twitch creator named Clara Sorrenti spoke out against the recent wave of anti-trans legislation in the United States, leading to terrifying threats and violence against her by people who organized on Kiwi Farms.

Ben Collins and Kat Tenbarge wrote about the situation at NBC:

Sorrenti, known to fans of her streaming channel as “Keffals,” says that when her front door opened on Aug. 5 the first thing she saw was a police officer’s gun pointed at her face. It was just the beginning of a weekslong campaign of stalking, threats and violence against Sorrenti that ended up making her flee the country.
Police say Sorrenti’s home in London, Ontario, had been swatted after someone impersonated her in an email and said she was planning to perpetrate a mass shooting outside of London’s City Hall. After Sorrenti was arrested, questioned and released, the London police chief vowed to investigate and find who made the threat. Those police were eventually doxxed on Kiwi Farms and threatened. The people who threatened and harassed Sorrenti, her family and police officers investigating her case have not been identified.

In response to the harassment, Sorrenti began a campaign to pressure Cloudflare into no longer providing its security services to Kiwi Farms. Thanks to her popularity on Twitch, and the urgency of the issue, #DropKiwiFarms and #CloudflareProtectsTerrorists both trended on Twitter. And the question became what Cloudflare — a company that has been famously resistant to intervening in matters of content moderation — would do about it.

Most casual web surfers may be unaware of Cloudflare’s existence. But the company’s offerings are essential to the functioning of the internet. And it provided at least three services that have been invaluable to Kiwi Farms.

One, Cloudflare made Kiwi Farms faster and thus easier to use, by generating thousands of copies of it and storing them at end points around the world, where they could be more quickly delivered to end users. Two, it protected Kiwi Farms from distributed denial-of-service (DDoS) attacks, which can crash sites by overwhelming them with bot traffic. And three, as Alex Stamos points out here, it hid the identity of the site's web hosting company, preventing people from pressuring the hosting provider to take action against it.
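To make the second of those services concrete: DDoS mitigation ultimately comes down to distinguishing a flood of bot traffic from legitimate requests and shedding the excess before it reaches the origin server. A common building block is per-client rate limiting. The sketch below is a toy token-bucket limiter written for illustration only; it is not Cloudflare's actual implementation, and the class name, rates, and capacities are all invented for the example.

```python
import time


class TokenBucket:
    """Toy per-client rate limiter of the kind DDoS mitigation layers use.

    Each client may make `rate` requests per second, with bursts of up to
    `capacity` requests. Requests beyond that are dropped before they ever
    reach the origin server. Illustrative sketch only.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # spend one token on this request
            return True
        return False


# A burst of 20 near-instant requests from one client: roughly the first
# `capacity` get through, the rest are shed.
bucket = TokenBucket(rate=5.0, capacity=10.0)
allowed = sum(bucket.allow() for _ in range(20))
```

In a real mitigation layer this logic would run per source IP (or per fingerprint) at the network edge, which is why the protection has to sit in front of the origin host rather than on it.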

Cloudflare knew it was doing all this, of course, and it has endeavored to make principled arguments for doing so. Twice before in its history, it has confronted related high-profile controversies in moderation — once in 2017, when it turned off protection for the neo-Nazi site the Daily Stormer, and again in 2019, when it did the same for 8chan. In both cases, the company took pains to describe the decisions as “dangerous” — warning that it would create more pressure on infrastructure providers to shut down other websites, a situation that would likely disproportionately hurt marginalized groups.

Last week, as pressure on the company to do something about Kiwi Farms grew, Cloudflare echoed that sentiment in a blog post. (One that did not mention Kiwi Farms by name.) Here are CEO Matthew Prince and head of public policy Alissa Starzak:

“Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online. We believe cyberattacks, in any form, should be relegated to the dustbin of history.”

It’s admirable that Cloudflare has been so principled in developing its policies and articulating the rationale behind them. And I share the company’s basic view of the content moderation technology stack: that the closer you get to hosting, recommending, and otherwise driving attention to content, the more responsibility you have for removing harmful material. Conversely, the further you get from hosting and recommending, the more reluctant you should be to intervene.

The logic is that it is the people hosting and recommending who are most directly responsible for the content being consumed, and who have the most context on what the content is and why it might (or might not) be a problem. Generally speaking, you don’t want Comcast deciding what belongs on Instagram.

Cloudflare also argues that we should pass laws to dictate what content should be removed, since laws emerge from a more democratic process and thus have more legitimacy. I’m less sympathetic to the company on that front: I like the idea of making content moderation decisions more accountable to the public, but I generally don’t want the government intervening in matters of speech.

However principled these policies are, though, they are undeniably convenient to Cloudflare. They allow the company to rarely have to consider content moderation issues, and this has all sorts of benefits. It helps Cloudflare serve the largest number of customers, stay out of hot-button cultural debates, and stay off the radar of regulators who are increasingly skeptical of tech companies moderating too little — or too much.

Generally speaking, when companies can push content moderation off on someone else, they do. There’s very little upside in policing speech, unless it’s necessary for the survival of the business.

II.
But I want to return to that sentiment in the company’s blog post, the one that says: “Giving everyone the ability to sign up for our services online also reflects our view that cyberattacks not only should not be used for silencing vulnerable groups, but are not the appropriate mechanism for addressing problematic content online.” The idea is that Cloudflare wants to take DDoS and other attacks off the table for everyone, both good actors and bad, and that harassment should be fought in (unnamed) other ways.

Certainly it would be a good thing if everyone from local police departments to national lawmakers took online harassment more seriously, and developed a coordinated strategy to protect victims from doxxing, swatting, and other common vectors of online abuse — while also doing better at finding and prosecuting their perpetrators.

In practice, though, they don’t. And so Cloudflare, inconvenient as it is for the company, has become a legitimate pressure point in the effort to stop these harassers from threatening or committing acts of violence. Yes, Kiwi Farms could conceivably find other security providers. But there aren’t that many of them, and Cloudflare’s decision to stop services for the Daily Stormer and 8chan really did force both operations further underground and out of the mainstream.

And so its decision to continue protecting Kiwi Farms arguably made it complicit in whatever happened to poor Sorrenti, and anyone else the mob might decide to target. (Three people targeted by Kiwi Farms have died by suicide, according to Gizmodo.)

And while we’re on the subject of complicity, it’s notable that for all its claims about wanting to bring about an end to cyberattacks, Cloudflare provides security services to… makers of cyberattack software! That’s the claim made in this blog post from Sergiy P. Usatyuk, who was convicted of running a large DDoS-for-hire scheme. Writing in response to the Kiwi Farms controversy, Usatyuk notes that Cloudflare profits from such schemes because it can sell protection to the victims.

In its blog post, Cloudflare compares itself to a fire department that puts out fires no matter how bad a person the resident of the house may be. In response, Usatyuk writes: “CloudFlare is a fire department that prides itself on putting out fires at any house regardless of the individual that lives there. What they forget to mention is they are actively lighting these fires and making money by putting them out!”

Again, none of this is to say that there aren’t good reasons for Cloudflare to stay out of most moderation debates. There are! And yet it does matter to whom the company decides to deploy its security guards — a service it often provides for free, incidentally — enabling harassment and worse for a small but committed group of the worst people on the internet.

III.
In the aftermath of Cloudflare’s initial blog post, Stamos predicted the company’s stance wouldn’t hold. “There have been suicides linked to KF, and soon a doctor, activist or trans person is going to get doxxed and killed or a mass shooter is going to be inspired there,” he wrote. “The investigation will show the killer’s links to the site, and Cloudflare’s enterprise base will evaporate.”

Fortunately, it hasn’t yet come to that. But credible threats against individuals did escalate over the past several days, the company reported, and on Saturday Cloudflare did indeed reverse course and stopped protecting Kiwi Farms.

“This is an extraordinary decision for us to make and, given Cloudflare’s role as an Internet infrastructure provider, a dangerous one that we are not comfortable with,” Prince wrote in a new blog post. “However, the rhetoric on the Kiwi Farms site and specific, targeted threats have escalated over the last 48 hours to the point that we believe there is an unprecedented emergency and immediate threat to human life unlike we have previously seen from Kiwi Farms or any other customer before.”

It feels like a massive failure of social policy that the safety of Sorrenti and other people targeted by online mobs comes down to whether a handful of companies will agree to continue protecting their organizing spaces from DDoS attacks, of all things. In some ways, it feels absurd. We’re offloading what should be a responsibility of law enforcement onto a for-profit provider of arcane internet backbone services.

“We do not believe we have the political legitimacy to determine generally what is and is not online by restricting security or core Internet services,” the company wrote last week. And arguably it doesn’t!

But sometimes circumstances force your hand. If your customers are plotting violence — violence that may in fact be possible only because of the services you provide — the right thing to do isn’t to ask Congress to pass a law telling you what to do. It’s to stop providing those services.

There isn’t always a clear moment when an edgy forum, full of trolls, tips over into incitement of violence. Instead, far-right actors increasingly rely on “stochastic terrorism” — actively dehumanizing groups of people over long periods of time, suggesting that it sure would be nice if someone did something about “the problem,” confident that some addled member of their cohort will eventually take up arms in an effort to impress their fellow posters.

One reason why this has been so effective is that it is a strategy designed to resist content moderation. It offers cover to the many social networks, web hosts, and infrastructure providers that are looking for reasons not to act. And so it has become a loophole that the far right can exploit, confident that so long as they don’t explicitly call for murder they will remain in the good graces of the platforms.

It’s time for that loophole to close. In general we should resist calls for infrastructure providers to intervene on matters of content moderation. But when those companies provide services that aid in real-world violence, they can’t turn a blind eye until the last possible moment. Instead, they should recognize groups that organize harassment campaigns much earlier, and use their leverage to prevent the loss of life that will now forever be linked to Kiwi Farms and the tech stack upon which it sat.

In its blog posts, Cloudflare refers repeatedly to its desire to protect vulnerable and marginalized groups. Fighting for a free and open internet, one that is resistant to pressure from authoritarian governments to shut down websites, is a critical part of that. But so, too, is offering actual protection to the vulnerable and marginalized groups that are being attacked by your customers.

I’m glad Cloudflare came around in the end. Next time, I hope it will get there faster.
 
There isn’t always a clear moment when an edgy forum, full of trolls, tips over into incitement of violence. Instead, far-right actors increasingly rely on “stochastic terrorism” — actively dehumanizing groups of people over long periods of time, suggesting that it sure would be nice if someone did something about “the problem,” confident that some addled member of their cohort will eventually take up arms in an effort to impress their fellow posters.
Kind of like how Republicans are now Nazis and semi-fascist insurrectionists who need to be dealt with in order to save America and Our Democracy?
 
Referencing gizmodo .
Journalism truly is dead, and everyone is a spineless coward.
General Sherman once said "The American press is a shame and a reproach to a civilized people. When a man is too lazy to work and too cowardly to steal, he becomes an editor and manufactures public opinion"

The only difference between then and now is that reporters and journalists are editors too. (The number of times I see spelling and grammar mistakes in "trusted sources" like the AP is sickening; I guess editors are too busy making editorials and not checking the work of those under them.)
 
Citing Gizmodo is not evidence that people have killed themselves because of Kiwi Farms. There is no evidence that anyone has an heroed because of the Farms, because it has never fucking happened. Remember when journalists actually did fucking research into their stories to ensure they were accurate? Now they cite social media and other lazy, unethical rags and call it a day. There should be a law that holds journalists and the news agency they work for legally accountable whenever they lie in an article or story and punishes them for unethical practices. Free speech should be for individuals only.
 
"Stochastic terrorism" is simply one of the most bullshit anti-speech arguments ever conceived.

It's literally just a more intelligent-sounding version of "ooh they're stirring up trouble with that kind of talk". Can use it to argue for censoring pretty much anything.

By reporting on the negative things Trump did, CNN and MSNBC were committing stochastic terrorism.

Really, all reporters are stochastic terrorists, if they report on anything that could inspire somebody to feel negative towards the government or any group/individual, and possibly take action. Bad news is terrorism.

Talking publicly about being raped or abused is stochastic terrorism if you give any information as to who did it or any groups they were a part of. Someone might want to take matters into their own hands, you see, and that would make you a stochastic terrorist for inspiring them.

Jennette McCurdy is a stochastic terrorist because she implied in her new book that Dan Schneider was abusive, and therefore, people may dislike and want to harm Dan Schneider now. Terrorism.

Writing favorably about communism or other political systems is stochastic terrorism because some readers will undoubtedly be inspired to become revolutionaries if you do so.

J.D. Salinger is a stochastic terrorist for writing The Catcher in the Rye, since some individuals were inspired by it to commit shootings. You can't write something that could potentially cause somebody to harm somebody else, or you are therefore responsible under modern progressive morality.

Really, it's funny (and sad) how we've come full circle back to the censorship arguments of the early 19th century and before. You can dig through U.S. free speech jurisprudence from over 100 years ago and find the exact same fucking arguments. I'm convinced at this point that libs would endorse a modern Schenck v. United States (a since-overturned 1919 case stating one didn't have free-speech rights to protest the WW1 draft), if it were only about Ukraine/Russia instead. Or better yet, Whitney v. California (1927), which said the state could punish speech "tending to incite crime, disturb the public peace, or endanger the foundations of organized government and threaten its overthrow". Sure does sound a lot like "you can't criticize our hecking Democracy™ or say anything that could put LGBT lives at risk!"
 
I'm still confused how archiving and making fun of what stupid/evil/degenerate/hilarious shit they post themselves is harassment and terrorism. The Internet is not sacred. If you don't want people to post evidence of you being a pedophile maybe don't be a pedophile openly??? Idk man.

You post things online. They are fair game. We're the internet equivalent of a gossip Coffee House in ye olden days discussing how shit everyone is and how stupid their latest political book was.
 
I wonder how many of the "journalists" who have spent hours typing miles of text about us, about this place, about our "victims" have spent even a single second looking through the farms for themselves. How many of them have a single original thought that isn't just a rehash of what they've been told before?

Modern mainstream journalism is a closed club that circles information over and over again. No outside ideas are allowed in that have not been vetted and approved of. One idea launches a hundred articles that all reference each other as proof of veracity. One opinion is broadcast and repeated and never questioned.

Fuck journos. Fuck trannies.
 
Certainly it would be a good thing if everyone from local police departments to national lawmakers took online harassment more seriously, and developed a coordinated strategy to protect victims from doxxing, swatting, and other common vectors of online abuse — while also doing better at finding and prosecuting their perpetrators.
Yes it would be a good thing if the MSM was...oh they mean anyone who is not them.

CNN showing up to film a raid on an elderly man.
A child being made a national villain because a grifter bashed a drum in his face.
The national media convincing people that Rittenhouse did anything wrong by lying about laws and repeating that he crossed state lines and repeating that he was a murderer even after being acquitted.
CNN threatening to dawx someone for making a meme.

But yes...the problem is that a child abuser maybe had someone cause them to be approached by police in a respectful manner. Of course they lied about that to scam more money out of their retarded fans.

And there is that phrase again, "linked to KF"... One of the three is not dead, which is constantly ignored. The other two were very clear about why they killed themselves, and it is only third parties who claim it was KF.

Why is KF never linked to Josh's attempt to help a retard make money, even though he caught shit for it (was not Josh's fault that Chris raped his mom)?

Why no mention of the fact that KF archiving Nick Bates confessions may have taken a predator off the streets?

Why no mention of the DDoS campaigns waged against KF that are illegal?

Oh guess that does not matter because KF laughs at people and that is bad unless those people are Republicans...which KF laughs at but no matter.

Let us be real here: The media hates KF because it does a better job of getting it right. Troons hate KF because we will not pretend they are not insane. Convicted revenge pornographers hate the KF because we will not let them off the hook.

Just as a closer I find it very hard to believe that most incidents of swatting originate from KF especially when it comes to Lucas. I still maintain that he swatted himself just like Ralph likely swatted himself. Just like I think Jim sent himself a bag of potatoes.
 
I wonder how many of the "journalists" who have spent hours typing miles of text about us, about this place, about our "victims" have spent even a single second looking through the farms for themselves.
They're trying to make it impossible for any normie to come here for themselves and see what we're actually about. They're literally rewriting history, and if they win, all of their lies and bullshit articles will become the truth.
 
There isn’t always a clear moment when an edgy forum, full of trolls, tips over into incitement of violence. Instead, far-right actors increasingly rely on “stochastic terrorism” — actively dehumanizing groups of people over long periods of time, suggesting that it sure would be nice if someone did something about “the problem,” confident that some addled member of their cohort will eventually take up arms in an effort to impress their fellow posters.

obviously this is a dumb shit take, but the thing that gets me the most is the term dehumanizing. this is so disingenuous. what the lolcow threads do is not dehumanizing, it's humanizing. most lolcows are psycho retards who have a profound investment in managing and altering their internet presentation to make it look like their shit doesn't stink. they actively lionize themselves to create a falsely positive image. the lolcow forum simply archives evidence showing they are, in fact, human, and are not only capable of fucking up but have done so extensively. and the frenzied squealing from these people is not the cry of the unfairly targeted and oppressed, but of the exposed narcissist. if any of these people had a single drop of based in their veins they would take the L, log off, own it, anything other than spend hours and days and weeks and months of their pathetically empty lives being unable to stop obsessing over the existence of a thread on a fringe forum being read by a handful of people, because it robs them of the ability to keep everyone in the dark about their unflattering realities.
 
Kiwi Farms is a nearly 10-year-old web forum, founded by a former administrator for the popular QAnon wasteland 8chan, that has become notorious for waging online harassment campaigns against LGBT people, women, and others.
Founded by a former... no that's wrong.
QAnon wastelan- wait, that's wrong, too.
Notorious for waging online harassment campaigns... it's all wrong! This is only the second sentence. Ahhhhh I'm getting confused again awooOOooOoooOo~~~
 