Culture Twitter Releases New Policy on 'Dehumanizing Speech'

https://www.wired.com/story/twitter-dehumanizing-speech-policy/
https://archive.is/cYcT4
Twitter on Tuesday announced a new policy addressing “dehumanizing speech,” which will take effect later this year, and for the first time the public will be able to formally provide the company with feedback on the proposed rule.

The policy will prohibit “content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target.” It expands upon Twitter’s existing hateful conduct policy, which prohibits users from threatening violence or directly attacking a specific individual on the basis of characteristics such as race, sexual orientation, or gender. Twitter’s users, especially women and minority groups, have long complained that the company’s rules have been ineffective and inconsistent in addressing harassment and abuse.

“We obviously get reports from people about content that they believe violates our rules that does not. The dehumanizing content and the dehumanizing behavior is one of the areas that really makes up a significant chunk of those reports,” says Del Harvey, Twitter’s vice president of trust and safety. She adds that many Twitter users, as well as researchers who study dehumanizing speech’s real-world effects, told the company allowing that content to stay was “deeply problematic.”

Susan Benesch, whose research Twitter cites in its announcement, defines dehumanizing speech as “describing other people in ways that deny or diminish their humanity,” like comparing them to insects, demons, or bacteria. The Dangerous Speech Project she founded and directs argues that it's one hallmark of a wider category called “dangerous speech,” which covers any form of expression that can increase the risk that an audience will participate in or accept violence against another person or group.

“Dehumanization is important since it leads to real harm; it's just challenging to define precisely, and it's critical to protect freedom of speech as well,” says Benesch. “This initiative shows that Twitter staff are thinking hard about the variety of offline harms to which online content can lead, and trying to reduce them. It's easiest for platforms to respond to more obvious forms of harm, such as a credible threat of violence directed at a named, specific person.”

Previously on Twitter, a comment like “all women are scum and should die” would need to be targeted at an individual to break Twitter’s rules. The new policy would remove the requirement that a user who is potentially a member of the protected class be referenced or discussed in the tweet itself.


'Our hope is that actually having this feedback period will serve to bring people sort of along with us on the process.'

DEL HARVEY, TWITTER

News that Twitter was considering a policy on dehumanizing speech first broke in August, as major tech companies like YouTube and Facebook moved to ban conspiracy theorist Alex Jones from their platforms. Twitter initially declined to follow suit, and CEO Jack Dorsey defended his company’s decision by arguing that Jones had not broken the rules. (Media outlets like CNN went on to point out multiple instances where Jones did appear to violate Twitter policies.) After the judgment caused an uproar at Twitter, Harvey sent an email to staff saying that she was “shifting our timeline forward for reviewing the dehumanization policy.”

Twitter is giving users two weeks to comment on the new rule via a survey form; questions include whether the policy is clear and how it could be improved. It will be available in English, Spanish, Arabic, and Japanese. “Historically we have been less transparent than, quite frankly, I think is ideal about our policies and how we develop them,” says Harvey, who has worked at the company for more than a decade. “Our hope is that actually having this feedback period will serve to bring people sort of along with us on the process.”

Twitter also has a so-called Trust and Safety Council, made up of third-party nonprofits and other organizations that are consulted on new policies. The company will process the feedback and then continue its normal internal procedure for new rules.

After the dehumanizing speech statute becomes a permanent part of Twitter’s rules, the hardest work of actually enforcing it will begin. Unlike, say, a cryptocurrency scam, dehumanizing speech might be difficult to spot—in part because it’s dependent on context and hard to exactly define.



“Not all dangerous speech has dehumanizing language, and not all comparisons of human beings with animals are dehumanizing,” says Benesch. “Twitter and other platforms should be careful not to define dehumanization too broadly. For example, it’s tempting to say that any demeaning remark about a group of people, such as ‘the X people are all thieves’ or ‘all corrupt’ is dehumanizing. That one is not dehumanizing, since corruption is a specialty of humans.”

Recent real-world incidents have proved challenging for other social media companies to police effectively. Facebook, for instance, has been accused of helping to facilitate the Muslim Rohingya crisis in Myanmar, which the UN is now calling to prosecute as genocide. Buddhist leaders in the country used the platform to spread misinformation and hate speech, including comparing Rohingya to dogs and pests. (Facebook already prohibits users from publishing “violent or dehumanizing speech.”)

Twitter’s new policy is part of a greater soul-searching initiative the company announced in March, after it was widely criticized for allowing misinformation and automated bots to flourish on its platform during the lead-up to the 2016 US presidential election. Since then, Twitter has limited the influence of suspicious accounts, deleted more than 140,000 third-party apps that violated its policies, and begun hiding tweets from potentially harmful users, among other efforts.

The fight against dehumanizing speech is only the latest part of that effort. Hate groups that use the platform to spread their messages would likely be impacted, for example, even if they’re not harassing a specific individual. But a rule against dehumanization won’t help in circumstances where harassment is facilitated through lying or misinformation. Conspiracy theorist Alex Jones’ claims that school shootings never happened wouldn’t necessarily be addressed by the policy, but they’re extremely harmful nonetheless.

What's more, Twitter may need to decide what to do about high-profile users like President Donald Trump, who once tweeted that Democrats want illegal immigrants “to pour into and infest our Country.” The company has historically made allowances for world leaders whose statements may violate its policies but are newsworthy. “It’s not something where it’s set in stone,” says Harvey. “It’s something we’re going to be continuing to explore.”
 
And how does one measure this metric, exactly?


>implying that it will be used fairly and unbiased
>implying that anything other than a few token leftists will be banned to preserve an illusion of balanced fairness
>implying
https://blog.twitter.com/official/en_us/topics/company/2018/Creating-new-policies-together.html

So reading the actual proposal, here's their definitions:
Dehumanization: Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).

Identifiable group: Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.

This is terrible.

Here's the big problem with this shit (same with the open source CoC stuff too): they don't define the rules strictly. They're expansive rules, not limited rules.

I don't mind working under some rules. At the very worst, if the rules seem too harsh, I'll just quit and move on. No, what I really hate are rules that are fuzzy, where I can't even tell how harsh they are. When I can't make a reasonable evaluation ahead of time.

In contrast to those kinds of shitty rules, I like US anti-discrimination law. It's strict, it's even handed, and they plainly lay out which identities you can or can't discriminate against. Can't discriminate against someone because they're white, black or crippled. But someone has a stupid tshirt? 100% can discriminate against them. That is not a protected group.

Good, I like that. Even if I disagree about some of the specifics, at least I know where the line is.

With twitter's rules here, they say "... their shared characteristics such as ...". Such as? What? That's boundless.

Fuck this shit, I'm packing my shit up and going to mastodon.

I still think we can catch a big bunch of SJWs in their own stupid rules. But still, this is really faggy.
 
Who even cares at this point? The only reason to go to Twitter is to grab screencaps of lolcows. And the only reason to be on Twitter is if you are some corporate crony who's willing to say whatever they want for networking and job opportunities. The only bad thing that would happen if Twitter disappeared is we'd lose a lot of easy-to-grab milk.
 
Kinda :autism: but I do wonder if Del Harvey would consider it dehumanizing speech to say "Pedos are worse than AIDS"

“Previously under our hateful conduct policy, if you @mentioned or specified an individual, it would be in violation,” the spokesperson said. “With the expanded policy, ‘X group is a virus’ would be a violation.”

I mean, I'm not saying Pedos = AIDS. Or would you need to say "Pedos deserve to get AIDS" ? That wouldn't be "animalistic dehumanization [or] comparing groups to animals and viruses... or reducing groups to their genitalia"
 
Imagine this tweet on a big board in front of a televised Congressional Committee headed by Ted Cruz or whatever.
Even though that won't happen, ngl that sounds fucking hilarious.
or reducing groups to their genitalia (mechanistic).
It's often claimed that not wanting to suck girldick or saying that women have vaginas falls under this. So... buckle up
 
This isn't gonna be applied to leftists more than right..ists... but it's gonna be applied to them enough that it's going to cause a major shitstorm.

Twitter isn't able to look the other way in a lot of cases; they're under way too much pressure from the US executive branch over conservative censorship.
 
My concern is this nonsense is not going to stay limited to Twitter. Everyone laughed about Tumblr and what they got up to 10 years ago. Nobody is laughing now that Tumblr is running the universities and huge swaths of the media and corporate America.
The universities and parts of corporate America aren't terrible because of Tumblr. Tumblrites are terrible because they were manufactured by these universities and Silicon Valley.
 
Saw this in reference to this announcement:

[attached image]


This is why you don’t try to appease them. It’ll never be enough.
 
Y'all know that End of Evangelion poster? Big, blue bitch sinks into the red water while Shinji and red hair bitch watch her demise? Imagine that, but the blue bitch is emerging. That's what's going on with Twitter.

I doubt this is going to be the revolution of Twitter, but it's a step in the right direction. Bravo, Jack.
 
This isn't gonna be applied to leftists more than right..ists... but it's gonna be applied to them enough that it's going to cause a major shitstorm.

Twitter isn't able to look the other way in a lot of cases; they're under way too much pressure from the US executive branch over conservative censorship.
Good point. Twitter doesn't have much choice but to play it fair and fairness rustles the jimmies of both the far right and left.
 
Good point. Twitter doesn't have much choice but to play it fair and fairness rustles the jimmies of both the far right and left.
Yeah, there's at least some modicum of balance practiced on Twitter now that these rules will be enforced. Hopefully it'll get to a point where it's every man for himself. I've saved a special box of popcorn packages for an event like this.
 
Child molesters are an identifiable group according to this definition. So are Nazis.
They'll totally help the child molesters without batting an eye. They won't even entertain nazis.

But maybe TERFs... (I mean, TERFs are practically nazis to a lot of them already, but it still has the word "feminist" in it.)
 
I mean if Trump is the most action you have on any platform besides rank-and-file SJWs, your future is grim regardless of how much "success" a vague CoC yields in the long run. We hear this shit every year and no substantial changes are noticed.
 
I mean if Trump is the most action you have on any platform besides rank-and-file SJWs, your future is grim regardless of how much "success" a vague CoC yields in the long run. We hear this shit every year and no substantial changes are noticed.

True, the endgame of this is everyone abandoning the platform because of unpredictable, inconsistent and overbearing moderation creating an environment where only the most toxic of the toxic SJW elements can coalesce without issue, leaving nothing but an IDPOL hugbox that nobody who wants to do something other than blame the patriarchy would ever venture to.

What happened to games forums and comics forums will just happen again; only the few approved overlords will be left standing over a mountain of banned accounts, declaring themselves the winner.

These places will destroy themselves with this shit; it's just that the fallout from the collapse will be a huge pain in the ass for us all until we get there. I'm not a Twitter user, never have been, never will be, and the thing that makes me upset about all this is that the spillover from the infighting will reach real-world places I stay in specifically to avoid the pointless IDPOL wars... but rest assured, Twitter will be a cesspool like NeoGAF, where you join only to worship the leadership caste and go on witch hunts for someone who used a problematic word so you can score some points in eSempai's eyes by giving them some Nazis to ban...
 