Culture Twitter Releases New Policy on 'Dehumanizing Speech'

https://www.wired.com/story/twitter-dehumanizing-speech-policy/
https://archive.is/cYcT4
Twitter on Tuesday announced a new policy addressing “dehumanizing speech,” which will take effect later this year, and for the first time the public will be able to formally provide the company with feedback on the proposed rule.

The policy will prohibit “content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target.” It expands upon Twitter’s existing hateful conduct policy prohibiting users from threatening violence or directly attacking a specific individual on the basis of characteristics such as race, sexual orientation, or gender. Twitter’s users, especially women and minority groups, long have complained that the company’s rules have been ineffective and inconsistent in addressing harassment and abuse.

“We obviously get reports from people about content that they believe violates our rules that does not. The dehumanizing content and the dehumanizing behavior is one of the areas that really makes up a significant chunk of those reports,” says Del Harvey, Twitter’s vice president of trust and safety. She adds that many Twitter users, as well as researchers who study dehumanizing speech’s real-world effects, told the company allowing that content to stay was “deeply problematic.”

Susan Benesch, whose research Twitter cites in its announcement, defines dehumanizing speech as “describing other people in ways that deny or diminish their humanity,” like comparing them to insects, demons, or bacteria. The Dangerous Speech Project she founded and directs argues that it's one hallmark of a wider category called “dangerous speech,” which covers any form of expression that can increase the risk that an audience will participate in or accept violence against another person or group.

“Dehumanization is important since it leads to real harm; it's just challenging to define precisely, and it's critical to protect freedom of speech as well,” says Benesch. “This initiative shows that Twitter staff are thinking hard about the variety of offline harms to which online content can lead, and trying to reduce them. It's easiest for platforms to respond to more obvious forms of harm, such as a credible threat of violence directed at a named, specific person.”

Previously on Twitter, a comment like “all women are scum and should die” would need to be targeted at an individual to break Twitter’s rules. The new policy would remove the requirement that a user who is potentially a member of the protected class be referenced or discussed in the tweet itself.


'Our hope is that actually having this feedback period will serve to bring people sort of along with us on the process.'

DEL HARVEY, TWITTER

News that Twitter was considering a policy on dehumanizing speech first broke in August, as major tech companies like YouTube and Facebook moved to ban conspiracy theorist Alex Jones from their platforms. Twitter initially declined to follow suit, and CEO Jack Dorsey defended his company’s decision by arguing that Jones had not broken the rules. (Media outlets like CNN went on to point out multiple instances where Jones did appear to violate Twitter policies.) After the judgment caused an uproar at Twitter, Harvey sent an email to staff saying she was “shifting our timeline forward for reviewing the dehumanization policy.”

Twitter is giving users two weeks to comment on the new rule via a survey form; questions include whether the policy is clear and how it could be improved. It will be available in English, Spanish, Arabic, and Japanese. “Historically we have been less transparent than, quite frankly, I think is ideal about our policies and how we develop them,” says Harvey, who has worked at the company for more than a decade. “Our hope is that actually having this feedback period will serve to bring people sort of along with us on the process.”

Twitter also has a so-called Trust and Safety Council, made up of third-party nonprofits and other organizations that are consulted on new policies. The company will process the feedback and then continue its normal internal procedure for new rules.

After the dehumanizing speech statute becomes a permanent part of Twitter’s rules, the hardest work of actually enforcing it will begin. Unlike, say, a cryptocurrency scam, dehumanizing speech might be difficult to spot—in part because it’s dependent on context and hard to exactly define.



“Not all dangerous speech has dehumanizing language, and not all comparisons of human beings with animals are dehumanizing,” says Benesch. “Twitter and other platforms should be careful not to define dehumanization too broadly. For example, it’s tempting to say that any demeaning remark about a group of people, such as ‘the X people are all thieves’ or ‘all corrupt’ is dehumanizing. That one is not dehumanizing, since corruption is a specialty of humans.”

Recent real-world incidents have proved challenging for other social media companies to police effectively. Facebook, for instance, has been accused of helping to facilitate the Muslim Rohingya crisis in Myanmar, which the UN now says should be prosecuted as genocide. Buddhist leaders in the country used the platform to spread misinformation and hate speech, including comparing Rohingya to dogs and pests. (Facebook already prohibits users from publishing “violent or dehumanizing speech.”)

Twitter’s new policy is part of a greater soul-searching initiative the company announced in March, after it was widely criticized for allowing misinformation and automated bots to flourish on its platform during the lead-up to the 2016 US presidential election. Since then, Twitter has limited the influence of suspicious accounts, deleted more than 140,000 third-party apps that violated its policies, and begun hiding tweets from potentially harmful users, among other efforts.

The fight against dehumanizing speech is only the latest part of that effort. Hate groups that use the platform to spread their messages would likely be impacted, for example, even if they’re not harassing a specific individual. But a rule against dehumanization won’t help in circumstances where harassment is facilitated through lying or misinformation. Conspiracy theorist Alex Jones’ claims that school shootings never happened wouldn’t necessarily be addressed by the policy, but they’re extremely harmful nonetheless.

What's more, Twitter may need to decide what to do about high-profile users like President Donald Trump, who once tweeted that Democrats want illegal immigrants “to pour into and infest our Country.” The company has historically made allowances for world leaders whose statements may violate its policies but are also newsworthy. “It’s not something where it’s set in stone,” says Harvey. “It’s something we’re going to be continuing to explore.”
 

https://twitter.com/Twitter/status/1044283285684318210

Time for Twitter to gas itself then.
 
It is in the nature of a business to quash competition. All I see is a big corporation being as shitty as they usually are.
Except the left used to sperg about that and demand regulation over petty shit, but of course when they're on the same bandwagon concerning a fundamental right and spite over losing an election, it's completely fine.
 
profits? from Twitter shares? never heard of those


My greatest hope is the social media giants go bankrupt through their own hubris before we get the chance to trust-bust them.

And when are they going to learn denying a person a platform to express their views does NOT change them? You can only ever create the illusion of perfect harmonious group-think that way, while the discontented will just be meeting in dark alleyways, planning on how to take you down....
 
Me too. It's the best way out since it avoids us potentially establishing a bunch of precedent about government regulating speech on the internet.

There are already regulations. No one is enforcing them. The moment these social media companies start curating their content and deciding who can and cannot post they legally move from being platforms to publishers. This makes them legally responsible for every post.
 
Well, if Twitter is as incompetent as we've been seeing, it wouldn't matter in the long run, considering everyone on both sides keeps acting like children with their attitudes and death threats. Twitter might as well just ban everyone that isn't their admin staff.

I'm pretty sure that the only reason that Twitter doesn't collapse on its own is mainly because of The Man They Call Hitler.

The irony is probably lost on them.
 
Why have we not recruited a flip moderator who works in a Filipino computer farm and will take specialized requests for our fair & balanced reports on bad social media behavior?
Join now, now paying in hotpockets!
 
Get a load of this guy, he actually thinks tech companies apply their policies evenly and in an unbiased manner.
Facebook does. Twitter doesn't.

However, the differences aren't because of the intent of the employees or the politics of Silicon Valley. If those were the main driving forces, I'm sure Facebook would apply their rules just as SJW-ey as Twitter does.

A few things: Facebook is profitable, Twitter isn't. Facebook is looking to extend their reach to foreign countries, and they've discovered that cultural differences mean applying stupid San Fran cultural mores to, say, Indians does not work as well as they thought it would. Because they're a profitable corporation, Facebook had to tell their employees to shut the fuck up, and they established a ridiculously spergy set of guidelines for moderating content. Oh, and they're consistent about it too. I've seen plenty of people screeching about white men (including our very own lolcow queerdykylie) get booted.

This is why Facebook is profitable and Twitter is on life support, courtesy of Donald Trump.

Now the upside about these new rules is that they're less vague than the old rules. Twitter uses a lot of human moderators and they're all ridiculously biased. But it's harder to prove bias with such flimsy rules.

Since the rules are getting more spergy, a la Facebook's, I think we'll see a lot more SJWs getting banned. There's no reason to think the enforcement will be 100% 1:1, but it'll definitely cause a big enough splash to piss a lot of people off.
There are already regulations. No one is enforcing them. The moment these social media companies start curating their content and deciding who can and cannot post they legally move from being platforms to publishers. This makes them legally responsible for every post.
No, that's not how CDA 230 works. Platforms have a First Amendment right to choose who they host on their services without becoming publishers. CDA 230 still applies.

If that weren't the case, you couldn't have specialized forums like kiwifarms, because you wouldn't have the power to curate it. You need to become extremely involved in manipulating the content before you become a publisher.
 
Susan Benesch, whose research Twitter cites in its announcement, defines dehumanizing speech as “describing other people in ways that deny or diminish their humanity,” like comparing them to insects, demons, or bacteria. The Dangerous Speech Project she founded and directs argues that it's one hallmark of a wider category called “dangerous speech,” which covers any form of expression that can increase the risk that an audience will participate in or accept violence against another person or group.
And how does one measure this metric, exactly?

This is gonna backfire on every left-wing nutter that applauds it SO HARD.
>implying that it will be used fairly and unbiased
>implying that anything other than a few token leftists will be banned to preserve an illusion of balanced fairness
>implying
 