Musk’s ‘free speech’ agenda dismantles safety work at Twitter, insiders say - Amid the wider turmoil since his takeover last month, Musk has moved rapidly to undermine Twitter’s deliberative content-moderation system


Hours after Elon Musk took control of Twitter in late October, the Trust and Safety team responsible for combating hate speech on the site received an urgent directive: Bring back the Babylon Bee.

To some Twitter employees, the order was troubling. The self-described Christian satirical site had been banned since March for refusing to delete a tweet naming Biden health official Rachel Levine its “Man of the Year.” Levine is a transgender woman, and the tweet violated a 2018 rule prohibiting Twitter users from targeting transgender people by referring to them by the name or gender they used before transitioning.

To Musk, the suspension was emblematic of the kind of liberal overreach he has vowed to purge from the platform. In a small kitchen on the second floor of the company’s San Francisco headquarters, he huddled with his lawyer, Alex Spiro, Trust and Safety lead Yoel Roth, another Twitter employee and a Tesla employee to discuss the suspension.

While the Babylon Bee’s tweet was “not cool,” Musk told them, it also wasn’t “sticks and stones” — violent threats that he argued should be the new standard for who gets booted.

Since that Friday in October, Musk’s brief reign at Twitter has been marked by chaos and upheaval as the notoriously mercurial and impatient billionaire seeks to impose his will on a company famous for its deliberative culture. Massive layoffs and Musk’s demand that remaining employees pledge to work “hardcore” hours have left one of the world’s most influential social media sites operating with a skeleton staff and experts predicting an eventual crash. Meanwhile, advertisers are fleeing, raising doubts about Musk’s ability to generate sufficient profits to satisfy investors in the $44 billion deal.

Amid the turmoil, Musk has stoked the culture-war issues that helped inspire him to purchase the company in the first place. A fierce advocate for the right “to speak freely within the bounds of the law,” Musk has moved rapidly — at times erratically — to undermine a regime built over a decade to define dangerous language on the site and protect vulnerable communities, replacing it with his own more impulsive style of ad hoc decision-making.

The Trust and Safety team lies at the heart of that battle. Known for examining every question from every angle before taking action, the team is responsible for many of the polarizing decisions that have prompted the right to cry censorship, including the ban on former president Donald Trump after his supporters attacked the U.S. Capitol on Jan. 6, 2021.

Interviews with more than a dozen current and former employees and people close to Musk who spoke on the condition of anonymity for fear of retribution, as well as documents obtained by The Washington Post, detail the clash of cultures as Musk and his allies have fired or alienated Trust and Safety team leaders and reversed their decisions. In recent days, Musk has reinstated both Trump and the Babylon Bee, along with a handful of other controversial accounts.

Now, Musk is looking to automate much of the Trust and Safety team’s work to police content — eliminating some of the nuance from complicated decisions for a cheaper approach.

Already, the team is dwindling. Its numbers have shrunk to dozens from a staff of 120 before Musk’s takeover. On his first night as owner, Musk fired its leader of 11 years, former Trust and Safety head Vijaya Gadde. Even before the takeover, Musk had personally amplified rhetoric from conservatives calling Gadde the company’s “chief censor.” Gadde did not respond to a request for comment.

With Gadde’s departure, Roth became the highest-ranking Trust and Safety official, steering the site through the Nov. 8 midterm elections. At first, he worked with Musk to stanch a flood of hate speech unleashed under the new owner. But Roth quit two weeks later, after Musk insisted on creating a $7.99 pay-for-play blue check system — since suspended — that led to a profusion of “verified” impostor accounts.

In an op-ed Friday in the New York Times, Roth acknowledged that the old system of content moderation — a “know-it-when-I-see-it” mélange reflecting the values of advertisers, Apple and Google app store managers, and Big Tech executives — can “have a dismaying lack of legitimacy.” But Roth warned that Musk “perpetuates the same lack of legitimacy through his impulsive changes.”

“It was for this reason that I chose to leave the company: A Twitter whose policies are defined by edict has little need for a trust and safety function dedicated to its principled development,” he wrote.

Roth declined to comment further. Musk and Twitter did not respond to requests for comment.

A new owner

In Twitter’s early years, the company’s executives called the social network “the free speech wing of the free speech party.” But then came GamerGate, the brutal 2014 campaign against women in the video game industry, and the site started to evolve. In 2015, Gadde committed in a Post op-ed to invest in tools and systems that would better detect and root out abuse.

After the 2016 election, it became clear that the social network had been used by Russian trolls to sow disinformation. The Trust and Safety team took on new significance and grew, increasingly developing policies that put it at the center of the culture wars that gripped the nation during the Trump administration and the coronavirus pandemic. Civil rights groups complained they weren’t doing enough; conservatives said they were being censored.

Despite Musk’s criticism in the lead-up to his purchase, he initially seemed more open to their work than the team expected. On Oct. 27, the day the deal closed, Gadde told her team that she’d had productive discussions with Musk.

Within hours, however, Gadde’s access to company email and other systems was cut off; she couldn’t even say a formal farewell. At a Halloween party at Twitter headquarters where workers had brought children dressed in costumes, some employees quietly left to go cry.

Twitter occupies two buildings linked by a foot bridge on the ninth floor. Across the bridge from the Halloween festivities, Roth huddled that day with Musk and his new team in a large second-floor conference room. Dubbed the war room, the space was strewn with toys, and two-year-old X Æ A-Xii, one of Musk’s children, was running around.

Musk’s takeover coincided with a critical stress test for the Trust and Safety team — the Brazil election. As they mapped out layoffs that ultimately would banish more than half the company’s 7,500 workers, Musk’s team wanted to cut off access to key content moderation tools to prevent disgruntled employees from wreaking havoc.

Roth pitched a plan to satisfy that desire while preserving sufficient access to permit his team to address any issues that might crop up in Brazil, where the incumbent, Jair Bolsonaro, uses Twitter much as Trump did and was likely to contest the results. Musk approved Roth’s plan.

The very next day, Musk demanded that Twitter reverse bans imposed on the Babylon Bee and Canadian psychologist Jordan Peterson, the self-proclaimed “professor against political correctness.” Peterson was suspended from Twitter this year after referring to transgender actor Elliot Page by the name he used before transitioning in a tweet that said the actor had his “breasts removed by a criminal physician.”

Musk has long promoted the Babylon Bee on Twitter. When he objected to a barb from its competitor in the medium, the Onion, in March 2021 about how he had made his money, Musk wrote, “Shame on you, Onion. This is why people are switching to @TheBabylonBee.”

But Musk’s sudden demand was problematic. If the company restored the two accounts without a clear reason, it would undo years of careful work. As owner of the company, Musk could just revoke the policy on transgender abuse. But Roth warned that such a move could make Twitter a lightning rod for the culturally charged national debate over gender during Musk’s first 24 hours at the helm.

Roth’s concerns were escalated to Musk, and they huddled in the kitchen to discuss. Twitter already was working on a new method of handling users who broke the rules. Instead of suspending their accounts, the company would obscure offending tweets with public warnings, much as it does with election misinformation.

Musk stood down, and Roth’s deputy sent a directive on Slack to his team to overhaul the suspension policy that very afternoon. The Babylon Bee and Peterson would remain suspended until Twitter unveiled the new policy.

Shortly after, Musk announced on Twitter that he would form a new content moderation council — composed of diverse voices from across the political spectrum — to help Twitter make decisions about restoring and banning accounts.

“No major content decisions or account reinstatements will happen before that council convenes,” Musk tweeted.

For the moment, it seemed like a victory.

The wheels come off

Through the following weekend, Twitter executives worked with Musk and his team to plan layoffs. Food, coffee and snacks were catered so that people didn’t have to leave the building. In violation of Twitter’s long-standing practice of encouraging reusable water bottles, tables in the war room were dotted with buckets of Voss water. Musk’s allies — investor David Sacks, Spiro and a host of engineers from his other companies, Tesla, SpaceX and The Boring Co. — regularly accompanied him.

That Sunday, Musk tweeted a link containing misinformation about the attack on House Speaker Nancy Pelosi’s husband. He later deleted it, although his reasoning was unclear.

Meanwhile, hate speech surged on the site as users tested Musk’s new Twitter. Civil rights groups raised concerns, and basketball star LeBron James tweeted that the rise of hate was “scary AF,” calling on Musk to take it seriously.

Trust and Safety sprang into action. The team had already been working on new rules that would result in removing more hateful slurs — a policy it had planned to roll out months later. They pulled the trigger early, despite warnings that the change might also sweep up posts that were fine.

“There was a lot of scrambling, a lot of disbelief,” said one former employee, who left the company this month. “How are we doing this in a week, when it was slated to take two quarters?”

When “Black Thursday” rolled around on Nov. 3, only 15 percent of Roth’s team was laid off, compared with half of the entire company. But other teams that also do critical content moderation suffered heavier cuts, including the Product Trust team that develops policies to prevent new features from harming people and the Health team that implements policies from Trust and Safety.

Civil rights groups called for an advertiser boycott. Roth tried to calm the public in a Twitter thread, where he said the company’s “core moderation capabilities remain in place.”

As layoff notices went out — first to Asia and Europe, then the United States — a group of longtime Twitter executives huddled together in a conference room and cried.

Civil rights groups and researchers who use the company as a resource noticed the effect almost immediately.

Thenmozhi Soundararajan, executive director of a Dalit rights group called Equality Labs, routinely asked Trust and Safety division employees to take down accounts violating Twitter’s rules on hate speech. Following the layoffs, her emails started bouncing back. When she made requests through Twitter’s public portal, she stopped receiving automated acknowledgments.

“It is such a dangerous time to have fired the moderators and this team,” she said in an interview, citing regional elections in India and surging misinformation and slurs. “Twitter has been already in a state of failure, this is just the nail in the coffin.”

On Nov. 7, Musk made many in the company cringe when he tweeted to his millions of followers that they should vote Republican. But Election Day otherwise came and went without major crises.

Then came Twitter Blue Verified, the paid version of the check marks that have long been appended to accounts of major corporations, celebrities, journalists and other influential people.

Already, Trust and Safety employees including Roth had made an impassioned appeal to delay the rollout, at least until after the midterms, warning that the fallout could be major.

In a seven-page memo reviewed by Musk, employees warned of the risks of broad impersonation of world leaders, advertisers, brand partners, election officials and other high-profile individuals.

To help mitigate this risk, the memo recommended that Twitter include “official account” labels, which should be “visually distinguishable” and “distinguishable on every surface” including in people’s timelines and replies.

Musk agreed to delay the rollout until after the midterms. But on Nov. 9, the check marks started popping up — as did the fake accounts. Impostors purporting to be basketball star James, President Biden and major brands began fomenting chaos.

Sales executive Robin Wheeler, Roth and Musk launched an audio chat on Twitter Spaces in an attempt to reassure advertisers, joining the call from separate rooms. Musk said the new paid-for verified service would make the site more reliable and help eliminate or demote fake accounts.

Mistakes will be made, Musk acknowledged. But, he said, “If we do not try bold moves, how will we make great improvements?”

The next day, Roth quit. A Trust and Safety director gathered employees in a team meeting and shared the abrupt announcement that Roth was out. Employees were given no explanation and worried about the institutional knowledge vacuum it created.

That same night, an urgent memo was sent to engineers over Slack. Less than 48 hours after the launch of Musk’s first major product, the company had disabled new sign-ups for check mark subscriptions to address the impersonation issues.

In a tweet late Monday, Musk said the company would hold off relaunching Blue Verified “until there is high confidence of stopping impersonation.” He also said Twitter “will probably use different color check for organizations than individuals.”

Within Trust and Safety, morale continued to decline. On Nov. 10, the team staged a one-day sickout.

After Musk’s Wednesday ultimatum demanding that employees commit to working harder, Trust and Safety team members and their allies inside the company huddled virtually over Google Meet and in Signal groups to discuss what to do. They no longer trusted private internal Slack channels after Musk fired several employees for their communications there.

There was a lopsided consensus that Musk didn’t value their work. The leaders who inherited Roth’s team had not even spoken to Musk, they said, and false tweets from impersonator blue-check accounts stayed up for hours. They discussed a potential mass resignation.

But some had to stay for health insurance that protected their partners and children as well as themselves. Those who wanted out did not judge the others poorly.

“The message was: Do what is best for your mental health and your family,” one of the participants said.

The next day, many members of the team met in San Francisco with an engineer close to Musk, as people outside the office called in. One employee walked away calling it a “disaster.”

“Several colleagues made up their minds to leave after that,” the person said. “There appeared to be no strategy for [Trust and Safety], and Elon was just overriding or making up policy as he went without any input.”

Ella Irwin, the executive whom Musk tapped to replace Roth as head of the division, suggested after the meeting that Musk would learn moving forward.

“It is a complete Trust and Safety nightmare. It’s the kind of thing you work on nonstop, you make it your baby,” another person said, “and for someone to just disregard everything you are saying is a complete mental blow.”

Virtually the entire team dedicated to rooting out covert foreign influence operations was fired or quit, jeopardizing the company’s ability to detect such accounts, including those attempting to influence U.S. politics.

On Friday, Musk announced that the Babylon Bee and Peterson would be permitted back on the site — with little explanation. The Babylon Bee declined to comment.

“New Twitter policy is freedom of speech, but not freedom of reach,” Musk tweeted, adding that negative and hate tweets will be made less visible and won’t earn the company money.

Over the weekend, Musk announced that Trump would be reinstated to the platform after conducting a Twitter poll in which roughly 52 percent of participants voted to bring the former president back.

“Hope all judgy hall monitors stay on other platforms — please, I’m begging u,” Musk later tweeted.

His promise to appoint a council to bring transparency to such momentous decisions appeared to have been forgotten.

https://www.washingtonpost.com/technology/2022/11/22/elon-musk-twitter-content-moderations/ (Archive)
 
Free speech is not, never has been, never will be, and never should be absolute, so yes, it'd be restricting speech.

Why do you think we can't threaten people, make false statements to police or under oath, shout fire in a crowded theater, or sexually harass people, etc? We can and have always selectively restricted speech.

Private companies cannot do whatever they want, there's all sorts of regulations in every industry. Even if they could that doesn't mean they should remain allowed to.
Free speech, as it currently is, does allow shit-spamming and advertising; hence, you'd be changing it from what the first amendment currently is.

Private companies are allowed to ban people from using any speech they want. The first amendment only applies to the government, not private companies. Hence, Twitter can ban you for posting QAnon shit if they want. Now they probably won't because Elon owns it, but that's his right to run it as he sees fit.

Yes, private companies cannot do whatever they want (if it violates the law). However, saying people that use their services cannot use speech they don't like is perfectly within their rights.
 
Free speech, as it currently is, does allow shit-spamming and advertising; hence, you'd be changing it from what the first amendment currently is.
Maybe it does currently allow it, I'm not sure. It's irrelevant anyway, we had blasphemy laws on the books in the past, and currently have obscenity laws in place. Free speech was never absolute, so barring spam is perfectly acceptable.

...Unless, you have a good argument for allowing automated trash, which you yourself complained about earlier?

Private companies are allowed to ban people from using any speech they want.
They shouldn't be allowed to. And if they really do want absolute power over speech, then they should not be treated as platforms but as publishers.

Sure would be a lot better for everyone except for libtard tyrants if we just allowed free speech and banned only illegal speech and spam.

Yes, private companies cannot do whatever they want (if it violates the law). However, saying people that use their services cannot use speech they don't like is perfectly within their rights.
So you oppose all industry regulations, or just ones that prevent big tech monopolies from enacting political discrimination and influencing our politics?

I think I know the answer.
 
Maybe it does currently allow it, I'm not sure. It's irrelevant anyway, we had blasphemy laws on the books in the past, and currently have obscenity laws in place. Free speech was never absolute, so barring spam is perfectly acceptable.

...Unless, you have a good argument for allowing automated trash, which you yourself complained about earlier?
I doubt they'd be able to get rid of that, because of the other implications that would follow. Advertising / commercial speech is indeed protected by the first amendment and removing it would have some other bad consequences.

Speaking of that, television stations are currently allowed to reject paid programming. You are saying they shouldn't be able to, which is pretty stupid.

They shouldn't be allowed to. And if they really do want absolute power over speech, then they should not be treated as platforms but as publishers.

Sure would be a lot better for everyone except for libtard tyrants if we just allowed free speech and banned only illegal speech and spam.
Then every website, including this one, would be gone. Mods wouldn't be able to move threads. If you made a post on Chris Chan, for example, in A&H, it'd have to stay.

It's amazing how you Republican boomers are fine with government telling businesses how to run themselves when it comes to being able to say the n-word.

So you oppose all industry regulations, or just ones that prevent big tech monopolies from enacting political discrimination and influencing our politics?

I think I know the answer.
I am all for industry regulations, but I also am all for companies being able to run themselves as they see fit. Just like how you are a theocrat who loves Republicans so much you can never criticize them, even when they do stuff you claim to be against, yet you also want the government to expand to make sure everyone can say whatever they want on any internet forum or site.

When you go onto Great Awakening to read the latest Q drops, you'd have to read posts from people calling you a retard for being such a gullible simpleton. Right now, they can delete those posts, but since you want to say the n-word on Twitter, they wouldn't be able to.
 
The term "safety" should be a red flag for anyone. If anything this started in the early 2010s when Twitter argued it was all in for protecting "women online" ie censoring anyone who makes jokes that mock the feelings of wine moms. From there it just got worse and worse til the only accounts were journos, celebrities, and left wing furries.
It makes me think of the Committee of Public Safety behind the Reign of Terror in the French Revolution. Hell, that might actually be the look they're going for.
 
No, because one has paid them money to provide a utility; they should at least give you back a refund if they kick you off their service, otherwise they would be committing fraud.
Oh right, fraud.
Kind of like, and hear me out here.
If a company that provides DDoS mitigation would somehow not answer calls for days on end to give you your backups.
Proceed to screed about how the website is some Russian website/harassment website.
And then do absolutely fuckall for weeks.
But it's a private company so they can do whatever they want with terms of what you post on their services, right? Even posts that don't fucking exist. lmfao.

Due to the definition of "companies being able to run themselves as they see fit" they can do that.

Also, for anyone actually wanting to know the relevant topics instead of watching this spergfight: look up what the heckler's veto is, and commercial speech.
All these hypotheticals and all this absolutist thinking are pretty irrelevant considering the actual topic of the 'debate,' you know, the usual "bombard the person with retarded info they cannot conceivably keep up with so I'm right" schtick that goes on.

Honestly at this point just ask him how he's making these dumb assumptions.
 
No, because one has paid them money to provide a utility; they should at least give you back a refund if they kick you off their service, otherwise they would be committing fraud.
At this point a lot of government agencies are using things like Facebook and Twitter for official communications. Some even, disturbingly, have discontinued normal means of communication. Pretty much anyone who has bought so much as a pack of cigarettes has paid at least something for that. Should private companies be able to turn off access to public services? Silence a sitting President in favor of his electoral opponent?
 
No, because one has paid them money to provide a utility; they should at least give you back a refund if they kick you off their service, otherwise they would be committing fraud.
Public utilities and common carriers are required to provide services in exchange for a set rate. It is not as if they can just refund your money for shutting the power off. This lie about private companies being able to do whatever they want is preposterous and absurd. Read about the history of the corporation, which was originally a corporate charter, granted by the sovereign, the crown, on condition that certain requirements were met.
 
Free speech, as it currently is, does allow shit-spamming and advertising; hence, you'd be changing it from what the first amendment currently is.

Private companies are allowed to ban people from using any speech they want. The first amendment only applies to the government, not private companies. Hence, Twitter can ban you for posting QAnon shit if they want. Now they probably won't because Elon owns it, but that's his right to run it as he sees fit.

Yes, private companies cannot do whatever they want (if it violates the law). However, saying people that use their services cannot use speech they don't like is perfectly within their rights.
The issue is that the government has been using Twitter both as a tool to put out ‘the correct information’ and as a tool to suppress conflicting information and those who hold/spread it. It’s led to them being not really a company, but some bastard monster that’s neither fish nor fowl. No company should have the monopoly they had, nor the ties to government. They have ended up with a reach and influence that’s quite frightening.
It certainly is up to Twitter as a private company to ban whatever words they want. The wider point for me is how deep the government has had its claws in Twitter as information delivery and gathering, and what that’s created.
If it was up to me I’d have free speech to the bounds of the law (ie you can’t go in and say ‘I shall be stabbing otterly at 4am tomorrow’ ) and I’d have user controls so that you can filter out any words that give you the vapours, or any subjects that upset you. Everything else that’s legal is on there. Perhaps advertisers could have those filters too so you could say ‘no widget adverts in posts with the following words on.’
 
It makes me think of the Committee of Public Safety behind the Reign of Terror in the French Revolution. Hell, that might actually be the look they're going for.
The one thing people forget is how terrible France was from the fall of Louis XVI until Napoleon. The Committee of Public Safety behind the Terror committed such atrocities in France that the country was in de facto civil war until Napoleon was able to get people to unite under his rule.
 
So the electric company should be able to turn off your electricity if you backed the wrong political candidate?
Are you really comparing having service from an electric company to posting on an internet forum? And who has been banned from Twitter just for backing the wrong candidate?

The issue is that the government has been using Twitter both as a tool to put out ‘the correct information’ and as a tool to suppress conflicting information and those who hold/spread it. It’s led to them being not really a company, but some bastard monster that’s neither fish nor fowl. No company should have the monopoly they had, nor the ties to government. They have ended up with a reach and influence that’s quite frightening.
It certainly is up to Twitter as a private company to ban whatever words they want. The wider point for me is how deep the government has had its claws in Twitter as information delivery and gathering, and what that’s created.
If it was up to me I’d have free speech to the bounds of the law (ie you can’t go in and say ‘I shall be stabbing otterly at 4am tomorrow’ ) and I’d have user controls so that you can filter out any words that give you the vapours, or any subjects that upset you. Everything else that’s legal is on there. Perhaps advertisers could have those filters too so you could say ‘no widget adverts in posts with the following words on.’
They have? Who has been banned from Twitter for anything other than violating their terms of service?
 
Are you really comparing having service from an electric company to posting on an internet forum? And who has been banned from Twitter just for backing the wrong candidate?


They have? Who has been banned from Twitter for anything other than violating their terms of service?
Well, anyone who asked very pertinent questions during covid, for one. Not people who ‘spread misinformation’ — people who asked the questions that would normally, routinely be asked during the approval process but weren’t.
But I’m not really talking about banning individuals. I’m talking about the communications between various government departments and various social media during, for example, covid. The big social media platforms were communicating with the government to coordinate on what should and what should not be allowed on their platforms. Facebook was too. Lists of banned terms, things you’ll be banned or referred to official info sources for even mentioning, links you can’t send in Messenger. I sent a picture of a wall of board games to a relative to ask which one her kid wanted for his birthday and it wouldn’t go through. We puzzled for ages, then I realised a game called Plague was in the middle. I took it down, snapped another pic and it went through. We posted the initial pic on various FB and Insta accounts and ALL the ones with the game Plague in got a ‘fact check’ message about covid come up. So there’s a check for keywords. That’s overreach and it’s suppression of free discussion, and probably a few breaches of the Smith-Mundt Act. Oh wait, that got repealed… so now you can legally be propagandised.
It all muddies the waters on what twitter and similar things like FB actually ARE. People are arguing about platform v publisher and public vs private when the truth seems to be that they were an arm of social info gathering and enforcement linked to the government. In that light, what they ban and encourage becomes what the government wants to and that’s a very worrying boundary to blur don’t you think?
 
Are you really comparing having service from an electric company to posting on an internet forum? And who has been banned from Twitter just for backing the wrong candidate?
I explained exactly why I used that analogy, because governments can and are using outlets like Facebook and Twitter to provide normal services. I can think of a couple local governments that do exactly that and have even shut down normal ways of talking to them. No Facebook? You can't access these services.

As for the second, I don't need an example. You said it's totally okay to do it for any reason or no reason, so it's just a matter of time until they do.

I note you don't address the one example I gave that actually did happen, of shutting down a sitting President during a contested election.
 
I doubt they'd be able to get rid of that, because of the other implications that would follow. Advertising / commercial speech is indeed protected by the first amendment and removing it would have some other bad consequences.
What implications? Spam and advertising are distinct, you can ban one and not the other, which companies already do. YouTube has a spam problem and fights it, but they still have commercials. They're not indistinguishable like you seem to be implying.

Speaking of that, television stations are currently allowed to reject paid programming. You are saying they shouldn't be able to, which is pretty stupid.
I know you're autistic and can only see everything in black & white, 0% or 100%, but we can have fair guidelines while still rejecting certain things. Nobody is advocating for forcing TV to accept literally everything.

Then every website, including this one, would be gone. Mods wouldn't be able to move threads. If you made a post on Chris Chan, for example, in A&H, it'd have to stay.
The solution is simple, treat these services like the utilities they are, which you oppose because we don't need Twitter if we're having a heart attack :story:

It's amazing how you Republican boomers are fine with government telling businesses how to run themselves when it comes to being able to say the n-word.
If people don't like the nigger word then they can filter it out, stop using that as an excuse to support banning ideas and perspectives you don't like.

I am all for industry regulations, but I also am all for companies being able to run themselves as they see fit.
Those are contradictory. If they are regulated then guess what, they're not running as they see fit--they are running within the bounds of imposed regulations.

So pick one. If you're for regulations generally but not these ones for big tech, then explain why you feel they alone deserve exemption in this case.
 
Well, anyone who asked very pertinent questions about covid, for one. Not people who ‘spread misinformation’ - people who asked the questions that would normally, routinely be asked during the approval process but this time weren’t.
But I’m not really talking about banning individuals. I’m talking about the communications between various government departments and various social media companies during, for example, covid - the big social media platforms were communicating with the government to coordinate on what should and should not be allowed on their platforms. Facebook was too: lists of banned terms, things you’ll be banned or referred to official info sources for even mentioning, links you can’t send in Messenger.

I sent a picture of a wall of board games to a relative to ask which one her kid wanted for his birthday and it wouldn’t go through. We puzzled for ages, then I realised a game called Plague was in the middle. I took it down, snapped another pic and it went through. We posted the initial pic on various FB and Insta accounts, and ALL the ones with the game Plague in them got a ‘fact check’ message about covid. So there’s a check for keywords.

That’s overreach and it’s suppression of free discussion, and probably a few breaches of the Smith-Mundt Act. Oh wait, that got repealed… so now you can legally be propagandised.
It all muddies the waters on what twitter and similar things like FB actually ARE. People are arguing about platform v publisher and public vs private when the truth seems to be that they were an arm of social info gathering and enforcement linked to the government. In that light, what they ban and encourage becomes what the government wants to and that’s a very worrying boundary to blur don’t you think?
People weren't banned for questioning covid, they were banned for spreading misinformation, which is against the terms of service. They violated the TOS, they got banned. It's really not a hard concept. I understand you're a good little conservative, so you want to be able to say the n-word and spread your covid conspiracy theories all over, but if you violate the terms of service, bans can happen.

I explained exactly why I used that analogy, because governments can and are using outlets like Facebook and Twitter to provide normal services. I can think of a couple local governments that do exactly that and have even shut down normal ways of talking to them. No Facebook? You can't access these services.
I highly doubt that's the case. Government services still have phone numbers. Otherwise old geezers wouldn't be able to use them.
As for the second, I don't need an example. You said it's totally okay to do it for any reason or no reason, so it's just a matter of time until they do.
Is it? That would be a terrible business decision. Again, it's their service, they can set up whatever terms of service they want. What you are arguing would allow people to spam all over, post irrelevant/off-topic stuff anywhere they wanted, and would likely shut down all social media and web forums.
I note you don't address the one example I gave that actually did happen, of shutting down a sitting President during a contested election.
That's because Trump was trying to lead an insurrection, which violated their terms of service.

What implications? Spam and advertising are distinct, you can ban one and not the other, which companies already do. YouTube has a spam problem and fights it, but they still have commercials. They're not indistinguishable like you seem to be implying.
What? That makes even less sense than usual, even for you. You are saying freedom of speech should apply to every internet site and forum, which would make it illegal to delete spam, off-topic posts, etc.

I know you're autistic and can only see everything in black & white, 0% or 100%, but we can have fair guidelines while still rejecting certain things. Nobody is advocating for forcing TV to accept literally everything.
Ironic coming from the guy who admitted to having autism, which has turned him into an 80-year-old boomer. And again, your comparison makes zero sense (as usual). Now, again, you are arguing that people's freedom of speech should be restricted. You can't pick and choose it with the argument you are making, Eris.

The solution is simple, treat these services like the utilities they are, which you oppose because we don't need Twitter if we're having a heart attack :story:
Except it's not a utility, because again, you don't need it to live day to day like you do a phone or water. I realize you have autism so you have trouble understanding context, but it's important here.

If people don't like the nigger word then they can filter it out, stop using that as an excuse to support banning ideas and perspectives you don't like.
Again, sites may not want that on their platform because it's bad for advertising. I didn't know you QAnoners were such commies.

Those are contradictory. If they are regulated then guess what, they're not running as they see fit--they are running within the bounds of imposed regulations.

So pick one. If you're for regulations generally but not these ones for big tech, then explain why you feel they alone deserve exemption in this case.
It's not contradictory at all. For example, if the A&H mods want to lock a post because it's gotten off topic, they should be able to. You are saying they shouldn't be able to. If Null wants to ban you for ban evading again, he should be able to. It's his site. The first amendment only applies to the government restricting your speech, not Twitter.

I am for regulation when it's for the better good. You wanting people to be able to post anything they want to any website they want isn't going to be good for everybody. For example, I understand you're a die-hard QAnoner who loves to thump his bible. If you go to GreatAwakening.win to talk about how amazing Republicans are with your fellow QTard religious zealots and you see a post there that is against banning birth control, you can report it and they're allowed to ban the poster. What you are arguing would lead to that not being possible. People like me would be able to register on there and laugh at you for being such a gullible simpleton, and there's nothing you could do. We could flood it with posts making fun of you, and you couldn't remove it. It would turn web forums into a shit hole. If you think DDOS attacks on Kiwi Farms are bad, wait til you make it illegal for Null to be able to delete off-topic posts all because you want to be able to post QAnon stuff to Twitter
 
People weren't banned for questioning covid, they were banned for spreading misinformation which is against the terms of service.
No they weren't, quite often they were banned just for a post including a forbidden word by some bot. Not that it would matter otherwise, because the people in charge of deciding what "misinformation" is are absolute fucking idiots who should be up against a wall anyway.
 
I could have sworn there were known incidents of politicians of various countries advocating violence on Twitter that were not touched. I want to say one of Iran's leaders in particular?

Am I misremembering?
 
I could have sworn there were known incidents of politicians of various countries advocating violence on Twitter that were not touched. I want to say one of Iran's leaders in particular?

Am I misremembering?
For a long time, ISIS was just allowed to act completely openly, while people were being banned for misgendering some fat pederast in programming socks every two minutes.
 
@Hollywood Hulk Hogan
That's because Trump was trying to lead an insurrection, which violated their terms of service.
So real-life events should be allowed to factor in under their ToS? Not that Trump led an insurrection anyway.

Don't you at least think they should be obligated to equally enforce their ToS? They allow objectively worse people than Trump to stay on. They allow racism against white people. Shit, they should at least need to be open about their bias. Try to justify that.

You are saying freedom of speech should apply to every internet site and forum, which would make it illegal to delete spam/off topic/etc posts.
I've repeatedly said there should be exceptions. Illegal content and spam, primarily. Make spam illegal, classify it as a form of harassment, done.

Now, again, you are arguing that peoples freedom of speech should be restricted.
No I'm not; it's already restricted and always has been, since free speech isn't absolute. I'm just arguing to restrict it in ways that make sense and actually will increase our ability to speak freely.

Ban spam (which they already combat anyway) and impose regulations on big tech to accommodate free speech. What's the problem?

Except it's not a utility, because again, you don't need it to live day to day like you do a phone or water.
Why do you want us to be able to say nigger on the phone but not social media? That's your entire argument, that we just want to be able to say nigger.

So then why should we be allowed to say nigger on the phone? Why shouldn't phone services be allowed to restrict service to strictly emergency usage for bigots? Then we're still able to get help for those heart attacks you're concerned about.

Again, sites may not want that on their platform because it's bad for advertising.
It wouldn't be bad for advertising because every site would allow free speech. They couldn't just shop around for a woke site that bans mean words because they'd be everywhere again. Companies never had an issue advertising EVERYWHERE back before Wokeism consumed everything, so they can just cope and seethe and go back to how things were.

It's not contradictory at all. For example, if the A&H mods want to lock a post because it's gotten off topic, they should be able to. You are saying they shouldn't be able to.
And they should be allowed to, remember I advocated for common sense guidelines? Want me to quote it? Try to think, retard. It's not difficult to conceive of sites that run without blatant political and ideological discrimination.

I am for regulation when it's for the better good. You wanting people to be able to post anything they want to any website they want isn't going to be good for everybody.
Yeah, half the country being silenced by the lunatic left isn't a problem to you, in fact it'd be a problem if that ended--I get that. You just don't make a compelling argument for it to continue.
 