Supreme Court showdown for Google, Twitter and the social media world - The Battle for Section 230 Goes to the U.S. Supreme Court

Paging @Null

The U.S. Supreme Court hears arguments this week in two cases that test Section 230, the law that provides tech companies a legal shield over what their users post online.

In November 2015, ISIS terrorists carried out coordinated attacks across Paris, killing 130 people and injuring 400. Among the dead was Nohemi Gonzalez, a 23-year-old American studying abroad who was the first person in her large family to graduate from college. This week, lawyers for her family and others are in the Supreme Court challenging a law enacted more than a quarter century ago, a law that protects social media companies from liability for what the families see as internet companies' role in aiding and abetting terrorist attacks.

How the court rules could be a game changer for American law, society, and the social media platforms that are some of the most valuable businesses in the world.

What the law says

At the center of two cases to be argued over two days is Section 230 of the 1996 Communications Decency Act, passed by Congress when internet platforms were just beginning. In just 26 words, Section 230 draws a distinction between interactive computer service providers and other purveyors of information. Whereas newspapers and broadcasters can be sued for defamation and other wrongful conduct, Section 230 says that websites are not publishers or speakers and cannot be sued for material that appears on those sites. Essentially, the law treats web platforms the same way it treats the telephone. And just like phone companies, websites that host speakers cannot be sued for what those speakers say or do.

At least that is the way the lower courts have uniformly interpreted Section 230. They have said that under the law, social media companies are immune from being sued for civil damages over most material that appears on their platforms. That is so, even though, at the same time, the law has an apparently contrary objective: It encourages social media companies to remove material that is obscene, lewd, excessively violent, harassing or otherwise objectionable.

The attack at the heart of the arguments

This week's cases attempt to thread that needle. The Gonzalez family and the families of other terrorism victims are suing Google, Twitter, Facebook and other social media companies under the federal Anti-Terrorism Act, which specifically allows civil damage claims for aiding and abetting terrorism. The families allege that the companies did more than simply provide platforms for communication. Rather, they contend that by recommending ISIS videos to those who might be interested, the companies were seeking to attract more viewers and increase their ad revenue.

Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.

"Now most of the money is made by advertisements, and social media companies make more money the longer you are online," he says, adding that one way to do that is by algorithms that recommend other related material to keep users online longer.

What's more, he argues, modern social media executives knew the dangers of what they were doing. In 2016, he says, they met with high-ranking government officials who warned them of the dangers posed by ISIS videos and how they were used for recruitment, propaganda, fundraising, and planning.

"The attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff . . . those government officials . . . told them exactly that," he says.

Google general counsel Halimah DeLaine Prado vehemently denies any such wrongdoing.

"We believe that there's no place for extremist content on any of our products or platforms," she says, noting that Google has "heavily invested in human review" and "smart detection technology," to "make sure that happens."

Prado acknowledges that social media companies today are nothing like the social media companies of 1996, when the interactive internet was an infant industry. But, she says, if there is to be a change in the law, that is something that should be done by Congress, not the courts.

The choice before the court

Daniel Weitzner, the founding director of the MIT Internet Policy Research Initiative, helped draft Section 230 and get it passed in 1996.

"Congress had a really clear choice in its mind," he says. "Was the internet going to be like the broadcast media that were pretty highly regulated?" Or, was it going to be like "the town square or the printing press?" Congress, he says, "chose the town square and the printing press." But, he adds, that approach is now at risk: "The Supreme court now really is in a moment where it could dramatically limit the diversity of speech that the internet enables."

There are many "strange bedfellows" among the tech company allies in this week's cases. Groups ranging from the conservative Chamber of Commerce to the libertarian ACLU have filed an astonishing 48 briefs urging the court to leave the status quo in place.

But the Biden administration has a narrower position. Columbia law professor Timothy Wu summarizes the administration's position this way: "It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230."

In short, hyperlinks, grouping certain content together, and sorting through billions of pieces of data for search engines are one thing; actually recommending content that shows or urges illegal conduct is another.

If the Supreme Court were to adopt that position, it would be very threatening to the economic model of social media companies today. The tech industry says there is no easy way to distinguish between aggregating and recommending.

And it likely would mean that these companies would constantly be defending their conduct in court. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two different things. What's more, the Supreme Court has made it much more difficult to jump that hurdle. The second case the court hears this week, on Wednesday, deals with just that problem.

What makes this week's cases so remarkable is that the Supreme Court has never dealt with Section 230. The fact that the justices have agreed to hear the cases shows that they have concerns. Justice Clarence Thomas has been outspoken about his view that the law should be narrowly interpreted, meaning little protection for social media companies. Justice Samuel Alito has indicated he might agree with that. But the views of the other justices are something of a black box.

The cases are Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh.
 
But newspapers and networks are how they controlled public perception in the past. The internet threatens that. It makes people more aware of shenanigans. Why, if you were in power, would you want the population to know about all the shady shit you got up to? So they have to control the internet, of course under the auspices of "harm reduction" and "misinformation"
I'm not arguing with you dude, you're 100% right.

But control of the internet is not really about Section 230. If "they" thought they could control the web, Section 230 would be gone in a heartbeat so they could hound and prosecute anyone who dares to post/speak badthink.

But so far the internet has kinda slipped under their radar. Remember, the power brokers are all very old boomer types. Remember how shocked and horrified they were when Hillary lost? It's because they really underestimated the power of social media and the internet and paid the price.

Well notice any changes in 2020?

Yah, they won't make the same mistake twice. The internet as we know it will slowly be brought under heel. It's just a lot harder because it's so decentralized. There are so many small players and bit roles that it's not easy to just carve the whole thing up and control it.

Look, I'm all for section 230. I just want very clear lines, in legal terms, about what is a platform and what is a publisher.

If Jersh has to toss the Featured section then so be it. It's a small price to pay to gain so much elsewhere. Imagine a Reddit where the mods couldn't just baleet anything they didn't like. Imagine Google search without SEO ranks and without Google pulling bullshit on companies and people they don't like. Section 230 is both a shield and a sword depending on who's using it.

Yah, will things get kinda fucked up? Yah, there will be some casualties, but none that I would actually miss. Do you really care if Wikipedia dies? Do you really care if Reddit goes the way of the dodo? Will the world be less great if Twitter has to drop its trending tag and actually show what's really trending?

Fuck no...let the narrative pushers stay on things like CNN, MSNBC, FOX and the New York Times. Their time is slowly coming to an end anyway, as is their usefulness to their masters.
 
Look, I'm all for section 230. I just want very clear lines, in legal terms, about what is a platform and what is a publisher.

I do too. But there are too many powerful interests involved here. They simply CANNOT allow the internet to be what it was pre-2015/2016. Having an unbiased internet puts too much power and information in the hands of the public, precisely the thing they've been trying to reverse.
 
No, because Null doesn't curate the forum. He doesn't choose what gets posted. His only rule is to follow established laws that are already on the books in the US. So no, Null would be safe, as he's only providing a platform for people to use.

Now if Null started deleting some threads and pushing up others to the front page, then he would be curating the site and it would no longer be a platform.
He does though. He deletes and locks shit threads. He will delete a thread if it's not really about a lolcow, or if it's an obvious personal army attempt. That's curation. But that doesn't mean he endorses any of the other posts, posts that carry the obvious username and avatar of the actual author responsible.
 
More like a bunch of fascists trying to control speech and hoping SCOTUS will help them.

Maybe Bill was right.
 
It's good that the Justices are asking decent questions that nobody can answer; it basically means this case is dead on arrival. Only like one justice might rule in favor because they have some autistic bent on safety over freedom. Everybody else will be utterly unconvinced.
 
I hate how overcomplicated this has been made. Just make it so platforms aren't liable for any content published and can't editorialize (with only explicitly illegal content removed, not things they don't like).

I have to be missing something.
 
It's not only algorithms, it's the owner's promotion of content. It's why Craigslist did away with personals entirely. I know it's an NPR link but this was not widely covered.
That's an inaccurate description. Craigslist did away with personal ads because of a sex trafficking carveout added to 230 (FOSTA-SESTA) to enable the government to go after Backpage.

For the record, Backpage was receiving emails from parents explaining that certain listings featured underage children who had been abducted, and it allowed the listings to stay up.
 
It won't.

...wait, you weren't serious, were you? The law would never be applied to big corporations. They'd continue to host terrorists and CP unscathed.

Yeah I'm getting out over my skis a bit to be sure. I'm looking at it more like how the GDPR is a huge pain in the ass for the likes of Google & FB while having basically no impact on smaller sites in the US.

It's all probably wishful thinking. Putting faith in the Neocon wing of the Neoliberals is a fool's errand.
 
I hate how overcomplicated this has been made. Just make it so platforms aren't liable for any content published and can't editorialize (with only explicitly illegal content removed, not things they don't like).

I have to be missing something.
Running a site with a narrow, specialized topic is editorializing enough to make such a site a target for nonsense lawsuits.

So, for example, running a site about lolcows and periodically using one's judgement to delete threads that are solely personal army requests is a form of editorialization. They will argue that running a site that way implicitly endorses the remaining threads.

Ultimately, it should be legal to run a site with a narrow focus, even a political one, without turning it into a newspaper. And someone who operates such a site needs to be able to curate it for their focus without being responsible for posts that are obviously written by members of the site with avatars and usernames next to the post.

It should be legal to run Stormfront or ResetEra in that manner. They're obviously not newspapers. They're discussion forums with a specific focus (even a political focus).

It's like an extension of the IRL first amendment right of freedom of association. You can rent out a hall for your political event. You can invite people to speak. You can keep out the pro-choice, pro-life, gun-grabbing, NRA types (whatever your political interest is). But that doesn't mean you're responsible if one of the speakers on the podium starts defaming someone.

And I think this even applies to Facebook or Twitter. You can argue they have too much power and I'd definitely agree. But I don't think that's because they're newspapers pretending to be discussion forums. Their format is nowhere near that of a newspaper. They're just very biased, very large discussion forums. The bigger problem is tech and the banks strangling competition.

If you want a genuine example of a business really blurring the line between online newspaper and discussion forum, look at Substack or Medium. They are far closer to the traditional newspaper format, collecting subscription fees for a medium with a traditional newspaper/magazine article format while dodging liability with section 230.
Yeah I'm getting out over my skis a bit to be sure. I'm looking at it more like how the GDPR is a huge pain in the ass for the likes of Google & FB while having basically no impact on smaller sites in the US.

It's all probably wishful thinking. Putting faith in the Neocon wing of the Neoliberals is a fool's errand.
That's fair. I could see some kind of sliding scale of regulation increasing liability as you get bigger.

I'm just super suspicious of congress actually getting it right.
 
This

Nobody seems interested in gutting 230, but rather in settling on what you can and can't do while still hiding under its blanket.

Seems to me that it's awfully hypocritical of Google, or Twitter or Facebook or whoever to claim they're merely a service provider when they actively edit your ability to say certain things. We wouldn't accept the phone company editing our calls on the fly, or the Post Office censoring our letters; that'd be clear abuse. But somehow, it's not abuse when your posts get deleted not for criminal things, but just for ideas the company providing you internet access finds to be off-brand? Give me a break. It's time the common-sense line is drawn:

The "Its not censoring, it's enforcing community standards and safety" dodge must go the same way the "compelling interest public safety" excuse to regulate firearms in direct violation of the 2A did. - an outright LIE agreed upon by those in power who like the authority it gives them.


If you do ANYTHING to a user OUTSIDE of responding to breaking a LAW (not a platform "rule" but a law)? YOU ARE EDITING, AND THAT MAKES YOU A PUBLISHER.

Publishers DO NOT GET immunity.
That is the one, exact, specific, intentional thing that 230 protects. You are allowed to moderate your site any way you choose, even if it's arbitrary and capricious, and that doesn't mean you endorse the stuff that you don't remove. That's the entire PURPOSE of 230. You can say it's a bad law. You'd be wrong, but you could say it. You CAN'T say that it's not working as intended in that regard.
 
I hate how overcomplicated this has been made. Just make it so platforms aren't liable for any content published and can't editorialize (with only explicitly illegal content removed, not things they don't like).

I have to be missing something.
Nuance.

Moderation has to come in somewhere, and that's the fly in the ointment. Yes, you can have an unmoderated website, but the financial system is also capable of completely locking you out for not moderating your website the way they want you to, or really for any reason they deem fit but don't legally have to share with you.

Really, 230 should just be left alone; anyone thinking some change or another to it is going to fix fucking anything has already lost the damn plot. We need freedom of commerce, which we don't have, for anything at all to be "fixed". It all comes back to that.
 
They aren't trying to do away with 230 but they want a specific exception carved out when it comes to recommending content. [ ...] There's not an obvious link to KF. But something like the highlighted tweets on the main page could be seen as you, personally, "publishing" by "recommending" content.
My ideal outcome would be one where any website/company that curates its content, via an algorithm or program or person, is no longer considered a platform but a publisher, and thus liable for what it chooses to publish. [ ... ] That places them squarely in the publisher category and not just a simple platform like they claim.
[ ... ] Either you're a platform that allows all viewpoints to be expressed (i.e. a public square) or you're a publisher no different than any news network or newspaper.
So the Community Happenings thread here and the featured content on the main page would be "publishing" by Null and would lead to many, many more lawsuits.
No, because Null doesn't curate the forum. He doesn't choose what gets posted. His only rule is to follow established laws that are already on the books in the US. So no, Null would be safe, as he's only providing a platform for people to use.

Now if Null started deleting some threads and pushing up others to the front page, then he would be curating the site and it would no longer be a platform.

Blanket immunity is the issue here. We need clearly defined rules to separate publishing from platforming.
Not the forum as a whole, but the "featured content". [ ... ] Deciding to delete or allow threads or posts is normal editorial action. Promoting threads or posts is what is at issue. Like the "featured" posts.
Seems to me that it's awfully hypocritical of Google, or Twitter or Facebook or whoever to claim they're merely a service provider when they actively edit your ability to say certain things. [ ... ] If you do ANYTHING to a user OUTSIDE of responding to breaking a LAW (not a platform "rule" but a law)? YOU ARE EDITING, AND THAT MAKES YOU A PUBLISHER.

Publishers DO NOT GET immunity.
He does though. He deletes and locks shit threads. He will delete a thread if it's not really about a lolcow, or if it's an obvious personal army attempt. That's curation.
That is the one, exact, specific, intentional thing that 230 protects. You are allowed to moderate your site any way you choose, even if it's arbitrary and capricious, and that doesn't mean you endorse the stuff that you don't remove.

while promoting content via "featured" threads and removing or refusing to remove content are relevant to the current supreme court docket, let's not forget that there are other, less obvious, ways that the farms have recently straddled the line between platform and publisher: word filters.

the use of word filters is editorialization of content. it's a decision made by a platform to alter what a person has written, such that it may also alter the original intent of the content, which is not protected by section 230. it's publication, which makes kiwi farms a publisher. i'm fairly certain that null realized this a while ago though, which is why we have words like "hash browns", "milk", "retard", and "ween" (to list a few) again.
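
for the mechanically curious: a word filter is just a substitution pass over the post body before it's stored or displayed. a minimal python sketch (the filter table below is purely hypothetical, not the farms' actual config) shows why the rendered text is no longer exactly what the user wrote:

import re

# hypothetical filter table; any real forum's mappings are just site config
FILTERS = {
    "buzzword": "hash browns",
    "badword": "milk",
}

def apply_word_filters(post: str) -> str:
    # substitute each filtered term before the post is shown, so the
    # displayed text is no longer exactly what the user typed
    for target, replacement in FILTERS.items():
        # \b keeps the filter from mangling words that merely contain
        # the target (the classic "clbuttic" problem)
        post = re.sub(rf"\b{re.escape(target)}\b", replacement, post,
                      flags=re.IGNORECASE)
    return post

print(apply_word_filters("what a buzzword-laden take"))
# -> "what a hash browns-laden take"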
 
Audio of Twitter's lawyer (50/50 Musk is involved)
 
I hate how overcomplicated this has been made. Just make it so platforms aren't liable for any content published and can't editorialize (with only explicitly illegal content removed, not things they don't like).

I have to be missing something.
If you'd experienced the internet back when 95% of Usenet and message boards were entirely unmoderated, you'd know why that can't work. Hordes of autistic retard chimps would shit up platforms with gore and gibberish to the point that they're unusable, just because they can.
 
If you'd experienced the internet back when 95% of Usenet and message boards were entirely unmoderated, you'd know why that can't work. Hordes of autistic retard chimps would shit up platforms with gore and gibberish to the point that they're unusable, just because they can.
Giving users tools to avoid stuff is easier now. Twitter already detects offensive images and asks if you want to see them, and lets you filter words. Gibberish and gore shouldn't be a problem.
 
On some further thought, I think they are gonna keep Section 230, but if you make algorithms dedicated to a certain thing, or not choosable, then you are probably liable. I expect something really good to come from this, probably more transparency and more options for consumers when it comes to algorithms online!
 