Law Supreme Court showdown for Google, Twitter and the social media world - The Battle for Section 230 Goes to the U.S. Supreme Court

Paging @Null

The U.S. Supreme Court hears arguments this week in two cases that test Section 230, the law that provides tech companies a legal shield over what their users post online.

In November 2015, ISIS terrorists carried out coordinated attacks across Paris, killing 130 people and injuring 400. Among the dead was Nohemi Gonzalez, a 23-year-old American studying abroad who was the first person in her large family to graduate from college. This week, lawyers for her family and others are in the Supreme Court challenging a law enacted more than a quarter century ago—a law that protects social media companies from what the families see as the role of internet companies in aiding and abetting terrorist attacks.

How the court rules could be a gamechanger for American law, society, and social media platforms that are some of the most valuable businesses in the world.

What the law says​

At the center of two cases to be argued over two days is Section 230 of the 1996 Communications Decency Act, passed by Congress when internet platforms were just beginning. In just 26 words, Section 230 draws a distinction between interactive computer service providers and other purveyors of information. Whereas newspapers and broadcasters can be sued for defamation and other wrongful conduct, Section 230 says that websites are not publishers or speakers and cannot be sued for material that appears on those sites. Essentially, the law treats web platforms the same way it treats the telephone. And just like phone companies, websites that host speakers cannot be sued for what those speakers say or do.

The 26 Words That Made The Internet What It Is

At least that is the way the lower courts have uniformly interpreted Section 230. They have said that under the law, social media companies are immune from being sued for civil damages over most material that appears on their platforms. That is so, even though, at the same time, the law has an apparently contrary objective: It encourages social media companies to remove material that is obscene, lewd, excessively violent, harassing or otherwise objectionable.

The attack at the heart of the arguments​

This week's cases attempt to thread that needle. The Gonzalez family and the families of other terrorism victims are suing Google, Twitter, Facebook and other social media companies under the federal Anti-Terrorism Act, which specifically allows civil damage claims for aiding and abetting terrorism. The families allege that the companies did more than simply provide platforms for communication. Rather, they contend that by recommending ISIS videos to those who might be interested, the companies were seeking to get more viewers and increase their ad revenue.

Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.

"Now most of the money is made by advertisements, and social media companies make more money the longer you are online," he says, adding that one way to do that is by algorithms that recommend other related material to keep users online longer.

What's more, he argues, modern social media company executives knew the dangers of what they were doing. In 2016, he says, they met with high government officials who told them of the dangers posed by ISIS videos, and how they were used for recruitment, propaganda, fundraising, and planning.

"The attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff . . . those government officials . . . told them exactly that," he says.

Google general counsel Halimah DeLaine Prado vehemently denies any such wrongdoing.

"We believe that there's no place for extremist content on any of our products or platforms," she says, noting that Google has "heavily invested in human review" and "smart detection technology," to "make sure that happens."

As Trump Targets Twitter's Legal Shield, Experts Have A Warning

Prado acknowledges that social media companies today are nothing like the social media companies of 1996, when the interactive internet was an infant industry. But, she says, if there is to be a change in the law, that is something that should be done by Congress, not the courts.

The choice before the court​

Daniel Weitzner, the founding director of the MIT Internet Policy Research Initiative, helped draft Section 230 and get it passed in 1996.

"Congress had a really clear choice in its mind," he says. "Was the internet going to be like the broadcast media that were pretty highly regulated?" Or, was it going to be like "the town square or the printing press?" Congress, he says, "chose the town square and the printing press." But, he adds, that approach is now at risk: "The Supreme Court now really is in a moment where it could dramatically limit the diversity of speech that the internet enables."

Section 230: A Key Legal Shield For Facebook, Google Is About To Change

There are many "strange bedfellows" among the tech company allies in this week's cases. Groups ranging from the conservative Chamber of Commerce to the libertarian ACLU have filed an astonishing 48 briefs urging the court to leave the status quo in place.

But the Biden administration has a narrower position. Columbia law professor Timothy Wu summarizes the administration's position this way: "It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230."

In short, hyperlinks, grouping certain content together, and sorting through billions of pieces of data for search engines are one thing, but actually recommending content that shows or urges illegal conduct is another.

If the Supreme Court were to adopt that position, it would be very threatening to the economic model of social media companies today. The tech industry says there is no easy way to distinguish between aggregating and recommending.

Justice Clarence Thomas Takes Aim At Tech And Its Power 'To Cut Off Speech'

And it likely would mean that these companies would constantly be defending their conduct in court. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two different things. What's more, the Supreme Court has made it much more difficult to jump that hurdle. The second case the court hears this week, on Wednesday, deals with just that problem.

What makes this week's cases so remarkable is that the Supreme Court has never dealt with Section 230. The fact that the justices have agreed to hear the cases shows that they have concerns. Justice Clarence Thomas has been outspoken about his view that the law should be narrowly interpreted, meaning little protection for social media companies. Justice Samuel Alito has indicated he might agree with that. But the views of the other justices are something of a black box.

The cases are Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh.
 
So in my understanding, that means that they want Big Tech to stop hiding behind 230 if they, in any way, try to "promote" certain content at you, correct?

Honestly, I'd be fine with that. Especially if that means targeted ads die with it. Or at the very least, the disingenuous "we thought you might like this" (((recommended))) content that is just ads in disguise.
Well there is an upside and a downside to that as I tried to explain.

It would fuck up the mainstream social media sites' ability to control conversation but it would fuck up everyone else's conversations as well.

There are actual lawyers on here that I would like to hear from.
 
I like the idea (weakening Section 230 in where Big Tech abuses it) but dislike the specific case (they're guilty because people found extremist content online). It is possible the USSC hits both: No, Section 230 doesn't cover all abuse in algorithm manipulation. No, Google is not guilty by proxy of terrorism. The USSC are some of the finest lawyers in the country and are often very good in their decisions.

The problem is, Section 230 is abused because the establishment allows it. Those companies should've been reined in years ago. But the people affected don't have the funds to go to the Supreme Court, challenge Big Tech, and argue they are abusing the spirit of Section 230. Nobody has comparable power to stop them, so they are allowed to do so. And largely, the establishment likes it that way.
 
I saw a pic of the Gonzalez family looking suited up and sad in another article. Fuck 'em! I want my recommendation radicalization rabbit hole!


The Biden administration is backing the Gonzalez family in the dispute before the Supreme Court. Stewart, the deputy solicitor general, said the Justice Department believes interactive service providers can be sued for their recommendations, which are choices made by the platforms themselves on how to organize third-party content.
 
“Would Google collapse and the Internet be destroyed if YouTube and therefore Google were liable for posting and refusing to take down videos that it knows are defamatory and false,” Alito asked Blatt.

Who defines what is defamatory or false? What recourse do you have if something is flagged as defamatory or false? How can you prove Google knew or did not know it was defamatory or false?
 
“That the internet never would have gotten off the ground if everybody would have sued was not what Congress was concerned about at the time it enacted this statute,” Jackson said.

But a brief in the case filed by Section 230’s original authors has contradicted Jackson’s account of Section 230’s intent and purpose.

Lol, what a joke this article just made Brown out to be.
 
whether Google bears liability for user-generated content when its algorithms recommend videos to users

I mean they are responsible for that video showing up in a feed I guess, but if that person goes and does a glownigger then no, Google shouldn't be responsible for the actions of its retarded consoomers.

It doesn't seem like a hard decision my dude, because once Google becomes responsible for the independent actions of individuals, oh boy, imagine the lawsuits that you can fuck Twitter and Facebook with.
 
They're going to lose, because the Supreme Court leans toward the Bill of Rights, and Section 230 doesn't conflict with the Bill of Rights in any way.

I'm so sick of imbeciles trying to cancel the Internet.

Also, YouTube automatically serves recommendations based on user search/watch history, not at random or with intelligent purpose. This argument is as weak as a flat Sierra Mist.
 
Well this explains to me why YouTube started recommending me videos about how Google is the good guy in front of the Supreme Court (but the videos were about the Java APIs instead).

Yeah, that was a totally 100% organic recommendation and totally didn't have any agenda behind it whatsoever.
 
Funny how posting anything that goes against the current narrative gets your content shadow-banned, delisted or just outright removed within minutes on any of the major platforms. But ISIS and other terrorist organizations are allowed to freely advertise and recruit on sites like facebook and twitter with impunity.
LOL the jews also fund and control ISIS why the fuck would they ban it or not promote it in the algo when it was created to solely promote jewish geopolitical interests along with most of the other terrorist organizations of the world from Antifa to Green movement.
 
I mean they are responsible for that video showing up in a feed I guess, but if that person goes and does a glownigger then no, Google shouldn't be responsible for the actions of its retarded consoomers.

It doesn't seem like a hard decision my dude, because once Google becomes responsible for the independent actions of individuals, oh boy, imagine the lawsuits that you can fuck Twitter and Facebook with.
Not just Facebook or Twitter, but the logic of "brainwashed by media" will naturally bleed over to arguments for banning books and vidya games, we'll swing back to the days of Catcher in the Rye being banned, along with any other book that could give people ideas.
 
My ideal outcome would be one where any website/company that curates their content, whether via an algorithm, a program, or a person, is no longer considered a platform but a publisher, and thus liable for what it chooses to publish.

Which is pretty fair IMHO. We all know that certain platforms are directing, pushing and censoring viewpoints that they disagree with. That places them squarely in the publisher category and not just a simple platform like they claim.

I hope that SCotUS has the sense to see where the end of this road is and not just wuss out and hand out a "new ruling but nothing changes" verdict.

Google, Twitter, Reddit... all these sites curate content and push certain narratives while blatantly censoring anything they deem "wrong". It's time they were held to account. Either you're a platform that allows all viewpoints to be expressed (i.e. a public square) or you're a publisher no different than any news network or newspaper.

They can't have it both ways.
 
We all know that certain platforms are directing, pushing and censoring viewpoints that they disagree with. That places them squarely in the publisher category and not just a simple platform like they claim.
So the Community Happenings thread here and the featured content on the main page would be "publishing" by null and would lead to many, many, more lawsuits. Only this time they'd have SCOTUS precedent. It's horrifying.

It doesn't just affect the big players, it affects everyone.
 
It's time they were held to account. Either you're a platform that allows all viewpoints to be expressed (i.e. a public square) or you're a publisher no different than any news network or newspaper.

But newspapers and networks are how they controlled public perceptions in the past. The internet threatens that. It makes people more aware of shenanigans. Why, if you were in power, would you want the population to know all the shady shit you got up to? So they have to control the internet. Of course, under the auspices of "harm reduction" and "misinformation".
 
So the Community Happenings thread here and the featured content on the main page would be "publishing" by null and would lead to many, many, more lawsuits. Only this time they'd have SCOTUS precedent. It's horrifying.

It doesn't just affect the big players, it affects everyone.

No, because Null doesn't curate the forum. He doesn't choose what gets posted. His only rule is to follow laws that are already on the books in the US. So no, Null would be safe, as he's only providing a platform for people to use.

Now if Null started deleting some threads and pushing up others to the front page then he would be curating the site and no longer a platform.

Blanket immunity is the issue here. We need clearly defined rules to separate publishing from platforming.
 
Now if Null started deleting some threads and pushing up others to the front page then he would be curating the site and no longer a platform.
Not the forum as a whole, but the "featured content". I don't think you're paying attention. Deciding to delete or allow threads or posts is normal editorial action. Promoting threads or posts is what is at issue. Like the "featured" posts. Seriously, read the writ or at least skim it.
 
KF doesn't use algorithms to promote content and will be wholly unaffected by this decision. The only promotion we do (featured content) is already editorialized and not covered by Section 230, so I ask moderators to be very careful with how they word things and to be as objective as possible.

I like the idea (weakening Section 230 in where Big Tech abuses it) but dislike the specific case (they're guilty because people found extremist content online). It is possible the USSC hits both: No, Section 230 doesn't cover all abuse in algorithm manipulation. No, Google is not guilty by proxy of terrorism. The USSC are some of the finest lawyers in the country and are often very good in their decisions.
I wasn’t aware that featured content wasn’t covered by 230 already. Glad to hear this case won’t negatively impact your site.
 
Blanket immunity is the issue here. We need clearly defined rules to separate publishing from platforming.
This

Nobody seems interested in gutting 230, but rather in settling on what you can and can't do while still hiding under its blanket.

Seems to me that it's awfully hypocritical of Google, or Twitter, or Facebook, or whoever to claim they're merely a service provider when they actively edit your ability to say certain things. We wouldn't accept the phone company editing our calls on the fly, or the Post Office censoring our letters; that'd be clear abuse. But somehow, it's not abuse when your posts get deleted not for criminal things, but just for ideas the company providing your internet access finds to be off-brand? Give me a break. It's time the common-sense line is drawn -

The "It's not censoring, it's enforcing community standards and safety" dodge must go the same way the "compelling interest in public safety" excuse to regulate firearms in direct violation of the 2A did: an outright LIE agreed upon by those in power who like the authority it gives them.


If you do ANYTHING to a user OUTSIDE of responding to breaking a LAW (not a platform "rule" but a law)? YOU ARE EDITING, AND THAT MAKES YOU A PUBLISHER.

Publishers DO NOT GET immunity.
 