Law Supreme Court showdown for Google, Twitter and the social media world - The Battle for Section 230 Goes to the U.S. Supreme Court

Paging @Null

The U.S. Supreme Court hears arguments this week in two cases that test Section 230, the law that provides tech companies a legal shield over what their users post online.

In November 2015, ISIS terrorists carried out coordinated attacks across Paris, killing 130 people and injuring 400. Among the dead was Nohemi Gonzalez, a 23-year-old American studying abroad who was the first person in her large family to graduate from college. This week, lawyers for her family and others are in the Supreme Court challenging a law enacted more than a quarter century ago—a law that protects social media companies from lawsuits over what the families see as the role of internet companies in aiding and abetting terrorist attacks.

How the court rules could be a game changer for American law, society, and social media platforms that are some of the most valuable businesses in the world.

What the law says

At the center of two cases to be argued over two days is Section 230 of the 1996 Communications Decency Act, passed by Congress when internet platforms were just beginning. In just 26 words, Section 230 draws a distinction between interactive computer service providers and other purveyors of information. Whereas newspapers and broadcasters can be sued for defamation and other wrongful conduct, Section 230 says that websites are not publishers or speakers and cannot be sued for material that appears on those sites. Essentially, the law treats web platforms the same way that it treats the telephone. And just like phone companies, websites that are host to speakers cannot be sued for what the speakers say or do.

At least that is the way the lower courts have uniformly interpreted Section 230. They have said that under the law, social media companies are immune from being sued for civil damages over most material that appears on their platforms. That is so, even though, at the same time, the law has an apparently contrary objective: It encourages social media companies to remove material that is obscene, lewd, excessively violent, harassing or otherwise objectionable.

The attack at the heart of the arguments

This week's cases attempt to thread that needle. The Gonzalez family and the families of other terrorism victims are suing Google, Twitter, Facebook and other social media companies under the federal Anti-Terrorism Act, which specifically allows civil damage claims for aiding and abetting terrorism. The families allege that the companies did more than simply provide platforms for communication. Rather, they contend that by recommending ISIS videos to those who might be interested, the companies were seeking to get more viewers and increase their ad revenue.

Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.

"Now most of the money is made by advertisements, and social media companies make more money the longer you are online," he says, adding that one way to do that is by algorithms that recommend other related material to keep users online longer.

What's more, he argues, modern social media company executives knew the dangers of what they were doing. In 2016, he says, they met with high government officials who told them of the dangers posed by ISIS videos, and how they were used for recruitment, propaganda, fundraising, and planning.

"The attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff . . . those government officials . . . told them exactly that," he says.

Google general counsel Halimah DeLaine Prado vehemently denies any such wrongdoing.

"We believe that there's no place for extremist content on any of our products or platforms," she says, noting that Google has "heavily invested in human review" and "smart detection technology," to "make sure that happens."

Prado acknowledges that social media companies today are nothing like the social media companies of 1996, when the interactive internet was an infant industry. But, she says, if there is to be a change in the law, that is something that should be done by Congress, not the courts.

The choice before the court

Daniel Weitzner, the founding director of the MIT Internet Policy Research Initiative, helped draft Section 230 and get it passed in 1996.

"Congress had a really clear choice in its mind," he says. "Was the internet going to be like the broadcast media that were pretty highly regulated?" Or, was it going to be like "the town square or the printing press?" Congress, he says, "chose the town square and the printing press." But, he adds, that approach is now at risk: "The Supreme court now really is in a moment where it could dramatically limit the diversity of speech that the internet enables."

There are many "strange bedfellows" among the tech company allies in this week's cases. Groups ranging from the conservative Chamber of Commerce to the civil-libertarian ACLU have filed an astonishing 48 briefs urging the court to leave the status quo in place.

But the Biden administration has a narrower position. Columbia law professor Timothy Wu summarizes the administration's position this way: "It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230."

In short, hyperlinks, grouping certain content together, and sorting through billions of pieces of data for search engines are one thing; actually recommending content that shows or urges illegal conduct is another.

If the Supreme Court were to adopt that position, it would be very threatening to the economic model of social media companies today. The tech industry says there is no easy way to distinguish between aggregating and recommending.

And it likely would mean that these companies would constantly be defending their conduct in court. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two different things. What's more, the Supreme Court has made it much more difficult to jump that hurdle. The second case the court hears this week, on Wednesday, deals with just that problem.

What makes this week's cases so remarkable is that the Supreme Court has never dealt with Section 230. The fact that the justices have agreed to hear the cases shows that they have concerns. Justice Clarence Thomas has been outspoken about his view that the law should be narrowly interpreted, meaning little protection for social media companies. Justice Samuel Alito has indicated he might agree with that. But the views of the other justices are something of a black box.

The cases are Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh.
 
I don't trust any of these articles. What is actually being challenged in Section 230?
From the petition (attached):

The question presented is:
Does Section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?
 
KF doesn't use algorithms to promote content and will be wholly unaffected by this decision. The only promotion we do (featured content) is already editorialized and not covered by Section 230, so I ask moderators to be very careful with how they word things and to be as objective as possible.

I like the idea (weakening Section 230 in where Big Tech abuses it) but dislike the specific case (they're guilty because people found extremist content online). It is possible the USSC hits both: No, Section 230 doesn't cover all abuse in algorithm manipulation. No, Google is not guilty by proxy of terrorism. The USSC are some of the finest lawyers in the country and are often very good in their decisions.
 
The central point is that Facebook, Twitter, et al. are playing legal games where they want all the protections offered under 230 for non-editors/publishers while also exercising de facto editorial control over content being posted, which should preclude them from the expansive protections.

Justice Thomas explains the issue really well, which is that the protections of 230 as written are relatively narrow (and probably reasonable). Unfortunately these protections have been expanded to cover deliberate, bad faith conduct which runs completely counter to both spirit and letter of the law. The law itself is not necessarily defective, but the enforcement and interpretation of it is deeply flawed.

A very thorough judicial review of 230 would do a world of good.

The year before Congress enacted §230, one court blurred this distinction. An early Internet company was sued for failing to take down defamatory content posted by an unidentified commenter on a message board. The company contended that it merely distributed the defamatory statement. But the company had also held itself out as a family-friendly service provider that moderated and took down offensive content. The court determined that the company’s decision to exercise editorial control over some content “render[ed] it a publisher” even for content it merely distributed. Stratton Oakmont, 1995 WL 323710, *3–*4. Taken at face value, §230(c) alters the Stratton Oakmont rule in two respects. First, §230(c)(1) indicates that an Internet provider does not become the publisher of a piece of third-party content—and thus subjected to strict liability—simply by hosting or distributing that content. Second, §230(c)(2)(A) provides an additional degree of immunity when companies take down or restrict access to objectionable content, so long as the company acts in good faith. In short, the statute suggests that if a company unknowingly leaves up illegal third-party content, it is protected from publisher liability by §230(c)(1); and if it takes down certain third-party content in good faith, it is protected by §230(c)(2)(A).

This modest understanding is a far cry from what has prevailed in court. Adopting the too-common practice of reading extra immunity into statutes where it does not belong, see Baxter v. Bracey, 590 U. S. —— (2020) (THOMAS, J., dissenting from denial of certiorari), courts have relied on policy and purpose arguments to grant sweeping protection to Internet platforms. (“[C]ourts have extended the immunity in §230 far beyond anything that plausibly could have been intended by Congress”). I address several areas of concern.
Courts have discarded the longstanding distinction between “publisher” liability and “distributor” liability. Although the text of §230(c)(1) grants immunity only from “publisher” or “speaker” liability, the first appellate court to consider the statute held that it eliminates distributor liability too—that is, §230 confers immunity even when a company distributes content that it knows is illegal. In reaching this conclusion, the court stressed that permitting distributor liability “would defeat the two primary purposes of the statute,” namely, “immuniz[ing] service providers” and encouraging “self-regulation.” Id., at 331, 334. And subsequent decisions, citing Zeran, have adopted this holding as a categorical rule across all contexts.
Courts have also departed from the most natural reading of the text by giving Internet companies immunity for their own content. Section 230(c)(1) protects a company from publisher liability only when content is “provided by another information content provider.” (Emphasis added.) Nowhere does this provision protect a company that is itself the information content provider. And an information content provider is not just the primary author or creator; it is anyone “responsible, in whole or in part, for the creation or development” of the content. §230(f)(3) (emphasis added).
But from the beginning, courts have held that §230(c)(1) protects the “exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content.” cf. id., at 332 (stating also that §230(c)(1) protects the decision to “edit”). Only later did courts wrestle with the language in §230(f)(3) suggesting providers are liable for content they help develop “in part.” To harmonize that text with the interpretation that §230(c)(1) protects “traditional editorial functions,” courts relied on policy arguments to narrowly construe §230(f)(3) to cover only substantial or material edits and additions. (“[A] central purpose of the Act was to protect from liability service providers and users who take some affirmative steps to edit the material posted”).
Under this interpretation, a company can solicit thousands of potentially defamatory statements, “selec[t] and edi[t] . . . for publication” several of those statements, add commentary, and then feature the final product prominently over other submissions—all while enjoying immunity. (interpreting “development” narrowly to “preserv[e] the broad immunity th[at §230] provides for website operators’ exercise of traditional publisher functions”). To say that editing a statement and adding commentary in this context does not “creat[e] or develo[p]” the final product, even in part, is dubious.
The decisions that broadly interpret §230(c)(1) to protect traditional publisher functions also eviscerated the narrower liability shield Congress included in the statute. Section 230(c)(2)(A) encourages companies to create content guidelines and protects those companies that “in good faith . . . restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” Taken together, both provisions in §230(c) most naturally read to protect companies when they unknowingly decline to exercise editorial functions to edit or remove third-party content, §230(c)(1), and when they decide to exercise those editorial functions in good faith, §230(c)(2)(A). But by construing §230(c)(1) to protect any decision to edit or remove content, courts have curtailed the limits Congress placed on decisions to remove content, see e-ventures Worldwide, LLC v. Google, Inc., (rejecting the interpretation that §230(c)(1) protects removal decisions because it would “swallo[w] the more specific immunity in (c)(2)”). With no limits on an Internet company’s discretion to take down material, §230 now apparently protects companies who racially discriminate in removing content. Sikhs for Justice, (concluding that “‘any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune’” under §230(c)(1)).
Paring back the sweeping immunity courts have read into §230 would not necessarily render defendants liable for online misconduct. It simply would give plaintiffs a chance to raise their claims in the first place. Plaintiffs still must prove the merits of their cases, and some claims will undoubtedly fail. Moreover, States and the Federal Government are free to update their liability laws to make them more appropriate for an Internet-driven society.
Extending §230 immunity beyond the natural reading of the text can have serious consequences. Before giving companies immunity from civil claims for “knowingly host[ing] illegal child pornography,” Bates, 2006 WL 3813758, *3, or for race discrimination, Sikhs for Justice, 697 Fed. Appx., at 526, we should be certain that is what the law demands. Without the benefit of briefing on the merits, we need not decide today the correct interpretation of §230. But in an appropriate case, it behooves us to do so.

I love Justice Thomas, what a chad.
 
It is possible the USSC hits both: No, Section 230 doesn't cover all abuse in algorithm manipulation. No, Google is not guilty by proxy of terrorism. The USSC are some of the finest lawyers in the country and are often very good in their decisions.
So basically they might rule that both parties are fags, gotcha.
 
Personally, in my own opinion, Google and Twitter will win these cases. From the looks of it, the justices don't wanna deal with the wave of lawsuits that striking down 230 would cause. They'll probably give them more protections so they don't have to deal with this again.
 
HAHAHAHAHAHAHAHAHAHA

...wait, you weren't serious were you? The law would never be applied to big corporations. They'd continue to host terrorists and cp unscathed.
It all depends on how good a lawyer you have with you.

Again as stated before I have a few on retainer.

As stated before I've been in litigation with some large corporation that lasted years until they found out I was not going anywhere and had deep pockets to afford good attorneys to be a fucking asshole to them.

As stated before because of the fucking years dealing within the court system I have seen so many strange things happen in them.

Anything is possible in a court of law and right now SCOTUS is not Liberal Media's Friend.
 

By Brian Fung and Tierney Sneed, CNN
February 21, 2023

Supreme Court justices appeared broadly concerned Tuesday about the potential unintended consequences of allowing websites to be sued for their automatic recommendations of user content, highlighting the challenges facing attorneys who want to hold Google accountable for suggesting YouTube videos created by terrorist groups.

For nearly three hours on Tuesday, the nine justices peppered attorneys representing Google, the US government and the family of Nohemi Gonzalez, an American student killed in a 2015 ISIS attack, with questions about how the court could design a ruling that exposes harmful content recommendations to liability while still protecting innocuous ones.

How – or if – the court draws that line could have significant implications for the way websites choose to rank, display and promote content to their users as they seek to avoid a litigation minefield.

The attorney for the Gonzalez family argued that narrowing Section 230 of the Communications Decency Act – the federal law protecting websites’ right to moderate their platforms as they see fit – would not lead to sweeping consequences for the internet. But both the Court’s liberals and conservatives worried about the impact of such a decision on everything from “pilaf [recipes] from Uzbekistan” to individual users of YouTube, Twitter and other social media platforms.

Justices are worried about lawsuits and disruption to the internet

A big concern of the justices seems to be the waves of lawsuits that could happen if the court rules against Google.

“Lawsuits will be nonstop,” Justice Brett Kavanaugh said at one point.

But Eric Schnapper, representing the plaintiffs, argued that a ruling for Gonzalez would not have far-reaching effects because even if websites could face new liability as a result of the ruling, most suits would likely be thrown out anyway.

“The implications are limited,” Schnapper said, “because the kinds of circumstance in which a recommendation would be actionable are limited.”

Later, Justice Elena Kagan warned that narrowing Section 230 could lead to a wave of lawsuits, even if many of them would eventually be thrown out, in a line of questioning with US Deputy Solicitor General Malcolm Stewart.

“You are creating a world of lawsuits,” Kagan said. “Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit.”

Chief Justice John Roberts mused that under a narrowed version of Section 230, terrorism-related cases might only be a small share of a much wider range of future lawsuits against websites alleging antitrust violations, discrimination, defamation and infliction of emotional distress, just to name a few.

“I wouldn’t necessarily agree with ‘there would be lots of lawsuits’ simply because there are a lot of things to sue about,” Stewart said, “but they would not be suits that have much likelihood of prevailing, especially if the court makes clear that even after there’s a recommendation, the website still can’t be treated as the publisher or speaker of the underlying third party.”

Plaintiffs struggle to draw a line between recommendations of harmful vs. non-harmful content

Multiple justices pushed Schnapper to clarify how the court should treat recommendation algorithms if the same algorithm that promotes an ISIS video to someone interested in terrorism might be just as likely to recommend a pilaf recipe to someone interested in cooking.

“I’m trying to get you to explain to us how something that is standard on YouTube for virtually anything you have an interest in, suddenly amounts to aiding and abetting [terrorism] because you’re [viewing] in the ISIS category,” Justice Clarence Thomas said.

Schnapper attempted several explanations, including at one point digressing into a hypothetical about the difference between YouTube videos and video thumbnail images, but many of the justices were lost about what he was calling for.

“I admit I’m completely confused by whatever argument you’re making at the present time,” Justice Samuel Alito said.

Roberts added: “It may be significant if the algorithm is the same across … the different subject matters, because then they don’t have a focused algorithm with respect to terrorist activities… Then it might be harder for you to say that there’s selection involved for which you can be held responsible.”
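
Roberts' point about a single algorithm running "across the different subject matters" is easier to see in code. The sketch below is hypothetical and deliberately crude, not any real platform's system: one ranking function, no branch that singles out any topic, so the same code path that surfaces a pilaf recipe also surfaces extremist material when that is what the inputs contain.

```python
# Hypothetical topic-neutral recommender: the same code path handles every topic.
from collections import Counter

def recommend(history: list[str], catalog: dict[str, str], k: int = 1) -> list[str]:
    """Return up to k catalog items matching the topic the user watches most."""
    favorite_topic = Counter(history).most_common(1)[0][0]
    return [title for title, topic in catalog.items() if topic == favorite_topic][:k]

CATALOG = {
    "Uzbek pilaf recipe": "cooking",
    "Cast-iron skillet care": "cooking",
    "Recruitment video": "extremism",
}

# Identical function, different inputs:
print(recommend(["cooking", "cooking", "travel"], CATALOG))  # ['Uzbek pilaf recipe']
print(recommend(["extremism", "extremism"], CATALOG))        # ['Recruitment video']
```

On Roberts' reasoning, the absence of any terrorism-specific logic in a function like this would cut against saying the platform "selected" the ISIS content in particular.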

Barrett raises legal threat to individual users for retweets

One of the few justices focusing on how changes to Section 230 could affect individual internet users was Justice Amy Coney Barrett, who repeatedly asked whether narrowing the law in the ways Schnapper has proposed could put average social media users in legal jeopardy.

The text of Section 230 explicitly immunizes “users,” and not just social media platforms, from liability for the content posted by third parties. So a change that exposes tech platforms to new lawsuits could also have implications for users, according to several amicus briefs.

Under Schnapper’s interpretation, could liking, retweeting or saying “check this out” expose individuals to lawsuits that they could not deflect by invoking Section 230?

Yes, Schnapper acknowledged, because “that’s content you’ve created.”

Barrett raised the issue again in a question for Justice Department lawyer Stewart. She asked: “So the logic of your position, I think, is that retweets or likes or ‘check this out’ for users, the logic of your position would be that 230 would not protect in that situation either. Correct?”

Stewart said there was a distinction between an individual user making a conscious decision to amplify content and an algorithm that makes choices on a systemic basis. But Stewart did not provide a clear answer about how he believed changes to Section 230 could affect individual users.

Specter of defamation litigation plays a big role in arguments

Tech law experts say an onslaught of defamation litigation is the real threat if Section 230’s protections are weakened, and the justices seemed to agree, posing several questions and hypotheticals that turned on defamation claims.

“People have focused on the [Antiterrorism Act], because that’s the one point that’s at issue here. But I suspect there will be many, many times more defamation suits,” Chief Justice John Roberts said, while pointing to other types of claims that also may flood the legal system if tech companies no longer had broad Section 230 immunity.

Justice Samuel Alito posed a scenario for Schnapper in which a restaurant’s competitor created a video making false claims that the restaurant violated the health code, and YouTube refused to take the video down despite knowing it was defamatory.

Kagan seized on Alito’s hypothetical later on in the hearing, asking what happens if a platform recommended the false restaurant competitor’s video and called it the greatest video of all time, but didn’t repeat anything about the content of the video.

“Is the provider on the hook for that defamation?” Kagan asked.

Dispute over Section 230’s role in the rise of the internet

Though Google’s attorney, Lisa Blatt, did not get the tough grilling that Schnapper and Stewart received, some justices hinted at some discomfort with how broadly Section 230 has been interpreted by the courts.

Justice Ketanji Brown Jackson pushed back at Blatt’s claims about Congress’ intent when it enacted the relevant provision in 1996. Blatt had claimed that Congress’ move to broadly protect tech platforms from legal liability is what got the internet off the ground.

“That the internet never would have gotten off the ground if everybody would have sued was not what Congress was concerned about at the time it enacted this statute,” Jackson said.

But a brief in the case filed by Section 230’s original authors has contradicted Jackson’s account of Section 230’s intent and purpose.

The brief, written by Oregon Democratic Sen. Ron Wyden and former California Republican Rep. Chris Cox, explained that Section 230 arose as a response to early lawsuits over how websites managed their platforms, and was intended to shelter the nascent internet.

“Congress therefore sought to encourage that evolution by enacting a technology-agnostic immunity provision that would protect Internet platforms from liability for failing to perfectly screen unlawful content,” Wyden and Cox wrote, saying that algorithmic recommendations are an example of the kind of moderation techniques Section 230 was designed to incubate.

In questions for Google’s foes, Justice Sonia Sotomayor suggested that platforms could face liability if they created a search engine that was discriminatory. She put forth an example of a dating site that wouldn’t match individuals of different races. Justice Barrett returned to the hypothetical in her questioning of Blatt.

Multiple justices – even as they were not sympathetic to the tech foes’ arguments – suggested that Google and its allies were playing Chicken Little in the stark warnings they gave the court about how a ruling against Google would transform the internet.

“Would Google collapse and the Internet be destroyed if YouTube and therefore Google were liable for posting and refusing to take down videos that it knows are defamatory and false?” Alito asked Blatt.

Blatt said Google may withstand such legal attacks but other websites – because they’re not as big as Google – would struggle to survive.

Barrett points to a potential exit ramp

Barrett referenced an exit ramp the Supreme Court has that would allow it to avoid the big legal question over the scope of Section 230.

She pointed to the tech case the court will hear Wednesday, in which the justices will consider whether an anti-terrorism law covers internet platforms for their failure to adequately remove terrorism-related content. The same law is being used by the plaintiffs to sue Google in Tuesday’s case.

“So if you lose tomorrow, do we even have to reach the Section 230 question here? Would you concede that you would lose on that ground here?” Barrett asked Schnapper.

What happens Wednesday

As Barrett mentioned, there’s another big tech case on the Supreme Court’s docket Wednesday. The case, Twitter v. Taamneh, is not about Section 230 directly. Instead, the justices are considering whether social media companies can be sued for aiding and abetting an act of international terrorism after hosting content that generally expressed support for ISIS but that did not refer to a specific act of terror.

Where Google has argued that it does not have to face such a suit because of Section 230 – triggering Tuesday’s liability debate – the Twitter case being argued Wednesday hinges more directly on the substance of the Anti-Terrorism Act.

The facts in the Twitter and Google cases are similar, even if they’re posing different legal questions. And that’s why, as Barrett suggested, a finding that Twitter is not liable under the ATA might also resolve the Google case without the need to weigh in on Section 230.

Together, the two tech cases the justices are hearing this week have the potential to transform the landscape of the internet. The Supreme Court’s rulings in the cases are expected by early summer.





Analysis by Ariane de Vogue, CNN Supreme Court Reporter
February 21, 2023

Nine justices set out Tuesday to determine what the future of the internet would look like if the Supreme Court were to narrow the scope of a law that some believe created the age of modern social media.

After nearly three hours of arguments, it was clear that the justices had no earthly idea.

That hesitancy, coupled with the fact that the justices were wading for the first time into new territory, suggests the court, in the case at hand, is not likely to issue a sweeping decision with unknown ramifications in one of the most closely watched disputes of the term.

Tech companies big and small have been following the case, fearful that the justices could reshape how the sites recommend and moderate content going forward and render websites vulnerable to dozens of lawsuits, threatening their very existence.

The case before the justices was initially brought by the family of Nohemi Gonzalez, a US student who was killed in a Paris bistro in 2015 after ISIS terrorists opened fire. Now, her family seeks to hold YouTube, a subsidiary of Google, liable for her death because of the site’s alleged promotion – through algorithms – of terrorist videos.

The family sued under a federal law called the Antiterrorism Act of 1990, which authorizes such lawsuits for injuries “by reason of an act of international terrorism.”

Lower courts dismissed the challenge, citing Section 230 of the Communications Decency Act of 1996, the law that has been used for years to provide immunity for websites from what one justice on Tuesday called a “world of lawsuits” that stem from third party content. The Gonzalez family argues that Section 230 does not protect Google from liability when it comes to targeted recommendations.

Oral arguments drifted into a maze of issues, raising concerns about trending algorithms, thumbnail pop-ups, artificial intelligence, emojis, endorsements and even Yelp restaurant reviews. But at the end of the day, the justices seemed deeply frustrated with the scope of the arguments before them and unclear of the road ahead.

A lawyer representing the plaintiffs challenging the law repeatedly failed, for instance, to offer substantial limiting principles for an argument that could trigger a deluge of lawsuits against powerful sites such as Google or Twitter or threaten the very survival of smaller sites. And some justices pushed back on the “sky is falling” attitude put forward by an advocate for Google.

On several occasions, the justices said they were confused by the arguments before them – a sign that they may find a way to dodge weighing in on the merits or send the case back to the lower courts for more deliberations. At the very least they seemed spooked enough to tread carefully.

“I’m afraid I’m completely confused by whatever argument you’re making at the present time,” Justice Samuel Alito said early on. “So I guess I’m thoroughly confused,” Justice Ketanji Brown Jackson said at another point. “I’m still confused,” Justice Clarence Thomas said halfway through arguments.

Justice Elena Kagan even suggested that Congress step in. “I mean, we’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet,” she said to laughter.

But in court, Eric Schnapper, a lawyer for the family, repeatedly pushed much broader arguments that could impact other areas of third party content.

Yet even Thomas, who has expressed reservations about the scope of Section 230 before, seemed skeptical. He sought clarification from Schnapper of how one might be able to distinguish between algorithms that “present cooking videos to people who are interested in cooking and ISIS videos to people interested in ISIS.”

Alito asked whether Google might have been simply organizing information, instead of recommending any kind of content.

“I don’t know where you’re drawing the line,” Alito said.

Chief Justice John Roberts tried to make an analogy with a bookseller. He suggested that Google recommending certain information is no different than a bookseller sending a reader to a table of books with related content.

At one point Kagan suggested that Schnapper was trying to gut the entire statute: “Does your position send us down the road such that 230 can’t mean anything at all?” she asked.

When Lisa Blatt, a lawyer for Google, stood up she warned the justices that Section 230 “created today’s internet” because “Congress made that choice to stop lawsuits from stifling the internet in its infancy.”

“Exposing websites to liability for implicitly recommending third-party content defies the text [of 230] and threatens today’s internet,” she added.

In the end, Schnapper seemed to speak for the court when he said that “it’s hard to do this in the abstract.”
 


The U.S. Supreme Court, hearing a case that could reshape the internet, considered on Tuesday whether Google bears liability for user-generated content when its algorithms recommend videos to users.

In the case, Gonzalez v. Google, the family of a terrorist attack victim contends that YouTube violated the federal Anti-Terrorism Act because its algorithm recommended ISIS videos to users, helping to spread their message. Nohemi Gonzalez was an American student killed in a 2015 ISIS attack in Paris, and her family’s lawsuit challenges the broad legal immunity that tech platforms enjoy for third party content posted on their sites.

Section 230 of the Communications Decency Act, passed in 1996, protects platforms from legal action over user-generated content, and it also protects them if they choose to remove content. Section 230 has withstood court challenges for nearly three decades even as the internet exploded.

The attorney for Gonzalez’s family claimed that YouTube’s recommendations fall outside the scope of Section 230, as it is the algorithms, not the third party, that actively pick and choose where and how to present content. In this case, the attorney said, it enhanced the ISIS message.

“Third parties that post on YouTube don’t direct their videos to specific users,” said the Gonzalez family’s attorney, Eric Schnapper. Instead, he said, those are choices made by the platform.

Justice Neil Gorsuch said he was “not sure any algorithm is neutral. Most these days are designed to maximize profit.”

He and justices on the right and left all acknowledged the importance of the case, but also said they found it confusing (most used that exact word) and would prefer that Congress, which wrote the law, be the one to address changing it.

Justice Elena Kagan said all other sectors, including publishers, have rules, and wondered why the internet gets “a pass.” But, she added, “We are a court. We really don’t know about these things. We are not the nine biggest experts on the internet. Isn’t this a case for Congress, not the court?”

Congress has held hearings and repeatedly made noises about Section 230, which has become increasingly controversial as platforms and their power to influence society have grown exponentially. Even though there have been calls to alter or eliminate Section 230, legislation has gone nowhere.


Internet firms swear that removing or limiting 230 protections would destroy the medium.

Would it? Justice Samuel Alito asked Google’s attorney Lisa Blatt: “Would Google collapse and the internet be destroyed if Google were prevented from posting what it knows is defamatory?”

“Not Google,” she said, but other, smaller websites, yes.

She said if the plaintiffs were victorious, the internet would become a zone of extremes – either The Truman Show, where things are moderated into nothing, or like “a horror show,” where nothing is.

Blatt said some kind of curation and targeting has been intrinsic to the internet since its early days in the 1990s when people first started signing up for various subject-specific chat groups. Even then, “the internet was a mess. You had to organize it because it was massive.” Amazon has been targeting for years, she said, telling e-shoppers “if you bought this you might also like that.”

European regulators have shown that it is possible to regulate the internet to some extent. Congress also has carved out exceptions. In 2018, it passed a law removing immunity from internet platforms for content dealing with sex trafficking. That content swiftly disappeared, and the web is still standing.

Justice Ketanji Brown Jackson got into it with Blatt about the Section 230 “good Samaritan” provision that shields internet providers from lawsuits if they remove offensive content. “Doesn’t that suggest Congress wanted internet companies to block offensive content? … The statute is like, ‘We want you to take these things down.’”

“I think a lot of things are offensive that other people think are entertainment,” said Blatt.

SCOTUS is set to hear a separate but similar case involving Twitter on Wednesday.
 
KF doesn't use algorithms to promote content
They aren't trying to do away with 230 but they want a specific exception carved out when it comes to recommending content.

It's not only algorithms, it's the owner's promotion of content. It's why Craigslist did away with personals entirely. I know it's an NPR link but this was not widely covered.

There's not an obvious link to KF. But something like the highlighted tweets on the main page could be seen as you, personally, "publishing" by "recommending" content.

It stemmed from parents whose kid was killed by ISIS, and they argue that YouTube's algorithm promoted ISIS training videos and that this eventually led to their kid's death.

I hope they lose. It's an ugly case. You can read all of the filings here. There are a LOT of them but start with the writ of certiorari which lays out the specific question in law.
 
Funny how posting anything that goes against the current narrative gets your content shadow-banned, delisted or just outright removed within minutes on any of the major platforms. But ISIS and other terrorist organizations are allowed to freely advertise and recruit on sites like facebook and twitter with impunity.
 
They aren't trying to do away with 230 but they want a specific exception carved out when it comes to recommending content.

It's not only algorithms, it's the owner's promotion of content. It's why Craigslist did away with personals entirely. I know it's an NPR link but this was not widely covered.

There's not an obvious link to KF. But something like the highlighted tweets on the main page could be seen as you, personally, "publishing" by "recommending" content.

It stemmed from parents whose kid was killed by ISIS, and they argue that YouTube's algorithm promoted ISIS training videos and that this eventually led to their kid's death.

I hope they lose. It's an ugly case. You can read all of the filings here. There are a LOT of them but start with the writ of certiorari which lays out the specific question in law.
If someone wanted to be super aggressive about the interpretation, the highlight system KF uses may count as algorithmically promoting (and so depending on the ruling, being counted as publishing) the content.
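
For what it's worth, under that aggressive reading even a trivial, human-seeded picker counts as an "algorithm" choosing what to promote. A hypothetical sketch follows; this is not KF's actual code, and the field names are invented for illustration:

```python
# Hypothetical "featured content" picker: staff flag posts, code orders and trims them.
def pick_highlights(posts: list[dict], max_slots: int = 3) -> list[dict]:
    """Surface staff-flagged posts, most-reacted first."""
    flagged = [p for p in posts if p["staff_flagged"]]
    flagged.sort(key=lambda p: p["reactions"], reverse=True)
    return flagged[:max_slots]

posts = [
    {"title": "Thread A", "staff_flagged": True, "reactions": 40},
    {"title": "Thread B", "staff_flagged": False, "reactions": 90},
    {"title": "Thread C", "staff_flagged": True, "reactions": 12},
]
print([p["title"] for p in pick_highlights(posts)])  # ['Thread A', 'Thread C']
```

The sort-and-trim step is the part an aggressive plaintiff would call "recommending"; the staff flag is the part that's already editorialized and outside 230 anyway.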
 
They aren't trying to do away with 230 but they want a specific exception carved out when it comes to recommending content.

It's not only algorithms, it's the owner's promotion of content. It's why Craigslist did away with personals entirely. I know it's an NPR link but this was not widely covered.

There's not an obvious link to KF. But something like the highlighted tweets on the main page could be seen as you, personally, "publishing" by "recommending" content.

It stemmed from parents whose kid was killed by ISIS, and they argue that YouTube's algorithm promoted ISIS training videos and that this eventually led to their kid's death.

I hope they lose. It's an ugly case. You can read all of the filings here. There are a LOT of them but start with the writ of certiorari which lays out the specific question in law.
So in my understanding, that means that they want Big Tech to stop hiding behind 230 if they, in any way, try to "promote" certain content at you, correct?

Honestly, I'd be fine with that. Especially if that means targeted ads die with it. Or at the very least, the disingenuous "we thought you might like this" (((recommended))) content that is just ads in disguise.
 