TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond the original intent when Congress passed the CDA in 1996, which was not to "create a lawless no-man's land" of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, which concerned content moderation laws passed in Florida and Texas, SCOTUS held that a platform's algorithmic feed reflects editorial judgments: a compilation of third-party speech made in the manner the platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: the opinion is precedential.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


Choosing which threads get to stay (i.e. who qualifies as a lolcow) is curation, and any weakening of Section 230 will open the farms up to lawsuits on this basis.
Sure, but there's a point to be made that these algorithmic curation services even call themselves recommendations. Compared to just leaving stuff up for people to peruse, that raises the question of how the different types of curation relate to the opinions of the site owner, and to what extent.

This all seems like stuff lawyers need to drink on camera and argue in court over.
 
Sure, but there's a point to be made that these algorithmic curation services even call themselves recommendations. Compared to just leaving stuff up for people to peruse, that raises the question of how the different types of curation relate to the opinions of the site owner, and to what extent.

This all seems like stuff lawyers need to drink on camera and argue in court over.
The saving grace that Section 230 offered sites like the farms was that that argument never needed to happen to begin with. It was blanket immunity: not just for sites that didn't practice curation, but a rule that no site was considered a publisher, period.

Even if the farms could possibly win 99% of those arguments, the fact that the arguments might happen at all would bankrupt the site many times over.

The site is struggling financially just with ordinary copyright nonsense, let alone the defamation claims this will open up.
 
parent only started caring about the kid once they died, i guess?
I don't see how Moody v NetChoice applies here. That case seems to be about intentionally weighting algorithms to effectively delist specific political opinions and push others to the top.
This is a case of "this user likes retarded slop with these tags," not someone at TikTok deciding to send this kid suicide instructions like the retard judge framed it.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'"

The family also doesn't seem interested in going after the user who actually made the content. Just another case of negligent nigs trying to cash in on their daughter's corpse. Need dat weave money.
 
The saving grace that Section 230 offered sites like the farms was that that argument never needed to happen to begin with. It was blanket immunity: not just for sites that didn't practice curation, but a rule that no site was considered a publisher, period.

Even if the farms could possibly win 99% of those arguments, the fact that the arguments might happen at all would bankrupt the site many times over.

The site is struggling financially just with ordinary copyright nonsense, let alone the defamation claims this will open up.
Well, in any event, it's not a theoretical situation anymore, so it's still down to lawyers hashing this out, is it not? I had no opinion on the topic other than "this was inevitable" prior to entering this thread, which has changed to "the inevitable happened, I hope some good comes of it instead of a fucking loophole that leaves only YouTube, Meta, and Twitter intact".
 
Well, in any event, it's not a theoretical situation anymore, so it's still down to lawyers hashing this out, is it not? I had no opinion on the topic other than "this was inevitable" prior to entering this thread, which has changed to "the inevitable happened, I hope some good comes of it instead of a fucking loophole that leaves only YouTube, Meta, and Twitter intact".
The text of Section 230 precludes this entirely. Frankly, this seems like judicial activism, just outright rewriting laws.

If congress repealed 230, I'm sure silicon valley lobbyists would work in exceptions for their businesses, exceptions that would make running a small tech business basically impossible.

If this decision isn't shut down at some point, we'll get effectively that same result, except without even the formality of having gone through congress to mint a new law first.
 
Section 230 needs to be abolished TBH. Big Tech keeps latching on to it when it's a bullshit law. The only reason it doesn't apply to TikTok here is because it's a Chinese company, unlike Facebook, which is an American company and has gotten away with worse.
getting rid of section 230 means the destruction of the internet as we know it. every site that relies on user-generated content would be sued into oblivion or have to censor its users to such a degree that they'd become useless to literally everyone.

maybe you think that's a good thing (heaven knows tranny shit and a lot of social contagions wouldn't have taken off like they have without it) but just know at minimum it would result in pretty much 90% of websites getting wiped and at max an actual state-run internet.
 
I've talked about this for years and I'm very aggressively pro-230.

I know it's massively faggy that Facebook et al get to just boot controversial opinions from their service. But in my eternal pessimism about tech and legislation, I don't trust our system to set up rules that enable us to impose rules on Facebook without giving a cause of action to outright bankrupt the farms with litigation.

I believe far more in the free market of ideas, and I'd much rather put legal restrictions on VPS providers and other server and network hosts for discrimination than let people sue Facebook (and by extension the farms) for deleting threads.

It should be made cheaper to compete with Facebook, not bend Facebook to our will. Because we'll lose the latter. They'll flood congress with their representatives and get their way.
 
Choosing which threads get to stay (i.e. who qualifies as a lolcow) is curation, and any weakening of Section 230 will open the farms up to lawsuits on this basis.
Will it? If users post something, that's fine, so why would users rating/promoting/whatever things suddenly be an issue? How is a user posting in community happenings any different from them posting on any other board?

I thought that the consensus, until fairly recently, was that algorithmic recommendations weren't speech; and because they weren't speech, they were subject to regulation. Now that they're considered protected speech, I think you're correct.
seeing as how google fucks with political searches to influence elections, why wouldn't/shouldn't we?
I've always thought that any fuckery with a given algorithm by a company to promote something that isn't an objective metric (I'm phrasing this badly, but tl;dr: optimizing for watch time: GOOD; trying to influence people/elections: BAD) should've qualified as editorializing. Give me your rainbows, but I hope this leads to a selection of algorithms that get tested in court and become "canonically 230-immunity-preserving algorithms". Then Faceberg has to pick between "completely random", "chronological order", or "user subscribed only", lest they become responsible for everything on their platform.
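As a toy illustration of that menu (nothing any platform actually runs; Post and all of its fields are invented for the example), here's roughly what the three "immunity-preserving" feeds look like next to an engagement-optimizing ranker:

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float             # seconds since epoch
    predicted_watch_time: float  # model score; only the ranker uses it

def feed_random(posts: list[Post]) -> list[Post]:
    # Content-agnostic: a shuffle makes no judgment about any post.
    return random.sample(posts, len(posts))

def feed_chronological(posts: list[Post]) -> list[Post]:
    # Content-agnostic: newest first, an objective metric.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def feed_subscribed_only(posts: list[Post], follows: set[str]) -> list[Post]:
    # User-driven: the user's own follow list does the selecting.
    return [p for p in feed_chronological(posts) if p.author in follows]

def feed_ranked(posts: list[Post]) -> list[Post]:
    # The contested case: the platform's model decides what is
    # "likely to be of interest", arguably an editorial judgment.
    return sorted(posts, key=lambda p: p.predicted_watch_time, reverse=True)
```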
 
I've always thought that any fuckery with a given algorithm by a company to promote something that isn't an objective metric (I'm phrasing this badly, but tl;dr: optimizing for watch time: GOOD; trying to influence people/elections: BAD) should've qualified as editorializing. Give me your rainbows, but I hope this leads to a selection of algorithms that get tested in court and become "canonically 230-immunity-preserving algorithms". Then Faceberg has to pick between "completely random", "chronological order", or "user subscribed only", lest they become responsible for everything on their platform.
Tbh that's how I read the decision. The issue isn't the video, but that TikTok pushed it in front of the kid. It's widely known that TikTok interferes with what you see, using their 'heating' tool to make things artificially popular and to amplify trends or creators they consider desirable. That means it stopped being just hosting third-party content and started being TikTok's editorialized content. This was bolstered by the other case listed, where the algorithmic timeline of content was considered 'speech' of the platform: not just hosting third-party content, but specifically picking and choosing what to show based on what the platform desires, and so not eligible for Section 230 protection.

Put those together and I can see the argument being "you pick what is popular on your platform by interfering via heating; your algorithm then, after your interference, pushed dangerous content in front of the kid, who died trying to imitate it; pushing specific content at people doesn't fall under moderation, so no Section 230 protection for you".

It could be the first step in de-enshittifying normie social media, and if that leads to the death of all those shitty algos pushing DEI shit because janny trannies lobotomize them to push that garbage instead of what people want, I don't see that as a bad thing lol
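For what "heating" means mechanically: as reported, it's staff hand-picking videos for guaranteed impressions. A minimal sketch of that kind of override layered on top of an organic ranking; the function, the 30% slot share, and the IDs are all invented for illustration:

```python
def build_feed(organic_ranking: list[str], heated_ids: list[str],
               feed_size: int = 10, heated_share: float = 0.3) -> list[str]:
    # Hand-picked "heated" videos take a fixed share of the feed...
    n_heated = int(feed_size * heated_share)
    feed = heated_ids[:n_heated]
    # ...displacing whatever the organic signals would have surfaced.
    feed += [v for v in organic_ranking if v not in feed][:feed_size - len(feed)]
    return feed

# Three editorial picks jump ahead of twenty organically ranked videos.
print(build_feed([f"v{i}" for i in range(20)], ["promo_a", "promo_b", "promo_c"]))
```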
 
It was blanket immunity: not just for sites that didn't practice curation, but a rule that no site was considered a publisher, period.
Not so. It was an immunity from third-party speech being held as first-party speech. Creating algorithms that decide what content gets promoted or deleted, and that otherwise editorialize the content, has been held by SCOTUS to be first-party speech. It follows that this is not covered by 230's liability immunity for third-party content.

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." - 47 U.S. Code § 230

"The Fifth Circuit was wrong in concluding that Texas’s restrictions on the platforms’ selection, ordering, and labeling of third-party posts do not interfere with expression. And the court was wrong to treat as valid Texas’s interest in changing the content of the platforms’ feeds. [...] But for purposes of today’s cases, the takeaway of Turner is this holding: A private party’s collection of third-party content into a single speech product (the operators’ “repertoire” of programming) is itself expressive, and intrusion into that activity must be specially justified under the First Amendment. [...] Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product." Moody v. NetChoice, LLC 603 U. S. ____ (2024) (algorithms being central to its opinion regarding third-party speech)

You may disagree with the Supreme Court's holding (and I think there's plenty of reason to do so, and some reasons not to), but the Third Circuit's holding is a natural consequence of the change in speech doctrine that the Supreme Court forced upon the lower courts. The stark difference was even noticed by the judge:
[screenshots of the relevant passages from the opinion]
 
Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own
Even this isn't that clear-cut, because it brings up the question of who (the platform or the user) is doing the editorialization/curation.

If I follow someone, and their posts show up on my feed, is that me, or the platform doing the curation?
What about when their friend's comments and posts also show up?
What about when posts similar to those I've interacted with (liked, replied to, spent extended time viewing) showed up?
If the site sorts posts by time, is it doing curation? What about by views? Or by views among those in the same region/group as the user?

If a post is manually set to show up for everyone, like on the front page of a news site, that's clearly curation on the website's part.
However, if it's an automatic system/algorithm that's content-agnostic, reacting based on the input of the users, wouldn't it be the users, not the site, doing the curation?
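One way to frame that question as a hypothetical (every field name and weight here is invented): the first scorer reacts only to the user's own actions, and the second is identical except for a per-topic multiplier the platform chose. If both feeds count as "expressive activity", the label stops tracking who actually made the choices:

```python
def user_driven_score(post: dict, user: dict) -> float:
    # Reacts only to this user's own inputs: who they follow, what they liked.
    score = 0.0
    if post["author"] in user["follows"]:
        score += 2.0
    score += len(set(post["tags"]) & set(user["liked_tags"]))
    return score

def platform_weighted_score(post: dict, user: dict,
                            platform_boosts: dict[str, float]) -> float:
    # Same user signals, times a per-topic multiplier the platform picked.
    # The question above: is it only this multiplier that makes the feed
    # the site's curation rather than the user's?
    return user_driven_score(post, user) * platform_boosts.get(post["topic"], 1.0)
```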
 
Even this isn't that clear cut
Yeah, I agree. SCOTUS made a broad stroke, which is not uncommon, but not super helpful here. Whether algorithms are speech or not is, in my opinion, a more nuanced issue that should be determined on a case-by-case basis rather than broadly.

Algorithms designed to promote something would likely be speech. However, I'd argue that algorithms that just automatically delete content based on a ruleset wouldn't be (though the ruleset itself may be speech). There's also the issue of algorithms that append comments. Would they fall into category one or two? Like, say you make a comment that the 2020 election was stolen, and X appends the wiki article to your comment. Would it be just an automatic enforcement of the rules, or, by virtue of the rule implying a disagreement with your view, would that transform it into speech? There's a lot to be considered when deciding this question, like the intent behind the algorithm, what it accomplished, and how it did so. There could be cases where differing algorithms carry different liabilities (or lack thereof).
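As a toy version of those two categories (the rules, patterns, and note text are all made up): the first function mechanically deletes on a ruleset; the second appends a note whose text arguably expresses a viewpoint:

```python
import re

# Invented examples of a deletion ruleset and an annotation ruleset.
BANNED = [re.compile(r"(?i)blackout\s+challenge")]
NOTE_RULES = {
    re.compile(r"(?i)election was stolen"):
        "Context: see the wiki article on the 2020 election.",
}

def auto_delete(comment: str) -> str | None:
    # Mechanical enforcement of a ruleset: the ruleset may be speech,
    # but the act here is just removal.
    if any(rx.search(comment) for rx in BANNED):
        return None
    return comment

def auto_annotate(comment: str) -> str:
    # The harder case: the appended text implies disagreement, which may
    # make applying the rule first-party speech rather than moderation.
    for rx, note in NOTE_RULES.items():
        if rx.search(comment):
            return f"{comment}\n[{note}]"
    return comment
```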

SCOTUS, somewhat foolishly, decided to overlook important questions and made a generalized ruling that the lower courts will have to struggle to implement. If all algorithms that take third-party content and curate it are speech, then weird liabilities arise. Consider, for example: would TikTok be liable for videos outside its recommended tab? Would Google be liable for its search results? In both cases it would appear that the answer is yes, where I would argue the answer should be no.

If I follow someone, and their posts show up on my feed, is that me, or the platform doing the curation?
SCOTUS actually decided to ignore that question
[screenshot of the passage where the opinion sets that question aside]
Which is a big fucking question to ignore.
 
Someone go dig up Archie so Holly can give him a big celebratory kiss.
That’s who I thought this might have been at first, poor kid.
Poor kid before and after he died.

This is stupid, but I can see the parents' thought process of denial.

These people have their very young kids commit suicide and can't handle that (understandably), so the story becomes that their kids were imitating TikTok and it went wrong.
First the parents have to accept the circumstances; suing TikTok means they never will.

The breathing game/blackout challenge existed for decades before TikTok.
Plus, you play it with another person; that's the whole point of games lol.
These poor children weren’t playing a game, they were done.
 
unless the tiktok algorithm is intentionally selecting dangerous stunts, then where would courts stop with the logic?
But TikTok is actively selecting and promoting videos they are certain you will watch. It's different from when you search up the video yourself. It wouldn't be the first time this theory was used to establish liability; New York recently entertained a similar theory.
 