US TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond Congress's original intent when it passed the CDA in 1996, which was not to "create a lawless no-man's land" of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, which concerned content moderation laws passed in Florida and Texas, SCOTUS held that a platform's algorithm reflects editorial judgments: it is a compilation of third-party speech made in the manner the platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: the opinion is precedential.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


Isn’t this effectively the same thing that happened to Archie Battersbee? He did some retarded online challenge and wound up brain dead?

@Useful_Mistake you are all wise and all knowing, how close am I?

I don’t know if Millennials are just retards, but for fuck’s sake, my parents still told me that getting so close to screens would hurt my eyes. They’d actively discourage me from playing videogames too often. I watched a crap ton of Animal Planet and Nat Geo, but they were still aware of what I was watching.
 
I don't see a legal theory under which Google is liable
Unlike other websites, Google can be freely used by minors.

I just don’t understand why the early Internet didn’t clamp down on underage users. It was already degenerate in the 1990s. Now the cat is long out of the bag.
 
YouTube once recommended me a nihilistic, pro-suicide (and even pro-homicide) philosophy video; I suppose it'll get banned soon.

X and Reddit will probably go down as well.

Honestly it seems to me that with Durov's arrest, Brazil's X ban, and now this, the world government wants each country to eventually have its own government-curated, isolated internet section, where all social media and forums will be regulated and can't be used for foreign influence.
 
Oh boy, I can't believe I get to witness both the rise and complete destruction of humanity's greatest invention, all in a single lifetime.
People better get used to using Tor, because clearnet is fucked.

Also, with The Internet Archive losing its appeal, you boys better buckle up.
 
This case isn't about deleting content. It is about the social media companies intentionally driving views to some content over other content, which is editorializing.

Contrast that with Kiwi Farms, which presents the same experience to everyone. The customized feeds are user-created. The site does not take a stance on which content should be presented; it is a repository of content.
So, to give context, the post you're quoting is where I'm criticizing people generally arguing that section 230 should be repealed or weakened in general. I wasn't focusing too much on this case specifically.

However to your point, KF does drive views to some content over other content, even if simply just by retaining certain content over other content. In this particular case, what practical difference is there between deleting some threads that aren't deemed to be lolcows (what KF does) and keeping the content but just lowering it in the search results (what this case is about)? It's editorializing regardless.

I personally don't see a difference. A site saying "I want to show our users X but not Y", and doing it by shadowbanning (tiktok) versus deleting the thread (KF) doesn't seem like a big difference.

But more importantly, the courts won't see a difference, or they won't see a difference quickly enough to permit immediate dismissal of a mess of expensive lawsuits against the farms.
Every person is both civilly and criminally liable for their own speech. And companies are legally people. They aren't being punished for violating people's free speech rights, but for the company's own independent speech.
The argument will be that running a site with a specific focus (say lolcows) and curating that list of permitted threads, is the site making a statement about the individuals and the accuracy of the comments made in the OPs and throughout the threads.
No.

Allowing something to stay up is very different than choosing to push it to the top of someone's page, basically forcing them to engage with it.
It's all curation. It's all editorialization. They're going to argue that it's equivalent to running a newspaper and deciding what articles get to appear in the final printed copy.

That's asinine in the context of discussion forums, but it's what they'll argue. And most importantly, it's what they'll argue and demand the question be settled in a very expensive courtroom.

The value of section 230 was that you never even had to go to a courtroom to answer the question. Websites simply weren't publishers of their users' content. No matter what content they decided to delete or keep or promote or whatever. Simple as.
 
Is this good or bad?
It's godawfully fucked.

It literally disregards every previous piece of case law about 230 and substitutes its own reality. This is exactly the kind of behavior it was intended to protect. Now, Google could conceivably be held liable because their algorithm recommends a Judas Priest song to someone who listens to a lot of Judas Priest and then commits suicide.
Big tech companies (especially tiktok) and their algorithms are so hated that they can be used as a handy excuse to destroy free speech on the internet as a whole.
This is why defending freedom of speech almost always involves defending scoundrels engaging in reprehensible behavior, because outlawing it would also outlaw speech of value. They always start with the scoundrels first.
How is this destroying free speech on the Internet? I think it’s very intuitive that if Facebook purposefully curates your feed to show only certain things, then that’s the platform itself speaking. And the platform being regulated for its own speech is different than a platform being regulated for the speech of its users.
Your naive faith in the good intentions of the government is charming but misplaced.
 
What's especially retarded about this ruling is if a machine algorithm's behavior is ruled as protected speech
It is not the algorithm itself that is endowed with speech; it is the act of curating content (even if by proxy, through an algorithm) that is considered protected speech, completely in line with the existing legal philosophy of corporations being (legal) persons. You are correct in assessing that no court could rule on the intention of an algorithm or machine, as it is not a person. This all goes back to IBM's old adage from way back in '79:
A computer can never be held accountable, therefore a computer must never make management decisions

What's insane is the number of people outraged that this could mean the satanic algorithms from hell that constantly drip-feed them mind viruses would be shut off. Well, that's a bit of hyperbole; nonetheless, I said this once and I'll say it again:
The final act of the ritual is when we all get on our knees to worship at the altar of our bruises.
Just discover things by word of mouth, or look up things that you want to see. If implemented as thoroughly as possible, this would remove the possibility of users being passive content consumers; they would instead need to seek out the things they want to see, which I don't see as a bad change.
 
And here I thought you were Matthew Hardin.
I am capable of holding down multiple roles. I believe I was once accused of being Tony Robbins and Alan Dershowitz.
Isn’t this effectively the same thing that happened to Archie Battersbee? He did some retarded online challenge and wound up brain dead?

@Useful_Mistake you are all wise and all knowing, how close am I?
I will not speak about British Incidents or British cases/caselaw as I know next to nothing about it, sorry.
Unlike other websites, Google can be freely used by minors.
Even still.
However to your point, KF does drive views to some content over other content, even if simply just by retaining certain content over other content. In this particular case, what practical difference is there between deleting some threads that aren't deemed to be lolcows (what KF does) and keeping the content but just lowering it in the search results (what this case is about)? It's editorializing regardless.
I think there is a significant difference between deleting something that doesn't meet a certain standard of quality and deboosting (or boosting) content that you do not like. The former might be editorializing (something I would not agree with), but the latter definitely is editorializing.
The argument will be that running a site with a specific focus (say lolcows) and curating that list of permitted threads, is the site making a statement about the individuals and the accuracy of the comments made in the OPs and throughout the threads
There is no way that argument would work though. It's something our enemies would certainly try, but not something that would work out. Hell, Big Tech would probably come to our aid for the chance to not be held liable for literally everything everywhere.
Now, Google could conceivably be held liable because their algorithm recommends a Judas Priest song to someone who listens to a lot of Judas Priest and then commits suicide.
Google could not conceivably be held liable for that.
 
I think there is a significant difference between deleting something that doesn't meet a certain standard of quality and deboosting (or boosting) content that you do not like. The former might be editorializing (something I would not agree with), but the latter definitely is editorializing.
Algorithms don't like or dislike anything. Where is there even any allegation that TikTok boosted suicide-encouraging content because they personally approve or disapprove of it?

Section 230 was specifically drafted to protect activity like this. There's nothing functionally distinct about an algorithm deboosting things like "hate speech" or "pornography" or even "just stuff I don't like," since by definition that boosts content that doesn't meet the negative definition.

What actually usable test is articulated here? All it serves to do is cast the jurisprudence into complete chaos.

What are you supposed to not do to get the safe harbor of 230 in the Third Circuit? Why isn't Null featuring an article now protected under 230? Because if it isn't, we're fucked, and so is everyone else.
Google could not conceivably be held liable for that.
And they should not even have to stand trial for it, as Judas Priest themselves did. 230 is not an affirmative defense. It is a complete immunity even to stand trial. Now that immunity has an enormous carveout that is completely nonsensical and effectively dilutes the immunity to extinction.
 
Exactly. I'm trying to figure out why so many people here seem to think this is a win or something.

How can you have any algorithm if you are liable for the content within? The home page is an algorithm, the search results are an algorithm, ratings sort is an algorithm, autoplay is an algorithm... Do you expect that Google will shutter and gimp half their products? Do you people think TikTok actually boosted 'blackout' content, or just failed to censor it? How is this anything but a ruling that undermines 230 in favor of stringent censorship/content vetting?
Presumably if you subscribe to a particular feed and no magic curation happens, then you should be covered. Sure, it would hurt discovery. But I'd personally rather those companies stop editorializing and controlling the narratives on their platforms so much, as that isn't good for free speech either.

Honestly, hit me with rainbows, but my hope is that the preferred alternative becomes federated (e.g., RSS) content aggregation with little curation by platforms, plus informal discovery hubs: something like your favorite microblog citing someone else's and leading you there, or reading something here and checking out the source.
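For the curious, here's a minimal sketch of what that low-curation, federated aggregation could look like. It uses the third-party Python feedparser library and placeholder feed URLs (an illustration of the idea, not any platform's actual design): entries from several RSS feeds are merged strictly by publication date, with no engagement signals and no per-user ranking.

```python
import time
import feedparser  # third-party: pip install feedparser

# Placeholder feed URLs, purely for illustration.
FEEDS = [
    "https://example.com/a.rss",
    "https://example.org/b.rss",
]

def merged_timeline(feed_urls):
    """Merge feeds strictly by publication date: no boosting, no personalization."""
    entries = []
    for url in feed_urls:
        entries.extend(feedparser.parse(url).entries)
    # published_parsed is a time.struct_time; fall back to the epoch if missing.
    entries.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0),
                 reverse=True)
    return [(e.get("title", "(untitled)"), e.get("link", "")) for e in entries]

if __name__ == "__main__":
    for title, link in merged_timeline(FEEDS)[:20]:
        print(title, "-", link)
```

The only editorial choice left in a design like this is which feeds you subscribed to, which is exactly the user-created customization distinguished from platform curation earlier in the thread.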
 
I think there is a significant difference between deleting something that doesn't meet a certain standard of quality and deboosting (or boosting) content that you do not like. The former might be editorializing (something I would not agree with), but the latter definitely is editorializing.
I think "standard of quality" is a good description for curating a website about a specific topic, like say guns, and deleting off topic posts.

But with some topics, like, say, internet gossip, curation will get inherently political and contentious. I do think there's a significant judgment call in deciding who is or isn't a lolcow. I wouldn't say it's unfair to say that deleting lolcow threads involves a bit of editorialization.

And to his credit, I think Null has been very neutral and impersonal with this. I don't think he's actually been very editorial in his decisions. But the argument is still there.
There is no way that argument would work though. It's something our enemies would certainly try, but not something that would work out. Hell, Big Tech would probably come to our aid for the chance to not be held liable for literally everything everywhere.
I agree that it's a bit extreme to argue retaining a thread means endorsing everything said inside.

But I think a weaker version of that argument would have legs long enough to be a permanent headache for the farms. So if some whackjob posts absurd shit in a thread and their post doesn't get deleted quickly enough, that'll be a point brought up in a lawsuit. If someone posts something else rumored but dubious, that'll be a point brought up in a lawsuit. "well last time someone posted some goofy gossip, you correctly identified it as such and deleted it, demonstrating you're capable of it; but you decided to let this post stand??"

I do believe Null has deleted stuff in threads before that was clearly some butthurt lolcow in their own right trying to plant goss. And deleting those posts, I'm totally fine with. Except that sort of editorial decision will be argued about.

Basically my issue is that literally every admin decision will be on the table to drag into court over, if section 230 gets savaged badly enough.
 
But I think a weaker version of that argument would have legs long enough to be a permanent headache for the farms. So if some whackjob posts absurd shit in a thread and their post doesn't get deleted quickly enough, that'll be a point brought up in a lawsuit. If someone posts something else rumored but dubious, that'll be a point brought up in a lawsuit. "well last time someone posted some goofy gossip, you correctly identified it as such and deleted it, demonstrating you're capable of it; but you decided to let this post stand??"
This is exactly the argument Congress rejected in enacting 230, which at the time was specifically discussed in the Congressional debates as directly aimed at getting rid of the pernicious Stratton Oakmont doctrine, which Congress perceived as an imminent disaster for the then-nascent global Internet.

The Third Circuit has resuscitated this shit from the Pet Sematary and it needs to be put back underground.
@Useful_Mistake

No, I mean Archie also died due to the blackout challenge on TikTok. @Null am I right? Didn’t Archie also strangle himself?
It's complete bullshit and there's no evidence of it. Archie's parents more or less made that shit up.
Basically my issue is that literally every admin decision will be on the table to drag into court over, if section 230 gets savaged badly enough.
I am astounded that here, of all places, people will argue that diminishing and degrading the First Amendment is a good thing in any way, shape, or form.
 
Unlike other websites, Google can be freely used by minors.

I just don’t understand why the early Internet didn’t clamp down on underage users. It was already degenerate in the 1990s. Now the cat is long out of the bag.
More groomed minors means more guaranteed lifelong customers of the medical industry, an industry which already has large overlaps with the tech industry due to all the bioinformatics stuff.
 
What are the bets on real ID for social media? Guardrails to detect and restrict the under-18s?

My guesses:
* A month of setting up and seething over having to do real ID.
* If social media pushes for Supreme Court review, a year; but you'll know it's over if the tune changes about using real ID.
* Betting on chat restrictions and video restrictions for new accounts of under-18s.
...

My only disagreement is the China TikTok meme:
[Two embedded videos: the first on US vs Chinese TikTok content standards, the second on differences between Chinese and US TikTok.]
Opinions like those expressed in the 1st video make me chuckle, as they boil down to clueless USians griping that we don't have a fascist government promoting healthy lifestyles and trends to our people. Meanwhile we're getting raped every day by Kamala Klones staffing nigh every office of a government that's Great Replacing us, but we're so aloof that all we can muster while we're slowly boiled is "man, I wish it was still the '90s" :story: .

The 2nd video I find more useful, as it goes over some of the differences between Chinese and USian TikTok. It appears China, while on paper poorer, is more advanced than the US in Internet of Things use with their phones. I'm not well traveled, but I'd say the US is in the middle of that kind of IoT integration, between Europe, which still uses a lot of physical currency, and Asia/Russia (Moscow), where anecdotally IoT integration seems more widespread.
 
>Banks are being targeted with fraud made popular by tiktok
>Laws bend to hold tiktok liable for recommended videos

Thanks for reading my schizopost
Giving credit to this prior.
Also, legally speaking, I think we live in opposite-day clown land, and there never should have been a completely turnkey-operation internet. Everything being an Ouroboros of third-party freelancer scripts sushi-conveyor-belting the contents of the online world around to save a few bucks on staffing should never have become the norm.

Everything being "the algorithm's" fault, so megacorps get to be blameless because one jeet-coded autobot script is doing exactly the thing you wanted and your hands are off the smoking gun when you mass censor and control all information, versus "the ICKY SMALLER WEBSITE" that is curated by actual people and is therefore bad wrongthink and doesn't get the same "well, it was the algowithym :(" excuse, legally feels like horseshit.

But simultaneously you can't trust the government to regulate shit nor the regulators to regulate themselves so what can ya do.
 
Is curation "speech", legally speaking? Legalese terms are usually very different from plain English, and I don't think I've ever seen 'curation' in a legal context, except IIRC in 230.

Yes, when it is not content-neutral, regardless of what anyone else tries to argue. Targeted and ideological control of speech is an act of editorializing that very much is speech, in that it communicates a specific message.
 