TikTok isn't protected by Section 230 in 10-year-old's 'blackout challenge' death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has drifted far beyond Congress's original intent when it passed the CDA in 1996, which was not to "create a lawless no-man's land" free of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, which concerned content moderation laws passed in Florida and Texas, SCOTUS held that algorithms reflect editorial judgments. Shwartz wrote that an algorithmically curated feed is a compilation of third-party speech made in the manner the platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.
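
To make that distinction concrete, here is a toy sketch of the two feed models the court is separating: a dumb-pipe chronological feed versus a "For You"-style feed ranked by what the platform infers the user wants. This is hypothetical illustration code, not anyone's actual recommender.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    timestamp: float

# Model 1: the "dumb pipe." Everything is served newest-first; the
# platform exercises no judgment about what any particular user sees.
def chronological_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Model 2: the curated feed. The platform scores each post against a
# profile of the user's inferred interests and surfaces whatever it
# predicts is "likely to be of interest" to that user.
def for_you_feed(posts: list[Post], interests: dict[str, float]) -> list[Post]:
    return sorted(posts, key=lambda p: interests.get(p.topic, 0.0), reverse=True)
```

Under the Third Circuit's reading, the first model is merely hosting third-party speech, while the second is the platform making its own expressive choice about what a particular user should see, and that choice is what it can now be sued over.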

With the appeal decided, Anderson's case will head back to the District Court for the Eastern District of Pennsylvania to be reheard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what broader effect the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: the opinion is precedential.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


What's especially retarded about this ruling is that if a machine algorithm's behavior is ruled to be protected speech, then the algorithm's behavior falls under the First Amendment, which means the computer has freedom of speech, and therefore you can't sue it for what it says, because its recommendations are protected under the First.

In their attempt to censor freedom of speech, they've just extended it to unthinking mathematical expressions. That would mean TikTok shouldn't be liable for what its algorithm recommended anyway: if an algorithm has freedom of speech, it's effectively endowed with personhood, and as an entity endowed with personhood, the algorithm would logically be liable for its own behavior.

Nothing about this ruling makes any sense.
Well, somebody had to program the algorithm: the owner and operator. That is the argument. Under the ruling, the algorithm is an extension of the website operator's wishes, beliefs, and desires.
 
Because, and this will be extremely hard to understand, but try to stick with me: the content in question was recommended on the kid's 'For You' tab, a tab specifically for TikTok's algorithm to show you stuff it wants you to see.

This context is extremely hard to grasp and would require you to have read paragraph one of the order, or paragraphs two and four of the article. I realize that this is a difficult task.

They can, and they probably will.
Remember:

A computer can never be held accountable, therefore a computer must never make a management decision. - IBM, 1979
 
I can't see how SCOTUS won't step in and knock this retardation down; seems to me that if it were any site other than TikTok, it wouldn't have gotten as far as it has.
What was the main reason behind this decision again? Surely you have read it, or even the article itself, and can tell me.
 
I'm personally fine with this ruling, not because of the faggotry that happens on Twitter, TikTok, and Instagram, but because of the algorithmic censorship that occurs on Google. When Google can unilaterally blackhole any topic it wants (with a clear left-wing political bias), with virtually no competitors, you functionally have a company with a monopoly on information itself. This finally opens them up to being sued for their election interference and social engineering projects. Anything that fucks Google in the ass, I'm 200% on board for.
 
Anything that fucks Google in the ass, I'm 200% on board for.
I don't see a legal theory under which Google is liable
 
Unless and until someone can prove a Google search killed a kid, Google is fine. That is the real reason this case is getting the hammer: a kid is fucking dead.
Any harm resulting from negligent faults within Google Search would suffice, but I just don't see any set of facts that would give rise to such liability (under this case).
 
Any harm resulting from negligent faults within Google Search would suffice, but I just don't see any set of facts that would give rise to such liability (under this case).
I expect this to kill off AI more than anything. There was already a family hospitalized because the mushroom identification book they bought was AI-generated and incorrect as all hell. I can only imagine any search info brought up by Google Search would be equally useless.
 
So, how much of this is because it's TikTok?
Americans all across the political spectrum are desperate to get rid of Section 230 so they can punish people and platforms they don't like. It's also an easy way for existing platforms to prevent competition, since it'll be much harder for upstarts if Section 230 goes away.
Anything that fucks Google in the ass, I'm 200% on board for.
It's more likely to affect you negatively if Section 230 goes away than it is to hurt those you dislike.

Sued? My brother in Christ, Google might as well have infinite money.
 
No.

Allowing something to stay up is very different from choosing to push it to the top of someone's page, basically forcing them to engage with it.
And what about featured content? That could be a gray area.
 
What's especially retarded about this ruling is that if a machine algorithm's behavior is ruled to be protected speech,
You are very off base. The ruling was that companies are responsible for the content they curate, whether that curation is done by humans or by a computer program. The company still created the algorithm, and it's still responsible for the algorithm's actions.

If a Tesla glitches out and rams you into a wall, Tesla doesn't get to escape liability by saying "the computer did it, not us."
 
What's especially retarded about this ruling is that if a machine algorithm's behavior is ruled to be protected speech, then the algorithm's behavior falls under the First Amendment...

Furthermore, how far does this ruling extend? Does any sorting method whatsoever count as algorithmic protected speech as well? Threads with more posts climb higher on a forum than those with fewer; does that mean sorting by popularity, by nature of being an automatic process, counts as protected speech authored by the owner? And what about third-party software purchased and implemented by the site owner? Who's liable in that circumstance, the software developer or the person implementing it on their site?
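
To illustrate how low the bar is, a forum's entire "hot threads" feature can be a single sort call. This is a hypothetical example, not any real forum's code:

```python
# A forum's "hot threads" ordering: nothing but a sort by post count.
# If algorithmic curation is the platform's own speech, is this one
# line of code speech too?
threads = [
    {"title": "Thread A", "posts": 512},
    {"title": "Thread B", "posts": 7},
    {"title": "Thread C", "posts": 129},
]
hot = sorted(threads, key=lambda t: t["posts"], reverse=True)
print([t["title"] for t in hot])  # ['Thread A', 'Thread C', 'Thread B']
```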

This dumb ruling throws all manner of liability into chaos, and none of it will make any sense, because it was made by activist judges.
You won’t see this ruling applied against anyone other than political foes of whoever is in charge.

And the toadies will clap and cheer as you are somehow charged with stochastic terrorism for reposting a meme.
 
Any harm resulting from negligent faults within Google Search would suffice, but I just don't see any set of facts that would give rise to such liability (under this case).
I think that's the next step they'll take if they're able to crack the door open a bit. I would say "slippery slope," but that term has taken on too much prophetic power in my lifetime.
 