US TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond Congress's original intent when it passed the CDA in 1996: the law was not meant to "create a lawless no-man's land" free of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, concerning content moderation laws passed in Florida and Texas, SCOTUS held that a platform's algorithmic curation reflects editorial judgment: a compilation of third-party speech assembled in the manner the platform chooses, and thus one that merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


I think TikTok is special in this regard, as the content can't effectively be found any way other than through the algorithm. It's not like DeviantArt, Furaffinity, or even this place, where recent content is simply curated or can be searched by users. Facebook and Twitter will be more affected because they are just an algorithm with a screen.
 
Is this good or bad?
My thoughts too. I know big platforms have elaborate algorithms, and if those aren't under the full protection of 230, it fucks with the big platforms specifically, not the users, and I don't mind that. On the other hand, I am not very smart and might be missing the full picture, because KF is also affected, and generally the big platforms give much less of a fuck; it's the smaller places that are more burdened with adapting to the new bullshit.
 
Facebook and Twitter will be more affected because they are just an algorithm with a screen.
A New York court actually found liability for YouTube, Facebook, and Reddit based on a similar product-negligence theory. Of course, that case, like this one, is still ongoing, but it survived a motion to dismiss.
 
I've read Null's thoughts about this possibly affecting the farms due to the way featuring stories works, but to me that seems like a small issue if it means this will be a big blow to the corporations using AI algorithms to push or suppress stories and videos all across YouTube and social media.

And frankly, featuring a story always ruins a thread for days or weeks for those who were already subscribed and following that cow, so maybe losing featuring isn't that big a deal to begin with?

And that's ignoring that the front page currently has 13 featured stories taking up 43 lines of text (30 printed, 13 line breaks) that people have to scroll past to get to their subbed section.
 
Well, in any case it is now, unless it is fought, and I don't see that fight being won. A child is dead, seemingly because the algorithm that governs TikTok is retarded and promotes suicide challenges.
In reality, though, that kid hanged himself because his mom was such a neurotic white trash whore committing emotional incest on him (God I hope it stopped there) that he couldn't live with it anymore.

Edit: can't read, thought it was Hollie Dance's kid who was the reason for this decision. And I will take the hats because fuck that vile britoid.
 
Social media companies absolutely want to shape what media you're allowed to see on their platforms to push certain messages, and they don't deserve to be allowed to hide behind protections to pretend it's third-party speech. Good decision. Not only do they make editorial decisions about what content you see, they push ever more dangerous, harmful, and society-destroying content because that's what gets clicks. Completely irresponsible use of Section 230 protections.
 
I'm trying to wrap my head around this, but is this saying the company is liable for what the users post, or that the company is liable for what its algorithm decides to show users? Because those two things are very different. The way I read it, the appeals court found that TikTok is liable for its algorithm pushing the content to this kid, rather than for the content being there to begin with. Am I correct?
 
Does this mean the algos will get removed (good), or does it mean there will be merciless removal of all speech that doesn't push the state/corporate-approved line (bad)?
The latter would require a lot of moderation. If it were done automatically, would that fall foul of the same algo ruling?
 
In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.
This is a surprisingly reasonable middle ground, and could also be applied to the mass political censorship cases: you can be the "town square" that simply hosts user content without liability for it, or you can be a publisher responsible for what you choose to actively put out.

What BigTech should not be able to do is claim immunity from content liability, while picking and choosing who gets a voice.
 
Seems like a platform could maintain durable Section 230 protection by making "the algorithm" itself user-generated, without any overriding control by the platform. The default presentation could then be the most sanitized, lawyer-reviewed, authoritarian milquetoast content imaginable, but the second a user tweaks "the algorithm" to get the good stuff, their content feed itself becomes user-generated content.
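A minimal sketch of what that might look like, assuming a hypothetical feed API (FeedItem, RankingConfig, rank_feed, and fetch_recent_posts are made-up names, not any platform's real code): the platform ships an all-zero default config that leaves the feed purely reverse-chronological, and any ranking beyond that only exists because the user wrote weights into their own config.

```python
# Hypothetical illustration of the "user-generated algorithm" idea above.
# No real platform API is referenced; all names here are invented.
from dataclasses import dataclass, field

@dataclass
class FeedItem:
    post_id: str
    tags: set[str]
    age_hours: float
    likes: int

@dataclass
class RankingConfig:
    # Entirely user-authored; the platform ships this all-zero default and never overrides it.
    tag_weights: dict[str, float] = field(default_factory=dict)  # user-chosen boosts/suppressions per tag
    recency_weight: float = 0.0
    popularity_weight: float = 0.0

def rank_feed(items: list[FeedItem], cfg: RankingConfig) -> list[FeedItem]:
    """Order a feed using only the user's own config. With the untouched default
    (all weights zero) every score is 0.0, so the stable sort leaves the feed
    purely reverse-chronological, i.e. no editorial picks by the platform."""
    def score(item: FeedItem) -> float:
        tag_score = sum(cfg.tag_weights.get(t, 0.0) for t in item.tags)
        return (tag_score
                + cfg.recency_weight * (1.0 / (1.0 + item.age_hours))
                + cfg.popularity_weight * item.likes)

    newest_first = sorted(items, key=lambda i: i.age_hours)  # smaller age = newer
    return sorted(newest_first, key=score, reverse=True)     # stable sort: zero config keeps chronological order

# The moment the user writes their own weights, the ranking reflects their choices, not the platform's:
my_config = RankingConfig(tag_weights={"the_good_stuff": 5.0, "milquetoast": -5.0}, recency_weight=1.0)
# feed = rank_feed(fetch_recent_posts(), my_config)  # fetch_recent_posts() is hypothetical
```

Whether a court would actually treat a user-tweaked ranking config as user-generated content is anyone's guess, but the design at least keeps the platform's hands off the ordering by default.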
 