TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond Congress's original intent when it passed the CDA in 1996, which, he said, was not to "create a lawless no-man's land" exempt from legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, which concerned content moderation laws passed in Florida and Texas, SCOTUS held that a platform's algorithms reflect editorial judgments. Shwartz wrote that such curation is a compilation of third-party speech made in the manner the platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


I’m torn because, in theory, platforms being forced not to use algorithms to bury content would fix a lot of what’s wrong with the internet today. No more users self-censoring for algorithm dollars, and no more shitty clickbait and 10,000-degree knives.
But I know how corpos work, and this will force things to where every company (probably even Kiwifarms) will have to enforce stricter censorship guidelines to avoid anything ever getting into an algorithm. So honestly, I’m inclined to pick the devil I know vs the devil I don’t, and hope that this decision gets overruled and things continue as they have been.
 
In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

This judge is so retarded. If they actually read the case law, background and history of 230, they'd realize that this is EXACTLY what it's for.

Wikipedia
This concern was raised by legal challenges against CompuServe and Prodigy, which were early service providers at that time. CompuServe stated it would not attempt to regulate what users posted on its services, while Prodigy had employed a team of moderators to validate content. Both companies faced legal challenges related to content posted by their users. In Cubby, Inc. v. CompuServe Inc., CompuServe was found not to be at fault as, by its stance as allowing all content to go unmoderated, it was a distributor and thus not liable for libelous content posted by users. However, in Stratton Oakmont, Inc. v. Prodigy Services Co., the court concluded that because Prodigy had taken an editorial role with regard to customer content, it was a publisher and was legally responsible for libel committed by its customers.

Congress recognized that requiring service providers to block indecent content would make them be treated as publishers in the context of the First Amendment, and thus would make them become liable for other content such as libel, not set out in the existing CDA. Cox and fellow Representative Ron Wyden (D-OR) wrote the House bill's section 509, titled the Internet Freedom and Family Empowerment Act, designed to override the decision from Stratton Oakmont, so that a service provider could moderate content as necessary and would not have to act as a wholly neutral conduit.

So, the judge is 100% wrong. The express purpose is that they can pick how they want to moderate while not being held liable for the retards on their platforms. Is the judge retarded enough to argue that if a human moderator made the same exact decision that the algorithm did, they would be liable? Because that is the literal example case that Congress was trying to change.

I know some of you are gonna read this and think "Great, now we can go back to the no-moderation internet days," but that isn't right. Read the Wikipedia quote again:

REQUIRING service providers to block indecent content would make them be treated as publishers in the context of the First Amendment, and thus would make them become liable

Under current law, some things you are REQUIRED to moderate, e.g. child porn, copyright infringement, keeping minors under 13 off your platform, etc. So you are required to moderate, but if you moderate, you become liable. I hope people realize how retarded this is now. The only way a platform could exist without Section 230 is if you preapprove every post before putting it up. It is untenable.
 
I have seen the parenting of today. Children glued to tablets, watching absolutely incoherent Roblox YouTube content at 10x speed. You can’t take them away from it, otherwise they turn into spastic nightmares writhing on the floor in mock pain from being deprived of over-stimulating mental popcorn.

If that’s your kid, I don’t know what to say beyond “When they die from dumb shit, it is your fault.” It’s probably a mercy that they kill themselves rather than live with the consequences of your lack of participation and care, holy fuck. We are headed towards some Lawnmower Man scenario where kids are running around in jumpsuits and fingerfucking Winnie The Pooh in cyberspace for Robucks.
 
I agree with the ruling. I think there is a distinct difference in the way TikTok presents content vs. a traditional forum. TikTok relies entirely on its algorithm to serve content. You can’t browse TikTok’s library on your own; TikTok literally forces you to watch what it picks in order to find content. To me, you can’t just wave away liability with a law that was written before such a model was even possible.
 
The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death,

Legislation can't fix stupid. Your kid earned a page in the Darwin Awards; it's either on you or your genes.

I'm sure the powers that be will have a ton of fun with this one. When you need to unravel the entire internet you start with some "think of the children" BS apparently.

Nylah... PA... *sigh*... got some other dead kids you can ghetto grift off of? ... Maybe Jamal ate a 9mm while trying to look tough on Facebook? Any more pillars of free white society you and Mr. Silverbergstien Esq can undermine for a payday?
 
The parent only started caring about the kid once they died, I guess?
Social media is helping the world with the whole survival-of-the-fittest thing. If your child believes what they see on social media, tries it, and dies, they deserve it and you should have been a better parent.
 
So how long before sites are sued because their algorithms aren't actively combating hate speech/racism, and someone tries suing for emotional damages or claims a loved one was "radicalised" because they got recommended memes not deemed kosher?
 
Watching kids? How does that work?
I watch mine like a hawk. There is some quite cool stuff on YouTube, for example. So we all get in from school, and they get half an hour of TV to chill while I cook. I don’t mind that, and I don’t mind them watching something autistic about fish or planets on YouTube. The problem is that you have to be right there next to them, because as far as I can see you can’t whitelist stuff. You can set up a playlist, but you can’t whitelist a bunch of channels you know are kid-friendly and just let them pick.
So that’s what I do now: I’ll cook, and as one video ends I’ll go and check what’s next up, and if it’s more autistic content about iguanas or rockets, we are grand. If it’s not, I’ll change it.
But… the algorithm has other ideas. Even if you’re clearly starting from a ‘kid is watching this’ perspective, it creeps and gradually changes to worse and worse stuff without any user input.
I’ve sat with the older ones and shown them this - shown them how they start watching something fun, benign and educational, and then the next ‘offered’ video is subtly different, with keywords that lead to the next one and the next until there’s gore, nudity or stuff that’s completely unsuitable. It is deliberate.
It’s very alarming. Yes you have to watch your kids all the time.
I would be much happier with YouTube if I could whitelist a bunch of channels.
 
In theory this should lead to greater free speech, but in reality I hope we all know that it will lead to even MORE burying of controversial content like guns, politically sensitive topics, suicide, etc. YouTube already lowers the chance of that stuff appearing in the algorithm and slaps giant warning labels on it; now they might force you to search for it to even find it at all, if not outright ban it. I think this decision is going to have seriously bad reverberations.
 
Seems like a platform could maintain durable Section 230 protection by making it so "the algorithm" was also user-generated without any overriding control by the platform. And then whatever the default presentation could be the most sanitized, lawyer-reviewed, authoritarian milquetoast content imaginable, but the second a user tweaks "the algorithm" to get the good stuff, their content feed itself becomes user-generated content.
This seems like basically how Reddit works, where the default subs (even though they change from time to time) are bland and pretty inoffensive, stuff like /r/funny. And even then they're arguably recommending a category of post to you, not the posts themselves, whereas sites like TikTok are recommending specific pieces of content by way of algorithm, which I could see being a legal distinction made.

Depending on where that line gets drawn, you could probably pretty easily set up a tagging system for videos, especially ones people use with hashtags, and let people subscribe to them and maybe suggest some to people based on content they've liked. If you crack open the black box a bit and explain to the user how content is recommended to them, e.g. "most popular videos in X tag" on their front page, it seems like you could effectively shift the curation onto the user.
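
To make that concrete, here's a minimal sketch (Python, with made-up Video/User types; nothing here reflects any real platform's API) of what a feed driven entirely by the user's own tag subscriptions and a single disclosed ranking rule might look like:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical record types for illustration only; no real platform's API is implied.
@dataclass
class Video:
    video_id: str
    tags: set[str]        # hashtags applied by the uploader
    likes: int
    posted_at: datetime

@dataclass
class User:
    user_id: str
    subscribed_tags: set[str] = field(default_factory=set)  # tags the user chose to follow

def user_curated_feed(user: User, catalog: list[Video],
                      days: int = 7, limit: int = 20) -> list[Video]:
    """Feed built only from the user's own tag subscriptions, ranked by one
    disclosed rule: most likes within the last `days` days. The platform
    makes no per-user editorial picks here; the user's choices do the curating."""
    cutoff = datetime.now() - timedelta(days=days)
    candidates = [
        v for v in catalog
        if v.posted_at >= cutoff and (v.tags & user.subscribed_tags)
    ]
    return sorted(candidates, key=lambda v: v.likes, reverse=True)[:limit]
```

Whether a design like that would actually keep the liability shield is a legal question rather than a technical one, but the point is that the ranking rule is transparent and every input to it comes from the user.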
 
Ignoring the whole section 230 algorithm issue, if I say, "hey check out this video" to a friend and they kill themselves after copying what was in the video, how in the world am I liable? That is all TikTok's algorithm does. It never says "do this thing." In fact, all social media algorithms love showing you rage bait that they know you will disagree with or laugh at.
 
Under current law, some things you are REQUIRED to moderate, e.g. child porn, copyright infringement, keeping minors under 13 off your platform, etc. So you are required to moderate, but if you moderate, you become liable
Legally obligated content removal would fall under compelled speech and would not qualify as the editorial approval or algorithmic curation that is the platform operator's speech.

Section 509 is of dubious necessity. It is also worded such that the liability protection applies only to restricting access to, and the availability of, content (and even then, only certain types and classes of content). Promoting content (i.e., the "For You" feeds) would not be protected under Section 509, consistent with the court's ruling.
 
A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.
You're completely wrong about Google search. They literally censor and hide results for political content that doesn't match their worldview. That is as editorial as it gets.
I'm too tired atm to think of a good metaphor. But it's kind of like how if your dog bites somebody, you're responsible. Except your dog is a robot.
It's worse. Algorithms do exactly what you tell them to do. They do not have any independent agency like an animal does. It would be like a newspaper saying it doesn't editorialize because it just prints third-party content; it still chooses which stories get printed and where they get featured (front page vs. buried in the back).
 