TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond Congress's original intent when it passed the CDA in 1996; the law was not meant to "create a lawless no-man's land" of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, concerning content moderation laws passed in Florida and Texas, SCOTUS held that algorithms reflect editorial judgments. Shwartz wrote that such a curated feed is a compilation of third-party speech, made in the manner the platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


There is no massive difference when kids shouldn't be on it unattended in the first place. It's the digital equivalent of letting your kid wander into the ghetto and being surprised they get kidnapped/raped/murdered.

It's the job of a child's parents to look after them, and nobody else's. Nobody else who doesn't volunteer for the job, anyway.
An 18-year-old is a legal adult. A 10-year-old is still figuring out multiplication. If you can't figure out the difference, you are hopeless.
Where exactly were the kid's parents?
Where was the moderation team? TikTok is owned by ByteDance, one of the richest corpos in the world. Why the fuck was a suicide trend not instantly dumpstered and throttled by a janny? That was nuclear levels of bad, but they did nothing about it. A site has a responsibility to protect its user base from hostile threats. A trend telling kids to KILL THEMSELVES is a threat that should have been jannied the second it hit the servers, scrubbed from existence.
 
The child was 10.
Nobody's saying the death of a child isn't a tragedy. But most people, at some point in their childhoods, heard their parents say "Would you jump off a cliff because you saw your friends do it?"

Personally I think it'd be a good idea if TikTok made every new user watch a video telling them not to do dumb shit just because they saw some influencer in Beverly Hills do it. But that's just me.
 
A kid is dead because a machine went haywire. Naw. Your opinion is one from someone who doesn't have to deal with kids often. Irresponsible. It happened on their website and their moderation staff didn't stop that shit. No. Fuck them.
The problem is that a machine (probably) did not "go haywire": an algorithm that estimates you will be interested in certain videos, based on commonalities between your user profile and those of other people who were interested in those videos (the way these algos typically work), is working as intended. The only way it could have prevented this is by somehow anticipating that the content of the video was harmful without any moderator input. The case is not alleging that TikTok gained sentience and maliciously murdered a 10-year-old; it is alleging that staff should have known that their algorithm might present someone a harmful video FROM A TREND WHICH DID NOT EXIST WHEN THE ALGORITHM WAS CREATED. To avoid liability by this standard would require staff to have anticipated every possible harmful video in the entire latent space of human creativity and preemptively banned them.
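To make that concrete, here's a toy sketch of the kind of similarity-based recommendation being described. Every name and number in it is invented for illustration; it's obviously not TikTok's actual code. The point is that the scoring only measures engagement overlap between users, so a "challenge" video surfaces exactly the same way a cat video does, with no notion of harm anywhere in the loop.

```python
# Hypothetical sketch of a similarity-based recommender (invented names/data,
# not TikTok's actual system). Videos are scored purely by engagement overlap.
from collections import Counter

# user -> set of video IDs they watched
watch_history = {
    "user_a": {"cats_1", "dance_2", "challenge_9"},
    "user_b": {"cats_1", "challenge_9", "cooking_3"},
    "child":  {"cats_1", "dance_2"},
}

def recommend(target_user, history, top_n=3):
    """Score unseen videos by how often they were watched by users whose
    histories overlap with the target's. Nothing here inspects content."""
    seen = history[target_user]
    scores = Counter()
    for other, videos in history.items():
        if other == target_user:
            continue
        overlap = len(seen & videos)      # similarity = number of shared watches
        if overlap == 0:
            continue
        for video in videos - seen:       # candidate videos the target hasn't seen
            scores[video] += overlap
    return [video for video, _ in scores.most_common(top_n)]

print(recommend("child", watch_history))  # ['challenge_9', 'cooking_3']
```

Working exactly as intended, it ranks the hypothetical "challenge_9" video first, simply because the most similar users engaged with it.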

This is why it's a terrible test case in my opinion: it establishes that the standard for piercing 230 can be staff negligence rather than staff editorial action. If this were instead a case where it was alleged that staff had manually altered the algorithm in some way and should have anticipated that the manual action would result in dangerous videos being promoted, that I think would portend a safer precedent for future jurisprudence on 230. But a standard of failure to anticipate an unknown unknown is insane, impossible to comply with, and even a facade of compliance would require draconian moderation you can't begin to imagine.
 
Let’s not pretend that these ‘algorithms’ are just code or some unknown jeetcoder’s responsibility. These computer programs are fine-tuned to deliver a very specific result, whether for a commercial or political purpose. Behind every algorithm there is a person pushing a specific agenda, and when users start noticing, they hide behind the bullshit ‘it’s our algorithm!’ excuse. This isn’t the likes of qsort and sprintf we’re talking about gentlemen.
Publish the methodology and then I would have no problem with that excuse. How does it work? Give us your methodology. They always hide behind "well, that would make our competitors win." Look, how you recommend cat videos is not a trade secret that other people can't figure out.
 
Publish the methodology and then I would have no problem with that excuse. How does it work? Give us your methodology. They always hide behind "well, that would make our competitors win." Look, how you recommend cat videos is not a trade secret that other people can't figure out.
Fun fact: if you repost a news article from a supposed "real journalism" outlet, you can now be held personally liable for "boosting" it. This is how dumb this precedent is and how dumb anyone who supports it is.

The Third Circuit has basically rewritten 230. This is called legislating from the bench. This is judicial activism. This is quite literally Communism.
 
Fun fact: if you repost a news article from a supposed "real journalism" outlet, you can now be held personally liable for "boosting" it. This is how dumb this precedent is and how dumb anyone who supports it is.
i think its also just an excuse from tackling the real problem. which is kids are getting dumber, mostly non white kids, idk how you can really give some chink site the responsibility of singlehandedly unretarding the american youth unless you send them to foreign tard camps. if the kids are dumbass enough to eat soap and shoot themselves because tiktok said then theyll do it anyway. during the milk crate challenge i saw some little impatient niglet knock an older black guy down off the top of a really high pile for "taking too long" the tiktok chinkcops cant teach those people that they shouldnt splatter their brains everywhere.

idk i just hope like others said it means the days of totally-not-curated-but-completely-curated algorithm pushes come to an end. or it has to be so sanitized that the algo is only used for people who want to basically watch network cable show clips and the real content is unfiltered finally.
 
Where was the moderation team?
Sorry, I realize now that there's possibly a misunderstanding here that I didn't address. That's potentially a valid question, but it's not the question the court was ruling on. The key point is here:
"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'"

This is not a moderator action. No moderator approves of every algorithmic homepage video placement. The judge is NOT saying moderators should have banned the trend when they saw it, or the company should have better equipped them to ban harmful trends.

He is saying that the automated serving of related videos by the algorithm, irrespective of whether staff were involved or not, itself constitutes a deliberate, willful editorial action, and that action may be adjudicated as either negligent or prudent and is thus not eligible for protection under section 230.
 
He is saying that the automated serving of related videos by the algorithm, irrespective of whether staff were involved or not, itself constitutes a deliberate, willful editorial action, and that action may be adjudicated as either negligent or prudent and is thus not eligible for protection under section 230.
You're actually missing a crucial point here:
[attached screenshots from the opinion]

This is SPECIFICALLY because the algorithm takes factors OTHER than user input into account. I've been making this argument for years that platforms such as facebook and youtube messing with their algorithms to promote or demote content breaks 230 protections because they are effectively editorializing what content is shown to people rather than letting them curate their own feeds based off of what's available on the platforms.

It's actually hilarious, because the two worst offenders are tiktok and facebook. I can't get tiktok to stop showing those stupid fucking reddit stories in my FYP, even after a solid week purposefully trying to purge them out. I can't get facebook to start showing me more than like 3% of my friends' posts no matter how hard I try to curate it. That's the kind of shit this ruling hammers, and I can't say I disagree. You can't sit here and curate out content that a user says they want, while curating in content they didn't, and say you're not acting as a publisher.
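A toy sketch of what "curating in / curating out" looks like in ranking terms (made-up names, sources, and weights, not anyone's real feed code): the user's own signal is only one input, and the platform-chosen multipliers decide what actually surfaces.

```python
# Hypothetical feed ranking where platform-side weights override user signals.
# All names, sources, and numbers are invented purely for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    source: str            # "friend", "reddit_story", "advertiser", ...
    user_affinity: float   # how strongly this user's own behavior favors the post

# Multipliers chosen by the platform, applied regardless of what the user asked for.
PLATFORM_WEIGHT = {
    "advertiser": 3.0,     # curated *in*: promoted despite low user affinity
    "reddit_story": 1.5,
    "friend": 0.3,         # curated *out*: demoted even when the user wants it
}

def rank_feed(posts):
    """Final score = the user's own signal * the platform's editorial weight."""
    return sorted(posts,
                  key=lambda p: p.user_affinity * PLATFORM_WEIGHT[p.source],
                  reverse=True)

feed = rank_feed([
    Post("friend_photo", "friend", 0.9),            # 0.9  * 0.3 = 0.27
    Post("reddit_story_tts", "reddit_story", 0.2),  # 0.2  * 1.5 = 0.30
    Post("sponsored", "advertiser", 0.15),          # 0.15 * 3.0 = 0.45
])
print([p.post_id for p in feed])  # ['sponsored', 'reddit_story_tts', 'friend_photo']
```

The post the user actually wants most ends up last, which is exactly the behavior being complained about above.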
 
You're actually missing a crucial point here:
This is SPECIFICALLY because the algorithm takes factors OTHER than user input into account...

I can't get tiktok to stop showing those stupid fucking reddit stories in my FYP, even after a solid week purposefully trying to purge them out.
To be frank, this is a non sequitur. It's not the argument made by the judge, and it's obvious you are trying to fit your own personal grudge against the children's keyjangling app into this conversation where it doesn't belong. Relevant text: "TikTok’s FYP algorithm “[d]ecid[es] on the third-party speech that will be included... Accordingly, TikTok’s algorithm... was TikTok’s own “expressive activity,” id., and thus its first-party speech. Such first-party speech is the basis for Anderson’s claims... that TikTok’s FYP algorithm was defectively designed... TikTok’s recommendations via its FYP algorithm—is TikTok’s own expressive activity, § 230 does not bar Anderson’s claims.(12)" (The ellipses are mostly skipping external case citations.)

That's the statement attached to footnote 12. It does not say TikTok curates their algorithm and is consequently liable. It says, without reference to the method or fairness of selection, that the action of the algorithm selecting videos is the editorial action in question. Footnote 12 does not say that the algorithm takes factors other than user input into account; it says the recommendation "was not contingent upon specific user input" (e.g. a "search function"), which would disqualify any algorithmic recommendation, not just ones you personally find annoying.
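For what "contingent upon specific user input" means in practice, here's a toy contrast (hypothetical names and data only, not anything from the opinion or from TikTok): a search function returns nothing unless the user supplies a query, while an FYP-style recommendation is generated from the platform's model of the user with no query at all.

```python
# Toy contrast: input-contingent retrieval vs. input-free recommendation.
# All names and data are hypothetical, purely to illustrate the distinction.

VIDEOS = ["cat compilation", "blackout challenge", "cooking pasta"]

def search(query: str) -> list[str]:
    """Contingent on specific user input: results exist only because the
    user typed a query, and are limited to what matches it."""
    return [v for v in VIDEOS if query.lower() in v.lower()]

def for_you(profile: dict) -> list[str]:
    """Not contingent on specific user input: the platform decides what to
    serve from the whole catalog based on its own model of the user."""
    interests = profile.get("predicted_interests", [])
    picked = [v for v in VIDEOS if any(tag in v for tag in interests)]
    return picked or VIDEOS[:1]   # falls back to a platform default, still no query

print(search("cat"))                                    # ['cat compilation']
print(for_you({"predicted_interests": ["challenge"]}))  # ['blackout challenge']
```

On that reading, the first function keeps the shield and the second does not, regardless of how the second one weights its factors.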

Once again, the only way for a platform to respond to the standards this judge suggests are required to retain section 230 protection is significantly more prolific and more invasive moderation, or the elimination of algorithmic recommendations entirely (they aren't going to do that).
 
Fun fact: if you repost a news article from a supposed "real journalism" outlet, you can now be held personally liable for "boosting" it. This is how dumb this precedent is and how dumb anyone who supports it is.

The Third Circuit has basically rewritten 230. This is called legislating from the bench. This is judicial activism. This is quite literally Communism.
I think they should have 230 protection when it comes to their algorithm. However, it should be transparent how it actually works, and they should also be transparent about their moderation, which they're not. If you want to say you can arbitrarily ban people for X, Y, and Z while certain things are being boosted, then people, especially government officials, have every right to ask why harmful content promoting self-harm is being promoted on your website.
Sorry, I realize now that there's possibly a misunderstanding here that I didn't address. That's potentially a valid question, but it's not the question the court was ruling on. The key point is here:
"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'"

This is not a moderator action. No moderator approves of every algorithmic homepage video placement. The judge is NOT saying moderators should have banned the trend when they saw it, or the company should have better equipped them to ban harmful trends.

He is saying that the automated serving of related videos by the algorithm, irrespective of whether staff were involved or not, itself constitutes a deliberate, willful editorial action, and that action may be adjudicated as either negligent or prudent and is thus not eligible for protection under section 230.
This is why you need judges who actually understand how technology works, rather than these 97-year-old boomers who couldn't figure out how to check their own emails.
 
Once again, the only way for a platform to respond to the standards this judge suggests are required to retain section 230 protection is significantly more prolific and more invasive moderation, or the elimination of algorithmic recommendations entirely (they aren't going to do that).
I don't see how what they're saying here is any different from "we're just abolishing 230 and bringing back Stratton Oakmont, and fuck the First Amendment."

And that's legislating from the bench.

Judicial activism.
 
It's actually hilarious, because the two worst offenders are tiktok and facebook. I can't get tiktok to stop showing those stupid fucking reddit stories in my FYP, even after a solid week purposefully trying to purge them out. I can't get facebook to start showing me more than like 3% of my friends' posts no matter how hard I try to curate it. That's the kind of shit this ruling hammers, and I can't say I disagree. You can't sit here and curate out content that a user says they want, while curating in content they didn't, and say you're not acting as a publisher.
Interesting you also have this problem. My facebook feed is awash with adverts for fucking 'all natural' cleaning products and 'all natural' deodorants and beauty products etc, when I only trust in bleach and aluminium salts. It must be some kind of market segmentation crap because it's clearly not getting that input from me personally.
 
I have never seen YouTube append that disclaimer. At any rate, it would be for a jury to decide if that negates the negligence. I'd say it would.
Even very small mentions of COVID in a video during the pandemic would get it tagged with a "This is what COVID is" Wikipedia link right above the description. There are also similar warnings for suicide hotlines, flat Earth conspiracies, and climate change. I've just always been curious whether they were automated, caused by user reports, or manually tagged on by the uploader.

Hell, somewhat recently a Chantal video had the "binging is a disorder" warning on it, and she definitely never toggles those on.
 
The argument will be that running a site with a specific focus (say lolcows) and curating that list of permitted threads is the site making a statement about the individuals and the accuracy of the comments made in the OPs and throughout the threads.
Let's look at what the law actually says (for my own sake, mainly):
"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." (47 U.S.C. § 230(c)(1)).
The important parts are "interactive computer service" and "provided by another information content provider".

When TikTok (or any of the big tech sites) "recommends" a list of content, the content is no longer coming from a third party. It is coming from the company itself.

I think "standard of quality" is a good description for curating a website about a specific topic, like say guns, and deleting content that doesn't meet that standard.
I no longer think my curation argument, or any curation argument, matters, based on the law's wording. Site owners are not required to keep posts up. A company does not violate the law by deleting posts it doesn't like; they have freedom of association, too. The protection we all get from 230 is to not be liable for speech third parties make. It's complicated by the fact that Facebook et al. claim ownership of all the content posted on their sites. Therefore they are the publisher and the content no longer comes from a third party.

I think the impact will be limited to killing "recommended" or "for you" curation of feeds. Facebook will have to go back to chronological timelines and such. It will be harder for companies to deprioritize content, but not to remove it, if they can get to it at all. Also the "misinformation" disclaimers and similar.
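For illustration, a minimal sketch of the kind of feed that reading would push platforms back toward (invented field names, obviously not Facebook's code): a pure chronological sort over accounts the user chose to follow, with no platform-side scoring deciding what gets surfaced or buried.

```python
# Minimal sketch of a chronological, follow-only feed. Field names are
# invented for illustration; no real platform's code is being described.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    posted_at: float   # unix timestamp

def chronological_feed(posts, following):
    """Everything from accounts the user follows, newest first. No relevance
    scoring, boosting, or demotion: the platform makes no judgment about
    which third-party posts to surface."""
    return sorted((p for p in posts if p.author in following),
                  key=lambda p: p.posted_at,
                  reverse=True)

posts = [
    Post("alice", "lunch pic", 1_700_000_100),
    Post("brand_x", "sponsored post", 1_700_000_200),
    Post("bob", "cat video", 1_700_000_300),
]
for p in chronological_feed(posts, following={"alice", "bob"}):
    print(p.author, "-", p.text)
# bob - cat video
# alice - lunch pic
```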
 