US TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond Congress's original intent when it passed the CDA in 1996, which was not to "create a lawless no-man's land" of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, related to content moderation laws passed in Florida and Texas, SCOTUS held that a platform's algorithmic feed reflects editorial judgment: a compilation of third-party speech made in the manner the platform chooses, and thus meriting First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


I'm too tired atm to think of a good metaphor. But it's kind of like how if your dog bites somebody, you're responsible. Except your dog is a robot.

To assist your sleep-deprived attempt, I think a better one would be: you sell dogs. Someone buys a dog and it attacks; that's not your fault. Now, if you get Michael Vick in to train the dogs to rip out toddler throats without telling your customers, and one shockingly rips out a toddler's throat, the parents might justifiably blame you.
 
Google staff jerk off in support forums, automate the content filters, and use AI instead of humans. All three sites lack the willing manpower to janny content or reports, leaving it automated or making it the channel owner's responsibility. Facebook/Meta stock has crashed twice, Reddit moderation is still reeling from the r/jailbait fiasco, and tranny jannies are very selective about moderation, letting obvious fedposts fly to own the chuds.

When lawsuits against big sites turn class-action, they force massive and sweeping changes, and sink money into positive PR.

Similar lawsuits happened over here, making sites liable for their content; the shitshow started after hackers dumped the SSNs and medical records of people in the healthcare system. The drug trade and CP killed Finland's first imageboards once the feds got involved.


As sites became more liable for their content, they removed the possibility of anonymity and added real-life ID requirements across multiple sites. Anonymity started dying by a thousand cuts. Normies only saw the headlines in the papers and on the news, refusing to stop and think about the ramifications of making sites liable, and about what sites would do to make users liable for their own posts.


A country of six million is a blip on the radar; our internet is a cautionary tale. The US controls the global internet, and forcing major sites to be liable for their content will push the same changes everywhere. Smaller sites will follow the big ones as privacy is slowly stripped away and fedposting intensifies across the internet.

That's the story of the Finnish internet, a cautionary tale. Don't let the cancer spread. On the surface, cheering the big corpos' downfall seems all fun and games; what happens next is another matter entirely.

Correct me if I am wrong about something.
 
If you can find a negligent way it caused you harm, you might be able to.

What kind of disclaimers?
Ones like, "Don't try this at home. This is done by experts. If you try to replicate this you could get hurt. We are not responsible for any injury." etc etc, ones that protect them from lawsuits basically.
 
Under current SCOTUS precedent that may very well be correct, but I would disagree with this theory under Section 230. IIRC Null disagreed with it too in his last stream, but for different reasons than mine. Personally, I do not believe that moving threads out of PG once they have reached a certain level of completeness counts as speech of the platform, though again, that may not be the case under current SCOTUS caselaw.
Will TikTok appeal this ruling to the Supreme Court?
 
How can you have any algorithm if you are liable for the content within?
The fact that it's an algorithm is irrelevant in this discussion. The issue at hand is algorithms that automate a human curation process. They aren't presenting the same content to everyone, which alone means the company is publishing a subset of the content based on its own internal decision-making. In addition, they add disclaimers and editor's notes to "user content," transforming it from its original form, and so they are now responsible for that content. How these companies choose to curate that content doesn't matter: whether they have an algorithm do it or a team of pajeets, they are still editorializing.
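To make that concrete, here's a minimal sketch of what "publishing a per-user subset based on internal decision-making" means mechanically; the names and the `predict_interest` model are hypothetical, not any platform's actual code:

```python
# Hypothetical sketch; not any platform's real code.

def curated_feed(all_posts, user_profile, predict_interest, k=20):
    """Every user gets a different slice, chosen by the platform's own
    model of what that user should see. The curation decision belongs
    to the platform whether a human or an algorithm makes it."""
    ranked = sorted(all_posts,
                    key=lambda p: predict_interest(user_profile, p),
                    reverse=True)
    return ranked[:k]  # only the platform's top picks are ever served
```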

Do you people think TikTok actually boosted 'blackout' content, or just failed to censor it? How is this anything but a ruling that undermines 230 in favor of stringent censorship/content vetting?
For TikTok specifically, they boosted this content. They have an internal process of choosing what to show; they own the content at that point. They even state in their TOS that they own the content.
 
Technically we do, in the most basic sense. The highlight feature is automatic and works off user input via reactions, IIRC. That, in effect, surfaces posts you should read. Not the same thing, but in the most basic sense it fits.
I think that's a fundamentally different class of thing since it's working entirely off user reactions, like clicking thumbs up on a youtube video. There's no algorithmic curation there on behalf of the site, it's all based on public interaction; the only thing the highlight feature really does is provide a threshold so that it doesn't mark every post as 'hot' (which I guess is normalised across the thread with some bayesian dropoff shit or whatever for newer posts--but that's a constant formula for every thread and everybody's seeing the same result). So there shouldn't be any implication that it's editorial outside of the possibility that you could misunderstand how it works since that's kinda nebulous and the label is a bit ambiguous.

Though I guess that's a distinct possibility since zoomers don't know forums. Maybe instead of "highlight" it should be an emoji of a banana pepper
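For what it's worth, a constant, viewer-independent highlight threshold along those lines might look something like this; this is a guess at the shape of the mechanism from the description above, not the forum's actual formula:

```python
# Guess at a per-thread, viewer-independent highlight rule; not the
# forum's actual implementation.

def highlighted(posts, min_reactions=5, times_thread_avg=3.0):
    """Mark a post 'hot' only if its reaction count clears a fixed floor
    AND stands out against its own thread's average, so the threshold is
    normalised per thread but identical for every viewer."""
    if not posts:
        return []
    avg = sum(p["reactions"] for p in posts) / len(posts)
    return [p for p in posts
            if p["reactions"] >= min_reactions
            and p["reactions"] >= times_thread_avg * avg]
```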
 
Big tech companies (especially tiktok) and their algorithms are so hated that they can be used as a handy excuse to destroy free speech on the internet as a whole.
How is this destroying free speech on the Internet? I think it’s very intuitive that if Facebook purposefully curates your feed to show only certain things, then that’s the platform itself speaking. And the platform being regulated for its own speech is different than a platform being regulated for the speech of its users.

If I cut up a bunch of newspapers and create a whole new piece of writing using sentences from the cut-up papers, that's me speaking, not the papers.

Social media can avoid liability if they just show a chronological ordering of posts of people you follow. It’s really not that hard, and it’s definitely far healthier for society to stick to that than to allow Google or Facebook or Twitter to rile up crazies by purposefully showing content that will “engage” them (i.e. drive them literally insane).
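As a rough sketch (with hypothetical field names) of what that safer baseline looks like:

```python
# Minimal sketch of the liability-avoiding baseline described above:
# no per-user ranking model, just the people you follow, newest first.

def chronological_feed(all_posts, following):
    """The same deterministic rule for everyone who follows the same
    accounts; the platform makes no judgment about what is 'engaging'."""
    mine = [p for p in all_posts if p["author_id"] in following]
    return sorted(mine, key=lambda p: p["created_at"], reverse=True)
```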
 
What are the bets on real ID for social media? Guardrails to detect and restrict the under-18s?

My guesses:
* A month of setup and seething over having to do real ID.
* If social media pushes for Supreme Court review, add a year; but you'll know it's over if the tune changes about using real ID.
* Betting on chat and video restrictions for new accounts of under-18 users.
...
I think that's a fundamentally different class of thing since it's working entirely off user reactions, like clicking thumbs up on a youtube video. There's no algorithmic curation there on behalf of the site, it's all based on public interaction; the only thing the highlight feature really does is provide a threshold so that it doesn't mark every post as 'hot' (which I guess is normalised across the thread with some bayesian dropoff shit or whatever for newer posts--but that's a constant formula for every thread and everybody's seeing the same result). So there shouldn't be any implication that it's editorial outside of the possibility that you could misunderstand how it works since that's kinda nebulous and the label is a bit ambiguous.

Though I guess that's a distinct possibility since zoomers don't know forums. Maybe instead of "highlight" it should be an emoji of a banana pepper
My only disagreement is the China TikTok meme.


 
What is the difference if, instead of promoting specific posts For You, they identify posts they think you will like, and then demote all other content?

If you're demoting 2% of the user-generated content, then in some indirect sense the other 98% that is served to every user is "promoted." If you're demoting 98% of the user-generated content, it's rather more obvious that the 2% you choose to serve is being promoted by you. You are effectively using the 2% of user-generated posts you agree with in order to speak, and then claiming that it wasn't you who said it, it was the users.

You could naively draw the line at 50%, but intent also matters. If 51% of the "user-generated" content you remove is spam posted by bots, that shouldn't make you liable for whatever foolishness the other 49% of actual human users might post. Though if you intend to distort the available speech in an online public forum, you don't need to demote a large fraction of it to achieve that.

Also, the First Amendment applies to governments. If twitter or tiktok or facebook want to violate users' freedom of speech, they are free to do so (though since corporate persons are a legal fiction, it does raise the question of how a government that has not been empowered to violate freedom of speech can create a corporation with powers the government lacks and cannot grant; but that's a different discussion). And, according to this ruling, twitter or tiktok or facebook applying an algorithm that violates users' freedom of speech is an expression of twitter or tiktok or facebook's protected speech.
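The promote/demote symmetry is easy to see in code: whether you boost the 2 percent or bury the 98 percent, the served subset is the same editorial choice. A toy illustration, with hypothetical predicate names:

```python
# Toy illustration that "promoting" some posts and "demoting" the rest
# are the same editorial act: both reduce to the platform choosing the
# subset that gets served.

def serve_by_promoting(posts, platform_likes):
    return [p for p in posts if platform_likes(p)]

def serve_by_demoting(posts, platform_dislikes):
    return [p for p in posts if not platform_dislikes(p)]

# If platform_dislikes(p) is defined as (not platform_likes(p)), the two
# functions return exactly the same feed.
```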
 
So some random faggot appeals judge is what's going to start the complete destruction of the internet. Good to know.
The judge is tied to the FBI, which built a network of spyware and ISP-level data harvesting. This ruling eases the implementation of that network: it spies on users in search of CP, and they get your data whether you're innocent or guilty.

This shit was planned from the start: only kosher, family-friendly, corpo- and government-approved content for you, sonny.

Retarded, tech-inept judges make the best useful idiots in power; always have.
 
And, according to this ruling, twitter or tiktok or facebook applying an algorithm that violates users' freedom of speech is an expression of twitter or tiktok or facebook's protected speech.
Every person is both civilly and criminally liable for their own speech. And companies are legally people. They aren't being punished for violating people's free speech rights, but for the company's own independent speech.
 
Every person is both civilly and criminally liable for their own speech. And companies are legally people. They aren't being punished for violating people's free speech rights, but for the company's own independent speech.
Yes. And I find it delightful that the means by which a company can avoid this liability is by respecting the first amendment.
 
Can someone give a retard like myself a tl;dr on whether the Farms is safe or not? If I don't get my daily fix of the Kiwi Farms I'll start stimming out of control.
 
Can someone give a retard like myself a tl;dr on whether the Farms is safe or not? If I don't get my daily fix of the Kiwi Farms I'll start stimming out of control.
The Farms is probably safer than 90% of social media under this ruling, because barely anything here is automated under an "algorithm". Facebook and Twitter, though, are fucking sweating. This only applies right now in the Third Circuit, so unless someone sues us from there, we are fine.
 