TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond the original intent when Congress passed the CDA in 1996; the law was not meant to "create a lawless no-man's land" of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, which concerned content moderation laws passed in Florida and Texas, SCOTUS held that platforms' algorithms reflect editorial judgments. Shwartz wrote that such curation is a compilation of third-party speech made in the manner a platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


But TikTok is actively selecting and promoting videos they are certain you would watch. It's different from when you search up the video yourself. Wouldn't be the first time this theory was used to establish liability. New York recently entertained a similar theory
yeah I get that but I think editorial responsibility really depends on how the algorithm is constructed. like when youtube shows me troon videos that is youtube editorial choice making people watch a thing. when it shows me ordinary nerd shit it is probably site users promoting the content themselves, like stickers featuring a post here.
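Rough sketch of the distinction I mean, with made-up names (this isn't anyone's actual system): ranking that just falls out of what users did versus ranking where the operator's own weights decide what gets surfaced.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    user_stickers: int      # signal supplied by users (stickers, votes, likes)
    platform_boost: float   # weight the operator chooses to apply

def rank_by_users(posts: list[Post]) -> list[Post]:
    # "Users promoting the content themselves": the order falls out of
    # aggregate user actions; the site just tallies them.
    return sorted(posts, key=lambda p: p.user_stickers, reverse=True)

def rank_by_platform(posts: list[Post]) -> list[Post]:
    # "Editorial choice": the operator's own weighting decides what gets
    # surfaced, regardless of what users did.
    return sorted(posts, key=lambda p: p.platform_boost, reverse=True)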
 
yeah I get that but I think editorial responsibility really depends on how the algorithm is constructed. like when youtube shows me troon videos that is youtube editorial choice making people watch a thing. when it shows me ordinary nerd shit it is probably site users promoting the content themselves, like stickers featuring a post here.
Oh, yeah, there's certainly an issue here, I agree. I'm not sure how courts are to solve it; are they to start calculating percentages of liability like in car crash cases? SCOTUS drew only a singular distinction, though: published content that the platform in some way touched vs. content that was only touched by the users. This distinction ends up not being very useful, 'cause neither of these is a realistic scenario. I'd guess that most if not all content we consume on social media is a mixture of the two. I suppose one could see the second one as a "harsh line" of sorts, and so everything that is not the second category would fall into the first. Given that they viewed all the content at issue in the case as the first category, I guess this view is accurate.

I have further thoughts, however, in a related fashion. If all even-a-bit curated third party content is speech, doesn't that make all publishers into speakers by the mere act of choosing what to publish? See this part of the opinion:

"To the extent that social media platforms create expressive products, they receive the First Amendment’s protection. And although these cases are here in a preliminary posture, the current record suggests that some platforms, in at least some functions, are indeed engaged in expression. In constructing certain feeds, those platforms make choices about what third-party speech to display and how to display it. They include and exclude, organize and prioritize—and in making millions of those decisions each day, produce their own distinctive compilations of expression. And while much about social media is new, the essence of that project is something this Court has seen before. Traditional publishers and editors also select and shape other parties’ expression into their own curated speech products. [...]So too we have held, when applying that principle, that expressive activity includes presenting a curated compilation of speech originally created by others."

They specifically chose to apply standards meant for speakers to publishers. This could mean bad things for publishers if the courts read this as I did.
 
I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.
A search is fundamentally different to a tailored feed.

Moreover, Google shouldn’t be protected to the extent that it is manipulating the search results beyond the scope of “what do we think the user is looking for” and instead providing results based on “what do we think the user should see”. Those are editorial decisions on their part.
 
Will it? If users posting something is fine, why would users rating/promoting/whatever things suddenly be an issue? How is a user posting in community happenings any different from them posting in any other board?
The issue is for jannies deciding which threads get to stay. By making those decisions, the idea is that the site takes responsibility for the content of those threads.
Not so. It was an immunity from having third-party speech held against you as your own first-party speech. Creating algorithms that are in charge of what content gets promoted, deleted, and otherwise editorializing the content has been held by SCOTUS to be first-party speech. It follows then that this is not covered by 230's liability immunity for third parties.

"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." - 47 U.S. Code § 230

"The Fifth Circuit was wrong in concluding that Texas’s restrictions on the platforms’ selection, ordering, and labeling of third-party posts do not interfere with expression. And the court was wrong to treat as valid Texas’s interest in changing the content of the platforms’ feeds. [...] But for purposes of today’s cases, the takeaway of Turner is this holding: A private party’s collection of third-party content into a single speech product (the operators’ “repertoire” of programming) is itself expressive, and intrusion into that activity must be specially justified under the First Amendment. [...] Deciding on the third-party speech that will be included in or excluded from a compilation—and then organizing and presenting the included items—is expressive activity of its own. And that activity results in a distinctive expressive product." Moody v. NetChoice, LLC 603 U. S. ____ (2024) (algorithms being central to its opinion regarding third-party speech)

You may disagree with the Supreme Court's holding (and I think there's plenty of reason to do so, and some reasons not to), but the Third Circuit's holding is a natural consequence of the change in the treatment of speech that the Supreme Court forced upon the lower courts. The stark difference was even noticed by the judge.
Well yeah, that's the change I'm remarking on.

The line for what would qualify as third-party speech was drawn really far.

Like yes, if null wrote a post at the front of the site and signed it as coming from the site, that'd qualify as first-party speech, but otherwise, the precedent up until recently was that pretty much any sort of curation, algorithmic or otherwise, was not considered speech the site would be responsible for.

This is important because in order to have a website about any specialized topic, curation is necessary. Someone needs to delete off-topic content otherwise the site turns to pudding.

I find a lot of the discussion about what would be "fair" or "reasonable" amounts of curation to be sorta irrelevant or missing the forest for the trees.

Like talking about how pre-Musk Twitter used to selectively ban conservatives while pretending to be a mainstream, neutral discussion site is missing the point. Not that it isn't wrong, a site with a broad focus like Twitter rightfully should be politically neutral. (Or hell, maybe a lot of people would be sated if it just explicitly advertised itself as a lefty shithole, a la resetera.)

But the problem isn't whether or not the rules are fair; the real issue is that the process itself is the punishment.

I'm convinced that accelerated legal processes like the DMCA, as flawed as they are, are essential to running a site like the farms on a shoestring budget. If this interpretation sticks, basically everyone and their grandma with a KF thread would be able to file a lawsuit that would survive initial dismissal in court.

Even if the courts ultimately settle on a reasonable amount of liability for site providers, it'll only come after the farms has been bent over and fucked and tested those theories in court.

That's what I fear, anyway.
 
The issue is for jannies deciding which threads get to stay. By making those decisions, the idea is that the site takes responsibility for the content of those threads.
Under current SCOTUS precedent that may very well be correct, but I would disagree with this theory under Section 230. IIRC Null disagreed with it too in his last stream, but for different reasons than mine. Personally I do not believe that moving threads out of PG once they have reached a certain level of completeness counts as speech of the platform, though again, that may not be the case under current SCOTUS caselaw.
Well yeah, that's the change I'm remarking on.
Oh, sorry, my bad. I must have misunderstood you.
The line for what would qualify as third-party speech was drawn really far.
I completely agree. "All algorithms are speech" is an unworkable rule, just as "all curation is speech" is. I am willing to entertain that some curation could be speech, but certainly not all or even most of it. For example, it is illogical to hold that a moderator removing illegal content is speech rather than compliance with the law. Similarly, complying with the ruleset, even by a human, should not count as speech (though I suppose it could depend on the specific rules), provided those rules are not viewpoint-based. Obviously, if a pro-Ukraine forum were to make a rule banning pro-Russian opinions, complying with that would be speech, but a rule forbidding one-word posts would not. There's an interesting question on whether or not a rule banning "low effort" posts would count as speech. I could see it either way.
Even if the courts ultimately settle on a reasonable amount of liability for site providers, it'll only come after the farms has been bent over and fucked and tested those theories in court.

That's what I fear, anyway.
Hopefully it won't turn out so grim, but it is a well-founded fear.
 
The issue is for jannies deciding which threads get to stay. By making those decisions, the idea is that the site takes responsibility for the content of those threads.
Why's it the site, and not the jannies that take responsibility?
If I janny my own website, that's still on me, not the webhost; they provided the means of publishing but played no part (even though they technically could) in the editorial process.
If I janny my own Facebook page, that's on me; Facebook provided the means of publishing but played no part (even though they technically could) in the editorial process.
So if a janny mops their own board, wouldn't the logic be the same?

Of course, this doesn't apply if it was done under the instruction of the site operator (a news site hired an editor).

That's why I think whether the defendant should have known their actions would lead to the issue at hand has to play into the equation somehow, even though where that line sits probably hasn't been tested in court yet.
I completely agree. "All algorithms are speech" is an unworkable rule, just as "all curation is speech" is. I am willing to entertain that some curation could be speech, but certainly not all or even most of it. For example, it is illogical to hold that a moderator removing illegal content is speech rather than compliance with the law. Similarly, complying with the ruleset, even by a human, should not count as speech (though I suppose it could depend on the specific rules), provided those rules are not viewpoint-based. Obviously, if a pro-Ukraine forum were to make a rule banning pro-Russian opinions, complying with that would be speech, but a rule forbidding one-word posts would not. There's an interesting question on whether or not a rule banning "low effort" posts would count as speech. I could see it either way.
I agree on this, but would consider, in addition to it being viewpoint-based, the relevance to the issue at hand.
For example, in a case about libel against a certain company's products, a janny with a hateboner against animeposters shouldn't be as liable as one who skews the discussion by requiring posts defending said products to meet a higher burden of proof, lest they be deleted as "low effort".

So the broad interpretation of speech can still exist, since if it's not something a reasonable person would expect to cause the issue at hand, it shouldn't be considered as them putting their finger on the scales.
 
Why's it the site, and not the jannies that take responsibility?
If I janny my own website, that's still on me, not the webhost; they provided the means of publishing but played no part (even though they technically could) in the editorial process.
If I janny my own Facebook page, that's on me; Facebook provided the means of publishing but played no part (even though they technically could) in the editorial process.
So if a janny mops their own board, wouldn't the logic be the same?

Of course, this doesn't apply if it was done under the instruction of the site operator (a news site hired an editor).

That's why I think whether the defendant should have known their actions would lead to the issue at hand has to play into the equation somehow, even though where that line sits probably hasn't been tested in court yet.
You're probably right.

By "the site", I just mean some aspect of the administration, whether that's jannies or if done under the direction of the site owner, the site owner, or some combination of the two.

That being said, certain people want to push for a very expansive understanding of responsibility here. They want to interpret Facebook permitting material to remain up as some kind of endorsement. So their hypothetical argument would be "well, Facebook had the opportunity to take down this libelous material but chose not to, ergo they must endorse it".

They don't want to afford Facebook (and by extension the farms) any kind of ambiguity here.

The old interpretation of section 230 was very strict about this; all posters own their own content, hosting is not endorsement. But as these attacks on 230 mount, we see court cases chipping away at that, unfortunately. Hopefully it doesn't get that far.
 
the big happening threads on the front page are curated and not protected by 230. Jannies need to be careful with their editorialization because of that fact. josh talked about it on a MATI last year.
This was sort of why the farms did not feature that happening of Pat Tomlinson getting swatted, with the video of him yelling at the police. It could be argued that featuring it meant the Farms condoned a criminal activity (the swatting).
 
Creating algorithms that are in charge of what content gets promoted, deleted, and otherwise editorializing the content has been held by SCOTUS to be first-party speech. It follows then that this is not covered by 230's liability immunity for third parties.
So any search engine can be prosecuted criminally if it returns any results that might be used to commit a crime, since all of them use algorithms and all of them rank content, promoting some and demoting others, even if for the purposes of reducing spam or not returning results with "hate speech" or porn or whatever.

Unless they're deliberately promoting suicide videos, I don't see how that's their speech, especially if the results you get are user-directed.
 
Tbh that's how I read the decision: the issue isn't the video, but that TikTok pushed it in front of the kid. It's widely known that TikTok interferes with what you see, using their 'heating' tool to make things artificially popular and to amplify trends or creators they consider desirable, meaning it stopped being just hosting 3rd party content and started being TikTok's editorialised content. That was bolstered by the other case listed, where the algo timeline of content was considered 'speech' of the platform: not just hosting 3rd party content, but specifically picking and choosing what to show based on what the platform desires, and so not eligible for Section 230 protection.

Put those together and I can see the argument being: "you pick what is popular on your platform by interfering via heating; your algorithm then, after your interference, pushed dangerous content in front of the kid, which she died trying to imitate; pushing specific content at people doesn't fall under moderation, so no Section 230 protection for you."
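To make the 'heating' point concrete, here's a purely hypothetical sketch (none of these names or fields come from TikTok's actual system): an organic, user-driven engagement score gets multiplied by a boost the platform hands out to creators or trends it wants amplified. The boost table is the part that looks like the platform's own editorial choice rather than user activity.

def feed_score(video: dict, heating_table: dict[str, float]) -> float:
    # User-driven signal: what viewers actually watched and liked.
    organic = video["watch_time"] * 0.6 + video["likes"] * 0.4
    # Platform-chosen multiplier: the operator decides who gets "heated".
    boost = heating_table.get(video["creator"], 1.0)
    return organic * boost

videos = [
    {"creator": "creator_a", "watch_time": 120.0, "likes": 40},
    {"creator": "creator_b", "watch_time": 300.0, "likes": 90},
]
heating = {"creator_a": 5.0}  # operator pushes creator_a regardless of engagement

feed = sorted(videos, key=lambda v: feed_score(v, heating), reverse=True)
print([v["creator"] for v in feed])  # creator_a jumps the queue despite weaker organic numbers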

It could be the first step in de-enshittifying normie social media, and if that leads to the death of all those shitty algos pushing DEI shit because janny trannies lobotomise them to push that garbage instead of what people want, I don't see that as a bad thing lol
This is pretty much what I was trying to say, just put better. God willing youtube will have to show the niggercattle what they're subbed to and what's actually trending instead of trannies and kangs or be responsible for what gasthejews1488 is posting. :semperfidelis:

The issue is for jannies deciding which threads get to stay. By making those decisions, the idea is that the site takes responsibility for the content of those threads.
There is basically an explicit janny carve out:

"No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or" (emphasis mine)

Jannies acting in good faith and removing essentially anything are fine, but an algorithm trying to promote "trans influencers" specifically to 12-year-olds will hopefully make them be considered a publisher now. Also, for the record, I always thought a site trying to promote or suppress a given viewpoint obviously made them a publisher, but we just live in clown world. If anyone knows about a case where that was tested I would be very interested to read it.
 
But TikTok is actively selecting and promoting videos they are certain you would watch. It's different from when you search up the video yourself. Wouldn't be the first time this theory was used to establish liability. New York recently entertained a similar theory
The YouTube algorithm is very much in play even if you look up videos yourself. The question is whether the algorithm "knew" you were suicidal or not (by "knew" I mean whether the machine learning inferred it, intentionally or not). I can imagine there might be a problem if TikTok algorithms start seeing "ah ha, this user likes Roblox videos and is a 13-year-old boy, better suggest him LGBT/grooming shit".
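Purely hypothetical sketch of what that would look like (made-up names, nothing from TikTok or YouTube): the recommender infers attributes from watch history, and the "did it know" question is whether an inference like likely_minor exists at all and whether anything gates on it.

RISKY_CATEGORIES = {"dangerous_challenges"}

def infer_profile(watch_history: list[str]) -> dict:
    # Crude stand-in for whatever ML the platform actually runs.
    return {
        "likely_minor": "roblox" in watch_history,
        "interests": set(watch_history),
    }

def pick_categories(profile: dict, catalog: dict[str, set[str]], gate_for_minors: bool) -> list[str]:
    # Decide which categories get pushed at this viewer.
    picked = [name for name, tags in catalog.items() if profile["interests"] & tags]
    if gate_for_minors and profile["likely_minor"]:
        # The liability argument is about platforms that skip this step.
        picked = [name for name in picked if name not in RISKY_CATEGORIES]
    return picked

profile = infer_profile(["roblox", "minecraft"])
catalog = {"gaming": {"roblox", "minecraft"}, "dangerous_challenges": {"roblox"}}
print(pick_categories(profile, catalog, gate_for_minors=False))  # pushes both categories
print(pick_categories(profile, catalog, gate_for_minors=True))   # drops the risky one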
 
There is basically an explicit janny carve out:

"No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or" (emphasis mine)

Jannies acting in good faith and removing essentially anything are fine, but an algorithm trying to promote "trans influencers" specifically to 12-year-olds will hopefully make them be considered a publisher now. Also, for the record, I always thought a site trying to promote or suppress a given viewpoint obviously made them a publisher, but we just live in clown world. If anyone knows about a case where that was tested I would be very interested to read it.
Or null picking posts for the Featured Content panel on the front page would make him a publisher and therefore subject to defamation lawsuits for the contents of threads.

Or even just promoting threads from Prospering Grounds.

You can't try to wield liability in this manner as a weapon against some of the most powerful, well funded public companies and not expect it'll be wielded much more effectively as a weapon against one of the most (unfairly) maligned, unsympathetic websites.

I don't like the concept of treating interactive computer services as publications because they're really much more like communities. And being able to curate the membership of your community shouldn't come with the same liabilities that publishing a newspaper would.

It should be legal to establish a lefty shithole discussion forum without directly, personally taking on liability for every stupid thing your users say. Because if it's legal to do that, it's legal to run the farms.
 
Also, for the record, I always thought a site trying to promote or suppress a given viewpoint obviously made them a publisher, but we just live in clown world. If anyone knows about a case where that was tested I would be very interested to read it.
It was called Stratton Oakmont v. Prodigy. It found exactly that. Congress, in a rare burst of wisdom, realized this insanely idiotic idea would absolutely kill the Internet and passed a law overruling it.

I disagree with interpreting a SCOTUS decision, one that held jannying to be speech by the janny (or the corporation) and worthy of First Amendment protection, as somehow overturning essentially the entirety of previous Section 230 immunity, especially since 230 was only mentioned in passing and in a couple of footnotes, and the Court showed no indication, express or implied, that it was doing away with literally thousands of previous 230 cases reaching the opposite conclusion.

Now, there might be carveouts under COPPA specifically for material targeted to or presenting a special danger to minor users, possibly creating an obligation to remove such material if it actually did pose a threat to life or constitute pornography or other exploitative conduct, but that's really not what this case is about.

At least the cancer is currently limited to the Third Circuit. We'll see if other courts make this same blunder.
 
In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.
Remember how parents used to ask if all your friends jumped off a bridge, would you?

Yeah.
 