US TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has drifted far from Congress's original intent when it passed the CDA in 1996, which was not to "create a lawless no-man's land" of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, concerning content moderation laws passed in Florida and Texas, SCOTUS held that a platform's algorithmic curation reflects editorial judgment: it is a compilation of third-party speech assembled in the manner the platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


The analogy remains sound, since there was no point at which a person reviewed and tacitly approved of the third-party speech being promoted. The whole point of Section 230 is that it's an undue burden to expect a platform to individually review every item of third-party speech it platforms, AND you don't incur that burden even if you elect to review and moderate some content.
Unless the Texas case is misrepresented, they don't have to. They are only responsible for content they curate and select, because that is when it becomes their speech. They don't have to review all third-party speech, just as they don't now; however, once they start picking and choosing what to put forward, it's their speech and they are responsible. That was the point behind the Texas ruling: you can't force someone not to delete something, because that would make something they don't want part of their expressive speech. The judges also specifically called out that if the kid had found the videos through the search page, it likely would have stayed third-party speech and S230 could have applied, though if there is nuance there I would be happy to hear it.

Again, you are alleging facts not included in the case. If it were alleged that an employee manually reviewed and approved of the KYS challenge, or that TikTok had a company policy of promoting KYS challenges, that would already be outside 230 protection...

It might just be clunky wording in the PDF, but as the following reads, TikTok agreed it knew about the videos, that they were spreading, that several children had already died, and that even after multiple deaths it failed to remove them.
No one claims the videos Nylah viewed were created by TikTok; all agree they were produced and posted by other TikTok subscribers. But by the time Nylah viewed these videos, TikTok knew that: 1) “the deadly Blackout Challenge was spreading through its app,” 2) “its algorithm was specifically feeding the Blackout Challenge to children,” and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31–32. Yet TikTok “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their [For You Pages].” App. 32–33. Instead, TikTok continued to recommend these videos to children like Nylah.

B.

Following her daughter’s death, Tawainna Anderson sued TikTok and its parent company, ByteDance, Inc. Anderson seeks to hold TikTok liable for 1) hosting the Blackout Challenge videos on its platform, 2) continuing to distribute the videos after it learned about the videos and the deaths that followed, and 3) recommending the videos to Nylah after TikTok knew the videos were likely to cause harm. TikTok moved to dismiss, arguing that Anderson sought to hold TikTok liable for acts completely immunized by § 230(c)(1). The District Court agreed.

Based on that, it sounds like TikTok agreed it knew about the challenge and failed to stop it trending (power it definitely had) or otherwise moderate it so the videos wouldn't appear on the 'For You' feed, and instead promoted them to the kid based on whatever metrics it uses for that feed.

but that's not the allegation. Tiktok's algorithm is not a person. TikTok's algorithm can't choose or decide anything, something the plaintiff clearly knows because they're not trying to sue a fucking computer program.

The algorithm was designed and built by TikTok to do what TikTok wants it to do, so anything it does and any actions it takes are TikTok's responsibility. They can't disclaim liability just because 'muh computer,' any more than Boeing could when its fuckup writing software for the 737 MAX killed people, or AECL could when the Therac-25 ended up irradiating patients due to software errors. TikTok didn't claim it missed anything or that its algorithm failed; it claimed it didn't have to care because of S230 immunity.
 
The whole point of section 230 is that it's an undue burden to expect a platform to individually review every item of third-party speech it platforms, AND you don't incur that burden even if you elect to review and moderate some content.
This is where TikTok, Facebook, et al. are different from what 230 was designed for. Their feeds aren't reviewing and moderating; they are (self-admittedly) editorializing and curating. While those words can be used synonymously, there is a distinction. Reviewing and moderating concerns what's allowed to exist on the site in the first place. Editorializing and curating feeds is determining what they choose to promote to users as its own separate product.

The best analogy I can think of is the difference between collecting a bunch of newspapers versus snipping words out of them and gluing them to a piece of paper to create a message. Just collecting them, and tossing ones that are duplicates or that you aren't interested in, is protected. Chopping them up to create your own messages is not.
 
Arguably Null is already responsible for any potential civil negligence in featuring threads, since the act of featuring, as well as the tagline, is uncontroversially his own first-party speech. This ruling wouldn't be necessary for that to be true and wouldn't expand the liability there. The highlight feature, on the other hand, an automated feature promoting certain posts, could be liable under the standards presented here and would be impossible to safely employ if so, since there's no way to predict when a highlight could be "negligent".
That's contrary to the vast weight of authority. The leading case is Blumenthal v. Drudge, 992 F. Supp. 44 (D.D.C. 1998). In that case, Matt Drudge was sued over third-party content on his website, The Drudge Report, which largely consisted of links to that content with descriptive headlines. He was sued by Blumenthal because the linked content was alleged to be defamatory.

He was found immune under 230 despite having editorial content of his own when linking. That's all Null is doing. If that alone stripped immunity, it would completely eviscerate the whole purpose of the section. There are only extremely narrow exceptions, such as where the descriptive headline itself contained defamatory content or mischaracterized the article in such a way as to give it a defamatory meaning the original did not have.

The ISP under 230 can even edit third-party content and publish the altered content, so long as the edits don't add a defamatory meaning. Batzel v. Smith, 333 F.3d 1018, 1033 (9th Cir. 2003). (I'll note the Ninth Circuit has a few outlier cases taking a narrower approach, like Roommates.com, which seems to indicate that content solicited from the third party in the form of a questionnaire and then substantially reformatted could be subject to publisher liability.)

That said, Drudge so far as I know remains good law, although numerous cases distinguish it. To a vast extent, 230 immunity has up until now shielded providers from liability for third-party content in situations that would subject a traditional dead-tree publisher or television station to liability, and, for that matter, from the vast majority of civil tort liability, state or federal, for such content.

So according to the traditional view, Null would be liable for anything he personally said that was defamatory (such as the description of a featured thread), if it originated with him and was not simply an accurate description of the third-party content, or if he edited that content in a manner that added defamatory meaning. For example, suppose there were a story about someone having been arrested in the past for a crime, the story mentioned they were subsequently exonerated, and he edited that out.

It possibly wouldn't always have to be that blatant, and it's probably best practice to pass on third-party content as unaltered as possible. But unless the exceptions are exceedingly narrow, you once again open the floodgates to endless argument, at great expense to both litigants and the taxpayer-funded courts, in every single case over whether the immunity applies at all, thereby eviscerating the whole point of the statute.

The whole point of an immunity is to terminate litigation as quickly and inexpensively as possible at a preliminary stage, not to litigate the immunity itself endlessly.
 
Then it should be limited specifically to social media that allows minors.

And it should be Congress doing such legislation, not activist judges legislating from the bench. If SCOTUS had meant basically to overturn 230 protection they would have said so.
Why would congress make a law addressing a relevant issue when they could just continue fucking grifting endlessly? That's seriously been their pattern since 2004.
 
Could platforms adapt to a precedent that makes curation incompatible with 230 protection by outsourcing the moderation? I.e.:

1. Anyone can post anything on the platform, at the platform level the only things culled will be spam, illegal content, and anything that goes a certain length of time with no replies/engagement.
2. Any user of the platform may block certain posted content and users, making them invisible as they use the platform. Furthermore, they may publish their list of blocked things and people as a "moderation feed."
3. Any user may subscribe to another user's moderation feed, effectively trusting them to curate their experience of the platform.

This allows users to get the benefit of moderation without the platform itself exercising any control over users' posted content save for things that are illegal or plainly abusive.
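Since this describes a hypothetical architecture rather than any existing platform, here's a rough Python sketch of how the subscription side might work; every name here (Post, ModerationFeed, visible_posts) is invented for illustration. The platform serves an unfiltered firehose, and each client hides whatever the user's chosen moderation feeds block, so the filtering decision never belongs to the platform itself.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    author: str
    text: str

@dataclass
class ModerationFeed:
    # A user-published list of things to hide: this is the "moderation feed".
    publisher: str
    blocked_authors: set[str] = field(default_factory=set)
    blocked_posts: set[str] = field(default_factory=set)

def visible_posts(firehose: list[Post], subscriptions: list[ModerationFeed]) -> list[Post]:
    # The platform serves everything; filtering happens per user, driven only by
    # the feeds that user has chosen to subscribe to.
    blocked_authors = set().union(*(f.blocked_authors for f in subscriptions))
    blocked_posts = set().union(*(f.blocked_posts for f in subscriptions))
    return [p for p in firehose
            if p.author not in blocked_authors and p.post_id not in blocked_posts]

# Example: subscribing to another user's feed hides what that user blocked.
feed = ModerationFeed(publisher="trusted_user", blocked_authors={"spammer42"})
timeline = [Post("1", "spammer42", "buy pills"), Post("2", "alice", "hello")]
print([p.post_id for p in visible_posts(timeline, [feed])])  # -> ['2']
```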
 
2. Any user of the platform may block certain posted content and users, making them invisible as they use the platform. Furthermore, they may publish their list of blocked things and people as a "moderation feed."
3. Any user may subscribe to another user's moderation feed, effectively trusting them to curate their experience of the platform.
Then they'd be doing exactly what TikTok was doing and, if this decision is actually followed as setting a broad precedent, the maintainer of the "moderation feed" could be treated as liable for the third party content. After all, the inevitable impact of blocking some content is to promote what isn't blocked, just as promoting content demotes what isn't promoted by burying it under what's on top.

Whether or not you call it curation, which one you're doing is simply a matter of semantics. So maybe this leaves such an approach open, but now you'll have a different schmuck getting sued to test that.

I suspect you're going to see a lot of litigation tourists shopping for the Third Circuit after this, and it will probably suck enough that the next circuit panel to look at it may say fuck this, this hasn't worked, and either overrule this decision or significantly narrow its scope.

If that doesn't happen, this could conceivably remain a glitch. Among the thousands (perhaps tens of thousands by now) of cases involving 230, there are a lot that are weird outliers, outright dumb, or otherwise wrong. So far none of them have had absolutely catastrophic results.

That said, it's rare for one of those outliers to come from a relatively major federal appeals court.
 
Then they'd be doing exactly what TikTok was doing and, if this decision is actually followed as setting a broad precedent, the maintainer of the "moderation feed" could be treated as liable for the third party content. After all, the inevitable impact of blocking some content is to promote what isn't blocked, just as promoting content demotes what isn't promoted by burying it under what's on top.
I can see your point: you could say that a mod feed publisher is encouraging someone to see something like the "Blackout Challenge" if they block content other than it, even if only in a really indirect way. However, my plan would alter the landscape around content liability for one major reason: a mod feed publisher, being just another user of a platform, is probably a much less appetizing target for lawsuits.

Developing and owning a major content platform requires a lot of money, hence platform owners could offer a payday for lawyers if they're found liable. If you make it about suing random users, you change the strategic calculus, and the optics of a lawyer going after some teenager or a little old lady because they -didn't- censor something are much worse than going after a corporation. The RIAA/MPAA got a lot of hate for their punitive measures against ordinary people who were downloading things.

Even so, a mod feed is effectively just a bunch of suggestions of content not to view, since users could switch or disable them easily. Would suggestions carry the same weight of responsibility as a platform owner who can decide whether a given piece of media is viewable at all?
 
In my view, this is retarded, because the judges confused algorithms that serve content users want with algorithms that actually know what the content is. The only "speech" most of these algorithms express is "we want to recommend the content that maximizes engagement (ad revenue)," and it could be argued it's against the First Amendment that "speech" based only on metrics is now restricted unless you also have "speech" about the meaning of the content.
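To make that distinction concrete, here's a toy Python sketch of the kind of content-blind ranking being described. It is not TikTok's actual system; every class name, metric, and weight below is invented. The point is that the ranker sees only engagement numbers, so it has no "speech" about what a video means, only about what keeps people watching.

```python
from dataclasses import dataclass

@dataclass
class VideoStats:
    # The ranker only ever sees numbers like these, never the video itself.
    video_id: str
    avg_watch_seconds: float
    like_rate: float   # likes per view
    share_rate: float  # shares per view

def engagement_score(v: VideoStats) -> float:
    # Hypothetical weights tuned for ad revenue; nothing here encodes what the video depicts.
    return 0.5 * v.avg_watch_seconds + 30.0 * v.like_rate + 60.0 * v.share_rate

def build_for_you(candidates: list[VideoStats], k: int = 10) -> list[str]:
    # Rank purely on predicted engagement and take the top k.
    return [v.video_id for v in sorted(candidates, key=engagement_score, reverse=True)[:k]]
```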
 