TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond Congress's original intent when it passed the CDA in 1996, which was not to "create a lawless no-man's land" free of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, related to content moderation laws passed in Florida and Texas, SCOTUS held that a platform's algorithmic curation reflects editorial judgments. Shwartz wrote that such a curated feed is a compilation of third-party speech assembled in the manner the platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


I think the impact will be limited to killing "recommended" or "for you" curation of feeds. Facebook will have to go back to chronological timelines and such. It will be harder for companies to deprioritize content, though not to remove it outright, if they can get to it at all. The "misinformation" disclaimers and similar labels would likely go too.
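To make the "curation" distinction concrete, here's a toy Python sketch. Everything in it is invented for illustration (the posts, the predicted_engagement score, the field names); it's not anyone's real ranking code, just the difference between relaying posts in order and picking what to surface:

from datetime import datetime

# Invented example posts; "predicted_engagement" stands in for whatever score
# a platform's recommendation model would assign.
posts = [
    {"id": 1, "posted_at": datetime(2024, 8, 28, 9, 0),  "predicted_engagement": 0.2},
    {"id": 2, "posted_at": datetime(2024, 8, 28, 10, 0), "predicted_engagement": 0.9},
    {"id": 3, "posted_at": datetime(2024, 8, 28, 11, 0), "predicted_engagement": 0.5},
]

# Chronological timeline: the site just relays third-party posts, newest first.
chronological = sorted(posts, key=lambda p: p["posted_at"], reverse=True)

# "For You" style feed: the site's own model decides what you see first.
# That ranking choice is the "editorial judgment" the court is talking about.
for_you = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["id"] for p in chronological])  # [3, 2, 1]
print([p["id"] for p in for_you])        # [2, 3, 1]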
I hope that's the only impact. That might actually do some good.
 
Facebook will have to go back to chronological timelines and such.
God I wish this would come true, it would make the site finally fucking usable again after like 10 years.
I doubt they'll be willing to nuke their precious algorithms though. Think it's more likely they'll go extremely hamfisted with the censorship to leave everything absolutely squeaky clean.
 
This is why defending freedom of speech almost always involves defending scoundrels engaging in reprehensible behavior, because outlawing it would also outlaw speech of value. They always start with the scoundrels first.
You can argue for scoundrels all you like, as one damn well should for free speech.

But dead kids are another order entirely.
 
Let's look at what the law actually says (for my own sake, mainly). Section 230(c)(1) reads: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
The important parts are "interactive computer service" and "provided by another information content provider".

When TikTok (or any of the big tech sites) "recommends" a list of content, the content is no longer coming from a third party. It is coming from the company itself.
The only situation where this matters is when someone claims the company is liable for the content of the posts, as if it were the publisher.

Promoting some kinds of content over others (which includes deleting some posts) raises the question of whether the company has become the publisher. There's historical precedent for people attempting this argument. Litigating it would probably bankrupt the farms unless some sort of miracle happens and Congress addresses this with a law that provides a cheap legal process (a la DMCA) while leaving the rest of the untouchability that 230 provides intact.

If section 230 gets savaged by retarded court precedents like this one (that is, it gets worse than this individual case, I'm not catastrophizing about this individual case necessarily, but it is a start to the process), then all these questions will need to be answered in court, by the farms, for thousands of dollars per suit.
Based on the law's wording, I no longer think my curation argument (or any curation argument) matters. Site owners are not required to keep posts up. A company does not violate the law by deleting posts it doesn't like. They have freedom of association also. The protection we all get from 230 is to not be liable for speech third parties make.
That's not what anyone's saying. No one's saying specifically deleting a post is the illegal act. In fact, nothing here is illegal. It's about liability, which is much more complicated than "illegal or legal".

The argument is that by demonstrating that a site is capable of moderating its content (has a moderation team and they occasionally delete some posts), then it has a choice of which content to moderate.

If a post ends up being defamatory, or even looks defamatory enough that someone wants to bring a suit and it survives immediate dismissal, the plaintiff will argue that the site's choice to keep the content up (the kind of choice it has demonstrably made before) makes the site a publisher of the content in some fashion.

Like, you're aware newspapers are liable for the content of the articles they publish, right? Usually when a newspaper commits defamation, the paper is the only party named in the suit. Sometimes the plaintiff tacks on the author of the article, but not always.

Section 230 says that cannot happen with discussion forums. And there are no exceptions in its text. Which means there's no exceptions to argue over in court. There's no "... as long as they don't editorialize" or anything like that.

That means that any sort of selective curation of content, like deleting threads, cannot be used to argue in court that a site has shifted over to being a publisher. And being dragged into court to argue that point, even if you're victorious, is expensive as shit.

Because... interactive computer services are never publishers. That's the powerful immunity of section 230.

Chesterton's fence: think about why it exists before tearing it down.
It's complicated by the fact that Facebook et al. claim ownership of all the content posted on their sites. Therefore they are the publisher and the content no longer comes from a third party.
This is an urban legend. No major social media website claims to own the content you post.
 
Because... interactive computer services are never publishers. That's the powerful immunity of section 230.
There is a question of whether 230 was designed to account for bots running a website, because that's what's happening. I'll admit last night I was a bit upset making my arguments, but this is where I'll lay my cards down. Section 230 was designed in the era of message boards, web 1.0, where mods had to do all admin functions. Social media barely existed. The bots that now run the internet often act automatically but with full administrative privileges. Someone has to program those bots, specifically the owner. Either the bot is an admin or it isn't.
 
You can argue for scoundrels all you like, as one damn well should for free speech.

But dead kids are another order entirely.
Nobody is arguing for dead kids, merely that TikTok is not necessarily responsible for dead kids any more than, say, the manufacturer of the shoelaces they hang themselves with. Were the shoelaces defective because they were sturdy enough to strangle a child? Though it's not the typical use, being wrapped around a child's neck is an easily foreseeable eventuality for shoelaces.

Indeed it could be argued this outcome was more anticipable for the shoelace manufacturer than TikTok. There are practically infinite ways a video might convince a child to kill themselves and pretty much only one way a shoelace can kill a child, yet the shoelace manufacturer didn't think to plan for this simple predictable outcome? We need simply mandate that all shoelaces be made from tissue paper and you could rest peacefully knowing no child would ever again kill themselves with shoelaces. What, you object? Do you prioritize your own convenience lacing shoes over the LIFE of a CHILD?
 
There is a question of whether 230 was designed to account for bots running a website, because that's what's happening. I'll admit last night I was a bit upset making my arguments, but this is where I'll lay my cards down. Section 230 was designed in the era of message boards, web 1.0, where mods had to do all admin functions. Social media barely existed. The bots that now run the internet often act automatically but with full administrative privileges. Someone has to program those bots, specifically the owner. Either the bot is an admin or it isn't.
Perhaps.

And if that changes people's stance on this issue, fair enough. But then that's something for congress to fix, not for a judge to suddenly read a new meaning in their crystal ball.

I don't understand why the automation changes the discussion though. I do believe the bots are admins and their acts are acts of the site owner themselves. Maybe without intent, but still.

Like from what I can tell, if section 230 holds, then it really doesn't matter if an admin specifically promoted the video manually or if the bot did it. The distinction is irrelevant.
Okay, so they "license" the content. They profit from it. That destroys the argument that the content is "third party" as they are republishing the content.
It's a legal technicality that their lawyers deemed necessary to permit them to operate the site. To display the content in things like apps and whatnot.

It's no different from the farms. The farms just doesn't have a team of lawyers that, out of an abundance of caution, warned null that such a clause would be necessary.

It's a red herring to these section 230 arguments. Unrelated.
 
Perhaps.

And if that changes people's stance on this issue, fair enough. But then that's something for congress to fix, not for a judge to suddenly read a new meaning in their crystal ball.

Yes, the bots are admins and act as if the site owner themselves are doing that moderation. That's not really in dispute.

How does that change the discussion?

Like from what I can tell, if section 230 holds, then it really doesn't matter if an admin specifically promoted the video manually or if the bot did it. The distinction is irrelevant.
The distinction I see is that a machine is making administrative decisions. Not a human. A machine is only as smart as the operator. It is an extension of the operator. A human is not. A human has free will and can make judgment calls. The responsibility I see at that point lies with the operator.

Where I can agree is how this is decided. Congress should update it. A judge should interpret the law as written. Now there are inconsistencies. And now SCOTUS likely will have to sort this out.
 
Nobody is arguing for dead kids, merely that TikTok is not necessarily responsible for dead kids any more than, say, the manufacturer of the shoelaces they hang themselves with. Were the shoelaces defective because they were sturdy enough to strangle a child? Though it's not the typical use, being wrapped around a child's neck is an easily foreseeable eventuality for shoelaces.

Indeed it could be argued this outcome was more anticipable for the shoelace manufacturer than TikTok. There are practically infinite ways a video might convince a child to kill themselves and pretty much only one way a shoelace can kill a child, yet the shoelace manufacturer didn't think to plan for this simple predictable outcome? We need simply mandate that all shoelaces be made from tissue paper and you could rest peacefully knowing no child would ever again kill themselves with shoelaces. What, you object? Do you prioritize your own convenience lacing shoes over the LIFE of a CHILD?
I'm not saying that they should, but unfortunately in the American public discourse, dead kids are an extremely effective rhetorical cudgel (think of the CHILDREN!)

Except when it comes to mass shootings, yanks appear to be content with their schools being target ranges
 
The distinction I see is that a machine is making administrative decisions. Not a human. A machine is only as smart as the operator. It is an extension of the operator. A human is not. A human has free will and can make judgment calls. The responsibility I see at that point lies with the operator.
Sure, but do you think that makes the difference? I still think the real issue is whether it's OK to exercise discretion, manual or mechanical, over stuff on your site without being personally responsible.

I view it as an extension of freedom of association.

I can rent a venue for like-minded people, invite people to speak, reject other people, but if someone gets up on stage and starts claiming Tony Reed enjoys urethral sounding, he can't sue me just because I decided to exercise editorial discretion.

I don't see why people are making the algorithm thing a big issue, when I think they'd be just as mad if humans were doing it.

Hell, I think that's what got people more mad and got all the boomers excited about tearing down 230: the possibility that Jack Dorsey Twitter was having an army of pajeets manually delete right wing content. "aha, he can't hide behind the algorithm, doing it manually means he's deliberately exercising editorial discretion!"

We have to answer the fundamental question: should it be OK to run a discussion site, and to moderate the content, possibly mechanically, possibly with an extreme political bias, without directly incurring personal liability for your user's posts? Like what difference does the algorithm make here?

I think the kid harming themselves is also a red herring that people are using as a backdoor to attack section 230. By that argument, all of the content on Sanctioned Suicide should be the responsibility of whats-his-name. That's a whole site deliberately about killing oneself.
 
Sure, but do you think that makes the difference? I still think the real issue is whether it's OK to exercise discretion, manual or mechanical, over stuff on your site without being personally responsible.

I view it as an extension of freedom of association.

I can rent a venue for like-minded people, invite people to speak, reject other people, but if someone gets up on stage and starts claiming Tony Reed enjoys urethral sounding, he can't sue me just because I decided to exercise editorial discretion.

I don't see why people are making the algorithm thing a big issue, when I think they'd be just as mad if humans were doing it.

Hell, I think that's what got people more mad and got all the boomers excited about tearing down 230: the possibility that Jack Dorsey Twitter was having an army of pajeets manually delete right wing content. "aha, he can't hide behind the algorithm, doing it manually means he's deliberately exercising editorial discretion!"

We have to answer the fundamental question: should it be OK to run a discussion site, and to moderate the content, possibly mechanically, possibly with an extreme political bias, without directly incurring personal liability for your user's posts? Like what difference does the algorithm make here?

I think the kid harming themselves is also a red herring that people are using as a backdoor to attack section 230. By that argument, all of the content on Sanctioned Suicide should be the responsibility of whats-his-name. That's a whole site deliberately about killing oneself.
I see algorithms as what has harmed the internet wholesale. There is no discretion. You try to appeal your ban or whatever to a bot and you get auto-rejected. There is no discussion. As a human, you are at the mercy of a machine. That is a tragedy, and it allows companies to shift blame onto the bots they built to do the things they designed them to do. The internet has too many bots on it as is.

If you, in your example, want to boot a guy talking about Tony Reed enjoying urethral sounding, fine. But it should be a person doing it. Not an automaton. A machine answers to no one. It is nothing more than a rabid dog. Whereas with a person running a forum, you can talk to the admin. He is real. He should have protections. A robot is not a person. Are we granting toasters personhood now? Either you do things by the book, get jannies and actually moderate, or be lazy and use bots that have no clue how to deal with humans.
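To put what I mean in concrete terms, here's a rough Python sketch. It's purely hypothetical (the BANNED_PHRASES rule, the function, and the log format are all invented), but it shows why I say the bot is just an extension of the operator:

# Hypothetical auto-mod bot: code the operator wrote, running with the
# operator's own admin powers. Nothing here is any real platform's system.
BANNED_PHRASES = {"blackout challenge"}

def auto_moderate(post, admin_log):
    """Delete a post automatically if it trips a banned-phrase rule.
    The action gets logged as the operator's action, because the bot has no
    judgment of its own -- it only does what it was programmed to do."""
    if any(phrase in post["text"].lower() for phrase in BANNED_PHRASES):
        admin_log.append({"actor": "site operator (via bot)", "action": "delete", "post_id": post["id"]})
        return True   # removed without a human ever looking at it
    return False      # left up, also without a human ever looking at it

log = []
auto_moderate({"id": 42, "text": "Try the Blackout Challenge!"}, log)
print(log)  # the "decision" traces straight back to whoever wrote the rule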
 
It's a legal technicality that their lawyers deemed necessary to permit them to operate the site. To display the content in things like apps and whatnot.
I strongly disagree. It prevents them from being sued by their users for profiting from user-copyrighted content. It does not spare them from liability for republishing libel and the like. Simply showing a user's post is protected by section 230, as the original poster is the publisher of the content. When Facebook adds things like misinformation labels, they are republishing the original work with modifications. This is why I believe the ruling will have limited scope and will not gut Section 230 protections.

Here is the Facebook terms of service on the licensing:
However, to provide our services we need you to give us some legal permissions (known as a "license") to use this content. This is solely for the purposes of providing and improving our Products and services as described in Section 1 above.

Specifically, when you share, post, or upload content that is covered by intellectual property rights on or in connection with our Products, you grant us a non-exclusive, transferable, sub-licensable, royalty-free, and worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content (consistent with your privacy and application settings). This means, for example, that if you share a photo on Facebook, you give us permission to store, copy, and share it with others (again, consistent with your settings) such as Meta Products or service providers that support those products and services. This license will end when your content is deleted from our systems.

Here is the legal definition of an "information service":
The term “information service” means the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications, and includes electronic publishing, but does not include any use of any such capability for the management, control, or operation of a telecommunications system or the management of a telecommunications service.

Here is the definition of "interactive computer service":
(2) Interactive computer service
The term “interactive computer service” means any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.

This means a website has protection for hosting or allowing computer access to content. It does not provide protection for the advertising or endorsement of said content.

When a company adds new content like disclaimers to the original content, they are transforming the work. The transformed content they put out is now the company's work, licensed for transformation by the original provider. They are now liable for the content as it is no longer the original third-party content.

When a company recommends personalized content to individual users, they are doing so to drive engagement of their product. They are no longer simply providing access, they are now endorsing and advertising the content they recommend. They become liable for it.

Sure, but do you think that makes the difference? I still think the real issue is whether it's OK to exercise discretion, manual or mechanical, over stuff on your site without being personally responsible.
Directly from section 230:
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

I view it as an extension of freedom of association.
I fully agree with this.
 
I see algorithms as what has harmed the internet wholesale. There is no discretion. You try to appeal your ban or whatever to a bot and you get auto-rejected. There is no discussion. As a human, you are at the mercy of a machine. That is a tragedy, and it allows companies to shift blame onto the bots they built to do the things they designed them to do. The internet has too many bots on it as is.

If you, in your example, want to boot a guy talking about Tony Reed enjoying urethral sounding, fine. But it should be a person doing it. Not an automaton. A machine answers to no one. It is nothing more than a rabid dog. Whereas with a person running a forum, you can talk to the admin. He is real. He should have protections. A robot is not a person. Are we granting toasters personhood now? Either you do things by the book, get jannies and actually moderate, or be lazy and use bots that have no clue how to deal with humans.
That's fair enough. Though I'd say running any large site at scale would become massively expensive if every moderation action was required to be backed by a human moderator.

Someone could argue that giant behemoths such as Twitter and Facebook simply shouldn't exist if they can't afford to staff it properly. And that's a fair perspective.

Perhaps if a business wanted to use automated moderation to some extent, they could be required to provide a legal human contact for appeals? A la the DMCA appeals process?

Honestly, I think the DMCA unfairly gets shit on too much. It's a clunky, poorly thought out process, but I do appreciate what they were trying to do. The internet had the potential to be a massively interesting creative engine, bringing a bunch of people together to make creative works. But if you want to preserve pre-internet copyright principles, you'll want to have a mechanism to have people assert their rights without slowing everything down.

The whole DMCA takedown process was a decent (if flawed) attempt at balancing those concerns.
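For what it's worth, here's a rough sketch of what that "human contact for appeals" idea could look like. Everything in it is made up (the AppealQueue class, the designated_agent address, the fields), loosely modeled on the DMCA's designated-agent / counter-notice flow rather than on anything any site actually runs:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Appeal:
    post_id: int
    reason: str
    assigned_human: Optional[str] = None   # must be a real person before closure
    resolution: Optional[str] = None

class AppealQueue:
    """Automated takedowns stay allowed, but every one of them can generate an
    appeal that is routed to a named human contact, never back to a bot."""
    def __init__(self, designated_agent):
        self.designated_agent = designated_agent   # the legally named human contact
        self.open = []

    def file(self, post_id, reason):
        appeal = Appeal(post_id=post_id, reason=reason, assigned_human=self.designated_agent)
        self.open.append(appeal)
        return appeal

    def resolve(self, appeal, decision):
        # Only the assigned human is supposed to call this; a bot never does.
        appeal.resolution = decision
        self.open.remove(appeal)

queue = AppealQueue(designated_agent="appeals@example-site.invalid")
a = queue.file(post_id=42, reason="automated removal disputed by the poster")
queue.resolve(a, decision="reinstated after human review")
print(a)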
When Facebook adds things like misinformation labels, they are republishing the original work with modifications.
When a company adds new content like disclaimers to the original content, they are transforming the work. The transformed content they put out is now the company's work, licensed for transformation by the original provider. They are now liable for the content as it is no longer the original third-party content.

When a company recommends personalized content to individual users, they are doing so to drive engagement of their product. They are no longer simply providing access, they are now endorsing and advertising the content they recommend. They become liable for it.
I'm not aware of any case where commentary distinctively separated from the original work is regarded as a transformation of the original work for liability purposes. As in, they're now responsible for the content of that original work as publishers.

Like, a blogger refuting someone else's post, carefully quoting it (and labeling the quotes as quotations), wouldn't be responsible for the quoted portions, let alone the unmolested original post.
 
That's fair enough. Though I'd say running any large site at scale would become massively expensive if every moderation action was required to be backed by a human moderator.

Someone could argue that giant behemoths such as Twitter and Facebook simply shouldn't exist if they can't afford to staff it properly. And that's a fair perspective.

Perhaps if a business wanted to use automated moderation to some extent, they could be required to provide a legal human contact for appeals? A la the DMCA appeals process?

Honestly, I think the DMCA unfairly gets shit on too much. It's a clunky, poorly thought out process, but I do appreciate what they were trying to do. The internet had the potential to be a massively interesting creative engine, bringing a bunch of people together to make creative works. But if you want to preserve pre-internet copyright principles, you'll want to have a mechanism to have people assert their rights without slowing everything down.

The whole DMCA takedown process was a decent (if flawed) attempt at balancing those concerns.
I fully admit it would cost a lot for the bigger sites, and your solution is valid. If there were something akin to the DMCA that forced human interaction, bringing into the equation a kind of "justice" a machine simply cannot give, that would be a fair compromise. Currently, however, it feels like there is one dude asleep in the control room of the massive ship that is every social media company, and he only wakes up when the FBI or some big entity yells at him enough to take action. With how titanic some of these companies are, and how much money they make, that is not how things should be.

Think about YouTube. The monkey torture videos are still going on. A lot of bad things still are. The people making them have successfully found a way to evade the bots while using the algorithm to their advantage. But if there were real, genuine moderation going on, that industry of blood would collapse overnight. Bots have made things easier for the companies, yes, cheaper certainly, but worse for the user. I grew up on the internet as long as I can remember, from a Compaq Windows XP PC to now, with phones, laptops, and modern PCs. It has never been more hostile than it is now. Something needs to change.
 
I'm not aware of any case where commentary distinctively separated from the original work is regarded as a transformation of the original work for liability purposes. As in, they're now responsible for the content of that original work as publishers.
You've changed my mind. The warning labels and recommendations are just commentary on the work itself.
 
It is worth pointing out again in the thick of armchair legal spergery that the facts of this case are that some TikTok chomo decided to come up with the KYS challenge for kids, and the chink nigger jannies though it was funny it was being promoted to 8-12yos and did nothing if not boosted it instead of nuking it with extreme prejudice.

The judges, who are presumably all middle-aged to ancient and not terminally online, took one look at this and said "this is fucked".

The broader case law implications as always will be hashed out legislatively or judicially, even if it is such a bad precedent they just retcon it and say "that ruling is still good though, because, uh, it is, okay".
 