US TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond Congress's original intent when it passed the CDA in 1996, which was not to "create a lawless no-man's land" of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, related to content moderation laws passed in Florida and Texas, SCOTUS held that a platform's algorithmic curation reflects editorial judgment. Shwartz wrote that such a curated feed is a compilation of third-party speech made in the manner a platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court in the Eastern District of Pennsylvania to be re-heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


I wonder if the retarded parents were using the restricted setting on their kids' TikTok. Probably not.
The retarded part is that TikTok allowed their machine to do this. You don't see Kiwi Farms promoting suicide challenges. Why? We have jannies and actual moderation. It would never happen here. It's why this ruling doesn't really affect us. It speaks to a lackadaisical attitude to moderation of their own product. But it's what I expect of chinesium spyware. Naw, I blame the machine. It killed that kid because nobody did their fucking jobs.
 
The 2nd video I find more useful, as it goes over some of the differences between Chinese & USian TikTok. It appears China, while on paper poorer, is more advanced than the US in Internet of Things use with their phones. I'm not well traveled, but I'd say the US is in the middle of that kind of IoT integration, between Europe, which still uses a lot of physical currency, and Asia / Russia (Moscow), where anecdotally IoT integration is more widespread.
Delivery is much cheaper in China (the delivery people are probably getting paid peanuts), plus they have the "everything apps" that in any other nation would have been taken apart for being monopolies.
 
Where is there even any allegation that TikTok boosted suicide-encouraging content because they personally approve or disapprove of it?
To me the following reads as just such an allegation:
(four screenshots of the opinion attached)

Likewise, when they promote content to your unique (as alleged) "For You" tab they must have a purpose. That aside, whether or not Tik Tok wants to kill children is immaterial to strict products liability and negligence torts. Tik Tok's algorithm, as designed, sold its product in a defective state, a state they allegedly knew to be defective.

I think we are all jumping the gun here, somewhat. Tik Tok may very well, or may not, still prevail in trial.
There's nothing functionally distinct about an algorithm deboosting things like "hate speech" or "pornography" or even "just stuff I don't like," since by definition that boosts content that doesn't meet the negative definition.
These deboostings would indeed be speech, but not speech that carries any liability (in your particular given example).
What actually usable test is articulated here?
The test materially did not change. If it's a third-party action, there is no liability. If it's first-party action, there is. What changed was the breadth of things that fall into the latter category, and there I agree, it's much too large.
Why isn't Null featuring an article now protected under 230? Because if it isn't, we're fucked, and so is everyone else.
Why would it be protected? The feature text is Null's own speech. The reposting of the article by the user, however, is obviously still protected.
I wouldn't say it's unfair to say that deleting lolcow threads involves a bit of editorialization.
I can see that. On the other hand, we usually just let the "bad" threads be instead of moving them. That, I feel, makes a lot of difference. A bad thread can always become better (or a close-cow can always become a lolcow (like Rekieta)), but a deleted one is a ban on the thread improving (though of course, it can be remade and qualify).
Basically my issue is that literally every admin decision will be on the table to drag into court over, if section 230 gets savaged badly enough.
I agree that this is effectively a real issue. But likewise I don't believe that the internet gets to be wholly and uniquely immune from any and all liability. I mean, if the actions as alleged are true, then Tik Tok should have some sort of punishment, and the victim some sort of relief, right? But you are completely correct that this also risks fucking over the little men like Null. I'm not smart enough to figure out how to resolve the conflict, but I also refuse to err on the side of complete immunity for what may be crimes (in so far as they are done by Tik Tok themselves, rather than someone on Tik Tok; in the latter case, Tik Tok should still have immunity).
 
The retarded part is that TikTok allowed their machine to do this. You don't see Kiwi Farms promoting suicide challenges. Why? We have jannies and actual moderation. It would never happen here. It's why this ruling doesn't really affect us. It speaks to a lackadaisical attitude to moderation of their own product. But it's what I expect of chinesium spyware. Naw, I blame the machine. It killed that kid because nobody did their fucking jobs.
You can put a restricted mode in TikTok like you can put YouTube Kids or use parental controls for websites. Kids can find all kinds of shit on the internet; suing TikTok is not going to stop that, especially if the site has a safe mode the parents were too lazy/stupid to set up. Most of my coworkers and friends who are parents restrict the use of the internet for their younger kids.

Also, I suspect that some of the kids (mainly the older ones, who were like 12) committed suicide and the parents are just searching for someone to blame.
 
You can put a restricted mode in TikTok like you can put YouTube Kids or use parental controls for websites. Kids can find all kinds of shit on the internet; suing TikTok is not going to stop that, especially if the site has a safe mode the parents were too lazy/stupid to set up. Most of my coworkers and friends who are parents restrict the use of the internet for their younger kids.

Also, I suspect that some of the kids (mainly the older ones, who were like 12) committed suicide and the parents are just searching for someone to blame.
How do you know the parent even knew about it? When I was that age I was a tech wizard compared to my parents. I doubt they knew about the feature you're talking about, and there is a chance they didn't even know their kid had TikTok at all. The parents have a right to demand retribution when their kids end up fucking dead. I know you probably don't have nephews or nieces, definitely not your own children, but my lord, have some perspective. A KID IS DEAD AND TIK TOK ENABLED IT. FUCK THEM.
 
How do you know the parent even knew about it? When I was that age I was a tech wizard compared to my parents. I doubt they knew about the feature you're talking about, and there is a chance they didn't even know their kid had TikTok at all. The parents have a right to demand retribution when their kids end up fucking dead. I know you probably don't have nephews or nieces, definitely not your own children, but my lord, have some perspective. A KID IS DEAD AND TIK TOK ENABLED IT. FUCK THEM.
Are you retarded? The parents of these kids are not boomers. They were in their 30s or 40s at most. They know how to operate a cellphone. I feel sorry for the kids, but the parents were terrible parents. Giving an 8-year-old a cellphone with no restrictions is awful parenting.
 
Are you retarded? The parents of these kids are not boomers. They were in their 30s or 40s at most. They know how to operate a cellphone. I feel sorry for the kids, but the parents were terrible parents. Giving an 8-year-old a cellphone with no restrictions is awful parenting.
A kid is dead because a machine went haywire. Naw. Your opinion is one that doesn't have to deal with kids often. Irresponsible. It happened on their website and their moderation staff didn't stop that shit. No. Fuck them.
 
Algorithms don't like or dislike anything. Where is there even any allegation that TikTok boosted suicide-encouraging content because they personally approve or disapprove of it?

Section 230 was specifically drafted to protect activity like this. There's nothing functionally distinct about an algorithm deboosting things like "hate speech" or "pornography" or even "just stuff I don't like," since by definition that boosts content that doesn't meet the negative definition.
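That equivalence is easy to see in code. A toy Python sketch (the scores, tags, and penalty are invented, not any platform's actual ranker): an algorithm that only *deboosts* a disfavored category necessarily surfaces everything outside it.

```python
def rank(posts, deboosted_tags, penalty=0.5):
    """Sort posts by score, penalizing any post carrying a deboosted tag."""
    def effective_score(post):
        score = post["score"]
        if post["tags"] & deboosted_tags:  # post matches the negative definition
            score *= penalty
        return score
    return sorted(posts, key=effective_score, reverse=True)

posts = [
    {"id": "a", "score": 1.0, "tags": {"gore"}},
    {"id": "b", "score": 0.8, "tags": {"cooking"}},
    {"id": "c", "score": 0.6, "tags": {"music"}},
]

print([p["id"] for p in rank(posts, set())])     # ['a', 'b', 'c']
print([p["id"] for p in rank(posts, {"gore"})])  # ['b', 'c', 'a']
```

Deboosting "gore" and boosting everything that isn't "gore" produce the exact same ordering; there is no functional line between the two operations.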

What actually usable test is articulated here? All it serves to do is cast the jurisprudence into complete chaos.

What are you supposed to not do to get the safe harbor of 230 in the Third Circuit? Why isn't Null featuring an article now protected under 230? Because if it isn't, we're fucked, and so is everyone else.

And they should not even have to stand trial for it, as Judas Priest themselves did. 230 is not an affirmative defense. It is a complete immunity even to stand trial. Now that immunity has an enormous carveout that is completely nonsensical and effectively dilutes the immunity to extinction.
Actually, when the algorithms leaked inside of Google, they legitimately had code inserted where you could deboost certain things and boost certain things. You'd only put that in there if you were actually planning on doing that.
Also, I guarantee you that 95% of searches on "did the Holocaust happen" should not lead to the ADL or a bunch of other pro-Jew sites.
 
A kid is dead because a machine went haywire.
A kid is dead because the kid was fucking stupid.
Actually, when the algorithms leaked inside of Google, they legitimately had code inserted where you could deboost certain things and boost certain things. You'd only put that in there if you were actually planning on doing that.
Also, I guarantee you that 95% of searches on "did the Holocaust happen" should not lead to the ADL or a bunch of other pro-Jew sites.
And if they do the solution is to document that and avoid it. Otherwise you're saying sites can't even avoid pro-pedo sites or otherwise curate content for their audience, or they're instantly totally liable for literally everything that appears in search results.

That's the Stratton Oakmont doctrine, and 230 was supposed to be the stake in that vampire's heart to STOP THAT SHIT FOREVER.
How do you know the parent even knew about it? When I was that age I was a tech wizard compared to my parents. I doubt they knew about the feature you're talking about, and there is a chance they didn't even know their kid had TikTok at all.
So are you saying you were both smarter than your parents and that if you saw some retard online telling you to kill yourself you'd have just decided "wow putting my neck in a noose and killing myself sounds AWESOME!"
Likewise, when they promote content to your unique (as alleged) "For You" tab they must have a purpose. That aside, whether or not Tik Tok wants to kill children is immaterial to strict products liability and negligence torts.
This is still within the core territory of what 230 was supposed to immunize.

This is saying "we're bringing back Stratton Oakmont despite this law specifically being directed against liability in this situation."
The test materially did not change. If it's a third-party action, there is no liability. If it's first-party action, there is.
So Lolcow LLC is now liable specifically any time a janny or Null features a thread, thereby "boosting" it, even if he says nothing specifically about it. And the algorithm that picks featured threads is also "speech" by the site itself, and therefore by Lolcow LLC.

How the FUCK are you okay with this? Do you want to be sued personally? Because that's now possible thanks to this bullshit.
Why would it be protected? The feature text is Null's own speech. The reposting of the article by the user, however, is obviously still protected.
That may be some of the dumbest nonsense I've ever seen anyone say here.
 
A kid is dead because the kid was fucking stupid.
The child was 10.
So are you saying you were both smarter than your parents and that if you saw some retard online telling you to kill yourself you'd have just decided "wow putting my neck in a noose and killing myself sounds AWESOME!"
With technology, yes. I grew up with computers every step of the way, and as a child the internet was highly influential in my life. It's also 10X more prevalent and hostile than it was when I was 10. That machine influenced a pliable 10-year-old mind that suicide was OK. You are looking at this from a lawyer's perspective. I actually have family the age of this kid. Kids have gotten dumber and more easily influenced due to digital devices such as tablets and smartphones. It is all of them, across the board. Yes, it killed this kid. Your arguments fail to account for the human factor and I reject them. Talk to me when you have kids to take care of.
 
The retarded part is that TikTok allowed their machine to do this. You don't see Kiwi Farms promoting suicide challenges. Why? We have jannies and actual moderation. It would never happen here. It's why this ruling doesn't really affect us. It speaks to a lackadaisical attitude to moderation of their own product. But it's what I expect of chinesium spyware. Naw, I blame the machine. It killed that kid because nobody did their fucking jobs.
I mean, we do regularly tell people to kill themselves.

They mostly deserve it, but still.
 
And if they do the solution is to document that and avoid it. Otherwise you're saying sites can't even avoid pro-pedo sites or otherwise curate content for their audience, or they're instantly totally liable for literally everything that appears in search results.
If you're controlling what hits the front page by carefully selecting for certain things, you should be able to be held criminally liable for it. Now, if you're gonna have an algorithm that promotes things based on similar interests, that's perfectly acceptable. Nothing that is legally protected speech should be allowed to be kicked off these platforms, or else they get their 230 removed.

Also, if you mix ammonia and bleach together you can create happy fun time gas.
 
The child was 10.
Where exactly were the kid's parents?
If you're controlling what hits the front page by carefully selecting for certain things, you should be able to be held criminally liable for it. Now, if you're gonna have an algorithm that promotes things based on similar interests, that's perfectly acceptable. Nothing that is legally protected speech should be allowed to be kicked off these platforms, or else they get their 230 removed.
So they hired some jeets and specifically said "let's boost as much pro-suicide content as we can so we can murder children!"

Lmao. No.
 
This site is for 18 and up. Not children. Certainly not 10-year-olds. There is a MASSIVE DIFFERENCE.
There is no massive difference when kids shouldn't be on it unattended in the first place. It's the digital equivalent of letting your kid wander into the ghetto and being surprised they get kidnapped/raped/murdered.

It's the job of a child's parents to look after them, and nobody else's. Nobody else who doesn't volunteer for the job, anyway.
 
Let’s not pretend that these ‘algorithms’ are just code or some unknown jeetcoder’s responsibility. These computer programs are fine-tuned to deliver a very specific result, whether for a commercial or political purpose. Behind every algorithm there is a person pushing a specific agenda, and when users start noticing, they hide behind the bullshit ‘it’s our algorithm!’ excuse. This isn’t the likes of qsort and sprintf we’re talking about gentlemen.
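A hedged sketch of that point in Python (the feature names and weights below are invented for illustration, not any real platform's): the "algorithm" is often just a weighted score, and every weight is a human editorial choice.

```python
# Hand-picked weights: someone decided shares matter five times more than likes,
# and that advertiser-friendliness gets a commercial thumb on the scale.
ENGAGEMENT_WEIGHTS = {
    "watch_time": 1.0,
    "shares": 2.5,
    "likes": 0.5,
    "advertiser_friendly": 3.0,
}

def score(features):
    """Rank a post by a weighted sum of its engagement features."""
    return sum(ENGAGEMENT_WEIGHTS[name] * value for name, value in features.items())

print(score({"watch_time": 10, "shares": 2, "likes": 30, "advertiser_friendly": 1}))  # 33.0
```

Tweaking a single constant changes what the entire user base sees; that tuning is where the agenda lives. A general-purpose routine like qsort has no equivalent knob.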
 
Let’s not pretend that these ‘algorithms’ are just code or some unknown jeetcoder’s responsibility. These computer programs are fine-tuned to deliver a very specific result, whether for a commercial or political purpose. Behind every algorithm there is a person pushing a specific agenda, and when users start noticing, they hide behind the bullshit ‘it’s our algorithm!’ excuse. This isn’t the likes of qsort and sprintf we’re talking about gentlemen.
Sure, if we can all also stop pretending that the U.S. government isn't actively hostile to all who don't worship it.
 