US TikTok isn't protected by Section 230 in 10-year-old’s ‘blackout challenge’ death - Recommendations are now protected speech, so you're liable for them!

A US appeals court has issued an opinion that could have wide-ranging implications for social media platforms, finding that content selected for users by TikTok's algorithms doesn't qualify for Section 230 protection.

In an opinion [PDF] published today, a three-judge panel from the Third Circuit Court of Appeals in Pennsylvania decided that, because TikTok presented "blackout challenge" posts to 10-year-old Nylah Anderson on her For You Page of recommended content, the platform deserves to be taken to court for her death that followed.

The "blackout challenge" refers to a dangerous self-asphyxiation "trend" that went around on TikTok several years ago. Anderson attempted to participate in the challenge, leading to her death, but a lower-court judge decided in 2022 that TikTok was protected by Section 230 of the Communications Decency Act (CDA), which protects social media platforms from liability for content posted by their users.

The Third Circuit court sharply disagreed.

"TikTok knew that Nylah would watch [the blackout challenge video] because the company's customized algorithm placed the videos on her 'For You Page' after it 'determined that the Blackout Challenge was 'tailored' and 'likely to be of interest' to Nylah,'" Judge Paul Matey wrote in a partial concurrence included in the decision.

Matey argued that Section 230's application has evolved far beyond Congress's original intent when it passed the CDA in 1996. The law was never meant to "create a lawless no-man's land" of legal liability.

"The result is a Section 230 that immunizes platforms from the consequences of their own conduct and permits platforms to ignore the ordinary obligation that most businesses have to take reasonable steps to prevent their services from causing devastating harm," Matey said.

Judge Patty Shwartz wrote in the main body of the opinion that the Third Circuit's reading of Section 230 is reinforced by the recent Moody v NetChoice decision from the US Supreme Court. In that case, which concerned content moderation laws passed in Florida and Texas, SCOTUS held that algorithms reflect editorial judgments. Shwartz wrote that such algorithmic curation is a compilation of third-party speech made in the manner a platform chooses, and thus merits First Amendment protection.

"Given the Supreme Court's observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others' content via their expressive algorithms, it follows that doing so amounts to first-party speech under Section 230, too," Shwartz reasoned.

In short, you can't have it both ways: Either you serve everything, let users sort it out and keep that liability shield; or you make algorithmic picks that surface content, give users what you think they want and take on the liability that comes with being the arbiter of that content.

With the appeal decided, Anderson's case will head back to the District Court for the Eastern District of Pennsylvania to be heard.

"Today's opinion is the clearest statement to date that Section 230 does not provide this catchall protection that the social media companies have been claiming it does," Anderson family lawyer Jeffrey Goodman told the Associated Press regarding the outcome.

TikTok didn't respond to questions for this story.

It's not immediately clear what sort of broader reach the Third Circuit's opinion could have on Section 230 protections for other social media platforms, but the three-judge panel that made the call knows it's likely to be applied elsewhere: It's a precedential one.

---------------------

A very successful parent had their child die from a "blackout challenge" (choking yourself) and decided to sue TikTok for not jannying quickly enough. As much as I'm inclined to cheer the downfall of TikTok and other social media companies, I think this case sets a bad precedent - no one seriously thinks that Google search recommendations are somehow endorsed by Google as factually true, for example.

Article: https://www.theregister.com/2024/08/28/tiktok_blackout_challenge_appeal/
Archive (pending): https://archive.ph/Eg97c

The court's decision is attached.
 


Even very small mentions of COVID in a video during the pandemic would get it tagged with a "This is what COVID is" Wikipedia link right above the description. There are also similar warnings for suicide hotlines, flat Earth conspiracies, and climate change. I've just always been curious whether they were automated, caused by user reports, or manually tagged on by the uploader.

Hell, somewhat recently a Chantal video had the "binging is a disorder" warning on it, and she definitely never toggles those on.
That's YouTube's automated goodthink compliance reminder that they added after 2020.
 
This ruling might be used to make bad rulings in the future, but I think narrowly applied it's not the worst.

If Google keeps unequally promoting trannies, they can be sued by angry autists with inverted chodes.
 
It is worth pointing out again in the thick of armchair legal spergery that the facts of this case are that some TikTok chomo decided to come up with the KYS challenge for kids, and the chink nigger jannies thought it was funny it was being promoted to 8-12yos and did nothing, if not outright boosted it, instead of nuking it with extreme prejudice.

The judges, who are presumably all middle-aged to ancient and not terminally online, took one look at this and said "this is fucked".
If those are the facts of the case apparently the judge and the plaintiff are both lying because neither of them ruled or alleged anything about the moderation team you fucking retard. The judge ruled that a platform is legally liable for third party speech which their automated process promotes over other third party speech, akin to holding you personally responsible for a letter someone else wrote that you drew out of a hat, but sure, this was really a ruling that "Dem chinks is crazy". You cracked the code fuckwit.

Remember that time a BASED boomer ruled that Null was so evil he traveled back in time to encourage someone to infringe Greer's copyright after it had already happened? I mean that was a retarded ruling but clearly the judge looked at Greer's "motion that kiwifarms kills people" and thought "this is fucked". Laws are just a set of excuses a judge can use to punish you however they personally see fit and that's the way we GRASS-TOUCHERS like it.
 
If those are the facts of the case apparently the judge and the plaintiff are both lying because neither of them ruled or alleged anything about the moderation team you fucking retard. The judge ruled that a platform is legally liable for third party speech which their automated process promotes over other third party speech, akin to holding you personally responsible for a letter someone else wrote that you drew out of a hat, but sure, this was really a ruling that "Dem chinks is crazy". You cracked the code fuckwit.

Remember that time a BASED boomer ruled that Null was so evil he traveled back in time to encourage someone to infringe Greer's copyright after it had already happened? I mean that was a retarded ruling but clearly the judge looked at Greer's "motion that kiwifarms kills people" and thought "this is fucked". Laws are just a set of excuses a judge can use to punish you however they personally see fit and that's the way we GRASS-TOUCHERS like it.
Nothing was particularly alleged about human moderators because there is no trail to substantiate that they had any human input (by design). The question then was whether blaming the Algorithm© for allowing objectionable content to circulate on their site sufficiently guarded the company from liability. The court ruled that it doesn't. This means that the company does bear responsibility (in this case) to make sure that the noose challenge doesn't go viral among middle schoolers in the same way that they have to make sure chomos don't trade CSAM on their platform. I am not exactly twisting the facts in saying the issue was the jannies not sweeping up.

Personally I find it incredible that absolutely no janny noticed that the rope yourself challenge was trending. As above, the law requires proactive moderation in several areas. Which, again, leads me to believe that the chink jannies knew and thought it was funny, in the same way that interracial, trannsexual, anti-child, etc. content proliferates on that website. But maybe I'm just a chud and all content on social media is totally organic and 30 Changs just lost 20 thousand social credit points because they all went out for hot pot that day. Believe what you will.

I apologize for being too white to think we don't need laws n shieet coz da po-lice only use em to keep a nigga down.
 
I apologize for being too white to think we don't need laws n shieet coz da po-lice only use em to keep a nigga down.
You are the one who suggested that the judge secretly took into account unrelated facts (possible moderator culpability) in ruling narrowly on whether an automated process is a legal agent of the entity which employs it. That would not be an application of the law, it would be da jew-dishy-airy abusing the law to keep a yellow nigga down. I'm not saying that happened, you are.
 
Sort of. Certain government employees are not allowed to use it in any capacity.

"The choking game" has been known since at least the 1950s.
I remember people doing this dumb shit when I was a kid. They'd grab some other kid and choke him out until he passed out and thought it was funny. Usually it was blacks doing it to each other.
When TikTok (or any of the big tech sites) "recommends" a list of content, the content is no longer coming from a third party. It is coming from the company itself.
Unless they're actually paying them to do it, I don't see how it's their speech. You are basically saying Null should be held personally liable whenever he features an article, or if he deletes one thread but not another similar thread. That's exactly what 230 was intended to prevent. That's the sole reason 230 was even passed into law.

Or even more directly, that the "Highlight" feature, which algorithmically makes higher rated posts more easily accessible to readers of the Farms based on user ratings, automatically makes Null (and Lolcow LLC) personally liable for literally every single post on the site.

This is like saying "hey I read this article I read on Breitbart and liked it" and now you're personally liable if anything in it turns out to be defamatory.
The whole DMCA takedown process was a decent (if flawed) attempt at balancing those concerns.
I actually think it's outright terrible, but despite that, better than NOT having a DMCA safe harbor. If you're willing to dox yourself and say "come at me bro", the fact that the ISP can then basically tell the complaining party to sue or go fuck themselves with impunity means they'll let you take that risk.

I'm not really opposed to the idea that you can be personally liable for things you actually do.
When TikTok (or any of the big tech sites) "recommends" a list of content, the content is no longer coming from a third party. It is coming from the company itself.
Unless they're actually paying them to do it, I don't see how it's their speech. You are basically saying Null should be held personally liable whenever he features an article, or if he deletes one thread but not another similar thread. That's exactly what 230 was intended to prevent. That's the sole reason 230 was even passed into law.
This ruling might be used to make bad rulings in the future, but I think narrowly applied it's not the worst.
Well I'm chucking some rainbows at that because it doesn't appear to be drafted that way or intended that way and I'm deeply suspicious the Third Circuit literally has no fucking clue what it's talking about here.
 

Wyden (one of the two s230 authors and the one that's still in the Senate), even though he's coming from the opposite perspective, aka we need to protect blacks and trannies and moderate hate speech, comes to very similar conclusions on s230, the Internet, data privacy, and copyright to what I've read on the Kiwi Farms from less retarded users. I don't know if he's released a statement about the TikTok case yet, but this one does at least mention algorithms.

His voting record is very consistent on Internet freedom, though much of the time he's casting a protest vote against bipartisan slop. He may be a cringey Oregon Democrat but idk he seems alright for this issue
 
This is... not sure how I feel about it. It's the worst of both worlds: it limits protections while not touching the biggest potential issues with them (i.e. "we aren't publishers... now here's a list of what we will and won't allow and the correct narrative for most issues"). Basically it ignores violations of the spirit of the thing while going after everything else in a potentially dangerous way for the net.

If this is strictly applied, to things like official suggestions etc, then we probably shouldn't worry AS MUCH YET... but if this is the floodgates being opened, the user-based net and 3rd-party content is likely in a lot of trouble. Exactly what big tech and media want. Cable 2.0, more interactive edition!

That said, fuck tiktok, the CCP and social media.
 
I remember people doing this dumb shit when I was a kid. They'd grab some other kid and choke him out until he passed out and thought it was funny. Usually it was blacks doing it to each other.
Heh. We threw lawn darts at each other as well as played "Cowboys and Indians/Bad Guys" with BB guns and .22s.

What does not kill you will make you stronger.

I miss those days... :woo:
 
Wyden (one of the two s230 authors and the one that's still in the Senate), even though he's coming from the opposite perspective, aka we need to protect blacks and trannies and moderate hate speech, comes to very similar conclusions on s230, the Internet, data privacy, and copyright to what I've read on the Kiwi Farms from less retarded users. I don't know if he's released a statement about the TikTok case yet, but this one does at least mention algorithms.
Well, he's right in that while 230 doesn't force "ISPs" (which are very broadly defined in case law) to moderate hate speech, delete porn, or other similar things, it also doesn't create the deranged Stratton Oakmont effect of making you liable for all content if you moderate any of it.
 
Unless they're actually paying them to do it, I don't see how it's their speech. You are basically saying Null should be held personally liable whenever he features an article, or if he deletes one thread but not another similar thread. That's exactly what 230 was intended to prevent. That's the sole reason 230 was even passed into law.

This is a problem social media created for itself, not some activist judge TRYING to fuck up 230. When Facebook and all challenged the Florida/Texas laws telling them they couldn't discriminate based on viewpoints, social media companies won by convincing the court that their algorithms WERE their editorial control:
Like the editors, cable operators, and parade organizers this Court has previously considered, the major social-media platforms curate their feeds by combining “multifarious voices” to create a distinctive expressive offering. Hurley, 515 U. S., at 569. Their choices about which messages are appropriate give the feed a particular expressive quality and “constitute the exercise” of protected “editorial control.” Tornillo, 418 U. S., at 258. And the Texas law targets those expressive choices by forcing the platforms to present and promote content on their feeds that they regard as objectionable.

It's actually THE FUCKING SOCIAL MEDIA COMPANIES that pierced their own 230 protections in court when it comes to their algorithms. They couldn't suck it up and stop fucking with content they didn't like, so THEY made the argument that the product of their algorithms was their expression in order to escape it.

And now we get to here:
The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment. Id. at 2394. Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, id. at 2409, it follows that doing so amounts to first-party speech under § 230, too. See Doe ex rel. Roe v. Snap, Inc., 144 S. Ct. 2493, 2494 (2024)
While 230 protects a site from 3rd-party speech, it had been made precedent that the product of social media algorithms is a First Amendment-protected expression, NOT the 3rd-party speech it's composed of. It's for all legal purposes a completely separate product. This court had NO other way to rule after that Supreme Court ruling if it were to stay consistent with the case law we now have.
 
This is a problem social media created for itself, not some activist judge TRYING to fuck up 230.
Then it should be limited to social media specifically that allows minors.

And it should be Congress doing such legislation, not activist judges legislating from the bench. If SCOTUS had meant basically to overturn 230 protection they would have said so.
 
Is there some weird understanding about what curating content is or something in this thread?

If you published a book that is a compilation of other authors' works, or a "letters to the editor" page, or even put some retard in a pre-recorded TV show, and they said something defamatory or whatnot, you can't just say "well they said it, not me."

Social media companies got their curated feeds declared their own speech, so they now have to own the consequences of that. The ruling seems pretty clear that because they claim the edited product as their speech, they are liable for it; "but computer" doesn't work as an excuse anymore. They make the code, they choose what it does, and they specifically set it up so they can pick and choose what content to show to their benefit.

The idea that they should somehow get the benefits of 230 protection for 3rd-party content extended to their first-party curated content never made sense, as no other "media" has that protection as far as I am aware, and once they pushed the whole "it's our speech so you can't interfere with our freedom of speech" argument, they have to eat the fact that they are now liable for what they say.

If the concern is where the line is, section 230 seems to pretty clearly read that removing content doesn't make the rest yours, and this ruling seems to say that if you, not your users' actions, specifically select content to put in front of those users, then you are liable for that compilation of selected content, as it's your speech and you chose what's in it. TikTok/Facebook/etc cannot argue they don't do this, as it's widely known they promote specific content/creators (TikTok literally called theirs the "heating tool" to make content "hot") to timelines where it would not organically appear otherwise via user actions.

Tbh their lawyers should have highlighted this was going to happen at some point after the 'content feeds are free speech so no government touchy' case, but expecting these retards to think that far ahead seems a bit much lol
 
I don't know dick about laws but as a programmer I will find it very entertaining to watch a bunch of lawyers and judges try to hash out which algorithms count and don't count as 'editorialized curation'

Does a simple lexicographical sort count as political speech because it ranks Democrats above Republicans? That's obviously absurd but you'd be amazed how many seemingly innocuous engineering choices can have actual substantial effects on what you get shown.
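To illustrate the point, here's a minimal Python sketch (hypothetical post data, nothing from any real platform): a bare alphabetical sort, about the most "neutral" algorithm imaginable, still ends up ordering one party's content above the other's purely because of where the letters fall.

```python
# Minimal sketch with hypothetical data: a plain lexicographical sort,
# no editorial weighting anywhere, still "ranks Democrats above Republicans"
# simply because "D" sorts before "R".
posts = [
    {"tag": "Republican",  "title": "Tax plan explainer"},
    {"tag": "Democrat",    "title": "Healthcare town hall"},
    {"tag": "Independent", "title": "Ballot access lawsuit"},
]

# The sort key is nothing but the tag string.
feed = sorted(posts, key=lambda p: p["tag"])

for post in feed:
    print(post["tag"], "-", post["title"])
# Democrat - Healthcare town hall
# Independent - Ballot access lawsuit
# Republican - Tax plan explainer
```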
 
If the concern is where the line is, section 230 seems to pretty clearly read that removing content doesn't make the rest yours, and this ruling seems to say that if you, not your users' actions, specifically select content to put in front of those users, then you are liable for that compilation of selected content, as it's your speech and you chose what's in it. TikTok/Facebook/etc cannot argue they don't do this, as it's widely known they promote specific content/creators (TikTok literally called theirs the "heating tool" to make content "hot") to timelines where it would not organically appear otherwise via user actions.
Please explain the principled rule that separates this from any single site in existence, including ours, featuring threads or just outright deleting them, because Null doesn't like them or wants to feature them.

This effectively abolishes 230 entirely, because being able to delete or feature content is EXACTLY what 230 was passed into law to protect.

Please explain to me how this doesn't destroy Kiwi Farms as it exists, you goddamn fucking retard who is like the goddamn hundredth retard to say this STUPID FUCKING SHIT KILL YOURSELF PLEASE.

Sorry. Got a little bent out of shape there. But you're a stupid fucker.
 
If you published a book that is a compilation of other authors' works, or a "letters to the editor" page, or even put some retard in a pre-recorded TV show, and they said something defamatory or whatnot, you can't just say "well they said it, not me."
If you published a book that is a compilation of other authors' works, you would at some point have had the opportunity to read and judge their contents and could be found negligent in having either failed to do so, or done so inadequately. There are a great number of man-hours between being submitted letters or books or raw footage and publishing them in a fixed format. The case in question is much more analogous to having someone on live TV and being found negligent in not anticipating and preventing defamatory statements from being made. How are you expected to prevent something retroactively?

There are a number of material differences related to the novel format of social media. It's probably impossible to find and sue a party responsible for creating the trend other than TikTok, for example, because no one person is meaningfully responsible for its popularity and you can't sue tens of thousands of people collectively. With multiple dead kids (Tawainna is just the first to successfully sue), I'm sure it seems galling to have no clear way of preventing this in future, but sometimes bad things just can't be reasonably prevented.

I'm convinced of the strict legality of Diamond's argument based on the 22-277 Moody v. NetChoice, LLC (https://www.supremecourt.gov/opinions/23pdf/22-277_d18f.pdf) citation, where the Supreme Court held that the employment of an algorithm qualifies as first-party speech. I think that Supreme Court decision, then, is the harmful and incorrect precedent.
Please explain the principled rule that separates this from any single site in existence, including ours, featuring threads or just outright deleting them, because Null doesn't like them or wants to feature them.
Arguably Null is already responsible for any potential civil negligence in featuring threads since the act of featuring, as well as the tagline, are uncontroversially his own first-party speech. This ruling wouldn't be necessary for that to be true and wouldn't expand the liability there. The highlight feature on the other hand, an automated feature promoting certain posts, could be liable under the standards presented here and would be impossible to safely employ if it is, since there's no way to predict when a highlight could be "negligent".
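To make the Highlights worry concrete, here's a minimal sketch (hypothetical names and thresholds, not the site's actual code) of what an automated promotion feature looks like: posts are surfaced purely by aggregate user reactions, with no human ever reviewing what gets pushed, which is exactly why its output couldn't be vetted for "negligence" ahead of time.

```python
# Minimal sketch of an automated "highlight" promoter. Hypothetical names and
# thresholds -- not the Farms' actual implementation. No human reviews the output;
# promotion is driven entirely by aggregate user reactions.
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    body: str
    positive_reactions: int

def pick_highlights(posts, threshold=50, limit=5):
    """Return the top posts by reaction count that clear the threshold."""
    eligible = [p for p in posts if p.positive_reactions >= threshold]
    return sorted(eligible, key=lambda p: p.positive_reactions, reverse=True)[:limit]

thread = [
    Post(1, "some post", 12),
    Post(2, "another post", 87),
    Post(3, "a third post", 203),
]

for post in pick_highlights(thread):
    print(post.id, post.positive_reactions)
# 3 203
# 2 87
```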
 
If you published a book that is a compilation of other authors' works, you would at some point have had the opportunity to read and judge their contents and could be found negligent in having either failed to do so, or done so inadequately. There are a great number of man-hours between being submitted letters or books or raw footage and publishing them in a fixed format. The case in question is much more analogous to having someone on live TV and being found negligent in not anticipating and preventing defamatory statements from being made. How are you expected to prevent something retroactively?

There are a number of material differences related to the novel format of social media. It's probably impossible to find and sue a party responsible for creating the trend other than TikTok, for example, because no one person is meaningfully responsible for its popularity and you can't sue tens of thousands of people collectively. With multiple dead kids (Tawainna is just the first to successfully sue), I'm sure it seems galling to have no clear way of preventing this in future, but sometimes bad things just can't be reasonably prevented.
The screaming out on live TV analogy fails because it's not live, it's based on decisions the company makes that are actioned by the algorithm; it would be more along the lines of a delayed broadcast, and then repeatedly showing it on later news broadcasts without cutting it out. "Trending" doesn't happen instantly; they had an opportunity to prevent it via moderation and chose not to, instead allowing it through and then repeatedly pushing it to new people afterwards based on TikTok deciding they should see it. The big change in these cases is that platforms can't use an algorithm as a fig leaf to pretend they don't editorialise/curate anymore, and if they do editorialise/curate, they are responsible for that edited/curated product.

The exposure of the "heating tool" proves that the "For You" page isn't just a feed of user content but is specifically controlled by TikTok to raise/lower the reach of specific creators/topics, meaning they absolutely had the ability to prevent it trending, and they had the ability to remove it and be protected by Section 230 in doing so. Plus, the Texas case setting precedent for this was about the right to delete content they don't want to host, and the article linked in the post on the Texas decision points out that it's limited to where platforms "pick and choose" content to show, not removing content they find objectionable.
"In other words, the court held that the free speech rights of social media companies are only implicated where they are engaging in expression by picking and choosing what postings to show," said McGill. "The more automated that process, the less First Amendment implications there are, and therefore the less free speech protections would apply."

The fact the judges seemed to delineate between user action to fill a content feed (i.e. calling out the search page in particular) vs the "For You" feed being TikTok's speech seems to show the core comes down to who decides what is in the feed: if it's the user, then it would be a compilation of 3rd-party speech subject to 230; if it's the platform, then it's the platform's 1st-party speech and they are liable. Where the line falls when it comes to metadata or behaviours being used to make decisions is where it will get tricky, and it could also impact shoving ads into feeds, since that's the platform specifically selecting speech they want you to see.

Please explain to me how this doesn't destroy Kiwi Farms as it exists, you goddamn fucking retard who is like the goddamn hundredth retard
The ruling is about promoting/curating content, not fucking deleting it. Null's also pointed out the site is responsible for the features mods pick anyways, and highlights being automated by user interaction would also mean the site isn't editorializing/curating highlights, since they're based on individual user behaviour, not what the mods/platform choose to highlight, so they're the least likely to be "speech of the platform", and losing the highlights wouldn't kill the fucking site anyways.

Stop sperging out like some fucking autistic faggot. Deleting shit you find objectionable on your site is not editorialising/curating according to the Texas case, and it's specifically protected by 230 anyways; fucking Null says they are already responsible for mod features and that hasn't changed, and highlights going away, while lame, aren't the entire site. Unless you have some super secret info to post, or Null comes out and says there is a problem, just calm the fuck down.
 
The screaming out on live TV analogy fails because it's not live,
The analogy remains sound, being that there was no point at which a person reviewed and tacitly approved of the third party speech being promoted. The whole point of section 230 is that it's an undue burden to expect a platform to individually review every item of third-party speech it platforms, AND you don't incur that burden even if you elect to review and moderate some content.
had an opportunity to prevent it via moderation and chose not to, instead allowing it through and then repeatedly pushing it to new people afterwards based on TikTok deciding they should see it
Again, you are alleging facts not included in the case. If it was alleged an employee manually reviewed and approved of the KYS challenge, or TikTok had a company policy of promoting KYS challenges, that would already be outside 230 protection, but that's not the allegation. TikTok's algorithm is not a person. TikTok's algorithm can't choose or decide anything, something the plaintiff clearly knows because they're not trying to sue a fucking computer program.

At best, this decision merely makes it impossible to employ an automated process of any kind while retaining 230 protection from the results of that process, even if you had no knowledge of those results or any reasonable way to anticipate them. At worst, it's a trojan horse for a total regression to duty-to-moderate legal standards, something clearly you'd like, since you say yourself TikTok incurs a duty to moderate because they sometimes do so (with the "heater" tool), the EXACT AND ONLY LIABILITY PREVENTED BY 230.
 