I briefly glanced through the past 5 pages and didn't see this mentioned, so this piece about reddit jannies and the Supreme Court might be fitting for the 1000th page of this thread
Supreme Court allows Reddit mods to anonymously defend Section 230
Mods tell SCOTUS that Reddit's special formula depends on Section 230 immunity.
Over the past few days, dozens of tech companies have filed briefs in support of Google in a Supreme Court case that tests online platforms’ liability for recommending content. Obvious stakeholders like Meta and Twitter, alongside popular platforms like Craigslist, Etsy, Wikipedia, Roblox, and Tripadvisor, urged the court to uphold Section 230 immunity in the case or risk muddying the paths users rely on to connect with each other and discover information online.
Out of all these briefs, however, Reddit’s was perhaps the most persuasive. The platform argued on behalf of everyday Internet users, who it claims could be buried in “frivolous” lawsuits for frequenting Reddit if Section 230 is weakened by the court. Unlike other companies that hire content moderators, Reddit says the content it displays is “primarily driven by humans—not by centralized algorithms.” Because of this, Reddit’s brief paints a picture of trolls suing not major social media companies, but individuals who get no compensation for their work recommending content in communities. That legal threat extends both to volunteer content moderators, Reddit argued, and to more casual users who collect Reddit “karma” by upvoting and downvoting posts to help surface the most engaging content in their communities.
“Section 230 of the Communications Decency Act famously protects Internet platforms from liability, yet what’s missing from the discussion is that it crucially protects Internet users—everyday people—when they participate in moderation like removing unwanted content from their communities, or users upvoting and downvoting posts,” a Reddit spokesperson told Ars.
Reddit argues in the brief that such frivolous lawsuits have been lobbed against Reddit users and the company in the past, and that Section 230 protections have consistently allowed Reddit users to “quickly and inexpensively” avoid litigation.
The Google case was raised by the family of Nohemi Gonzalez, a woman killed in a 2015 ISIS terrorist attack on a Paris bistro. Because ISIS allegedly relied on YouTube to recruit before this attack, the family sued to hold Google liable for allegedly aiding and abetting terrorists.
A Google spokesperson linked Ars to a statement saying, “A decision undermining Section 230 would make websites either remove potentially controversial material or shut their eyes to objectionable content to avoid knowledge of it. You would be left with a forced choice between overly curated mainstream sites or fringe sites flooded with objectionable content.”
Eric Schnapper, a lawyer representing the Gonzalez family, told Ars that the question before the Supreme Court "only applies to companies, like Reddit itself, not to individuals. This decision would not change anything with regard to moderators."
"The issue of recommendations arises in this case because the complaint alleges the defendants were recommending ISIS terrorist recruiting videos, which under certain circumstances could give rise to liability under the Anti-Terrorist Act," Schnapper told Ars, noting that the question of that liability is the subject of
another SCOTUS case involving Twitter, Meta, and Google.
Reddit mods granted anonymity to defend Section 230
The Supreme Court will have to weigh whether Reddit's arguments are valid. To help make its case defending Section 230 immunity protections for recommending content, Reddit received special permission from the Supreme Court to include anonymous comments from Reddit mods in its brief. This, Reddit’s spokesperson notes, is “a significant departure from normal Supreme Court procedure.” The Electronic Frontier Foundation, a nonprofit defending online privacy, championed the court’s decision to allow moderators to contribute comments anonymously.
“We’re happy the Supreme Court recognized the First Amendment rights of Reddit moderators to speak to the court about their concerns,” EFF’s senior staff attorney, Sophia Cope, told Ars. “It is quite understandable why those individuals may be hesitant to identify themselves should they be subject to liability in the future for moderating others’ speech on Reddit.”
One Reddit moderator providing comment, “u/AkaashMaharaj,” runs a sub-Reddit dedicated to “all horsepeople, horse lovers, and fans of equestrian sports.” As a volunteer content moderator, this Reddit user, like all Reddit mods, has access to algorithmic tools that can be customized “to make day-to-day content moderation less burdensome and more effective.”
In the Google case, petitioners have alleged that Google-owned YouTube should be held liable for recommending terrorist recruitment videos to users via its algorithms. The primary question before the Supreme Court, therefore, is whether Section 230 shields platforms from liability for how algorithms are used to recommend content. This has become a sticking point for Reddit because its users, not the company, are primarily the ones using algorithms to recommend content.
In comments, Reddit user AkaashMaharaj argued that Section 230 makes it “possible for platforms such as Reddit to sustain content moderation models where technology serves people, instead of mastering us or replacing us.” Continuing, the Reddit user argued, “Section 230 must not be attenuated by the Court in a way that exposes the people in that model to unsustainable personal risk, especially if those people are volunteers seeking to advance the public interest or others with no protection against vexatious but determined litigants.”
Another Reddit content moderator, “u/Halaku,” manages two forums—one focused on computer science and another on a rock band. That Reddit user argues that the algorithmic tools Reddit provides are indispensable for sifting through posts from tens of millions of individual users. Taking away those tools would be like removing a spam filter from an inbox, the mod argues, making sub-Reddit communities difficult to maintain. If the Supreme Court weakens Section 230 and algorithmic tools expose moderators to liability, the Reddit user says that could put entire communities at risk by “leaving users to hunt and peck for actual communications amidst all the falsified posts from malicious actors engaging in hate mail, advertising spam, or phishing attempts to gain financial credentials.”
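(Side note from me, not the article: if you're wondering what that "spam filter" style of automated moderation boils down to mechanically, here's a minimal illustrative sketch in Python. This is not Reddit's actual tooling; the patterns and function names are made up for the example.)

# Purely illustrative sketch of pattern-based "spam filter" moderation.
# NOT Reddit's actual tooling; the rules and names here are hypothetical.
import re

# Hypothetical removal rules a volunteer mod might configure
SPAM_PATTERNS = [
    re.compile(r"buy\s+followers", re.IGNORECASE),          # ad spam
    re.compile(r"crypto\s+giveaway", re.IGNORECASE),        # scam bait
    re.compile(r"verify\s+your\s+account", re.IGNORECASE),  # phishing
]

def should_remove(post_text: str) -> bool:
    """Flag a post for removal if it matches any configured pattern."""
    return any(p.search(post_text) for p in SPAM_PATTERNS)

posts = [
    "Check out this dressage clinic recap!",
    "CRYPTO GIVEAWAY!!! click here to claim",
]
print([p for p in posts if not should_remove(p)])
# -> ['Check out this dressage clinic recap!']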
Reddit’s entire platform runs on this engine of community engagement, with more than 100,000 active sub-Reddits putting users in charge of pointing others to the most engaging and worthwhile content. Reddit user AkaashMaharaj said in the brief that this model is what makes Reddit special. “The fact that Reddit has delegated moderation to volunteer human beings, supported by automated tools, is the platform’s single greatest strength,” AkaashMaharaj said. “It is a model that should be fostered and encouraged at other social media platforms.”
In the brief, Reddit argues that recommendations “are the very thing that make Reddit a vibrant place” and “if petitioners succeed in this case in allowing a plaintiff to overcome Section 230” immunity “merely by pleading” that YouTube should be liable for allegedly recommending extremist content, Reddit users “will not volunteer their time to moderate their communities if doing so carries a serious risk of being sued.”
A question for Congress, not SCOTUS
A Reddit spokesperson told Ars that Reddit takes violations of its extremism policy seriously, relying on human review and automated tools to ensure that users who post extremist content are banned and that violating content is removed at scale. The spokesperson said that a federal law already makes inciting terrorism illegal and that other legal avenues exist to hold platforms accountable for recommending content that incites terrorism. However, Reddit agrees with Google that weakening Section 230 would cause more harm than good by introducing liabilities for platforms already moderating in good faith.
In addition to tech companies filing briefs over the past few days, many industry and advocacy organizations also chimed in. Among them was the Electronic Frontier Foundation, whose brief echoed Reddit by warning the Supreme Court that weakening Section 230 puts Internet users’ free speech at risk. Cope told Ars that EFF agrees that Reddit’s concerns are valid.
“Reddit users that interact with third-party content—including ‘hosting’ content on a sub-Reddit that they manage, or moderating that content—could definitely be open to legal exposure if the Court carves out ‘recommending’ from Section 230’s protections, or otherwise narrows Section 230’s reach,” Cope told Ars.
Reddit urged the Supreme Court to side with Google and leave any amendments to Section 230 to Congress. Otherwise, the court risks “dramatically” expanding “Internet users’ potential to be sued for their online interactions.”
“It may very well be true that our society should reexamine the duties of technology companies and their users in light of the rapid evolution over the last decade of the Internet, social media, and targeted recommendations,” Reddit argued in its brief. “But it must be Congress that decides what those changes should be and how broadly they should sweep. Judicial interpretation should not move at Internet speeds, and there is no telling what a sweeping order removing targeted recommendations from the protection of Section 230 would do to the Internet as we know it.”
https://arstechnica.com/tech-policy...eddit-mods-to-anonymously-defend-section-230/ (Archive)
A couple of threads on this:
https://old.reddit.com/r/privacy/comments/10i0sd5/supreme_court_allows_reddit_mods_to_anonymously/ (Archive)
https://old.reddit.com/r/technology...reme_court_allows_reddit_mods_to_anonymously/ (Archive)
And for the fun of it, I've attached the most recent filings from the latest child porn court case involving reddit that I'm aware of