Steam-Powered Hate: Top Gaming Site Rife with Extremism & Antisemitism


Introduction
On August 12, 2024, an 18-year-old donned a skull mask, black tactical helmet, gloves and body armor with a prominent sonnenrad patch. Armed with two knives and a hatchet, he stabbed five people at a tea garden near a mosque in Eskisehir, Turkey, leaving two in critical condition. Prior to the attack, the teen attached a camera to his chest to livestream the stabbings on social media. While Turkish authorities claimed that the attacker, known as “Arda K.,” was “influenced by computer games,” an ADL Center on Extremism (COE) investigation found that in addition to authoring a white supremacist manifesto that glorified mass shooters, he extensively posted extremist and hateful content on Steam, the world’s largest digital PC video game distribution service. Developed by U.S. software corporation Valve, Steam is both a storefront for video games and a social networking space for gamers across the globe.

This attack was not the first time that Steam has been linked to the violent and hateful world of extremism. Over the years, ADL and others – including researchers, media outlets, policymakers and governing bodies – have warned about the proliferation of extremist content on the platform.

To understand the scale and nature of hateful and extremist content on Steam, the ADL Center on Extremism (COE) conducted an analysis of public data on Steam Community on an unprecedented, platform-wide scale, analyzing 458+ million profiles, 152+ million profile and group avatar images and 610+ million comments on user profiles and groups.

COE found millions of examples of extremist and hateful content – including explicit hate symbols like sonnenrads and “happy merchants,” as well as copypastas (blocks of text that are copied and pasted to form images or long-form writing) shaped into swastikas – on Steam Community, the platform’s social networking space where users can connect, congregate and share content. The clear gaps in Steam’s moderation of this content inflict harm by exposing untold users to hate and harassment, enabling potential radicalization and normalizing hate and extremism in the gaming community. Understanding the extent of extremist and hateful content on the platform is key to fighting the proliferation of hate online.

Key Findings
Extremist and hateful content – in particular, white supremacist and antisemitic content – is widespread on Steam Community pages. COE identified:
- 1.83 million unique pieces of extremist or hateful content, including explicitly antisemitic symbols like the “happy merchant” and Nazi imagery like the Totenkopf, swastika, sonnenrad and others. This also includes tens of thousands of instances of users expressing support for foreign terrorist organizations like the Islamic State of Iraq and Syria (ISIS), Hamas and others.
- 1.5 million unique users and 73,824 groups who used at least one potentially extremist or hateful symbol, copypasta or keyword on the platform.
- Copypastas are a popular method for sharing extremist or hateful content on Steam. COE detected 1.18 million unique instances of potentially extremist and hateful copypastas, 54 percent (634,749) of which were white supremacist and 4.68 percent (55,109) of which were antisemitic. The most popular copypastas were, by far, variations of swastikas, at 51 percent.
- A significant number of Steam users and group pages use an avatar (profile picture) containing potentially extremist or hateful symbols. Multiple users can have the same avatar image. COE identified 827,758 user and group profiles with avatars that contained extremist or hateful symbols. The most popular were Pepe, swastikas, the white supremacist skull or “siege” mask and the Nazi Eagle. COE also found 15,129 profile avatars that contained the flags, emblems or logos of foreign terrorist organizations, the most popular of which was ISIS.
- COE identified 184,622 instances of extremist or hateful keywords on Steam. Of those, 162,182 keywords were employed by 119,018 accounts in comments or on their profiles. Additionally, 18,352 groups had extremist or hateful keywords on their group profiles. The keywords most used in an extremist or hateful context on Steam Community are “1488,” “shekel” and “white power.”
- COE analysts identified thousands of profiles that glorify violent extremists, like white supremacist mass shooters. These include avatar pictures featuring mass shooters, references to manifestos and stills from livestreamed attacks, like the 2019 Christchurch, New Zealand, shooting. In some cases – like the 18-year-old white supremacist who attacked a tea garden in Turkey in August 2024 – users posting this content on Steam have subsequently committed acts of offline violence.
While Steam appears to be technically capable of moderating extremist and hateful content, the spread of such content is due in part to Valve’s highly permissive approach to content policy. In rare, notable cases, Steam has selectively removed extremist content, largely targeting extremist groups publicized in media reporting or responding to governmental pressure. This enforcement, however, has been largely ad hoc, and Valve has failed to systematically address the issue of extremism and hate on the platform.
Background
Platform Overview

Valve’s Steam platform is the world's most popular online gaming marketplace. A 2020 industry-wide survey of game developers found that most respondents (around 60 percent) made at least some of their money selling games on Steam. Another 41 percent generated at least some revenue selling games directly to customers, while a third made between 76 percent and 100 percent of their game revenue on the Steam platform. In 2021, Steam’s annual game sales peaked at over $10 billion, accounting for a significant portion of all PC game sales globally.

Steam Community is an optional social media component of Steam. It is a fully functional social media platform, complete with profiles, which users can customize with text and images; groups, where users participate in threaded discussions; and comments, where people can interact with friends.

Extremism and Steam

Steam’s public-facing content policy makes no mention of hate or extremism. In 2018, following controversy over a game that simulated a school shooting, Valve announced it would allow all games on Steam “except for things that we decide are illegal, or straight up trolling.” This stands in sharp contrast to the much more comprehensive policies of other gaming platforms and live service providers like Activision Blizzard and Microsoft. Even Roblox, which Hindenburg Research recently called out for its handling of numerous kinds of heinous content, has an anti-extremism and terrorism policy in place.

Methodology
To understand the scope of extremist content on Steam on a platform-wide scale, COE’s quantitative analysis examined three types of media: extremist symbols (in profile and group avatars), copypasta (in multi-line text fields) and keywords (in short and long text fields). Concurrently, COE analysts conducted in-depth qualitative investigative research into users and groups of interest.

The hate symbol image detection research in this study was conducted using HateVision, a proprietary machine learning and artificial intelligence (AI) tool developed by the Center on Extremism. Leveraging cutting-edge computer vision technology, HateVision is trained on a comprehensive dataset of 39 key symbols often used in extremist and antisemitic contexts. This powerful tool rapidly scans and identifies hateful content in images, enabling COE to analyze vast amounts of data and detect hate, extremism and antisemitism with high precision. An exploration of methodology is available at the bottom of this page.
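HateVision itself is proprietary, and its implementation details are not public. For readers curious about the general shape of such a pipeline, below is a minimal sketch of a multi-label image classifier applied to avatar images; the architecture, label list and weights file are illustrative assumptions, not ADL’s actual system.

```python
# A minimal sketch of a multi-label hate-symbol classifier. The architecture,
# label list and weights file are hypothetical stand-ins; HateVision itself
# is proprietary and its details are not public.
import torch
from PIL import Image
from torchvision import models, transforms

SYMBOL_LABELS = ["swastika", "sonnenrad", "happy_merchant", "totenkopf"]  # the real tool covers 39 symbols

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(SYMBOL_LABELS))
# model.load_state_dict(torch.load("symbol_classifier.pt"))  # hypothetical fine-tuned weights
model.eval()

def detect_symbols(image_path: str, threshold: float = 0.9) -> list[tuple[str, float]]:
    """Return (symbol, score) pairs scoring above a high-precision threshold."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        # Sigmoid per class: a single avatar can contain several symbols at once.
        scores = torch.sigmoid(model(preprocess(image).unsqueeze(0)))[0]
    return [(SYMBOL_LABELS[i], s.item()) for i, s in enumerate(scores) if s.item() >= threshold]
```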

Overall Proliferation of Extremist and Hateful Content
White supremacist, antisemitic and hateful content is widespread on Steam Community pages. Aggregating images, keywords and copypasta content, COE identified 1.83 million unique pieces of potentially extremist or hateful content. 1.5 million unique users and 73,824 groups used at least one potentially extremist or hateful symbol, copypasta or keyword on the platform.

Images

Of the approximately 152 million distinct Steam profile and group avatar images evaluated by HateVision, 493,954 contained potentially extremist symbols. On Steam, multiple users can have the same avatar image. COE found that 827,758 of the approximately 273 million users with custom profile pictures, and 57,327 of the roughly 10.86 million groups with custom profile pictures, used an avatar containing a potentially extremist symbol. Pepe, swastikas, siege masks and the Nazi eagle were the most frequently identified symbols[1].

From left to right: an antisemitic Steam Community avatar with a Nazi-dressed Pepe sniping a happy merchant, suggesting violence against Jews; a Steam Community avatar showing Pepe in an S.S. uniform outside of Auschwitz; an antisemitic Steam Community avatar combining the happy merchant meme with Pepe; and a white supremacist Steam Community avatar depicting Pepe in an S.S. uniform.

Within those 493,954 images, HateVision detected 996,808 instances of extremist symbols, as multiple symbols can appear within a single image (and multiple users can use the same image as their profile avatar). Of the 996,808 detections, Pepe was the most detected symbol, accounting for 544,267, or 54.6 percent, of all symbol detections. The next most common symbols were swastikas (around 90,000, or 9 percent of all detections) and siege masks (over 85,000, or 8.6 percent of detections).

These numbers are likely to be conservative. As discussed in the methodology section of this report, the HateVision model was deliberately trained to optimize precision (the proportion of images the model flags that are actually extremist) rather than recall (the proportion of all extremist images that the model flags). COE also reviewed only publicly available profiles. This means far more profiles likely display extremist symbols than were flagged by the HateVision system.
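For reference, precision is TP / (TP + FP) and recall is TP / (TP + FN), where TP, FP and FN are true positives, false positives and false negatives. Below is a brief sketch of how a precision-first threshold could be chosen on labeled validation data; this is a generic illustration, not ADL’s documented tuning procedure.

```python
# Illustrative threshold tuning that favors precision over recall, as the
# methodology described above. Scores are model confidences on a labeled
# validation set; labels are 1 (extremist) or 0 (benign).
def precision_recall(scores, labels, threshold):
    tp = sum(s >= threshold and y == 1 for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y == 0 for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == 1 for s, y in zip(scores, labels))
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def pick_threshold(scores, labels, min_precision=0.95):
    """Lowest threshold (hence highest recall) that still meets the precision floor."""
    for t in sorted(set(scores)):
        p, _ = precision_recall(scores, labels, t)
        if p >= min_precision:
            return t
    return max(scores)  # fall back to the strictest cutoff
```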

Group profile with an antisemitic avatar image, displaying the Star of David badge used by the Nazis to identify Jewish people. The Hebrew text reads “Jew” and “Anne Frank, in the attic,” showing clear antisemitic connotations.

COE also identified tens of thousands of pieces of terrorism-related content on Steam Community, locating 15,129 public accounts with profile pictures containing the logos and flags of foreign terrorist organizations like ISIS, Hezbollah, the Al-Qassam Brigades, Palestinian Islamic Jihad (PIJ), Hamas and others. COE applied the same rigor to identifying terrorism-related content as to white supremacist material; the number of profiles displaying this type of content via an avatar was simply lower than the number elevating white supremacist or Nazi content.

Copypastas

After scanning publicly available group comments and summaries, as well as profile comments and summaries, for 177 known examples of extremist ASCII[2] art, COE detected 1.18 million potentially extremist copypastas, 1.09 million of which were determined to be hateful or extremist with a high degree of confidence. Of the 1.18 million copypastas COE detected, 55,109, or 4.68 percent, were antisemitic and 634,749, or 54 percent, were white supremacist. The most popular copypastas were, by far, iterations of swastikas.
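As an illustration of how this kind of scan can work, the sketch below normalizes text and fuzzy-matches it against known patterns so that lightly altered variants still register. The reference patterns are not public, so the entry shown is a placeholder, and the matching approach is a generic one rather than COE’s actual method.

```python
# Sketch of fuzzy copypasta matching. KNOWN_COPYPASTAS stands in for the 177
# reference patterns, which are not public; difflib's similarity ratio lets
# lightly altered variants still register as matches.
from difflib import SequenceMatcher

KNOWN_COPYPASTAS = {
    "swastika_block": "...",  # placeholder for a large ASCII-art block
}

def normalize(text: str) -> str:
    # Collapse whitespace and case so trivial edits don't break matching.
    return " ".join(text.lower().split())

def match_copypasta(comment: str, min_ratio: float = 0.85):
    """Return (pattern_name, similarity) for the best match, or None."""
    norm = normalize(comment)
    name, ratio = max(
        ((n, SequenceMatcher(None, norm, normalize(p)).ratio())
         for n, p in KNOWN_COPYPASTAS.items()),
        key=lambda pair: pair[1],
    )
    return (name, ratio) if ratio >= min_ratio else None
```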

Swastikas, the n-word and an antisemitic copypasta involving a knife (shown below) were the most common copypastas on Steam Community.

COE found that 424,439 users posted 808,629 profile comments with extremist copypastas onto 538,092 users' profiles. Most of these users—418,889, or 98.7 percent—posted an extremist copypasta in a profile comment fewer than 10 times, collectively accounting for 88.0 percent of profile comments with extremist copypastas. The most prolific 1.31 percent of copypasta posters, on the other hand, posted a disproportionate 12.6 percent of all comments with extremist copypastas. The outsized impact of this small number of users shows how readily Valve could curb the proliferation of hate and extremism on the platform by focusing enforcement on them.
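The concentration figures above follow mechanically from per-author comment counts. A sketch, assuming a list of author IDs (one per copypasta comment) is available:

```python
# Sketch of the poster-concentration calculation described above: count
# copypasta comments per author, then measure how much of the volume the
# small prolific minority accounts for.
from collections import Counter

def poster_concentration(authors: list[str], cutoff: int = 10) -> dict[str, float]:
    counts = Counter(authors)                      # copypasta comments per user
    total_comments = sum(counts.values())
    casual = [c for c in counts.values() if c < cutoff]
    return {
        "pct_users_below_cutoff": 100 * len(casual) / len(counts),
        "pct_comments_from_casual": 100 * sum(casual) / total_comments,
        "pct_comments_from_prolific": 100 * (total_comments - sum(casual)) / total_comments,
    }
```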

Keywords

COE identified 184,622 instances of extremist or hateful keywords on Steam. 119,018 accounts employed 162,182 extremist keywords in comments or on their profiles. Additionally, 18,352 groups had extremist or hateful keywords on their group profiles. The keywords most used in an extremist or hateful context on Steam Community are “1488,” “shekel” and “white power.”

Of these 184,622 detections, 33,506 keywords, or 18.6 percent, were antisemitic and 101,288 keywords, or 54.8 percent of all detections, were white supremacist.

The most common keyword was 1488, a white supremacist shorthand combining the “14 words” (“We must secure the existence of our people and a future for white children”) with “88,” numeric code for the abbreviation “HH”—the eighth letter of the alphabet repeated twice—which stands for “Heil Hitler.” COE detected 60,721 instances of 1488, or 32.9 percent of all extremist keyword detections.

Most popular keywords. Note that semantically similar keywords are grouped for this visualization. For example, “zog” and “zogbot” are counted as instances of the concept “zog” and “juden” and “judin” are grouped as “juden.”
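Below is a sketch of how keyword detection with this concept grouping could be implemented. The blocklist and variant table are small illustrative subsets, and in practice each hit would still require contextual review, since words like “shekel” also occur in benign contexts.

```python
# Sketch of keyword detection with canonical grouping, as described in the
# caption above. The variant table is an illustrative subset, not the
# actual list used in the study.
import re
from collections import Counter

CANONICAL = {
    "zog": "zog", "zogbot": "zog",       # grouped as the concept "zog"
    "juden": "juden", "judin": "juden",  # grouped as "juden"
    "1488": "1488",
}
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, CANONICAL)) + r")\b", re.IGNORECASE)

def keyword_counts(texts: list[str]) -> Counter:
    """Count keyword hits across profile/comment texts, merged by concept."""
    hits = Counter()
    for text in texts:
        for match in PATTERN.findall(text):
            hits[CANONICAL[match.lower()]] += 1
    return hits
```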

COE also detected thousands of users with profile names dedicated to foreign terrorist organizations, like ISIS, Hamas, Hezbollah and others, as well as the names of known terrorists.

Violent Extremist Content
Through qualitative investigative research, COE analysts identified thousands of profiles glorifying violent extremists. These profiles include avatar pictures featuring mass shooters, references to manifestos and stills from livestreamed attacks. In at least two cases, users posting this content on Steam have subsequently committed acts of offline violence.

Profile Pictures and Names

COE found that many users glorify violent extremists, particularly white supremacist mass shooters, using profile avatars depicting killers or referencing them in profile names.

Brenton Tarrant, who killed 51 people and injured 89 more in a 2019 attack on a mosque in Christchurch, New Zealand, and Anders Breivik, a Norwegian white supremacist who killed 77 people in 2011, were particularly popular profile avatars. COE investigators located hundreds of accounts with images of Tarrant and Breivik in their profile avatars.

In a particularly disturbing example, one user with the profile name “TND,” referencing a popular white supremacist copypasta meaning “total n****r death,” displayed a profile avatar depicting actor Ryan Gosling as Brenton Tarrant, complete with a weapon covered in white script, a helmet and body armor, all resembling those Tarrant used during the attack. Gosling is shown standing outside of a mosque, further referencing Tarrant’s attack.

Hundreds of users featured other violent extremists in their avatar images, such as Stephan Balliet, a white supremacist and antisemite who attacked a synagogue in Germany on Yom Kippur in 2019, killing two people and injuring two others.

Profile names and other customizable text in user profiles also contained thousands of references to violent extremists. Thousands of users named themselves after Tarrant, Breivik and Balliet. Another popular subject for avatars and profile names was Payton Gendron, who attacked the Tops supermarket in Buffalo, New York, in 2022.

Showcases

Steam Community profiles also feature a “showcase” section that allows users to post custom content in the form of images. These also proved popular for sharing violent extremist content.

Several users posted screenshots from Tarrant’s livestream of the mosque attack. One user posted to their profile showcase a cartoon depiction of Tarrant holding his manifesto, “The Great Replacement,” like a bible with the caption, “Saint tarrant [sic].” Referring to mass killers as “saints” is common among those who glorify violent white supremacists. The showcase image serves as an alternative way of signaling ideology and glorifying Tarrant without directly referencing his act of violence, which may give it broader appeal and make it less detectable to laypeople.

Collections and Mods

On Steam Community, members can also publish “collections,” groups of game modifications – commonly called mods – that allow users to alter games or add custom content. COE located many collections dedicated to Tarrant and other violent white supremacists, allowing users to recreate mass shootings in-game and otherwise glorify the perpetrators. One user, who named their profile after Tarrant and used an image of the Australian shooter as their avatar, created a collection for the popular game Garry’s Mod called “the Australian Shitposter collection.” The cover image for the collection shows an animated character dressed in the body armor Tarrant wore during the attack, wielding an assault weapon modeled after the one Tarrant used. The image shows the character next to a kebab meat rotisserie, implying the character is killing Muslims as Tarrant did in 2019.

Another user created a collection for Garry’s Mod called the “Brenton Tarrant Pack,” which includes mods that enable users to recreate the Christchurch shooting in the game.

COE also found hundreds of other mods for Garry’s Mod and other games that specifically reference mass shootings. One user alone posted maps – downloadable spaces – for the Christchurch shooting, Columbine, the Tops supermarket shooting and others. Several hundred users celebrated the maps, making comments like “based,” “sigma map,” “remember lads, subscribe to PewDiePie [a direct reference to Tarrant’s remarks just as he began his attack]” and “make synagogue next,” and requested that the modder recreate other shootings. The modder also created several detailed models of the weapons and armor used by Tarrant, Gendron, Balliet and several others.

Violent Extremist Content on Steam and Offline Violence: August 2024 Turkey Attack
While the volume of content glorifying violent extremists on Steam is relatively small compared to other extremist and hateful posts on the platform, in one case, a user posting this content on the platform engaged in an act of offline violence.

On August 12, 2024, an 18-year-old man identified by authorities as “Arda K.” conducted a stabbing attack at a tea garden near a mosque in Eskisehir, Turkey, injuring five people. A COE investigation found that the attack was fueled by the extremist’s belief in accelerationism and inspired by past mass killers, including white supremacists.

Following the attack, COE identified a Steam Community profile belonging to the attacker with the username “skreewie.” While there is no evidence to suggest that the attacker was directly inspired by extremist content on Steam or that he shared details about his attack on the platform, a review of his account found that prior to the attack (and when his profile was public), he had shared a large amount of extremist content – including using an image of a young Anders Breivik as his avatar – and had threatened violence on Steam. His apparent radicalization reflects what violent white supremacists seek to achieve: inspiring like-minded people, through violent and hateful content, to act in the real world.

Skreewie’s publicly available comments paint a striking portrait of an individual with violent extremist beliefs who made repeated, direct calls for violence. Since creating his account in 2015, the attacker posted 301 comments on 17 other users' profiles, many of which used explicitly white supremacist accelerationist language. As is common among accelerationists who commit violent attacks, the attacker referred to other mass shooters as “saints,” writing, “Breivik did more to ‘protect and serve’ his people in one afternoon than any police force has in decades... hail saint Breivik.”

He also referenced Pulse nightclub shooter Omar Mateen, who killed 49 people in a 2016 Islamist terrorist attack on a popular gay nightclub, writing, “If Mateen can kill fifty, imagine what you can do.” He favorited a “guide” – a space that, when used as intended, allows Steam members to share tips on games – that consisted of a detailed 5,500-word manual for evading FBI and other government surveillance, suggesting an interest in its tactical guidance.

On Steam, the Turkey attacker also posted vitriolic antisemitic comments and copypastas. He made five comments referencing “ZOG,” as well as one mentioning “6 gorillion,” a common white supremacist reference to the approximately six million Jews who were killed during the Holocaust.

The attacker also created four collections of game modifications. Two of the collections include extremist imagery. The first, a collection the user titled “russia [sic],” has a cover image of a concentration camp gas chamber. Another image associated with the collection shows a 19-year-old Australian man who was arrested in June 2024 for attempting a mass stabbing at the Newcastle Museum after posting a white supremacist manifesto. Another collection features an image of Anders Breivik during his attack in Norway.

Left: Cover image for the attacker’s Steam collection depicting a Nazi gas chamber used to exterminate Jews during the Holocaust. Center: Image of the 19-year-old Australian who attempted a mass stabbing that the Turkey attacker used in his “russia [sic]” collection. Right: Collection cover image posted to Steam by the Turkey attacker showing Breivik.

Much as the attacker glorified Breivik, Tarrant, Balliet and others, other users now honor him: COE’s review of Steam located several accounts already using the Turkey attacker’s moniker and profile image.

Profiles honoring the Turkey attacker. Left: A user changed their profile name and avatar to match that used by the Turkey attacker on Steam. Right: A user with a profile name combining the Turkey attacker’s real name with that of Brenton Tarrant.

Content Moderation and Moderation Evasion
The fact that extremist and hateful content is relatively easy to locate on Steam – and that, in cases like the Turkey attack, a user posting this kind of content on the platform went on to commit an act of violence – raises questions about the efficacy of Steam’s moderation efforts.

In the past, Steam has removed extremist content from its platform. For example, Valve removed specific extremist groups from the platform only after journalists publicized them. And in response to a 2019 complaint by Germany's Media Agency Hamburg/Schleswig-Holstein, Steam removed dozens of pieces of Nazi-related content that violated German law.

A COE analysis of copypastas on Steam also found evidence that Steam attempted to moderate certain extremist content before stopping for unknown reasons. From late 2019 through the height of the COVID-19 pandemic, when users in gaming spaces reached an all-time high, the use of several swastika copypastas sharply dropped to near-zero rates. The same swastika copypastas then sharply increased in frequency in September 2020, suggesting that Steam moderated this content for a brief period. There is no observable difference between the variations in copypasta posted prior to November 2019 and those posted after 2020.
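The gap described above is the kind of pattern that emerges when matched comments are bucketed by month. A sketch of that check, assuming a timestamp is available for each matched comment:

```python
# Sketch of the temporal check behind the observation above: bucket matched
# copypasta comments by month and flag months where volume collapses to
# near zero, which is how a quiet moderation period would show up in the data.
from collections import Counter
from datetime import datetime

def monthly_volume(timestamps: list[datetime]) -> dict[str, int]:
    return dict(Counter(t.strftime("%Y-%m") for t in timestamps))

def suppressed_months(volume: dict[str, int], frac: float = 0.05) -> list[str]:
    """Months whose count falls below `frac` of the median monthly volume."""
    counts = sorted(volume.values())
    median = counts[len(counts) // 2]
    return sorted(m for m, c in volume.items() if c < median * frac)
```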

Independent of Steam’s content policy, which dictates when it moderates or restricts content, COE identified two specific limitations in how Steam moderates content when it does act: content restrictions do not apply to all user-generated fields, and users can opt out of filtering by removing keywords from the block list.

Steam filters content in user summaries and comments. For example, Steam replaces the Unicode swastika symbol “卐” with “♥♥♥.” Steam also attempts to censor an emoji-based swastika, shown below.

However, its filters do not apply to all content locations. For example, Steam does not filter content in page titles or users' real name fields: COE detected one specific Unicode swastika in 853 users' real names – the exact character that Steam declines to display in user summaries. (COE found only 11 user summaries containing the same swastika; it is unclear why those 11 instances were not filtered.)

Steam also permits users to remove keywords from its block list. Below is an example of a profile viewed by a user who has added “卐” to their list of allowed words.

Steam’s keyword detection is also easy to evade. Users alter words and ASCII art to avoid exact matches with extremist search terms. For example, the user below separated the letters of the phrase “sieg heil” with spaces to avoid detection.

In the image below, a user added lines of dashes to a swastika copypasta to avoid censorship, even writing, “no swastika here :).”
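Closing these two specific loopholes is not technically difficult. Below is a sketch of a normalization pass that defeats both the spaced-letter and dash-padding tricks before a blocklist check; the blocklist entries are illustrative only.

```python
# Sketch of a filter-side normalization pass that defeats the two evasions
# shown above: padding characters (dashes, dots, pipes) are stripped and runs
# of spaced-out single letters are rejoined before the blocklist check.
import re

BLOCKLIST = ["sieg heil", "卐"]  # illustrative entries only

def normalize(text: str) -> str:
    text = re.sub(r"[-_.|]+", "", text.lower())   # drop padding characters
    text = re.sub(r"\s+", " ", text)              # collapse whitespace runs
    # Rejoin spaced-out spellings: "s i e g h e i l" -> "siegheil".
    return re.sub(r"\b(?:\w )+\w\b", lambda m: m.group(0).replace(" ", ""), text)

def violates_blocklist(text: str) -> bool:
    flat = normalize(text).replace(" ", "")
    return any(term.replace(" ", "") in flat for term in BLOCKLIST)

# violates_blocklist("s i e g  h e i l")   -> True
# violates_blocklist("no swastika here 卐") -> True
```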

Recommendations
For Valve

As ADL has said previously, Valve needs to make significant changes to its approach to platform governance, in both policy and practice, to address the ways in which hate and extremism have proliferated on Steam.

We urge Valve to:

- Adopt Policies Prohibiting Extremism. Currently there is no policy on Steam that prohibits the presence of known extremists, extremist recruitment, the celebration of extremist groups and movements or even the expression of the hateful ideologies that animate extremists. While the game industry in general lags behind social media in addressing the abuse of its products by extremist actors, we have seen some progress in the last year. ADL reviewed that recent progress and wrote about three current models game companies could implement for anti-extremism policies.
- Adopt Policies Prohibiting Hate. Steam must have clear policies that address hate and clearly define consequences for violations. Moreover, its platform policies must state that the platform will not tolerate hateful content or behavior based on protected characteristics (race, ethnicity, religion, gender, gender identity, sexual orientation, national origin). Nearly every major social media and gaming company has a policy prohibiting this kind of content—it is time for Valve to follow suit.
- Enforce Policies Accurately at Scale. Valve needs to take greater responsibility for enforcing its policies on Steam, once expanded, and do so accurately at scale. It should pair a user flagging and complaint process with a proactive, swift and continuous process for addressing hateful content, using a mix of artificial intelligence and human moderators who are fluent in the relevant language and knowledgeable about the social and cultural context of the relevant community. Valve should work with trusted industry partners and extremism experts, such as ADL, to ensure these policies’ effective design and implementation across its platforms.
- Audit and Red Team Content Moderation Practices to Close Loopholes. The moderation-evasion loopholes listed above are likely only a few of the many ways bad actors evade detection and enforcement on Steam. In addition to closing those specific loopholes, Valve should undertake a large-scale audit of how bad actors currently avoid detection, conduct a company-wide red teaming effort to simulate additional ways bad actors might abuse the platform and use the findings from both exercises to improve content moderation on Steam.
- Engage with Civil Society, Academics and Researchers. In establishing and updating its policies, Valve should consult regularly with civil society groups, academics and researchers representing a broad cross section of positions, including civil rights and civil liberties groups, and should especially seek out their advice and expertise when shaping platform policies that may affect the experience of vulnerable and marginalized groups.
For Policymakers

As ADL has stated previously, policymakers must demonstrate their commitment to disrupting hate and harassment in online multiplayer games. While government is necessarily focused on the dangers posed by social media and AI, policymakers must also pay attention to the immediate threats pervasive in online gaming environments. ADL recommends different stakeholder groups take the following steps to play an active role in promoting online safety and mitigating the risk of these dangers and their impact on users:

- Prioritize transparency legislation in digital spaces, including online multiplayer games. States are beginning to introduce, and some have successfully passed, legislation to promote transparency about content policies and enforcement on major social media platforms. Legislators at the federal level must prioritize the passage of equivalent transparency legislation that includes specific measures for online gaming companies. Game-specific transparency laws will ensure that users and the public can better understand how gaming companies enforce their policies and promote user safety.
- Enhance access to justice for victims of online abuse. Hate and harassment exist online and offline alike, but our laws have not kept up with increasing and worsening forms of digital abuse. Many forms of severe online misconduct, such as doxing and swatting, currently fall under the remit of extremely generalized cyber-harassment and stalking laws, which often fall short of securing recourse for victims. Policymakers must introduce and pass legislation, at both the state and federal levels, that holds perpetrators of severe online abuse accountable for their offenses. ADL’s Backspace Hate initiative works to bridge these legal gaps and has had numerous successes, especially at the state level.
- Establish a National Gaming Safety Task Force. As hate, harassment and extremism pose an increasing threat to safety in online gaming spaces, federal administrations should establish and resource a national gaming safety task force dedicated to combating this pervasive issue. As with task forces dedicated to addressing doxing and swatting, this group could promote multistakeholder approaches to keeping online gaming spaces safe for all users.
- Resource research efforts. Federal funding that enables independent researchers and civil society to analyze and disseminate findings on abuse is vitally important in the online multiplayer gaming landscape. These findings should inform research-based policy approaches to addressing hate in online gaming.
 