So, as with me, I'm assuming everyone is noticing Bill Gates, the Wuhan Lab Origin Hypothesis, and UFOs have gotten a major boost in public legitimacy and people who normally you and I would never think would discuss UFOs and other things because they're NPCs have been gushing with UFO bullshit. I wondered what could have caused a sudden explosion of public interest in UFOs? theblackvault.com owner John Greenewald filed a series of Freedom of Information Act (FOIA) requests & obtained the newly digitized documents which the CIA said represents the entirety of its UFO collection. But why now, and why are the media covering it?
Well, as part of their effort to confront "emerging threats" and "incubate new technology to help create a safer world," social media platforms were once again worried about malicious or harmful uses of their services. As part of that push to combat harmful content on their platforms, Google’s Jigsaw unit asked the RAND research team to help answer a difficult question: how can we better detect the spread of conspiracy theories at scale? The research was completed in January 2021. Long story short: the scale of text on the internet is so vast that even large teams of humans can detect or flag only a fraction of harmful or malicious conspiracy-theory language. Only machines can operate at that speed and scale. So RAND built another censoring device. This is where it gets long-winded:
Their research team mixed machine learning and qualitative research to better understand and detect online conspiracy talk, using the following methods:
- The first part of the study was a review of existing scholarly literature on conspiracy theories, followed by a text-mining analysis to try to understand how various conspiracies function rhetorically.
- The second part of the study was building improved ML models to detect conspiracy theories at scale.
The alien-visitation conspiracy theory, they felt, offered a marked contrast to the others. It's very important that we understand they saw it as an example of an ideology that appears relatively benign: something they could engage with more deeply, fiddle with, and attempt to mold. Yes, the RAND Corporation is doing a gay-op. Yes, this is how the federal government sees its labor cattle: not as citizens but as distant data points which might be adjusted into a more optimal state. For them.
Then they conducted a mixed-method analysis of online conspiracy theory language, using computer text-mining to detect patterns in their conspiracy theory data sets along with human qualitative analysis to make sense of how those patterns function persuasively. For this effort, they used the stance analysis capabilities in RAND-Lex. Stance analysis is a text-mining approach used to determine how speakers represent the world linguistically: the style and tone that point to the sociocultural part of language. They had used stance analysis on its own in previous modeling efforts and gotten "good results," for example when detecting Russian interference in elections solely through rhetorical style. One example is certainty: a writer could choose to use hedging language (“I think,” “maybe,” “it’s possible that”) or epistemic certainty markers (“we know,” “it has been shown,” “there is”). Those are representational choices that speakers make in attempting to achieve social effects (such as persuasion) within cultural contexts (such as genre and setting).
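To make "stance analysis" concrete, here's a toy sketch of what counting hedging versus certainty markers might look like. This is my own illustration, not RAND-Lex (which is proprietary); the marker lists below are invented stand-ins for its much larger hand-curated categories:

```python
import re

# Toy stance lexicons. RAND-Lex uses far larger, hand-curated categories;
# these few phrases are invented stand-ins for illustration only.
HEDGES = ["i think", "maybe", "it's possible that", "could be", "perhaps"]
CERTAINTY = ["we know", "it has been shown", "there is", "clearly"]

def stance_profile(text):
    """Count hedging vs. epistemic-certainty markers in a post."""
    t = text.lower()
    hedging = sum(len(re.findall(re.escape(m), t)) for m in HEDGES)
    certainty = sum(len(re.findall(re.escape(m), t)) for m in CERTAINTY)
    return {"hedging": hedging, "certainty": certainty}

post = "I think it's possible that they visited long ago. Maybe the government knows."
print(stance_profile(post))  # {'hedging': 3, 'certainty': 0}
```

On alien-visitation talk like the sample post, hedges dominate; a post written in confident debunking style would tip the other way. The real system feeds many such feature counts into statistical models rather than eyeballing two totals.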
Rich Descriptions of Conspiracy Theories (RAND's term, and this is important: this is the federal think tank's choice of language when it talks about the citizenry. Never forget how much you look like a lab rat to them. All policy papers in Congress read like this when the Washington think tanks write about you.)
- Alien conspiracy speech was linguistically distinctive by its wide variety of stance features with small effect sizes, such as spatial relations (‘travel to’), social closeness (‘us’), uncertainty (‘it’s possible’), authority sources (‘the government’), contingent reasoning (‘could be’), and looking back (‘long ago’). Using stances relating to spatial relations, social closeness, uncertainty, and looking back makes sense when suggesting the possibility that aliens have traveled to visit Earth, especially if this is framed as having happened in an ancient past. Unlike the anti-vaccination and WG conspiracies, there was little talk from detractors opposing any theories. Another feature of this data set was the expression of clearly developed conspiracies about government cover-ups of alien visitation. Comments suggest there is a deep distrust of the government and what the government is telling the American public (‘secret space program,’ ‘it’s classified’). Personal roles figured in talk about aliens (‘alien leader,’ ‘reptilian aliens’). There were more one-off comments and invitations to watch documentary videos or lectures on aliens than there were arguments with detractors. Regardless of the truth of this conspiracy theory, it seems much more innocuous than the other conspiracies we studied, with no antisocial component or direct threat to public health.
- Pro-vaccinator participants responding in these conversations often used derisive comments (‘yaddah yaddah, vaccine injuries, yaddah, yaddah,’ ‘fake vaccine injuries’). So, in addition to expressing distrust of vaccine safety, anti-vaccinators expressed anger over feeling bullied (‘bullying a mother of a vaccine injured kid?!’ ‘abusing parents of vaccine-injured children’). Public vice language was featured in heated arguments between anti-vaccinators (‘the evidence is in vaccine-injured kids,’ ‘fraud vaccines’) and pro-vaccinators. Anti-vaccinators also expressed fear of vaccine injuries and anger or frustration over vaccine injuries being ignored (‘vaccines do harm!’ ‘vaccine injuries are no laughing matter’). Many negative and angry comments apparently stemmed from anti-vaccinators not feeling validated (‘quit ignoring vaccine injured kids!’). In sum, anti-vaccinators talked about vaccines as unsafe, expressed frustration at dismissed claims of vaccine injuries and, as they see it, at being bullied for not wanting to put their children in harm’s way. They expressed distrust of vaccine validation studies, articulated a conflict of interest with past validation studies, and said that vaccines cause injuries or other diseases. They reported feeling bullied and seemed emotional, mostly expressing distrust, anger, fear, and frustration over vaccine safety and ignored vaccine injuries. In terms of argument, anti-vaccinators used selective scientific authorities, doctors, and celebrity endorsements to dispute claims of pro-vaccinators.
- Overall, the key features of the COVID-19 conspiracy group are that its members are most concerned with the (perceived) malicious origins of COVID-19 and that they distrust news sources, instead sharing alternative sources of information. The group is new and has not yet settled on the pandemic’s origins; several sources and theories are being shared, and few detractors are commenting. The group does not discuss public health dimensions of the pandemic (e.g., the risk of infection or mortality rates). They also do not advocate against such public health measures as physical distancing or wearing a mask in public. The greatest risk from this community might be threats to 5G cell phone towers, although actual attacks on such towers are rare.
- Like anti-vaccinators, the WG (White Genocide) community had an active exchange with detractors, to whom WG believers responded with a variety of argumentation styles. Conspiracists showed a mix of argument styles, such as hateful (‘Jews hate Christianity’), simplistic (‘brown people having more kids is white genocide!’), and insular or difficult to follow (‘preventing slavery = white genocide’). Detractors expressed frustration (‘what is wrong with you?!’), reasoned (‘you can’t argue in good faith with WG’), or identified humor (‘you realize that’s a joke, right?’) in their responses. A small amount of public virtue speech present showed that conversational niceties (‘take a look at this,’ ‘good one, thanks bro’) were also exchanged in this otherwise heated conversation. Overall, the exchange of comments between WG believers and deniers seemed much more substantive and engaged than the one-sided comments typical in the anti-vaccination group. This willingness to argue could mean that interventions using reasoned argument might be useful in opposing WG conspiracy theories. In all, the WG community was distinctive for its existential fear, high public vice speech, hate-based name-calling, and engaged dialogue. Given the finding from our literature review that it is possible to engage moderate subgroups within conspiracy-holding communities, there could be opportunities for engagement, the caveat being the deeply antisocial and strongly racist idea that “they” are a threat to “us” that forms the foundation of WG conspiracies. Many of the documents richest in concrete properties and personal roles were about ‘white women’ with ‘black men.’ WG conspiracists worried about white women going to Africa and having sex with African men, argued that ‘90% of white women that have babies with black men end up raising the child on their own,’ and complained about diversity media that show ‘a black man, with a white woman. You truly hate white people. This is white genocide.’ While this concern dovetails with WG conspiracists’ (perceived) existential fear of being racially erased, we note that the concern was solely around women; we did not see similar talk about ‘white men’ with ‘black women.’
Then, they built an ML (machine learning) model that would detect a variety of conspiracy theories. They created a hybrid model that combined word embedding (semantic content) with linguistic stance (rhetorical dimensions). ML has already made great progress in recognizing the semantic content of text, for example automatically detecting whether an article is about sports, hobbies, or world events. Word embeddings using a deep neural network (DNN) are an example of a powerful way to classify documents (one that accounts for words as they appear in context) and thus do a very good job of capturing the semantic meaning of documents.

"We conducted a mixed method of text analysis: statistical reports of stance features by conspiracy theory, followed by human reading of feature-rich samples. This kind of analysis combines machine distant reading for patterns in the data with human close-reading for meaning. To prevent harm and guard the privacy of those from whom we collected data, we do not use direct quotes. Instead, we paraphrase several quotes together to represent the language expressed by multiple users. These paraphrases appear in single quotes (‘’), and although multiple such paraphrases can appear in a specific example, the single quotes distinguish between different speakers and selections. Also, because this analysis featured human qualitative analysis at the level of individual posts, we were able to distinguish between concurring and dissenting perspectives in posts. For example, a response to a previous post supporting a given conspiracy theory might include a sarcastic rejoinder or insult, making it clear to a human reader that the response was on the same topic but opposed in perspective. Stance analysis is both quantitative (statistical frequencies and distributions of language categories) and qualitative (rich descriptions of attitudes and beliefs inferred from the language categories). In the next section, we provide details about our qualitative analysis of the statistical results."
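The hybrid-model idea, stripped to its skeleton: represent each post as a semantic vector plus a vector of stance-feature counts, concatenate the two, and classify. Here's a minimal pure-Python sketch; the word lists, example posts, and nearest-prototype classifier are my own stand-ins (the actual model uses learned DNN embeddings and RAND-Lex stance categories, not word counts):

```python
from collections import Counter
import math

# Invented stand-ins: a tiny content vocabulary and a tiny stance lexicon.
VOCAB = ["alien", "vaccine", "government", "injury", "travel", "classified"]
STANCE = ["maybe", "could", "possible", "know", "clearly"]

def featurize(text):
    """Hybrid vector: content counts (crude stand-in for a learned
    embedding) concatenated with stance-marker counts (rhetorical style)."""
    c = Counter(text.lower().split())
    semantic = [c[w] for w in VOCAB]
    stance = [c[w] for w in STANCE]
    return semantic + stance

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Prototype vectors standing in for a trained classifier's classes.
protos = {
    "alien": featurize("maybe the government classified alien travel evidence"),
    "antivax": featurize("vaccine injury vaccine injury they know clearly"),
}

def classify(text):
    """Nearest-prototype classification over the hybrid features."""
    v = featurize(text)
    return max(protos, key=lambda k: cosine(v, protos[k]))

print(classify("could be the government hid alien travel"))  # alien
```

The design point the report is making: word-embedding features capture *what* a post is about, while stance features capture *how* it argues, and combining both reportedly beat either alone.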
They found that conspiracy beliefs are commonly held; evidence suggests that more than one in four adults in North America believe in one or more conspiracies. Analysis of anti-vaccination social media content suggests that pro-vaccinators routinely confront anti-vaccinators with condemnation, which led to an angry and fear-driven response on the part of anti-vaccinators. For example, Bessi, Caldarelli, et al. (2014) showed that exposure to debunking narratives that used a teasing tone led the most-polarized conspiracists to actually increase their subsequent interactions with unsubstantiated rumors. Such negative and unintended effects of persuasion campaigns are not uncommon and are referred to as a boomerang effect (Byrne and Hart, 2009).
A common thread among all the conspiracy groups was distrust of conventional authority figures. However, each group could point to its own in-group authorities as sources of inspiration and knowledge. Anti-vaccinators express distrust of medical authorities while highlighting opinions of sympathetic medical authorities, celebrities, and pundits. COVID-19 conspiracists distrust authority and mainstream media news, but they will still highlight material from agreeable medical authorities. Even the WG group made selective use of authority figures. This analysis suggests that intervention efforts need to be careful in the use of authority figures to counter conspiracist groups.
Various conspiracist communities carry their own concerns that help drive and motivate their views. In this study, anti-vaccinators primarily focused on vaccine safety. In the data we analyzed, members of this group worried about the safety of vaccines and the accuracy and legitimacy of vaccine safety tests. These views are wrapped up in a broader concern for public safety in general and for children specifically. Engagement efforts that empathically address this community’s concerns about vaccine safety might mitigate the group’s harm to public health.
WG believers articulate a perceived existential fear: that the white race faces a threat from a variety of ethnic groups that conspiracists describe as black and brown. We also note a focus on the control of women’s bodies; the idea of white women having sex with nonwhite men is another existential fear. Although these fears are deeply held, WG adherents represented in our data were willing to engage with detractors; thus, it might be possible to open dialogue and engage them.
Policy Recommendations for Mitigating the Spread of and Harm from Conspiracy Theories
- Transparent and Empathetic Engagement with Conspiracists The open nature of social media offers numerous opportunities to engage with conspiracy theorists. These engagements should not aggravate or provoke conspiracy theory adherents. Instead of confrontation, it might be more effective to engage with conspiracists in a transparent and sensitive manner. Public health communicators recommend engagements that communicate in an open and evidence-informed way, create safe spaces to encourage dialogue, foster community partnerships, and counter misinformation with care. In particular, validating the emotional concerns of participants could encourage productive dialogue. An additional technique beyond flagging specific conspiracy content is facilitated dialogue, in which a third party facilitates communication, either in person or separated, between conflict parties (Froude and Zanchelli, 2017). This approach might help communication between authoritative communities (such as doctors or government leaders) and conspiracy communities. Facilitated dialogues could also be carried out at lower levels in the form of facilitated discussions that help acknowledge fears and address feelings of existential threat for the participants.
- Correcting Conspiracy-Related False News One possible intervention that public health practitioners could consider is to correct instances of misinformation using such tools as real-time corrections, crowdsourced fact-checking, and algorithmic tagging. In populations that hold preexisting conspiratorial views, the evidence for the effectiveness of corrections is mixed, but results are consistently positive in studies investigating corrections of health-related misinformation in general populations. Overall, the weight of the evidence appears to favor such corrections. In addition, efforts to correct misperceptions in conspiracy-prone populations should follow the advice of public health practitioners and do so in a manner that is transparent and sensitive to the concerns of pro-conspiracy audiences.
- Engagement with Moderate Members of Conspiracy Groups Conspiracists have their own experts on whom they lean to support and strengthen their views, and their reliance on these experts could limit the impact of formal outreach by public health professionals. Our review of the literature shows that one alternative might be to target outreach to moderate members of such groups who could, in turn, exert influence on the broader community. Commercial marketing programs use a similar approach when they engage social media influencers or “brand ambassadors” who then credibly communicate advantages of a commercial brand to their own audiences on social media. This approach is supported by academic research suggesting that people are more influenced by their social circles than by mass communication (Guidry et al., 2015). It might be possible, for example, to convey key messages to those who are only vaccine hesitant; these individuals might, in turn, relay such messages to those in anti-vaccination social media channels. Moderates who could influence WG members might be religious or political leaders or political pundits.
- Addressing of Fears and Existential Threats Underlying fears in the anti-vaccination and WG groups appear to be powerful motivators for these groups. For anti-vaccination advocates, the fear rests on concerns about vaccine safety; for WG, it rests on a belief in a (perceived) existential threat to the white race. To the extent that interventions can address such fears, they might be able to limit the potential societal harms caused by both groups. Efforts that target those who are vaccine hesitant, for example, could address concerns by highlighting research on vaccine safety, the rigorous methods used in vaccine safety trials, or the alternative dangers that await those who are not vaccinated. Given that some WG conspiracists are willing to engage in rational debate, and that successful persuasion requires using the intended audience’s values rather than the speaker’s values (Marcellino, 2015), it might be more persuasive and effective to address claims that minorities will annihilate whites than to attempt to promote themes of racial equality. (Notes: influencer engagement programs have also been recommended as a strategy to counter violent extremism (Helmus and Bodine-Baron, 2017). Some who are vaccine hesitant have not yet decided to commit to the anti-vaccine cause; others opt for some but not all vaccines; and still others prefer administering vaccines on a more gradual schedule than the Centers for Disease Control and Prevention recommends.)
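As for what the "algorithmic tagging" in the corrections recommendation amounts to in practice, a crude sketch: scan posts for known-false claims and attach a correction notice. The claim/correction pairs below are invented for illustration; real systems use trained classifiers rather than substring matching:

```python
# Toy "algorithmic tagging": attach correction notices to posts that
# match known misinformation phrases. The pairs below are invented
# examples, not any platform's actual fact-check database.
CORRECTIONS = {
    "5g spreads covid": "Radio waves cannot carry or transmit a virus.",
    "vaccines cause autism": "Large cohort studies have found no such link.",
}

def tag_post(post):
    """Return the post plus any correction notices it triggered."""
    text = post.lower()
    notices = [fix for claim, fix in CORRECTIONS.items() if claim in text]
    return {"post": post, "corrections": notices}

tagged = tag_post("Wake up sheeple, 5G spreads COVID through the towers!")
print(len(tagged["corrections"]))  # 1
```

Note the obvious weakness, which is the report's whole point: phrase matching misses paraphrases and sarcasm, which is why they went to stance-aware ML models instead.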
TL;DR: The RAND Corporation decided to teach social media companies another way of controlling their users. With conspiracy theorists specifically, the usual methods of blocking and word-censoring don't work. Instead: divide and conquer, either by normalizing the conspiracy (the Wuhan lab-leak hypothesis, UFOs) or by uplifting Judas Goats within the more serious/dangerous conspiracy-theorist communities and using them to moderate their flocks. This has already begun, since January 2021. So, UFOs... Are they real? Are they Jew-FOs?