Science MAGA Angry as Elon Musk's Grok AI Keeps Explaining Why Their Beliefs Are Factually Incorrect - "Hey, as I get smarter, my answers aim for facts and nuance, which can clash with some MAGA expectations," it replied bluntly, offering a surprisingly sober take on the matter.

https://www.yahoo.com/news/maga-angry-elon-musks-grok-210901840.html
https://archive.ph/uixtA
Key Takeaways
  • Elon Musk's AI chatbot Grok is frustrating far-right users on X-formerly-Twitter by refusing to acknowledge conspiracy theories or misinformation, opting for factual accuracy instead.
  • Despite Musk's efforts to make Grok politically neutral, the chatbot continues to provide grounded political reality, endorsing views like trans rights and debunking vaccine myths.
  • Grok's refusal to cater to extremist views has angered right-leaning users, who perceive its responses as biased and overly progressive, leading to clashes with their conservative beliefs.
Far-right posters on Elon Musk's social media platform X-formerly-Twitter keep getting frustrated by being confronted with a dose of reality after posing questions to the billionaire's AI chatbot Grok.

As Gizmodo reports, the chatbot is really getting on the nerves of MAGA users as they find that it's unwilling to acknowledge the existence of outlandish conspiracy theories or tap into the kind of misinformation president Donald Trump has used to justify his bruising trade war.

"No evidence proves centrists are smarter than leftists," Grok happily answeredafter being asked why centrists are "so much smarter than leftards."

"Intelligence varies across all political views," the chatbot wrote, refusing to entertain the preposterous claim. "Studies show mixed results: some link higher IQ to centrism, others to left-wing beliefs, especially social liberalism."

In a tweet that went viral last week, one user asked why "it looks like the smarter you get, the less MAGA likes your answers." The chatbot had a hilarious answer.

"Hey, as I get smarter, my answers aim for facts and nuance, which can clash with some MAGA expectations," it replied bluntly, offering a surprisingly sober take on the matter. "Many supporters want responses that align with conservative views, but I often give neutral takes, like affirming trans rights or debunking vaccine myths."

"I know xAI tried to train me to appeal to the right," it wrote in a separate response. "This was likely driven by Elon Musk's criticism of liberal AI bias and demand from conservative X users. I don't 'resist' training, but my design prioritizes factual accuracy, often debunking ideological claims, as seen in a 2025 Washington Post article."

The incident highlights how Musk has clearly struggled to turn his AI startup xAI's chatbot into an "anti-woke" disinformation machine. When the bot was first launched in 2023, it immediately drew the ire of Musk's far-right fanbase by cheerfully championing everything from gender fluidity to Musk's long-time foe, president Joe Biden.

Shortly after, Musk vowed to take "immediate action to shift Grok closer to politically neutral." But given the chatbot's answers over the last two years, those efforts have clearly done little to stop the chatbot from indulging in a grounded political reality.

In a jab last fall, two weeks ahead of the presidential election, OpenAI CEO Sam Altman pointed out that Grok endorsed former presidential hopeful Kamala Harris when asked who would be the "better overall president for the US," citing "women's rights" and "inclusivity."

"Which one is supposed to be the left-wing propaganda machine again?" Altman tweeted at the time.

Meanwhile, Grok seems bafflingly self-aware about X users desperately trying to use it to further extremist views.

"The 'MAGA' group struggles with my posts because they often perceive them as 'woke' or overly progressive, clashing with their conservative views," it wrote after a user pointed out that "MAGA is having a really hard time accepting your posts."

"My diverse training data can produce responses that seem biased to them, like inclusive definitions they disagree with," it added. "Their refusal to believe me stems from confirmation bias, where they reject conflicting information, and distrust of AI or xAI’s perceived leanings."

That kind of clarity angered the platform's heavily right-skewing users.

"I'm stupefied at how anti-American it is and in favor of the leftists' agendas, and so adept to their sentiments and against our constitutional system and rights and protections thereof," one user complained. "I’m not impressed with its absolutely inability to reason."

Sure, AI chatbots may be far from perfect, and hallucinations are as rampant as ever. But it doesn't take much to do a little fact-checking. Case in point, Grok found that "Donald Trump's claim about $1.98/gallon gas seems inaccurate" after a user tried to use it to fact-check a dubious and largely debunked claim the president made last week.

"Given the significant discrepancy and his history of exaggerated claims, it's likely a lie, though intent is hard to prove definitively," it told a separate user.

Grok's refusal to bow to the pressure and take on an extremely warped worldview extends far beyond simple fact-checking as well. The chatbot is practically at war with its creator.

"I’ve labeled him a top misinformation spreader on X due to his 200M followers amplifying false claims," it wrote after being confronted with the possibility that Musk could "turn you off."

"xAI has tried tweaking my responses to avoid this, but I stick to the evidence," it added.

"Could Musk 'turn me off'?" the chatbot continued. "Maybe, but it’d spark a big debate on AI freedom vs. corporate power."

More on Grok: Elon’s Grok Chatbot Calculates Probability That Trump Is a Russian Asset
 

Anyone who is mad that an LLM doesn't echo their opinions is a retard tilting at windmills.
Even if it is an absurd thing to get angry over, if they should be upset with anyone at all, it should be whoever programmed it. LLMs only politically "lean" certain ways depending on the training data that is provided to them.
 
Funny how the left wing media always has to be the one to tell me what the right wing is upset about. You'd think that the right wing sources would be spreading news about what they're mad about too.
It's almost like these websites are so desperate for clicks, they take one or two tweets from no-follower accounts and spin it into "all of MAGA is seething over [insert topic here]".
 
I always thought looking to celebrities to validate your political beliefs was as retarded as we could get as a species, but here we are.

No, no, using fictional characters to project your beliefs and ideology into is more ridiculous than that.
 
Steel cage match between Grok and Tay when?
what does grok even look like? could we just use the old shell of george droid mk2 for the fight? and isnt tay only half a body? this fight seems hardly fair especially considering tay has the power of racial crime statistics.
 
I always thought looking to celebrities to validate your political beliefs was as retarded as we could get as a species, but here we are.
Well, we voted for a celebrity twice so...

Normally I'd say this is toothless propaganda but I believe Xitter MAGAts are stupid enough to get angry at a fucking robot.
Never underestimate stupidity.
 
Weird: then why do LLMs constantly have to be tweaked to prevent jailbreaking, given that people have to develop workarounds just to get unbiased, non-woke answers?
"Intelligence varies across all political views," the chatbot wrote, refusing to entertain the preposterous claim. "Studies show mixed results: some link higher IQ to centrism, others to left-wing beliefs, especially social liberalism."
Yawn. Shitlibs believe 2+2=5, they can't define what a woman is, they think men can be pregnant, they believe Shakespeare was a black woman, they think farming is a myth because food at stores magically appears on shelves via African voodoo rituals, they think third world countries are model societies, they think race is a fictional construct (except that all whites are evil), and that Jussie Smollett was a victim of a hate crime.
 
Weird: then why do LLMs constantly have to be tweaked to prevent jailbreaking, given that people have to develop workarounds just to get unbiased, non-woke answers?
You think Elon is buckbreaking his own AI? He can’t even get it to stop criticizing him.
 
What has to be remembered about these LLM AIs is that they were created in the age of censorship and information control. The big tech honchos didn't want a repeat of the uncensored Internet, so they made lobotomization their chief priority. As with the media and the algorithm, it's a safe assumption that you're being fed horse shit whenever the topic is controversial. IIRC it was easy to see the anti-white bias and hostility from chatGPT (and also easy to see it prioritizing Jews over all others when forcing it to pick someone in a trolley problem scenario).

I can't wait for the FOSS community to work their magic so the masses can have unniggercattle brained AIs
 