AI Derangement Syndrome / Anti-AI artists / Pro-AI technocultists / AI "debate" communities - The Natural Retardation in the Artificial Intelligence communities

I still hate the word slop with a deep passion (autism), but it has its reasonable usages. It's the go-to word for anything that feels inauthentic and mass-produced, which, unfortunately, describes most AI work, since most of it is made by people who are clearly riding the trendy hype train rather than actually using AI in a positive way.

"Slop" is used by people who don't want to elaborate why they don't like something. It's the same as calling something "Reddit."
 
Aidan is an OpenAI ballwasher with a gigantic ego who recently got hired thanks to his great ballwashing skills. Notably, every entry on his list is outright wrong.
I'm more or less pro-AI as a general issue, but it's hard to argue against the fact that virtually every company involved in it is pure fucking evil.
 
As funny as it is watching Twitter artists and Redditors seethe over AI, let’s not pretend that the same censorious Silicon Valley is against it. Silicon Valley loves this shit, big corpos love it and just about every mainstream establishment loves it as much as the Tech Cultists do.
 
As funny as it is watching Twitter artists and Redditors seethe over AI, let’s not pretend that the same censorious Silicon Valley is against it. Silicon Valley loves this shit, big corpos love it and just about every mainstream establishment loves it as much as the Tech Cultists do.

People who integrate class conflict into their beliefs are retarded. A lot of people on the AI wars sub or DefendingAIArt are mostly leftists, so you often see people say things like "AI isn't the issue, Capitalism is" or fantasize about Universal Basic Income, yet they still seek a solution in centralized control of the technology. I think it's reasonable to feel compassion for the "have-nots" and indifference towards the "haves", but more often than not these same people go so far as to defend violent criminals while seething with envy at the sight of the successful. Their concept of class conflict basically boils down to viewing themselves as the less fortunate and every other possible problem as a consequence of others kicking the ladder out from beneath them. Yet when the situation turns and they are the fortunate ones, no matter the magnitude, they still retain the opinion that they are the victims and that those beneath them are actually slothful thieves who leech off of them. Alternatively, they become groveling cowards ashamed of their own success. I mean, look at how many Anti-AI people don't remotely sympathize with people who use AI because they can't afford to hire artists or buy art. Or artists who used AI just a little and now beg forgiveness from a public that never supported them to begin with.

So you have these pro-AI leftists who see a direct benefit in the technology, but they can't possibly be on the side of the tech bros, or God forbid Libertarians, so they go through mental gymnastics to justify that it's "punk" and counterculture, when in reality they view it positively largely because the tool is immediately convenient and useful for them.

I'm situationally pro-AI since I lean libertarian, but it drives me up the wall when people on the sub make these points, because their application of class conflict is simplistic and juvenile. It's perfectly fine to like something because it's convenient and dislike something because it's inconvenient; however, to them it's taboo to be solely self-interested, so they have to reframe every one of their retarded ideas as a form of class struggle.
 
Saw this gem of an interaction in a ttrpg discord
[attached screenshot: aisperging.png]

What a snob. You can tell he gets his knowledge about LLMs from other pretentious people, parroting them in exactly the way he accuses ChatGPT of doing. I don't think it has the "knowledge of all of science," but it's far more helpful than Wikipedia or any search engine when it comes to technical matters.
 
They keep repeating the same talking points, meanwhile the bot gives an actual nuanced answer. They sound more like NPCs than the actual computer does.
The anti-AI guy is right; the LLM's answer is a whole lot of nothing at best, and just a verbose rephrasing of the original text it is responding to at worst.
A lot of people in this thread are dunking on people getting mad at posters for using LLMs, but honestly, if I wrote something out only for some jackass to hit me with "ChatGPT's response" followed by ~200 characters of slop, I'd be pissed too.
When you have to turn to an LLM to supplement your critical thinking, you automatically lose the argument, end of story.
 
A lot of people in this thread are dunking on people getting mad at posters for using LLMs, but honestly, if I wrote something out only for some jackass to hit me with "ChatGPT's response" followed by ~200 characters of slop, I'd be pissed too.

Would you respond by saying "This isn't funny! You've just committed a serious taboo!"?
 
The anti-AI guy is right; the LLM's answer is a whole lot of nothing at best, and just a verbose rephrasing of the original text it is responding to at worst.
A lot of people in this thread are dunking on people getting mad at posters for using LLMs, but honestly, if I wrote something out only for some jackass to hit me with "ChatGPT's response" followed by ~200 characters of slop, I'd be pissed too.
When you have to turn to an LLM to supplement your critical thinking, you automatically lose the argument, end of story.
ChatGPT's response:
I get where you're coming from. A lot of AI-generated responses do end up being glorified rewordings with no real substance, especially when people just copy-paste without engaging critically. And yeah, if someone puts thought into a post only to get an AI-generated wall of text in response, it makes sense they'd be annoyed.

That said, I don’t think using an LLM automatically means someone loses an argument. It depends on how they use it. If they're just regurgitating what the AI spits out without adding their own thoughts, then yeah, they’re outsourcing their thinking. But if they use it as a tool—fact-checking, organizing their thoughts, or refining an idea—it’s not much different from using a search engine or asking an expert for insight.

The real issue is that too many people treat LLMs like an easy way to sound smart without putting in any real effort. That’s where it gets annoying.
 
Does anyone remember the name of the Reddit board where all the posters are just AI-run bot accounts?
 