The Schizo Conspiracy Thread - The conspiracies that will get you laughed out of your friend groups and subreddits

AFAIK "Junk" DNA is a common misconception, it's just DNA that isn't active everywhere and in the normal fashion, epigenetics style. It's not unexplained is what I remember when researching it (apart from some evolutionary leftovers).


LLMs don't operate in binary (outside of research projects for tiny optimized versions); they're trained with, and (usually) run inference with, floating point numbers (which cover an enormous range with non-uniform resolution/"steps"/whatever; look up how they're stored if you care deeply about it).
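Quick toy sketch (mine, not tied to any particular model or library) of what "non-uniform resolution" means for standard 64-bit floats:

```python
# Toy demo: IEEE-754 double precision has non-uniform resolution -- the gap
# between adjacent representable numbers grows with magnitude.
# math.ulp reports that gap ("unit in the last place") around a value.
import math

for x in [1.0, 1e6, 1e12, 1e18]:
    print(f"spacing around {x:.0e}: {math.ulp(x):.3e}")

# The gap is about 2.2e-16 near 1.0 but about 1.3e+2 near 1e18,
# so big numbers are stored much more coarsely than small ones.
```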

If you think AGI is impossible, read up on some of the models we have for consciousness, like Global Workspace Theory. It's not magic.
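For anyone who hasn't run into it, here's a deliberately dumb toy caricature of what Global Workspace Theory claims (my own sketch with made-up module names, not anyone's actual cognitive model): specialist processes compete, and whatever wins the workspace gets broadcast to everything else.

```python
# Toy caricature of the Global Workspace idea (not a real cognitive model):
# specialists propose content with a salience score, the most salient one
# wins the workspace, and its content is broadcast to all modules.
import random

class Specialist:
    def __init__(self, name):
        self.name = name
        self.last_broadcast = None

    def propose(self, stimulus):
        # salience is random here; a real model would use task-relevant signals
        return random.random(), f"{self.name} noticed {stimulus!r}"

    def receive(self, content):
        self.last_broadcast = content  # every module sees the winning content

specialists = [Specialist("vision"), Specialist("hearing"), Specialist("memory")]

def workspace_cycle(stimulus):
    _, winner = max(s.propose(stimulus) for s in specialists)
    for s in specialists:
        s.receive(winner)
    return winner

print(workspace_cycle("loud bang"))
```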
From what I've read about junk DNA, we essentially haven't yet found the proper tools/algorithms to understand what its function fundamentally is. It's a big wall of white noise yet to be cracked.

With LLMs, yes, it is not magic. It's becoming very clear that if you have 13 trillion words of internet and recorded history to feed into them, there's only so much juice you can squeeze from all of those articles and posts and conversations, and as the most recent versions have shown, it's turning into a self-licking ice cream cone. As much as the AI startups want to market their current models as something approaching human consciousness, all they have proven themselves to be is a decent text reference/resource assistant.

A newborn baby has enough inherent "common sense" sentience to know roughly how to deal with hot/cold, filth/clean, love/antagonism, and those hard-coded instincts are the gray area that human-built models can't ever hope to approach. Elon Musk made some goofball claims about full self-driving vehicles a decade ago, and they are still grasping at straws, brute-forcing measly solutions in tiny markets like the Netherlands, and will likely never reach the point of wholehearted legislative approval in a wide US market. The amount of assumed risk with something like a self-driving car is something insurance adjusters will not endorse without a decade's worth of proven reliability, and we are nowhere close to that.
 
With LLMs, yes, it is not magic. It's becoming very clear that if you have 13 trillion words of internet and recorded history to feed into them, there's only so much juice you can squeeze from all of those articles and posts and conversations, and as the most recent versions have shown, it's turning into a self-licking ice cream cone. As much as the AI startups want to market their current models as something approaching human consciousness, all they have proven themselves to be is a decent text reference/resource assistant.

A newborn baby has enough inherent "common sense" sentience to know roughly how to deal with hot/cold, filth/clean, love/antagonism, and those hard-coded instincts are the gray area that human-built models can't ever hope to approach. Elon Musk made some goofball claims about full self-driving vehicles a decade ago, and they are still grasping at straws, brute-forcing measly solutions in tiny markets like the Netherlands, and will likely never reach the point of wholehearted legislative approval in a wide US market. The amount of assumed risk with something like a self-driving car is something insurance adjusters will not endorse without a decade's worth of proven reliability, and we are nowhere close to that.
I'm also not convinced that LLMs are "conscious" by every (or even the most common) definition of the word, though they seem to pass most applicable tests people think up (the Turing test seems almost quaint now, I remember still believing it would likely never be "properly" passed in my lifetime).
It's amazing what some hill climbing and misnamed techniques ("attention" in LLMs has almost nothing to do with attention in, e.g., cognitive research) can do with enough compute.
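To unpack that a bit (my own minimal sketch of the mechanism, not any particular model's code): "attention" is just a softmax-weighted average of value vectors, which is why the name is misleading.

```python
# Minimal single-head scaled dot-product attention in NumPy.
# It's a softmax-weighted average of the value vectors -- nothing like
# attention in the cognitive-science sense.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays; returns (seq_len, d)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax per query
    return weights @ V                              # weighted average of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # 4 "tokens" with 8-dim embeddings
print(attention(x, x, x).shape)      # (4, 8)
```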
 
Radiation poisoning will be the next covid-style lockdown panic.
The stage is being set.
Scientists are missing/killed.
War and unrest abound, easy to blame foreigners.
This may take a few years to make the timeline perfect, but watch.
☢️
 
The term 'Conspiracy' was coined by the Zionists around the 1860s and then spread through propaganda in 1960 to gaslight and demoralise anyone questioning the fucked narrative we've all been taught since birth. There are so many conspiracies now that anyone who actually dives into them is only damaging their own mental health.
 
I get that icebergs have always been 4chan shitposting, mixing real conspiracies with nonsense like Retarded People Are Pretending or with something that could be interesting like Chipotle Food Terrorism. At first blush, going by the opening paragraph:
"The Chipotle Mexican Grill food terrorism conspiracy theory suggests that a series of foodborne illness outbreaks at the chain were not random occurrences but rather deliberate acts intended to harm the company’s reputation and financial standing. Proponents of this theory point to various incidents, such as E. coli, norovirus, and salmonella cases linked to Chipotle locations between 2015 and 2018."
Okay, that kinda makes sense, I'm open to that idea and the further explanation is just WORDSWORDSWORDS SNCA:

[screenshot of the rest of the wall-of-text explanation]
So my schizo conspiracy theory? (((Iceberg))) charts were made by the CIA to discredit actual credible conspiracy ideas based on evidence, to make everyone interested in them look retarded, and this is just the 4chan/autistic schizo variety of that.
 
the Turing test seems almost quaint now, I remember still believing it would likely never be "properly" passed in my lifetime
I’ve been wondering if an antithesis of the Turing test will come at any point. I do feel negative emotions about the fact that I have to discern whether things like photos, messages, songs, etc., were created by a computer or a person. But people who sperg “AI BAD” are more annoying to me than AI itself.
 
I’ve been wondering if an antithesis of the Turing test will come at any point. I do feel negative emotions about the fact that I have to discern whether things like photos, messages, songs, etc., were created by a computer or a person. But people who sperg “AI BAD” are more annoying to me than AI itself.
My personal whitepill is that when AIs (be it LLMs or something else) do not produce statistical patterns that tools like Pangram can detect, that's likely going to be because they're producing truly novel and interesting things (and the need to filter it out will be lessened).

Or something like that, I'm posting it in the schizo thread because it's not a very well-developed thought (yet).
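To make it slightly less schizo: the kind of "statistical pattern" I mean is stuff like perplexity and burstiness. Toy sketch below (my own crude heuristic, absolutely not how Pangram actually works); the idea is that human writing tends to swing more in word-level entropy from chunk to chunk than text sampled near a model's average.

```python
# Crude burstiness heuristic (toy only, not a real detector): split text into
# chunks, compute unigram entropy per chunk, and look at how much it varies.
from collections import Counter
import math

def unigram_entropy(words):
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def burstiness(text, chunk_size=50):
    words = text.lower().split()
    chunks = [words[i:i + chunk_size] for i in range(0, len(words), chunk_size)]
    entropies = [unigram_entropy(c) for c in chunks if len(c) == chunk_size]
    if len(entropies) < 2:
        return 0.0
    mean = sum(entropies) / len(entropies)
    return math.sqrt(sum((e - mean) ** 2 for e in entropies) / len(entropies))

# Higher variance across chunks -> more irregular, "human-looking" text under
# this (very crude) heuristic.
```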
 
If you think AGI is impossible, read up on some of the models we have for consciousness, like Global Workspace Theory. It's not magic.
I’m not versed enough to say whether we will get AGI soon, but I am versed enough to say that every single theory of consciousness we have is untestable and incorrect. We may well get AGI before we understand what consciousness in humans actually is.
It’s a mix of stuff. Some genuinely does seem to be things like old viruses that integrated and haven’t been repurposed, but there’s an awful lot that probably does something.
 
I suspect that there are people who go around downvoting everything I post (or did from 2010-17) on Reddit and other sites because they knew I was Electricbassguy/videokid
When I used to use reddit I would do this to people I got in arguments with: I'd go to their profile and downvote everything they'd said until I got bored. Then I would log onto one of my throwaways and do that too, so they got -2 to every post. Turns out that's "bad reddiquette," but I don't care; if they don't like it they shouldn't have been mean to me. >:(
 
The term 'Conspiracy' was coined by the Zionists around the 1860s and then spread through propaganda in 1960 to gaslight and demoralise anyone questioning the fucked narrative we've all been taught since birth. There are so many conspiracies now that anyone who actually dives into them is only damaging their own mental health.
Skill issue tbh.
My personal whitepill is that when AIs (be it LLMs or something else) do not produce statistical patterns that tools like Pangram can detect, that's likely going to be because they're producing truly novel and interesting things (and the need to filter it out will be lessened).
LLMs work on statistical averages though, so you are asking it to be something it's not. The point is illustrated in this meme:

[attached meme: "that one viral dog drawing contest chose the correct winner"]

It can only mimic an average of reality. It doesn't think, and it can never give you anything novel because it's not actually doing anything. You can spend millions of AI cycles to produce a copy of a photo that's designed to be cute, but it will never match a fraction of the power of someone drawing the dumbest fucking picture of their dog you've ever seen.
 
It can only mimic an average of reality. It doesn't think, and it can never give you anything novel because it's not actually doing anything. You can spend millions of AI cycles to produce a copy of a photo that's designed to be cute, but it will never match a fraction of the power of someone drawing the dumbest fucking picture of their dog you've ever seen.
Dark Souls bro uses HP Lovecraft's Shoggoth to sound profound lmao
 
I’m not versed enough to say whether we will get AGI soon, but I am versed enough to say that every single theory of consciousness we have is untestable and incorrect. We may well get AGI before we understand what consciousness in humans actually is.
I'm fairly confident you're incorrect; play around on https://www.consciousnessatlas.com/ for a while (there's plenty of testable stuff).
Skill issue tbh.

LLMs work on statistical averages though, so you are asking it to be something it's not. The point is illustrated in this meme:

[attached meme: "that one viral dog drawing contest chose the correct winner"]

It can only mimic an average of reality. It doesn't think, and it can never give you anything novel because it's not actually doing anything. You can spend millions of AI cycles to produce a copy of a photo that's designed to be cute, but it will never match a fraction of the power of someone drawing the dumbest fucking picture of their dog you've ever seen.
That's not really how it works; they're rather nonlinear, look up e.g. the universal approximation theorem. The problem with LLMs is that they're so deep/large that no one really knows what functions they're approximating anymore (we had a similar issue with dropout for a while, which worked very well empirically but without a theoretical understanding of why, and I anticipate we will eventually get there with LLMs as well).
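If you want to see the nonlinearity point without taking my word for it, here's a toy sketch (hand-rolled by me, nothing to do with any real LLM): a one-hidden-layer tanh network trained by plain gradient descent fits sin(x), which no "average of the data" would do.

```python
# Toy universal-approximation-flavoured demo: fit sin(x) with a tiny MLP.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

H, lr = 32, 0.1
W1 = rng.normal(0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

for step in range(5000):
    h = np.tanh(X @ W1 + b1)            # nonlinear hidden layer
    pred = h @ W2 + b2
    err = pred - y                      # drives gradient of 0.5 * mean squared error
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = (err / len(X)) @ W2.T * (1 - h**2)
    dW1 = X.T @ dh;           db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# should end well below the ~0.5 variance of the target
print("mean squared error:", float(np.mean(err**2)))
```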
 
That's not really how it works; they're rather nonlinear, look up e.g. the universal approximation theorem. The problem with LLMs is that they're so deep/large that no one really knows what functions they're approximating anymore (we had a similar issue with dropout for a while, which worked very well empirically but without a theoretical understanding of why, and I anticipate we will eventually get there with LLMs as well).
It is how it works; you have fallen for the retarded trap of thinking that a bunch of math actually does anything. Oh, it generated an antenna that works that no human would design themselves? Yeah, because we control for a million other factors that an LLM doesn't account for. The ability to crunch huge data sets algorithmically is useful, but it will never be smart. Again, it can only give you well-drawn pictures of dogs, never the stupidest drawing of a dog you've ever seen. It isn't difficult.
 
It is how it works; you have fallen for the retarded trap of thinking that a bunch of math actually does anything. Oh, it generated an antenna that works that no human would design themselves? Yeah, because we control for a million other factors that an LLM doesn't account for. The ability to crunch huge data sets algorithmically is useful, but it will never be smart. Again, it can only give you well-drawn pictures of dogs, never the stupidest drawing of a dog you've ever seen. It isn't difficult.
No, I actually wrote my thesis about some of this shit (before I gave up on academia to do useful stuff).

And if you're thinking of the same antenna designs I'm aware of, they were designed by rather simple EA/hill-climbing shit, which isn't really related to LLMs (except in the vaguest sense when it comes to the training algorithms).
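For reference, "EA/hill climbing" in that sense is about this simple (toy sketch of the general recipe, nothing to do with the actual NASA antenna code): mutate a candidate, keep it only if it scores better, repeat.

```python
# Minimal hill climbing: random mutation + greedy acceptance.
import random

def fitness(x):
    # toy objective; pretend this is simulated antenna gain
    return -(x - 3.7) ** 2

x = random.uniform(-10, 10)
for _ in range(10_000):
    candidate = x + random.gauss(0, 0.1)   # small random mutation
    if fitness(candidate) > fitness(x):    # keep only improvements
        x = candidate

print(round(x, 3))   # converges to roughly 3.7
```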
 
No, I actually wrote my thesis about some of this shit (before I gave up on academia to do useful stuff).
Just because you are a moron fooled by a magician's tricks doesn't make magic real, no matter how hard you believe.

If it's so real, then why has every billion-dollar AI company admitted that everything they do is actually just a warehouse of Indians piloting it in the background?

False positives are a failure, and every AI is built on them. It doesn't matter that it can kind of give you an answer if it can't self-process and correct for bad data.
And if you're thinking of the same antenna designs I'm aware of, they were designed by rather simple EA/hill-climbing shit, which isn't really related to LLMs (except in the vaguest sense when it comes to the training algorithms).
"Fuzzy logic" as they used to call it absolutely does have something to do with AI. Its also the only time it was doing something we couldn't quite understand until you realize it was just crunching numbers in a way we normally don't because the juice isn't worth the squeeze in terms of human effort.
 