ChatGPT - If Stack Overflow and Reddit had a child

Don't any of these reddit coomers know how to set up a local LLM, or at least an online service from a smaller, more specialized provider? Sure, local LLMs aren't on paper quite as technically good as the major brands, but with the stacks of censorship I suspect they've been better for NSFW for quite a while now. Even before 5 came out.

It's just embarrassing. With all the potential freedom and customization out there, so many of these people are so technically illiterate and hopelessly dependent on Daddy Altman to throw them some scraps. Guess it's no wonder the Internet is centralizing more and more with how inept and incurious most people are.
The power users tend to self-host or use cloud providers like Featherless or ArliAI for those specialized fine-tunes. However, once you get used to huge corpo models like Claude or Gemini, it's quite hard to get used to 'downgrading'. I will say the Chinese are giving them a run for their money though, as their models cost a fraction of the price and are getting increasingly adept. GLM 4.6, for instance, is really good at coding and writing. So much so that people call it Claude 'at home', with the added benefit of it being uncensored.
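Most of these providers expose an OpenAI-compatible endpoint, so trying one out is usually just a matter of swapping the base URL. A rough sketch; the endpoint and model identifier below are illustrative placeholders, check your provider's docs:

```python
# Minimal sketch: querying a hosted fine-tune through an OpenAI-compatible API.
# Assumes the provider mimics OpenAI's /v1/chat/completions route; the base URL
# and model name below are placeholders, not verified values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.featherless.ai/v1",  # assumed provider endpoint
    api_key="YOUR_KEY_HERE",
)

resp = client.chat.completions.create(
    model="zai-org/GLM-4.6",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Continue this scene in the style of a noir novel..."}],
)
print(resp.choices[0].message.content)
```

Same client, same code, different URL. That's the extent of the lock-in.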
 
Seen today on r*ddit:

Elon Musk Could 'Drink Piss Better Than Any Human in History,' Grok Says

Of course, like everything that sperg touches, it's been done with all the subtlety of a 12-year-old carving a dick into his school desk, but it's a clear and easy-to-understand example of how AI can be used to introduce biases and manipulate public opinion, and I'm sure we'll see a lot more of this as time passes.

Now that some time has passed, I have to say that I'm genuinely impressed with Gemini 3 Pro thinking. It shows an insight and an intrinsic, I almost wanna say *cleverness*, that's quite unusual, not something I've quite seen with other models. I'm also fairly certain they're hiding parts of its reasoning process, probably to make it difficult for other companies to train on it. There's something more going on under the surface there.

I wonder if this Stanford paper has anything to do with it.
 
Now that some time has passed, I have to say that I'm genuinely impressed with Gemini 3 Pro thinking. It shows an insight and an intrinsic, I almost wanna say *cleverness*, that's quite unusual, not something I've quite seen with other models. I'm also fairly certain they're hiding parts of its reasoning process, probably to make it difficult for other companies to train on it. There's something more going on under the surface there.

So I used Gemini 3 as much as the free tier allowed, and I've been comparing my conversations with GPT 5, and I don't really see the difference in reasoning capabilities.

The main difference is that GPT has this well-known LLM response style like:

<emoji> Paragraph name

<Short description>
• bullet point list

Or similar, while Gemini 3 has a more "human" feel and reads more like an effort post made by an actual person.

I think it's probably more a result of fine-tuning the model than some kind of new mechanism making the LLM more intelligent, but as I've said, I had limited exposure to the model.
 
I think most people in the open source sphere have probably seen the OCaml shitshow with a random guy dropping a 13k LOC pull request. It's yielded a few gems like this response. I can only assume the guy is deeply autistic or retarded in some way, which honestly makes it even funnier:

The real question I have is -- have local LLMs progressed to the point where such a feat could be done on average consumer hardware? Most of my dabbling with ollama and chatbots (via lm-studio) has been just for handling cooking recipes and doing basic analysis of stuff I paste to it. My understanding of this space is that there are no options for someone to half-ass a project with a simple "write a python script that does xyz using xyz as a source pulling these fields to do blah blah" like you can with ChatGPT, Claude, or whatever. Am I wrong on this? The only options I've seen are simple code-completion stuff like Lemmy, which will do stuff like tab-complete imports or variable names. I'm talking average hardware like a 7600 XT and 32 GB of system RAM, not a bleeding-edge 5-GPU setup like Pewdiepie.
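For context, the closest I've gotten to that flow locally is hitting Ollama's REST API directly. A rough sketch of what I mean; the model tag is illustrative, pick whatever quantized coding model fits in your RAM/VRAM:

```python
# Sketch: one-shot code generation against a local model served by Ollama.
# Assumes `ollama serve` is running on the default port and the model has
# already been pulled; the model tag is illustrative, not a recommendation.
import requests

PROMPT = (
    "Write a Python script that downloads a CSV from a URL, pulls the "
    "'date' and 'total' fields, and prints a per-month sum."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5-coder:7b", "prompt": PROMPT, "stream": False},
    timeout=300,
)
print(resp.json()["response"])  # the generated script, same as a chat UI would show
```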
 
I'm too lazy to do the prompts; make the AI prompt itself until it greps in an infinite loop.
>Sick of having to think for myself

>Why can't I just click button and get result?
>In fact, why the fuck are you putting me to work forcing me to click a button?
>IMPLANT AN ENDLESS STREAM OF CONTENT INTO MY HEAD RIGHT FUCKING NOW, YOU SONS OF BITCHES
 
You laugh, but there are people who discover "agentic workflows" (which is basically this; I think software "engineers" come up with fancy names like that to hide the fact that it's all just shoving strings back and forth with some JSON formatting on top, something a 5-year-old could do) and then claim they've created AGI with their specific chain because the LLM starts hallucinating hardcore talking to itself. There are more and more randos who think they've achieved some scientific breakthrough by prompting an LLM in some certain way, because the LLMs suck up to them so much and confirm that they have indeed some genius insight nobody else has, because that's just how LLMs talk by default. Here's an example reddit post (I'd archive this but it doesn't work).
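And for anyone who wants to see how little is behind the curtain: the entire "agentic workflow" is a while loop feeding the model's last output back in as its next input. A toy sketch against the same local Ollama endpoint as above; the goal text, stop condition, and model tag are all made up:

```python
# Toy "agentic workflow": shove strings back and forth until a stop condition.
# The hard step cap is the only thing keeping it from hallucinating forever.
import requests

def ask(prompt: str) -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "qwen2.5-coder:7b", "prompt": prompt, "stream": False},
        timeout=300,
    )
    return r.json()["response"]

notes = "Goal: outline a blog post about why agent frameworks are just loops."
for _ in range(5):  # hard cap so it can't talk to itself indefinitely
    notes = ask(f"Previous notes:\n{notes}\n\nRefine them. Reply DONE when finished.")
    if "DONE" in notes:
        break
print(notes)
```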
 
This AInception thing isn't all that new, conceptually. I remember fine-tuning (training an existing broad-scope model on a specific task/purpose) an image classifier so that it could help me quickly caption a bunch of photos of my wife to make an SD LoRA (technically it was a LyCORIS) a little while back, for science™.

So I would start by manually captioning some images like "woman, long dark brown hair, blue eyes, black t-shirt" etc. to give the classifier enough of a pattern to work with, and it would give me good enough captions to use as a base for me to clean up, revise, and eventually re-train from. Not really a novel concept, but it worked very well.
FYI, for dreambooth-style training/fine-tuning, you don't need a special keyword if you have a good enough set of images to train from; use generic terms like I did. You get way better results if you take the time to do it properly and assemble a good dataset. Then again, if your dataset is shit, go ahead and use some gibberish token.
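If anyone wants the mechanics: the convention most kohya-style trainers expect is one .txt caption per image with the same basename. A sketch of dumping classifier output into that layout; `auto_caption` is a stand-in for whatever tagger you fine-tuned, not a real library call:

```python
# Sketch: write one caption .txt per image, same basename, the layout most
# kohya-style LoRA trainers expect. `auto_caption` is a placeholder for your
# fine-tuned captioner, not an actual library function.
from pathlib import Path

def auto_caption(image_path: Path) -> str:
    # stand-in: run your classifier here and return its caption string
    return "woman, long dark brown hair, blue eyes, black t-shirt"

dataset = Path("train/10_photos")  # kohya-style "<repeats>_<name>" folder naming
for img in sorted(dataset.glob("*.jpg")):
    img.with_suffix(".txt").write_text(auto_caption(img))
```

From there it's exactly the clean-up, revise, retrain cycle described above.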

Ironically, if this lad were inquisitive enough with his questions to ChatGPT, or if he thought for himself (crazy idea, amirite?), he could possibly come up with an API-based way to plug ChatGPT right into Suno for prompts. I'm pretty confident he could "vibe code" something like that together if he actually gave it some thought. Hell, did he even think to ask ChatGPT before reddit? If OpenAI's product isn't shilling their premium API access, they're fucking up big time.
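The whole "integration" would be something on this order; note I'm not aware of a public Suno API, so the last step stays copy-paste, and the model name is just a placeholder:

```python
# Sketch: have the OpenAI API draft Suno-style song prompts. As far as I know
# Suno has no public API, so the "integration" ends at copy-paste.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any cheap chat model would do
    messages=[
        {"role": "system", "content": "You write short, vivid prompts for the Suno music generator."},
        {"role": "user", "content": "A melancholic synthwave track about abandoned arcades."},
    ],
)
print(resp.choices[0].message.content)  # paste this into Suno
```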
 
AGI won't be some "Behold! The AGI" event with someone pulling the curtains aside to present it to the world. I think when AGI comes around, such panels of experts will serve to distract and deflect, and tell everyone how it's totally not AGI this time because [reasons]. AGI will not be an on/off switch but a dial. If you're really dealing with an artificial being that's as intelligent and possibly "aware" as a human, it'll be quite the can of worms many companies will not want opened. I'm of the opinion that we'll reject the notion of AGI until it becomes impossible and utterly dishonest to do so (and probably a little beyond that point). Kinda unrelated, but I find it interesting how the Catholic Church is already preparing its narrative re: AI. It's quite telling, in a way.
AGI is incoherent on a conceptual level
Machines don't think and can't think
Everything a computer does boils down to electrical circuits and boolean logic gates
 
Tell me how this and your brain are not abstract models of the same general concept.
The brain is organic and plastic (in the sense of being mutable), with connections not being fixed in stone
It's a living developing thing that does not remain constant
Circuit boards remain as they are
I can whip out a laptop I haven't used in 10 years and it will execute the same things the same way, meanwhile your brain today is distinct from your brain yesterday

But even if you argue that plasticity can be simulated or that neural nets mutate too, brains are living organisms engaged in self-generated teleological action, not "thinking machines"
Computers are deterministic symbol manipulators with no ends, no awareness, no self-initiated behavior, no womb, and no meaning
Computation is rule-bound symbol manipulation, yet neurons don't operate on symbols, they don't have syntax, semantics, a defined instruction set, or machine-state formalism. Whatever neurons do is biological causation
A computer has no metabolism, no self-preservation, no value hierarchy. A computer only does what it is caused to do by external inputs, whereas a brain's activity is self-generated, end-directed, and teleological
You cannot give a machine needs, desires, cares, a point of view, values, first-person awareness, and ends of its own, because those are features of life

The bottom line is that AGI is incoherent because the concept requires attributing a teleological consciousness to a system whose ontology completely and entirely excludes it
 
Circuit boards remain as they are
Which is why we use fancy shit like logic gates to manipulate its state over time. Hence how computers are programmable.

Now, change your frame of reference a little and consider a program that operates on some data and adapts its state over time. This, under the hood, is being represented (abstractly) as trapped electrons in your CPU n shit. This may seem like a Jersh-style response that only quotes a small chunk of your argument, but it addresses basically everything you're saying: you're not thinking abstractly enough.
Change your frame of reference, and you can seemingly always create a model for an entire independent dynamic system.
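To make that concrete: here's a toy simulation of two cross-coupled NAND gates holding a bit of state purely through feedback. Obviously Python pretending to be electrons, not real hardware:

```python
# Toy model: an SR latch built from two cross-coupled NAND gates. The circuit
# "remembers" a past input with nothing but gate feedback, no magic involved.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def latch_step(s: int, r: int, q: int, qn: int):
    # active-low set/reset inputs; iterate until the feedback loop settles
    for _ in range(4):
        q, qn = nand(s, qn), nand(r, q)
    return q, qn

q, qn = 0, 1
q, qn = latch_step(0, 1, q, qn)  # pulse "set" -> q becomes 1
q, qn = latch_step(1, 1, q, qn)  # inputs idle -> state is held
print(q)  # 1: state persisting over time, which is exactly the point above
```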

It's about how well the system can adapt to sensory inputs and learn from them. The traditional belief has been that human intelligence lies largely in pattern recognition, so that's where AI research headed (clearly). To make an AGI, you'd take multiple independent (but co-working) systems and connect them to each other, like how your brain has dedicated areas/centers for processing certain stuff. Personally, I think us people are too fucking stupid to make something that works that well, but we'll see.
 
Which is why we use fancy shit like logic gates to manipulate its state over time. Hence how computers are programmable.

Now, change your frame of reference a little and consider a program that operates on some data and adapts its state over time. This, under the hood, is being represented (abstractly) as trapped electrons in your CPU n shit. This may seem like a Jersh-style response that only quotes a small chunk of your argument, but it addresses basically everything you're saying: you're not thinking abstractly enough.
Change your frame of reference, and you can seemingly always create a model for an entire independent dynamic system.

It's about how well the system can adapt to sensory inputs and learn from them. The traditional belief has been that human intelligence lies largely in pattern recognition, so that's where AI research headed (clearly). To make an AGI, you'd take multiple independent (but co-working) systems and connect them to each other, like how your brain has dedicated areas/centers for processing certain stuff. Personally, I think us people are too fucking stupid to make something that works that well, but we'll see.
Nonsense, it's still a category error
You're arguing as if state change and adaptation of code were equivalent to thought and agency
A system that updates variables based on input (doesn't matter if it's recursive, even stochastic, even with "learning") is nothing but a passive substrate being acted upon by external causal chains
There is no "aboutness", no teleology, no point of view, no values, no ends. It's nothing but electrons being forced to flip in accordance with rules written by programmers

A rock that's heating in the sun changes state, that does not mean that the rock is conscious. "Learning" in ML is just incremental numerical adjustment according to a loss function, it has no meaning, no desire, no internal interpretation. Computer logic is manipulating symbols with zero awareness of what they refer to. A mind experiences, values, and understands. No computer, no circuit board has self-generated purposes, goals, "for the sake of", or self-correction outside of what's defined by programmers

Pattern recognition is not intelligence. It's just one tool that's used by intelligence.

I'll raise you an even bigger attack against AGI
The people who think AGI is not only coherent, but plausible, probable, and coming, treat life as if it were a special case of computation, when the reality is the other way around.
 
Nonsense, it's still a category error
The error here is related to category in many ways, yes.

You're arguing as if state change and adaptation of code were equivalent to thought and agency
You're kinda comparing bird wings and jet engines here. What are thought and agency but a series of state changes (of all sorts of shit) and adaptation based on those states? Like I said, you can model/describe systems like these at all sorts of conceptual "zoom" levels.
To think abstractly is to conceptually "zoom out". It's how we can start from a basic set of axioms and develop rigorous abstract models (e.g. all of mathematics).

I think we may fundamentally agree, but it just doesn't seem that way. I believe it's more of a question of whether humans can effectively "play God" like that and create something intelligent and sentient—to create that abstract and dynamic a model. I think we can't.

as if it were a special case of computation.
special case of computation.
special case
Nigga, you're right. It is the other way around from this special case crap.
 
What is thought and agency but a series of state changes (of all sorts of shit) and adaptation based on those states?
Do you notice that you are redefining thought as state change and then congratulating yourself on the equivalence?

State change is a physical description. Thought is a teleological description.
A thermostat changes state when the temperature changes, a bacterium swims towards nutrients, a rock heats up when exposed to sunlight. None of these are thinking.
And why aren't they thinking? Because thought is goal-directed awareness. State change itself is just causal rearrangement of matter.
If you're redefining thought in terms of state change, you're erasing the thing you're claiming to explain.

Like I said, you can model/describe systems like these at all sorts of "zoom" levels conceptually.
You can describe digestion in terms of chemistry, and you can describe chemistry in terms of physics. But no amount of "zooming out" redefines digestion as just molecular vibration.
Likewise, you can describe cognition in terms of neural activity. But neural activity is not cognition, any more than liver enzyme activity is "intentional metabolism". You can't get teleology by zooming out.

You're kinda comparing bird wings and jet engines here.
Jet engines and wings are both suitable means of generating lift/thrust.
Brains and computers do not solve the same problem. Computers manipulate syntax, brains instantiate awareness, goal direction, and self-generated action.
Wings and jet engines share the purpose of flight; computers and brains do not share a teleology. That is why one can substitute for the other in the first case but not the second.

I think we may fundamentally agree
No. Your position is that you doubt humans are smart enough to build AGI.
My position is that AGI is nonsensical. It cannot exist. The notion of AGI itself is contradictory. No amount of engineering can build a contradictory thing into existence.
 
State change is a physical description. Thought is a teleological description.
A thermostat changes state when the temperature changes, a bacterium swims towards nutrients, a rock heats up when exposed to sunlight. None of these are thinking.
And why aren't they thinking? Because thought is goal-directed awareness. State change itself is just causal rearrangement of matter.
If you're redefining thought in terms of state change, you're erasing the thing you're claiming to explain.
Okay, we definitely agree here, at least. Welcome to ABSTRACTIONS.

All of human reasoning is based off abstractions and creating abstract models of things. These models don't inherently exist, but they accurately describe natural phenomena (natural state changes perhaps?).

Do you notice that you are redefining thought as state change and then congratulating yourself on the equivalence?
This action, in essence, is all that separates our thinking from that of a goldfish. We recognize patterns, pat ourselves on the fucking back, and proceed from there. This is philosophy in a nutshell.
 
Welcome to ABSTRACTIONS.
Why do you keep using the term "abstraction"? Do you think it magically gets rid of the category error?
Abstractions are used to describe phenomena. But no abstraction can actually change the nature of the phenomenon.
I can "abstractly model" digestion as molecular vibration, but that does not make digestion identical to vibration. Likewise, you can model thought as state changes, but that does not make thought identical to state changes.

Your entire argument is essentially this:
1. reduce X to Y by abstraction
2. declare X = Y
3. praise the cleverness of abstraction

I'm aware most people's understanding of philosophy is complete trash, but what you're doing is the opposite of explaining: it's getting rid of the thing you're trying to explain. If you "abstract" and "zoom" in a way that erases the distinction between teleology and mechanism, then you're not explaining what a mind is. You're simply conflating a mind with a non-mind.
The core issue is not whether abstractions exist, but what the referent is. Like, what fact does your abstraction correspond to?

I stand by the points that pattern recognition is not the same thing as intelligence, modeling is not the same thing as awareness, and state change is not the same thing as thought.
 