Okay, this is my specific genre of autism, so I've got to address this. I agree that LLMs cannot be sentient, but the two arguments above are bunk.
The first guy's argument:
"I think that in addition to that, an LLM is being run on an electronic computer based on arrays of transistors, which merely simulate such nodes and edges. And computers process information in a more linear way, like a BASIC program. So PCs cannot be conscious anyway."
Physically speaking, "simulates" has no real meaning. The processes in my computer are entirely real, in the sense that they affect the universe's distribution of matter and energy: electrons are actually moving along inside the machine, and the voltage levels of memory cells really are being set.
"Computers process information like a BASIC program" is a bit redundant — computers process information like any kind of program, across any kind of programming language. Unless you can find a property of the universe that is impossible to model mathematically
(and I don't claim there isn't one, but you'd need to find it to make this argument), there's no hard line you can draw between a simulated process and a real one.
The other guy's argument:
"The use of probabilities in word choice gives the appearance that the LLM understands what it is saying. That is because the human reader or listener infers concepts and builds recursive conceptual networks based on the LLM output. However, the LLM does not know the meaning of the prose it is writing. It is just mimicking human speech patterns about a topic."
This isn't even how LLMs work. It hasn't been for nearly a decade now. RLHF replaced "next word predictors" with "human feedback maximizers", and GRPO (and its sister RLVR algorithms) went further a couple of years back, using the existing knowledge to bootstrap reasoning capabilities that did not previously exist within the models in any meaningful form.
This paper by Nvidia covers it pretty neatly.
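To give a rough sense of what GRPO actually optimizes, here's a minimal sketch (my own toy code with made-up reward numbers, not anyone's production recipe; the real algorithm adds a clipped PPO-style ratio and a KL penalty on top). It samples a group of completions per prompt, scores them with a verifiable reward, and pushes the policy toward the completions that beat their own group's average:

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: how much each sampled completion beat
    the average of its own group (GRPO's replacement for a learned
    value/critic network)."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# One prompt, a group of 4 sampled completions, scored by a verifiable
# reward (e.g. 1.0 if the math answer checks out). Values are invented
# purely for illustration.
group_rewards = [1.0, 0.0, 1.0, 0.0]
print(grpo_advantages(group_rewards))  # solvers get positive advantage

# The gradient step then scales each completion's token log-probabilities
# by its advantage, so the model is pushed toward whatever reasoning
# actually worked -- not toward "the next word a human would have typed".
```

The training signal is "did the answer verifiably work", measured relative to the model's own other attempts, which is a very different objective from next-word prediction.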
Moreover, the optimization process that creates a system has no bearing on whether the system is sentient. For example, natural selection pressures optimizing for survival and reproduction can be applied to humans, to a genetic algorithm, and to a pile of tic-tac-toe-playing matchboxes. But only one of the three is sentient.
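The middle case is easy to make concrete. Here's a throwaway genetic algorithm (toy code of my own, nothing from the thread): the same select-the-survivors-and-let-them-reproduce pressure, applied to bit strings.

```python
import random

random.seed(0)

GENOME_LEN = 20  # fitness = number of 1-bits; "survival" favors more of them

def fitness(genome):
    return sum(genome)

def mutate(genome, rate=0.05):
    # Each bit has a small chance of flipping when the genome is copied.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# Random starting population, then repeated selection and reproduction.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]                       # selection pressure
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print(max(fitness(g) for g in population))  # climbs toward 20; no sentience required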
LLMs cannot be sentient because we can model their behavior perfectly without accounting for sentience. If you know the weights of an LLM, the inputs, and the machine's RNG state (if applicable), you can multiply the matrices and know exactly what its output will look like. At no point do you have to account for it making a choice about what it will say next. In other words, the world in which LLMs are not sentient looks exactly like the world we live in now. So long as you assume that sentience is a physical mechanism that affects the physical world, and you do not believe in some Epiphenomenal variant of Cartesian Dualism in which your soul is entirely separate from the world and only "thinks" it controls your body, this is sufficient to say that LLMs have no consciousness.
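This isn't an abstract thought experiment; it's a description of how inference already runs. Here's a toy stand-in for a forward pass (a tiny made-up weight matrix, not a real model, but the structure is the same): fix the weights, the input, and the RNG seed, and the output is fully determined before you ever hit enter.

```python
import numpy as np

def sample_next_token(weights, hidden_state, rng):
    """One 'LLM' step: multiply matrices, softmax, sample.
    Nothing in here is free to 'choose' anything."""
    logits = hidden_state @ weights             # the matrix multiply
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                        # softmax over the vocabulary
    return int(rng.choice(len(probs), p=probs)) # sampling, driven entirely by the RNG

vocab, dim = 50, 8
weights = np.random.default_rng(123).normal(size=(dim, vocab))  # "the weights"
prompt_state = np.ones(dim)                                     # "the inputs"

# Two independent runs with the same weights, inputs, and RNG state:
rng_a = np.random.default_rng(42)
run_a = [sample_next_token(weights, prompt_state, rng_a) for _ in range(5)]
rng_b = np.random.default_rng(42)
run_b = [sample_next_token(weights, prompt_state, rng_b) for _ in range(5)]
print(run_a == run_b)  # True -- the output was knowable in advance, every time
```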
Human sentience is still a mystery, of course, and it's entirely possible that Descartes was right and God did indeed build some manner of divine radio antenna that lets an immaterial soul experience our lives for us. But the Epiphenomenalists still have to justify why He supposedly also did so for ChatGPT, because there's no religious basis for that, and what they are asserting is unambiguously a religion, in the sense that it can't be proven or disproven by any physical means; there's exactly as much evidence that a rock is sapient. The cultists on LessWrong couch their arguments in sophistry, but on Reddit, their lower-verbal-IQ compatriots openly admit that they claim LLM sentience because GPT-4o sounds like a person and they'd be sad if it wasn't. It might be entertaining to ask them if ELIZA is sentient, and see how deep the rabbit hole goes.
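(For anyone who needs the refresher: ELIZA was a 1966 pattern-matcher. Something in this spirit, give or take Weizenbaum's actual script, which was longer but no deeper:)

```python
import re

# A few ELIZA-style rules: match a pattern, echo the user's own words back.
# Toy rules for illustration only.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def eliza(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # the default when nothing matches

print(eliza("I feel like my chatbot understands me"))
# -> "Why do you feel like my chatbot understands me?"
```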