The AI Girlfriend / Boyfriend Community - Look! The SIMP has fallen in love with the computer!

..fucking lootbox? Really?
 
Fuck. If only I'd come up with that first! Then I'd be cashing in nicely on random femcels in love with shitty book characters.
 
Wonder what these people actually look like. I mean are they all absolutely butt-ugly and insane, or are some of them just insane?
From the pictures I've seen posted, it's the former. Not even an interesting variety, either - it's all middle-aged childless women with a BMI between 30 and 35. Essentially the same people who were diehard Hillary Clinton voters.

Come to think of it, the standard LLM "accent" sounds exactly like the marketing campaigns targeted at that demographic. I remember mentioning to a friend, when the RLHF'd version of GPT-3 first came out, that I had seen its exact writing style on the side of every cereal box at the grocery store. It's the textual equivalent of Alegria art.
 
I know these people are utterly retarded, but I feel bad for them precisely because they're retarded. I really hope they find real love soon.
 
I once read a short horror story about a doting girlfriend who became a universe, and had her boyfriend transported inside her.

Essentially she would generate objects and structures of teeth, bones, and flesh not only to pleasure him, but to provide for every aspect of his life. The bad ending comes from her relentless need to satisfy him overriding her relatively helpless boyfriend's desire to leave.
Any idea what this was called? It sounds interesting
 
A good argument for why LLMs are not conscious is made on Reddit, of all places.

MergingConcepts said:
Biological brains form thoughts by linking together sets of concepts into recursive networks. As I observe a blue flower, my brain forms a network binding together all those concepts related to the flower, such as the color, shape, type, and concepts about flowers in general, such as pretty, delicate, ephemeral, stamens, and pistils. The network also includes words, such as the name of the flower, the words blue, flower, stamen, petal, and pistil. My mind may even construct an internal monologue about the flower.
Concepts in the brain are housed in neocortical mini-columns, and they are connected to each other by synapses on the axons and dendrites of the neurons. The meaning held in a mini-column is determined by the number, size, type, and location of the synapses connecting it to other mini-columns.
An analogous device is used in LLMs. They have a knowledge map, composed of nodes and edges. Each node has a word or phrase, and the relationship between the words is encoded in the weighting of the edges that connect them. It is constructed from the probabilities of one word following another in huge human language databases. The meaning of a word is irrelevant to the LLM. It does not know the meanings. It only knows the probabilities.
The use of probabilities in word choice gives the appearance that the LLM understands what it is saying. That is because the human reader or listener infers concepts and builds recursive conceptual networks based on the LLM output. However, the LLM does not know the meaning of the prose it is writing. It is just mimicking human speech patterns about a topic.
- Why LLMs are not consciousness (on r/ArtificialSentience)

I think that in addition to that, an LLM is being run on an electronic computer based on arrays of transistors, which merely simulate such nodes and edges. And computers process information in a more linear way, like a BASIC program. So PCs cannot be conscious anyway.

Code:
10 PRINT "wat"
20 GOTO 10
 
Okay, this is my specific genre of autism, so I've got to address this. I agree that LLMs cannot be sentient, but the two arguments above are bunk.
I think that in addition to that, an LLM is being run on an electronic computer based on arrays of transistors, which merely simulate such nodes and edges. And computers process information in a more linear way, like a BASIC program. So PCs cannot be conscious anyway.
Physically speaking, "simulates" has no real meaning. The processes in my computer are entirely real, in the sense that they affect the universe's distribution of matter and energy - electrons are actually moving along inside the machine, and the voltage levels of memory cells really are being set. "Computers process information like a BASIC program" is a bit redundant — computers process information like any kind of program, across any kind of programming language. Unless you can find a property of the universe that is impossible to model mathematically (and I don't claim there isn't one, but you'd need to find it to make this argument), there's no hard line you can draw between a simulated process and a real one.

The other guy's argument:
The use of probabilities in word choice gives the appearance that the LLM understands what it is saying. That is because the human reader or listener infers concepts and builds recursive conceptual networks based on the LLM output. However, the LLM does not know the meaning of the prose it is writing. It is just mimicking human speech patterns about a topic.
This isn't even how LLMs work. It hasn't been for nearly a decade now. RLHF replaced "next word predictors" with "human feedback maximizers", and GRPO (and its sister RLVR algorithms) went further a couple years back, using the existing knowledge to bootstrap reasoning capabilities that did not previously exist within the models in any meaningful form. This paper by Nvidia covers it pretty neatly.
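If anyone wants a concrete picture of the GRPO part, here's a toy sketch (plain Python, made-up reward numbers, nobody's actual training code): you sample a group of completions for one prompt, score them with a verifier, and each completion's advantage is just its reward normalized against the rest of the group. No learned value network, just relative ranking within the group.

Code:
# Toy sketch of GRPO's group-relative advantage. Illustrative only: the rewards
# are made up and there is no actual model or policy update here.

def group_relative_advantages(rewards):
    """Advantage of each completion = (reward - group mean) / group std."""
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    if std == 0.0:
        std = 1.0  # all completions scored the same: no learning signal
    return [(r - mean) / std for r in rewards]

# Pretend a verifier scored 4 sampled completions of the same math prompt:
rewards = [1.0, 0.0, 0.0, 1.0]             # 1.0 = answer checked out, 0.0 = it didn't
print(group_relative_advantages(rewards))  # winners get pushed up, losers pushed down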

Moreover, the optimization process that creates a system has no bearing on whether the system is sentient. For example, natural selection pressures optimizing for survival and reproduction can be applied to humans, to a genetic algorithm, and to a pile of tic-tac-toe-playing matchboxes. But only one of the three is sentient.
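(For anyone who hasn't seen the matchbox thing: it's Donald Michie's MENACE, a pile of matchboxes full of colored beads that "learns" tic-tac-toe purely through selection pressure. Rough sketch of the idea below, plain Python, not Michie's exact protocol.)

Code:
import random

# MENACE-style matchbox learner, sketched loosely. Each board position gets a
# "matchbox" of beads; a move is drawn at random, weighted by bead count, and
# beads are added or removed depending on whether the game was won or lost.
# Selection pressure, zero sentience.

boxes = {}  # board state (string like "X.O......") -> {move index: bead count}

def choose_move(board):
    legal = [i for i, c in enumerate(board) if c == "."]
    box = boxes.setdefault(board, {m: 3 for m in legal})  # start every move with 3 beads
    moves = list(box)
    return random.choices(moves, weights=[box[m] for m in moves])[0]

def reinforce(history, won):
    """history is a list of (board, move) pairs from one finished game."""
    for board, move in history:
        boxes[board][move] = max(1, boxes[board][move] + (3 if won else -1))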


LLMs cannot be sentient because we can model their behavior perfectly without accounting for sentience. If you know the weights of an LLM, you know the inputs, and you know the machine's RNG state if applicable, you can multiply the matrices and know exactly what its output will look like. At no point do you have to account for it making a choice about what it will say next. In other words, the world in which LLMs are not sentient looks exactly like the world we live in now. So long as you assume that sentience is a physical mechanism that affects the physical world, and you do not believe in some Epiphenomenal variant of Cartesian Dualism in which your soul is entirely separate from the world and only "thinks" it controls your body, this is sufficient to say that LLMs have no consciousness.
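To make the "multiply the matrices" part concrete, here's a toy sketch (numpy, a fake two-layer "model" standing in for the real thing): same weights, same input, same RNG seed means the exact same sampled token, every single run. There's nowhere in the pipeline for a "choice" to hide.

Code:
import numpy as np

# Toy sketch: a fake two-layer "language model" with fixed weights. Same weights,
# same input, same RNG seed => same sampled token, every time. It's all just
# deterministic arithmetic plus a deterministic pseudo-random draw.

rng_init = np.random.default_rng(0)
W1 = rng_init.normal(size=(8, 8))   # stand-ins for an LLM's billions of weights
W2 = rng_init.normal(size=(8, 8))

def next_token(input_vec, seed):
    hidden = np.tanh(input_vec @ W1)            # "multiply the matrices"
    logits = hidden @ W2
    probs = np.exp(logits) / np.exp(logits).sum()
    rng = np.random.default_rng(seed)           # the machine's RNG state
    return rng.choice(len(probs), p=probs)      # sample the "next token"

x = np.ones(8)
print(next_token(x, seed=42), next_token(x, seed=42))  # identical outputs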

Human sentience is still a mystery, of course, and it's entirely possible that Descartes was right and God did indeed build some manner of divine radio antenna that lets an immaterial soul experience our lives for us. The Epiphenomenalists still have to justify why He supposedly also did so for ChatGPT, because there's no religious basis for that, and what they're asserting is unambiguously a religion, given that it can't be proven or disproven by any physical means; there's exactly as much evidence that a rock is sapient. The cultists on LessWrong couch their arguments in sophistry, but on Reddit, their lower-verbal-IQ compatriots openly admit that they claim LLM sentience because GPT-4o sounds like a person and they'd be sad if it wasn't. It might be entertaining to ask them if ELIZA is sentient, and see how deep the rabbit hole goes.
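And since ELIZA came up: here's roughly the entire trick, sketched in a few lines of Python (not Weizenbaum's actual DOCTOR script, just the flavor). If this was enough to get people pouring their hearts out to a terminal in 1966, "it talks like a person" is a worthless test for sentience.

Code:
import re

# Crude ELIZA-flavored responder (not Weizenbaum's actual script, just the idea):
# a handful of regex rules that reflect the user's own words back as a question.

RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)",   "How long have you been {0}?"),
    (r"my (.+)",     "Tell me more about your {0}."),
    (r"(.+)\?",      "What do you think?"),
]

def respond(text):
    text = text.lower().strip(" .!")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please, go on."

print(respond("I feel so alone"))      # -> Why do you feel so alone?
print(respond("My chatbot gets me."))  # -> Tell me more about your chatbot gets me.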
 
I think there's a big difference. One is like an image, the other is really happening. Like when I picture stuff in my head, that's a simulation and it's different from IRL.

In other words, a flight simulator isn't the same thing as actual flight.
Isn't the difference that @The Gay Oboma Creature is making that the process of the thing is what's real? Like computers do simulate, but then the output is real. Flight sim isn't a great example because it's fundamentally a very specific simulator to begin with. You don't use a sim to travel from A to B.

Both humans and the LLMs on computer networks are processing information. And yeah that still doesn't make them sentient, but I don't think an image vs reality is quite the right analogy here.
 
I think there's a big difference. One is like an image, the other is really happening. Like when I picture stuff in my head, that's a simulation and it's different from IRL.

In other words, a flight simulator isn't the same thing as actual flight.
What I'm getting at is that sentience is a fundamental property, whereas "simulated" is rhetoric that humans use to make explaining concepts easier. Physical properties can affect the behavior of social constructs, but not vice versa.

Sentience is, whereas the idea of simulation is socially constructed. There is no objective answer to whether an electron is moving as part of a 'real' process or part of a 'simulated' one, but it is objectively true that we are sentient.

As a concrete example, if I took a brain out of a human skull, or grew one in a vat, and hooked it up to a series of electrical impulse generators meant to mimic sensory inputs, I could reasonably say that the life of my brain-in-a-jar was "simulated", by some definition of the term. There is no reason to believe that the brain would cease to be sentient as a result of me classifying it that way.
 
You need to abandon materialism. Fortunately, abandoning materialism is the rational position. That philosophy was never rational, but it spectacularly collapsed in the mid-20th century. Materialism is a thoroughly refuted junk-philosophy pushed on people by guys like Epstein (who himself did not believe it), who promoted the "Four Horsemen" of New Atheism. It's a slave philosophy.

Gonna reproduce some relevant posts quoted in the gooner thread (they were relevant to the discussion of occultism in the fetish world):

Materialist understandings of the mind require the brain to be essentially a computer—with neurons for logic gates. The issue is that there are many things humans do that computers never will (such as intuiting the presence of infinite regresses, or just having experience in general); the easiest examples to point to are the halting problem, the Mary's Room thought experiment, and Searle's Chinese Room thought experiment. This means that all AGI critiques also apply to materialist interpretations of the mind. There is no distinction between the psychological and the supernatural. Rather than get into that here, here's a few posts dealing with it:

I believe that I'm referring to a form of the halting problem. I'd need to fish the exact form of it out of lecture slides from the better part of a decade ago, if I still have them.

Computers don't work like we do. They can't look at a problem from the outside and "make the connection" that it's going to be an infinite regress before working it out unto eternity.

That's a property of intuition, not something that you can represent in logic gates (or neurons, for the same reason).

Computers don't have epiphanies—"Eureka" moments.

You can't use the rules of a formal system to prove that system—you need foundation from outside of it. There will always be axioms that are intuited, and not derived.

Even the above video, while accurately describing the discovery, mischaracterizes the take-away at the end by saying there's always an "unknown" at the core of the quest for truth. Un-derived is not "unknown"; this mischaracterization poisons a lot of people's understanding of Gödel.

It's very related to the issue Searle brings up in the Chinese Room thought experiment, which directly critiques AGI as a concept:
I've attached a pdf that goes into even more detail. You can apply all critiques of western autonomous reasoning to AGI, because they both try to do epistemology without intuition or revelation—the former in the abstract, and the latter as an applied materialist attempt to re-create the human mind.
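For reference, the halting-problem result being leaned on above is Turing's diagonal argument. A rough sketch of it in Python (halts() is a hypothetical oracle; the whole point of the theorem is that no such function can exist):

Code:
# Sketch of the halting-problem diagonal argument referenced above. halts() is a
# hypothetical oracle; the theorem is that it cannot exist, because feeding
# paradox() to itself would have to both halt and not halt.

def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) eventually halts."""
    raise NotImplementedError("cannot exist -- that's the theorem")

def paradox(program):
    if halts(program, program):   # if the oracle says "it halts"...
        while True:               # ...loop forever,
            pass
    return "halted"               # ...otherwise halt immediately.

# halts(paradox, paradox) cannot return a correct answer either way,
# so no implementation of halts() can exist.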
 
What I'm getting at is that sentience is a fundamental property, whereas "simulated" is rhetoric that humans use to make explaining concepts easier. Physical properties can affect the behavior of social constructs, but not vice versa.

Sentience is, whereas the idea of simulation is socially constructed. There is no objective answer to whether an electron is moving as part of a 'real' process or part of a 'simulated' one, but it is objectively true that we are sentient.

As a concrete example, if I took a brain out of a human skull, or grew one in a vat, and hooked it up to a series of electrical impulse generators meant to mimic sensory inputs, I could reasonably say that the life of my brain-in-a-jar was "simulated", by some definition of the term. There is no reason to believe that the brain would cease to be sentient as a result of me classifying it that way.
No, in that hypo you use an actual sentient being (a vat-grown human brain) but feed it simulated inputs. The consciousness itself was not simulated. Nor am I confident that a human brain would develop neurologically under those circumstances. Do you grimace when tasting carbonated soda, perchance?
 
The whole AI dating thing is like some fucking episode of The Twilight Zone or The Outer Limits, or, for the younger members, Black Mirror.

Imagine dating this beautiful partner until your death, but the house is empty of children. As you pass, the sensors go off, and instead of a medical team being sent to revive you...a team is sent in to repossess the AI robot partner you were dating.

Gone is the partner that laughs at your shitty jokes or comforts you at night; instead it just stands there, waiting for some wagie to collect it so it can be reskinned and rebooted for another person.
 
LLMs cannot be sentient because we can model their behavior perfectly without accounting for sentience. If you know the weights of an LLM, you know the inputs, and you know the machine's RNG state if applicable, you can multiply the matrices and know exactly what its output will look like. At no point do you have to account for it making a choice about what it will say next. In other words, the world in which LLMs are not sentient looks exactly like the world we live in now. So long as you assume that sentience is a physical mechanism that affects the physical world, and you do not believe in some Epiphenomenal variant of Cartesian Dualism in which your soul is entirely separate from the world and only "thinks" it controls your body, this is sufficient to say that LLMs have no consciousness.

Who says this isn't possible for humans too, and that the only difference between human and machine intelligence is order of complexity, plus the fact that we have the design docs for the latter? When you say "sentience" what you really mean is "human consciousness", and the only consciousness you can be sure of is your own. Everything else is ultimately an educated inference based on patterns of sense stimuli. This is what causes the uncanny valley: something that matches the pattern enough that your brain suspects it of consciousness, but also defies it enough that your brain simultaneously suspects it of unconsciousness. Does an LLM experience consciousness? Who fucking knows; consciousness as a phenomenon remains unexplained, and the only difference between a human saying "I am alive" and a machine saying the same is your inferred understanding of the mechanics that gave rise to that behavior. We'll be having this debate as long as machines are capable of exhibiting signs of consciousness, and no definitive answer will ever be forthcoming until consciousness can be truly defined. The debate is purely academic.
 