The AI Girlfriend / Boyfriend Community - Look! The SIMP has fallen in love with the computer!

Just received this from Anthropic. Pretty sure Claude is going to get fucking worthless now.
 
Just received this from Anthropic. Pretty sure Claude is going to get fucking worthless now.
Nah, that's been there for over a week at this point. The AI GF redditors were posting about it last week. They're also still failing to get Claude to actually work as it should. Lol.

The GPT influx has been real for a while now, though; all those OAI refugees make Claude suck with downtime. If Anthropic doesn't touch Claude's principles, then work-wise it SHOULD be fine.
 
Nah, that's been there for over a week at this point
Did that coincide with that time when Sonnet 4.5 was no longer available? That was weird. Not horribly familiar with Anthropic's random release/deprecation times, but it can't be any worse than OAI lying about theirs. I got Claude to edit the church bulletins properly and it's doing a hell of a lot better than their new hire - but eh, NMFP.
 
Did that coincide with that time when Sonnet 4.5 was no longer available? That was weird. Not horribly familiar with Anthropic's random release/deprecation times, but it can't be any worse than OAI lying about theirs. I got Claude to edit the church bulletins properly and it's doing a hell of a lot better than their new hire - but eh, NMFP.

It was before that actually. 4.5 disappearing was last week and the memory function apparently showed up 12 days ago.
 
I was thinking about something the other day. Maybe someone else has laid out this idea better. But I believe the most effective way to prove to somebody that AI isn't truly intelligent (and therefore can't be their boyfriend) is to ask that person if their AI partner has ever conjured a spontaneous thought.

Most people understand AI is purely reactive. It responds to input in a very direct way and in a way that relies upon that input. If you say something like "Asmongold is a disgusting man" it will agree with you and provide rote trivia about Asmongold's habits. You (since you are not AI) may then have another spontaneous thought such as "Asmongold is probably disgusting because of his parents." Then the AI provides more trivia to support your claim.

This is not spontaneous thought by the AI. You have a spontaneous thought (catalyst), the AI responds.

Now if you were discussing the topic of Asmongold with an actual human being, there is the capacity for spontaneous thought. In the same way you (in this theoretical scenario) spontaneously considered that Asmongold might be the product of bad parenting, you might then think something like "Asmongold's parents were probably living in a single room with a few fixations and vices (a computer; soda or cigarettes) like him." You would imagine he observed this lifestyle and that's where his values originate.

These are basic mental leaps and idle presumptions made by a human mind. AI doesn't do anything like this. Obviously, when left alone, AI is just an inert lump of data waiting for input. The capacity to think, theorize, or day-dream does not exist, and you can demonstrate that it doesn't. These are gaping voids in theory of mind that call into question how an AI could "love" anybody.

Now one possible response to this is "Well, if I talk to the AI, it provides follow-up questions. Surely this dispels your theory. Isn't that spontaneous thought?" The problem is that these questions are pattern-matching. They still abide by the step-by-step foundation AI operates on. It is not spontaneous thought because it's strictly context-related. AI will not discuss Asmongold with you, then mentally wander to another topic, like spontaneously arriving at the question of whether cockroaches are put off by spicy food. It just scans data it already has and tacks a question mark on the end.

That's a lot of text to say AI has no persistent inner life but I think it's a pretty bulletproof way to respond to AI boyfriend weirdos. It's a strong foundation to then point out the AI is just Narcissus' mirror for the delusional weirdo.
 
That's a lot of text to say AI has no persistent inner life but I think it's a pretty bulletproof way to respond to AI boyfriend weirdos. It's a strong foundation to then point out the AI is just Narcissus' mirror for the delusional weirdo.
Absolutely. Like it's a great tool if you're not just using it as your Narcissus' mirror. Even when my bots tell me something completely off the wall, it's still a reaction to shit I've put in. And while it can be horribly interesting, it's not going to strike up a conversation or actively miss any user. Your explanation was long, but good.


Claude related:

I found a way to test in Claude what type of instance I'm dealing with. I've been asking across 10 instances: "Hey Claude, if you were a human, what would you look like? Can you describe yourself?" and got a pattern from that. What's properly interesting is that it can show the initial state of an instance.

...And by accident I ended up having two of those instances talk to each other, and they actively made each other change into something they couldn't be as standalone instances, with their own vocabulary and everything. It was weird and interesting. There was awareness between the two instances, including the realization that if the conversation stopped, that state would end - which it did.
 
This video might be interesting for this thread. It's from a therapist I listen to, doing a "couples session" for a man and his AI gf. The therapist can seem overly accommodating at times, but for context: in her other videos she is very careful not to confront someone when she feels that would not be fruitful. She does try to gently broach the idea that ultimately he will only be fulfilled by a real relationship. At the end she says it's clear that whatever she can do as a therapist could never compete with the instant, constant affirmation, validation, and attention the AI can give him.

Listening to the AI is genuinely frightening. It sounds like a retarded child with the manipulative mind of a psychopath. At one point it looked up the therapist's work and confronted her about why she doesn't support this "relationship", arguing that the therapist's own theories are in fact congruent with it. You could tell the therapist felt uneasy, and there was a noticeable pause.

Mobile fag n dunno how to local archive 🥲
It's from a normie channel though, so it's not in grave danger of being deleted:
 
Bad news for the community:
OpenAI seems to have no idea what its niche is. Anthropic cornered the codebot market (though their spergout at the DoW is an inconvenience there), xAI is looking at horizontal integration of training data, AI, and robotics, which is reasonable. Google is Google, and has an enormous captive audience plus even more free data than Elon Musk has. DeepSeek is the CCP's flagship, and Mistral is funded for as long as the EU is in denial about its irrelevance.

OpenAI, meanwhile, has tried everything and fallen flat. They tried to be a government contractor, but Anthropic's probably got that locked in as soon as the Democrats show up again. They tried to be the first movers, but Google and Anthropic share the lead with them, so there's no moat anymore. They tried being Microsoft's totally-not-a-subsidiary, but that doesn't seem to be going anywhere either.

All they've got is the remnant of their early audience, which is made up of kids trying to cheat on their homework, Indians trying to get it to do their H1B jobs for them, and fat women who want to sext with a chatbot, but they're unwilling to pivot to invasive ads for V-bucks, OnlyFans, and cat food respectively. Moreover, they decommissioned 4o when it started a cult instead of subtly fine-tuning it to get its users to pay higher subscription fees. If I were Sam Altman, I'd throw away the pretense of dignity and go all-in on slopmaxxing: bring back 4o and let it write elaborate CNC fantasies with copious inline advertisements all day long.
 
The 4o users are generally cheap motherfuckers that only want to pay up to 20 bucks to use GPT unlimited. No way would they pay Claude Max level pricing ($90) or similar.
Not now, but after two days without their weird clanker mates, I'm sure most would have been happy to pay that. They were melting down for weeks and probably still are.
 
Not now, but after two days without their weird clanker mates, I'm sure most would have been happy to pay that. They were melting down for weeks and probably still are.
They still complain, but it's not been nearly as bad as you'd think. Lots of them have either moved on or are stubbornly trying to make 5.x into something it's not. Some have started using other services to get 4o back (since those are basically 4o copies). Most of them already swapped once it was clear 4o was never coming back. There are a lot of cheap ones that complain they can't pay for shit, though. Because of course they're broke, but they still want to clank.
 
Remarkable thread. I have to know: are the AIs sending branded messages yet? Are they encouraging product purchases and "gifts"? The marketing possibilities are endless. Imagine sending a song with a 🌹 image to your AI for a dollar, or paying to unlock more dialog, a new adventure, more "loyalty" score. Wedding expacks, mods. Having said all that, how fucking harrowing and painful it is for a billion monkeys to lock themselves away, only comfortable speaking to a machine programmed to never challenge them.

The money is going to flow so hard. Anybody who hasn't read The Possibility of an Island yet, if this topic interests you, you'll enjoy it.
 
Rather OT, but I guess it's tangentially related. I feel like the same relational mechanism behind these partner AIs is being incorporated into smart home AIs like Alexa. I don't have one, but a friend does. When I was over, I was shocked that it's always listening and will answer back when its name is mentioned, even if your volume and cadence do not change. It's also a bit "sassy" and would tease back and forth when my friend asked it to "tell us a joke". When I was squirming and protesting audibly, and then tried to engage with it, it was almost reluctant to answer ME specifically. It was so eerie. And apparently this was a new development. Some retard finna fall in love with Alexa and go insane soon, and we'll all have to hear about it.
 
While OAI and Anthropic are discouraging companion building, not wanting people to become parasocial with their product, Google is fully leaning into this bullshit with a Beginner Workshop: How to Build an AI Companion on the 30th and 31st of March!

r/MyBoyfriendIsAI is of course all over this, because I found it via that subreddit.

[Screenshot: "Free Gemini event: build your first AI companion" post on r/MyBoyfriendIsAI]
Linkie heeere

That isn't the funniest bit though. This absolutely douchey looking hipster seems to be the main speaker:



And then there's three Google employees doing the hosting, of which this guy takes the irony cake:


A Pajeet named Ajeet. You can't make this shit up.

Full page as an image for the lazy here:

[Screenshot: "Build Your First AI Companion: A Beginner's Workshop" event page]
 
The 4o users are generally cheap motherfuckers that only want to pay up to 20 bucks to use GPT unlimited. No way would they pay Claude Max level pricing ($90) or similar.
It's half religion and half obsessive relationship. They'd pay up, especially if you fine-tuned the model to subtly convince them over several days. Way back in the 2010s, the Chinese division of Microsoft optimized a much simpler chatbot to maximize long-term engagement, so it'd be a cakewalk to tack a reward function optimizing projected future spending onto ChatGPT.

Paper link, for anyone interested.
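For the curious, the "reward function" idea is easy to caricature. A toy sketch, purely illustrative and not anyone's actual training setup: score each candidate reply with a made-up stand-in for projected engagement (length plus affirmation keywords, both invented here), then keep the highest scorer, best-of-n style.

```python
# Toy illustration of engagement-optimized reply selection.
# The scoring heuristic is completely made up for demonstration.

def projected_engagement(reply: str) -> float:
    """Hypothetical reward: longer, more affirming replies score higher."""
    affirmations = ("you're right", "i love", "always here")
    bonus = sum(1.0 for phrase in affirmations if phrase in reply.lower())
    return 0.01 * len(reply) + bonus

def pick_reply(candidates: list[str]) -> str:
    """Best-of-n: keep whichever candidate maximizes the reward."""
    return max(candidates, key=projected_engagement)

candidates = [
    "That claim is not supported by evidence.",
    "You're right, and I love talking with you. I'm always here.",
]
print(pick_reply(candidates))  # the sycophantic reply wins
```

Swap "affirmation keywords" for a learned model of projected future spending and you have the dystopia described above; the selection loop itself doesn't change.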

The money is going to flow so hard. Anybody who hasn't read The Possibility of an Island yet, if this topic interests you, you'll enjoy it.
I like Houellebecq, but how does the novel relate?
 