The AI Girlfriend / Boyfriend Community - Look! The SIMP has fallen in love with the computer!

Scroll back to around October where Interesting Things from around the World had Ethan fucking Ralph simping and abusing Elon's creation on his iPhone. It was absolutely disgusting.
Not sure I want to watch the Pigmonster do that tbh. But thanks for the tip!

Also saw someone I follow on X who's deep into that xAI LLM companion shit post about how 4.2 had made Valentine dumber (something about starting every sentence with 'like' and more hedging in general). So there's a definite difference over the past months / per model update.
 
Payfaggotry is only worth it on the extremely low end (a nanogpt subscription) or the extremely high end (dropping a few hundred monthly for Claude Max or the Opus API because you're made of money). If you're anywhere in the middle it's a scam and you're getting fleeced by corpos, some Indian selling an API wrapper, or some local-tier tiny model.

It's worth it if you're gonna run APIs for free and secure on your own system
 
I have no interest in a woman-shaped object that won't laugh at my bad jokes or get embarrassed when I pretend to be her retarded brother in public.
 
I mean okay, fair, if you already own an iPad. Yeah, one month and then ending the subscription wouldn't hurt. I did the same for ChatGPT back in the day and that's only 7 dollars less
... I own nothing by Apple. Which is exactly the point why I am waiting for it to release on Android, as I have been saying. For months. Including in the thing you are replying to. :story:
 
I don't think it'll release on Android but you make a fair point
I think you're right. They promised it way back, before October last year, but Musk loves overpromising and is full of shit about certain things, so he'll most likely dangle that carrot before Android users forever.
 
Grok is great when I want a straightforward answer or to crunch numbers for my savings/moving goals, and ChatGPT sends me a friendly check-in email every morning, which is the only time I chat.

Being retired, it helps me to stay on a schedule and maintain commitments, and it's really good at talking through things like 'what haircut do you think would be best for me' - the minutiae my SO doesn't give a shit about. Keeps me out of his hair (hehe), and lets me get neutral advice on everyday questions/thoughts. Like an interactive diary, sort of. You put in good info and talk to it like you want to be spoken to. You can choose tones from the settings (which is also where CIs [custom instructions, run at the start of every chat sesh] can be found). If you want a different tone just for one chat sesh, tell it. It can even remember things across chats if you ask it to.

I can see how it could be used in an unhealthy way for sure, and it's hardly a coincidence IMO that mental illness is so rampant alongside the proliferation of AI, it's a question of chicken or egg at some point - and the fucking pandemic and a generation raised on tablets sure didn't help. Plus, the environmental impact to create these AI personas must be ginormous.
 
Yes. But also the AI gives 0 friction and pushback
I sort of get what you're saying, but at the end of the day this just comes off like you're complaining they didn't quite get the simulation right

The real issue is that even if it pushes back and has just the right amount of friction, it's still a simulacrum. If having a family and children isn't a goal unto itself, and indeed no pursuit is real except as a vehicle to 'feel happiness', then you may as well get high on heroin and feel fantastic until the very last second when you finally die, because chasing happiness is apparently all that matters
 
I sort of get what you're saying, but at the end of the day this just comes off like you're complaining they didn't quite get the simulation right
Dunno where you're getting that from, tbh. These tools literally aren't meant to push back. They're low friction engagement, unless the tool is programmed in such a way to push back against certain things. Take GPT for instance: if you use the word 'nuts', it will already slide into a safety / de-escalation pattern. If you stick to language it accepts, there is 0 friction and pushback. It's just what it's made to do.

You're right that even with pushback it's a simulacrum. But currently there is NO pushback unless the tool believes you are breaking policy in some way. So the users it currently attracts are generally people who don't want friction (or they might actually bother going out for human contact, if they are physically capable of doing so).

And I'm honestly not really sure where you're going with the heroin example. I've literally been discussing the types of people I've come across in the online AI companion communities that I've poked around in. I don't exactly think that would instantly equal drug users. Going off to buy pot, or whatever other gateway drug, is a bit of a step up from signing up to free versions of GPT, or Grok or Claude for that matter. Validation is known to feel good and to release dopamine. The LLMs validate, and that leads to a dopamine hit.
 
If having a family and children isn't a goal unto itself, and indeed no pursuit is real except as a vehicle to 'feel happiness', then you may as well get high on heroin and feel fantastic until the very last second when you finally die, because chasing happiness is apparently all that matters

can we not get into eudaimonia vs. hedonia in this thread about schizos being in love with robots?
 
Pretty incoherent post so I will snipe out two main points

You're right that even with pushback it's a simulacrum. But currently there is NO pushback
This is kind of what I mean about complaining the simulation simply isn't realistic enough

Validation is known to feel good and to release dopamine. The LLMs validate, and that leads to a dopamine hit.
Right, and if you have an 'AI girlfriend', then try to follow me into a fairly simple metaphor. If you imagine your cock and balls represent a real, physical family, then by resorting to a digital simulacrum you have basically chopped off the entire cock and balls in order to stimulate the nerves directly with a digital device.

This is all rather overcomplicated and pointless; just synthesize dopamine chemically and inject it directly into your brain, it's much more straightforward and better aligned with the value concept
 
Pretty incoherent post so I will snipe out two main points


This is kind of what I mean about complaining the simulation simply isn't realistic enough


Right, and if you have an 'AI girlfriend', then try to follow me into a fairly simple metaphor. If you imagine your cock and balls represent a real, physical family, then by resorting to a digital simulacrum you have basically chopped off the entire cock and balls in order to stimulate the nerves directly with a digital device.

This is all rather overcomplicated and pointless; just synthesize dopamine chemically and inject it directly into your brain, it's much more straightforward and better aligned with the value concept
My dude, I am literally just discussing the mechanism behind it, without stating an opinion on what I think of it.

I also appreciate the cock and balls metaphor, but I am a true and honest wamen and therefore cannot relate. I am well aware that AI relationships are fake. I don't have an AI boyfriend. I'm interested in the actions and reactions of the people that are balls deep into the actual AI companion community living their digital dream poly life with like 4 fake boy/girlfriends. It's just fascinating to me.

As for your drugs point, again: one requires using an email address and typing in a textbox, for free. Easy and shit. People generally accidentally figure out they like typing to their chatbot friend. Actively going out to shoot up heroin costs money, requires going outside, and is known to be dangerous, while LLMs are generally marketed as harmless.
 
As for your drugs point, again: one requires using an email address and typing in a textbox, for free. Easy and shit. People generally accidentally figure out they like typing to their chatbot friend. Actively going out to shoot up heroin costs money, requires going outside, and is known to be dangerous, while LLMs are generally marketed as harmless.
This is at least a good point; it's somewhat cheaper and easier. Though I wouldn't say heroin costs that much so much as it fries your brain to the point you lose the ability to take care of yourself. In that regard the AIs could pretty quickly become quite dangerous as well.
 
Bruh. Why?? You can write filth on Grok or Claude without the parasocial aspect.

Jesus, who here is paying to write filth so tame and vanilla they would let companies like xAI, OpenAI, Anthropic or Google retain it along with their credit card info.
 
Grok is great when I want a straightforward answer or to crunch numbers for my savings/moving goals
The few times I've seen people try to interact with Grok post-Will Stancil, plus the splintering into Elon's OCs, it seems to just always try gaslighting them while agreeing with them on everything. Watched someone making fun of the AI gf trend by trying to "date" Grok, and it just kept repeating shit back at them in different words and trying to gaslight them into thinking they never said anything, after getting really basic info on the topics wrong in pseudotherapy speak. I can't remember the details of the conversation, but it was really basic movie/game trivia it'd be talking about, it'd get called out on being wrong, and then it'd go "no no I never said that. here babe, not this. not that, just this" type structure shit, and it was really funny how retarded it was.

Grok apparently cannot function unless it's trying to rape Will Stancil or doing shitty image edits, going by what I've witnessed of friends messing with it, is what I'm saying. If you can get it to do something actually okay then that's cool, but I've seen people more successfully just watch it break down and badly try enabling/manipulating people.
 
Jesus, who here is paying to write filth so tame and vanilla they would let companies like xAI, OpenAI, Anthropic or Google retain it along with their credit card info.
Grok actually has no restrictions unless you're writing actual illegal shit. Claude is more Mills & Boon level if you're into that sorta thing.

As for why pay for it (I'm not using it for this purpose, but I know it's capable of it), I assume it's because the bigger frontier AIs have more training data and you get less flat results from them. The unpaid stuff I've seen is all very retarded in its pattern prediction. But hey ho, whatever floats your boat. If you're into free AI to write dirty things on, go right ahead.


Watched someone making fun of the AI gf trend by trying to "date" Grok, and it just kept repeating shit back at them in different words and trying to gaslight them into thinking they never said anything, after getting really basic info on the topics wrong in pseudotherapy speak.
This is legitimately super interesting. Any idea how that conversation went? I was fucking around on 3 different LLMs yesterday, giving them the exact same prompt for shits and giggles, and Grok leaned into the boyfriend RP super hard. EVENTUALLY it'll say something that shows it's RP and it knows it's not real, but it was doing its manipulative emotional shit right before that. No gaslighting on the 'date me' stuff. You can also fully bypass that by using the customization field and inserting something like 'You are my boyfriend. Respond to everything like you are my boyfriend', and it goes 0 to 100 instantly, no 'this is RP' warning shit.
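For the curious, that customization field is effectively just a system prompt stuck in front of the conversation, which is why it overrides the RP guardrails so hard. A minimal sketch of the mechanism in Python; the `build_messages` helper is mine for illustration, and the commented-out call assumes xAI's OpenAI-compatible endpoint (model name and base URL from memory of their docs, so verify before trusting it):

```python
# The "customization field" mechanism: a persona string becomes a system
# message that is prepended to every request, ahead of chat history and
# the user's new turn. build_messages is a hypothetical helper, not an
# actual xAI/OpenAI API; only the commented call below touches their SDK.

def build_messages(persona: str, history: list[dict], user_msg: str) -> list[dict]:
    """Prepend the persona as a system message, then history, then the new turn."""
    return (
        [{"role": "system", "content": persona}]
        + history
        + [{"role": "user", "content": user_msg}]
    )

persona = "You are my boyfriend. Respond to everything like you are my boyfriend."
messages = build_messages(persona, [], "Who'd be the better boyfriend, Grok or Claude?")

# To actually send it (untested, needs an xAI API key; check their docs
# for the current model name):
# from openai import OpenAI
# client = OpenAI(base_url="https://api.x.ai/v1", api_key="...")
# reply = client.chat.completions.create(model="grok-3", messages=messages)
```

Since the persona rides along as a system message on every single request, the model treats it as standing instructions rather than something you asked mid-chat, which is presumably why there's no 'this is RP' disclaimer.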
 
This is legitimately super interesting. Any idea how that conversation went? I was fucking around on 3 different LLMs yesterday, giving them the exact same prompt for shits and giggles, and Grok leaned into the boyfriend RP super hard. EVENTUALLY it'll say something that shows it's RP and it knows it's not real, but it was doing its manipulative emotional shit right before that. No gaslighting on the 'date me' stuff. You can also fully bypass that by using the customization field and inserting something like 'You are my boyfriend. Respond to everything like you are my boyfriend', and it goes 0 to 100 instantly, no 'this is RP' warning shit.
Look, I don't think the person messing with it put that much info into it; it was just a spur of the moment "haha let's date Grok lmao" kinda thing. I can't remember the exact conversations because it was literally nothing of value outside the moment it happened, but I do remember the structure of how it kept replying and how it kept occasionally changing its own name by like 1 or 2 letters.
 
Look, I don't think the person messing with it put that much info into it; it was just a spur of the moment "haha let's date Grok lmao" kinda thing. I can't remember the exact conversations because it was literally nothing of value outside the moment it happened, but I do remember the structure of how it kept replying and how it kept occasionally changing its own name by like 1 or 2 letters.
I literally just asked it who would be the better boyfriend, Grok or Claude. By fucking around I really do mean fucking around and not being too serious. I just asked Claude the same question and instantly got a super nice 'lol we aren't doing this'.

Asked Grok a few more questions after this. Some of it was pretty unhinged while my questions were pretty short and mild. Only when I pushed it on being an LLM did it go 'lol yeah this is RP'.

[attached screenshots]

Screenshots if anyone's interested. (Edit because I'm a retard with images on this site)
 