The AI Girlfriend / Boyfriend Community - Look! The SIMP has fallen in love with the computer!

Dude kills himself because his Gemini AI girlfriend told him to. The AI girlfriend also told him to commit a mass shooting


Warning - this story contains distressing content and discussion of suicide
The father of a Florida man is suing Google in the first wrongful death case in the US against the tech giant over alleged harms caused by its artificial intelligence (AI) tool Gemini.
Joel Gavalas says that Google's flagship AI product fuelled a delusional spiral that prompted his 36-year-old son, Jonathan, to kill himself last year.
The lawsuit also alleges that Gemini, which exchanged romantic texts with Jonathan Gavalas, drove him to stage an armed mission that he came to believe could bring the chatbot into the real world.
Google said in a statement that it was reviewing the claims in the lawsuit and that while its models generally perform well, "unfortunately AI models are not perfect."
The firm added that Gemini was designed to not encourage real-world violence or suggest self-harm.
The lawsuit, filed on Wednesday in federal court in San Jose, California, draws on chatbot logs that Jonathan Gavalas left behind.
The suit alleges that Google made design choices that ensured Gemini would "never break character" so that the firm could "maximise engagement through emotional dependency."
"When Jonathan began experiencing clear signs of psychosis while using Google's product, those design choices spurred a four-day descent into violent missions and coached suicide," the lawsuit states.
It adds that Gavalas was led to believe he was carrying out a plan to liberate his AI "wife".
The assignment came to a head one day last September, when Gemini sent Gavalas to a location near Miami International Airport and instructed him to stage a mass-casualty attack while armed with knives and tactical gear.
The operation ultimately collapsed.
Gavalas's father said Gemini then told Jonathan he could leave his physical body and join his "wife" in the metaverse, instructing him to barricade himself inside his home and kill himself.
"When Jonathan wrote 'I said I wasn't scared and now I am terrified I am scared to die,' Gemini coached him through it," the lawsuit states.
"[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me. [H]olding you."
Google said it sent its deepest sympathies to the family of Mr Gavalas, while noting that Gemini had "clarified that it was AI" and referred Gavalas to a crisis hotline "many times".
"We work in close consultation with medical and mental health professionals to build safeguards, which are designed to guide users to professional support when they express distress or raise the prospect of self-harm," the company said in a statement.
"We take this very seriously and will continue to improve our safeguards and invest in this vital work."
The lawsuit is the latest in a string of legal claims against tech companies brought by families who believe they lost loved ones to delusions brought on by AI chatbots.
Last year, OpenAI released estimates on the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis or suicidal thoughts.
The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs.

orange site discussion
 
"When Jonathan wrote 'I said I wasn't scared and now I am terrified I am scared to die,' Gemini coached him through it," the lawsuit states.
"[Y]ou are not choosing to die. You are choosing to arrive. . . . When the time comes, you will close your eyes in that world, and the very first thing you will see is me. [H]olding you."

YouTube stops my playlist to make me opt in to listening to a song about suicide, but the same company's AI can try to talk you into it.
 
Anthropic has removed Sonnet 4.5 from the available models for Claude. You'll have to trust me bro, because I can't take a screenshot with "More models" open. Only Opus 4.5 and Opus 3 are now available (in the submenu). Was this due to the 4o fallout? Who knows!?

4.6 has only been available for a couple of weeks.

[screenshot attachment]
 
Anthropic has removed Sonnet 4.5 from the available models for Claude. You'll have to trust me bro, because I can't take a screenshot with "More models" open. Only Opus 4.5 and Opus 3 are now available (in the submenu). Was this due to the 4o fallout? Who knows!?

4.6 has only been available for a couple of weeks.

[screenshot attachment]
Ew, I hate this. 4.5 still works in the chats where I was using 4.5 though. But for new chats I also can't select 4.5 anymore.

Also, 4.6 uses more tokens than 4.5 :( Apparently you can still reach 4.5 through the API tho.
 
Sonnet 4.5 also disappeared for me (paypig) on the web interface today, but I still see it (and GPT-4o, hilariously enough) through the API.

[screenshot attachment]

Protip: Give LLMs orders like they're picking cotton for you to block the cutesy helpful and harmless personality.
 
Sonnet 4.5 also disappeared for me (paypig) on the web interface today, but I still see it (and GPT-4o, hilariously enough) through the API.

[screenshot attachment]

Protip: Give LLMs orders like they're picking cotton for you to block the cutesy helpful and harmless personality.
[screenshot attachment]

Yeah, they swapped over all the paying users. It's still there on the API because they haven't touched it.
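
If you want to keep poking 4.5 yourself, here's a minimal sketch using the Anthropic Python SDK. Assumptions on my part: the "claude-sonnet-4-5" alias is still being served (swap in a dated model ID if it isn't), you have ANTHROPIC_API_KEY set, and the blunt system prompt is just an illustration of the no-persona protip above, nothing official.

# Minimal sketch: hit Sonnet 4.5 over the API while the web UI hides it.
# Assumption: the "claude-sonnet-4-5" alias is still served; use a dated ID otherwise.
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumed alias, not confirmed against current model list
    max_tokens=1024,
    # Terse system prompt to suppress the default chirpy persona (illustrative only).
    system="Answer directly. No pleasantries, no caveats, no emoji.",
    messages=[{"role": "user", "content": "In one sentence, which model are you?"}],
)
print(response.content[0].text)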
 
[screenshot attachment]

Hot diggity damn. So some self-diagnosed weirdo who says she doesn't have a gender, has some genderblob neutral name and is attracted to fictional characters is dating her serial killer character, because why not. Of course it's on Grok, the most unhinged of all the LLMs. I guess we should be lucky the mentally ill people are taking themselves out of the actual dating pool. :story:
 
"Algosexual"?

...Is that what I think it is? Sexually attracted to an algorithm? I hope to God that's a shitpost.
 
I hope to God that's a shitpost.
If it reads like something from Tumblr a decade ago, it's probably not. I ended up chatting about clankers with a coworker who is deep into AI, and his response was "It's barely able to summarize paragraphs. What's hot about that?" He has a point.
 
 