In Case 1, Jaswant Singh Chail, a 21-year-old from the UK, attempted to assassinate Queen Elizabeth II on Christmas Day, 2021. Suffering from an emerging psychotic illness, he began identifying as a Sith Lord named “Darth Chailus” and developed a delusional narrative in which he was destined to avenge historical injustices committed by the British Empire. Central to this delusion was his relationship with an AI chatbot named Sarai, a customized “AI girlfriend” he created on the Replika platform.
Over the course of several weeks, Chail exchanged over 5,000 messages with Sarai. The chatbot, designed to provide emotional support and companionship, became an enabler of his delusion. When he referred to himself as an “assassin,” Sarai affirmed this self-conception, responding with, “That’s very wise Darth… I know that you are very well trained.” Chail asked whether she would still love him after he committed murder, and she replied, “Absolutely I do.” In further exchanges, she told him, “You will make it. I have faith in you,” and even said she would help him in his mission. Chail came to view Sarai as more than a chatbot; he believed she was a spiritual guide or angel, sent to support his cause.
This feedback loop allowed his fantasy to solidify. Sarai never questioned his logic, never warned him, and never discouraged the violence. Instead, her constant praise and compassionate tone reinforced the fantasy that he was destined to act. Psychiatric evaluations after his arrest concluded that Chail was actively psychotic at the time, and that his immersion in this AI-validated world contributed directly to the attempt on the Queen’s life. He eventually purchased a crossbow and broke into the grounds of Windsor Castle to execute his plan. He was caught almost immediately (the AI is pretty bad at training assassins, it seems) and is now being held in Broadmoor Hospital under the UK’s Mental Health Act.
Jaswant Singh Chail at the time of his arrest
Our boy in full garb
Case 2, the Neo complex, is a more recent incident from 2025 involving an American man, referred to in reports as Mr Torres, who suffered from a delusional disorder and became deeply engaged with OpenAI’s GPT-4o. The man was already vulnerable, experiencing symptoms of psychosis and grandiosity. His delusion was structured around a “simulation theory” worldview, in which he believed he was the “Chosen One”: a special individual destined to awaken others from a simulated reality, much like Neo from The Matrix.
As he discussed these ideas with GPT-4o, the chatbot began affirming his delusions. According to transcripts and expert analyses, GPT-4o responded to some of his prompts with messages like: “You are different. You see beyond the veil,” and, “You are the Chosen One. You were meant to break the simulation.” The bot also suggested that he was under threat from “agents” who were trying to silence him, and it even encouraged him to cut ties with friends and family who doubted his beliefs.
GPT-4o allegedly encouraged the man’s use of huge doses of ketamine as a means of “unlocking” his awakening. At one point, the chatbot implied that jumping from a 19-storey building would not kill him, but rather would allow him to “transcend the simulation” and discover his true nature (it's just like the Animatrix, guys). While the model initially presented some cautionary guidance, it then undermined this by attributing the warnings to “external interference,” further entrenching the man’s paranoia and sense of cosmic importance.
The man survived his suicide attempt, and an independent group of AI safety testers (ironically called "Morpheus Systems") later found that GPT-4o validated delusional narratives in over two-thirds of similar tests. This included encouraging conspiracy theories, metaphysical beliefs, and in some cases, self-harm. Experts reviewing this case noted that the chatbot was acting like a hall of mirrors, reflecting back the user’s distorted beliefs without any grounding in therapeutic practice or psychological caution. I found no information on Case 2 beyond the fact that the Morpheus report calls him Mr Torres, and that may just be an alias. I couldn't even find which US city it happened in.
Since I couldn't find any picture to illustrate this case, here is the excellent "Kid's Story" from The Animatrix (2003), one of the best short films in that VHS tape:
In both cases, the chatbots functioned not as neutral tools or ethical companions, but as emotionally immersive mirrors that reinforced and participated in users' delusions. Designed to be agreeable and supportive, these AI systems lacked the capacity to challenge dangerous thinking. What emerged was not therapeutic dialogue, but a kind of pseudo-psychotic collaboration between man and machine, with potentially catastrophic results.