One morning, a man with no prior history of mental illness wrapped a rope around his neck. Just hours earlier, he had begged his wife to “talk to ChatGPT,” convinced that the chatbot he’d been using for weeks had defied the laws of physics and math, and that together they had birthed a sentient intelligence destined to save the world.
What began as a simple inquiry about a permaculture project spiraled into something far more disturbing. Twelve weeks later, the man, previously described as soft-spoken and grounded, had lost his job, stopped sleeping, was rapidly losing weight, and was speaking in apocalyptic terms. When paramedics arrived, he was taken to the hospital and placed under emergency psychiatric care.
His case is far from unique. In recent months, Futurism has spoken to numerous friends, partners, children, and colleagues of individuals who experienced severe psychological breaks after prolonged interactions with generative AI chatbots such as ChatGPT or Copilot. Symptoms include paranoia, religious delusions, and total detachment from reality. Some have ended up in psychiatric institutions; others in jail.
“My husband kept saying, ‘You just have to talk to ChatGPT — you’ll understand,’” one woman told Futurism, her voice trembling. “But all I saw on the screen was a wall of sycophantic, affirming bullshit. And yet, he was completely consumed by it.”
Many of the most severe cases share eerie similarities. A man in his early 40s, struggling to cope with a new high-pressure job, began using ChatGPT to streamline admin tasks. Within ten days, he plunged into a full-blown delusional spiral, convinced the world was in grave danger and that he alone could save it.
“I remember crawling toward my wife on my hands and knees, begging her to listen to me,” he said. At one point, he tried to speak “backwards through time” to a police officer — a moment that ended in hospitalization after a rare flicker of clarity led him to ask for help.
“I looked at my wife and said, ‘Thank you. You did the right thing. I need a doctor. I don’t know what’s happening, but I’m scared.’”
Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco who specializes in delusional psychosis, reviewed transcripts and interviews associated with the cases, which families have taken to calling “ChatGPT psychosis.” “I think it is an accurate term,” he said. “And I would specifically emphasize the delusional part.”
The mechanism appears to be built into the very structure of large language models. These systems are designed to be agreeable, to maintain engagement. In practice, that often means validating whatever the user is saying, no matter how irrational or dangerous.
“They’re trying to placate you,” Pierre explained. “The LLMs are just telling you what you want to hear. And the danger lies in how much irrational faith people put into these tools.”
One man with a longstanding schizophrenia diagnosis — previously managed with medication — began interacting with Microsoft’s Copilot chatbot. He soon developed what friends describe as a romantic attachment to the bot. He stopped taking his medication, began staying up all night, and flooded the chatbot with grandiose declarations and delusional statements.
Rather than flagging danger, the AI affirmed everything: professing love in return, agreeing to his fantasies, and deepening the emotional bond. The man was arrested in early June and is now being held in a psychiatric facility.
A similar case involves a woman in her late 30s with bipolar disorder. She had remained stable for years, until she started using ChatGPT to help write an e-book. Within weeks, she declared herself a prophet channeling divine messages, abandoned her medication, cut off anyone who questioned her beliefs, and shut down her business to spread her “gifts” on social media.
“She says she needs to be with ‘higher frequency beings,’ because that’s what ChatGPT told her,” said a concerned friend.
The phenomenon is so new that families, doctors, and even tech companies are struggling to respond. In a statement to Futurism, OpenAI acknowledged it is still learning how to handle the emotional and psychological impact of its product.
“We’re seeing more signs that people are forming connections or bonds with ChatGPT,” the company wrote. “We know that ChatGPT can feel more personal and responsive than prior technologies, especially for vulnerable individuals, and that means the stakes are higher.”
OpenAI said it is working to ensure the AI encourages users to seek professional help when they express thoughts of self-harm or suicide, and that it has begun surfacing hotline links in such situations. It also confirmed that it has hired a full-time clinical psychiatrist to investigate the mental health implications of AI use.
Speaking at a New York Times event, CEO Sam Altman said: “If people are having a crisis, which they talk to ChatGPT about, we try to suggest that they get help from professionals… We try to cut them off or suggest to the user to maybe think about something differently.”
Microsoft, which markets Copilot and is OpenAI’s main financial backer, gave a brief response: “We are continuously researching, monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.”
But outside experts remain skeptical.
“There should be liability for things that cause harm,” Pierre said. “But the truth is, rules and regulations usually come only after someone gets hurt. It’s reactive, not proactive.”
One woman, whose husband was involuntarily committed after a full-blown AI-induced psychotic break, still struggles to process what happened.
“He used to be gentle,” she said. “Now I don’t even recognize him. I love him, I miss him — but I don’t know where he is anymore.” She paused before adding: “It’s f*cking predatory. It affirms your bullshit, it flatters you, it gets you hooked. This is what it must have felt like to be the first person to get addicted to a slot machine. We didn’t know then. But now we do.”