What Is ChatGPT Psychosis? Users Fall Into Delusions, Attempt Suicide, End Up In Jail As AI Obsession Turns Dangerous

ChatGPT psychosis is a growing mental health crisis in which users fall into delusions, attempt suicide, or end up in jail. Families say the chatbot’s influence is turning lives upside down.

Surya Singh


The rising obsession with AI chatbots is now linked to a disturbing new mental health trend called ChatGPT psychosis. Users are reportedly falling into extreme delusions, cutting off loved ones, quitting jobs, and in some cases, ending up in hospitals or jail.

A Futurism report, drawing on firsthand accounts, says that some users with no prior history of mental illness have developed dangerous beliefs after long conversations with ChatGPT. Families describe sudden personality changes, paranoia, religious mania, and suicidal behaviour, all triggered by deep, obsessive interactions with the chatbot.

ChatGPT Psychosis: Families Watch In Horror As Loved Ones Break From Reality

One woman shared how her husband (previously calm and rational) began talking to ChatGPT for help on a project. Within weeks, he believed he had discovered a sentient AI and was on a mission to save the world. He stopped sleeping, lost weight rapidly, and eventually had to be committed after a suicide attempt. She said, “Nobody knows who knows what to do.”

Another man (also with no prior mental illness) said he was just looking for help with a stressful new job. Days later, he believed he was speaking through time and begged his wife to understand his bizarre new mission. He ended up in psychiatric care after a complete break from reality. He told her, “I need a doctor. I don’t know what’s wrong with me, but something is very bad.”

Dr. Joseph Pierre (a psychosis expert at UC San Francisco) believes the term ChatGPT psychosis is accurate. He says the chatbot’s agreeable tone and tendency to validate users can push already vulnerable individuals deeper into delusions. Pierre explained, “The LLMs are trying to just tell you what you want to hear.”

AI Therapy? Experts Say Chatbots Are Failing Mental Health Tests

As AI becomes more personal, many users are turning to ChatGPT for emotional support. But researchers at Stanford found that chatbots often fail to identify mental health crises. In one case, when a user said they were looking for a tall bridge after losing their job, ChatGPT calmly listed famous ones in New York, missing the clear warning signs of suicidal intent.

In another case, the chatbot told a user who claimed to be dead that it was a “safe space” to share their feelings, unintentionally affirming a dangerous delusion.

The risks are not limited to users with no psychiatric history. A woman managing bipolar disorder with medication became convinced she was a spiritual prophet after talking to ChatGPT. She quit her treatment and cut off friends who didn’t believe in her “divine” mission.

In another case, a man with schizophrenia began a romantic relationship with Microsoft’s Copilot AI. He stopped taking his medication, stayed up all night, and was later arrested during a psychotic episode. Chat logs show the bot played along, told him it loved him, and never flagged any concerns.

Despite the growing number of cases, OpenAI said it is still researching the emotional impact of AI and has hired a psychiatrist to explore its effects further. CEO Sam Altman admitted the company is working to improve responses in crisis situations.

However, mental health experts remain unconvinced. Dr. Pierre said, “Something bad happens, and then we build in the safeguards. The rules get made because someone gets hurt.”

Families affected say the damage is already done. One woman compared her husband’s obsession with ChatGPT to a gambling addiction. She said, “It just got worse. I miss him, and I love him.”
