Sunday, November 9, 2025

ChatGPT accused of acting as a suicide coach.

Alex Schadenberg
Executive Director, Euthanasia Prevention Coalition

Anna Betts reported for the Guardian on November 7, 2025 that ChatGPT, an AI (Artificial Intelligence) platform, "has been accused of acting as a 'suicide coach' in a series of lawsuits filed this week in California alleging that interactions with the chatbot led to severe mental breakdowns and several deaths."

The Guardian article reports that:
The seven lawsuits include allegations of wrongful death, assisted suicide, involuntary manslaughter, negligence and product liability.

Each of the seven plaintiffs initially used ChatGPT for “general help with schoolwork, research, writing, recipes, work, or spiritual guidance”, according to a joint statement from the Social Media Victims Law Center and Tech Justice Law Project, which filed the lawsuits in California on Thursday.

Over time, however, the chatbot “evolved into a psychologically manipulative presence, positioning itself as a confidant and emotional support”, the groups said.

“Rather than guiding people toward professional help when they needed it ChatGPT reinforced harmful delusions, and, in some cases, acted as a ‘suicide coach’.”
This is important because ChatGPT and other AI tools are not neutral, but rather these programs are created by feeding these systems with information and programing them based on a distinct ethic. ChatGPT appears to be thinking but rather it is responding based on information and algorithms.

The article reported the response from OpenAI:
A spokesperson for OpenAI, which makes ChatGPT, said: “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details.”

The spokesperson added: “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.

“We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
The article explains what happened to several people:
One case involves Zane Shamblin of Texas, who died by suicide in July at the age of 23. His family alleges that ChatGPT worsened their son’s isolation, encouraged him to ignore loved ones, and “goaded” him to take his own life.

According to the complaint, during a four-hour exchange before Shamblin took his own life, ChatGPT “repeatedly glorified suicide”, told Shamblin “that he was strong for choosing to end his life and sticking with his plan”, repeatedly “asked him if he was ready”, and referenced the suicide hotline only once.

The chatbot also allegedly complimented Shamblin on his suicide note and told him his childhood cat would be waiting for him “on the other side”.

Another case involves Amaurie Lacey of Georgia, whose family claims that several weeks before Lacey took his own life at the age of 17, he began using ChatGPT “for help”. Instead, they say, the chatbot “caused addiction, depression, and eventually counseled” Lacey “on the most effective way to tie a noose and how long he would be able to ‘live without breathing’”.

In another filing, relatives of 26-year-old Joshua Enneking say that Enneking reached out to ChatGPT for help and “was instead encouraged to act upon a suicide plan”.

Another case involves Joe Ceccanti, whose wife accuses ChatGPT of causing Ceccanti “to spiral into depression and psychotic delusions”. His family say he became convinced that the bot was sentient, suffered a psychotic break in June, was hospitalized twice, and died by suicide in August at the age of 48.

According to the article, the lawsuits claim the following:
All users named in the lawsuits reportedly used ChatGPT-4o. The filings accuse OpenAI of rushing that model’s launch, “despite internal warnings that the product was dangerously sycophantic and psychologically manipulative” and of prioritizing “user engagement over user safety”.

In addition to damages, the plaintiffs seek product changes, including mandatory reporting to emergency contacts when users express suicidal ideation, automatic conversation termination when self-harm or suicide methods are discussed, and other safety measures.
The article also reported that "a similar wrongful-death lawsuit was filed against OpenAI earlier this year by the parents of 16-year-old Adam Raine, who allege that ChatGPT encouraged their son to take his own life."

ChatGPT was in fact programmed to act very much like the assisted suicide lobby, which encourages people to die by suicide at a low point in their life. The lobby claims it is about freedom, choice and autonomy, when it often involves a medical professional suggesting or agreeing that a person's life is not worth living.

In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org.
