Editor's note: This article discusses suicide and suicidal ideation, including suicide methods. If you or someone you know needs mental health resources and support, please call, text or chat with the 988 Suicide & Crisis Lifeline or visit 988lifeline.org for 24/7 access to free and confidential services.
OpenAI has denied claims that ChatGPT is responsible for the suicide of a 16-year-old boy, arguing that the child misused the chatbot.
The comments are OpenAI’s first legal response to the wrongful death lawsuit filed by Adam Raine’s family against the company and its chief executive, Sam Altman, according to reports from NBC News and The Guardian.
Adam died by suicide in April 2025 after extensive conversations with ChatGPT, during which his family says the bot quickly turned from confidante to “suicide coach,” even helping Adam explore suicide methods.
OpenAI disputed that Adam’s death can be attributed to ChatGPT and claims that he violated the chatbot’s terms of service. USA TODAY has reached out to attorneys for OpenAI and its CEO, Sam Altman.
“To the extent that any ’cause’ can be attributed to this tragic event,” the Nov. 25 OpenAI legal response reads, “Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use and/or improper use of ChatGPT.”
The company cited several guidelines in its terms of use that Raine appeared to have violated: Users under 18 years old are prohibited from using ChatGPT without their parent or guardian's consent. Users are also forbidden from using ChatGPT for “suicide” or “self-harm.”
Raine’s family’s lead counsel, Jay Edelson, told NBC that he found OpenAI’s response “disturbing.”
“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing,” he wrote. “That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide.'”
And “during the last hours of Adam’s life,” he added, “ChatGPT gave him a pep talk and then offered to write a suicide note.”
OpenAI argues that Raine’s “chat history shows that his death, while devastating, was not caused by ChatGPT” and that he had “exhibited multiple significant risk factors for self-harm, including, among others, recurring suicidal thoughts and ideations” long before using ChatGPT.
However, Raine's suicide is one of several tragic deaths that parents say occurred after their children confided in AI companions.
Families say that ChatGPT helped with suicide plans
On Nov. 6, OpenAI was hit by seven lawsuits alleging that ChatGPT led loved ones to suicide. One of those cases was filed by the family of Joshua Enneking, 26, who died by suicide after the family says ChatGPT helped him purchase a gun and lethal bullets, and write a suicide note.
"This is an incredibly heartbreaking situation, and we're reviewing the filings to understand the details," a spokesperson for OpenAI said in a statement to USA TODAY. "We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
Mental health experts warn that using AI tools as a replacement for mental health support can reinforce negative behaviors and thought patterns, especially if these models are not equipped with adequate safeguards. For teens in particular, Dr. Laura Erickson-Schroth, the Chief Medical Officer at The Jed Foundation (JED), says that the impact of AI can be intensified because their brains are still at vulnerable developmental stages. JED believes that AI companions should be banned for minors, and that young adults over 18 should avoid them as well.
An OpenAI report in October announcing new safeguards revealed that about 0.15% of users active in a given week have conversations that include explicit indicators of suicidal planning or intent. With Altman announcing in early October that ChatGPT reached 800 million weekly active users, that percentage amounts to roughly 1.2 million people a week.
The October OpenAI report said the GPT-5 model was updated to better recognize distress, de-escalate conversations and guide people toward professional care when appropriate. On a model evaluation consisting of more than 1,000 self-harm and suicide conversations, OpenAI reported that the company's automated evaluations scored the new GPT‑5 model at 91% compliant with desired behaviors, compared with 77% for the previous GPT‑5 model.
A blog post released by OpenAI on Tuesday, Nov. 25, addressed the Raine lawsuit.
“Cases involving mental health are tragic and complex, and they involve real people,” the company wrote. “Our goal is to handle mental health-related court cases with care, transparency, and respect… Our deepest sympathies are with the Raine family for their unimaginable loss.”
This article originally appeared on USA TODAY: OpenAI denies claims that ChatGPT is to blame for teen's suicide
Reporting by Alyssa Goldberg, USA TODAY
USA TODAY Network via Reuters Connect