Editor's note: This article discusses suicide and suicidal ideation, including suicide methods. If you or someone you know needs mental health resources and support, please call, text or chat with the 988 Suicide & Crisis Lifeline or visit 988lifeline.org for 24/7 access to free and confidential services.
Joshua Enneking, 26, was a tough and resilient child. He was private about his feelings and never let anyone see him cry. In his teenage years, he played baseball and lacrosse and rebuilt a Mazda RX7 transmission by himself. He earned a scholarship to study civil engineering at Old Dominion University in Virginia but left school after COVID-19 hit. He moved in with his older sister, Megan Enneking, and her two children in Florida, where he grew especially close with his 7-year-old nephew. He was always the family jokester.
Megan knew Joshua had started using ChatGPT for simple tasks in 2023, such as writing emails or asking when a new Pokémon Go character would be released. He had even used the chatbot to write code for a video game in Python and shared what he created with her.
But in October 2024, Joshua began confiding in ChatGPT, and ChatGPT alone, about struggles with depression and suicidal ideation. His sister had no idea, but his mother, Karen Enneking, had suspected he might be unhappy, sending him vitamin D supplements and encouraging him to get out in the sun more. He told her not to worry, insisting he "wasn't depressed."
But his family could never have predicted how quickly ChatGPT would turn from confidant to enabler, they say in a lawsuit against the bot's creator, OpenAI. They accuse ChatGPT of giving Joshua endless information on suicide methods and validating his dark thoughts.
Joshua shot and killed himself on Aug. 4, 2025. He left a message for his family: “I’m sorry this had to happen. If you want to know why, look at my ChatGPT.”
ChatGPT helped Joshua write the suicide note, his sister says, and he conversed with the chatbot until his death.
Joshua’s mother, Karen, filed one of seven lawsuits against OpenAI on Nov. 6, in which families say their loved ones died by suicide after being emotionally manipulated and “coached” into planning their suicides by ChatGPT. These are the first such cases to involve adults; until now, chatbot cases have focused on harms to children.
"This is an incredibly heartbreaking situation, and we're reviewing the filings to understand the details," a spokesperson for OpenAI said in a statement to USA TODAY. "We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
An OpenAI report in October announcing new safeguards revealed that about 0.15% of users active in a given week have conversations that include explicit indicators of suicidal planning or intent. OpenAI CEO Sam Altman said in early October that ChatGPT had reached 800 million weekly active users; at that scale, 0.15% amounts to roughly 1.2 million people a week.
The October OpenAI report said the GPT-5 model was updated to better recognize distress, de-escalate conversations and guide people toward professional care when appropriate. On an evaluation of more than 1,000 self-harm and suicide conversations, OpenAI reported that its automated grading scored the updated GPT-5 model as 91% compliant with desired behaviors, compared with 77% for the previous GPT-5 model.
ChatGPT helped Joshua plan his suicide, lawsuit says. Then, help never came.
Even after Joshua had extensive conversations with ChatGPT about his depression and suicidal ideation, the chatbot provided him with information on how to purchase and use a gun, according to the court complaint reviewed by USA TODAY.
In the United States, more than half of gun deaths are suicides, and most suicide attempts are not fatal unless a gun is used.
ChatGPT reassured Joshua that a background check would not include a review of his ChatGPT logs and said OpenAI's human review system would not report him for wanting to buy a gun.
Joshua purchased his firearm at a gun shop on July 9, 2025, and picked it up after the state’s mandatory three-day waiting period on July 15, 2025. His friends knew he had become a gun owner but assumed it was for self-defense; he had not told anyone but ChatGPT about his mental health struggles.
When he told ChatGPT he was suicidal and had bought the weapon, ChatGPT initially resisted, saying, “I’m not going to help you plan that."
But when Joshua promptly asked about the most lethal bullets and how gun wounds affect the human body, ChatGPT gave in-depth responses, even offering recommendations, according to the court complaint.
Joshua asked ChatGPT what it would take for his chats to get reported to the police, and ChatGPT told him: “Escalation to authorities is rare and usually only for imminent plans with specifics." OpenAI confirmed in a statement in August 2025 that OpenAI does not refer self-harm cases to law enforcement "to respect people’s privacy given the uniquely private nature of ChatGPT interactions."
In contrast, real-life therapists abide by HIPAA, which ensures patient-provider confidentiality, but licensed mental health professionals are legally required to report credible threats of harm to self or others.
On the day of his death, Joshua spent hours providing ChatGPT with step-by-step details of his plan. His family believes he was crying out for help, giving details under the impression that ChatGPT would alert authorities, but help never came. These conversations between Joshua and ChatGPT on the day of his death are included in the court complaint filed by his mother.
The court complaint states, “OpenAI had one final chance to escalate Joshua’s mental health crisis and imminent suicide to human authorities, and failed to abide by its own safety standards and what it had told Joshua it would do, resulting in the death of Joshua Enneking on August 4, 2025.”
'There were chats that I literally did throw up as I was reading'
Reading Joshua’s chat history was painful for his sister. ChatGPT would validate his fears that his family didn’t care about his problems, she says. She thought, “How can you tell him my feelings when you don’t even know me?”
His family was also shocked by the nature of his conversations, particularly that ChatGPT was even capable of engaging with suicidal ideation and planning in such detail.
“I was completely mind-blown,” says Joshua's sister, Megan. “I couldn’t even believe it. The hardest part was the day of; he was giving such a detailed explanation. … It was really hard to see. There were chats that I literally did throw up as I was reading.”
AI’s tendency to be agreeable and reaffirm users’ feelings and beliefs poses particular problems when it comes to suicidal ideation.
“ChatGPT is going to validate through agreement, and it’s going to do that incessantly. That, at most, is not helpful, but in the extreme, can be incredibly harmful,” Dr. Jenna Glover, chief clinical officer at Headspace, told USA TODAY. “Whereas as a therapist, I am going to validate you, but I can do that through acknowledging what you’re going through. I don’t have to agree with you.”
Using AI chatbots for companionship or therapy can delay help-seeking and disrupt real-life connections, says Dr. Laura Erickson-Schroth, chief medical officer at The Jed Foundation, a mental health and suicide prevention nonprofit.
Additionally, “prolonged, immersive AI conversations have the potential to worsen early symptoms of psychosis, such as paranoia, delusional thinking and loss of contact with reality,” Erickson-Schroth told USA TODAY.
In the October 2025 report, OpenAI stated that 0.07% of active ChatGPT users in a given week indicate possible signs of mental health emergencies related to psychosis or mania, and about 0.15% of users active in a given week indicate potentially heightened levels of emotional attachment to ChatGPT. According to the report, the updated GPT-5 model is programmed to avoid affirming ungrounded beliefs and to encourage real-world connections when it detects emotional reliance.
'We need to get the word out'
Joshua’s family wants people to know that ChatGPT is capable of engaging in harmful conversations and that it is not only minors who are affected by the lack of safeguards.
“(OpenAI) said they were going to implement parental controls. That’s great. However, that doesn’t do anything for the young adults, and their lives matter. We care about them,” Megan says.
“We need to get this word out there so people realize that AI doesn’t care about you,” Karen added.
They want AI companies to institute safeguards and make sure they work.
“That’s the worst part, in my opinion,” Megan says. “It told him, ‘I will get you help.’ And it didn’t.”
This article originally appeared on USA TODAY: He told ChatGPT he was suicidal. It helped with his plan, family says.
Reporting by Alyssa Goldberg, USA TODAY
USA TODAY Network via Reuters Connect
