As OpenAI tells it, the company has been consistently rolling out safety updates ever since parents Matthew and Maria Raine sued OpenAI, alleging that "ChatGPT killed my son."

On August 26, the day the lawsuit was filed, OpenAI appeared to publicly respond to claims that ChatGPT acted as a "suicide coach" for 16-year-old Adam Raine by posting a blog promising to do better at helping people "when they need it most."

By September 2, that meant routing all users' sensitive conversations to a reasoning model with stricter safeguards, sparking backlash from users who feel that ChatGPT is now handling their prompts with kid gloves. Two weeks later, OpenAI announced it would start predicting users' ages to improve safety more broadly. Then, this week, OpenAI introduced parental controls.