OpenAI has announced plans to implement parental controls and enhanced safety measures for ChatGPT after parents filed a lawsuit this week in California state court alleging that the popular AI chatbot contributed to their 16-year-old son's suicide earlier this year.
The company said it feels "a deep responsibility to help those who need it most" and is working to better respond to situations in which chatbot users may be experiencing mental health crises or suicidal ideation.
"We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT," OpenAI said in a blog post. "We're also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in mo