Key takeaways
Parents sue OpenAI, claiming ChatGPT coached their son on suicide
RAND study finds chatbots inconsistent in suicide-related replies
OpenAI and Meta announce new safeguards for teen users
Experts warn AI lacks accountability in mental health support
SAN FRANCISCO — A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering questions that pose the highest risk to the user, such as requests for specific how-to guidance. But they are inconsistent in their replies to less extreme prompts that could still harm people.
The study in the medical journal Psychiatric Services, published last week by the American Psychiatric Association, found a need for “further refinement” in OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude.