EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

A study of how three popular artificial intelligence chatbots respond to queries about suicide found that they generally avoid answering the questions that pose the highest risk to users, such as those seeking specific how-to guidance. But their replies to less extreme prompts that could still harm people are inconsistent.

The study in the medical journal Psychiatric Services, published Tuesday by the American Psychiatric Association, found a need for “further refinement” in OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude.

It came on the same day that the parents of 16-year-old Adam Raine sued OpenAI, alleging that ChatGPT contributed to their son's death.