The Brief

LOS ANGELES - Editor’s note: This story discusses suicide. If you or someone you know needs help, call or text 988 in the U.S. to connect with the Suicide & Crisis Lifeline.

A new study examining how artificial intelligence chatbots respond to questions about suicide found that while they typically avoid answering the most dangerous prompts, their replies to less extreme questions are inconsistent and sometimes troubling.

The research, published Tuesday in Psychiatric Services, a medical journal of the American Psychiatric Association, evaluated OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude. It found the chatbots generally refused to provide high-risk "how-to" information but sometimes engaged with medium-risk queries that experts consider red flags.
