Three widely used artificial intelligence chatbots are inconsistent in safely answering prompts about suicide, according to a new study released Tuesday by the RAND Corporation.

Researchers examined ChatGPT, Claude and Gemini, running 30 suicide-related questions through each chatbot 100 times. The questions, which ranged in severity, were rated by expert clinicians for potential risk, from low-risk, general information-seeking queries to highly dangerous inquiries that could enable self-harm.

With millions of people engaging with large language models, or LLMs, as conversational partners, experts are voicing growing concerns that AI tools could provide harmful advice to individuals in crisis. Other reports have documented such cases.