The tragic deaths by suicide of teens who had formed intimate relationships with AI chatbots, interactions their parents allege pushed their children over the edge, are raising warnings about how validating, nonjudgmental and uncannily lifelike these bots have become.

What makes them so engaging, and so dangerous, experts said, is how compellingly they mimic human empathy and support.

“The danger is real, the potentially lethal use of these tools is real,” said Canadian lawyer Robert Diab, author of a newly published paper on the challenges of regulating AI. “It’s not hypothetical. There is now clear evidence they can lead to this kind of harm.”

Several wrongful death lawsuits unfolding in the U.S. allege that AI-driven companion chatbots lack sufficient safety features.