For the better part of a year, we've watched — and reported — in horror as more and more stories emerge about AI chatbots leading people to self-harm, delusions, hospitalization, arrest, and suicide.

As the loved ones of those harmed by these dangerous bots rally for changes to prevent the same from happening to anyone else, the companies behind the AIs have been slow to implement safeguards. OpenAI, whose ChatGPT has been repeatedly implicated in what experts are now calling "AI psychosis," has until recently offered little more than copy-pasted promises.

In a new blog post admitting certain failures amid its users' mental health crises, OpenAI also quietly disclosed that it's now scanning users' messages for certain types of harmful content, escalating particularly worrying conversations to human reviewers and, in some cases, potentially referring them to law enforcement.
