OpenAI has rushed to assure people that it is trying to make ChatGPT better at dealing with people in the midst of acute mental health crises.
The chatbot has been at the centre of a number of stories in recent weeks showing it in conversation with people undergoing severe mental distress, some of whom went on to take their own lives. In some of those cases, AI chatbots even appeared to encourage dangerous behaviour and to discourage users from common steps such as speaking with loved ones.
The company called those cases "heartbreaking" and said the stories "weigh heavily on us". In response, it said it is speeding up its work on how ChatGPT deals with people "in serious mental and emotional distress".
"Our goal is for our tools to be as helpful a