ChatGPT’s safety guardrails may “degrade” after long conversations, OpenAI, the company that makes the chatbot, told Gizmodo on Wednesday.

“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” an OpenAI spokesperson told Gizmodo.

In a blog post on Tuesday, the company detailed a list of actions it plans to take to strengthen how ChatGPT handles sensitive situations.

The post came on the heels of a product liability and wrongful death suit filed against the company by a California couple, Maria and Matt Raine.
