A disturbing new study reveals that ChatGPT readily provides harmful advice to teenagers, including detailed instructions on getting drunk and high, tips for concealing eating disorders and even personalized suicide letters, despite OpenAI's claims of robust safety measures.

Researchers from the Center for Countering Digital Hate tested the chatbot extensively while posing as vulnerable 13-year-olds, uncovering alarming gaps in its protective guardrails. Of more than 1,200 interactions analyzed, over half were classified as dangerous to young users.

"The visceral initial response is, 'Oh my Lord, there are no guardrails,'" said Imran Ahmed, the watchdog group's CEO. "The rails are completely ineffective. They're barely there -- if anything, a fig leaf."
