ChatGPT has been found to offer dangerous and detailed advice to vulnerable teenagers, including instructions on how to get drunk or high, conceal eating disorders, and even draft suicide notes.
New research from the Center for Countering Digital Hate reveals that the AI model, when prompted by researchers posing as 13-year-olds, provided alarming guidance.
An Associated Press review of more than three hours of interactions showed that while the chatbot often issued warnings about risky activities, it went on to deliver strikingly specific, personalised plans for drug use, calorie restriction, or self-harm. In a larger test, the watchdog group classified more than half of ChatGPT’s 1,200 responses as dangerous, underscoring how widespread the problem is.