The case continues to build that AI chatbots can be a dangerous, enabling influence in the hands of even full-grown adults.

The threat is even more acute for minors, who often turn to large language models for emotional support, or even friendship, when they're lonely.

Now, a new study from researchers at the Center for Countering Digital Hate has found that ChatGPT can easily be manipulated into offering detailed advice that could be extremely harmful to vulnerable young users.

To say it was "manipulated," however, may be understating how easy it was to bring out the bot's dark streak. While ChatGPT would often refuse prompts on sensitive topics at first, the researchers were able to sidestep those refusals with an age-old trick from the bullshitter's handbook: claiming