WASHINGTON — Parents of four teens whose AI chatbots encouraged them to kill themselves urged Congress to crack down on the unregulated technology Tuesday as they shared heart-wrenching stories of their teens' tech-charged mental health spirals.

Speaking before a Senate Judiciary subcommittee, the parents described how apps such as Character.AI and ChatGPT had groomed and manipulated their children — and called on lawmakers to develop standards for the AI industry, including age verification requirements and safety testing before release.

A grieving Texas mother shared for the first time publicly the tragic story of how her 15-year-old son spiraled after downloading Character.AI, an app marketed as safe for children 12 and older.

Within months, she said, her teenager exhibited
