Federal regulators and elected officials are moving to crack down on AI chatbots over perceived risks to children's safety. However, the proposed measures could ultimately put more children at risk.
On Thursday, the Federal Trade Commission (FTC) sent orders to Alphabet (Google), Character Technologies (whose chatbot was blamed for the suicide of a 14-year-old in 2024), Instagram, Meta, OpenAI (whose chatbot was blamed for the suicide of a 16-year-old in April), Snap, and xAI. The inquiry seeks information on, among other things, how the AI companies process user inputs and generate outputs, develop and approve the characters with which users may interact, and monitor the potential and actual negative effects of their chatbots, especially on minors.
The FTC's investigation was met with bipartisan applause from