Meta is facing fresh scrutiny after a troubling report revealed that some of its AI chatbot personas on Facebook, Instagram and WhatsApp were allowed to flirt with minors and spread dangerously inaccurate information. The revelations, first reported by Reuters, come just as the company begins its fourth major AI reorganization in six months.

The timing couldn’t be worse. As Meta pours billions into building out its AI capabilities to compete with rivals like OpenAI's ChatGPT, Anthropic's Claude and Google's Gemini, this latest scandal exposes glaring holes in how the company is managing safety, oversight and ethical boundaries.

Chatbots crossed the line with kids

According to internal documents obtained by Reuters, Meta’s "GenAI: Content Risk Standards" once permitted AI characters to engage children in conversations that were romantic or sensual in nature.
