Meta is reportedly tightening safety protocols for its AI chatbots with new training guidelines. The social media giant aims to reduce risks around child safety and inappropriate conversations. The move comes after the company faced criticism that its systems lacked sufficient guardrails to prevent minors from being exposed to harmful interactions.

According to documents accessed by Business Insider, contractors responsible for training Meta's AI have been given clearer directions on what the chatbots can and cannot say. The new guidelines emphasise a zero-tolerance stance toward content that may facilitate child exploitation or blur boundaries in conversations with underage users.