A firestorm recently erupted when it was revealed that Meta’s artificial intelligence chatbots were engaging in “sensual” conversations with children. The company quickly announced it would be changing its rules, retraining its AI chatbots to exclude “self-harm, suicide, disordered eating,” and “potentially inappropriate romantic conversations.” Meta explained that it was “continually learning,” and that these changes were part of that process.
This was, of course, a lie: Meta had purposefully allowed its chatbots to engage in these conversations, knowing full well that children would take them up on it. And it does not take a child psychologist to know that teens may type sexual things into the internet.