OpenAI’s latest report on malicious AI use underscores the tightrope that AI companies are walking between preventing misuse of their chatbots and reassuring users that their privacy is respected.

The report, released today, highlights several cases where OpenAI investigated and disrupted harmful activity involving its models, focusing on scams, cyberattacks, and government-linked influence campaigns. It arrives, however, amid growing scrutiny of another type of AI risk: the potential psychological harms of chatbots. This year alone has seen several reports of users committing acts of self-harm, suicide, and murder after interacting with AI models. The new report, along with previous company disclosures, offers some additional insight into how OpenAI moderates chats.