(Reuters) - Anthropic said on Wednesday it had detected and blocked hackers attempting to misuse its Claude AI system to write phishing emails, create malicious code and circumvent safety filters.

The company’s findings, published in a report, highlight growing concerns that AI tools are increasingly exploited in cybercrime, intensifying calls for tech firms and regulators to strengthen safeguards as the technology spreads.

Anthropic’s report said its internal systems had stopped the attacks and it was sharing the case studies – showing how attackers had attempted to use Claude to produce harmful content – to help others understand the risks.

The report cited attempts to use Claude to draft tailored phishing emails, write or fix snippets of malicious code and sidestep safeguards through …