Anthropic, the company behind the popular AI model Claude, said in a new Threat Intelligence report that it disrupted a "vibe hacking" extortion scheme. In the report, the company details how the attack was carried out and how it allowed hackers to scale up a mass attack against 17 targets, including entities in government, healthcare, emergency services and religious organizations.
(You can read the full report in this PDF file.)
Anthropic says that its Claude AI technology was used as both a "technical consultant and active operator, enabling attacks that would be more difficult and time-consuming for individual actors to execute manually." Claude was used to "automate reconnaissance, credential harvesting, and network penetration at scale," the report said.
Making the findings more dist