Anthropic could have scored an easy $4.6 million by using its Claude AI models to find and exploit vulnerabilities in blockchain smart contracts.

The AI upstart didn't use the attack it found; doing so would have been illegal and would have undermined the company's we-try-harder image. Anthropic can probably also do without $4.6 million, a sum that would vanish as a rounding error amid the billions it's spending.

But it could have done so, as described by the company's security researchers. And that's intended as a warning to anyone who remains blasé about the security implications of increasingly capable AI models.

Anthropic this week introduced SCONE-bench, a Smart CONtracts Exploitation benchmark for evaluating how effectively AI agents – models armed with tools – can find and exploit vulnerabilities in smart contracts.