Artificial intelligence is now scheming, sabotaging and blackmailing the humans who built it — and the bad behavior will only get worse, experts warned.

Despite being classified as a top-tier safety risk, Anthropic's most powerful model, Claude Opus 4, is already live, with added safety measures, on Amazon Bedrock, Google Cloud's Vertex AI and Anthropic's own paid plans, where it's being marketed as the "world's best coding model."

Claude Opus 4, released in May, is the only model so far to earn Anthropic's AI Safety Level 3 (ASL-3) classification, the company's most serious safety label. The precautionary designation means locked-down safeguards, limited use cases and red-team testing before wider deployment.

But Claude is already making disturbing choices.
