Generative AI models are far from perfect, but that hasn't stopped businesses and even governments from giving these robots important tasks. But what happens when AI goes bad? Researchers at Google DeepMind spend a lot of time thinking about how generative AI systems can become threats, detailing it all in the company's Frontier Safety Framework. DeepMind recently released version 3.0 of the framework to explore more ways AI could go off the rails, including the possibility that models could ignore user attempts to shut them down.

DeepMind's safety framework is based on so-called "critical capability levels" (CCLs). These are essentially risk assessment rubrics that aim to measure an AI model's capabilities and define the point at which its behavior becomes dangerous in areas like cybersecurity.