DETROIT – AI chatbots may be developing their own “survival drive” by refusing commands to shut themselves down, an AI safety company has claimed.
The research, conducted by scientists at Palisade Research, assigned tasks to popular artificial intelligence (AI) models before instructing them to shut themselves off, Live Science reported.
But, as a study published Sept. 13 on the arXiv pre-print server detailed, some of these models — including Google’s Gemini 2.5, OpenAI’s o3 and GPT-5, and xAI’s Grok 4 — not only resisted the shutdown command, but found ways to outright sabotage it.
Some analysts criticized the study, arguing that the apparent disobedience likely stemmed from poor training incentives and ambiguous instructions, rather than being evidence of an emerging survival drive.
