A new independent assessment of AI safety practices across the industry’s biggest players is raising alarms about how far behind companies remain as their models rapidly advance.
The Winter 2025 AI Safety Index, released Wednesday by the Future of Life Institute, evaluated the safety protocols of eight major AI developers, including the makers of ChatGPT, Gemini, and Claude, and concluded that many firms “lack the concrete safeguards, independent oversight and credible long-term risk-management strategies that such powerful systems demand.”
The analysis examined dozens of indicators across six domains, covering everything from companies’ risk assessments and model transparency to whistleblower protections and existential-risk planning. It highlights existential-risk planning specifically.