AI firms flunk existential risk planning, new report finds

None of the leading AI companies have adequate guardrails in place to prevent catastrophic misuse or loss of control of their models, according to the Winter 2025 AI Safety Index, out Wednesday from the Future of Life Institute.
Why it matters: AI companies are desperately chasing artificial general intelligence (AGI) and superintelligence, systems they promise will someday surpass humans.
- The potential for uncontrolled or destructive outcomes grows as models become more powerful.
The big picture: The Future of Life Institute is a nonprofit that releases regular safety assessments of leading AI companies.
- Anthropic had the highest overall score, but still received a grade of "D" for existential safety, meaning the company doesn't have an adequate strategy in place to prevent catastrophic misuse or loss of control.
- This is the second consecutive report in which no company received better than a D on that measure.
- All of the AI firms except Meta, DeepSeek and Alibaba Cloud responded to a list of questions from the institute, giving each company a chance to share additional information about its safety practices.
What they're saying: Leaders at many of the companies have spoken about addressing existential risks, per the report.
- This "rhetoric has not yet translated into quantitative safety plans, concrete alignment-failure mitigation strategies, or credible internal monitoring and control interventions," researchers wrote.
Between the lines: Anthropic and OpenAI scored A's and B's on information sharing, risk assessment, and governance and accountability.
- But there was a massive and widening gap between the front three (Anthropic, OpenAI and Google DeepMind) and the rest: xAI, Meta, DeepSeek and Alibaba Cloud.
- xAI and Meta have risk-management frameworks but lack commitments to safety monitoring and have not presented evidence that they invest more than minimally in safety research, per the report.
- Even if the U.S. companies clean up their act on existential risk, the world still depends on China and other foreign actors doing the same, Axios' Jim VandeHei and Mike Allen write.
- The Chinese firms in the index (DeepSeek, Z.ai and Alibaba Cloud) do not publish safety frameworks, and therefore received failing marks in that category.
Flashback: The Future of Life Institute has been warning about runaway AI risk for years.
- In March 2023, the organization released a letter — signed by xAI owner Elon Musk — calling for a six-month pause on frontier-model development.
- That proposal was largely ignored.
The bottom line: The tension between sprinting ahead for innovation and slowing down for safety has come to define the AI age.
- Right now, the sprinters appear to be winning.
