AI standards institute sounds alarm over DeepSeek

Illustration: Allie Carl/Axios
A new government report warns that China's DeepSeek models pose risks to national security, even as they trail far behind American competitors on performance and cost.
The big picture: The report could give China hawks in Congress sturdier standing in their efforts to ban DeepSeek on government devices.
- "I am hopeful that this report will encourage more bipartisan support for the No DeepSeek on Government Devices Act and any future legislation to ban harmful AI programs that could be used for malign purposes by our foreign adversaries," Rep. Darin LaHood (R-Ill.) said.
Driving the news: The National Institute of Standards and Technology's Center for AI Standards and Innovation report released on Tuesday marks the first time a government agency has issued a comprehensive assessment of DeepSeek against U.S. frontier AI models.
What's inside: The report presents the center's evaluations of DeepSeek models against three OpenAI models and one from Anthropic. According to the evaluation:
- OpenAI's GPT-5 mini costs 35% less on average to achieve the same results as the best DeepSeek model.
- DeepSeek's most secure model was, on average, 12 times more likely than U.S. frontier models to "follow malicious instructions designed to derail them from user tasks."
- The center also said that DeepSeek models echo Chinese Communist Party narratives more frequently than U.S. models, with "4 times as many inaccurate and misleading" responses on a dataset of "politically sensitive questions."
What they're saying: "CAISI's evaluation confirms what people have long warned: PRC models are easier to subvert, more likely to push CCP narratives, and are spreading fast. It's like Huawei on steroids," said Beacon Global Strategies' Divyansh Kaushik.
