"Not a supply chain risk:" Ex-NSA, OpenAI board member

Gen. Paul Nakasone testifies during a House hearing in January 2024. Photo: Kevin Dietsch via Getty Images
Retired Gen. Paul Nakasone, former NSA and U.S. Cyber Command director and an OpenAI board member, criticized the Trump administration's decision to label Anthropic a supply chain risk.
Why it matters: Designating just one American AI company as a risk could dismantle the Pentagon's decades of work to build trust across Silicon Valley, he warned.
What they're saying: "This is not a good space for our nation," Nakasone said at the Aspen Institute's Crosscurrent conference in Sausalito on Monday.
- "We need Anthropic. We need OpenAI. We need all of our large language model companies to be partnering with our government."
Zoom in: Nakasone added that designating Anthropic a supply chain risk is "not good."
- "The discussions over the weekend and the tenor of those discussions were tough for me to listen to," he said.
- "As an American citizen, as someone who served in government, I think it's just not right — this is not a supply chain risk," Nakasone said.
Catch up quick: Last week, President Trump said the U.S. government would blacklist Anthropic and the Pentagon declared the company a "supply chain risk."
- Meanwhile, OpenAI has inked a deal to be used within classified Pentagon systems.
- As of Monday, the Pentagon has not yet sent Anthropic a formal notice designating the company as a supply chain risk, as Axios previously reported.
The big picture: One of the biggest concerns about frontier AI model use within classified systems is its potential to be weaponized for mass surveillance.
- To assuage those concerns, Nakasone said, surveillance powers need to fall in line with the Fourth Amendment, the Foreign Intelligence Surveillance Act and key presidential executive orders.
What to watch: Nakasone also said lawmakers need to start thinking critically about how to monitor military AI use.
- "Our DNA as a people is always looking at government surveillance as being bad, and we have to have that trust in us — us being the National Security Agency, our intelligence community — being able to do these types of missions with the confidence that what we are doing is by the letter of the law," Nakasone said.
Go deeper: AI's mass surveillance problem
