DOGE's "AI-first" strategy courts disaster

Illustration: Lindsey Bailey/Axios
A rush to use artificial intelligence to root out government waste, as apparently planned by Elon Musk's DOGE operation, is likely to trigger chaotic outcomes and surprise disasters, AI experts tell Axios.
Why it matters: AI can help cut costs — but careless deployment risks harming people who need the government's help, amplifying inefficiencies, opening security holes and automating flawed decision-making.
Driving the news: Musk's allies at the so-called Department of Government Efficiency and within other government units are reportedly pursuing an "AI-first" strategy to integrate systems across federal agencies, assess contracts and recommend cuts, per the New York Times and other outlets.
What they're saying: No one at DOGE has publicly discussed using AI to replace the government employees let go in recent mass layoffs, and the White House did not comment.
- The General Services Administration is working on a custom AI chatbot designed to boost productivity and analyze contract and procurement data, per Wired.
Catch up quick: Silicon Valley talent has been descending on Washington for nearly two decades in efforts to modernize government tech.
- DOGE itself has taken over the main organization left by one such previous effort, the Obama-era U.S. Digital Service.
The AI industry is eager to demonstrate the value of its products to government.
- OpenAI launched ChatGPT Gov in January.
- Google spokesperson Jose Castaneda pointed Axios to successful state projects that have used the company's AI to save money and speed up unemployment claims.
- Microsoft foresees a vast market for its AI in government.
- Perplexity gives its pro version free for a year to anyone with a .gov email address.
- And Anthropic has been working with Palantir and Amazon Web Services to help U.S. intelligence and defense agencies more efficiently process data.
Between the lines: It's likely that DOGE is trying to create a tool that lets you feed in government documents and ask where to trim spending, says Meredith Broussard, NYU professor and author of "Artificial Unintelligence: How Computers Misunderstand the World."
- "And then the machine will give an answer because the machine always gives an answer," Broussard tells Axios. "But that answer is not necessarily correct."
Threat level: "Everyone is excited about the use of AI, and over time it will undoubtedly be used in government," Donald Moynihan, public policy professor at the University of Michigan and co-director of the Better Government Lab, tells Axios.
- "But broad-scale rollouts without extensive testing are a recipe for disaster," he adds.
- Moynihan says he's seen too many cases where algorithms fueled systemic errors and discrimination because AI's limitations weren't properly understood beforehand.
- "At this point, I have little trust that DOGE will engage in the sort of careful development and testing of AI," Moynihan says.
All of the experts Axios spoke to had privacy and security concerns about unleashing AI on government documents, but some also said AI just isn't up to the task.
- If technology were capable of so quickly optimizing government, Broussard argues, then someone would have already done it.
- "I am not optimistic that any small team of people, no matter how talented, can go in and unravel or streamline and make sense of the big ball of mud that is government technology in a short period of time," Broussard says.
Zoom in: While AI can quickly analyze large amounts of data and identify patterns, it may arrive at wrong or even nonsensical conclusions.
- AI is known for inaccuracies — errors the industry has come to call "hallucinations" or "confabulations."
- Unless the technology's conclusions are carefully double-checked, they can lead to costly mistakes.
- "You can just imagine all the ways that that could go wrong," Broussard says.
Mike Lu, founder of Triller and Turrem, imagined a scenario in which you ask AI to "optimize hockey."
- "AI would come back and tell you, 'Oh, you should make hockey boxing on ice,'" Lu told Axios.
- "The AI would see the highest level of engagement when the teams take off their gloves and start fighting," he said.
The other side: Used right, AI could help an effort to "streamline government," Dmitry Shevelenko, chief business officer at Perplexity, tells Axios.
- He says that "lazy" or unsophisticated prompting could prove ineffective, but "if you describe exactly what you're looking for, it can be very efficient."
Decisions to freeze funding for government programs shouldn't be based only on guidance from an AI tool, Shevelenko adds, and "autonomous AI" shouldn't be making spending decisions.
- "It's all about doing 80% of that initial work faster, where you get your target list, and then you still need humans very much to review it and check for accuracy," he says.
