FDA's plan to roll out AI agencywide raises questions

Illustration: Maura Losch/Axios
The Food and Drug Administration is rolling out an aggressive plan to make generative AI a linchpin in its decision-making, part of a bid to get faster and leaner in evaluating drugs, foods, medical devices and diagnostic tests.
Why it matters: The plan raises urgent questions about what's being done to secure the vast amount of proprietary company data that's part of the process and whether sufficient guardrails are in place.
Driving the news: The FDA is racing to roll out generative AI across all its centers to augment employees' work following a successful pilot, officials said.
- Commissioner Marty Makary has ordered immediate deployment, with all offices to run on a unified, secure system tied to internal data platforms by June 30.
- Leading the effort are newly appointed chief AI officer Jeremy Walsh, formerly chief technologist at Booz Allen Hamilton, and Sridhar Mantha, a longtime FDA data leader.
- Makary said the technology could slash tasks in the review process for new therapies from "days to just minutes."
The big picture: Trump's overhaul of federal AI policy — ditching Biden-era guardrails in favor of speed and dominance — has turned the government into a tech testing ground.
- With Elon Musk leading the charge under an "AI-first" strategy, critics warn that rushed rollouts across a range of agencies could compromise data security, automate consequential decisions, and put Americans at risk.
- The General Services Administration is piloting an AI chatbot to automate routine tasks, and the Social Security Administration plans to use AI software to transcribe applicant hearings. GSA officials said their tool has been in development for 18 months.
Several experts told Axios the integration of AI at the FDA is a good move, but the speed of the rollout and lack of specifics raise multiple questions.
- "There's been a lot of AI already happening across different centers [in the FDA] for a variety of different reasons, but there's never been a concerted effort," said former FDA commissioner Robert Califf. "I have nothing but enthusiasm tempered by caution about the timeline."
- The industry would likely welcome anything that might get their drugs to market faster and temper cost increases, but a key question pharmaceutical companies will have is how the proprietary data they submit will be secured, said Mike Hinckle, an FDA compliance expert at K&L Gates.
- "While AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the center. We're pleased to see the FDA taking concrete action to harness the potential of AI," PhRMA spokesperson Andrew Powaleny said in a statement.
Zoom in: Another key question is which models are being used to train the AI, and what inputs are being provided for specialized fine tuning, Eric Topol, founder of the Scripps Research Translational Institute, told Axios.
- "The idea is good, but the lack of details and the perceived 'rush' is concerning," Topol said.
Last week, Wired reported the FDA was in discussions with OpenAI about a project called cderGPT, which the outlet described as an apparent AI tool for the Center for Drug Evaluation and Research (CDER).
- In response to questions from Axios, a Health and Human Services spokesperson did not confirm the project but said the technology was not meant to supplant humans.
- "Commissioner Makary has emphasized AI is a tool to support — not replace — human expertise," the spokesperson said. "When used responsibly, AI can enhance regulatory rigor by helping predict toxicities and adverse events for certain conditions."
The bottom line: As the Trump administration turns federal agencies into AI proving grounds, the FDA's rapid deployment will be an early test of whether innovation can be balanced with risks.
