Axios Future of Health Care

March 13, 2026
Good morning. We're picking up where we left off last week with how AI is changing health care.
- Last week we told you how patients are using it. This week, we'll look at the actual doctors.
- So many of you sent in thoughtful and interesting feedback — keep it coming!
Today's newsletter is 1,213 words, a 4.5-minute read.
1 big thing: How doctors are using AI
Doctors are increasingly using generative AI in clinical care, and enthusiasm is building for how the technology could solve deeply entrenched problems.
Why it matters: There's a wide spectrum of applications — some already widely embraced by the medical establishment, others viewed more skeptically as not ready for primetime or even dangerous.
The intrigue: I talked to several of the same people as I did for last week's story about patients using AI. Those who sounded bullish last week were even more confident about AI in the hands of licensed professionals.
- "For a physician, I actually think these tools are fantastic because they can help augment what you already know and push you on things, but they don't replace you — yet," said Ashish Jha, the Biden administration's COVID response coordinator and former dean of the Brown University School of Public Health.
Driving the news: Only 19% of physicians aren't using AI professionally, according to a new survey the American Medical Association released yesterday — meaning more than four in five are.
- The most common use was to summarize medical research and standards of care, with 70% of respondents saying they're already using AI that way or planning to in the next year.
- Other top uses include creating discharge instructions and care plans, summarizing patient charts, and documenting billing codes, notes from patient conversations and more.
Between the lines: Today's uses are primarily about saving doctors time with research and administrative tasks. And that is meaningful amid widespread physician burnout and provider shortages.
- Using the tools to diagnose a condition, create a treatment plan or otherwise direct patient care is promising, but much riskier.
- AI tools flooding onto the health care market aren't all created equal when it comes to reliability and trustworthiness.
- "There are a number of organizations out there that are doing things in an untrustworthy manner. There are also a ton of organizations out there that are doing a fantastic job," said Shawn Griffin, president and CEO of URAC, which accredits providers and recently launched an AI program.
- "The problem is you can't tell everybody who is doing it right," he added.
Some uses are being framed as downright dangerous, especially in diagnostics.
- "Navigating the AI diagnostic dilemma" was named the number one concern of 2026 by the patient safety organization ECRI and the Institute for Safe Medication Practices.
- "AI systems are only as good as the algorithms they use and the data on which they are trained, and the potential for errors remains a significant concern," the report warns.
What we're watching: The less-controversial uses, like AI for administrative tasks, will likely build a case for more disruptive applications down the road.
- "They create a level of trust that is necessary as you move on to more ambitious uses, which will almost inevitably be riskier uses," said Bob Wachter, chair of the Department of Medicine at UCSF and the author of "A Giant Leap: How AI is Transforming Healthcare and What That Means for Our Future."
- "It's hard to change physician behavior. They're used to doing things a certain way," AMA CEO John Whyte said.
The bottom line: "I think what we'll be talking about in terms of AI tools a year from now, two years from now, will be different from what we were talking about end of last year, [or] early this year," Whyte said.
2. The AI hope for diagnostics
I reported this newsletter keeping in mind my own frustrating diagnostic story.
- I told you last week about what happened with my mom last summer.
- One big question I've had is whether AI will one day help patients like her receive a diagnosis sooner, especially when early intervention can be the difference between life and death.
- The answer, at least according to one expert, is that while AI can increase the scope and depth of a physician's expertise, it doesn't resolve some fundamental dilemmas.
Catch up quick: My mom died six weeks after she was first admitted to the hospital for a series of tests, following several months of vague GI and cardiac symptoms. Her care team couldn't figure out what was wrong, and her symptoms rapidly progressed.
- We didn't get an official diagnosis until the day after she died. It turns out she had AL amyloidosis, a rare disorder that can affect many different organs and cause symptoms resembling less serious illnesses.
- Those included arrhythmia, gastrointestinal issues that she initially managed with a blander diet, and general fatigue.
- Not only was amyloidosis pretty far down the list of possibilities, but doctors also weren't connecting the dots between her symptoms in different organ systems.
So she had two big, systemic issues working against her when it came to getting a diagnosis:
- Most doctors have little — if any — experience with amyloidosis. Suspecting a rare disease also cuts against the medical adage "when you hear hoofbeats, think horses, not zebras."
- Doctors are either generalists with superficial knowledge about a lot of things or specialists with deep knowledge about a few things.
I spoke with Wachter at length about whether AI could combine the best of both in a scenario like the one we were in. Here are some takeaways:
1. It can help when you don't have access to a specialist (or even just a good generalist).
- My mom eventually saw an amyloidosis specialist, but most patients aren't going to be so fortunate when they show up at their local emergency department.
- "One of the advantages of AI is that it could be a better doctor when you don't have access to one," Wachter said.
- "I don't think AI is going to be smarter than a smart doctor, but ... not every doctor is above average," he added.
2. AI can help bridge the gap between specialists and generalists.
- "It's one of the magical things in generative AI. It has the generalism of the primary care doctor, meaning it's not stuck on one thing, and has the ability to think broadly, and has the knowledge of a specialist," Wachter said.
- Generative AI offers a "generalist mindset, subspecialty knowledge and an unlimited amount of time. And that's almost impossible to find in the American medical system. And that's probably what she needed."
3. This isn't as simple as plugging in some symptoms and having ChatGPT or whatever LLM you're using spit out an answer.
- The act of knowing which information is relevant to input requires expertise in and of itself, Wachter said.
- If I had simply told an AI tool that my mom had a heart arrhythmia and some GI issues early on, it would have generated a very long list, including conditions causing both symptoms as well as unrelated conditions that happened to be occurring at the same time.
- "It doesn't matter how smart the AI is, it's just a challenge of diagnosis," Wachter told me.
The bottom line: When it comes to complicated cases, AI offers promise, but no slam dunk.
Thanks to Adriel Bettelheim and David Nather for editing and Matt Piper for copy editing.
Sign up for Axios Future of Health Care