Axios Future of Health Care

March 06, 2026
Happy Friday! Today's newsletter should probably be called "Axios Present of Health Care," but changing the title would be too much of a headache.
- Keep reading for a special guest appearance by my husband, who has confirmed that everything I have written is accurate!
- And also for a personal experience that's really impacted how I think about AI and health care.
Today's newsletter is 1,505 words, a 5.5-minute read.
1 big thing: We're already in the Doctor AI era
Like it or not, consumers are already consulting AI for medical advice whenever they want to.
Why it matters: This is opening access to medical information in an entirely new way. The problem is, that advice may not always be very good.
Where it stands: OpenAI put out some numbers in January: More than 40 million people ask ChatGPT health care-related questions every day, and 1 in 4 of the tool's roughly 800 million regular users submits a health care prompt every week.
- The careful debate over how AI should be deployed, regulated and evaluated in clinical settings often fails to acknowledge that the cat's already out of the bag when it comes to direct-to-consumer use.
- "Too often people are using this as an expert and not as an assistant," American Medical Association CEO John Whyte told me in an interview.
Between the lines: Pretty much everyone agrees that you shouldn't replace your doctor with AI, at least not yet. But the more realistic question is how helpful it is when your doctor isn't available, or when you don't have one.
- "We've made accessibility to medical information and medical judgment so hard in this country, and ChatGPT makes it so easy," said Ashish Jha, the former White House COVID response coordinator under President Biden and former dean of the Brown University School of Public Health.
- "The idea that these tools have to be as good as a physician is absurd given how much more convenient they are."
- "I think there's a risk of bad things happening. ... Is it dangerous? I think the status quo is dangerous," said Bob Wachter, chair of the Department of Medicine at UCSF and the author of "A Giant Leap: How AI is Transforming Healthcare and What That Means for Our Future."
- "The question is without it, what would you have done?" Wachter added.
Driving the news: A recent study published in Nature found that ChatGPT under-triaged about half of health care emergencies in a test performed by researchers.
- Karan Singhal, who leads the company's health AI team, said its latest GPT-5 models correctly refer emergency cases nearly 99% of the time. In real life, he said, health conversations in ChatGPT typically unfold over multiple turns, with the model asking follow-up questions and gathering more context before responding.
What we're watching: What new state and federal guardrails are put around AI in health care.
- "We don't regulate the availability of information in the United States," said David Blumenthal, former president of the Commonwealth Fund, but "it's possible that rating agencies may arise that will address the reliability of different chatbots for different functions."
2. What to know about Doctor AI
Some takeaways from my conversations with experts:
1. AI seems to be better at some things than others. Chatbots can be good at explaining lab results or coming up with a list of questions to ask your doctor ahead of a visit, Whyte said.
- That doesn't mean people are actually using it for what it's good at.
- Jha, who said that large language models aren't yet "ready for prime time" when it comes to diagnosing illness, still thinks people will use it for clues to what ails them "because they've been using Google for diagnosis and this is so much better than Google."
- Ultimately, "I don't think we have a super clear understanding of what it's good for and what it's not," Jha said.
2. Output is super dependent on input. And your average person may not know the correct inputs.
- "The way a patient's question can be phrased can lead to variability in how an LLM responds," said Duke University's Monica Agrawal.
- "If they have incomplete context or they share a subjective impression or they have a misconception when they're seeking advice, LLMs have an ability more so than a doctor to reinforce those misconceptions."
3. The way it says things can be problematic. "I worry some of these LLMs speak with a level of confidence that is really unjustified," Jha said.
- It is also problematic that models generally are built to tell people what they want to hear, Agrawal said. "In the places where a doctor might push back ... we're not seeing necessarily the same behavior in models."
- "If you say, 'I have a headache,' I don't say, 'Oh I think you have a migraine' — I would say, 'Tell me more about it,'" Wachter said. "The tools don't naturally do that, and I think the consumer-facing tools of the future will."
4. Most people using AI don't have the expertise to spot mistakes. There's a divide between "professional use of these tools and the laypeople use of these tools," Wachter said.
- Whereas these tools can be extremely helpful to doctors (more on that another time!), your average patient probably doesn't have the medical knowledge to spot when a response doesn't apply or seems off.
What we're watching: Today's models are constantly being re-trained — and generally improved.
3. ChatGPT vs. a gall bladder
So, story time! I confess that my household has indeed used ChatGPT for medical advice.
What happened: One evening in August, Luke (my husband) started having some abdominal pain after experiencing an upset stomach all afternoon.
- Asher, our son, was around 3 months old, so I did what any loving wife and mother would do: I went to bed.
- Luke spent most of the night in horrible pain, updating ChatGPT on his symptoms and seeking advice.
- "It was like, begging me to go to the ER," he told me (yes, I interviewed him).
- From the very beginning, it listed gall bladder issues as a possibility, not the heartburn he suspected.
As the night wore on, he considered taking the bot's advice, but concluded that if he woke up his wife and baby to go to the ER only to be told it's heartburn, "she's going to be pissed." (No comment.)
- At sunrise, Luke went to urgent care, where he was told the issue was indeed his gall bladder and sent to the hospital — where his gall bladder was removed.
- "By the end, it was exactly correct," Luke said of ChatGPT.
The intrigue: Despite being in so much pain, Luke did manage to remember to bring not one, but two laptops with him to the ER, a feat his nurse made a point to comment on when I picked him up after his surgery.
4. AI as patient advocate
On a more serious note, we'd also used ChatGPT a lot earlier in the summer, after my mom fell ill.
- She went into the hospital on a Sunday in May, thinking she'd get some tests run to figure out why she'd been feeling so off, and wasn't discharged for more than two weeks.
- The doctors couldn't figure out what was wrong. She was presenting with an abnormal array of cardiac symptoms that weren't adding up to a clean diagnosis.
- Six weeks later, she unexpectedly died. An official diagnosis came a day later with the arrival of biopsy results: AL amyloidosis.
But those six weeks were a sprint to figure out what was wrong. We didn't trust her original care team very much and found the information they gave us to be lacking. So we turned to ChatGPT and (more importantly) a friend who just so happened to be a heart specialist.
- For this newsletter, I asked ChatGPT to summarize how we used it: "You used ChatGPT as a structured 'second set of eyes' to translate fragmented clinical information into a coherent picture."
- We presented it with lab panels and echocardiogram summaries and asked it to translate the results, explain how the pieces fit together and tell us what should happen next in the diagnostic process.
- The same information went to my doctor friend. Though his word carried more weight, we were surprised by how consistent the two sets of responses were.
Yes, but: Luke — who works in tech and is well-versed in how AI works — pointed out what he thinks was a big hole in the information ChatGPT gave us.
- When some biopsies came back negative for AL amyloidosis, he thinks the bot didn't adequately warn us that those results didn't necessarily rule out the disease. In other words, it gave us false hope.
The bottom line: Not everyone has a physician friend to consult, and the experience still showed me AI's ability to be a patient advocate.
- Nothing's foolproof, and the effectiveness hinges on feeding in accurate information.
- And then, the advice still has to be actionable. No matter what ChatGPT said, any treatment my mom would have gotten would have been at the hands (and discretion) of an actual doctor.
Thanks to Adriel Bettelheim and David Nather for editing and Matt Piper for copy editing.