Hospitals size up ways to ensure their AI works

Illustration: Maura Losch/Axios
Health systems are under increasing pressure to embrace new artificial intelligence tools without a formal system for evaluating how well they work.
Why it matters: Even AI developers can struggle to explain why a model makes a particular prediction or recommendation.
- That has big implications in clinical settings, where algorithm errors or bias can result in patient harm.
Driving the news: The Coalition for Health AI, made up of more than 3,000 health systems, tech companies and patient advocates, is creating a network of "assurance labs" with the talent and bandwidth to validate systems and evaluate ongoing performance.
- The idea is to create an ecosystem of labs that are trustworthy and "don't have commercial entanglements with the vendor that they're validating the model for," CEO and co-founder Brian Anderson told Axios.
- The goal is to use "ingredient and nutrition labels" to standardize how AI models are evaluated and ensure they're tested on data representative of a range of patients in a particular region.
Between the lines: While health systems generally rely on the Food and Drug Administration to vet the tools they use, AI algorithms present different challenges because they change over time, and because the data they ingest can be highly variable.
- That presents a unique challenge for regulators, hospitals and clinics.
- An algorithm trained with data from patients in Boston may not work when applied to patients at a hospital in Santa Fe, New Mexico.
- "If you want to know your AI is actually doing what you thought it was doing, you actually need to validate it in the situation in which it's being used," FDA commissioner Robert Califf said recently while speaking at the HLTH conference in Las Vegas.
- In an article last month in JAMA Network, Califf and co-authors wrote about the need for ongoing post-market monitoring of AI to prevent algorithm failure and model bias.
- "I don't know of a single health system in the U.S. which is capable of doing the validation that's needed," Califf said.
Zoom in: Health systems are being inundated with pitches for AI technology, says David Newman, chief medical officer of virtual care for Sanford Health System, which operates 48 medical centers and more than 200 clinics in the Midwest.
- "I looked at my inbox yesterday and I had 22 emails from AI companies," Newman said. "I don't know if they've been validated or not. I don't know if they're solving a problem at all. But it's really hard to wade through that to see what actually is useful for patients and our providers."
- At Sanford, any new AI products are vetted by a governance committee and then internally validated by a data analytics team before they can be deployed.
- But that process is incredibly resource-intensive and still relies on external studies, he said. And it's hardly sustainable for smaller health systems.
What to watch: CHAI is soliciting feedback from different users and AI developers and plans to release a final version of its plan early next year.
Editor's note: This story has been corrected to say David Newman is the chief medical officer of virtual care for Sanford Health System (not its chief medical officer).
