Illustration: Rebecca Zisser/Axios

Some scientists are calling on the Food and Drug Administration to establish standards for advanced algorithms, which are developing at a "staggering" pace, before they are put into medical devices to help predict patient outcomes.

What's new: Advanced algorithms are starting to be deployed in some devices to provide automated real-time predictions, but they present a whole new level of possibilities and challenges compared with older predictive tools. Standards are needed to check their safety and effectiveness before they are used in clinical settings, the scientists argue in a policy forum published Thursday in Science.

The FDA tells Axios it is working to develop a framework for handling advances in AI and medicine, as Commissioner Scott Gottlieb outlined last year. While unable to comment on this paper, a spokesperson says the FDA has used its current process for novel medical devices to authorize these AI algorithms:

  • Viz.ai for helping providers detect stroke in CT scans.
  • IDx-DR for detecting diabetic retinopathy.
  • OsteoDetect for detecting bone fractures.

Meanwhile, Ravi B. Parikh, co-author of the paper and a fellow at the University of Pennsylvania's School of Medicine, tells Axios that the FDA needs to set evaluation standards given the "staggering" pace of AI development. He adds:

"Five years ago, AI and predictive analytics had yet to make a meaningful impact in clinical practice. In just the past 2-3 years, premarket clearances have been granted for AI applications ranging from sepsis prediction to radiology interpretation."
"But if these tools are going to be used to determine patient care ... they should meet standards of clinical benefit just as the majority of our drugs and diagnostic tests do. We think being proactive in creating and formalizing these standards is essential to protecting patients and safely translating algorithms to clinical interventions."

Why it matters: Advanced algorithms present both opportunities and challenges, says Amol S. Navathe, co-author and assistant professor at Penn's School of Medicine. He tells Axios:

"The real opportunity is that these algorithms outperform clinicians in medical decisions, not a small feat. The challenge is that the data generated for algorithms is not randomly generated, rather, most of [what] the data algorithms 'see' is a result of a human decision. We have a ways to go in our scientific approaches to overcome this challenge and uniformly develop algorithms that can help improve upon human clinician decisions."

Details: The authors recommend the following standards...

  1. Meaningful endpoints for clinical benefit should be rigorously validated by the FDA, whether downstream outcomes such as overall survival or clinically relevant metrics such as the number of misdiagnoses.
  2. Appropriate benchmarks should be determined, as in the recent example of the FDA approving Viz.ai, the deep-learning algorithm for diagnosing strokes, after it identified strokes on computed tomography imaging more rapidly than neuroradiologists.
  3. Variable input specifications should be clarified for all institutions, such as defining which electronic health record inputs an algorithm uses, so results are reliable across institutions. Plus, algorithms should be trained on data from populations as broadly representative as possible so they generalize across patient groups.
  4. Guidance should be considered on the possible interventions that would be tied to an algorithm's findings to improve patient care.
  5. Rigorous audits should be run after FDA clearance or approval, particularly to check how new variables incorporated by deep learning may have altered an algorithm's performance over time. For instance, regular audits could reveal that an algorithm had developed a systematic bias against certain groups after being deployed across large populations. This could be tracked in a manner similar to the FDA's current Sentinel Initiative program for approved drugs and devices (a simplified sketch of such an audit follows this list).
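To make item 5 concrete, below is a minimal, hypothetical sketch in Python of the kind of post-market audit the authors describe: recomputing an algorithm's performance by patient subgroup on logged post-deployment data and flagging systematic gaps. The record format, subgroup labels, and the 10-percentage-point threshold are illustrative assumptions, not anything specified in the paper or by the FDA.

# Hypothetical post-deployment audit: recompute sensitivity per subgroup
# from logged predictions and flag groups that fall behind the best group.
from collections import defaultdict

def sensitivity_by_group(records):
    """Compute true-positive rate (sensitivity) per subgroup.

    Each record is a dict with keys: 'group', 'label' (1 = disease present),
    and 'prediction' (1 = algorithm flagged disease).
    """
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # actual positives per group
    for r in records:
        if r["label"] == 1:
            pos[r["group"]] += 1
            if r["prediction"] == 1:
                tp[r["group"]] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

def flag_bias(records, max_gap=0.10):
    """Flag subgroups whose sensitivity trails the best-performing group
    by more than `max_gap` (an arbitrary audit threshold)."""
    rates = sensitivity_by_group(records)
    if not rates:
        return {}
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if best - rate > max_gap}

# Example: a quarterly audit over logged predictions might surface a gap.
audit_log = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
]
print(flag_bias(audit_log))  # {'B': 0.5} -> subgroup B lags subgroup A

A real audit would rely on clinically validated endpoints and much richer stratification, but the core idea is the same: performance is recomputed on post-deployment data, broken out by subgroup and tracked over time.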

Outside comment: Eric Topol, founder and director of Scripps Research Translational Institute, who was not part of this paper, says the timing of these proposed standards is "very smart," coming before advanced algorithms are built into too many devices.

  • "[The algorithm] doesn't translate necessarily into helping people," Topol tells Axios. "It can actually have no benefit."
  • Even worse, he adds, if the variables are off, the predictive analyses can have negative ramifications.

What's next: The scientists hope the FDA considers integrating the proposed standards alongside its current pre-certification program under the Digital Health Innovation Action Plan to study clinical outcomes of AI-based tools, Parikh says.
