
Medical AI, meet reality

Image: tweet screenshot of an MIT article on why AI tools for COVID haven't worked. Screenshot: @hoalycu

In case you'd heard any scary stories about AI advancing to the point of taking clinicians' jobs, here's your regular reminder that we're very, very far from such a scenario.

Driving the news: In a recent thread, Twitter user Lucy Hao shared screenshots from an MIT article detailing problems that befell several AI tools designed to help identify COVID-19.

Why it matters: Even in the most advanced cases, health care algorithms are typically used to guide clinical decision-making, not to replace a doctor's judgment.

  • Even for those tools, developers must ensure their algorithms are trained on appropriate data and account for confounding variables.
  • One issue that keeps surfacing: AI tools trained on non-diverse populations and then deployed on diverse ones. Can you say error-prone? (A toy sketch of that failure follows below.)
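
To make that concrete, here's a minimal sketch using entirely synthetic data and a hypothetical single "biomarker" feature (not any real tool's inputs): a classifier trained on one population is deployed on another whose healthy baseline is shifted, and accuracy collapses.

```python
# Minimal sketch with synthetic data: a model trained on one population,
# deployed on another whose healthy baseline differs, starts flagging
# healthy people as sick. The "biomarker" feature is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(n, baseline_shift):
    """Half sick, half healthy; being sick raises the biomarker by ~2 units."""
    sick = rng.integers(0, 2, size=n)
    biomarker = sick * 2.0 + baseline_shift + rng.normal(0.0, 1.0, size=n)
    return biomarker.reshape(-1, 1), sick

# Train only on population A (healthy baseline = 0).
X_a, y_a = make_population(5_000, baseline_shift=0.0)
model = LogisticRegression().fit(X_a, y_a)

# Deploy on population B, whose healthy baseline sits ~2 units higher.
X_b, y_b = make_population(5_000, baseline_shift=2.0)
print(f"accuracy, training population: {model.score(X_a, y_a):.2f}")  # ~0.84
print(f"accuracy, new population:      {model.score(X_b, y_b):.2f}")  # ~0.58
```

The decision threshold learned on population A lands squarely in the middle of population B's healthy range, so healthy patients get flagged as sick en masse.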

(Small) details: When unaccounted for, unexpected variables rendered several COVID algorithms useless.

  • In one example from the article, developers trained their tool on a dataset containing chest scans of children without COVID. The resulting tool "learned to identify kids, not COVID."
  • In another, patients who were scanned while lying down were more likely to be seriously ill. So "the AI learned wrongly to predict serious COVID risk from a person's position." (A toy version of that confound appears below.)

Oops.
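
For the curious, here's what that position confound looks like in miniature. This is a toy sketch with made-up data, not anyone's actual model: a "supine" flag stands in for patient position, and because it tracks severity in the training data, the classifier leans on it instead of the scan itself.

```python
# Toy sketch of the confound: synthetic "scans" where patient position
# (supine) correlates with severity in training data but not at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_scans(n, position_tracks_severity):
    severe = rng.integers(0, 2, size=n)
    # Weak genuine signal from the scan itself.
    lung_opacity = severe * 0.5 + rng.normal(0.0, 1.0, size=n)
    if position_tracks_severity:
        # In the training data, the sickest patients were scanned lying down.
        p_supine = np.where(severe == 1, 0.9, 0.1)
    else:
        # At deployment, position is unrelated to how sick someone is.
        p_supine = np.full(n, 0.5)
    supine = (rng.random(n) < p_supine).astype(float)
    return np.column_stack([lung_opacity, supine]), severe

X_train, y_train = make_scans(5_000, position_tracks_severity=True)
model = LogisticRegression().fit(X_train, y_train)
print("coefficients [opacity, supine]:", model.coef_[0])  # supine dominates

X_test, y_test = make_scans(5_000, position_tracks_severity=False)
print(f"train accuracy:    {model.score(X_train, y_train):.2f}")  # ~0.9, looks impressive
print(f"deployed accuracy: {model.score(X_test, y_test):.2f}")    # ~0.55, barely better than chance
```

Once position stops tracking severity, the model is barely better than a coin flip, which is roughly what happened when these tools left the lab.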
