Medical AI, meet reality
In case you'd heard any scary stories about AI advancing to the point of taking clinicians' jobs, here's your regular reminder that we're very, very far from such a scenario.
Why it matters: Even in the most advanced cases, health care algorithms are typically used to guide clinical decision-making, not to replace a doctor's judgment.
- Even for those decision-support tools, developers must ensure their algorithms are trained on appropriate data and account for confounding variables.
- One issue that has surfaced repeatedly: AI tools trained on non-diverse populations and then deployed on diverse ones. Can you say error-prone?
(Small) details: Unaccounted-for variables rendered several COVID algorithms useless.
- In one example from the article, developers trained their tool on a dataset of chest scans from children without COVID. The resulting AI tools "learned to identify kids, not COVID."
- In another, patients who were scanned while lying down were more likely to be seriously ill, so "the AI learned wrongly to predict serious COVID risk from a person's position."