
Many health-related AI technologies today are biased because they're built on datasets composed largely of data from men and individuals of European descent.
Why it matters: An AI system trained to identify diseases, conditions and symptoms in the people represented in these datasets could fail when presented with data from people with different characteristics.
Background: AI-powered disease detection technology is part of the health care AI market expected to exceed $34 billion by 2025.
- Researchers recently demonstrated that AI used in breast cancer screenings correctly identified more cancers, reduced false positives and improved reading times.
What's happening: Most medical research focuses on men, and most publicly available genetic data comes from individuals of European descent. As AI is increasingly used in medicine, that skew could result in misdiagnoses of patients based on their gender, race and/or ethnicity.
- While heart attacks strike men and women at roughly equal rates, they are more likely to be fatal in women, in part because gender-based differences in symptoms can delay care.
- Similarly, if a person is not of European descent, AI medical technologies may misdiagnose that person, since symptoms and disease manifestations can differ across populations.
- Recent studies and mishaps have shown that current AI-reliant systems, such as search engines and image recognition software, reflect biases in their underlying data in ways that can cause harm.
What we're watching: Some steps are being taken to ensure that AI is evaluated for bias, including proposed legislation.
- The National Institutes of Health launched a new program last year to expand diversity in medical research and data by recruiting volunteers from populations that are currently underrepresented.
Go deeper: Scientists call for rules on evaluating predictive AI in medicine
Miriam Vogel is the executive director of Equal AI, a professor at Georgetown Law and a former associate deputy attorney general at the Department of Justice.