Medical AI's weaponization
Machine learning can bring us cancer diagnoses with greater speed and precision than any individual doctor — but it could also bring us another pandemic at the hands of a relatively low-skilled programmer.
Driving the news: The World Health Organization is warning about the risks of bias, misinformation and privacy breaches in the deployment of large language models in healthcare.
- WHO officials worry that datasets that don't fully represent the population can generate misleading or inaccurate information.
- WHO research puts the chance of an individual being harmed at some point in the patient journey at 1 in 300, most often through data error.
The big picture: As this technology races ahead, everyone — companies, government and consumers — has to be clear-eyed that it can both save lives and cost lives.
- Next, it's set to help beat the trickiest cancers and boost rates of IVF success.
- But disaster is sometimes only one click or security breach away.
1. Escaped viruses are a top worry. Around 350 companies in 40 countries are working in synthetic biology.
- With more artificial organisms being created, there are more chances for accidental release of antibiotic-resistant superbugs, and possibly another global pandemic.
- The UN estimates superbugs could cause 10 million deaths each year by 2050, outranking cancer as a killer.
- Because they can tolerate high temperatures, salt and alkaline conditions, escaped artificial organisms could overrun existing species or disrupt ecosystems.
- What they're saying: AI models capable of generating new organisms "should not be exposed to the general public. That's really important from a national security perspective," Sean McClain, founder and CEO of Absci, which is working to develop synthetic antibodies, told Axios. McClain isn't opposed to regulatory oversight of his models.
2. One person's lab accident is another's terrorism weapon.
- In 2022, researchers showed they could generate 40,000 potential new chemical weapons compounds in just six hours.
- They took AI models designed to predict and ultimately reduce toxicity, and retrained them to increase toxicity instead.
3. Today's large language models make things up when they don't have ready answers. These so-called hallucinations could be deadly in a health setting.
- Arizona State University researchers Visar Berisha and Julie Liss say clinical AI models often have large blind spots, and sometimes worsen as data is added.
- Some medical research startups have started working with smaller datasets, such as the 35 million peer-reviewed studies available on PubMed, to avoid the high error rate and lack of citations common with models trained on the open internet.
- System CEO Adam Bly told Axios the company's latest AI tool for medical researchers "is not able to hallucinate, because it’s not just trying to find the next best word." Answers come with mandatory citations: when Axios searched for the causes of stroke, the tool offered 418 citations alongside its answer.
On top of the dangers of weaponizing medical research, AI in healthcare settings poses a risk of worsening racial, gender and geographic disparities, since bias is often embedded in the data used to train the models.
- Equal access to technology matters, too.
- German kids with Type 1 diabetes from all backgrounds are achieving better control of glucose levels because patients are provided with smart devices and fast internet. That's not a given in the U.S., per Stanford pediatrician Ananta Addala.
- The CDC still points healthcare facilities to a guide from 1999 for tips on avoiding bioterrorism. It makes no mention of AI.
What we're watching: Updated CDC and FDA guidance would be a first line of defense.
- The Department of Health and Human Services is consulting on a proposed rule on algorithm transparency, including patient demographics.