
A radiologist in Berlin, Germany, looks at a patient's brain images in an AI-based app on a tablet. Photo: Monika Skolimowska/picture alliance via Getty Images
Amid new signs that AI could transform cancer care, clinicians and health systems are taking stock of thorny ethical and practical questions that still stand in the way of the technology's widespread adoption.
Why it matters: Cancer remains the second leading cause of death in the U.S., and innovations like AI-enhanced mammography could detect cases sooner and cut down on unnecessary tests and treatments.
- But experts warn of limitations, including biased algorithms and limited visibility into how AI systems analyze data to make decisions.
- In the best-case scenario, AI could mean treating patients "more efficiently in terms of time and money, and ... more accurately choosing the right therapeutic course," Jae Ho Sohn, education co-chair of the UCSF Center for Intelligent Imaging, told Axios.
What's happening: A recent trial of 80,000 women in Sweden found that AI-enabled breast cancer screening outperformed the standard reading of two experienced radiologists, according to an analysis in the journal The Lancet Oncology.
- It was the first randomized trial to look at AI and breast cancer screening and showed how the technology could improve the accuracy of screening programs without an increase in false positives, Kristina Lång, the study's lead author, told Axios.
- But two other studies in the Annals of Internal Medicine found AI didn't improve detection of advanced adenomas, precursors of colorectal cancer, leaving it unclear whether the technology could reduce the incidence of colon cancer.
The big picture: The Food and Drug Administration has approved more than 500 AI and machine learning-enabled medical devices — including imaging software, computer-assisted triage systems, camera-based systems for measuring vital signs and remote cardiac monitoring devices.
- Sohn, whose work focuses on lung cancer, said AI could be useful in detecting cancerous nodules that are so tiny they can be easily missed by clinicians.
- That could ultimately help prioritize cases, he added.
- AI is also being deployed via algorithms that can examine genetic and other molecular markers in blood tests to identify cancer risks, Paul Pinsky, chief of the early detection research branch within the National Cancer Institute's division of cancer prevention, told Axios.
But health systems using AI tools will likely have to wrestle with practical and logistical issues, experts say.
- "If the AI algorithm is predicting one way, and then the doctor thinks the other way, then what kind of decision should we make? Who's going to take responsibility if the AI gets something wrong in its prediction?" Sohn asked.
- Regina Barzilay, AI faculty lead at the MIT Jameel Clinic, said that while clinicians must trust their own judgment, AI can do things human radiologists can't validate, because it can detect extremely subtle patterns and changes in images.
- When a patient gets a mammogram, "the only thing that a computer can tell you is whether you have, according to them, the cancer right now or not. It could not tell you whether you're likely to get cancer within a year," she said.
Zoom in: Machines powered by AI can give patients much more specific information about their risk for developing cancer in the future, Barzilay told Axios.
- "The bigger question is, how do you create a workflow where you're not just giving this piece of information to the patient, but you giving them ... different pathways what to do next," depending on their level of risk, Barzilay said, such as getting an MRI.
- How those clinical workflows will be created remains to be seen, she added, as uptake of AI tools has been extremely limited in the U.S. health care system.
Worth noting: For all its promise, AI adoption is lagging in part because doctors and hospitals don't want to be held accountable for biased algorithms whose flawed decision-making isn't immediately apparent, according to a Brookings review.
- Other barriers include a lack of large, high-quality sets of electronic health data and regulatory hurdles, like privacy rules, that make it hard to pool such data.
- Other countries, including developing countries, have been much quicker to embrace AI tools in health care, Barzilay said.
But, but, but: Careful attention needs to be paid to how these AI systems are trained in order to reduce the potential for bias.
- If an AI screening program is trained on data from individuals who are all of one ethnicity, sex or even geographic region, the algorithm may be less accurate for people outside that group, Sohn said.
- There's also AI's "black box" problem: a lack of transparency around how the technology uses data to make decisions. If a system is making incorrect predictions in cancer screenings, it can be hard to determine exactly why, Sohn said.
- One solution, Barzilay noted, is to build AI systems that can warn when they're not working correctly, much like a car's dashboard lights up when there's an issue with the engine.
- Then, when an image or data point falls outside the scope of the data the system was trained on, it can warn that its prediction may not be accurate (a rough illustration of that idea follows below).
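Barzilay's dashboard-light analogy maps onto what machine learning researchers call out-of-distribution detection. The Python snippet below is a minimal, hypothetical sketch of that concept, not code from any clinical system: the placeholder training features, the Mahalanobis-distance score, the 99th-percentile threshold and the predict_with_warning helper are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of out-of-distribution detection: flag inputs that look
# unlike the training data so the model can warn that its prediction may be
# unreliable. The "features" here are placeholders for whatever embedding a
# real imaging model would extract from a scan.

rng = np.random.default_rng(0)
train_features = rng.normal(size=(1000, 16))  # stand-in training embeddings

# Fit a simple Gaussian model of the training distribution.
mean = train_features.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train_features, rowvar=False))

def distance_from_training(x: np.ndarray) -> float:
    """Mahalanobis distance of a feature vector from the training center."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Pick a warning threshold from the training data itself (99th percentile).
threshold = np.percentile(
    [distance_from_training(f) for f in train_features], 99
)

def predict_with_warning(x: np.ndarray) -> str:
    """Return a prediction, or a warning if the input is out of scope."""
    if distance_from_training(x) > threshold:
        return "warning: input looks unlike the training data; prediction withheld"
    return "prediction: ..."  # a real model's output would go here

# An input far from the training distribution trips the dashboard-style warning.
print(predict_with_warning(rng.normal(loc=10.0, size=16)))
```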
The bottom line: For now, both Sohn and Lång stressed that AI screening systems are likely to be used to help doctors, not as standalone tools.
- "Can we use this AI in a clinically impactful, useful way to help the doctors, essentially? That's where it's going at this point," Sohn said.