A doctor’s helper, but not a doctor
AI will transform medicine, but our thinking about the role it will play requires a little less hyperbole and a little more context. Medicine is and will remain the human practice of diagnosis, treatment, and prevention of disease and illness — AI is simply a "how."
It is tempting to argue that breakthrough AI efficiencies — such as superior speed, throughput, and accuracy in parsing diagnostic imaging — could replace every discrete function in medical practice. In theory, you could have the discussion of symptoms handled by natural language processing, tests ordered via issue-tree automation around a standard of care, and treatment action and follow-up adjustment provided through deep learning systems.
A technical build from the bottom up solves medicine … or so it would seem.
But this line of thinking dangerously misreads AI's capabilities and underestimates the complexity of the interaction between physician and patient. That interaction routinely requires empathy and nuance, as well as expertise, complex decision making, context shifting, and unpredictable physical activity, often all at the same time. That's human terrain.
AI will become a better tool for many laborious functions, augmenting diagnostic capability and enabling physicians to practice at the top of their license. But the one role it cannot fill is that of the doctor.
The bottom line: We still need the human factor, but introducing AI to health care provides an enormous opportunity to supplement practice and enhance care.
Other voices in the conversation:
- Eric Topol, professor of molecular medicine, Scripps Research Institute: Improving the doctor-patient relationship
- Ethan Weiss, associate professor, UC-San Francisco School of Medicine: Helping doctors make better diagnoses
- Christine Cassel, planning dean, Kaiser Permanente School of Medicine: Don't forget the patient