Deepfakes could supercharge health care's misinformation problem
For all the promise that artificial intelligence holds for health care, one of the industry's big fears is its potential to churn out more convincing misinformation.
Why it matters: AI experts warn that the technology used to create deepfakes — rather, sophisticated false images, audio and video — is getting so good it could soon be almost impossible to distinguish fact from fiction.
- The COVID-19 pandemic laid bare the deadly stakes of health care misinformation, as false information on vaccines, treatments and masks flooded social media sites.
- Deepfakes could make it even more challenging to react to emerging public health threats, secure patients' sensitive data or combat increasing cyberattacks on hospitals, experts told Axios.
The big picture: This technology is improving and spreading faster than experts expected, at a time when health information is being politicized and social media's already weak guardrails have been whittled down.
- "Really this year, it has come to the forefront based on the explosive, explosive development of generative AI," said John Riggi, national adviser for cybersecurity and risk for the American Hospital Association.
State of play: The threat to health care appears to be theoretical for now, but the industry doesn't want to get caught flat-footed.
- "We really need to be vigilant about it and try to get a hold of it now when it's still a bit nascent," said Chris Doss of RAND Corporation, who led a recent study on deepfakes in scientific communication published in Scientific Reports.
- In September, AHA urged health systems to be vigilant about the emerging risk deepfakes pose to patient information and hospitals' cyber defenses.
- "We do not want to play catch-up as we have, unfortunately, in the past with, for instance, ransomware attacks," Riggi said.
Among health care's major concerns with deepfakes:
Harder to stop misinformation: False images and audio that appear to come from a trusted source will make it harder to spread accurate health messages and will erode the public's confidence in legitimate sources.
- Imagine the impact of a deepfake Anthony Fauci video telling people not to get vaccinated, for instance.
- AI could enable disinformation to be automated and disseminated at scale. "That's the super-threat here," said Heather Lane, senior architect of the data science team for Athenahealth.
More convincing phishing: Phone calls and messages to patients appearing to come from their health insurer or doctor could be a tool for scammers to steal their financial or health information.
More effective cyberattacks: Similarly, a hacker could gain entry into a hospital's information systems by using synthetically generated audio of a known individual — such as the hospital's CEO — to call the organization's help desk for a new password, Riggi said.
The other side: Of course, health care is still very bullish on the upsides of generative AI — even including deepfakes.
- Early work with ChatGPT has found it can offer patients more empathetic answers than doctors can.
- Researchers have suggested that deepfakes could improve facial emotion recognition by AI and also create artificial patients to help in designing new molecules for treating disease.
The intrigue: The RAND study of how well individuals can identify deepfakes in scientific communication does little to allay fears about the technology.
- Even people working in science were fooled by deepfake videos relaying climate information. And the more individuals were exposed to deepfakes, the worse they got at identifying them.
- You might think "as deepfakes proliferate, people are going to get good at it just by being able to pick it out better with experience," Doss said. "Our study says that might not be true."
- "In fact, the opposite might be true."