What the health industry wants on AI

Illustration: Maura Losch/Axios

Health care industry groups have started laying out their own thoughts on how AI should be used and regulated within the field.

Yes, but: They're not saying much about what Congress should do — at least for now.

Where it stands: The AMA House of Delegates last month called for more regulatory oversight of insurers’ use of AI in reviewing prior authorizations and patient claims.

  • Insurers should have humans review patient records before denying medical care, the AMA said.
  • AMA delegates voted to work with the Federal Trade Commission on protecting patients from false and misleading medical advice from AI. They also voted for the AMA to develop its own organizational principles on AI use in medicine.
  • The AMA has also adopted guidance for classifying AI procedure codes.
  • "That's an extremely helpful document just in terms of setting out some clear definitions," said Cybil Roehrenbeck, leader of Hogan Lovells' health care lobbying practice. "We've asked policymakers to look at adopting [the AMA's definitions] more broadly."

Other trade groups have weighed in, too. The American Nurses Association released a position paper on the ethical use of AI in nursing last year that says technology to assist clinical practices should supplement — not replace — nurses’ knowledge and skills.

  • The American Hospital Association has many posts on its website about how hospitals can leverage AI, some dating to 2018.

The intrigue: Despite their clear interest in AI, the AMA, American Hospital Association and AHIP all declined to comment on what they’d like to see Congress do on AI.

  • That’s understandable, said Tom Leary, senior vice president & head of government relations at the Healthcare Information and Management Systems Society, a trade group focused on improving health care information technology.
  • “We’re all kind of scrambling,” he said. “We all know that in the background, there's been predictive analytics capabilities. … Now we're really sprinting on this topic.”

Zoom in: Policymakers should take incremental steps toward regulating AI, and pilot programs could be a useful way to do so, Leary said.

  • One area in particular that’s ripe for regulation is privacy. If Congress doesn't tackle AI's potential impact on privacy, “others are going to define the policy landscape and the legal landscape,” Leary said.
  • The data used in AI models could also be an area of concern. “For the generative models, it’s a lot more data just coming from the internet. Or we don’t know where the data is coming from, you don’t know what they were trained on,” said Brian Anderson, co-founder of the Coalition for Health AI.
  • “Therefore the level of [data] transparency into these large language models is much harder to regulate, but it’s appropriate that Congress looks at this.”

Our thought bubble: This is only the beginning. As AI continues to take off, expect to see more from clinicians about what they need from policymakers on the subject.
