Utah leads multistate push to regulate AI in health care

Illustration: Natalie Peeples/Axios
Utah is a leading player in a multistate push to regulate the use of AI chatbots in health care.
Why it matters: Artificial intelligence is health care's biggest wild card. But it's drawing hundreds of millions of dollars in investment, and health providers and drug developers are already using it — essentially without oversight.
State of play: Mental health chatbots are the first target of Utah's new Office of Artificial Intelligence Policy, the state announced in July.
- "Chatbots are controversial because, in some cases, they seem like they're ready to go," Zach Boyd, a BYU math professor who heads the new office, told the health news site Fierce Healthcare. "But we definitely know also they're, at this point, really unreliable."
Catch up quick: Utah in March became the first state to enact AI rules under its consumer protection laws.
- The Artificial Intelligence Policy Act, which created the new AI office, requires state-licensed professionals to disclose when a consumer is interacting with generative AI. That includes many health care workers.
- The act also blocks those in regulated professions from blaming violations of any consumer protection laws on AI's mistakes.
The big picture: In the absence of federal guardrails on artificial intelligence in health care, state governments are figuring out their own rules of the road.
- Colorado in May enacted one of the first comprehensive state AI laws, which places limits on developers and deployers of AI systems that make "consequential decisions," including in health care.
- The Federation of State Medical Boards this spring also adopted recommendations for best practices for governing the use of AI in clinical care.
Yes, but: Many proposals have languished in statehouses, said Valerie Rogers, senior director of government relations at the Healthcare Information and Management Systems Society.
Case in point: Several states are trying to restrict health insurers' use of AI to assess whether to pay for care, Bloomberg Law reported.
- In California, a state bill's proponents have worked through health insurers' concerns with the policy. But it could still be held up because of the added cost to the state's managed care fund.
What's next: Policymaking on AI and health will likely pick up in 2025, Rogers said.
- "States do feel under some pressure to rise to the challenge … particularly around privacy, around security, to limit bias or any sort of discriminatory use of AI," she said.
The big picture: States can often make policy quicker than the federal health bureaucracy and with specific community needs in mind.
- Still, officials have run into many of the same problems as their D.C. counterparts, such as the lack of clear definitions for AI.
Between the lines: Regulating AI use in health care on a state-by-state basis may create a patchwork system that's difficult for users and developers to navigate. That's not practical in the long run for many generative AI technologies, said Jennifer Geetter, a partner at law firm McDermott Will & Emery.
- "There are states that take different approaches to other health regulatory topics, but at a broad level, people move across states, technology moves across states, data moves across states, and risk moves across state lines," Geetter said.
- States are making an effort to collaborate on their AI policies, including in the health sector, through convening groups like the National Conference of State Legislatures, said Colorado state Rep. Brianna Titone (D), a sponsor of the new AI law there.
Yes, but: "You can't just copy and paste a law into someone else's statute book and expect it to work exactly the same," Geetter said.
What to watch: The federal government is slowly making progress toward national regulations on health AI. The Biden administration in late July reorganized its health IT offices in part to better focus on regulating artificial intelligence.
- Last week, FDA officials promised transparent and predictable guardrails for the use of artificial intelligence in drug development, Axios' Peter Sullivan reported.
- But FDA leaders didn't commit to a timeline.

