States take the lead policing AI in health care

While President Trump demands a single national framework on AI policy, states are going their own way with hundreds of proposals aimed at setting guardrails for how the technology is used in health care.
Why it matters: That could set up a clash over who determines how AI models and systems can be deployed in insurer reviews, mental health treatment and chatbots that interact with patients.
By the numbers: More than 250 AI bills affecting health care were introduced in 47 states as of mid-October, according to a tracker from Manatt, Phelps & Phillips.
- 33 of those bills in 21 states became law.
- A half dozen states have enacted laws focused on the use of AI-enabled chatbots, including Illinois' new law barring AI apps or services from providing mental health therapy or making therapeutic decisions.
What they're saying: "There is a lot of bipartisan alignment on the topic. Red states are mirroring provisions of laws introduced in blue states and vice versa," said Randi Seigel, a partner at Manatt.
Driving the news: The current state AI activity falls into several broad categories, per Manatt:
🤖 New rules for chatbots in medical settings, including measures aimed at preventing bots from misrepresenting themselves as humans, producing harmful responses, or failing to reliably detect crises.
- Many bills are in response to findings that chatbots are unsafe for mental health support, especially among teens.
💵 New requirements on insurers and managed care plans that use AI for pre-treatment reviews or to scrutinize claims.
- Some would require plans to report the number of contested claim denials that involved AI or the use of predictive algorithms.
🔎 Transparency and antidiscrimination measures aimed at "high-risk" AI systems that can influence decisions about eligibility for health care, insurance, housing, education and other services.
- Colorado lawmakers recently delayed implementation of a sweeping transparency law, facing strong tech industry lobbying and other concerns.
🥼 New rules and mandates addressing potential bias or misuse of sensitive health data in clinical settings.
- Texas, Nevada and Oregon are among the states that recently enacted laws as more providers use AI tools to ease administrative tasks.
Friction point: Many of these efforts could bump up against Trump's push to establish a federal framework for AI and preempt state laws.
- Trump signed an executive order Thursday that requires the attorney general to establish a task force to challenge burdensome state AI regulations.
- It also draws Congress into the fight by calling for a legislative recommendation for a federal AI framework.
The burst of state rulemaking contributed to recent tensions between the White House and Anthropic, the maker of the Claude chatbot. The company supported efforts like a California law that would mandate transparency measures from "frontier" AI companies that develop the most advanced models.
- The company said it wants a federal standard but couldn't wait for Congress to act.
Manatt's Seigel expects continued state interest in areas such as AI companions used by minors for mental health purposes.
- A bipartisan coalition of 42 state attorneys general this week gave OpenAI, Meta, Google, Microsoft and nine other companies until Jan. 16 to come up with new safeguards to protect children and vulnerable people against emotional manipulation and other risks from generative AI content.
Yes, but: Beyond who has jurisdiction, future standard-setting could be complicated by the many different tasks AI can be applied to, and by criteria such as whether an algorithm is involved in making a "consequential decision."
- And some AI players are warning that a patchwork of laws could squelch innovation and create complex reporting requirements.
- "There are state-by-state restrictions that can be limiting," Rajaie Batniji, CEO of Medicaid health tech company Waymark Care, said at a recent Axios event, noting that some differentiate between "machine learning" and "artificial intelligence."
