Bill spotlight: Independent AI safety panels

Illustration: Lindsey Bailey/Axios
The next AI policy idea that could gain traction in the U.S. would give companies some legal immunity from challenges over possible harms if they prove they're adhering to safety standards.
Why it matters: As it becomes increasingly clear that the federal government isn't going to meaningfully regulate AI, this is one model that could pick up steam in states across the country.
Driving the news: AI safety nonprofit Fathom is looking to get more state lawmakers to introduce legislation that would set up a certification regime of voluntary third-party testing panels for AI models and applications.
- California's SB 813, which Fathom backed, would have done that but didn't fully advance this session.
- Now Fathom is looking to roll out the effort again in the state and others next year, co-founder Andrew Freedman told Axios.
Freedman said a refined version of SB 813 will be introduced in 2026, with new input from a variety of groups and changes including the type of legal protection companies would get from participating.
- He's also expecting two similar bills to roll out in different states soon, with additional states to follow (he declined to name the states).
How it works: Tech and AI companies would opt into certification by an independent verification organization, which would confirm they're meeting a heightened standard of care in a risk area such as children's safety. In exchange, they'd receive protection from certain levels of legal risk.
- "It doesn't mean there's no risk in the system anymore ... it creates a system of less risk than the industry standard," Freedman said.
- Dean Ball, former AI adviser at the White House Office of Science and Technology Policy and now a policy fellow at Fathom, helped come up with the idea.
What they're saying: "We want to distinguish that this is not going to be a patchwork solution, that it's something that could be adopted and become national, even if it doesn't become federal," Freedman said.
The bottom line: Freedman said certifying bodies for AI safety will emerge no matter what happens in statehouses — but they'll work much better if there's "transparency and accountability" from lawmakers setting specific targets.
