
Illustration: Sarah Grillo/Axios
Frustrated by inertia in Congress, state lawmakers from across the nation are organizing an effort to more aggressively tackle artificial intelligence.
Driving the news: A task force, announced this week at a national summit in Indianapolis, is focused on developing legislation with unified language to put guardrails on how AI is used in the public and private sectors.
- Lawmakers from Colorado, California, Texas, Florida and Minnesota are part of the coalition.
- "We are training these systems as the world is now, we know there are biases there," said Connecticut state Sen. James Maroney (D-Milford), a leader of the effort.
Why it matters: AI is creating concern and fear in statehouses, and legislators are scrambling to respond, even as they admit they won't be able to keep up with the fast-paced evolution of the technology.
The intrigue: The discussion of AI regulation Tuesday at the National Conference of State Legislatures (NCSL) summit drew hundreds — a standing-room-only crowd that spilled into the hallway.
- Given the political realities in Washington, "a lot of these questions will be in your hands to solve," Chloe Autio, the director of applied AI governance at the Cantellus Group, told lawmakers.
The big picture: The state-level approach will build on what the European Union is developing to regulate Big Tech, lawmakers at the summit say.
By the numbers: At least 25 states introduced legislation related to AI this year, and measures passed in 14 of them, according to NCSL, which just published a report to guide state lawmakers on the topic.
Zoom in: The initial steps are focused on bringing more transparency and accountability to AI systems with required public disclosures and limits on how the data is used.
- Connecticut is cataloging its state systems that use AI to ensure they do not discriminate or lead to disparate outcomes, while at least six other states have formed councils to study the issue.
- Colorado lawmakers passed a bill in 2022 to put limits on the use of facial recognition technology by state and local governments and required public disclosure and accountability reports.
- Other states are exploring AI's impact on education and criminal justice.
Yes, but: Tech industry officials and AI experts cautioned lawmakers not to go too far with the rules, while acknowledging they need to build more trust in the technology.
- Nicole Foster, the director of global AI and machine learning at Amazon, said states should incentivize adoption in industries short on workers and other innovative "use cases to make our lives better."
What they're saying: "I think it's a necessary conversation," Autio said. "But we need to target it so we don't lose the good."
The bottom line: Colorado state Sen. Robert Rodriguez (D-Denver) said the new task force won't "look at excessively regulating" AI because there's no way to keep up with the advancing technology. Still, limits are needed, he told Axios in an interview.
- "This is evolving, and if we don't do something now, what will it evolve to? We are going to be the Terminator or Cyberdyne systems taking over the world," he said.
