How Colorado is making its own rules for AI

Illustration: Lindsey Bailey/Axios
Colorado's approach to implementing AI has been "bullish with guardrails," David Edinger, the state's chief information officer, told Axios in an interview.
Why it matters: Colorado is an example of a state where AI safety still reigns supreme.
What they're saying: Edinger met with Colorado Gov. Jared Polis (D) about a year and a half ago, and Polis told him to embrace AI in state government.
- "I said, 'We've seen a few missteps from certain cities and states around the country. How do you feel about an approach that's more along the lines of bullish with guardrails?'" Edinger said.
- He told Polis: "So we'll sort of take an aggressive approach, but we will evaluate things along the way so we don't make a silly misstep."
Edinger said he and state colleagues built a framework for AI use on the NIST AI Risk Management Framework, taking into account the needs of different state agencies.
- Colorado ended up backing out of some agreements with AI firms whose data-sharing policies could have required sharing personally identifiable information, which might have run afoul of Colorado law.
By the numbers: Fifty approved use cases for AI have gone into effect, Edinger said, with just over 200 requested across the state government. The state also ran a Google Gemini pilot last year with 150 people, who came up with about 2,000 uses for the technology.
- Highlights included improved analytical capabilities and faster creation of spreadsheets and slides. State employees with disabilities also said the AI helped them be more productive.
- Now, state employees do a Google Gemini training through a program called Innovate.US.
- Edinger said about 12-15% of the 31,000 employees using Google products now use Gemini, with more users added each month.
Other uses across the state government include policy reference chatbots, AI tools for job-seeking in government, a virtual assistant for unemployment-claim issues, and 911 training.
Yes, but: Edinger said the state doesn't want to leave any "consequential decisions" up to AI.
- That's partly because of Colorado Senate Bill 205, which will require developers of high-risk AI systems to use "reasonable care" to protect consumers from risks.
- The law's implementation was delayed from February to June 2026 after failed negotiations to tweak the text.
Edinger said that "anything that looks or smells or could possibly be thought of as a consequential decision, we don't want AI ever being used to make those types of decisions."
- That means anything "that might impact say, an individual Coloradan's access to benefits, or anything like that, without a human in the middle or human in the loop," he said.
This story is the third in a series about how state governments are using AI — check out part one and part two.
