August 28, 2025
🚨 Welcome to the final issue of this newsletter. As we announced in our Aug. 4 email to you, our expert policy reporters will share their stories in our free Axios newsletters going forward.
- So please keep following our work in newsletters like Axios Vitals, Axios AI+ and Axios Generate.
- And watch for the Friday editions of Axios AI+ next month, where Ashley and Maria will have you covered with special coverage of AI policy.
👋 Thanks for subscribing to Pro Policy. If you have any questions about prorated refunds for your subscription, please reach out to [email protected].
1 big thing: Three state AI laws we're watching
States have been busy with AI policy while Congress has struggled to pass much at the national level.
Why it matters: AI companies are increasingly focused on state-level action as Congress and the White House embrace an innovation-first, competition-focused approach.
State of play: Local laws are getting a lot of attention from tech companies that are increasingly worried about a state-by-state policy approach.
Here are three state AI laws we're watching, highlighted because we think the debate over them might shape how much states are able to get done on AI policy:
The Colorado AI Act: The law will require developers of "high-risk" AI systems to use "reasonable care" to protect consumers from "any known or reasonably foreseeable risks of algorithmic discrimination."
- This week, Colorado lawmakers decided to delay the AI law's implementation from February to June 30, 2026, and abandoned an effort to strengthen the rules, our Axios Denver colleague John Frank reports.
- Colorado is home to Palantir, a major AI defense company favored by the Trump administration. The drama in Colorado is a clear example of tech lobbying power hobbling state policy plans.
California's AI Transparency Act: This law, going into effect in 2026, mandates that companies "that create or produce a generative AI system with more than 1 million monthly users face new contracting requirements intended to help California users identify AI-generated content," per a summary from law firm Orrick.
- It's a major watermarking requirement for AI companies, and we'll be watching to see how they comply — and whether the law becomes a model for other states or for federal legislation.
- Besides that law, California has dozens of AI laws being proposed and debated — and tech industry lobbyists are spending more time than ever in Sacramento.
Tennessee's ELVIS Act: This law, which went into effect last year, updated Tennessee's Protection of Personal Rights law "to include protections for songwriters, performers, and music industry professionals' voice from the misuse of" AI.
- It's the first major law to try to protect creatives from AI using their work, and could inspire laws in other states.
- Sen. Marsha Blackburn has cited the ELVIS Act as she urges Congress to take up a similar federal law.
- Tennessee's law was a key reason she fought hard to prevent Congress from banning state AI laws during the last reconciliation fight.
What we're watching: Following news that a teenager died by suicide and his parents filed suit against OpenAI, the maker of ChatGPT, California Sen. Steve Padilla is urging fellow lawmakers to support his legislation, SB 243.
- The bill would "require chatbot operators to implement safeguards to protect users from the addictive, isolating, and influential aspects of AI chatbots and provide families with a private right to pursue legal actions against noncompliant and negligent developers."
- We'll be tracking the legislative responses to AI chatbots at the state and federal levels going forward at Axios.com.
2. Exclusive: Civil rights groups urge feds to drop Grok
More than 30 consumer-focused groups are calling on the federal government to block the use of xAI's Grok, saying it is ideologically biased and lacks safety testing, in a letter shared exclusively with Ashley.
Why it matters: The groups, most of which object to many of President Trump's policy moves, are using a specific aspect of the administration's AI approach to try to keep Grok out of federal agencies.
Driving the news: Organizations including the Consumer Federation of America, Common Cause, The Center for AI and Digital Policy and the Leadership Conference wrote to OMB director Russell Vought, saying Grok is incompatible with Trump's "woke AI" executive order.
- The letter urges OMB to block Grok for federal work.
- xAI and other AI companies have been landing lucrative government contracts. xAI did not respond to a request for comment.
What they're saying: "The administration's AI Action Plan and the recent [Executive Order 14319] provide a clear and unequivocal framework for the procurement of AI tools," the letter's authors write.
- "These documents emphasize that federal AI systems must 'objectively reflect truth rather than social engineering agendas' and be 'neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas,'" the letter states.
- "Grok's record falls short of these fundamental requirements ... [with a] well-documented history of generating content characterized by hate speech, racism, and antisemitism."
The groups point out that OMB guidance requires agencies to "discontinue use of an AI system if proper risk-mitigation is not possible" and lay out examples of Grok spitting out hate speech.
- Grok's unwillingness to share details about its safety testing makes it both unfit and non-secure for government use, they write.
✅ Thank you for reading, and thanks to editors Mackenzie Weinger and David Nather and copy editor Bryan McBournie.