How Colorado is working to prevent AI deepfakes
Illustration: Natalie Peeples/Axios
Colorado's attorney general issued a public advisory Monday warning voters to watch for election-related misinformation created by artificial intelligence.
Why it matters: "The use of deepfakes in political messaging and on social media is increasingly common, especially with the development of sophisticated systems of artificial intelligence," Attorney General Phil Weiser wrote.
Driving the news: Colorado is emerging as a national leader in regulating AI, partly for how it plans to crack down on election-related messages that could cause voter confusion.
- A new law requires any AI-generated visual or audio communication about a candidate to carry a disclosure warning voters if it "depicts speech or conduct that falsely appears to be authentic or truthful."
- The law includes exemptions for parody and satire, as well as for media organizations airing political advertisements.
The intrigue: A violation of the law may lead to an injunction, as well as financial damages and criminal penalties, the attorney general warns.
Flashback: The issue came to the forefront earlier this year when Axios Denver first reported that a state legislative candidate was using fake, AI-generated images in his campaign newsletter, including one of a bogus community meeting.
- Under the new rules, the images would need to carry a disclaimer about their origin.
Zoom in: The attorney general offered three tips to help consumers discern whether an election communication is false, AI-generated content:
- Check or listen for a disclaimer that it is a deepfake.
- Verify the information with a separate trusted source.
- Be skeptical of other political communications not covered by the law and verify them independently.
