How Arizona is trying to fight election deepfakes
Illustration: Natalie Peeples/Axios
Prepare yourselves, Arizona voters. Election deepfakes are coming, and fighting them is going to be difficult.
Why it matters: This is the first election cycle since the rapid adoption of generative artificial intelligence that can realistically mimic public figures.
- Misinformation and disinformation have long been staples of elections, but AI and deepfakes will make it harder to distinguish truth from fiction.
Driving the news: Arizona is among the states where lawmakers are considering legislation to combat this new technology.
- A bill sponsored by state Rep. Alexander Kolodin (R-Scottsdale) would give candidates two years to ask a court to officially declare an image or audio to be a digital impersonation — but would not give them the ability to sue for damages.
- For deepfakes published within 180 days of an election, candidates could seek an expedited preliminary declaration.
Kolodin tells Axios the bill seeks to balance fighting deepfakes with free speech rights. It passed unanimously out of committee and is scheduled for a vote in the full House on Wednesday.
- "Once something's on the internet, you're not going to be able to get it down anyway," he says.
- "But at least … your campaign can say, 'Hey, we went to a court, presented the evidence, and we claim it's not real and the court agrees.'"
Threat level: Nadya Bliss, executive director of Arizona State University's Global Security Initiative, tells Axios that people already struggle to determine what's real, and that bad actors often create deepfakes to erode public trust in institutions.
- Disinformation's damage can be extremely difficult to undo, even when something is proved false, she says.
Flashback: The New Hampshire presidential primary may have offered a glimpse of the future when voters received a robocall using a deepfake of President Biden's voice urging them not to vote.
Zoom out: There are existing laws against voter suppression and election disruption that can be used against deepfakes, Secretary of State Adrian Fontes told Axios in an interview. He hosted an anti-deepfake exercise in December for state and federal officials.
- New Hampshire's attorney general said the fake Biden robocalls appeared to be an illegal attempt to disrupt and suppress voting, for example.
- The FCC earlier this month also banned AI-voiced deepfake robocalls.
What he's saying: Fontes called Kolodin's bill a good start, but he wants stricter laws against spreading election misinformation and disinformation.
But, but, but: Legislation can only go so far.
- Fontes said there needs to be ample open communication among election officials, the media, law enforcement, political parties and others to repudiate deepfakes as they pop up.
Plus: He said social media companies bear some responsibility, too. Several major tech companies last week signed a pact to voluntarily adopt "reasonable precautions" against the use of AI to disrupt elections.
- Bliss said computer scientists are exploring technologies such as "watermarking" to quickly identify deepfakes.
Reality check: Voters will also have to become savvier when it comes to sniffing out the new generation of election disinformation.
- Bliss said people need to learn to assess the information they hear and check it against other sources.
What we're watching: The focus now is on disinformation before the election.
- But the potential use of deepfakes to spread the kinds of post-election falsehoods that proliferated after 2020 is "the huge space to watch," said Katie Reisner, senior counsel for the national advocacy group States United Democracy Center.
Editor's note: This story was corrected to reflect that Katie Reisner's title is senior counsel.
