The U.S. isn't ready to tackle AI in elections
The burden of detecting and flagging harmful election content will likely fall on average Americans in the face of government inaction and jurisdictional confusion, experts warn.
Why it matters: 2024 marks the biggest election year in history, with more than 2 billion people voting worldwide — and all eyes are on how the U.S. process unfolds.
- AI is injecting new challenges into the country's already vulnerable democracy, and the government has yet to catch up to the evolving threat.
Zoom in: A recent incident in New Hampshire, in which robocalls impersonating President Biden's voice discouraged recipients from voting, highlights how murky the rules around regulating AI deepfakes remain.
- It's still unknown where the call originated, how it was placed and how many people it reached. Those facts will help clarify whose jurisdiction it falls under.
- The FTC has cracked down on illegal robocalls made over the internet but is unlikely to take on an election disinformation investigation, a source familiar with the matter told Axios.
- The Federal Election Commission traditionally has been the one to take on political issues, and it is in the middle of a rulemaking that would prohibit deliberately deceptive AI campaign ads, with a decision expected in early summer.
- The FCC has led efforts to tackle robocalls placed on common carrier networks and is looking into AI's impacts on such calls.
What they're saying: "It would be a shame if each agency takes a pass because they're hoping another agency can do it. Someone should be on top of this," a former FTC official said.
- A communications industry effort launched in 2015 to trace illegal robocalls to their origin has made "enormous progress" in thwarting such calls, whether or not they are made with AI, Industry Traceback Group executive director Josh Bercu said.
- "Among the numerous dangers posed by generative AI, deepfake technology stands alone as the most urgent problem. Congress needs to act fast, because this is clearly only the beginning," Rep. Yvette Clarke wrote to Axios.
Meanwhile, Senate Majority Leader Chuck Schumer has said AI and elections will be prioritized, but tech legislation is notoriously difficult to get across the finish line.
- One bill would ban the use of AI to generate deceptive content of federal candidates in political ads, and another would require a disclaimer when AI is used in such ads.
- Biden's AI executive order instructs agencies across the federal government to examine how best to regulate AI, but that process is just beginning.
Social media companies are laying out new rules focused on AI.
- Meta is requiring advertisers to disclose when they use AI for social issues, elections and politics.
- TikTok doesn't allow AI-generated content of public figures if it depicts them endorsing a political view. It also requires creators to label realistic AI-generated content, offering a tool to help them do so.
- Google is requiring AI-generated election ads to include a disclaimer on YouTube.
Of note: Providing consumers with information about where content originated and how it has been altered over time, a practice known as provenance, is also gaining traction.
- OpenAI last week laid out its election plans, which include implementing digital credentials for images created through DALL·E 3.
- Adobe's chief trust officer Dana Rao said that in the last two months, platforms have been more engaged in the Content Authenticity Initiative, a coalition of companies working to implement provenance.
Yes, but: Social media companies aren't transparent about how effective their tactics are, and in the absence of government action, voters will be left to fend for themselves in identifying or flagging AI-generated content.
- "We need a lot more watchdogs," Mozilla Foundation President Mark Surman said.
- "Democracy is going to need citizens and data journalists to do the work that the platforms aren't going to be able to do themselves in this cycle of elections around the world."