Google's AI ad policy kicks off an era of self-regulation
Google's shift on AI and election ads is kicking off an era of tech self-policing while regulators mull new rules in the fast-moving world of generative AI.
Driving the news: Starting in November, election advertisers on Google will be required to "prominently disclose when their ads contain synthetic content that's been digitally altered or generated and depicts real or realistic-looking people or events ... inclusive of AI tools," per a company announcement last week.
- Google is the first major platform to announce a specific policy addressing AI and political ads.
- Snapchat and Meta previously told Axios they were reviewing policies around AI and ads and would update accordingly if needed.
The big picture: As online political advertising has become a larger part of the landscape alongside traditional TV and radio ads, regulators have struggled to keep up, as in so many other tech policy debates.
- It's a largely unregulated space. But the FEC is now weighing new rules, voting last month to seek public input on a petition brought by the advocacy group Public Citizen that would restrict the use of AI to generate intentionally false content in campaign materials.
What they're saying: "I applaud Google for what they're doing," Sen. Mark Warner told Axios' Maria Curi in an interview last week.
- "But if we have one standard from Google and another standard from Microsoft and another standard from X and a third from Amazon, that isn't going to give us the transparency we need so that voters and investors can make sure they pause before they accept at face value a message that may have been AI-generated."
- Sen. Amy Klobuchar said Google's announcement was "a step in the right direction, [but] we can't solely rely on voluntary commitments."
- "Voters deserve nothing less than full transparency, and I'm continuing to push to immediately pass stronger disclosure laws that account for AI-manipulated content in campaign ads, as well as to ban deceptive AI-generated content in our elections and counter the spread of election-related disinformation," Klobuchar said in a statement.
Flashback: Google's early move to label AI-generated election advertising is reminiscent of a debate over online ads dating to 2017, when Klobuchar, Warner and John McCain first started pushing a bill called the Honest Ads Act. (The bill has been reintroduced multiple times, including this year.)
- The backdrop for that bill was bipartisan concern over foreign election meddling in the wake of a Russian troll farm placing ads on American social media platforms, meant to sow discord in the 2016 election.
- It aims to update campaign laws to include internet and digital advertisements and require companies to maintain a public file of such ads.
The intrigue: When the bill was first gaining steam, Meta (then known as Facebook) announced its political advertising library, enabling anybody to look up who's advertising on the platform on social, political and election topics and how much they're spending.
- Since the 2016 election and the 2021 Capitol insurrection, social media platforms have gone back and forth on what sorts of political ads can run online, but online advertising has remained a major driver of political campaigns.
- Prompted partly by momentum around the Honest Ads Act, Meta set a new standard for transparency in online advertising, tweaking the ad library and adding features in the years since.
- The company eventually said it supported the bill.
- Google followed suit with a political ad archive, though Twitter (now X) offers the public little transparency around election ads.
Our thought bubble: We're in a new era of self-regulatory moves by tech, this time focusing on AI. Such moves haven’t shielded the companies from criticism of how they handle political ads, nor has enforcement of their own policies been perfect.
- But until new laws are passed, it's what we've got.