September 11, 2023
It's Monday, Pro readers! The House is in town and we're gearing up for a wild week in tech policy.
📍 Join Ashley tomorrow for a conversation featuring Sen. Michael Bennet on whether an agency is needed to regulate tech platforms. Plus: alternative ideas on the role government should play. Register here.
1 big thing: A new era of self-regulation
Illustration: Aïda Amer/Axios
Google's shift on AI and election ads is kicking off an era of tech self-policing while regulators mull new rules in the fast-moving world of generative AI, Ashley writes in her column today.
Driving the news: Starting in November, election advertisers on Google will be required to "prominently disclose when their ads contain synthetic content that's been digitally altered or generated and depicts real or realistic-looking people or events ... inclusive of AI tools," per a company announcement last week.
- Google is the first major platform to announce a specific policy addressing AI and political ads.
- Snapchat and Meta previously told Axios they were reviewing policies around AI and ads and would update accordingly if needed.
The big picture: As online political advertising has grown into a larger part of the landscape alongside traditional TV and radio ads, regulators have struggled to keep up, as in so many other tech policy debates.
- It's a fairly unregulated space. But now the FEC is mulling new rules, voting last month to receive public input on a petition brought by the advocacy group Public Citizen, which seeks to restrict the use of AI to generate intentionally false content in campaign materials.
What they're saying: "I applaud Google for what they're doing," Sen. Mark Warner told Maria in an interview last week.
- "But if we have one standard from Google and another standard from Microsoft and another standard from X and a third from Amazon, that isn't going to give us the transparency we need so that voters and investors can make sure they pause before they accept at face value a message that may have been AI-generated."
- Sen. Amy Klobuchar said Google's announcement was "a step in the right direction, [but] we can't solely rely on voluntary commitments."
Flashback: Google's early move on labeling AI-generated election advertising is reminiscent of a debate over online ads dating back to 2017, when Klobuchar, Warner and John McCain first started pushing a bill called the Honest Ads Act. (The bill has been reintroduced multiple times, including this year.)
- The backdrop for that bill was bipartisan concern over foreign election meddling in the wake of a Russian troll farm placing ads on American social media platforms, meant to sow discord in the 2016 election.
- It aims to update campaign laws to include internet and digital advertisements and require companies to maintain a public file of such ads.
The intrigue: When the bill was first gaining steam, Meta (then known as Facebook) announced its political advertising library, enabling anybody to look up who's advertising on the platform on social, political and election topics and how much they're spending.
- Since the 2016 election and the 2021 Capitol insurrection, social media platforms have gone back and forth on what sorts of political ads they permit, but online advertising has remained a major driver of political campaigns.
Prompted partly by the momentum around the Honest Ads Act, Meta set its own standard for transparency in online advertising with the ad library, tweaking it and adding features in the years since.
- The company eventually said it supported the bill.
- Google followed suit with a political ad archive, though Twitter (now X) offers little for the public on transparency of election ads.
Our thought bubble: We're in a new era of self-regulatory moves by tech, this time focusing on AI. Such moves haven't shielded the companies from criticism of how they handle political ads, nor has enforcement of their own policies been perfect.
- But until new laws are passed, it's what we've got.
2. Hill hearing watch
Illustration: Brendan Lynch/Axios
Hope you've got enough coffee to get through this week — here's everything on the Hill that we've got our eyes on.
1. Schumer's first AI insight forum: Senators gather Wednesday for the high-level, closed-door meeting with X's Elon Musk, Meta's Mark Zuckerberg, Google's Sundar Pichai, OpenAI's Sam Altman and Microsoft co-founder Bill Gates, among others.
- As we reported last week, the insight forum will be a two-parter: A three-hour morning session, at which the tech CEOs and other invited speakers will give remarks, and an afternoon meeting.
- Not all of the big names will stick around for the afternoon sesh, though the majority leader has asked the speakers to have a high-level representative present.
2. Transparency in AI: As Ashley let you know first last week, the Senate Commerce consumer protection panel holds a hearing on artificial intelligence tomorrow at 2:30pm ET.
- BSA CEO Victoria Espinel, Carnegie Mellon University's Ramayya Krishnan, and Sam Gregory, executive director of the human rights organization Witness, testify before lawmakers on how to increase transparency for consumers.
3. AI framework: At the same time, tomorrow at 2:30pm, the Senate Judiciary panel on privacy, technology and the law meets to discuss panel leaders Richard Blumenthal and Josh Hawley’s newly announced framework to regulate AI.
- The framework includes establishing an independent oversight body to administer licenses for certain AI models and holding companies accountable by giving people the right to sue and denying companies Section 230 immunity.
- Microsoft president Brad Smith, Nvidia’s William Dally and Boston University law professor Woodrow Hartzog are on the witness list.
4. Fed AI in the Senate: The Senate Homeland Security & Governmental Affairs Committee gathers Thursday at 10am ET "to examine governing AI through acquisition and procurement."
- Just before leaving for August recess, the committee advanced Chair Gary Peters' AI LEAD Act, which would require agencies to appoint a "chief artificial intelligence officer" and establish a council to help coordinate strategy across the federal government.
5. Fed AI in the House: That afternoon, the focus on federal agencies and AI moves to the lower chamber. Two House Oversight and Accountability subcommittees hold a hearing Thursday at 1pm ET looking into how the government is "harnessing" AI.
- OSTP's Arati Prabhakar, DOD's Craig Martell and DHS' Eric Hysen are on tap to testify.
6. Immigration and competition: The Senate Budget Committee convenes a hearing Wednesday at 10am ET on "how immigration fuels economic growth and our competitive advantage."
👨‍⚖️ Meanwhile, off the Hill, the Justice Department's highly anticipated antitrust case against Google kicks off tomorrow.
- Get up to speed fast with Ashley's deep dive.
✅ Thank you for reading Axios Pro Policy, and thanks to editors Mackenzie Weinger and David Nather and copy editor Brad Bonhall.
- Do you know someone who needs this newsletter? Have them sign up here.
View archive