February 13, 2024
Good afternoon ... Want to know what Hill staff think will happen with AI legislation? Just keep scrolling.
1 big thing: Hill staffers chart path for AI legislation
Maria talks with Rob Hicks and John Beezer at the State of the Net conference. Photo: Courtesy Internet Education Foundation
AI action is heating up on Capitol Hill, key congressional staffers told Maria at the State of the Net conference yesterday.
Why it matters: Lawmakers have a short window to move on AI legislation before the 2024 elections kick into high gear.
- All eyes are on committee-level work after Senate Majority Leader Chuck Schumer's AI Insight Forums helped set the stage for legislation.
State of play: John Beezer, senior advisor to Senate Commerce Committee Chair Maria Cantwell, said Schumer is preparing an AI position paper that could serve as the "starting gun" in a matter of weeks.
- Schumer's office did not respond to a request for comment.
- In the House, lawmakers are waiting for Speaker Mike Johnson and Minority Leader Hakeem Jeffries to green-light an AI working group, which could happen in a matter of weeks.
- Rep. Jay Obernolte's legislative director Rob Hicks: "It would be a cross-section of members from different committees, because AI is not one thing; it's a tool. And when you talk about legislating AI, it's about what it means for each specific issue area."
Meanwhile, Commerce continues to prioritize AI and workforce issues, which Beezer called "the clearest, most obvious thing that we need to be working on right now."
- As for Cantwell's vision for an AI workforce bill modeled after the GI Bill, Beezer said the committee is still trying to get a handle on how the technology will affect the labor market.
- "There are some arguments that it will create so many new opportunities and that there'll be lots of new jobs and everything will be fine, but I don't think that's guaranteed, and I think we need to be prepared for it," he said.
Of note: Beezer and Hicks both said the CREATE AI Act to authorize and fund the National AI Research Resource is the priority.
- Beezer said it has the best prospects among the dozens of AI bills circulating in Congress.
Yes, but: Appropriators are fielding myriad funding requests, and there's little appetite to spend money, especially in the House.
The big picture: AI and privacy efforts are increasingly becoming interlinked, especially in the House Energy and Commerce Committee.
- States are making progress on both fronts, advancing and passing legislation that can help inform federal efforts but may also inject familiar disagreements over preemption.
- "It certainly gives a sense of urgency to a lot of what we've been thinking about, and my boss would say he would preempt state activity on AI," Hicks said, warning against fractured treatment of data across states.
Beezer said he sees AI and privacy legislation playing out in three equally possible ways:
- Congress decides AI can't be addressed without privacy and advances a bill with limited privacy components, such as requiring authorization to use sensitive personal data to train models: "I think that's a thing that needs to be addressed immediately."
- Congress decides AI is so transformative that lawmakers put together a package so big they decide to throw in comprehensive privacy while they're at it.
- Congress does nothing. "The logic on that is that's what we always do."
2. Tech unites on deepfake election content
Illustration: Sarah Grillo/Axios
Google, Meta, Microsoft, TikTok, Adobe and OpenAI will pledge to try to mitigate risks around deceptive AI election content, per a draft of an upcoming announcement obtained by Ashley.
Why it matters: Tech and AI companies are trying to get ahead of a potential explosion of deepfakes around global elections in 2024, positioning themselves as good actors working to prevent bad outcomes before anyone can accuse them of failing to be proactive.
Driving the news: Announcement of the pledge is set for Friday at the Munich Security Conference in Germany.
- News of the draft was first reported by Politico EU.
- Tech companies have significantly scaled back on political content in recent years, but it continues to proliferate online and remains an important tool for campaigns.
- The advent of widely available generative AI tools is stoking worry that political misinformation online will be worse than ever.
What they're saying: "We will continue to build upon efforts we have collectively and individually deployed over the years to counter risks from the creation and dissemination of Deceptive AI Election Content, including developing technologies, standards, open-source tools, user information features, and more," the draft announcement reads.
- The companies concede they can't fix the problem alone.
- "We recognize that no individual solution or combination of solutions, including those described below such as metadata, watermarking, classifiers, or other forms of provenance or detection techniques, can fully mitigate risks related to deceptive AI election content, and that accordingly it behooves all parts of society to help educate the public on these challenges."
- Adobe said in a statement: "In a critical year for global elections, technology companies are working on an accord to combat the deceptive use of AI targeted at voters."
- "Adobe, Google, Meta, Microsoft, OpenAI, TikTok and others are working jointly toward progress on this shared objective and we hope to finalize and present details on Friday at the Munich Security Conference."
Details: The voluntary framework is based on seven principles: prevention, provenance, detection, responsive protection, evaluation, public awareness and resilience. The signatories commit through 2024 to:
- develop technology to mitigate deceptive AI election content
- address content in a manner consistent with both free speech and safety
- share best practices with one another
- update the public on findings
- offer resources to researchers looking to stem the same risks
The bottom line: Companies responsible for deploying AI tools to the public know they have to brace for elections or risk disaster.
- The draft could change by Friday, and more companies may sign on.
3. What we're hearing: "Daunting" AI executive order
Ashley, left, with State of the Net panelists including Benjamin Della Rocca, center. Photo: Courtesy Internet Education Foundation
"There's a reason that the [AI] executive order last year was 20,000 words. It is so long because there's frankly so much to do, both in government and nongovernment."
– White House policy advisor Benjamin Della Rocca yesterday at the State of the Net conference, when asked what keeps him up at night about AI
"I think a lot of that obviously adds up to a pretty daunting set of tasks and a great deal of work to be done, which we signed up to do. And I'm confident that we can do it, but it certainly makes you appreciate the enormity of the challenge."
Thank you for reading Axios Pro Policy, and thanks to editors Mackenzie Weinger and David Nather and copy editor Brad Bonhall.
- Do you know someone who needs this newsletter? Have them sign up here.