October 24, 2023
Happy Tuesday, Pro readers. The speaker drama may have delayed a House hearing on deepfakes slated for today, but we've still got you covered on everything you need to know.
- We'll be back in your inbox later with any news from the Senate AI forum.
1 big thing: Speaker fight derails work to combat deepfakes
Illustration: Aïda Amer/Axios
House lawmakers' plans to tackle deepfakes are on hold while Republicans struggle to pick a speaker, Maria reports.
What's happening: The House Oversight Committee's Cybersecurity, Information Technology, and Government Innovation Subcommittee was scheduled to hold a hearing this morning on how to incentivize private sector solutions for detecting and deterring deepfake technology.
- Nick Burroughs, subcommittee ranking member Gerry Connolly's spokesperson, said most of the committee's hearings are being postponed because of the speaker situation.
Why it matters: People rely on the internet to make informed decisions about current events and voting, but the threat of disinformation is intensifying.
- Advancements in AI are both spurring the sophistication of deepfakes and providing the verification tools to authenticate images, videos and audio.
- "You're seeing it in Israel and Gaza, where there's a debate about some news being fake," Adobe's chief trust officer Dana Rao, who was set to testify today, told Axios. "It may or may not even be fake, but now no one believes anything they're seeing and there are people fighting about actual events."
- "So it's almost a doubt about what is real, and that is as bad as the fact that there are actually fake things out there fooling you."
Nearly 2,000 creators and organizations, from Qualcomm to the Associated Press and Universal Music Group, are members of Adobe's Content Authenticity Initiative, which provides a tool for documenting the origins and history of content.
- Adobe's digital media content provenance technology generates a set of credentials that details who created an image, when and where it was made and how the image was edited.
- Rao said, "Once this is available everywhere, people are going to want to see important news events come with provenance. They're going to say, if this is so important, why wouldn't you use this tool to show me that the thing you're telling me is true? Because if you didn't, I'm skeptical."
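Conceptually, a provenance credential is a bundle of facts about a file that is cryptographically bound to that exact file, so any alteration breaks the link. The toy sketch below illustrates only that binding idea; the field names and the hash-only check are hypothetical simplifications, not Adobe's or the C2PA's actual Content Credentials format, which also relies on digital signatures from trusted issuers.

```python
import hashlib
import json

def make_credentials(image_bytes, creator, created_at, location, edits):
    """Build a simplified, illustrative provenance record for an image.

    Captures the kinds of facts the article describes: who created the
    image, when and where it was made, and how it was edited.
    """
    record = {
        "creator": creator,
        "created_at": created_at,
        "location": location,
        "edits": list(edits),  # e.g. ["crop", "color-correct"]
        # A hash of the image bytes ties this record to one specific file.
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    return json.dumps(record, sort_keys=True)

def verify_credentials(image_bytes, credentials_json):
    """Return True only if the file still matches the record's hash."""
    record = json.loads(credentials_json)
    return record["content_hash"] == hashlib.sha256(image_bytes).hexdigest()

image = b"\x89PNG...stand-in image bytes"
creds = make_credentials(
    image, "Jane Photographer", "2023-10-24T09:00:00Z",
    "Washington, D.C.", ["crop"],
)
print(verify_credentials(image, creds))         # True: file matches record
print(verify_credentials(image + b"!", creds))  # False: file was altered
```

This also shows why platform behavior matters: the record travels as metadata alongside the file, so a platform that strips metadata on upload discards the credential even though the image itself is unchanged.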
Politicians on both sides of the aisle are also entering the 2024 elections worried that videos of them can be easily distorted to influence voter behavior.
- Adobe supports the FEC's efforts to hold people accountable for deceptive advertising. In addition, Rao said politicians should use provenance tools in their own ads to build trust.
- Adobe is pushing U.S., EU and U.K. government officials to implement provenance tools in their own communications, which Rao described as the "low-hanging fruit" that will help spread the technology more broadly.
Of note: Cooperation from platforms, many of which automatically strip out the type of metadata seen in provenance credentials, will be key to the success of the tool.
- "It should be the policy of all democratic governments that if a piece of content has Content Credentials attached, those credentials should not be stripped away," according to Rao's opening remarks for the postponed hearing, shared exclusively with Axios.
Flashback: DARPA's Media Forensics program, which wrapped up in fiscal year 2021, was created to study how counterfeit pictures and videos were being generated.
- Now the agency is asking Congress to appropriate $18 million for Semantic Forensics in fiscal year 2024, a program that builds on previous efforts by detecting, attributing and characterizing the threat level of deepfakes.
What they're saying: The public needs to know authentication tools are reliable, Connolly had planned to say today in his remarks.
- Connolly was also going to say the maturity of DARPA's deepfake detection tools should be assessed, and federal efforts must be well-coordinated with the private sector and academia.
2. Schatz, Kennedy roll out AI labeling bill
Schatz speaks during a Senate Appropriations subcommittee hearing on June 9, 2021. Photo: Al Drago-Pool/Getty Images
Sens. Brian Schatz and John Kennedy are trying to drum up support for a bill that would boost transparency on AI-generated content, making clear when someone is viewing content made by AI or interacting with an AI chatbot, Ashley reports.
What they're saying: "Our bill is simple – if any content is made by artificial intelligence, it should be labeled so that people are aware and aren't fooled or scammed," said Schatz in a statement released today.
- "Our bill would set an AI-based standard to protect U.S. consumers by telling them whether what they're reading, seeing or hearing is the product of AI, and that's clarity that people desperately need," Kennedy said in the statement.
Details: The two senators' AI Labeling Act, which has support from groups like SAG-AFTRA, the American Federation of Teachers and the Writers Guild of America, would require generative AI developers to include a "clear and conspicuous" disclosure identifying AI-generated content and chatbots.
- It would also require developers and third-party licensees of such content to "take reasonable steps" to prevent "systematic publication of content without disclosures," and it would establish non-binding technical standards to help social media platforms identify such content.
- The senators introduced the bill shortly before the August recess but only announced it today.
- A handful of other AI-focused bills have been introduced this month.
✅ Thank you for reading Axios Pro Policy, and thanks to editor Mackenzie Weinger and copy editor Steven Patrick.
- Do you know someone who needs this newsletter? Have them sign up here.