Axios Pro Exclusive Content

Speaker fight derails work to combat deepfakes

Oct 24, 2023

Illustration: Aïda Amer/Axios

House lawmakers' plans to tackle deepfakes are on hold while Republicans struggle to pick a speaker.

What's happening: The House Oversight Committee's Cybersecurity, Information Technology, and Government Innovation Subcommittee was scheduled to hold a hearing Tuesday morning on how to incentivize private-sector solutions for detecting and deterring deepfake technology.

  • Nick Burroughs, subcommittee ranking member Gerry Connolly's spokesperson, said most of the committee's hearings are being postponed because of the speaker situation.

Why it matters: People rely on the internet to make informed decisions about current events and voting, but the threat of disinformation is intensifying.

  • Advancements in AI are both driving the growing sophistication of deepfakes and powering the verification tools used to authenticate images, videos and audio.
  • "You're seeing it in Israel and Gaza, where there's a debate about some news being fake," Adobe's chief trust officer Dana Rao, who was set to testify Tuesday, told Axios. "It may or may not even be fake, but now no one believes anything they're seeing and there are people fighting about actual events."
  • "So it's almost a doubt about what is real, and that is as bad as the fact that there are actually fake things out there fooling you."

Nearly 2,000 members, from Qualcomm to the Associated Press and Universal Music Group, have joined Adobe's Content Authenticity Initiative, which provides a tool for documenting the origins and history of content.

  • Adobe's digital media content provenance technology generates a set of credentials that details who created an image, when and where it was made and how the image was edited.
  • Rao said, "Once this is available everywhere, people are going to want to see important news events come with provenance. They're going to say, if this is so important, why wouldn't you use this tool to show me that the thing you're telling me is true? Because if you didn't, I'm skeptical."
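At a high level, provenance credentials of this kind work by binding metadata (creator, timestamps, edit history) to the exact bytes of a piece of content, typically via a cryptographic hash, so any later alteration is detectable. The sketch below is a simplified illustration of that idea only; real Content Credentials (C2PA) manifests are cryptographically signed and far richer, and the record format and function names here are hypothetical:

```python
import hashlib


def make_credential(image_bytes, creator, created_at, edits):
    """Build a minimal provenance record (illustrative, not Adobe's real format)."""
    return {
        "creator": creator,
        "created_at": created_at,
        "edits": edits,
        # Bind the credential to the exact image contents via a hash.
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }


def verify_credential(image_bytes, record):
    """Check that the content still matches the hash stored in its credential."""
    return hashlib.sha256(image_bytes).hexdigest() == record["content_hash"]


original = b"...raw image bytes..."
cred = make_credential(
    original, "Jane Photographer", "2023-10-24T09:00:00Z", ["crop", "exposure +0.3"]
)

print(verify_credential(original, cred))         # True: content untouched
print(verify_credential(original + b"x", cred))  # False: content altered
```

This is also why platforms stripping metadata matters: if the credential record is removed in transit, there is nothing left for a viewer to verify against.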

Politicians on both sides of the aisle are also entering the 2024 elections worried that distorted videos of them could easily influence voter behavior.

  • Adobe supports the FEC's efforts to hold people accountable for deceptive advertising. In addition, Rao said politicians should use provenance tools in their own ads to build trust.
  • Adobe is pushing U.S., EU and U.K. government officials to implement provenance tools in their own communications, which Rao described as the "low-hanging fruit" that will help spread the provenance technology more broadly.

Of note: Cooperation from platforms, many of which automatically strip out the type of metadata seen in provenance credentials, will be key to the success of the tool.

  • "It should be the policy of all democratic governments that if a piece of content has Content Credentials attached, those credentials should not be stripped away," according to Rao's opening remarks for the postponed hearing, shared exclusively with Axios.

Flashback: DARPA's Media Forensics program, which wrapped up in fiscal year 2021, was created to study how counterfeit pictures and videos were being generated.

  • Now the agency is asking Congress to appropriate $18 million for Semantic Forensics in fiscal year 2024, a program that builds on previous efforts by detecting, attributing and characterizing the threat level of deepfakes.

What they're saying: The public needs to know authentication tools are reliable, Connolly had planned to say Tuesday in his opening remarks.

  • Connolly was also going to say the maturity of DARPA's deepfake detection tools should be assessed, and federal efforts must be well-coordinated with the private sector and academia.