
Illustration: Aïda Amer/Axios
Despite the sharp alarms being sounded over deepfakes — uncannily realistic AI-generated videos showing real people doing and saying fictional things — security experts believe the videos ultimately don't offer propagandists much advantage over the simpler forms of disinformation they are likely to use.
Why it matters: It’s easy to see how a viral video that appears to show, say, the U.S. president declaring war would cause panic — until, of course, the video was debunked. But deepfakes are not an efficient tool for long-term disinformation campaigns.
Deepfakes are detectable. They may fool humans, but they can still be spotted by machines. In fact, ZeroFOX, a leading online reputation security firm, announced last week that it would begin offering a proactive deepfake detection service.
- “It’s not like you’ll never be able to trust audio and video again,” said Matt Price, principal research engineer at ZeroFOX.
- There are a number of ways to detect AI-generated video, ranging from digital artifacts in the audio and video to misaligned shadows and lighting to human anomalies a machine can spot, like eye movement, blink rate and even heart rate (a simplified blink-rate sketch follows this list).
- Price noted that current detection techniques likely won't be nimble enough for a network the size of YouTube to screen every video, meaning users would likely see — and spread — a fake before it was debunked.
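To make the blink-rate signal concrete, here is a minimal, hypothetical sketch — not ZeroFOX's method — of how a detector might flag a video whose subject blinks anomalously rarely. It assumes per-frame eye landmarks (six points per eye, a common face-landmark layout) have already been extracted by a face-tracking library; the thresholds and the flag_low_blink_rate helper are illustrative assumptions.

```python
# Hypothetical sketch: flag videos whose subject blinks anomalously rarely.
# Assumes six eye landmarks per frame have already been extracted elsewhere;
# the thresholds below are illustrative, not tuned values.
from typing import List, Sequence
import math


def eye_aspect_ratio(eye: Sequence[Sequence[float]]) -> float:
    """Ratio of eye height to width; it drops sharply when the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)


def count_blinks(ear_per_frame: List[float], closed_thresh: float = 0.21) -> int:
    """Count open-to-closed transitions across the frame sequence."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_thresh and not closed:
            blinks += 1
            closed = True
        elif ear >= closed_thresh:
            closed = False
    return blinks


def flag_low_blink_rate(ear_per_frame: List[float], fps: float,
                        min_blinks_per_minute: float = 5.0) -> bool:
    """Return True if the subject blinks far less often than people normally do."""
    minutes = len(ear_per_frame) / fps / 60.0
    if minutes == 0:
        return False
    rate = count_blinks(ear_per_frame) / minutes
    return rate < min_blinks_per_minute
```

In practice, a detection service would combine many such signals — lighting, compression artifacts, audio glitches — rather than rely on any single heuristic like this one.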
But, but, but: If we have learned anything from the manipulated Nancy Pelosi video and years of work from conservative provocateur James O’Keefe, it's this: A lot of people will go on believing manipulative content rather than demonstrable truth if the manipulation brings them comfort. It doesn’t take high-tech lying to do that.
The intrigue: As Camille François, chief innovation officer at Graphika, a firm used by the Senate Intelligence Committee to analyze Russian disinformation on social media, told Codebook, “When I consider the problem, I don’t worry about deepfakes first.”
- She added, “There are really sophisticated disinformation campaigns run by threat actors with a lot of money, and they don’t do fake stuff — it’s not efficient. They steal content that’s divisive or repurpose other content.”
- Or as Darren L. Linvill, a Clemson University researcher on Russian social media disinformation, put it, deepfakes will be “less of a problem than funny memes.”
- “A lot of research shows fake news is not the problem many people think it is," he said. "[The Internet Research Agency, a Russian social media manipulation outfit], for instance, barely employed what you could truly call ‘fake news’ after early 2015."
When disinformation groups do use fake media in their campaigns, it usually takes the form of genuine images presented in a misleading context — so-called "shallow fakes." François cites the example of denying that a chemical weapons attack happened by tweeting a photo of the same area taken before the attack.
- "Shallow fakes" are cheaper, faster, require no technical expertise and can’t be disproven by signals analysis.
The bottom line: Deepfakes take advantage of human vulnerabilities that can be exploited much more efficiently by other means.
- That means the disinformation problem won't be solved through technology or policy alone.
- “Nations that have successfully built resilience to these problems have included digital literacy elements to better protect their populations,” said Peter Singer, co-author of "LikeWar," a book on social media disinformation.