Aug 15, 2019

Why the deepfakes threat is shallow

Illustration: Aïda Amer/Axios

Despite the sharp alarms being sounded over deepfakes — uncannily realistic AI-generated videos showing real people doing and saying fictional things — security experts believe that the videos ultimately don't offer propagandists much advantage compared to the simpler forms of disinformation they are likely to use.

Why it matters: It’s easy to see how a viral video that appears to show, say, the U.S. president declaring war would cause panic — until, of course, the video was debunked. But deepfakes are not an efficient tool for a long-term disinformation campaign.

Deepfakes are detectable. They may fool human eyes, but not machine analysis. In fact, ZeroFOX, a leading online reputation security firm, announced last week that it would begin offering a proactive deepfake detection service.

  • “It’s not like you’ll never be able to trust audio and video again,” said Matt Price, principal research engineer at ZeroFOX.
  • There are a number of ways to detect AI-generated video, from digital artifacts in the audio and video to misaligned shadows and lighting to human anomalies that a machine can measure, like eye movement, blink rate and even heart rate (a simple blink-rate sketch follows this list).
  • Price noted that current detection techniques likely won't be nimble enough for a network the size of YouTube to screen every video, meaning users would likely see — and spread — a fake before it was debunked.
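
To make one of those signals concrete, here is a minimal sketch of blink-rate analysis, assuming per-frame eye landmarks have already been extracted by a separate face-landmark detector; the eye-aspect-ratio heuristic and the 0.2 threshold are illustrative choices, not ZeroFOX's actual method.

```python
import numpy as np

# Eye aspect ratio (EAR): vertical eye-landmark distances divided by the
# horizontal distance. The value collapses when the eyelid closes, so
# dips below a threshold can be counted as blinks.
def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmark (x, y) points for one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series, fps, threshold=0.2):
    """Count downward EAR threshold crossings across a frame sequence."""
    blinks, eye_closed = 0, False
    for ear in ear_series:
        if ear < threshold and not eye_closed:
            blinks, eye_closed = blinks + 1, True
        elif ear >= threshold:
            eye_closed = False
    minutes = len(ear_series) / (fps * 60.0)
    return blinks / minutes if minutes else 0.0

# Humans typically blink roughly 15-20 times a minute; a video subject
# who blinks far less often is one possible red flag.
```

In a real pipeline the landmarks would come from a library such as dlib or MediaPipe, and because newer generators have largely learned to blink, production detectors combine many signals like this rather than relying on any single one.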

But, but, but: If we have learned anything from the manipulated Nancy Pelosi video and years of work from conservative provocateur James O’Keefe, it's this: A lot of people will go on believing manipulated content rather than demonstrable truth if the manipulation brings them comfort. It doesn’t take high-tech lying to do that.

The intrigue: As Camille François, chief innovation officer at Graphika, a firm used by the Senate Intelligence Committee to analyze Russian disinformation on social media, told Codebook, “When I consider the problem, I don’t worry about deepfakes first.”

  • She added, “There are really sophisticated disinformation campaigns run by threat actors with a lot of money, and they don’t do fake stuff — it’s not efficient. They steal content that’s divisive or repurpose other content.”
  • Or as Darren L. Linvill, a Clemson University researcher on Russian social media disinformation, put it, deepfakes will be “less of a problem than funny memes.”
  • “A lot of research shows fake news is not the problem many people think it is," he said. "[The Internet Research Agency, a Russian social media manipulation outfit], for instance, barely employed what you could truly call ‘fake news’ after early 2015."

When disinformation groups do use fake media in their campaigns, it usually takes the form of fake images presented in a misleading context — so-called "shallow fakes." François uses the example of denying the reality of a chemical weapons attack by tweeting a photo of the same area that predates the attack.

  • "Shallow fakes" are cheaper, faster, require no technical expertise and can’t be disproven by signals analysis.

The bottom line: Deepfakes take advantage of human vulnerabilities that can be exploited much more efficiently by other means.

  • That means the disinformation problem won't be solved through technology or policy alone.
  • “Nations that have successfully built resilience to these problems have included digital literacy elements to better protect their populations,” said Peter Singer, co-author of "LikeWar," a book on social media disinformation.

Go deeper

U.S. laws don't cover campaign disinformation

A now-defunct loudspeaker system set up to bombard North Korea with South Korean messaging. Photo: Chung Sung-Jun/Getty Images

The international industry of disinformation-for-hire services has already reared its head in Western politics, and it's growing fast.

The big picture: There is no U.S. law that prevents candidates, parties or political groups from launching their own disinformation campaigns, either in-house or through a contractor, so long as foreign money isn't involved. It's up to individual candidates to decide their tolerance for the practice.

Go deeper
Aug 22, 2019

New fake-news worry for Instagram

Illustration: Aïda Amer/Axios

Instagram could become a new platform for sharing disinformation around the 2020 election because propagandists are relying on images and proxy accounts to create and circulate fake content, leading social intelligence experts tell Axios.

The big picture: "Disinformation is increasingly based on images as opposed to text," said Paul Barrett, the author of an NYU report that's prompted a renewed look at the problem. "Instagram is obviously well-suited for that kind of meme-based activity."

Go deeper
Sep 9, 2019

Social media reconsiders its relationship with the truth

Illustration: Aïda Amer/Axios

For years, Facebook and other social media companies have erred on the side of lenience in policing their sites — allowing most posts with false information to stay up, as long as they came from a genuine human and not a bot or a nefarious actor.

The latest: Now, the companies are considering a fundamental shift with profound social and political implications: deciding what is true and what is false.

Go deeper
Aug 21, 2019