Sep 28, 2019

Revenge of the deepfake detectives

Illustration: Sarah Grillo/Axios

Tech giants, startups and academic labs are pumping out datasets and detectors in hopes of jump-starting the effort to create an automated system that can separate real videos, images and voice recordings from AI forgeries.

Why it matters: Algorithms that try to detect deepfakes lag behind the technology that creates them — a worrying imbalance given the technology's potential to stir chaos in an election or an IPO.

Driving the news: Dessa, the AI company behind the hyper-convincing fake Joe Rogan voice from earlier this summer, published a tool today for detecting deepfake audio — the kind that recently scammed a CEO out of $240,000.

  • The new detector, which Axios is reporting first, is open source, so anybody can go through the code for free to understand and potentially improve it.
  • But the company gets something out of it: The detector is built on Dessa's platform, which users must download (free of charge) in order to run it.

The big picture: There's an all-hands scramble for better detectors, which generally require large collections of high-quality deepfake examples. Researchers use those examples to train algorithms that can tell whether media was created by AI, a setup sketched in the code after the list below.

  • Yesterday, SUNY Albany deepfake expert Siwei Lyu released a dataset filled with celebrity deepfakes.
  • Earlier in the week, Google and Jigsaw — both owned by parent company Alphabet — released a large set of video deepfakes.
  • And earlier this month, Facebook, Microsoft and the Partnership on AI teamed up with academic researchers to release more deepfake videos — and offer a prize to the team that uses them to make the best detector.
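
To make that concrete, here is a minimal, hypothetical sketch of the training setup these datasets enable: a small binary classifier learns to separate labeled real examples from fakes. The feature sizes, network shape and training loop below are illustrative assumptions written in PyTorch, not any lab's actual detector.

```python
# Hypothetical sketch: train a detector as a binary classifier on a
# labeled dataset of real (0) vs. AI-generated (1) examples.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder features standing in for, e.g., flattened audio
# spectrograms or video-frame embeddings from a deepfake dataset.
features = torch.randn(1000, 128)
labels = torch.randint(0, 2, (1000,)).float()   # 0 = real, 1 = fake
loader = DataLoader(TensorDataset(features, labels),
                    batch_size=32, shuffle=True)

# A deliberately tiny detector network (illustrative only).
detector = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),                # one logit: fake vs. real
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(detector(x).squeeze(1), y)
        loss.backward()
        opt.step()
```

The quality and variety of the labeled fakes is the whole game in this setup, which is why the datasets above matter at least as much as the classifier's architecture.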

Unlike these datasets, which allow researchers to cook up their own detectors, Dessa is releasing a pre-baked system — which has advantages and risks.

  • The company felt a responsibility to release an antidote after it made the realistic Rogan voice, says Ragavan Thurairatnam, Dessa's co-founder.
  • "I think it's inevitable that malicious actors are going to move much faster than those who want to stop it," he tells Axios. The free detector is a "starting point" for people to push detection forward.

But, but, but: Thurairatnam acknowledged that an open-source detector could help a particularly determined troll create new audio fakes that fool it. That's because generative AI systems can be trained to trick a specific detector, an attack sketched in the code after the bullets below.

  • He argues that the potential for creating better detectors outweighs the risk that someone will misuse Dessa's code.
  • But Lyu of SUNY Albany says there's some reason to worry. "In principle, such code will help both but probably more for making better generators."
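
For illustration of that worry, here is a hedged sketch of the attack: an adversary freezes a published detector's weights and trains a generator whose only objective is to make its output score as "real." Every name, dimension and hyperparameter below is a placeholder assumption, using the same toy conventions as the sketch above.

```python
# Hypothetical sketch: with a detector's code and weights in hand, an
# attacker can hold it fixed and optimize a generator to fool it.
import torch
import torch.nn as nn

# Stand-in for a published, open-source detector (0 = real, 1 = fake).
detector = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
for p in detector.parameters():
    p.requires_grad_(False)          # attacker treats the detector as fixed

generator = nn.Sequential(nn.Linear(32, 128))   # noise -> fake features
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
want_real = torch.zeros(64, 1)       # target: detector says "real"

for step in range(1000):
    fakes = generator(torch.randn(64, 32))
    # Gradients flow through the frozen detector into the generator,
    # pushing the fakes toward whatever the detector calls "real."
    loss = loss_fn(detector(fakes), want_real)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

This is why Lyu hedges: the same differentiable detector that flags fakes also hands an attacker a precise training signal for beating it.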

Go deeper: Researchers struggle with containing potentially harmful AI

Go deeper

The hidden costs of AI

Illustration: Eniola Odetunde/Axios

In the most exclusive AI conferences and journals, AI systems are judged largely on their accuracy: How well do they stack up against human-level translation or vision or speech?

Yes, but: In the messy real world, even the most accurate programs can stumble and break. Considerations that matter little in the lab, like reliability or computing and environmental costs, are huge hurdles for businesses.

Oct 26, 2019

The roots of the deepfake threat

Illustration: Aïda Amer/Axios

The threat of deepfakes to elections, businesses and individuals is the result of a breakdown in the way information spreads online — a long-brewing mess that involves a decades-old law and tech companies that profit from viral lies and forgeries.

Why it matters: The problem likely will not end with better automated deepfake detection, or a high-tech method for proving where a photo or video was taken. Instead, it might require far-reaching changes to the way social media sites police themselves.

Oct 19, 2019

The deepfake threat to evidence

Illustration: Rebecca Zisser/Axios

As deepfakes become more convincing and people are increasingly aware of them, the realistic AI-generated videos, images and audio threaten to disrupt crucial evidence at the center of the legal system.

Why it matters: Leaning on key videos in a court case — like a smartphone recording of a police shooting, for example — could become more difficult if jurors are more suspicious of them by default, or if lawyers call them into question by raising the possibility that they are deepfakes.

Oct 12, 2019