
Illustration: Sarah Grillo/Axios
Tech giants, startups and academic labs are pumping out datasets and detectors in hopes of jump-starting the effort to create an automated system that can separate real videos, images and voice recordings from AI forgeries.
Why it matters: Algorithms that try to detect deepfakes lag behind the technology that creates them — a worrying imbalance given the technology's potential to stir chaos in an election or an IPO.
Driving the news: Dessa, the AI company behind the hyper-convincing fake Joe Rogan voice from earlier this summer, published a tool today for detecting deepfake audio — the kind that recently scammed a CEO out of $240,000.
- The new detector, which Axios is reporting first, is open source, so anyone can inspect the code for free, understand how it works and potentially improve it.
- But the company gets something out of it: The detector is built on a Dessa platform, which users must download (at no cost) to set it up.
The big picture: There's an all-hands scramble for better detectors, which generally require a large supply of convincing deepfake examples. Researchers use those examples to train algorithms to tell whether a piece of media was created by AI (see the sketch after this list).
- Yesterday, SUNY Albany deepfake expert Siwei Lyu released a dataset filled with celebrity deepfakes.
- Earlier in the week, Google and Jigsaw, both owned by Alphabet, released a large set of video deepfakes.
- And earlier this month, Facebook, Microsoft and the Partnership on AI teamed up with academic researchers to release more deepfake videos — and offer a prize to the team that uses them to make the best detector.
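How it works: A detector of this kind is, at its core, a binary classifier trained on labeled examples of real and AI-generated media. Below is a minimal, illustrative sketch of that setup in PyTorch. It is not Dessa's code; the random tensors are hypothetical stand-ins for audio features extracted from datasets like the ones above.

```python
# Illustrative sketch, not Dessa's actual detector: train a binary
# "real vs. fake" classifier on labeled examples.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in data: 256 clips, 128 features each
# (in practice, features extracted from a labeled deepfake dataset).
features = torch.randn(256, 128)
labels = torch.randint(0, 2, (256, 1)).float()  # 1 = AI-generated, 0 = real

detector = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 1),  # output is a logit: higher means "looks fake"
)
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(detector(features), labels)
    loss.backward()
    optimizer.step()
```

A classifier like this is only as good as its training examples, which is why the datasets above matter as much as any detector code.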
Unlike these datasets, which allow researchers to cook up their own detectors, Dessa is releasing a pre-baked system — which has advantages and risks.
- The company felt a responsibility to release an antidote after it made the realistic Rogan voice, says Ragavan Thurairatnam, Dessa's co-founder.
- "I think it's inevitable that malicious actors are going to move much faster than those who want to stop it," he tells Axios. The free detector is a "starting point" for people to push detection forward.
But, but, but: Thurairatnam acknowledges that an open-source detector could help a particularly determined troll create new audio fakes that fool it. That's because generative AI systems can be trained to trick a specific detector, as the sketch after these bullets illustrates.
- He argues that the potential for creating better detectors outweighs the probability that someone will misuse Dessa's code.
- But Lyu of SUNY Albany says there's some reason to worry. "In principle, such code will help both but probably more for making better generators."
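Between the lines: The worry is concrete because the same gradient-based training works in reverse. With a detector's code in hand, an attacker can freeze it and optimize a generator until its output scores as "real." The sketch below is a hypothetical illustration of that loop, reusing the toy detector shape from above; none of it comes from Dessa's release.

```python
# Illustrative sketch of the adversarial risk: optimize a generator
# against a fixed, fully known detector so its fakes score as "real".
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for an open-source detector (same toy shape as above).
detector = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
for p in detector.parameters():
    p.requires_grad_(False)  # detector is frozen; only the generator learns

generator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 128))
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(100):
    noise = torch.randn(64, 32)
    fakes = generator(noise)
    # Push the detector's verdict on the fakes toward 0 ("real").
    loss = loss_fn(detector(fakes), torch.zeros(64, 1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This is the same dynamic that powers generative adversarial networks, and it only defeats the specific detector it was trained against, which is part of Thurairatnam's argument that the trade-off still favors releasing the code.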
Go deeper: Researchers struggle with containing potentially harmful AI