Kim Jong-Un with Elvis. Photoshop: Lazaro Gamio/Axios

Researchers are in a pitched battle against deepfakes, the artificial intelligence algorithms that create convincing fake images, audio and video, but it could take years before they invent a system that can sniff out most or all of them, experts tell Axios.

Why it matters: A fake video of a world leader making an incendiary threat could, if widely believed, set off a trade war — or a conventional one. Just as dangerous is the possibility that deepfake technology spreads to the point that people are unwilling to trust video or audio evidence.

The big picture: Publicly available software makes it easy to create sophisticated fake videos without having to understand the machine learning that powers it. Most software swaps one person’s face onto another’s body, or makes it look like someone is saying something they didn’t.

This has ignited an arms race between fakers and sleuths.

  • In one corner are academics developing face-swap tech that could be used by special-effects departments, plus myriad online pranksters and troublemakers. Doctored photos are a stock-in-trade of the internet, but as far as experts know, AI has not yet been used by state actors or political campaigns to produce deepfakes.
  • Arrayed against them are other academic researchers, plus private companies and government entities like DARPA and Los Alamos National Laboratory, all of which have marshaled resources to try to head off deepfakes.

Facing an uphill fight, the deepfake detectives have approached the problem from numerous angles.

  • Gfycat, a gif-hosting platform, banned deepfake porn and uses a pair of tools to take down offending clips. One compares the faces in each frame of a gif to detect anomalies that could give away a fake; the other checks whether a new gif has simply pasted a new face onto a previously uploaded clip.
  • Researchers at SUNY Albany created a system that analyzes blinking patterns in a video to determine whether the footage is genuine.
  • Hany Farid, a Dartmouth professor and member of DARPA's media forensics team, favors a physics-based approach that analyzes images for giveaway inconsistencies like incorrect lighting on an AI-generated face. He says non-AI, forensics-based reasoning is easier to explain to humans — like to, say, a jury.
  • Los Alamos researchers are creating a neurologically inspired system that searches for invisible tells that photos are AI-generated. They are testing for compressibility, or how much information the image actually contains. Generated images are simpler than real photos, because they reuse visual elements. The repetition is subtle enough to trick the eye, but not a specially trained algorithm.
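The compressibility idea in the last bullet can be illustrated with a toy sketch. This is not the Los Alamos system, just a minimal demonstration of the underlying intuition: an image that reuses visual elements contains less information, so a standard compressor squeezes it into fewer bytes than a high-entropy "real" photo of the same size.

```python
import zlib
import numpy as np

def compression_ratio(pixels: np.ndarray) -> float:
    """Compressed size over raw size; lower means more internal repetition."""
    raw = pixels.tobytes()
    return len(zlib.compress(raw, level=9)) / len(raw)

rng = np.random.default_rng(0)

# Stand-in for a real photo patch: high-entropy noise, hard to compress.
real_like = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)

# Stand-in for a generated patch: a small tile repeated, highly compressible.
tile = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
generated_like = np.tile(tile, (8, 8, 1))

assert compression_ratio(generated_like) < compression_ratio(real_like)
```

In practice the repetition in AI-generated images is far subtler than literal tiling, which is why a trained detector is needed rather than an off-the-shelf compressor, but the signal it looks for is the same kind of redundancy.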

AI might never catch 100% of fakes, said Juston Moore, a data scientist at Los Alamos. "But even if it’s a cat-and-mouse game," he said, "I think it’s one worth playing."

What’s next: We’re years away from a comprehensive system to battle deepfakes, said Farid. It would require new technological advances as well as answers to thorny policy questions that have already proven extremely difficult to solve.

Assuming the technology is worked out, here is how it could be implemented:

  • An independent website that verifies uploaded photos and videos. Verified content could be displayed in a gallery for reference.
  • A platform-wide verification system on social-media sites like Twitter, Facebook, and Reddit that checks every user-uploaded item before allowing it to post. A displayed badge could verify content.
  • A tracking system for the origin of a video, image, or audio clip. Blockchain could play a role, and a company called Truepic has raised money to use it for this purpose.
  • Watermarks could be placed on images verified as real or deepfakes. Farid said that one possibility is to add an invisible signature to images created with Google’s TensorFlow technology, which powers the most popular currently available deepfake generator.
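To make the watermark idea concrete, here is one illustrative scheme: hiding a short tag in the least significant bits of an image's pixels, invisible to the eye but easy for software to read back. The tag value and the LSB approach are assumptions for demonstration only; they are not the signature mechanism Farid described for TensorFlow.

```python
import numpy as np

def embed(pixels: np.ndarray, tag: bytes) -> np.ndarray:
    """Write tag bits into the least significant bit of the first pixels."""
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = pixels.ravel().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read n_bytes back out of the least significant bits."""
    bits = pixels.ravel()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

image = np.zeros((8, 8), dtype=np.uint8)   # toy 8x8 grayscale image
tagged = embed(image, b"GEN")              # hypothetical 3-byte tag
assert extract(tagged, 3) == b"GEN"
```

A naive LSB mark like this is stripped by recompression or resizing, which is part of why robust provenance schemes lean on cryptographic signatures or blockchain-backed logs like Truepic's instead.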

The big question: Will tech companies implement such protections if doing so might be seen as infringing on free speech? It's a conundrum similar to the one social networks face in policing extremist content.
