Jul 22, 2018

The impending war over deepfakes

Kim Jong-Un with Elvis. Photoshop: Lazaro Gamio/Axios

Researchers are in a pitched battle against deepfakes, the artificial intelligence algorithms that create convincing fake images, audio and video, but it could take years before they invent a system that can sniff out most or all of them, experts tell Axios.

Why it matters: A fake video of a world leader making an incendiary threat could, if widely believed, set off a trade war — or a conventional one. Just as dangerous is the possibility that deepfake technology spreads to the point that people are unwilling to trust video or audio evidence.

The big picture: Publicly available software makes it easy to create sophisticated fake videos without having to understand the machine learning that powers it. Most software swaps one person’s face onto another’s body, or makes it look like someone is saying something they didn’t.

This has ignited an arms race between fakers and sleuths.

  • In one corner are academics developing face-swap tech that could be used for special-effects departments, plus myriad online pranksters and troublemakers. Doctored photos are a stock-in-trade of the internet, but as far as experts know, AI has not yet been used by state actors or political campaigns to produce deepfakes.
  • Arrayed against them are other academic researchers, plus private companies and government entities like DARPA and Los Alamos National Laboratory, all of whom have marshaled resources to try to head off deepfakes.

Facing an uphill fight, the deepfake detectives have approached the problem from numerous angles.

  • Gfycat, a gif-hosting platform, banned deepfake porn and uses a pair of tools to take down offending clips. One compares the faces in each frame of a gif to detect anomalies that could give away a fake; the other checks whether a new gif has simply pasted a new face onto a previously uploaded clip.
  • Researchers at SUNY Albany created a system that monitors blinking patterns in a video to determine whether the footage is genuine.
  • Hany Farid, a Dartmouth professor and member of DARPA's media forensics team, favors a physics-based approach that analyzes images for giveaway inconsistencies like incorrect lighting on an AI-generated face. He says non-AI, forensics-based reasoning is easier to explain to humans — like to, say, a jury.
  • Los Alamos researchers are creating a neurologically inspired system that searches for invisible tells that photos are AI-generated. They are testing for compressibility, or how much information the image actually contains. Generated images are simpler than real photos, because they reuse visual elements. The repetition is subtle enough to trick the eye, but not a specially trained algorithm.
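The Los Alamos compressibility idea can be illustrated with a toy sketch. This is not the lab's actual method (which the article doesn't detail); it just shows the underlying intuition with `zlib` as a stand-in compressor: data that reuses elements compresses far better than noise-like data, and a real photo sits closer to the noise end.

```python
import random
import zlib

def compression_ratio(raw: bytes) -> float:
    """Compressed size over raw size; lower means more internal repetition."""
    return len(zlib.compress(raw, 9)) / len(raw)

# Stand-ins for pixel data: a "real" photo is closer to noise, while a
# generated image that reuses visual elements is closer to a repeated patch.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(4096))
patch = bytes(random.randrange(256) for _ in range(64))
repetitive = patch * 64  # the same element tiled 64 times

print(compression_ratio(noisy))       # near 1.0: little redundancy
print(compression_ratio(repetitive))  # far below 1.0: highly compressible
```

A detector built on this idea would measure how much smaller an image gets under compression and flag images that are suspiciously compressible for their apparent detail.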

AI might never catch 100% of fakes, said Juston Moore, a data scientist at Los Alamos. "But even if it’s a cat-and-mouse game," he said, "I think it’s one worth playing."

What’s next: We’re years away from a comprehensive system to battle deepfakes, said Farid. It would require new technological advances, as well as answers to thorny policy questions that have already proven difficult to resolve.

Assuming the technology is worked out, here is how it could be implemented:

  • An independent website that verifies uploaded photos and videos. Verified content could be displayed in a gallery for reference.
  • A platform-wide verification system on social-media sites like Twitter, Facebook, and Reddit that checks every user-uploaded item before allowing it to post. A displayed badge could verify content.
  • A tracking system for the origin of a video, image, or audio clip. Blockchain could play a role, and a company called Truepic has raised money to use it for this purpose.
  • Watermarks could be placed on images verified as real or flagged as deepfakes. Farid said that one possibility is to add an invisible signature to images created with Google’s TensorFlow technology, which powers the most popular currently available deepfake generator.
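The article doesn't specify how an invisible signature would work. One classic technique is least-significant-bit embedding, where a few bits are hidden in the lowest bit of each pixel channel; the sketch below is a hypothetical illustration of that idea, not Farid's proposal or anything in TensorFlow.

```python
def embed(pixels: list[int], bits: list[int]) -> list[int]:
    """Hide one bit in the least significant bit of each 8-bit channel value."""
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b
    return out

def extract(pixels: list[int], n: int) -> list[int]:
    """Read the hidden bits back out of the first n channel values."""
    return [p & 1 for p in pixels[:n]]

signature = [1, 0, 1, 1, 0, 0, 1, 0]  # a made-up generator signature
image = list(range(32))               # stand-in for raw pixel channel values
marked = embed(image, signature)

assert extract(marked, len(signature)) == signature
# Each value changes by at most 1 — far too little for the eye to notice.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```

A generator that stamped every output this way would let platforms check for the signature automatically, though such marks can be destroyed by re-encoding, which is one reason robust watermarking remains an open problem.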

The big question: Will tech companies implement such protections if they might be seen as infringing on free speech, a similar conundrum faced by social networking companies policing extremist content?
