Jul 19, 2019

The coming deepfakes threat to businesses

Illustration: Rebecca Zisser/Axios

In the first signs of a mounting threat, criminals are starting to use deepfakes — starting with AI-generated audio — to impersonate CEOs and steal millions from companies, which are largely unprepared to combat them.

Why it matters: Nightmare scenarios abound. As deepfakes grow more sophisticated, a convincing forgery could send a company's stock plummeting (or soaring), extract money, or ruin its reputation in a viral instant.

  • Imagine a convincing fake video or audio clip of Elon Musk, say, disclosing a massive defect the day before a big Tesla launch — the company's share price would crumple.

What's happening: For all the talk about fake videos, it's deepfake audio that has emerged as the first real threat to the private sector.

  • Symantec, a major cybersecurity company, says it has seen three successful audio attacks on private companies. In each, a company's "CEO" called a senior financial officer to request an urgent money transfer.
  • Scammers were mimicking the CEOs' voices with an AI program that had been trained on hours of their speech — culled from earnings calls, YouTube videos, TED talks and the like.
  • Millions of dollars were stolen from each of the companies, whose names were not revealed. The attacks were first reported by the BBC.

And in March, a Twitter account falsely claiming to belong to a Bloomberg journalist reportedly tried to coax personal information from Tesla short-sellers. Amateur sleuths said the account's profile photo had the hallmarks of an AI-generated image.

Big picture: This threat is just beginning to emerge. Video and audio deepfakes are improving at a frightening pace and are increasingly easy to make.

  • There's been an uptick in sophisticated audio attacks over the past year, says Vijay Balasubramaniyan, CEO of Pindrop, a company that protects call centers from scammers.
  • But businesses aren't ready, experts tell Axios. "I don’t think corporate infrastructure is prepared for a world where you can’t trust the voice or video of your colleague anymore," says Henry Ajder of Deeptrace, a deepfakes-detection startup.

Even if companies were clamoring for defenses, few tools exist to keep harmful deepfakes at bay, says Symantec's Saurabh Shintre. The challenge of automatically spotting a deepfake is almost insurmountable, and there are hurdles still ahead for a promising alternative: creating a digital breadcrumb trail for unaltered media.

  • Pindrop monitors for audio attacks like altered voices on customer service lines.
  • Symantec and ZeroFOX, another cybersecurity company, say they are developing technology to detect audio fakes.
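To make the "digital breadcrumb trail" idea concrete: one hypothetical design is a hash chain, in which each edit to a piece of media is recorded as an authenticated entry that commits to both the media's current bytes and the previous entry. The sketch below is illustrative only (the function names, key handling, and record format are invented for this example, not any vendor's actual system) and uses a shared HMAC key standing in for a real device or publisher signature.

```python
import hashlib
import hmac

# Hypothetical sketch of a provenance "breadcrumb trail" as a hash chain.
# Each entry commits to the media bytes and to the previous entry, and is
# authenticated with an HMAC key held by the capture device or editor.

def make_entry(media_bytes, prev_digest, key, note):
    """Append one authenticated step to the provenance trail."""
    payload = prev_digest + hashlib.sha256(media_bytes).digest() + note.encode()
    return {
        "note": note,
        "media_hash": hashlib.sha256(media_bytes).hexdigest(),
        "mac": hmac.new(key, payload, hashlib.sha256).hexdigest(),
        "prev": prev_digest.hex(),
    }

def verify_trail(trail, media_versions, key):
    """Recompute every link; any tampering with media or order breaks the chain."""
    prev = b"\x00" * 32
    for entry, media in zip(trail, media_versions):
        payload = prev + hashlib.sha256(media).digest() + entry["note"].encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if entry["mac"] != expected or entry["prev"] != prev.hex():
            return False
        prev = bytes.fromhex(entry["mac"])
    return True

key = b"device-secret"                    # stand-in for a real signing key
original = b"raw interview audio"
trimmed = b"raw interview audio (trimmed)"

e1 = make_entry(original, b"\x00" * 32, key, "captured")
e2 = make_entry(trimmed, bytes.fromhex(e1["mac"]), key, "trimmed silence")
trail = [e1, e2]

print(verify_trail(trail, [original, trimmed], key))           # genuine chain
print(verify_trail(trail, [b"deepfaked audio", trimmed], key))  # tampered media
```

The point of the design is the asymmetry the article describes: rather than trying to spot a fake after the fact (near-insurmountable), the chain lets anyone verify that a clip's history is intact, because a substituted or altered file no longer matches any authenticated entry.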

What's out there already isn't cheap.

  • New Knowledge, a firm that defends companies from disinformation, says its services can run from $50,000 to "a couple million" a year.
  • Just monitoring the internet for potential fakes comes at "a substantial cost," says Matt Price of ZeroFOX. "And that's not even talking about the detection piece, which will probably be fairly expensive."

As a result, businesses are largely defenseless for now, leaving an opening for a well-timed deepfake to drop like a bomb.

  • "If you're waiting for it to happen, you're already too late," New Knowledge COO Ryan Fox tells Axios.

Go deeper: Companies take the battle to online mobs

Go deeper

Why the deepfakes threat is shallow

Illustration: Aïda Amer/Axios

Despite the sharp alarms being sounded over deepfakes — uncannily realistic AI-generated videos showing real people doing and saying fictional things — security experts believe that the videos ultimately don't offer propagandists much advantage compared to the simpler forms of disinformation they are likely to use.

Why it matters: It’s easy to see how a viral video that appears to show, say, the U.S. president declaring war would cause panic — until, of course, the video was debunked. But deepfakes are not an efficient tool for long-term disinformation campaigns.

Go deeper (Aug 15, 2019)

A shaky first pass at criminalizing deepfakes

Illustration: Aïda Amer/Axios

Since Sen. Ben Sasse (R-Neb.) introduced the first short-lived bill to outlaw malicious deepfakes, a handful of members of Congress and several statehouses have taken stabs at the growing threat.

But, but, but: So far, legal and deepfake experts haven't found much to like in these initial attempts, which they say are too broad, too vague or too weak — meaning that, despite all the hoopla over the technology, we're not much closer to protecting against it.

Go deeper (Jul 27, 2019)

Misinformation haunts 2020 primaries

Illustration: Sarah Grillo/Axios

Despite broad efforts to crack down on misinformation ahead of the 2020 election, the primary season so far has been chock-full of deceptive messages and misleading information.

Why it matters: More sophisticated tactics that have emerged since 2016 threaten to derail the democratic process by further polluting online debate. And the seemingly unending influx of fakery could plant enough suspicion and cynicism to throw an otherwise legitimate election into question.

Go deeper (Aug 6, 2019)