Illustration: Rebecca Zisser/Axios
In the first signs of a mounting threat, criminals have begun using deepfakes, so far AI-generated audio, to impersonate CEOs and steal millions from companies that are largely unprepared to combat them.
Why it matters: Nightmare scenarios abound. As deepfakes grow more sophisticated, a convincing forgery could send a company's stock plummeting (or soaring), extract money from its coffers, or ruin its reputation in a viral instant.
- Imagine a convincing fake video or audio clip of Elon Musk, say, disclosing a massive defect the day before a big Tesla launch; the company's share price would tumble.
- Symantec, a major cybersecurity company, says it has seen three successful audio attacks on private companies. In each, a company's "CEO" called a senior financial officer to request an urgent money transfer.
- Scammers were mimicking the CEOs' voices with an AI program that had been trained on hours of their speech — culled from earnings calls, YouTube videos, TED talks and the like.
- Millions of dollars were stolen from each company; their names were not revealed. The attacks were first reported by the BBC.
And in March, a Twitter account falsely claiming to belong to a Bloomberg journalist reportedly tried to coax personal information from Tesla short-sellers. Amateur sleuths said the account's profile photo bore the hallmarks of an AI-generated image.
Big picture: This threat is just beginning to emerge. Video and audio deepfakes are improving at a frightening pace and are increasingly easy to make.
- There's been an uptick in sophisticated audio attacks over the past year, says Vijay Balasubramaniyan, CEO of Pindrop, a company that protects call centers from scammers.
- But businesses aren't ready, experts tell Axios. "I don’t think corporate infrastructure is prepared for a world where you can’t trust the voice or video of your colleague anymore," says Henry Ajder of Deeptrace, a deepfakes-detection startup.
Even if companies were clamoring for defenses, few tools exist to keep harmful deepfakes at bay, says Symantec's Saurabh Shintre. Automatically spotting a deepfake remains a nearly insurmountable challenge, and hurdles still stand in the way of a promising alternative: creating a digital breadcrumb trail that verifies unaltered media.
- Pindrop monitors for audio attacks like altered voices on customer service lines.
- Symantec and ZeroFOX, another cybersecurity company, say they are developing technology to detect audio fakes.
What's out there already isn't cheap.
- New Knowledge, a firm that defends companies from disinformation, says its services can run from $50,000 to "a couple million" a year.
- Just monitoring the internet for potential fakes comes at "a substantial cost," says Matt Price of ZeroFOX. "And that's not even talking about the detection piece, which will probably be fairly expensive."
As a result, businesses are largely defenseless for now, leaving an opening for a well-timed deepfake to drop like a bomb.
- "If you're waiting for it to happen, you're already too late," New Knowledge COO Ryan Fox tells Axios.
Go deeper: Companies take the battle to online mobs