Jun 13, 2023 - Technology

Generative AI is making voice scams easier to believe

Illustration: Sarah Grillo/Axios

Generative AI has already lowered the bar for cybercriminals looking to clone someone's voice and use it in their schemes.

Why it matters: Thanks to generative AI tools, cybercriminals now need as little as three seconds of audio to clone someone's voice convincingly and use it in a scam call, researchers at McAfee have found.

  • People are already losing money from these incidents: In a recent McAfee survey, 77% of victims of AI-enabled scam calls said they lost money, and more than a third of those victims lost more than $1,000.

The big picture: Scam phone calls have often taken the form of mass robocalls pretending to be a health care provider, the Internal Revenue Service or even a company offering to extend someone's auto warranty.

  • AI-enabled voice scams take these schemes one step further, using generative voice-cloning services to make the calls seem like they're coming from a loved one.

How it works: So far, these AI-enabled voice scams are building on an age-old scheme that targets family members and loved ones.

  • In the original scam, someone would call pretending to be the police and claiming that a friend or family member needs money fast to get out of legal trouble.
  • Now, with AI, the scammer can pose as someone's child or another relative directly, using a clone of the real relative's voice.
  • Legitimate voice-cloning tools let the scammer respond in real time by typing out sentences that the app speaks in the cloned voice.
  • Some scammers even go as far as to research personal information about the victim's relative to make the call more believable.

What they're saying: "It's just a very easy-to-use medium, and the attacker doesn't have to have really any expertise in artificial intelligence," Steve Grobman, chief technology officer and senior vice president at McAfee, told Axios.

Between the lines: Government officials are already seeing signs of generative AI leading to increased voice scams.

  • Federal Trade Commission chair Lina Khan said earlier this month that she's seen AI "turbocharge" fraud and scams, including voice calls.

By the numbers: In McAfee's survey, released last month, nearly half of adults (45%) said they would respond to a voice note or voicemail from a friend or loved one asking for money.

  • 48% also said they'd send money if they got a note saying a friend or loved one was having car trouble or had been in an accident.

The intrigue: The AI voice-cloning tools criminals are using also have legitimate use cases, making an outright ban on their use impossible, Grobman said.

  • Those use cases include helping people with speech impediments communicate and helping authors record the audio versions of their books, he said.

Yes, but: AI-enabled voice scams require significant effort that not every criminal will be willing to put in.

  • Telecom providers are also in the final stages of rolling out call authentication technology that should help flag even more robocalls and spam calls as they come in, John Haraburda, director of product management at data communications provider Transaction Network Services, told Axios.

Be smart: Experts suggested that people and their loved ones agree on code words to use during distress calls to confirm they're legitimate.

  • Another easy way to confirm that the person on the other end of the line is who they say they are: hang up and immediately call them back, if possible, Haraburda said.
