How AI is helping scammers target victims in "sextortion" schemes
Rapidly advancing AI technologies are making it easier for scammers to extort victims, including children, by doctoring innocent photos into fake pornographic content, experts and police say.
Why it matters: The warnings coincide with a general "explosion" of "sextortion" schemes targeting children and teens that have been linked to more than a dozen suicides, according to the FBI.
Driving the news: The National Center for Missing & Exploited Children has recently received reports of manipulated images of victims being shared on social media and other platforms, says John Shehan, a senior vice president at the organization.
- "Right now, it can feel a bit like the Wild West," Shehan told Axios. "These technologies could spiral very quickly out of control."
How it works: Typical sextortion schemes involve scammers coercing victims into sending explicit images, then demanding payment to keep the images private or delete them from the web.
- But with AI, malicious actors can pull benign photos or videos from social media and create explicit content using open-source image-generation tools.
- So-called "deepfakes" and the threats they pose have been around for years, but the tools to create them have recently become extremely powerful and more user-friendly, said John Wilson, a senior fellow at cybersecurity firm Fortra.
The big picture: The FBI said earlier this month that it has received reports from victims — including minors — that innocuous images of them had been altered using AI tools to create "true-to-life" explicit content, then shared on social media platforms or porn sites.
- "Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the internet," the FBI said.
- Last year, the FBI received 7,000 reports of financial sextortion against minors that resulted in at least 3,000 victims — primarily boys — according to a December public safety alert.
Background: Equality Now, which promotes the human rights of women and girls, said in 2021 that laws in many countries don't address sexually abusive deepfakes — particularly against adults — or artificially generated images of child sexual exploitation.
- Amanda Manyame, a digital law and rights consultant for Equality Now, told Axios that laws in general have lagged significantly behind the rapid development of AI technologies.
- "The question becomes, 'How do we develop a law now that's going to protect children 10 years from now?'" Manyame said.
The bottom line: Shehan, the senior VP at the National Center for Missing & Exploited Children, said these new technologies should be developed differently.
- "With tools like this, there needs to be a bit of a pause so that they're created with safety by design from the onset — not after the fact."