AI-generated kidnapping scams are coming, FBI warns

Illustration: Lindsey Bailey/Axios
Criminals are using AI-generated photos and videos of people's loved ones to create virtual kidnapping scams for ransom, the FBI warned Friday.
The big picture: AI-related scams, from widespread voice-cloning tools to increasingly realistic AI-generated pictures and videos, are leaving people more vulnerable to fabricated crimes as the line between real and fake blurs.
How it works: In these scams, the FBI says criminals text victims claiming to have kidnapped a loved one and demand payment, often escalating violent threats if the ransom is not paid immediately.
- When victims ask for proof, scammers send what appears to be a real photo or video of the loved one, sometimes using timed messaging features so the recipient has limited time to scrutinize it.
Yes, but: On closer inspection, many AI-generated photos and videos contain telltale inaccuracies.
- Victims should compare the images against real photos of the person, looking for missing tattoos or scars and imprecise body proportions, the FBI said.
By the numbers: A deepfake attack occurred every five minutes globally in 2024, while digital document forgeries jumped 244% year-over-year, according to the Entrust Cybersecurity Institute.
- U.S. losses from fraud that relies on generative AI are projected to reach $40 billion by 2027, according to the Deloitte Center for Financial Services.
Zoom out: Anyone targeted by a suspected AI scam should contact their loved one directly before agreeing to any terms or payments. Establishing a safe word with family or friends can also help distinguish real communications from fake ones.
- The agency notes that criminals act quickly to induce panic, so it's important to pause and question whether the kidnapper's claims are legitimate.
Go deeper: Scammers may benefit from ChatGPT's new image tool
