AI scams create new risks for police

Illustration: Sarah Grillo/Axios
A call convinced a Lawrence, Kansas, woman that her mother had been taken hostage, prompting police to track the call and pull over a car — only to learn it was all an AI scam.
Why it matters: AI-powered fraud is growing faster than police training and laws can keep pace, futurists tell Axios.
Zoom in: The Lawrence woman called 911 last month after a caller using her mother's phone number claimed her mother was being threatened with a gun, according to the City of Lawrence.
- She told dispatchers she heard a voice in the background that sounded like her mother's. Officers tracked the phone to her mother's workplace, then saw the location move, triggering a high-risk vehicle stop.
- Police later confirmed her mother was safe, the driver of the car was not involved and the call was a scam.
- Officials say many similar scams originate overseas, making them nearly impossible to trace.
How it works: Lawrence police say scammers pull audio from public social media accounts, websites or voicemail greetings and feed it into an AI tool that learns a person's speech patterns, accent and mannerisms to create a realistic voice clone.
- Scammers then use caller-ID spoofing to make the call look legitimate, play the cloned voice and threaten the person on the other end of the line.
- They typically demand immediate payments through wire transfers, gift cards or cryptocurrency.
What they're saying: Kansas City Police Department public information officer Jake Becchina tells Axios the department hasn't seen many reported cases because victims rarely know whether the voice on the phone belongs to a real person or an AI.
- Becchina says scammers can sound convincing enough that people don't question it.
- "All of these scams could be using AI technology, but we haven't received any reports of the victim knowing it was AI-generated," he says.
Zoom out: Nationally, AI is fueling a surge in deepfake fraud, automated hacks and synthetic-identity crimes, Axios reports.
- A deepfake attack occurred every five minutes globally in 2024, while digital document forgeries jumped 244% year-over-year, according to the Entrust Cybersecurity Institute.
The bottom line: AI isn't just accelerating fraud — it's blurring what's real in ways that can pull officers into dangerous situations before anyone knows the threat is fake.
