Internet crime claims more victims but less cash in Indiana

Illustration: Aïda Amer/Axios
Artificial intelligence has emerged as a powerful tool for cybercriminals who stole a record-setting amount of money from Americans in 2024.
Why it matters: FBI data reveals that Indiana is one of the states being hit the hardest.
Driving the news: Hoosiers filed 23,659 internet crime complaints in 2024, per the FBI Internet Crime Center's latest annual report, No. 8 in the U.S.
- Indiana was also No. 3 for complaints per capita, trailing only Alaska and Washington, D.C.
By the numbers: Indiana's 2024 complaint total is a new high for the state — a 113% increase over the 11,097 complaints filed in 2023.
- The last time Indiana saw this big of an annual increase was 2018 to 2019, when complaints increased 108% from 4,676 to 9,746.
- The previous record was set in 2020, when there were 12,786 complaints, meaning last year beat the old mark by 85%.
Yes, but: While targets are up, losses are down. Indiana residents were manipulated out of more than $125.1 million in 2024, down from a record $162.2 million in 2023.
Stunning stat: Hoosiers ages 60 and up lost the most, with reported losses of more than $37.2 million.
State of play: The increase is driven in part by the accessibility and power of AI, which lets criminals commit fraud at a much larger scale.
Reality check: Fear and embarrassment keep many victims from reporting crimes, meaning the true totals are likely even higher.
Zoom out: Nationwide losses eclipsed $16 billion in 2024, a 33% increase from 2023.
- Phishing/spoofing, extortion and personal data breaches were the three most common crimes reported.
What they're saying: "As technology continues to evolve, so do cybercriminals' tactics. Attackers are leveraging AI to craft highly convincing voice or video messages and emails to enable fraud schemes against individuals and businesses alike," FBI special agent in charge Robert Tripp said in a statement.
How it works: Here are a few examples from the FBI of how scammers are using AI.
💬 Text generators are used to craft believable messages for social engineering, spear phishing and financial fraud. This includes romance schemes, phony investments and unpaid toll scams.
- AI helps perps target victims faster and reach a wider audience.
- Tools can limit the grammatical or spelling errors that are often signs of deception.
🖼️ Image generation is often used in private communications to convince victims they are speaking to a real person.
- Fake disasters are used to elicit donations to fraudulent charities, and generated pornographic photos are used to demand payment in sextortion schemes.
🗣️ Audio, most commonly involving voice cloning, can be used to impersonate people or access voice-protected accounts.
- A common scheme is a short clip of a close relative in a crisis situation, asking for immediate financial assistance or demanding a ransom.
📽️ Videos are used to create believable depictions of trusted or public figures as a means to support investment fraud schemes.
- Clips can also be used in extortion and presented as proof of identity.
