Axios AM Deep Dive

October 25, 2025
Good afternoon and welcome to a Deep Dive by Axios' cybersecurity reporter Sam Sabin. She takes us inside the upcoming wave of AI-powered hacks — and how defenders are trying to keep up.
- This newsletter, edited by Dave Lawler and copy edited by Khalid Adad, is 1,176 words, a 4½-minute read.
For more on the world of cyber threats, sign up for Sam's weekly newsletter, Axios Future of Cybersecurity.
1 big thing: ⏰ AI-powered cyberattacks are coming
U.S. companies are up against a ticking time bomb: Thanks to AI, hackers are on the verge of launching fully automated cyberattacks that are faster, smarter and more personalized than ever.
- Why it matters: Those attacks could halt production at factories, knock hospitals offline or seize control of power grids — all before anyone even realizes something's wrong.
🖼️ The big picture: Advancements in generative AI are giving hackers the ability to boost their own skill sets and automate parts of the attack chain.
- OpenAI and Anthropic have both already found evidence of nation-state adversaries and cybercriminals using their models to write code and research their attacks.
- Sandra Joyce, who leads Google's Threat Intelligence Group, tells Axios her team has seen evidence of malicious hackers attempting to use legitimate, AI-powered hacking tools in their schemes.
🔎 Zoom in: AI-powered phishing scams can now mimic how a friend or colleague writes, making it easier to trick employees into clicking malicious links.
- Voice-cloning tools are already so good that they can impersonate practically anyone you trust — and convince you to hand over sensitive passwords.
Between the lines: Nation-state hackers are going to build tools to automate everything — from spotting vulnerabilities to launching customized attacks on company networks, says Phil Venables, partner at Ballistic Ventures and former security chief at Google Cloud.
- "It's definitely going to come," Venables tells Axios. "The only question is: Is it three months? Is it six months? Is it 12 months?"
🚨 Threat level: A recent Microsoft report found that AI-automated phishing emails achieved a 54% click-through rate, compared with 12% for phishing lures that didn't use AI.
2. 🧰 Scammers' new toolbox
To convince you to send them money, scammers typically need to make you believe they're someone they're not — a family member, or perhaps a love interest.
- 📱 Why it matters: AI tools are making that a heck of a lot easier.
🔎 Zoom in: Gone are the days when ignoring calls from unknown numbers was enough to avoid scammers.
- They can now easily replicate a legitimate person's voice and likeness using low-cost tools.
- Pair that with the ability to spoof actual phone numbers, and suddenly it's much harder to know whether there's a scammer on the line.
😧 The intrigue: Apps like OpenAI's Sora also make it possible to create videos that aid scammers — such as a celebrity seeming to promote a fake investment opportunity or even a child appearing to be in danger.
🧮 By the numbers: Scam operators based in China made more than $1 billion over the last three years from text messages alone, according to the Department of Homeland Security.
💡 Experts recommend families agree on a code word they can use in conversation to verify each other's identities whenever someone asks for money.
- Another good piece of advice: Hang up and call your loved one back directly whenever you're in doubt.
3. 💥 Facing attacks
🔎 Zoom in: Hackers are increasingly targeting critical infrastructure and financial services companies, according to a survey of cyber professionals conducted by Deep Instinct, an AI-powered cybersecurity firm.
- 50% of respondents at critical infrastructure organizations said they had already faced an AI-powered attack in the last year.
4. 👀 U.S. adversaries embrace AI
Chinese, Russian, Iranian and North Korean cyber warriors are already embracing their new AI future as they experiment with ways to enhance their spying and hacking operations.
💻 The big picture: Even before generative AI, they were pretty good at hacking into U.S. systems.
- China was burrowing deep into critical infrastructure, like ports and water systems. Russia advanced its disinformation operations to mimic legitimate news sites. And North Korea was scoring jobs at nearly every Fortune 500 company.
- Suspected Chinese hackers broke into a major cybersecurity vendor just last week.
😰 Threat level: In the last six months, nation-state hackers have expedited their use of AI tools, allowing them to be "more advanced, scalable, and targeted," Microsoft recently warned.
Between the lines: Microsoft researchers laid out three AI trends that have picked up in recent months:
- 🥸 AI twinning, where disinformation operators create digital replicas of trusted news anchors to deliver state-backed propaganda.
- 🧪 Model poisoning, which focuses on deliberately feeding biased and misleading information into training data to influence AI models.
- 🗣️ Voice cloning, using generative AI tools to impersonate real individuals.
🔎 Zoom in: Each foreign adversary is also using AI to inform its hacking operations, but in different ways, incident responders tell Axios.
- 🇨🇳 China: Chinese hackers are using AI "as a side saddle" or "a buddy" to enhance their influence operations and other schemes, Google Threat Intelligence vice president Sandra Joyce says.
- 🇷🇺 Russia: Hackers have been experimenting with AI-powered malware in their attacks on Ukrainian entities as part of the ongoing war, Joyce says.
- 🇰🇵 North Korea: The regime's prolific worker fraud scam — in which North Korean workers steal American identities to get hired at major companies around the world — uses live deepfake videos during interviews, as well as chatbots to create fraudulent IDs and resumes.
- 🇮🇷 Iran: Hackers linked to the Islamic Revolutionary Guard Corps have been seen using generative AI to create malicious PDFs that are then attached to phishing emails, Sam Rubin, senior vice president at Palo Alto Networks' Unit 42 threat intelligence team, tells Axios.
5. 🦾 The case for optimism
To avoid the catastrophic future so many fear, cybersecurity leaders are making the only bet they can: that their robots can beat the attackers' robots.
🖼️ The big picture: AI can be used to cause mayhem, but the good guys can also use it to bulk up their own capabilities.
- Defenders envision a world where they can use AI to instantly comb through hundreds of threat notifications, then proactively respond to the legitimate threats in that pile of alerts.
- AI models are also proving adept at writing code that's free of security flaws and vulnerabilities.
🛑 Zoom in: Defenders are already seeing results, Wendi Whitmore, chief security intelligence officer at Palo Alto Networks, tells Axios.
- In one case, her team used automation to help a major transportation manufacturer cut its attack response time from three weeks to 19 minutes.
- "We've just got so many more layers of defense," Whitmore says. "I can talk myself into being completely optimistic about AI."
👀 What to watch: Autonomous AI-driven cybersecurity could soon help identify vulnerabilities that no human could ever find on their own, according to Jen Easterly, former head of the federal government's Cybersecurity and Infrastructure Security Agency.
- It could also spot cyber intrusions before they happen, deploy countermeasures in milliseconds — and then learn from those actions to improve for next time.
💪 "If we get that right, frankly, we can ensure that the balance tips to the defenders," Easterly says.
📱 Thanks for sharing your weekend with us! Please encourage your friends to get their own weekly issue of Axios Future of Cybersecurity. Sign up here.