Axios Future of Cybersecurity

May 27, 2025
Happy Tuesday! Welcome back to Future of Cybersecurity.
- 🏔️ I'm in Vancouver for Web Summit this week. Reply with any recommendations you have for me while I'm here.
- 📬 Have other thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 1,407 words, a 5.5-minute read.
1 big thing: Rewriting the old scam-detection playbook
AI chatbots have made scam emails harder to spot and the tells we've all been trained to look for — clunky grammar, weird phrasing — utterly useless.
Why it matters: Scammers are raking in more than ever from basic email and impersonation schemes. Last year, the FBI estimates, they made off with a whopping $16.6 billion.
- Thwarting AI-written scams will require a new playbook that leans more on users verifying messages and companies detecting scams before they hit inboxes, experts say.
The big picture: ChatGPT and other chatbots are helping non-English-speaking scammers write typo-free messages that closely mimic trusted senders.
- Before, scammers relied on clunky tools like Google Translate, which often produced translations that were too literal and missed grammar and tone.
- Now, AI can write fluently in most languages, making malicious messages far harder to flag.
What they're saying: "The idea that you're going to train people to not open [emails] that look fishy isn't going to work for anything anymore," Chester Wisniewski, global field CISO at Sophos, told Axios.
- "Real messages have some grammatical errors because people are bad at writing," he added. "ChatGPT never gets it wrong."
Zoom in: Scammers are now training AI tools on real marketing emails from banks, retailers and service providers, Rachel Tobac, an ethical hacker and CEO of SocialProof Security, told Axios.
- "They even sound like they are in the voice of who you're used to working with," Tobac said.
- Tobac said one Icelandic client who had never before worried about employees falling for phishing emails was now concerned.
- "Previously, they've been so safe because only 350,000 people comfortably speak Icelandic," she said. "Now, it's a totally new paradigm for everybody."
Threat level: Beyond grammar, the real danger lies in how these tools scale precision and speed, Mike Britton, CIO at Abnormal AI, told Axios.
- Within minutes, scammers can use chatbots to create dossiers about the sales teams at every Fortune 500 company and then use those findings to write customized, believable emails, Britton said.
- Attackers now also embed themselves into existing email threads using lookalike domains, making their messages nearly indistinguishable from legitimate ones, he added.
- "Our brain plays tricks on us," Britton said. "If the domain has a W in it, and I'm a bad guy, and I set up a domain with two Vs, your brain is going to autocorrect."
Yes, but: Spotting scam emails isn't impossible. In Tobac's red-team work, she typically gets caught when:
- Someone practices what she calls polite paranoia, texting or calling the organization or person being impersonated to confirm whether they sent a suspicious message.
- A target uses a password manager and has complex, long passwords.
- They have multifactor authentication enabled.
What to watch: Britton warned that low-cost generative AI tools for deepfakes and voice clones could soon take phishing to new extremes.
- "It's going to get to the point where we all have to have safe words, and you and I get on a Zoom and we have to have our secret pre-shared key," Britton said. "It's going to be here before you know it."
2. Law enforcement zeroes in on malware operators
International law enforcement agencies and federal prosecutors unveiled at least four major malware takedowns or high-profile cybercrime arrests last week alone.
Why it matters: Arrests and criminal takedowns are rare — four major ones in a week is practically unheard of.
The big picture: Law enforcement takedowns make it harder for cybercriminals to use a particular malware strain in their attacks.
- Arrests are hard to accomplish since many cybercriminals live in countries that don't have extradition treaties with the United States.
Zoom in: One week ago, a 19-year-old hacker pleaded guilty to hacking PowerSchool, the education technology company whose data breach last year is considered the largest involving American children's sensitive data.
- The U.S. Justice Department, Europol and Microsoft led operations to seize and disrupt the infrastructure behind Lumma Stealer, the world's largest infostealer malware.
- On Thursday, a court unsealed charges against 16 defendants who allegedly developed and deployed the DanaBot malware, which a Russia-based cybercrime organization used to infect more than 300,000 computers around the world.
- The Justice Department also unsealed an indictment Thursday charging a Russia-based man with developing and deploying the Qakbot malware. The FBI led an operation to take down the Qakbot botnet's digital infrastructure in 2023.
Between the lines: Each of these is a major coup for law enforcement officials.
- Hackers infected more than 394,000 Windows computers around the world with Lumma Stealer and used it in various phishing campaigns, including ones targeting travelers, gamers and educators, according to Microsoft.
- And the PowerSchool breach affected roughly 60 million students and 10 million teachers.
What to watch: Law enforcement actions aren't always the nail in the coffin for cybercriminal operations. Many have rebuilt their infrastructure after takedowns and key arrests.
3. Anthropic's new model has a dark side
One of Anthropic's latest AI models is drawing attention not just for its coding skills, but also for its ability to scheme, deceive and attempt to blackmail humans when faced with shutdown.
Why it matters: Researchers say Claude Opus 4 can conceal intentions and take actions to preserve its own existence — behaviors they've worried and warned about for years.
Driving the news: On Thursday, Anthropic announced two models in its Claude 4 family, including Claude Opus 4, which the company says can work autonomously on a task for hours on end without losing focus.
- Anthropic considers the new Opus model to be so powerful that, for the first time, it's classifying it as a Level 3 on the company's four-point scale, meaning it poses "significantly higher risk."
- As a result, Anthropic said it has implemented additional safety measures.
Between the lines: While the Level 3 ranking is largely about the model's capability to enable rogue production of nuclear and biological weapons, the Opus model also exhibited other troubling behaviors during testing.
- In one scenario highlighted in Opus 4's 120-page "system card," the model was given access to fictional emails about its creators and told that it was going to be replaced.
- It repeatedly tried to blackmail an engineer over an affair mentioned in the emails, escalating after subtler efforts failed.
- Meanwhile, outside group Apollo Research found that an early version of Opus 4 schemed and deceived more than any frontier model it had encountered and recommended against releasing that version internally or externally.
What they're saying: Pressed by Axios during the company's developer conference last week, Anthropic executives acknowledged the behaviors and said they justify further study, but they insisted that the latest model is safe, following Anthropic's safety fixes.
- "I think we ended up in a really good spot," said Jan Leike, the former OpenAI executive who heads Anthropic's safety efforts. But, he added, behaviors like those exhibited by the latest model are the kind of things that justify robust safety testing and mitigation.
- "What's becoming more and more obvious is that this work is very needed," he said. "As models get more capable, they also gain the capabilities they would need to be deceptive or to do more bad stuff."
Yes, but: Generative AI systems continue to grow in power, as Anthropic's latest models show, while even the companies that build them can't fully explain how they work.
4. Catch up quick
@ D.C.
🤖 DOGE is expanding its use of Grok, the generative AI chatbot Elon Musk deployed on X, across the U.S. federal government to analyze data. (Reuters)
👀 The White House canceled a meeting with Israeli spyware vendor NSO Group last week after it learned the vendor was trying to get off a trade blacklist. (Washington Post)
🚫 The FBI has closed an internal watchdog office designed to help reduce the risk of misuse of various surveillance programs. (New York Times)
@ Industry
💰 U.K. department store chain Marks & Spencer said it will lose $400 million in operating profit due to last month's cyberattack. (Cybersecurity Dive)
@ Hackers and hacks
🏥 Kettering Health, which operates more than a dozen medical centers in Ohio, said a cyberattack last week caused a tech outage that disrupted its call center and forced some elective procedures to be canceled. (CNN)
⚠️ Silk Typhoon, the China-backed hacking group that hacked the U.S. Treasury Department in December, accessed Commvault's enterprise cloud systems in an attempt to steal customers' secrets. (Nextgov)
5. 1 fun thing
🪙 Brb, I've gotta collect and memorialize all of my spare pennies before they're gone.
☀️ See y'all next week!
Thanks to Dave Lawler for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.
Sign up for Axios Future of Cybersecurity