Axios Future of Cybersecurity

October 07, 2025
Happy Tuesday! Welcome back to Future of Cybersecurity.
- 🏔️ I'm in Denver today for The Identity Underground Summit. Come say hi after my panel if you're here!
- 📬 Have thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 1,836 words, a 7-minute read.
1 big thing: AI video apps are a scammer's goldmine
New AI video apps are providing fertile ground for scammers looking to take their fraud and impersonation schemes to the next level.
Why it matters: AI-generated content is quickly blurring the lines between what's real and what's not — and scammers thrive on blurred realities.
Driving the news: OpenAI rolled out its new Sora iOS app last week, powered by the company's updated, second-generation video-creation model.
- The app stands out by letting users upload photos of themselves and others to create AI-generated videos using their likenesses — though users need the consent of anyone who will be shown in a video.
- OpenAI CEO Sam Altman said in an update Friday that the app will also give people "more granular control over generation of characters," including specifying in what scenarios their character can be used.
- People have been quick to show off fun ways they can use the tool, with some posting videos of themselves in TV ads or being arrested.
The flip side: The number of reported impersonation scams in the U.S. has skyrocketed in recent years — and that was before AI tools came into the picture.
- In 2024, Americans lost $2.95 billion to imposter scams where fraudsters pretended to be a known person or organization, according to the Federal Trade Commission.
Between the lines: AI voice scams — which have a lower barrier to entry given how advanced the technology already is — have already taken off.
- Earlier this year, scammers impersonated the voices of Secretary of State Marco Rubio, White House chief of staff Susie Wiles, and other senior officials in calls to government workers.
- Last week, a mother in Buffalo, New York, said she received a scam call in which someone pretended to be holding her son hostage and used a likeness of his voice to prove he was there.
What they're saying: "This problem is ubiquitous," Matthew Moynahan, CEO of GetReal Security, which helps customers identify deepfakes and forgeries, told Axios. "It's like air, it's going to live everywhere."
Threat level: It's easy to download and share content created using Sora outside of OpenAI's platform, and it's possible to remove the watermark indicating it's AI-generated.
- Scammers can use that capability to dupe unsuspecting people into sending money, clicking on malicious links, or making poor investment decisions, Rachel Tobac, CEO of SocialProof Security, told Axios.
- "We have to inform everyday people that we now live in a world where AI video and audio is believable," she said.
Zoom in: Tobac laid out a few scenarios where she could see Sora being abused:
- A parent could receive a video as part of an extortion scam impersonating their child.
- A threat actor hoping to keep people from voting could create a video of a long line outside a polling center or fake interviews with poll workers saying the polls are closing early.
- A nation-state could even create a fake but believable video of an attack on a major city to sow unrest and panic in the U.S.
The intrigue: Fraudsters were already impersonating company executives, and new AI video tools are only going to amplify those schemes, Rafe Pilling, director of threat intelligence at Sophos, told Axios.
- "Things have improved leaps and bounds," Pilling said. "Ultimately, [these services] will get abused, no doubt."
The other side: Meanwhile, creating realistic deepfakes with Meta AI's new tools has proven difficult because of one simple thing: They don't clone people's voices.
- In Meta AI's new "Vibes" section, every video was simply set to generic music and showed people and animals vibing to the tunes.
- Each one looked like the AI slop videos that have flooded users' Facebook and Instagram feeds for months.
Yes, but: Ben Nimmo, principal investigator on OpenAI's intelligence and investigations team, told reporters yesterday that people are using ChatGPT three times more often to identify potential scams than adversaries are using it in their scam operations.
What to watch: The world is still only at the beginning of AI development, and experts have warned that video tools will only get better at duping everyone.
- "This is the greatest unmanaged enterprise risk I have ever seen," Moynahan said. "This is an existential problem."
2. Adversaries use multiple AI tools, OpenAI warns
Foreign adversaries are increasingly using multiple AI tools to power hacking and influence operations, according to an OpenAI report released today.
Why it matters: In the cases OpenAI discovered, the adversaries typically turned to ChatGPT to help plan their schemes, then used other models to carry them out — reflecting the range of applications for AI tools in such operations.
Zoom in: OpenAI banned several accounts tied to nation-state campaigns that seemed to be using multiple AI models to improve their operations.
- A Russian-based actor that was generating content for a covert influence operation used ChatGPT to write prompts seemingly for another AI video model.
- A cluster of Chinese-language accounts used ChatGPT to research and refine phishing automation they wanted to run on China-based model DeepSeek.
- OpenAI also confirmed that an actor the company previously disrupted was the same one Anthropic recently flagged in a threat report, suggesting they were using both tools.
Between the lines: OpenAI mostly observed threat actors using ChatGPT to improve their existing tactics, rather than creating new ones, OpenAI's Ben Nimmo told reporters in a call ahead of the report's release.
- However, the multi-model approach means that investigators have "just a glimpse" at how threat actors are using a specific model, Nimmo said.
The intrigue: Nation-state hackers and scammers are also learning to hide the telltale signs of AI usage, OpenAI's research team found. One scam network asked ChatGPT to remove em dashes from its writing, for example.
The big picture: Much like the U.S. government, foreign adversaries have been exploring ways to use ChatGPT and similar tools for years.
- In its report, OpenAI said it had banned accounts that appeared to be tied to both China-based entities and Russian-speaking criminal groups for using the model to help develop malware and write phishing emails.
- The company also banned accounts linked to Chinese government entities, including some that were asking OpenAI's models to "generate work proposals for large-scale systems designed to monitor social media conversations," according to the report.
What to watch: The campaigns OpenAI identified didn't seem to be very effective, per the report. But nation-state entities are still early in their AI experiments.
3. Threat spotlight: Extorting executives for ransom
Hackers are waging a high-volume extortion campaign, pressuring executives to pay ransoms to prevent the publication of data purportedly stolen from Oracle data storage tools, cyber investigators warn.
Why it matters: Oracle says it's now investigating the hacks — suggesting some of the stolen data may be legitimate.
Driving the news: Google started warning last week that Cl0p, a cybercriminal gang known for ransomware and data extortion, has been sending extortion emails to executives across several organizations, claiming to have stolen internal documents.
- The hackers have been targeting a mix of vulnerabilities in Oracle's E-Business Suite tools, which companies use to store customer data, human resources records and other sensitive information.
- Some of those vulnerabilities were patched in July, but at least one was a zero-day flaw that wasn't patched until this weekend.
Threat level: In the emails, hackers are demanding payments to keep them from publishing or sharing the stolen internal documents.
- Incident responders at Google said the campaign started Sept. 29. It's unclear how many companies have been targeted and whether any executives have fallen for the scheme.
- "Given the broad mass 0-day exploitation that has already occurred (and the n-day exploitation that will likely continue by other actors), irrespective of when the patch is applied, organizations should examine whether they were already compromised," Charles Carmakal, CTO at Google Cloud's Mandiant unit, said on LinkedIn.
What to watch: No companies have publicly admitted they were targeted, but if they decide not to pay, Cl0p will likely start posting about its alleged victims on its dark web sites.
- CrowdStrike also warned in a blog post yesterday that its investigators could not "rule out the possibility that multiple threat actors" are targeting Oracle's tools.
4. ✈️ Sitting down with Jen Easterly
I flew out to Austin, Texas, last week to sit down with former CISA Director Jen Easterly at this year's SailPoint Navigate conference.
- Here are a few snippets from our fireside chat:
🤖 AI's impact on the threat landscape: "We're already starting to see attacks like phishing become much more hyper-personalized, much more tailored. We're starting to see much more stealthy, much more difficult-to-detect malware. It will make the defense side and the offense side more enhanced."
💻 A pitch for secure by design: "If we are real about this, the reason that we have cybersecurity is because of decades and decades of misaligned economic incentives where technology vendors have been able to build software that is fundamentally insecure and defective and full of flaws."
🦾 AI's potential for good: "I actually think the coolest thing is leading to the end of cybersecurity as we know it — fixing the core software quality that has led to the trillions and trillions of dollars of cyberattacks that businesses large and small have been suffering for the past many years."
Identity is the new target: "Long gone are the days where it was all about defending the perimeter or the boundaries. Identity is the new perimeter, and it's all about who and what has access to our networks."
5. Catch up quick
@ D.C.
👀 Some companies have pledged to keep sharing threat intelligence with the federal government even after a major liability protection law expired last week. (Politico)
📲 U.S. Immigration and Customs Enforcement plans to hire nearly 30 contractors to sift through social media posts, photos and messages to inform its deportation raids and arrests, according to contracting documents. (Wired)
@ Industry
✍🏻 Google DeepMind introduced a new AI agent that automatically detects, patches and rewrites vulnerable code. (SiliconANGLE)
📈 United Natural Foods raised its sales expectations in its most recent earnings report as it recovers from a June cyberattack. (Cybersecurity Dive)
🔍 The French government is investigating how Apple collects user recordings through its Siri assistant. (Bloomberg)
@ Hackers and hacks
Discord warned users that in a recent breach, hackers compromised the identity documents they had to submit to verify their age. (The Guardian)
⚠️ A hacking group claims it stole more than 1 billion records from Salesforce customers. (TechCrunch)
🧑🏻⚖️ A Chinese court sentenced 11 criminal gang leaders to death for their involvement in a massive scam operation in northern Myanmar. (Washington Post)
6. 1 fun thing
💿 Long-time readers knew exactly what I was going to write about here today: Taylor Swift and "The Life of a Showgirl," duh.
- 🎶 My current favs: "Opalite," "Fate of Ophelia," "Elizabeth Taylor" and "Honey."
- 📩 Hit reply to share your takes!
☀️ See y'all next week!
Thanks to Dave Lawler for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.
Sign up for Axios Future of Cybersecurity






