Axios Future of Cybersecurity

May 12, 2026
Happy Tuesday! Welcome back to Future of Cybersecurity.
📬 Have thoughts, feedback or scoops to share? [email protected].
🗓️ Axios' AI+ Summit will return to New York City during NY Tech Week on June 3. The lineup includes IBM CEO Arvind Krishna, congressional candidate/NY state assemblyman Alex Bores, YouTube vlogger Casey Neistat and more. Secure your spot here.
Today's newsletter is 1,991 words, a 7.5-minute read.
1 big thing: Trump's China trip collides with AI security fears
As the U.S. and China barrel ahead in their quest for AI supremacy, their race could come at the expense of global cybersecurity.
Why it matters: The U.S. and China both have an interest in preventing each other from weaponizing AI tools against them or letting rogue systems into the wild.
- But it remains to be seen whether they can hold a productive dialogue around AI security norms or trust the other to abide by them.
Driving the news: President Trump is expected to discuss AI guardrails with Chinese President Xi Jinping in Beijing this week, U.S. officials told reporters Sunday.
- "We want to take this opportunity with the leaders meeting to open up a conversation and see if we should establish a channel of communication on AI matters," one official said.
Between the lines: The U.S. is using export controls to slow China's AI progress, but U.S. officials increasingly recognize that the two countries may still need shared rules of the road for how the technology is deployed.
- Chinese models like DeepSeek are the primary competitors to U.S. models.
- Advanced AI systems are increasingly viewed in both Washington and Beijing as economic engines, intelligence tools and potential cyber weapons. That makes cooperation harder, but also more urgent.
- Sixteen business executives, including Elon Musk and Tim Cook, are reportedly joining Trump on the trip — but CEOs from leading AI firms aren't on the list.
The big picture: The visit comes as U.S. AI companies wrestle with how to safely release increasingly powerful models that are exceptionally good at finding and exploiting software vulnerabilities.
- The White House has been embroiled in a monthlong back-and-forth over how to regulate those rollouts, after more than a year of denouncing such regulation.
- Meanwhile, the White House accused China last month of running "industrial-scale" campaigns to distill and copy American AI models.
Yes, but: It's hard for either country to call for restraint around AI-enabled cyber operations when both are actively testing the offensive cyber capabilities of frontier models — potentially to use against each other.
- In November, Anthropic accused Beijing of using Claude to automate parts of a broader espionage campaign targeting about 30 global organizations.
- The National Security Agency, which is behind many U.S. espionage campaigns, is already testing out Mythos.
"The topic is important enough and dangerous enough that we should be having engagement with China on this," Melanie Hart, senior director of the Atlantic Council's Global China Hub and a former State Department official, told reporters.
- However, the Chinese government used previous meetings on AI safety held under the Biden administration primarily "to gather information about the United States, rather than to be serious about AI guardrails," Hart said.
- During those talks, Beijing often sent representatives from the foreign ministry who lacked technical AI expertise, she added.
What to watch: Don't expect a single visit to reshape U.S. AI policy overnight. Instead, Hart said, the trip is more likely to determine whether future U.S.-China discussions on AI security become substantive or remain largely symbolic.
- "From there, we then need to judge who shows up for the China side," she said. "We want to see the technical experts showing up at the table. That's how we'll know that that's actually real."
2. AI-assisted hacking is already here, Google warns
Google says it has identified what may be the first known case where cybercriminals used AI to discover and weaponize a previously unknown zero-day vulnerability.
Why it matters: Security researchers have long warned AI could one day accelerate cyberattacks. That day appears to be here.
Driving the news: Google's threat intelligence group said in a report yesterday that it found evidence of several "prominent cyber crime threat actors" partnering to identify a bug in a Python script that would let them bypass two-factor authentication on a popular open-source system.
- The groups, which Google didn't identify, then used AI-assisted code to weaponize the previously unknown vulnerability, according to the report.
- The attempt to exploit the unidentified open-source system was thwarted, and Google said it has since disclosed the flaw to the vendor.
The intrigue: Google based its assessment on characteristics common in AI-generated code, including overly explanatory comments, a made-up severity rating for the bug, and coding patterns typical of AI-generated Python scripts.
Threat level: Google warned that advanced AI models are getting better at finding subtle security weaknesses in software that conventional cybersecurity tools often fail to catch.
- In the zero-day example, the model appeared to identify a hidden trust assumption in the software's login logic that could be exploited to bypass two-factor authentication protections.
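To illustrate what a "hidden trust assumption" in login logic can look like, here's a minimal, generic sketch. This is not the flaw from Google's report (which wasn't disclosed); the function names, the client-supplied flag, and the fix are all hypothetical, chosen to show the pattern of a server trusting state it should verify itself.

```python
# Hypothetical sketch of a trust-assumption bug that bypasses 2FA.
# All names and logic here are illustrative, not from Google's report.

SESSIONS = {}

def login(session_id, username, password):
    # Step 1: password check (details elided with a placeholder).
    if password == "correct-horse":
        SESSIONS[session_id] = {"user": username, "stage": "awaiting_otp"}
        return "need_otp"
    return "denied"

def verify_otp_vulnerable(session_id, otp, otp_verified_flag):
    # BUG: the server trusts a client-supplied "already verified" flag
    # instead of checking the OTP itself. An attacker bypasses 2FA by
    # simply sending otp_verified_flag=True with any OTP value.
    session = SESSIONS.get(session_id)
    if session and otp_verified_flag:
        session["stage"] = "authenticated"
        return "ok"
    return "denied"

def verify_otp_fixed(session_id, otp, expected_otp="123456"):
    # Fix: verify the OTP server-side, bound to this session's state.
    session = SESSIONS.get(session_id)
    if session and session["stage"] == "awaiting_otp" and otp == expected_otp:
        session["stage"] = "authenticated"
        return "ok"
    return "denied"
```

Bugs like this are invisible to scanners that match known vulnerability signatures, because each line is individually valid; the flaw lives in an assumption spanning the whole flow, which is exactly the kind of reasoning-over-logic weakness the report says AI models are getting good at finding.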
What they're saying: "There's a misconception that the AI vulnerability race is imminent," John Hultquist, chief analyst at Google's Threat Intelligence Group, said in a statement. "The reality is that it's already begun."
- "For every zero-day we can trace back to AI, there are probably many more out there," he added.
The big picture: The AI-assisted exploit was one of several cases Google uncovered in recent months highlighting growing interest among both cybercriminals and nation-state hackers in using AI to supercharge attacks.
- North Korean and Chinese state actors are experimenting with AI in a variety of ways to exploit vulnerabilities, according to the report.
- In one case, researchers found APT45, a North Korean military group, using AI to test and validate thousands of exploits targeting software flaws.
- Google also uncovered malware, dubbed PromptSpy, that uses Gemini to autonomously navigate Android devices by interpreting on-screen activity and generating commands in real time.
What to watch: U.S. AI companies are increasingly grappling with how to prevent their more sophisticated AI models from being abused by cybercriminals and state-backed hackers.
3. Exclusive: JPMorgan invests $14M to fight scams
JPMorgan Chase is investing nearly $14 million in seven anti-scam organizations and initiatives aimed at stopping fraud before consumers lose money, the company first shared with Axios.
Why it matters: Major banks have spent years building fraud defenses inside their own platforms, but scams have evolved into a sprawling ecosystem spanning telecom companies, social media platforms, tech firms and financial institutions.
- Firms like JPMorgan are increasingly arguing the problem can't be solved by banks alone.
- "We don't need one hero, we need a system that works," said Mercedeh Mortazavi, JPMorgan Chase's head of financial health.
Driving the news: The funding will support organizations working on projects focused on scam prevention, consumer education and real-time fraud detection, including:
- The Aspen Institute Financial Security Program and Propel, which are piloting a real-time transaction-blocking tool designed to prevent theft from Electronic Benefit Transfer (EBT) programs.
- The BBB Institute for Marketplace Trust, which plans to turn its Scam Tracker platform into an AI-powered, real-time scam intelligence system.
- finEQUITY, which is developing a platform that screens suspicious text messages and connects users with financial coaching resources.
- San Francisco's Office of the Treasurer & Tax Collector, which is launching a citywide anti-scam initiative called StopScamsSF.
- AARP's Senior Planet program, which is planning a two-year anti-fraud education campaign focused on older adults.
- Prosperity Now and Alumbra, which are building a text-based scam detection and reporting platform for community lenders, consumers and small businesses.
- The Stop Scams Alliance and Gallup, which are preparing what the groups describe as the largest U.S. consumer survey to date on scam victimization — set to be released next month.
Between the lines: JPMorgan typically sees scams only once money is already moving, Mortazavi said.
- She added that JPMorgan wanted to use philanthropic funding to help test and scale anti-scam tools, particularly for vulnerable populations, including lower-income Americans and older adults.
What they're saying: Ryan Loftus, JPMorgan Chase's managing director and head of trust and security, said advances in AI are lowering barriers for scammers and pushing companies to work together and share threat information across sectors.
- "It's very, very rare, if not improbable, that a bad actor has activity with only one bank or only one social media company or only one tech firm or only one telecom," Loftus said.
Catch up quick: JPMorgan was a founding member of the Aspen Institute's National Task Force on Fraud and Scam Prevention, which released a national anti-scam strategy last year.
What's next: Many of the projects being funded will start rolling out their services and campaigns later this year.
4. ICYMI: AI vibe-coding apps leak sensitive data
The AI coding tools letting anyone "build" software without engineering skills are also letting medical records, financial data and Fortune 500 internal docs leak onto the open web, security researchers said Thursday.
Why it matters: AI coding tools are enabling employees without engineering or cybersecurity training to publish internal tools publicly, often without company oversight or basic access controls.
Driving the news: Israeli cybersecurity firm RedAccess told Axios it found 380,000 publicly accessible assets built with tools from Lovable, Base44, Replit and Netlify, including about 5,000 containing sensitive corporate data.
- RedAccess CEO Dor Zvi said his team found the applications while researching "shadow AI" — unauthorized employee use of AI tools — for customers.
- Researchers said privacy settings on some of the vibe-coding tools were set to make the apps publicly accessible unless users manually changed them to private.
- Many of these apps are also indexed by Google and similar search engines, making it possible for just about anyone to stumble upon them, Zvi added.
Case in point: Axios independently verified multiple exposed apps this week, including:
- An app for a shipping company detailing which vessels are expected at which ports.
- An internal app for a health company that details active clinical trials across the U.K.
- Full, unredacted customer service conversations for a cabinet supplier in the U.K.
- Internal financial information for a Brazilian bank.
Zoom in: RedAccess also found exposed apps that leaked customer data and personally identifiable information, including:
- Conversations with patients at a long-term care facility for children.
- An app a security company used to triage information about ongoing incidents affecting its customers.
- A personal app someone created to help plan a couple's vacation in Belgium, including details about their hotel and dinner reservations.
- An app for a hospital that had doctor and patient conversation summaries, patient complaints, and staff schedules.
- An app created for a school that includes recordings of lessons, as well as student-related data and the teacher's schedule.
5. Catch up quick
@ D.C.
👀 The White House is now preparing an AI security order that could omit mandatory pre-deployment model testing (Bloomberg) — and officials are currently arguing over whether the Commerce Department or intelligence agencies should oversee model evaluations. (Washington Post)
💪🏻 The Cybersecurity and Infrastructure Security Agency kicked off a new plan to get critical infrastructure operators to prepare for a world where they need to deliver essential services after being knocked offline. (CyberScoop)
🏛️ An inside look at how an alleged Chinese spy attempted to coerce information about U.S. foreign policy from a congressional aide in exchange for $10,000. (New York Times)
@ Industry
🍎 Apple rolled out end-to-end encryption for RCS messaging on iOS. (Wired)
@ Hackers and hacks
📚 Instructure, the maker of online education platform Canvas, says it has "reached an agreement" with the hackers who broke into its systems to delete any stolen data and stop extorting its customers. (TechCrunch)
🤖 The lead developer of Curl, a widely used open-source project, said that Anthropic's Mythos flagged five security vulnerabilities, three of which turned out to be false positives. (Curl)
🚰 Poland's security agency said in a recent report that it detected cyberattacks on five water treatment plants over the last two years. (TechCrunch)
6. 1 fun thing
🌳 Consider this your weekly reminder to go outside and take a walk! It's worth it.
☀️ See y'all next week!
Thanks to Dave Lawler for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Future of Cybersecurity, spread the word.
Sign up for Axios Future of Cybersecurity