Axios AI+

March 14, 2025
Thanks for opening me. I was sitting there between two pieces of spam — not my idea of a good sandwich. Today's AI+ is 1,228 words, a 4.5-minute read.
1 big thing: Malware's AI time bomb
Hackers already have the AI tools needed to create the adaptable, destructive malware that security experts fear. But as long as their basic tactics — phishing, scams and ransomware — continue to work, they have little reason to use them.
Why it matters: Adversaries can flip that switch anytime, and companies need to prepare now.
Driving the news: The looming threat of autonomous cyberattacks was a top talking point at the inaugural HumanX conference in Las Vegas this week.
- "You know that phrase, 'Keep your powder dry'? That's what attackers are doing right now," James White, chief technology officer at AI security startup CalypsoAI, told Axios, implying that bad actors are ready for battle.
The big picture: Cyber leaders have long feared generative AI would enable autonomous cyberattacks, making current security tools ineffective.
- These attacks could involve AI agents carrying out hackers' bidding or malware that adapts in real time as it spreads.
Between the lines: A few years into the generative AI revolution, experts are split on how imminent these threats are.
- Some say we're less than two years away from seeing agentic malware in nation-state cyber warfare.
- Others argue hackers have little incentive to change tactics as they continue to profit from simple scams, phishing and ransomware.
Threat level: Even though AI-powered malware has yet to flood the zone, companies can't rest easy.
- "The rate of acceleration is insane," Evan Reiser, CEO of email security company Abnormal Security, told Axios. "You don't have to be a total science fiction nerd, like me, to imagine where this can go in one year, two years."
- AI will speed up attacks, leaving defenders with little time to react.
- Meanwhile, most organizations are still behind on basic security measures, Reiser said, noting that the typical company is still working through fundamentals like setting up two-factor authentication. (Abnormal Security works with about 20% of the Fortune 500.)
Reality check: Startups selling AI security tools have an interest in hyping potential threats.
- Mandiant says it has yet to respond to an attack involving truly autonomous AI or adaptable malware.
- "I'm actually not worried about any of that right now," Charles Carmakal, CTO at Mandiant, told Axios.
- Mandiant has mostly seen adversaries using AI for basic tasks like crafting phishing emails or researching targets.
The intrigue: Companies hiring cybersecurity vendors are beginning to understand that the best way to fight AI attacks is with AI security tools, said Itai Tevet, CEO of Intezer, a startup that offers an autonomous security operations center.
- "It's dramatically different between 2023 and today," Tevet told Axios. "In the past, we needed to evangelize on why technology can do the same job. Today, all CISOs are getting asked by their board, 'How do you leverage AI?'"
Zoom in: AI agents can also help threat intelligence teams review the pile of notifications they receive about new vulnerabilities, phishing emails and other malicious activity, Steve Schmidt, chief security officer at Amazon, said in a fireside chat with Axios.
- Amazon currently doesn't let agents make decisions or act on their own, but they can review incoming threat intelligence to determine what needs to be prioritized — a triage pattern sketched below.
- "We've ended up significantly improving the lives of the security engineers, making them more efficient at what they have to do," Schmidt said.
This story is from The Future of Cybersecurity, our newly revamped newsletter. Subscribe here.
2. Firms weigh in on Trump 2.0's AI stance
Major AI companies are hoping to shape how the White House will approach AI policy as the government shifts from a risk-averse approach to one of full-throttle acceleration.
Why it matters: Trump administration officials have made it clear that beating China is a major priority, and have been knocking down or reshaping Biden-era AI policy focused on safety.
- March 15 marks the deadline for comment on the Office of Science and Technology Policy's "Development of an Artificial Intelligence Action Plan," following President Trump's executive order calling for a new AI policy plan.
Zoom in: Google and OpenAI both used part of their filings to argue that AI developers should be legally cleared to train their systems on any information that is publicly accessible, even if it's under copyright.
Catch up quick: We already summed up what Anthropic and OpenAI told the White House. Here are a few more notable filings we reviewed:
Google: The company calls for federal and local AI investment, balanced export controls, public-private partnerships with national labs, and preemption of state-level AI laws.
Microsoft: The tech giant and major partner of OpenAI calls for investment in AI infrastructure, skills-based training and access to data in a summary of its filing seen by Axios.
TechNet: In its filing, seen first by Axios, the tech lobbying group says existing legislation often "already provides a way to more effectively regulate the safe use of AI" and encourages an "incremental" approach to any new regulations.
What's next: The administration has until mid-July to develop and submit the AI "action plan" called for in Trump's EO.
Axios Tech Policy is covering every twist and turn in the White House and Congress' efforts to regulate AI. Get it in your inbox.
3. AI failed to detect critical health conditions
AI systems designed to predict the likelihood of a hospitalized patient dying largely aren't detecting worsening health conditions, a new study found.
Why it matters: Some machine learning models trained exclusively on existing patient data didn't recognize about 66% of injuries that could lead to patient death in the hospital, according to the research published in Nature's Communications Medicine journal.
State of play: Hospitals increasingly use tools that harness machine learning, a subset of AI in which systems continuously learn and adjust as they're given new data.
- A separate study recently published in Health Affairs found that about 65% of U.S. hospitals use AI-assisted predictive models, most commonly to figure out inpatient health trajectories.
Zoom in: Researchers took several machine learning models commonly cited in medical literature for predicting patient deterioration and fed them publicly available datasets covering the health metrics of ICU and cancer patients.
- The researchers then built test cases by altering some patient metrics from the initial data set and checking how the models' predicted health issues and risk scores changed — a perturbation-style check sketched after this list.
- The models for in-hospital mortality prediction could only recognize an average of 34% of patient injuries, the study found.
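To make that methodology concrete — not to reproduce the study — here's a minimal perturbation-style check in Python. The synthetic data, logistic model and "injury" values are invented stand-ins for the study's public ICU datasets and published models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for an ICU dataset: two standardized vitals and a
# mortality label. (Illustrative only — not the study's data.)
X = rng.normal(size=(500, 2))    # columns: [heart_rate_z, systolic_bp_z]
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 1).astype(int)

model = LogisticRegression().fit(X, y)

# Perturbation test in the spirit of the study: push one vital to a
# clinically alarming extreme and check whether predicted risk rises.
baseline = np.array([[0.0, 0.0]])    # average patient
injured = np.array([[0.0, -4.0]])    # blood pressure crashes

risk_before = model.predict_proba(baseline)[0, 1]
risk_after = model.predict_proba(injured)[0, 1]
print(f"risk before: {risk_before:.2f}, after injury: {risk_after:.2f}")
# A model whose risk score barely moves here is failing to
# "recognize" the injury — the behavior the study measured.
```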
What they're saying: "We are asking the models to make big decisions, and so we really need to figure out ... in what kind of situations they can perform," said Danfeng (Daphne) Yao, an author of the study and a computer science professor at Virginia Tech.
- It's extremely important for technology being used in patient care decisions to incorporate medical knowledge, Yao said.
- The study shows that "purely data-driven training alone is not sufficient," she added.
What we're watching: Large language models could be more useful in medical settings if they're trained on medical literature, but they're not yet trustworthy enough for clinical use, the study says.
4. + This
I just learned about Brazilian skydiver Luigi Cani, who in 2022 managed to scatter 100 million seeds during a jump as part of an effort to regenerate a large swath of degraded rainforest.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing it.
Sign up for Axios AI+