Cybersecurity 101 still applies in the AI world

Illustration: Allie Carl/Axios
In the hype cycle of AI development, one thing still rings true for security professionals: securing AI doesn't look any different from securing any other enterprise software tool.
Why it matters: AI is already amplifying and automating many of the cyber threats that businesses face on a day-to-day basis. Making the wrong investments in tools to fend off these threats could be a costly mistake.
The big picture: Many of the biggest AI-driven cyber threats seen today are amplifications of longstanding security issues.
- Hackers recently targeted the widely used AI chat agent Salesloft Drift, stealing OAuth authentication tokens and using them to log into customers' systems. The attack began when a hacker gained access to a Salesloft GitHub repository.
- In July, researchers found that chats with McDonald's AI hiring bot were exposed because the administrator's password was still the default "123456."
- Earlier this year, researchers also found that hundreds of Model Context Protocol servers, which help connect AI models to their data sources, were misconfigured — making them easy targets for cyberattacks.
Between the lines: Securing against each of these cases requires basic cybersecurity tools that companies are already investing in — such as insider threat monitoring, zero-trust frameworks and multifactor authentication (MFA).
- The problem with securing AI tools is that enterprises aren't treating them with the same level of rigor that they apply to human accounts, cloud infrastructure and enterprise software, Anton Chuvakin, senior staff consultant in Google Cloud's Office of the CISO, told Axios.
- "This problem isn't new, it's just faster," he said.
State of play: Currently, bad actors are predominantly using AI just to amplify existing tactics, such as writing phishing emails, researching targets and creating new malware strains.
- "AI is lowering the entry bar for a threat actor," said Vikram Thakur, technical director at Symantec, a division of Broadcom. "They don't need to know how to code. They don't need to know how to harvest somebody's email address from the web. They can essentially just go through a public system and make them do all the hard work."
Reality check: The tools to defend against these threats already exist — and cybersecurity vendors and startups are increasingly rolling out new tools to help defenders keep up with the onslaught.
- Last week, CrowdStrike introduced new phishing-resistant MFA to help secure both human and AI agent identities.
- Both Microsoft and Google have rolled out agents to help defenders, including agents that can help triage phishing reports and detect zero-day bugs on their networks.
- Symantec has been building tools to better predict which new threats could target a company's networks, Thakur said.
Yes, but: Advancements in the AI threat landscape are coming, and experts warn that no one really knows what AI tools will be capable of a year from now.
- Cyber veterans have warned that hackers will soon be able to tap AI tools to find new zero-days and customize their attacks for each company at scale.
- Researchers are already using LLMs to create ransomware that can fully automate attacks and are finding evidence that the models can help create polymorphic malware.
