May 10, 2024 - Technology

AI makes it easier for anyone to become a cybercriminal, top official says

Photo illustration of CISA chief Jen Easterly beside an image of a lock made out of binary code (Photo illustration: Sarah Grillo/Axios. Photo courtesy CISA)

Generative AI is not just teaching cyber bad guys new tricks — it's also making it easier for anyone to become a bad guy, said Cybersecurity and Infrastructure Security Agency chief Jen Easterly.

Why it matters: Cybercriminals with AI at their disposal will be able to do more of everything, from phishing and spam to blackmail and terrorism to misinformation campaigns and election sabotage.

  • "I think it'll make people who are less sophisticated actually better at doing some of the bad things that they want to do," Easterly told Axios in an interview on the sidelines of the RSA Conference Tuesday.
  • "AI will exacerbate the threats of cyberattacks — more sophisticated spear phishing, voice cloning, deepfakes, foreign malign influence and disinformation," said Easterly.

Context: The fast-moving nature of AI adds fresh layers of risk and uncertainty.

  • "I look at AI: how fast it's moving, how unpredictable it is, how powerful it is," Easterly said. "A powerful tool will create a powerful weapon; we can just sort of make that assumption."

The big picture: The organization Easterly runs — one of the newest federal agencies — doesn't have its own regulatory authority over private businesses.

  • Given that, Easterly's approach has been to work with businesses inside and outside the tech industry to improve their cybersecurity practices.

Driving the news: This past week, CISA unveiled a "secure by design" pledge with dozens of tech companies, including Microsoft, Cisco, IBM, Scale AI and others.

  • The pledge incorporates a variety of best practices around boosting the strength of default passwords, adopting multifactor authentication and reducing entire classes of vulnerabilities.
  • Although the pledge itself isn't binding, Easterly noted that it includes reporting requirements and that simply holding companies accountable can be a positive force. "Yes, it's voluntary, but there is virtue in radical transparency."

Catch up quick: Before assuming her role leading CISA in 2021, Easterly served in the military, worked in counterterrorism during the Obama administration, and then was a top cybersecurity executive at Morgan Stanley.

  • CISA, part of the Department of Homeland Security, has taken on a broad cybersecurity role assigned by Congress in 2018. Its first director, Christopher Krebs, was fired by former President Trump following Trump's loss in the 2020 election, after Krebs affirmed the election's integrity.

Zoom in: Easterly and her colleagues have spent a lot of time with election officials preparing for various cyber threats, including those fueled by AI.

  • For 2024, Easterly said she feels pretty good about the ability of the election apparatus itself to withstand any attacks.
  • "Election infrastructure is more secure than ever before," Easterly said. "[AI] won't fundamentally introduce new threats into this election."
  • However, Easterly said she is concerned with how generative AI could supercharge existing efforts to sow distrust.
  • "We know that adversaries like Russia, like China, like Iran, and others, are intent on interfering, manipulating, influencing our elections to undermine American confidence in the integrity of those elections and to stoke even more partisan discord," she said. "And we know that those efforts will be exacerbated by generative AI capabilities."

Between the lines: Complicating things is the lack of global rules over what does and doesn't constitute an act of war.

  • "We have norms, right? But at the end of the day, norms are for good guys," Easterly said. "We know that Chinese cyber actors are burrowing into our civilian critical infrastructure to launch disruptive and destructive attacks."

Easterly is concerned that new generative AI tools are joining an already-fraught security landscape.

  • For four decades, we've seen what happens when "you have an internet full of malware, software full of vulnerabilities and social media full of disinformation," Easterly said.

Yes, but: AI may also benefit defenders if properly harnessed, including by finding vulnerabilities before software is released and by identifying new techniques to protect older systems still in use.

  • "AI could be powerful to help us deal with legacy technology, which is the scourge of the security community," Easterly said.
  • "I genuinely am an optimist," she said. "My journey started when I was a lieutenant colonel in the army in Iraq and we were using technology to be able to to help the troops on the ground to locate bomb makers' technology. So I've seen the power of technology to save lives."

What's next: Just improving existing practices around patching, secure passwords and other security hygiene is probably the best defense against attacks, AI-infused or not, she said.
