Mar 22, 2024 - Technology

Generative AI puts GPU security in the spotlight

Illustration: Shoshana Gordon/Axios

Rapid adoption of generative AI tools is bringing renewed attention to the cybersecurity threats facing the chips and processing units powering these technologies.

Why it matters: Only a few manufacturers have chips capable of processing the large data sets that power generative AI systems — making them a ripe target for attackers.

  • If chips aren't secured properly, hackers could deploy malware, steal proprietary information and poison large language models (LLMs), experts tell Axios.

Driving the news: Nvidia unveiled cybersecurity partnerships during its annual GPU technology conference in the Bay Area this week.

The big picture: Most cyberattacks that gain national attention or lead to vast data breaches come from hackers targeting a piece of software or a flaw in a company's network, like a firewall, operating system or browser.

  • But AI technologies create a new threat: Much of the data powering LLMs flows through graphics processing units (GPUs), specialized chips that face the same kinds of security threats as other hardware.

What they're saying: "People are asking, 'Are there threats, is that even real?'" Kobi Kalif, CEO and co-founder of ReasonLabs, told Axios. "Well, there are threats, there are actually a lot of threats."

Between the lines: GPUs face cyber threats similar to those facing traditional central processing units (CPUs), experts said, and the mechanics of hacking these units are often similar to those of any other attack.

  • Much like the CPUs found in computer systems, GPUs run under an operating system or inside a piece of cloud infrastructure.
  • Hackers need only compromise that surrounding infrastructure to access the data processed in a GPU, Ofir Israel, vice president of threat prevention at Check Point Software Technologies, told Axios.

Zoom in: Security threats against GPUs can be broken into four categories, Kalif said.

  • Malware attacks — including "cryptojacking" attacks, in which a hacker can siphon off the processing power in a CPU or GPU to mine cryptocurrencies.
  • Side-channel attacks, in which hackers exploit a flaw in how a GPU transmits and processes data or is implemented in a device to steal information.
  • Firmware vulnerabilities, or security flaws that give hackers access to the firmware controlling hardware devices.
  • And supply chain attacks, in which an attacker compromises the GPU before it reaches end users, hoping to steal their information or gain control of their systems.

Threat level: Now, as generative AI becomes more popular, GPUs face a bigger risk of hackers tampering with an LLM's training data through so-called data-poisoning attacks, Israel said.
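The idea behind data poisoning can be sketched with a toy example. This is a hypothetical, minimal illustration (not any real attack or any vendor's system): flipping a small fraction of training labels shifts the model the data produces — here, a simple nearest-centroid classifier whose decision boundary moves after poisoning.

```python
# Toy data-poisoning sketch (hypothetical illustration only).
# An attacker flips some training labels; the "model" learned from
# the poisoned data changes as a result.
import random

random.seed(0)

# Synthetic 1-D training data: class 0 clusters near 0.0, class 1 near 10.0.
data = [(random.gauss(0.0, 1.0), 0) for _ in range(100)] + \
       [(random.gauss(10.0, 1.0), 1) for _ in range(100)]

def centroids(samples):
    """Mean feature value per class — all a nearest-centroid model learns."""
    sums, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
    for x, y in samples:
        sums[y] += x
        counts[y] += 1
    return {c: sums[c] / counts[c] for c in sums}

def boundary(model):
    """Decision boundary: the midpoint between the two class centroids."""
    return (model[0] + model[1]) / 2

clean = centroids(data)

# Attacker flips ~20% of class-0 labels to class 1.
poisoned = [(x, 1 if (y == 0 and random.random() < 0.2) else y)
            for x, y in data]
dirty = centroids(poisoned)

print(f"clean boundary:    {boundary(clean):.2f}")   # near 5.0
print(f"poisoned boundary: {boundary(dirty):.2f}")   # pulled toward class 0
```

Real LLM poisoning targets far larger training corpora, but the mechanism is the same: corrupt the inputs and the learned behavior shifts.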

Yes, but: While security researchers have documented several security flaws in GPUs, there aren't many reports of successful attacks on GPUs specifically.

  • The only recent example appears to be a high-severity flaw in Arm's Mali GPU kernel driver that was discovered in July. The Cybersecurity and Infrastructure Security Agency added the flaw to its list of known exploited vulnerabilities.

The intrigue: Defending a GPU requires a different strategy than defending CPUs and other software.

  • Applying a basic security update now requires more speed and agility, Israel said. Customers are paying so much of a premium to access GPUs that they can't afford to have their products go offline due to a system update.
  • "Even a 2% decrease in their functionality costs the cloud service provider or the customer a lot," he said.

Zoom in: Startups are already cropping up to redesign AI chips so they're both safer from attacks and more efficient.

  • AI chip startup d-Matrix's chip, for instance, is designed so that if a hacker breaks in, they won't be able to access everything processed on a silicon chip — only the data that exists in a specific partition, CEO Sid Sheth told Axios.
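The partitioning idea Sheth describes can be illustrated with a short sketch. This is a hypothetical model of the general concept, not d-Matrix's actual design: each workload gets its own partition, and every read is checked against the requester's partition ID, so a compromised workload can only see its own data.

```python
# Hypothetical partition-isolation sketch (not any vendor's real design).
class PartitionedMemory:
    def __init__(self):
        self._partitions = {}  # partition_id -> {address: value}

    def write(self, partition_id, address, value):
        self._partitions.setdefault(partition_id, {})[address] = value

    def read(self, requester_id, partition_id, address):
        # Hardware-style check: a requester may only read its own partition.
        if requester_id != partition_id:
            raise PermissionError("cross-partition access blocked")
        return self._partitions[partition_id][address]

mem = PartitionedMemory()
mem.write("tenant_a", 0x10, "model weights")
mem.write("tenant_b", 0x10, "user prompts")

print(mem.read("tenant_a", "tenant_a", 0x10))  # prints "model weights"
try:
    mem.read("tenant_a", "tenant_b", 0x10)     # attacker inside tenant_a
except PermissionError as e:
    print(e)                                    # prints "cross-partition access blocked"
```

An attacker who breaks into one partition hits the access check rather than the whole chip's data, which is the containment property described above.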

What we're watching: So far, early conversations about AI security have focused heavily on model manipulations and safety risks.

  • Expect to hear more about hardware and chip security as hackers and cyber defenders learn more about what is — and isn't — possible.