Apr 3, 2024 - Technology

AI security is a new battle between employers and workers, survey shows


Illustration: Natalie Peeples/Axios

Employees are flocking to AI apps that their employers haven't approved, setting up a security showdown in many organizations.

Why it matters: It's the latest battle between workplace IT departments seeking to lock down networks and workers who want to use their favorite tools and devices.

  • This time around, it's not about employees trying to do personal tasks on work time or escape boredom. Instead, workers are turning to generative AI apps to boost productivity and save time.

Driving the news: A survey of 1,500 North American workers — including 500 IT security professionals — conducted for cybersecurity firm 1Password found that 22% of employees admit to knowingly violating company rules on the use of generative AI.

  • About one in three employees (34%) admit to using unapproved apps and tools to be more productive.
  • 56% of employees said they did work on a personal device in the past year, and 17% exclusively used personal devices for work.
  • Workers breaking the rules use an average of five unapproved apps or tools.

The big picture: Many workplace IT security policies — from limiting app downloads to single sign-on screens — were designed with office networks and employer-owned devices in mind.

  • The COVID-19 pandemic's shift to remote work left that world largely behind.

Friction point: IT teams and other employees are at odds over convenience.

  • Just 9% of security professionals in the 1Password survey say that employee convenience is their top consideration when selecting security software, but 44% of workers want convenience prioritized.

Meanwhile: Roughly four in five security professionals said that they don't feel their organization's security protections are adequate. They're particularly worried about generative AI, with these top concerns:

  • Sensitive company data can be entered into public AI tools.
  • AI systems may have been trained with bad or malicious data.
  • Employees can fall for AI-enhanced phishing attempts.

Context: It's become common for large companies — from Apple to most of America's leading banks — to either restrict or ban use of publicly available generative AI tools at work.

Yes, but: Access to AI tools is only going to widen, and users no longer need to log into popular services like ChatGPT to use them.

What they're saying: In workplace cybersecurity, "everything's identity- or device-oriented today, but the sprawl of apps and tools is discounted," 1Password chief product officer Steve Won tells Axios.

  • "There's a reckoning that's coming. Decision-makers are gonna [have to] recognize there's no putting the genie back in the bottle," he says.