September 19, 2023
Happy Tuesday! Welcome back to Codebook.
- 📺 I caught up on the new season of Apple TV+'s "The Morning Show" this weekend, and to my surprise, one of the episodes follows a ransomware attack targeting the morning broadcast! 10/1o, would recommend.
- 📬 Have thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 1,556 words, a 6-minute read.
1 big thing: The burgeoning startup market capitalizing on AI security
As Washington and Silicon Valley rush to mitigate artificial intelligence's security risks, a new crop of entrepreneurs and investors is clamoring to monetize the latest emerging security category.
Why it matters: AI security startups are just the latest cohort trying to capitalize on the craze around generative AI and large-language models.
- And the interest in their offerings exists as AI operators and government officials host meeting after meeting to figure out how to best regulate AI before it becomes even more widespread.
The big picture: Security experts are worried about a long list of threats to AI models, including prompt injection (where users trick large-language models to go against their rules and share malicious outputs), data leaks of sensitive corporate information that the models ingest, and run-of-the-mill hacks of AI models' training data.
- The solutions AI security startups are offering either tackle a subset of these problems or try to solve all of them.
- But just like the industry's overall understanding of AI security threats, these startups are still quite early in their quest to secure AI, Avivah Litan, distinguished VP analyst at Gartner, told Axios.
By the numbers: Investors are increasingly jumping at the chance to pour money into the next big AI security company.
- In the first three quarters of 2023, AI security startups have raised roughly $130.7 million, according to PitchBook data shared with Axios — already surpassing the $122.2 million raised in all of 2022.
Driving the news: HiddenLayer, an AI startup that emerged from stealth last year, announced a $50 million Series A funding round Tuesday led by M12 and Moore Strategic Ventures.
- The company is just the latest in a long string of startups promising to protect AI models — including CalypsoAI, Protect AI and others — that have raised money in recent months.
Between the lines: These startups are tackling AI security in slightly different ways.
- CalypsoAI focuses on auditing the sensitive data in an enterprise and preventing that data from being sucked into outside AI models. Its customer base is largely in the U.S. government, including the Defense Department and parts of the intelligence community.
- HiddenLayer provides a solution similar to endpoint security tools to review the outputs from AI models and ensure malicious actors don't tamper with the algorithms through prompt injection or other misuse.
- Lakera AI, a security startup based in Switzerland, employs a similar idea and offers a firewall-like tool for AI model inputs and outputs to detect AI "hallucinations," prompt injections and other misuses.
The intrigue: Since OpenAI's ChatGPT became available to the public, some of the AI security startups catching investors' eyes are attracting more demand than they originally anticipated.
- While HiddenLayer CEO Chris Sestito told Axios his company's approach hasn't changed, he said potential buyers have become more aware and educated about the risks that AI models pose.
- CalypsoAI raised its recent $23 million round to further fund the development of its large-language model security solutions.
- Lakera AI started in 2021 by securing biometrics and medical imaging algorithms but pivoted at the end of 2022 to securing AI models due to customer demand, David Haber, founder and CEO of the company, told Axios.
Zoom out: The exit strategy for these startups is still up in the air.
- Some could sell their products to larger cybersecurity vendors, like CrowdStrike, Litan said.
- But others told Axios they see a market for AI security to become its own standalone product vertical, in much the same way that companies buy from standalone cloud security vendors.
Yes, but: Enterprises are still in the early stages of figuring out how they'll use AI internally, and until they land on an answer, they're not going to know what kinds of AI security startups to buy from, Litan said.
- Gartner estimates that the market of AI security and risk management companies will be worth $150 million by 2025, Litan said.
- "It's very much an influx market," she added. "There's definitely demand, it's just early."
2. Zoom in: Our biggest AI security fears
The majority of U.S. adults don't believe the benefits of artificial intelligence outweigh the risks, according to a new Mitre-Harris Poll released this morning.
By the numbers: 54% of the 2,063 adults in a Mitre-Harris Poll survey in July said they were more concerned about the risks of AI than they were excited about the potential benefits.
- At the same time, 39% of adults said they believed today's AI technologies are safe and secure — down 9 points from the previous survey in November 2022.
Why it matters: AI operators and the tech industry are eyeing new regulations and policy changes to secure their models and mitigate the security and privacy risks associated with them.
- The new survey data is some of the first to highlight the growing support for these regulatory efforts.
What they're saying: "While the public has started to benefit from new AI capabilities such as ChatGPT, we've all watched as chatbots have spread political disinformation and shared dangerous medical advice," said Douglas Robbins, vice president of engineering and prototyping at the nonprofit security research and development firm Mitre, in a statement.
- "Strengthening existing government regulation and increasing public and private investments in AI assurance can play a critical role in addressing these concerns," he added.
Between the lines: It's pretty typical for adults to be anxious about a new innovation and its potential impacts in the early days of its usage.
Details: Specifically, respondents were more concerned about AI being used in malicious cyberattacks (80%) and identity theft schemes (78%) than they were about it being used to cause "harm to disadvantaged populations" (66%) or replacing their jobs (52%).
- Roughly three-fourths of respondents were also concerned about AI technologies being used to harvest and sell their personal data.
Yes, but: Not all demographics feel the same wariness about AI technologies.
- 57% of Gen Z respondents and 62% of millennials actually said they were more excited about the potential benefits of AI than they were worried about the risks.
- And men were typically more excited than concerned about AI technologies (51%) than women (40%).
3. Mandiant CEO's tips for cyber defenders
Mandiant, one of the top incident response firms in the U.S., is hosting its mWISE threat intelligence conference in Washington this week.
- CEO Kevin Mandia — who spends a lot of time consulting governments, companies and others in the cybersecurity industry — kicked off the event with some tips for enterprises trying to defend against hackers and their changing tactics:
📞 Assume an employee is going to fall for a social engineering attack and ensure your company has basic defenses like multifactor authentication in place.
- Companies should also assume there are critical, "zero-day" vulnerabilities in their products.
- "Unless you hire a bunch of mean people that don't want to help anybody, you might fall victim to social engineering as a company," Mandia said during his keynote address Monday.
🤖 Embrace AI to help cyber defenders work faster.
- Mandia believes AI will help incident responders cut down on the number of hours they spend writing briefs for lawyers and increase the speed at which threat intelligence teams can scour the dark web for circulating vulnerability exploits.
- "All of our jobs as defenders will change very fast and frequently over the upcoming years based on this," he said.
💪🏼 Security leaders should consider organizing a tabletop exercise with senior executives and the company's board to run through their cyberattack response plan.
- Doing so allows executives and board members to practice their response plans and work out any problems before a real-life cyber incident happens, Mandia said.
- "You should absolutely do a scenario based on the worst-case scenario," he added. "It's feasible, it's possible, it could happen to us, and we really hope it doesn't."
4. Catch up quick
😬 Lina Khan, chair of the Federal Trade Commission, got caught up in the MGM Grand chaos in Las Vegas last week as the company responded to an apparent cyberattack. (Bloomberg)
🇬🇺 Guam, a South Pacific island that has become a key U.S. military outpost, has also become a testing ground for China-backed hackers. (Politico)
🕵🏻♂️ Recently obtained documents detail how a government contractor is providing agencies with social media surveillance technology to spy on protests and other events. (404 Media)
🧼 Clorox warned that last month's cyberattack is still causing production disruptions and said it's unclear when the company will return to full operations. (CNBC)
💰 Dragos, a cybersecurity firm focused on critical infrastructure systems, raised a $74 million extension to its Series D round. (Wall Street Journal)
🎰 An inside look at what it was like to gamble at MGM's hacked casinos over the weekend. (404 Media)
@ Hackers and hacks
🗃️ Microsoft AI researchers accidentally exposed a GitHub repository that was storing 38 terabytes of sensitive information, including private keys and passwords. (TechCrunch)
🌀 Okta's top security executive said five of his company's clients, including MGM and Caesars, have fallen victim to attacks from Scattered Spider and Alphv since August. (Reuters)
🪙 Billionaire Mark Cuban lost $870,000 in a crypto scam Friday night. (Decrypt)
5. 1 fun thing
☀️ See y'all on Friday!
Thanks to Scott Rosenberg and Megan Morrone for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Codebook, spread the word.