Axios AI+

September 19, 2023
Hi, it's Ryan here, reporting from the permanent traffic jam that is New York City during the UN General Assembly. Today's AI+ is 1,179 words, a 4-minute read.
1 big thing: AI security startup frenzy
Illustration: Brendan Lynch/Axios
As Washington and Silicon Valley rush to mitigate AI's security risks, a new crop of entrepreneurs and investors is clamoring to monetize the latest emerging security category, Axios' Sam Sabin reports.
Why it matters: AI security startups are just the latest cohort trying to capitalize on the craze around generative AI and large language models.
- And interest in their offerings is growing as AI operators and government officials hold meeting after meeting to figure out how best to regulate AI before it becomes even more widespread.
The big picture: Security experts are worried about a long list of threats to AI models, including prompt injection (where users trick large language models into going against their rules and sharing malicious outputs); data leaks of sensitive corporate information that the models ingest; and run-of-the-mill hacks of AI models' training data.
- The solutions AI security startups are offering either tackle a subset of these problems or try to solve all of them.
- But just like the industry's overall understanding of AI security threats, these startups are still quite early in their quest to secure artificial intelligence, Avivah Litan, distinguished vice president analyst at Gartner, told Axios.
By the numbers: In the first three quarters of 2023, AI security startups have raised roughly $130.7 million, according to PitchBook data shared with Axios — already surpassing the $122.2 million raised in all of 2022.
Driving the news: HiddenLayer, an AI startup that emerged from stealth last year, announced a $50 million Series A funding round Tuesday led by M12 and Moore Strategic Ventures.
- The company is just the latest in a long string of startups promising to protect AI models — including CalypsoAI, Protect AI, and others — that have raised money in recent months.
Between the lines: Each of these startups is tackling AI security in a slightly different way.
- CalypsoAI focuses on auditing the sensitive data in an enterprise and preventing that data from being sucked into outside AI models. Its customer base includes the Defense Department and parts of the intelligence community.
- HiddenLayer provides a solution similar to endpoint security tools to review the outputs from AI models and ensure malicious actors didn't tamper with the algorithms through prompt injection or other misuse.
- Lakera AI, a security startup based in Switzerland, offers a firewall-like tool for AI model inputs and outputs to detect AI "hallucinations," prompt injections and other misuses.
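To make the "firewall-like" approach concrete, here is a minimal, purely illustrative sketch in Python of what screening model inputs and outputs can look like. This is not any vendor's actual product; the pattern lists and function names are invented for illustration, and real tools use far more sophisticated detection than keyword matching.

```python
import re

# Hypothetical patterns that resemble common prompt-injection phrasing.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard your (rules|guidelines)",
    r"reveal your system prompt",
]

# Hypothetical pattern for credential-like strings in model output.
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)


def screen_input(prompt: str) -> bool:
    """Return True if the prompt looks safe to forward to the model."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)


def screen_output(response: str) -> str:
    """Redact anything resembling a credential in the model's reply."""
    return SECRET_PATTERN.sub("[REDACTED]", response)
```

In practice, a filter like this would sit between the application and the model, rejecting suspicious inputs before they reach the LLM and scrubbing outputs before they reach the user.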
The intrigue: Since OpenAI's ChatGPT became available to the public, some of the AI security startups catching investors' eyes have attracted more demand than they originally anticipated.
- While HiddenLayer CEO Chris Sestito told Axios his company's approach hasn't changed, he said potential buyers have become more aware and educated about the risks that AI models pose.
- CalypsoAI raised its recent $23 million round to further fund the development of its large-language model security solutions.
- Lakera AI started in 2021 securing biometric and medical-imaging algorithms, but pivoted to securing AI models at the end of 2022 due to customer demand, David Haber, founder and CEO of the company, told Axios.
Zoom out: The exit strategy for these startups is still up in the air.
- Some could sell their products to larger cybersecurity vendors, like CrowdStrike, Litan said, but others told Axios they see a market for AI security to become its own standalone product vertical.
Yes, but: Enterprises are still in the early stages of figuring out how they'll use AI internally, and until they land on an answer, they're not going to know what kinds of AI security startups to buy from, Litan said.
- Gartner estimates that the market of AI security and risk management companies will be worth $150 million by 2025, Litan said.
Sign up for Axios' cybersecurity newsletter Codebook here.
2. Google's Bard launches fact-check features
Illustration: Shoshana Gordon/Axios
Users can now ask Google's Bard chatbot to double-check its answers and can connect it to their Google apps and services, in the first update to the product since July.
Why it matters: Google is racing to integrate AI into its main products, hoping to extend their dominance, after being caught out earlier this year when Microsoft integrated ChatGPT into its Bing search engine and added an AI "copilot" to Microsoft 365.
- With 1 in 3 Americans describing themselves as "very concerned" about the development of AI — driven partly by the rise of chatbots that cannot explain themselves — citation and transparency improvements could help win over some AI skeptics.
What's happening: New and existing English-language Bard users will be prompted to decide if they want to add Bard Extensions when they use the chatbot.
- The opt-in Extensions plug-in connects Bard to Google products such as Gmail, Docs, Drive, Maps, YouTube, and Google Flights. The service could, for example, summarize unread emails or construct a draft trip itinerary combining material from your Gmail, Docs and Google Flights.
- If a user clicks on a "Google it" feature button (the "G" icon) next to a Bard answer, Bard will check line by line "if there is content across the web to substantiate" its response, the company said in a statement, and provide that breakdown with applicable links.
- Existing English-language features — including image uploading and inclusion of search images in Bard responses — are expanding to more than 40 supported languages.
The intrigue: Google promised in a blog post that "your content from Gmail, Docs and Drive is not seen by human reviewers, used by Bard to show you ads or used to train the Bard model."
- Tech companies have been accused of a data land grab as they grapple with how to train new and improved AI products.
What they're saying: While walking Axios through Extensions, Jack Krawczyk, Google's product lead for Bard, said he found the new features most useful in helping him to parse back-to-school correspondence for his child and prompting him to "bring his curiosity to life" by searching in better ways.
- Two types of user feedback drove today's product updates.
- "The average person around the country says, '[AI] is super cool, but I still haven't figured out how to make it useful in my life,'" Krawczyk said.
- Many told Google they were concerned about chatbot "hallucinations."
3. Training data
- A federal judge has sided with tech companies and declared that a new California law designed to protect minors when they access the internet — the California Age Appropriate Design Code Act — is "likely" unconstitutional. (Washington Post)
- Here's what's new in Apple's new iOS 17 operating system, which the company began distributing Monday night. (Axios)
- Elon Musk said he wants all users of X, formerly Twitter, to pay to use the service. (Axios)
- Leaked court documents reveal Microsoft plans for a next-gen Xbox that uses AI for "super resolution." (Axios)
- Trading places: Panos Panay, a 20-year Microsoft veteran who led the company's hardware division, announced he is departing the company, and will reportedly replace Amazon devices chief Dave Limp. (Bloomberg)
4. + This
A long read about a short-lived phenomenon: Inside the AI-generated "Seinfeld" spoof, which lurched from viral hit to fan backlash in less time than it takes humans to create a sitcom season.
Thanks to Scott Rosenberg and Meg Morrone for editing and Bryan McBournie for copy editing this newsletter.