Axios Codebook

December 17, 2024
Happy Tuesday! Welcome back to Codebook.
- 🤖 Make sure to tune in this afternoon to the Axios AI+ Summit in San Francisco. Speakers include Instagram co-founder Mike Krieger, Sierra AI co-founders Clay Bavor and Bret Taylor, and more.
- 📬 Have thoughts, feedback or scoops to share? [email protected].
Today's newsletter is 1,133 words, a 4.5-minute read.
1 big thing: Cybersecurity vendors enter their AI brain era
Trend Micro, a major cybersecurity vendor, has been quietly rolling out a new "AI brain" that gives customers the ability to automate their threat defenses.
Why it matters: For years, cybersecurity vendors have promised that AI-enabled tools would one day help companies predict attacks and automatically patch new security flaws.
- That day is now here.
The big picture: Most successful cyberattacks continue to exploit human error, such as not patching a security flaw quickly enough or failing to detect hackers posing as legitimate employees as they exfiltrate hundreds of files.
Between the lines: Most security teams are burnt out and overburdened with hundreds of notifications each day detailing new threats to their online systems.
- The AI brain powers so-called AI agents that Trend Micro's customers can use to automate both the evaluation of those notifications and the response to them.
Zoom in: Trend Micro started embedding its AI brain into its security suite in October, Rachel Jin, the company's chief enterprise platform officer, told Axios.
- The brain has read every cybersecurity industry book that's been published, digested the company's more than 35 years of blogs and internal documents about cyber defense, and trained on its global threat research.
- Trend Micro customers can decide what proprietary data the AI tools access — which helps resolve many of the privacy concerns businesses have raised about bringing AI-powered tools into their tech stacks.
The intrigue: The AI brain is what powers Trend Micro's new autonomous cybersecurity agent, which completes tasks for users without much, if any, prompting.
- While Trend Micro's previous iteration of its chatbot filled an assistant role, the new tools act more like an adviser or even "a commander" who can predict attacks and evaluate risks of oncoming security threats, Jin said.
Trend Micro foresees a few use cases for the new brain:
- It can hold onto institutional knowledge about attacks and weak points that might get lost whenever employees leave their roles.
- Autonomous agents can update data storage protocols as new privacy laws go into effect.
- Trend Micro claims the new brain can take proactive action to protect companies from known ransomware threats, which typically exploit publicly disclosed vulnerabilities.
Yes, but: Most customers still appear to use the tools as an assistant that helps them prioritize their workflows.
- Trend Micro is betting that customers will quickly build enough trust in the tools to let them carry out basic patches and similar tasks.
- "It's evolving," Jin said. "Probably not every company will go to 100% automation, but it will be closer and closer to autonomous."
Zoom out: Trend Micro is one of the first companies to roll out a cybersecurity AI agent to customers — and it surely won't be the last.
- As more cybersecurity vendors roll out their own autonomous tools, the differentiator will be the training data: What threat intelligence do they have to offer? And how well can they predict malicious hackers' next moves?
- "The key competition is knowledge competition, skill-set competition and threat intelligence competition," Jin said.
What's next: Trend Micro is exploring creating an AI-focused product package that would give customers access to a "more advanced workflow" and offset increased cloud computing costs.
- But for now, the company sees these AI enhancements as a competitive advantage that should be available to every customer, Jin said.
2. Anthropic's new weapon to detect abuse
Anthropic's new automated analysis tool provides some fresh insights into how the model operator weeds out malicious users trying to manipulate its Claude chatbot.
Why it matters: Distinguishing adversaries' queries from run-of-the-mill user inputs is the biggest challenge model operators face in their quest to identify and stop emerging threats.
Driving the news: Last week, Anthropic released details about its new Clio tool — which studies what users are asking Claude, much as Google tracks search trends.
- The tool can help Anthropic assess how everyday users are relying on Claude — and it can detect new threat actors trying to use the chatbot to do their bidding.
- Anthropic even used the tool to monitor queries about elections around the world in 2024.
Zoom in: Clio extracts "facets" from each conversation with Claude, such as metadata about the conversation topic or the number of back-and-forths someone has with the chatbot.
- Conversations that are similar are then grouped together by theme or topic, and each cluster receives a new descriptive title and summary.
- Clusters are then ranked on a hierarchy that Anthropic's human analysts can use to explore patterns and potential abuses.
- For example, a cluster that's named "generate misleading content for campaign fundraising emails" would get analysts' attention, the company wrote in a blog post.
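The Clio pipeline described above — extract facets, group similar conversations, title each cluster, then rank for analyst review — can be sketched in miniature. This is a toy illustration with hypothetical data, using simple topic matching in place of Anthropic's actual (unpublished) embedding and clustering methods; every record and function name here is invented for the example:

```python
from collections import defaultdict

# Hypothetical conversation records. The "facets" here are just a topic
# label and a turn count, standing in for Clio's richer metadata.
# Personal details would already be stripped at this stage.
conversations = [
    {"id": 1, "topic": "campaign fundraising emails", "turns": 5},
    {"id": 2, "topic": "campaign fundraising emails", "turns": 12},
    {"id": 3, "topic": "resume review", "turns": 3},
    {"id": 4, "topic": "campaign fundraising emails", "turns": 7},
]

def extract_facets(convo):
    """Step 1: pull lightweight metadata (facets) from a conversation."""
    return {"topic": convo["topic"], "turns": convo["turns"]}

def cluster_by_topic(convos):
    """Step 2: group conversations whose facets share a topic.
    (Clio groups by semantic similarity, not exact matching.)"""
    clusters = defaultdict(list)
    for c in convos:
        clusters[extract_facets(c)["topic"]].append(c["id"])
    return dict(clusters)

def rank_clusters(clusters):
    """Step 3: rank clusters (here, by size) so analysts can review
    the largest or most suspicious groupings first."""
    return sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)

clusters = cluster_by_topic(conversations)
for title, ids in rank_clusters(clusters):
    print(f"{title}: {len(ids)} conversations")
```

In the real system, a large cluster titled something like "generate misleading content for campaign fundraising emails" would surface near the top of that ranking and draw an analyst's attention.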
The intrigue: Clio anonymizes and aggregates all of the data it ingests, and it is instructed to remove any personal details from the conversations before clustering them.
Between the lines: Anthropic dubs this a "bottom-up" approach.
- Typically, trust and safety teams set up tools aimed at flagging specific keywords or predicting malicious use cases — what Anthropic calls a "top-down" approach.
- Clio was able to identify malicious use cases that Anthropic's top-down approach didn't, the company said.
What we're watching: Anthropic is hoping to see other model makers adopt similar tools to help weed out abuse on their platforms.
3. Catch up quick
@ D.C.
💥 Rep. Mike Waltz, who will be Trump's national security adviser, said over the weekend that U.S. cyber strategy needs to "start going on offense" in response to the Salt Typhoon hacks. (Politico)
🥊 Republican leaders of the House Homeland Security Committee and China Select Committee called for similar offensive measures in a new op-ed. (Fox News)
😬 Employees at the Cybersecurity and Infrastructure Security Agency are worried about what cuts the Trump administration will make at the agency, especially after it became a target of conservative criticism for saying the 2020 election was secure. (Wired)
@ Industry
🤖 Optum, a UnitedHealth Group company, left an internal AI chatbot it was developing exposed to the internet. (TechCrunch)
👀 U.S. private equity giant AE Industrial Partners has purchased Paragon, an Israeli spyware maker. (Haaretz)
💰 BlackBerry has sold endpoint security tool Cylance to Arctic Wolf for $160 million in cash. (CRN)
@ Hackers and hacks
⚠️ Hackers claim they stole sensitive data tied to hundreds of thousands of people after breaking into an online portal where Rhode Island residents apply for government assistance. (New York Times)
🫠 An ongoing hacking campaign has been targeting security professionals over the last year via open-source packages, successfully stealing credentials belonging to at least 390,000 people so far. (Ars Technica)
❤️ Interpol will no longer use the common phrase "pig-butchering" when describing investment and romance scams. (Wired)
4. 1 fun thing
📺 Brb, I'm slowly working through Variety's latest additions to the "Actors on Actors" series as a decompressor.
- Selena Gomez and Saoirse Ronan are my favorite pairing (so far).
☀️ See y'all Friday!
Thanks to Megan Morrone for editing and Khalid Adad for copy editing this newsletter.
If you like Axios Codebook, spread the word.
Sign up for Axios Codebook



