Axios AI+

October 21, 2024
Congrats to the New York Liberty, who won their first WNBA championship in overtime of a winner-take-all finale to their fiercely competitive series with the Minnesota Lynx.
Today's AI+ is 1,276 words, a 5-minute read.
1 big thing: Microsoft announces new AI agents
While some are eager to declare the era of autonomous agents upon us, the reality is a bit more complicated — and that's probably a good thing.
Why it matters: Giving autonomy to generative AI tools opens up tantalizing possibilities for increased productivity, but it also vastly increases the potential for catastrophic harm.
Driving the news: Microsoft today announced a new series of semi-autonomous agents that business customers can either configure to their liking or use straight out of the box.
- Microsoft's fresh crop of agents will qualify sales leads, communicate with suppliers and understand customer intent. Next month, Microsoft will release a tool in public preview that will allow users to customize agents in Copilot Studio.
- Some of the agent-building capabilities are included as part of Microsoft's $30-per-worker-per-month Microsoft 365 Copilot. Copilot Studio offers tools with additional customization capabilities, and agents are priced per query ($200 for up to 25,000 messages per month).
The big picture: Agents that can act autonomously (within confined boundaries) are the logical next evolution of generative AI, which has thus far largely been limited to providing information for humans to act on.
- Agents, by contrast, are designed to operate partly or entirely without direct human intervention, though best practices call for thorough testing and close oversight.
- Sierra, a startup from former Salesforce executive and OpenAI chair Bret Taylor and ex-Google exec Clay Bavor, has been focused on AI agents from the start, while Salesforce, Google and others have been heavily touting the approach only in recent weeks.
Zoom in: On the plus side, agents can work 24/7 and a small number of humans can theoretically oversee vast numbers of AI agents. Even with a great AI assistant, there are finite limits to human productivity.
- The risk of agents, however, is that generative AI, by its nature, doesn't always respond predictably — and without a human to approve each action, it could act in harmful ways. A Google DeepMind paper from this year highlighted such concerns.
- Companies try to mitigate this by having agents perform a small set of known tasks, often with specific rules and limits. For example, an AI customer service agent might be able to answer a range of questions about orders, but only provide refunds or discounts up to a set amount.
- Some companies learned this lesson on guardrails the hard way by empowering chatbots to, for example, sell a plane ticket or car for well below normal cost.
- Being able to highlight when human intervention is necessary is key, says Microsoft corporate VP Charles Lamanna. "Because if, say, the agent can do the work 90% of the time, if it didn't have a way to call a person to help with that 10%, you couldn't actually use it."
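The pattern described above — a bounded set of allowed actions plus escalation to a human when the agent hits its limits — can be sketched in a few lines. This is an illustrative example with hypothetical names and thresholds, not Microsoft's or any vendor's actual implementation:

```python
# Hypothetical guardrail for an AI customer-service agent:
# refunds up to a policy cap are handled autonomously,
# anything larger is escalated to a human reviewer.

REFUND_CAP = 50.00  # dollars; an illustrative policy limit


def handle_refund_request(amount: float) -> str:
    """Approve small refunds automatically; escalate the rest."""
    if amount <= REFUND_CAP:
        return f"approved: refunded ${amount:.2f}"
    # The cases the agent can't handle go to a person,
    # per Lamanna's point about calling in help for the hard 10%.
    return f"escalated: ${amount:.2f} exceeds ${REFUND_CAP:.2f} cap"


print(handle_refund_request(20.00))   # within the cap, auto-approved
print(handle_refund_request(500.00))  # over the cap, routed to a human
```

Real deployments layer many such rules (allowed intents, spending limits, rate limits), but the core design choice is the same: hard-coded boundaries around a probabilistic model, with a human in the loop beyond them.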
Between the lines: Microsoft says it sees agents as separate from, and an addition to, highly personalized AI Copilots that help an individual worker with their tasks.
- "You don't want just a copilot or just an agent," Lamanna told Axios. "You want both."
The other side: Salesforce CEO Marc Benioff, meanwhile, has been bashing both the notion of a copilot and Microsoft's interpretation, comparing it to Clippy, the company's ill-fated Office assistant.
- "When you look at how Copilot has been delivered to customers, it's disappointing," he said on X this past week, echoing comments he has been making since August. "It just doesn't work, and it doesn't deliver any level of accuracy."
- Benioff also claims that Copilots are spilling corporate data, a charge that Microsoft strongly denies.
2. DeepMind's Demis Hassabis sees a "watershed moment"

Demis Hassabis — co-founder and CEO of Google DeepMind, and one of the world's top AI pioneers — says the technology's coming power has been clear for so long that he's amazed the rest of the world took so long to catch on.
- "I've been thinking about this for decades. It was so obvious to me this was the biggest thing," Hassabis, 48, told Axios in a virtual interview from London, where DeepMind is based.
- "Obviously I didn't know it could be done in my lifetime ... Even 15 years ago when we started DeepMind, still nobody was working on it, really."
Why it matters: Hassabis and a DeepMind colleague, John Jumper, shared this year's Nobel Prize in Chemistry for their work on AlphaFold, DeepMind's protein-structure prediction system.
- "Maybe it's a watershed moment for AI that it's now mature enough, and it's advanced enough, that it can really help with scientific discovery," Hassabis said.
- "We don't have to wait," he said, for artificial general intelligence — systems that can outsmart humans, the holy grail for AI developers. AI can already "revolutionize drug discovery," he added.
The big picture: Hassabis said AI may be "overhyped in the near term" because of the success of OpenAI's ChatGPT, which has fueled a frenzy among investors.
- Hassabis voiced a view shared by many big-name researchers who spent years working slowly and deeply, out of the spotlight, to make the present era possible. "I'd rather it would have stayed more of a scientific level," he said. "But it's become too popular for that."
- He thinks AI is "still massively underrated in the long term": "People still don't really understand what I've lived with and sat with for 30 years."
Between the lines: Hassabis has moved into the driver's seat for Google's total AI efforts, with other teams being consolidated under DeepMind, as Axios' Ina Fried reported last week.
- DeepMind co-founders now run AI at both Google and Microsoft. Mustafa Suleyman, another DeepMind co-founder, in March became CEO of Microsoft AI, leading Copilot and consumer AI.
The backstory: Hassabis says he wrote his first AI program when he was about 11, to help play the strategy board game Othello (Reversi). He then led the chess team at Cambridge, where he got top honors in computer science, before earning a Ph.D. in cognitive neuroscience at University College London.
- Hassabis, who read lots of science fiction growing up, said he was "always interested in the big questions" — which often leads to a life in physics. But even back then, he sensed there was something bigger.
- "Physics was my favorite subject," he told Axios. "If you want to understand the fabric of reality or the nature of time or any of these big questions or just the universe, you study physics. But I felt that having read about all the physics greats ... we were lacking some tools to tackle such momentous questions."
The bottom line: Hassabis marveled at viewing the earth from a 747, or talking on Zoom 3,000 miles apart — both products of the human mind.
- "So if we could create that artificially and make that abundant and have even super intelligence in some directions, that would change the whole world," he said. "So it seems an obvious logical progression. It's sort of surprising to me that more people haven't realized that a lot earlier."
3. Training data
- Sources say AI startup Perplexity is on track to raise $500 million — which would more than double its valuation — in its fourth funding round of the year. (Wall Street Journal)
- Internal studies at Apple showed that OpenAI's ChatGPT was 25% more accurate than Siri and could answer 30% more questions. (Bloomberg)
- ByteDance says it fired an ad tech intern in August for "maliciously interfering" with one of the company's internal AI projects. The TikTok owner claims the company's large language AI models were not affected. (BBC)
4. + This
Floppy disks are a distant memory for some of us — and not a memory at all for the younger generations. However, San Francisco's light rail still relies on them each day to keep the trains moving. A new contract with Hitachi should finally end that.
Thanks to Megan Morrone and Scott Rosenberg for editing this newsletter and to Caitlin Wolper for copy editing it.
Sign up for Axios AI+