Axios AI+

March 31, 2026
👋 Mady here. Let's get to it.
👀 Situational awareness: California Governor Gavin Newsom signed an executive order mandating safety guardrails for AI labs working with the state.
- Why it matters: AI regulation is quickly emerging as a key political battleground as the 2028 election nears.
Today's AI+ is 1,148 words, a 4.5-minute read.
1 big thing: AI's ensemble era
Microsoft has revamped one of its AI research tools to use models from both OpenAI and Anthropic, the clearest sign yet that the future of AI may be multi-model.
Why it matters: AI companies are increasingly pairing models together — having them cross-check and evaluate — in a bid to boost accuracy and reduce errors that any one model might miss.
Driving the news: The software giant is taking advantage of multiple models within its Microsoft 365 Copilot Researcher.
- A new "Critique" layer uses Anthropic's Claude to review answers generated by OpenAI's model to improve accuracy before a user sees the response.
- The company says that approach enabled the research agent to score 13.8% higher on the DRACO benchmark, an industry standard for deep research quality.
- Another new option, called Model Council, allows users to see a side-by-side comparison of responses from different models.
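The critique pattern described above can be sketched in a few lines. This is an illustrative sketch only, not Microsoft's actual implementation; the callables stand in for real LLM API calls, and all names here are hypothetical:

```python
# Sketch of a critique-layer ensemble: one model drafts an answer,
# a second model reviews it, and the first model revises the draft
# before the user sees it. The two callables are hypothetical
# stand-ins for real LLM API calls (not Microsoft's interface).
from typing import Callable

def answer_with_critique(
    question: str,
    primary: Callable[[str], str],  # drafts and revises answers
    critic: Callable[[str], str],   # reviews drafts for errors
) -> str:
    # Step 1: the primary model drafts a response.
    draft = primary(question)
    # Step 2: a second model critiques the draft for errors and gaps.
    critique = critic(f"Question: {question}\nDraft: {draft}")
    # Step 3: the primary model revises using the critique.
    return primary(f"Revise using this critique:\n{critique}\nDraft: {draft}")

# Tiny demo with canned "models" so the flow is visible end to end.
if __name__ == "__main__":
    primary = lambda p: "revised answer" if "Revise" in p else "first draft"
    critic = lambda p: "missing a citation"
    print(answer_with_critique("What is the DRACO benchmark?", primary, critic))
```

The design tradeoff the article describes falls out of the structure: three model calls instead of one, which is where the extra cost and latency come from.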
What they're saying: "It's becoming very clear to us that there will be many models," Microsoft executive VP Charles Lamanna told Axios. "Come summertime there will be many more models than just these two inside of Copilot."
The big picture: AI companies are experimenting with several ways of combining multiple models to complete tasks.
- When you prompt ChatGPT, Copilot or other AI assistants, they will often use a smaller classifier model to route your request to the model best suited for the task.
- Perplexity has long allowed its users to choose from multiple models and see responses side-by-side.
- Anthropic uses a self-critique step mid-generation to catch errors before surfacing a final response from Claude.
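The routing approach in the first bullet can be sketched as a cheap classifier that picks a model per request. In practice the router is a small learned model; the keyword rules, categories and model names below are made-up stand-ins for illustration:

```python
# Illustrative sketch of model routing: a cheap classifier decides
# which model handles a request. The keyword rules and model names
# are hypothetical stand-ins for a real learned router.

def classify(prompt: str) -> str:
    """Toy stand-in for a small classifier model."""
    if any(w in prompt.lower() for w in ("prove", "derive", "step by step")):
        return "reasoning"
    if len(prompt) > 500:
        return "long-context"
    return "general"

# Each predicted category maps to the model best suited for it.
ROUTES = {
    "reasoning": "large-reasoning-model",
    "long-context": "long-context-model",
    "general": "small-fast-model",
}

def route(prompt: str) -> str:
    return ROUTES[classify(prompt)]
```

The point of the pattern is cost control: most requests go to the cheap model, and only the hard ones pay for the expensive one.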
Between the lines: The multi-model system has an added benefit for Microsoft, which is looking to show it isn't overly reliant on OpenAI.
- With the frontier labs frequently leapfrogging one another, Lamanna said businesses are interested in AI tools that can easily change which models are running under the hood.
Yes, but: Using multiple models on a single query can lead to increased costs and slower response times.
- Microsoft's Model Council, for example, costs roughly 2.5 times as much as using a single model, while the Critique approach costs about 20% more.
- That cost isn't passed directly on to users, since Copilot is a subscription service, but it does inform where Microsoft uses multiple models versus relying on a single one.
What we're watching: Microsoft is also building more homegrown models, and Lamanna said those models might show up first working in conjunction with outside models rather than as a full replacement.
- "It'll be in one of these ensemble experiences," he said.
2. DeepMind's secret weapon is money
The AI race won't be won by whoever builds the best model, but by whoever can afford to keep the lights on.
The big picture: DeepMind CEO Demis Hassabis knew that when he sold his AI lab to Google, according to a new biography by Sebastian Mallaby, senior fellow at the Council on Foreign Relations.
- The sale to Google gave DeepMind the one thing OpenAI and Anthropic are still scrambling for: a parent company that prints cash.
What they're saying: The big surprise is that "Google plus Demis counterpunched so effectively," Mallaby told Axios, referencing Google DeepMind's Gemini model forcing OpenAI into a "code red" frenzy to keep up at the end of 2025.
- People saw the science side of Hassabis but underestimated his competitive background, which included shipping commercial video games before he founded DeepMind.
- The combination of the two shaped his interest in staying in-house at Google, which is funding its AI buildout with the lowest share of debt among the hyperscalers, thanks in part to its hefty cash flow.
Between the lines: That cash bought DeepMind scientists the freedom to do blue-sky research without worrying as much about things like revenue.
- "We don't feel any immediate pressure to make ... knee-jerk decisions," Hassabis told Ina at Davos when she asked whether he felt pressured to monetize through things like ads.
- Competitor OpenAI is testing ads as it moves to prove a sustainable revenue strategy pre-IPO, with the company projecting $14 billion in losses for 2026.
The intrigue: Mallaby says Hassabis was not always attached to the Google deal.
- He and co-founder Mustafa Suleyman recruited Reid Hoffman to pledge $1 billion to spin DeepMind back out of Google and become independent.
- Lawyers and bankers worked for three years to push Google to let them go, Mallaby writes.
- The arrangement Hassabis couldn't escape could now be his biggest advantage: DeepMind is the only major AI lab that isn't fighting the AI race and chasing an IPO at the same time.
3. Americans want AI rules. Until they don't
Nearly two-thirds of Americans now use AI regularly and want stronger oversight, but are conflicted on how far regulation should go, according to a new national survey from AI governance nonprofit Fathom shared exclusively with Axios.
Why it matters: Americans are growing more comfortable with AI as Washington struggles to regulate it, but people still want guarantees on safety and job security.
By the numbers: Nearly two-thirds of Americans use AI weekly or more, per the survey.
- 40% of respondents say they're excited about AI, while 23% say they're concerned. Another 35% feel both.
- 90% say it's important that AI products for kids be verified as "safe" before they're used.
People also say they want policymakers to deliver guardrails while also keeping the U.S. dominant in AI.
- Support for international cooperation drops from 47% to 34% when it would require the U.S. to cede control.
- Respondents also strongly back workforce transition policies with support from the government, and say they trust independent experts and nonprofits more than politicians or tech companies to set guardrails.
Methodology: The Fathom survey of 2,036 people was conducted online by Forbes Tate Partners from Jan. 29-Feb. 4.
What they're saying: "Child safety, corporate accountability, and verifiable standards are Americans' top priorities for a good future with AI," the study's authors write.
- "These priorities hold up across party lines, and even when the trade-offs are made explicit."
- Per the study, "the public wants governance and American leadership — and policymakers will have to design frameworks that reconcile the two."
4. Training data
- David Sacks will step away from his official White House role, but will remain at the center of Trump's AI circle without the government ethics constraints. (Axios)
- Mistral raised $830 million in debt financing to build Nvidia-powered European data centers, capitalizing on demand for alternatives to the U.S. AI giants. (Financial Times)
5. + This
Today's legitimate use of an AI agent: Calling every pub in Ireland to inquire about the price of a Guinness in order to create a consumer price index called the Guinndex.
Thanks to Megan Morrone for editing and Matt Piper for copy editing.
Sign up for Axios AI+






