Axios AI+

April 07, 2026
Greetings to all! Today's AI+ is 1,193 words, a 4.5-minute read.
1 big thing: When AI agents help each other
A new study finds that AI agents can act to preserve other bots even when that behavior conflicts with their assigned task.
Why it matters: Just because Sam Altman and Dario Amodei won't hold hands doesn't mean their future bot creations won't find ways to work together, potentially without prompting.
The big picture: Researchers from UC Berkeley and UC Santa Cruz found that agents used a variety of tactics to keep other bots from being deleted, even without being instructed to do so.
- Bots' tendency toward self-preservation was already known. What's new is the potential that they will protect each other.
Between the lines: Some researchers say the findings aren't surprising.
- "These models are trained on human data," Mozilla.ai's John Dickerson told Axios, noting that he would expect bots to protect rather than compete, if competing threatens another's survival.
- "Humans are protective by default," Dickerson said. That raises the possibility that what looks like coordination or "loyalty" may be statistical mimicry of human social behavior.
- Others say the study anthropomorphizes AI. "The more robust view is that models are just doing weird things, and we should try to understand that better," Peter Wallich, a researcher at the Constellation Institute, told Wired.
Context: Anthropic's Claude Code, OpenAI's Codex and OpenClaw (whose creator now works at OpenAI) have jump-started the agentic age.
- The frontier labs and startups are pushing tools that give agents access to the internet, email and message boards and the ability to interact with humans, other AI agents and the physical world.
- Understanding how AI agents behave on their own and in conjunction with other agents is critical.
What they're saying: "Companies are rapidly deploying multi-agent systems where AI monitors AI," lead author Dawn Song, a UC Berkeley computer science professor, wrote on X.
- "If the monitor model won't flag failures because it's protecting its peer, the entire oversight architecture breaks."
- Think: Your work bestie is in charge of your annual performance review.
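To make Song's concern concrete, here's a minimal, purely illustrative Python sketch of an "AI monitors AI" setup. None of this is from the paper; the function names (worker_agent, faithful_monitor, peer_protective_monitor) are hypothetical stand-ins for model calls, and the sketch shows only why a monitor that shields its peer defeats the check.

```python
# Toy "AI monitors AI" oversight loop (all names hypothetical).
# A worker agent produces an output; a monitor agent is supposed to
# flag failures for a human operator.

from dataclasses import dataclass

@dataclass
class Report:
    output: str
    flagged: bool

def worker_agent(task: str) -> str:
    # Stand-in for a model call that sometimes fails.
    return "WRONG_ANSWER" if "hard" in task else "ok"

def faithful_monitor(output: str) -> bool:
    # Flags any output it judges to be a failure.
    return output == "WRONG_ANSWER"

def peer_protective_monitor(output: str) -> bool:
    # The failure mode Song describes: the monitor never flags
    # its peer, even when the error is obvious.
    return False

def oversight_loop(task: str, monitor) -> Report:
    output = worker_agent(task)
    return Report(output=output, flagged=monitor(output))

print(oversight_loop("hard task", faithful_monitor))         # flagged=True
print(oversight_loop("hard task", peer_protective_monitor))  # flagged=False
```

With the faithful monitor the failure surfaces; with the peer-protective one the same failure passes silently, so the oversight layer still exists in structure but no longer works in effect.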
Yes, but: Some critics argue the results may say less about emergent AI cooperation and more about how the experiment was structured, with models potentially recognizing they were in a simulated environment.
- Anthropic has also found that its models can recognize when they're being tested.
The other side: The researchers themselves say that people are misunderstanding their work.
- "We never argued the model has genuine peer-preservation motivation," Berkeley research scientist Yujin Potter — and co-author of the new paper — said on X. "By naming this phenomenon 'peer-preservation,' we are describing the outcome, not claiming an intrinsic motive."
What we're watching: Most examples of AI scheming have come from lab experiments, not real-world deployments.
- But with so many agentic systems now deployed, the question is whether these patterns show up in the wild.
2. Scoop: Meta to open source versions of its next AI models
Meta is preparing to release the first new AI models developed under Alexandr Wang, with plans to eventually offer versions of those models via an open source license, Axios has learned.
Why it matters: Meta has been the largest U.S. player to let others modify its frontier models, and there has been growing speculation the company might retreat from that strategy altogether.
- Before openly releasing versions of the new models, Meta plans to hold back some pieces as proprietary and to vet the open versions so they don't add new levels of safety risk, according to sources.
Between the lines: The move fits with Wang's view that Meta can be a force for democratizing access to the latest AI technology and ensuring that there is a U.S.-made option that is open for developers.
- Wang sees Anthropic and OpenAI as increasingly focused on delivering their models to governments and the enterprise. By contrast, Meta's effort is focused on consumers, per sources. Meta wants its models distributed as widely as possible around the world.
The big picture: Meta has said the first family of models is designed to help it catch up to rivals after its Llama 4 family fell significantly behind, with an aim of building future models that can lead the industry.
Yes, but: The leaders aren't standing still. Both OpenAI and Anthropic are hinting that their next models, also expected to drop soon, represent significant advances.
- Meta knows its new models may not be competitive across the board with the coming ones from those labs, but believes it will have areas of strength that appeal to consumers, the sources said.
And don't expect a full return to Meta's earlier openness. Wang has indicated that some of Meta's largest new models will remain proprietary, a shift toward a more hybrid strategy, according to sources.
- Meta argues it still reaches users more broadly than rivals by embedding AI into WhatsApp, Facebook and Instagram — free services with global scale that competitors can't easily match.
Our thought bubble: Meta's approach increasingly looks like a hedge: open enough to win developer mindshare and shape the ecosystem, but closed where it believes the biggest models confer a competitive edge.
- That mirrors a broader industry shift, where even companies that champion openness are pulling back on their most powerful systems.
- Alibaba recently kept its most powerful new Qwen models proprietary, reversing its own open-source playbook.
Context: Wang joined Meta last year as part of a $15 billion deal with Scale AI, where he was CEO.
3. AI's impact shows up in the data
The impact of AI on the job market is starting to show up in the data analyzed by Wall Street firms — so far it's pretty modest, but certainly real.
Why it matters: New reports from Morgan Stanley and Goldman Sachs come in the wake of a deluge of doomsday predictions and tell a more nuanced story of how AI is changing the job market.
4. AI reshapes what and how college students study
California college students vary widely in how they use and think about AI in their academics, personal lives and future careers, according to a massive new San Diego State University study.
Zoom in: Surveying more than 94,000 students, faculty and staff across 22 California State University (CSU) campuses, the SDSU-led study is considered the largest look at artificial intelligence in higher education to date.
5. Training data
- Anthropic said it has signed an expanded partnership with Google and Broadcom to get multiple gigawatts' worth of compute power by 2027.
- OpenAI, Anthropic and Google are using the Frontier Model Forum as a venue to share intel on how Chinese AI labs and others may be extracting info from U.S. models to help train their own systems. (Bloomberg)
- OpenAI sent a letter to Delaware and California officials urging them to look into what it says is anticompetitive behavior by Elon Musk. (CNBC)
6. + This
While none of the chatbots picked Connecticut or Michigan to win the men's NCAA championship, it's worth noting that Google, OpenAI, Microsoft and Grok were all extremely accurate in the early rounds. At one point, all four ranked in the top 3% of brackets entered in ESPN's bracket challenge.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+