Axios AI+

February 25, 2026
👋 Mady here, thinking about this Anthropic news that landed Tuesday night:
👀 Situational awareness: Anthropic is softening its core safety policy to stay competitive with rival AI labs.
- Previously, the company pledged to pause development of potentially dangerous models. Now, Anthropic says it won't pause if a comparable or better model has already been released by a competitor.
- Why it matters: Anthropic built its brand on being the industry's safety-first lab. The shift underscores what insiders who have recently left AI companies have been warning about: competition is reshaping incentives across the AI sector.
Today's AI+ is 1,251 words, a 4.5-minute read.
1 big thing: How sci-fi is shaping AI's future
The tech sector is "gripped" by an ideological vision of AI that's rooted in science fiction — but it doesn't have to be that way, three prominent economists write in a new paper to be discussed today at the Brookings Institution.
Why it matters: The paper argues that despite a flood of recent essays portraying AI as an inevitable job-crushing force, the technology can be pro-worker.
How it works: Job replacement has become the leading vision for AI because it looks cheaper and, to many in the industry, cooler.
- Businesses like the idea of replacing people and reducing costs.
- The authors explain that the AI community has been consumed by a specific ideology.
- "Even though it is a technical and highly quantitative field, computer science — and the AI community in general — is nonetheless gripped by an ideological vision that places AGI, meaning machines that exceed all human capabilities, as its highest possible pursuit."
Where it stands: Researchers from the very beginning went with the idea that AI would be just like the human brain, MIT economist and coauthor Daron Acemoglu tells Axios.
- There were alternate visions that saw AI as complementary to humans.
- "But the AI community never got on board with them," he says. Instead, a "replacing humans agenda" took hold, and it happens to get a lot of support from science fiction.
- Economics writer Derek Thompson agrees, posting yesterday on X: "The conversation about AI is a marketplace of competing science fiction narratives."
Between the lines: Science fiction is fiction.
Zoom out: The paper was coauthored by MIT economists David Autor, known for his work around "The China Shock," and Simon Johnson, who shares a Nobel Prize with Acemoglu for their work on political systems and economic growth.
AI can be pro-worker in a couple of ways, they say.
- Disruptive technology creates entirely new occupations. For example: Six out of 10 workers in 2018 were employed at jobs that did not exist in 1940, Autor found in research from 2024.
- Tech can also enhance expertise and productivity — think of the spreadsheet's impact on accounting, finance and consulting.
Zoom in: The authors have an offbeat example: hearing aids. In 2024, software developers in China noticed that hearing-impaired gig delivery workers were at a real disadvantage to their peers.
- They invented a voice chatbot for the delivery app — and voilà! — these workers performed at the same level as their peers.
- "This instance of pro-worker AI is so straightforward that one may wonder if it even fits our definition. It does so, because this technology makes human skills and expertise more valuable," they write.
The big picture: Technology is under human control — despite apocalyptic visions to the contrary.
- The authors have several suggestions for redirecting the conversation, including more federal government investment in AI through grant-making and changes to the tax code to incentivize hiring workers; current rules give better tax breaks for investments in machines than in people.
Context: The federal government has long spent money on research into promising new technologies, as well as on the infrastructure to support that tech. All of that has led to public benefits. (Take the federal interstate system — what if automakers built the highways?)
- Now the bulk of investments in AI are coming from the private sector.
- That's another reason for the industry's anti-worker bent.
The bottom line: The future of AI is up for grabs, and some economists are disputing the notion that it will lead to our doom.
2. Anthropic launches job-specific tools
Anthropic launched tools that companies, teams and even individual employees can customize across business units ranging from human resources to design.
Why it matters: Anthropic is making Claude easier to use for enterprises as it works to become the go-to AI platform for businesses.
State of play: Anthropic's latest plugins let Claude act as an orchestrator across other apps and systems, with growing autonomy.
- When a user prompts Claude to do something, instead of feeding back instructions, Claude can do it itself: connecting with other programs, or completing tasks in Excel or PowerPoint without the user being involved.
- The plugins are meant to be customized and trained by so-called "power users" at different companies. Those users will help design plugins that can then be easily used by more novice team members.
The other side: OpenAI launched its own enterprise platform to manage corporate AI agents earlier this month.
Zoom in: Some of the companies that investors worry AI will kill are instead partnering with Anthropic on these plugins, and their stocks are up as a result.
- Docusign and Intuit stocks rallied off the back of their partnership announcements with Anthropic.
Threat level: Claude can also connect with Google's apps, including Google Drive and Gmail.
- Notably, Google allowed the integration despite its own AI ambitions with its model, Gemini.
What we're watching: To what extent these tools actually drive enterprise revenue.
- And how that revenue could impact Anthropic's valuation.
3. AI's investment gain may be energy's loss
AI may be siphoning off investment dollars that once flowed to energy startups, according to a new report from the International Energy Agency.
Why it matters: The global shift is a stark, data-driven sign of the times — climate ambition is collapsing just as the AI race accelerates.
4. Exclusive: Pentagon still pressuring Anthropic
Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday evening to give the military unfettered access to its AI model or face harsh penalties, Axios has learned.
The big picture: Hegseth told Amodei in a tense meeting yesterday that the Pentagon will either cut ties and declare Anthropic a "supply chain risk," or invoke the Defense Production Act to force the company to tailor its model to the military's needs.
Why it matters: The Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are also worried about the consequences of losing access to its industry-leading model, Claude.
- "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense official told Axios ahead of the meeting.
5. Training data
- OpenAI banned a ChatGPT account used by Chinese law enforcement to undermine support for Japan's prime minister, the company said in a report today. (Axios)
- Waymo expanded its robotaxi service to Dallas, Houston, San Antonio and Orlando. (Axios)
6. + This
Mady again with my first contribution to +This as Ina travels back from the Olympics. I was told this section is meant to be fun and AI-related.
- To me, that's the discovery that the author of the viral Citrini report had short positions (Wall Street-speak for bets against a stock) in some of the businesses mentioned in the essay, which he confirmed in a Bloomberg Television interview.
What we're watching: Are viral AI doomerism pieces the new short-seller reports? Yes, pondering this question is my definition of fun.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+