Axios AI+

April 16, 2026
👋 Mady here thinking about the 580% stock pop for Allbirds after the sneaker company decided to become an AI firm with just $50 million in funding.
Today's AI+ is 1,204 words, a 4.5-minute read.
1 big thing: Claude power users have complaints
Anthropic users across online forums are raising the same complaint: Claude suddenly feels ... bad.
Why it matters: The backlash lands just as Anthropic is testing a more powerful model, Mythos — raising questions about whether cutting-edge AI is becoming less accessible even as it gets more capable.
Driving the news: Over the past few weeks, users on X, GitHub and Reddit have been swapping anecdotes, benchmarks and prompts in an effort to pinpoint what changed and why.
- "Claude has regressed to the point it cannot be trusted to perform complex engineering," an AMD senior director wrote in a widely shared post on GitHub.
- Others have posted side-by-side outputs and benchmarks they say show Claude generating answers that are less accurate or nuanced.
- Much of the speculation centers on whether Claude has been deliberately scaled back — what users are calling "nerfed" — either to control costs or to redirect scarce compute toward Mythos and other frontier efforts.
The other side: Anthropic says it adjusted the default level of reasoning in Claude Code, but denies the changes were tied to compute constraints or Mythos.
- When asked about the online complaints, Anthropic pointed Axios to a March 6 post on X from Boris Cherny, head of Claude Code.
- "You can change it anytime in the /model selector if you prefer low effort (faster) or high effort (more intelligence). The setting is sticky and will persist for your next session," Cherny said.
Between the lines: Analyst Patrick Moorhead decided to ask Claude to weigh in.
- "Anthropic made real configuration changes that objectively reduced default thinking depth across all surfaces including claude.ai, but the most extreme 'secret nerfing' narrative overstates what happened," Claude said as part of its lengthy response.
Another theory is that users aren't seeing decline so much as acclimating to what previously felt magical.
- Over time, expectations rise and flaws become more noticeable — a phenomenon known as habituation.
Yes, but: Even if the change is explainable, the perception problem is real — especially for power users relying on consistent performance for coding and research workflows.
The big picture: The fight over Claude's "intelligence" points to a broader shift: access to top-tier AI is fragmenting.
- Advanced capabilities are increasingly gated behind higher-cost tiers, API usage or experimental programs.
- Anthropic is also reportedly close to upgrading its high-end Opus model to version 4.7.
The increasing stratification could lead to a division between those who can afford to pay top dollar for the best models and those who can't.
- Anthropic recently moved large enterprise customers to a fully usage-based (token) pricing model, tying intelligence more directly to spend.
- That stratification is also reinforcing a widening divide between power users and dabblers over what AI can actually do.
What we're watching: Whether "default" AI experiences continue to get worse even as frontier systems get dramatically stronger.
2. Scoop: BNY tests new OpenAI, Anthropic models
BNY, America's oldest bank, has early access to OpenAI's and Anthropic's advanced cyber capability models, CEO Robin Vince tells Axios, making it one of the few vetted enterprises testing the models before wide release.
Why it matters: Wall Street is working overtime to win the AI security race.
What they're saying: Anthropic and OpenAI recognize the importance of releasing their cyber-capable models to certain institutions early, Vince tells Axios. It's key to protecting critical infrastructure, "and in our case, obviously the financial services world," Vince says.
- The AI labs also want feedback and real-world testing, Vince says.
- Other firms with access to these previews will be able to share lessons learned with one another as well as the labs themselves, Vince says.
Catch up quick: The access comes after Treasury Secretary Scott Bessent and Fed Chair Jerome Powell called a meeting with the biggest names on Wall Street to discuss Mythos, first reported by Bloomberg and confirmed by Axios.
- The meeting focused on risks of AI-powered attacks on bank systems as well as preventative measures.
Zoom in: OpenAI's new model variant, GPT-5.4-Cyber, will be rolled out to a broader set of organizations than Anthropic's Mythos, which initially reached about 40 enterprises.
- While Anthropic signaled that its model was too dangerous to release broadly, OpenAI is making tools more widely available for defensive cyber work while still preventing nefarious actors from accessing them, Axios' Sam Sabin writes.
Follow the money: BNY is all-in on AI.
- The bank, which plans to announce its earnings later this morning, has over 100 digital employees that have their own tasks, managers and email addresses.
- Under Vince's leadership, BNY has risen to become the best-performing stock in an index tracking a group of major banks, up 218%.
What we're watching: How banks maintain their long-held status as titans of cybersecurity defense in an AI-powered world.
- And which banks are defined by their ability to adapt as new models get stronger, faster.
3. Exclusive: OpenAI lobbies for science
Advances in AI's ability to take on novel scientific work are helping researchers move faster, connect siloed knowledge, and design treatments more efficiently, according to a new report from OpenAI's policy, research and sciences team shared first with Axios.
Why it matters: The life sciences have saved hundreds of millions of lives over the past century, but progress has slowed dramatically — even as the toughest diseases remain unsolved.
The big picture: Biomedical discovery moves at a snail's pace.
- In the U.S., bringing a new drug from research to approval often takes 12 to 15 years.
- "Eroom's Law" — Moore's Law in reverse — observes that drug discovery keeps getting slower and more expensive: the number of new drugs approved per billion dollars of R&D spending has roughly halved every nine years, even as the underlying technology has improved.
- AI has nothing but time.
Reality check: OpenAI's report functions as a policy pitch, arguing for changes that would support broader AI use in life sciences.
- It calls for greater access to medical and scientific data, treatment of advanced AI as a national research resource, and investment in the "physical stack," meaning compute, labs, energy and infrastructure.
Yes, but: Only a few AI-discovered or AI-designed drugs have reached clinical trials.
4. Training data
- Exclusive: Sen. Maggie Hassan (D-N.H.) sent letters to ElevenLabs, Speechify and other AI voice firms for details on how they're stopping scammers. (Axios)
- Apple's reputation for enhanced privacy could become a selling point in the AI era. (Axios)
- Government cybersecurity cuts and the administration's lawsuit against Anthropic are complicating the Trump administration's response to Anthropic's new Mythos model. (Axios)
5. + This
One of the coolest spaces at this year's TED is a complete living room decorated a la 1984, the year of the first TED conference.
- From a working Apple II running The Oregon Trail and an original Mac, down to the plush shag carpeting and lava lamps, it offered me a chance to relive the world of my 10-year-old self.
- And it really was something to watch on a wood-paneled CRT television as Peter Steinberger gave his talk about OpenClaw. As a bonus, the livestream was playing on just one of seven stacked TVs, with other retro sets showing the 1984 Super Bowl, talks from the first TED and clips from 1980s shows like "Knight Rider" and "Family Ties."
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+