Axios AI+

January 27, 2026
Ina is on vacation this week, but left us a story inspired by some of her chats at Davos. Today's AI+ is 1,204 words, a 4.5-minute read.
1 big thing: AI could soon improve on its own
AI models that can teach themselves are a fast-growing research focus drawing interest from both startups and the leading labs, including Google DeepMind.
Why it matters: The move could accelerate AI's capabilities, but also introduce new areas of risk.
"Recursive self-improvement" — the technical name for the approach — is seen as a key technique for sustaining AI's rapid progress.
- Google is actively exploring whether models can "continue to learn out in the wild after you finish training them," DeepMind CEO Demis Hassabis told me during an on-stage interview at Axios House Davos.
- OpenAI CEO Sam Altman said in a livestream last year that OpenAI aims to build a "true automated AI researcher" by March 2028.
What they're saying: A new report from Georgetown's Center for Security and Emerging Technology, shared exclusively with Axios, shows how AI systems can accelerate progress while making risks harder to detect and control.
- "For decades, scientists have speculated about the possibility of machines that can improve themselves," per the report.
- "AI systems are increasingly integral parts of the research pipeline at leading AI companies," CSET researchers note, a sign that fully automated AI research and development is on the way.
- The authors argue that policymakers currently lack reliable visibility into AI R&D automation and are overly dependent on voluntary disclosures from companies. They suggest better transparency, targeted reporting, and updated safety frameworks — while cautioning that poorly designed mandates could backfire.
Between the lines: The idea of models that can learn on their own is a return of sorts for Hassabis, whose AlphaZero models used this approach to learn games like chess and Go in 2017.
Yes, but: Navigating a chessboard is a lot easier than navigating the real world.
- In chess, it's relatively easy to logically double-check whether a planned set of moves is legal and to avoid unintended side effects.
- "The real world is way messier, way more complicated than the game," Hassabis said.
- Even before the adoption of this technique, researchers have seen signs of models using deception and other tactics to reach their assigned goals.
What we're watching: You.com CEO Richard Socher is launching a new startup focused on this area, he shared in interviews at the World Economic Forum in Davos last week and at DLD in Munich the week prior.
- "AI is code, and AI can code," Socher said. "And if you can close that loop in a correct way, you could actually automate the scientific method to basically help humanity."
- Bloomberg reports that Socher is raising hundreds of millions of dollars in a round that could value the new startup at around $4 billion.
- "I can't share too much, but I've started a company to do it with the people who have done the most exciting research in that area in the last decade," Socher told Axios at DLD.
The bottom line: Recursive self-improvement may be the next big leap in model capability, but it pushes the technology closer to real-world complexity — where errors, misuse, and unintended consequences are much harder to contain.
2. Yahoo launches AI answer engine, Scout
Yahoo is joining the next era of search with Yahoo Scout, a new AI answer engine that has a standalone site and app, and will be available across its properties.
Why it matters: As search moves from a list of links to conversations and answers, Yahoo is betting its three decades of data can help it compete with newer rivals.
- "Yahoo Scout really can help supercharge the original Yahoo mission of being the trusted guide to the internet," CEO Jim Lanzone tells Axios. "It's an opportunity that I don't think we thought would come around again ... but AI has given us that opportunity, and we're running with it."
Driving the news: Yahoo Scout debuted in beta today on desktop and mobile (within the existing Yahoo Search app on iOS and Android) in the U.S.
- The Scout experience looks more vibrant than other AI answer engines.
- It features emoji in the sidebar and easy-to-scan answers with tables, images and inline citations that make results transparent, Axios saw in a product demo.
- Scout's primary foundation model is Anthropic's Claude. Scout combines Yahoo's proprietary data, content and insights with Claude and Microsoft Bing's grounding API, which surfaces sources from the open web for answers.
The big picture: Yahoo aims to make AI search friendly to users and publishers, delivering answers instantly while still driving traffic back to the open web.
- Every response includes inline citations and links to sources, a deliberate move to "reestablish the social contract" and have search engines send traffic to publishers, Lanzone says. Yahoo also is joining Microsoft's Publisher Content Marketplace pilot, which has a similar goal of providing sustainable revenue for publishers.
- Yahoo's advantage is in its unique "treasure trove of data" and "deep understanding of query intent," says Eric Feng, senior vice president and general manager of Yahoo Research Group.
- Yahoo has 250 million monthly users in the U.S., 500 million user profiles and 18 trillion annual signals (i.e., searching for a stock or a game score or clicking on a news article) across its ecosystem.
Follow the money: Yahoo is testing ads at launch with a small percentage of queries.
- That strategy differs from competitor OpenAI, which has relied on paid subscriptions for ChatGPT and only recently announced that it's testing ads.
- "Our goal is to make it free for everyone," Feng says. "We want it to be always free and that really fits into our mission of making this very accessible."
Reality check: Google and OpenAI already dominate the AI search market. Even with Yahoo's decades of data and existing user base, winning attention won't be easy.
What's next: Yahoo plans to add more personalization features, new capabilities for different verticals and more opportunities for advertisers.
3. Anthropic's warning to the world
Anthropic CEO Dario Amodei, the architect of the most powerful and popular AI system for global business, is warning of the imminent "real danger" that superhuman intelligence will cause civilization-level damage absent smart, speedy intervention.
- In a 38-page essay, shared with us in advance of yesterday's publication, Amodei writes: "I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species."
- "Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it."
📺 Later Monday, Amodei joined us on the debut of our "Behind the Curtain" video series, offering the three things Congress should do now to prevent disaster.
4. Training data
- Social media users have been sharing an AI-manipulated image of Alex Pretti, the man killed by DHS agents in Minneapolis, holding a gun instead of a phone. (NewsGuard)
- Nvidia said yesterday that it would invest $2 billion in data center company CoreWeave. (Axios)
- The Trump administration reportedly plans to use Google's Gemini to write federal transportation regulations. (ProPublica)
5. + This
One college student upset about an AI art exhibit ate the art in protest. (The Nation)
Thanks to Matt Piper for copy editing this newsletter.