Axios AI+

January 05, 2026
I hope everyone had a good holiday break. I had some valuable time with the family — and the kiddo is now a teenager. Today's AI+ is 1,148 words, a 4.5-minute read.
1 big thing: 2026 is AI's "show me the money" year
The AI model-maker race will continue in 2026, along with more agents and growing pressure on companies to prove AI can pay off in the real world, experts tell Axios.
Why it matters: AI may be both the current and next big thing, but success increasingly hinges less on being the "best" model and more on timing.
The big picture: Rapid progress by OpenAI, Anthropic, Google and others drove frequent leapfrogging — and fierce price competition — in 2025. That dynamic is expected to intensify next year and beyond.
- "We're just gonna be in this constant race," Box CEO Aaron Levie tells Axios.
Reality check: There are important, often-overlooked steps between the arrival of more powerful algorithms and a boost in productivity.
- The winners must understand when a technology is mature enough to deploy and how to integrate it into messy, human-run organizations without burning money or credibility.
- "Good AI won't need long prompts. The more you have to explain, the worse the product is," Winston Weinberg, CEO and co-founder of Harvey, tells Axios. "The best systems will already know the context."
- "A jump in model capability does not instantly mean that task gets automated in the economy," Levie says. "There's still a lot of work and software that has to get built out from there."
Case in point: Coding has been among the earliest and biggest beneficiaries of generative AI for a simple reason: The work is already structured for it. It's largely text-based, modular, and designed around tight human-machine feedback loops.
- "You have the perfect workflow in coding," Levie says. And then you have knowledge work, "which is 10 times messier than what engineering workflows look like."
Between the lines: Semi-autonomous agents were the talk of 2025, but businesses were hesitant to hand off too much work to AI models that were still prone to making mistakes.
- Improved models could help make agents more of a reality.
- "A year from now, answering questions will be the least useful thing AI can do. (And it will be excellent at answering questions!)," Fidji Simo, OpenAI CEO of applications, tells Axios.
- "Instead, we'll have proactive AI assistants constantly running in the background, getting things done for us across the web and the real world," she says. "It will anticipate our needs, and we'll be able to trust it to make decisions and take action on our behalf."
Zoom in: Willem Avé, head of product at Square, agrees that agents will keep getting more trustworthy and capable. And "companies implementing AI will get more creative about connecting them to deterministic systems that will take the variability out of AI results," Avé tells Axios.
- "In 2026, the most successful companies will set goals that sound absurd without AI — and then use agent collaboration to make them routine," Dan Rogers, CEO of Asana, tells Axios.
Yes, but: Businesses could be in for another year of messy agent rollouts.
- Ryan Gavin, CMO of Slack at Salesforce, predicts that "2026 will be the year of the lonely agent."
- Gavin says companies will spin out "hundreds of agents per employee," but most will sit idle, like unused software licenses.
- "In an agentic solution, you're breaking down the problem into many, many steps. And the overall solution is only accurate if you're accurate each step of the way," AT&T chief data officer Andy Markus tells Axios. "That's the challenge."
The need for financial payoff was a consistent theme among the experts we asked to predict AI trends for 2026.
- "2026 is the 'show me the money' year for AI," Venky Ganesan, a partner at Menlo Ventures, tells Axios. "Enterprises will need to see real ROI in their spend, and countries need to see meaningful increases in productivity growth to keep the AI spend and infrastructure going."
- Ganesan predicts that some of the aggressive spending could bankrupt major companies.
The other side: "We will also see a major AI company have an IPO and GDP growth numbers will go up in America by over 100 basis points," Ganesan says.
The bottom line: The pace of AI adoption in business continues to be limited by the ability of humans (and organizations made up of humans) to adapt.
- "The agents that matter will show up where work happens, understand context, and just work," says Gavin.
2. Exclusive: ChatGPT is 2026's health helper
More than 40 million people globally turn to ChatGPT daily for health information, according to a report OpenAI has shared exclusively with Axios.
Why it matters: Americans are turning to AI tools to navigate the notoriously complex and opaque U.S. health care system.
The big picture: Patients see ChatGPT as an "ally" in navigating their health care, according to analysis of anonymized interactions with ChatGPT and a survey of ChatGPT users by the AI-powered tool Knit.
- Users turn to ChatGPT to decode medical bills, spot overcharges and appeal insurance denials. When access to doctors is limited, some even use it to self-diagnose or manage their care.
By the numbers: More than 5% of all ChatGPT messages globally are about health care.
- OpenAI found that users ask 1.6 million to 1.9 million health insurance questions per week, seeking guidance on comparing plans, handling claims and billing, and other coverage questions.
- In underserved rural communities, OpenAI says users send an average of nearly 600,000 health care-related messages every week.
- 7 in 10 health care conversations in ChatGPT happen outside of normal clinic hours.
Zoom in: Patients can enter symptoms, prior advice from doctors, and context around their health care issues, and ChatGPT can deliver warnings on the severity of certain conditions.
- When care isn't available, this can help patients decide if they should wait for appointments or if they need to seek emergency care.
- "Reliability improves when answers are grounded in the right patient-specific context such as insurance plan documents, clinical instructions, and health care portal data," OpenAI says in the report.
Reality check: ChatGPT can give wrong and potentially dangerous advice, especially in conversations around mental health.
3. Training data
- AI use in policing is outpacing public rules, and could embed errors and bias deep within the criminal justice system. (Axios)
- Meta's deal with AI agent company Manus could be the Facebook owner's key to the enterprise. (Axios)
- Racist AI content is spreading fast and could sway voters. (Axios)
- Google is adding Nano Banana and Veo support to Google TV to allow users to create AI videos and images in Gemini on the big screen. (The Verge)
4. + This
This twist on Flappy Bird is the first game I've seen that can only be played on a foldable smartphone. It looks fun, though I'm not sure I'd want to give the hinge the added strain.
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.
Sign up for Axios AI+