Axios AI+

April 17, 2025
Congrats to the minor league Albuquerque Isotopes, who somehow managed to score three runs on a bases-loaded walk. Today's AI+ is 1,025 words, a 4-minute read.
1 big thing: Hot new protocol glues together AI and apps
AI developers are embracing a new technical standard that's making it faster for chatbots to tap into the rest of the software we use every day.
Why it matters: The faster AI can start using the services and programs that shape our lives and work, the quicker it can save us time and money — and cause us new headaches.
How it works: Anthropic's Model Context Protocol is a technical specification that standardizes a relatively simple method developers can use to wire up today's AI models and bots to most other programs and data sources.
- MCP lets users with modest technical skills give conversational bots — like ChatGPT and Claude — the keys to their other digital tools. (If you're comfortable running software locally in a terminal window, you can probably handle it.)
- It's supported by many of AI's biggest players, including OpenAI, Google and Microsoft.
- Developers have already built and shared hundreds of programs — called MCP servers — that other developers and users can plug in.
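Under the hood, MCP messages follow the JSON-RPC 2.0 format, with methods such as `tools/list` (what can you do?) and `tools/call` (do it). Here's a minimal, stdlib-only sketch of that message shape — the `get_weather` tool and its response are invented for illustration, and a real MCP server would be built with an official SDK and also handle initialization, schemas and transports:

```python
import json

# Invented example tool; a real MCP server would declare input schemas too.
TOOLS = {"get_weather": lambda city: f"Sunny in {city}"}

def handle(raw: str) -> str:
    """Answer one JSON-RPC 2.0 request, MCP-style."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        # Advertise available tools so the AI client can discover them.
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        # Run the named tool with the arguments the model supplied.
        params = req["params"]
        result = {"content": TOOLS[params["name"]](**params["arguments"])}
    else:
        raise ValueError(f"unsupported method: {req['method']}")
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

listing = handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}')
call = handle(json.dumps({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Boston"}},
}))
print(call)  # the JSON-RPC result containing "Sunny in Boston"
```

The point of the standard is exactly this narrow surface: a chatbot only needs to speak `tools/list` and `tools/call`, not learn each app's internals.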
What they're saying: "MCP is the first time I can say: here's a (relatively) easy way to connect your organization's tools and knowledge into an AI chat app and see what you learn," digital design guru Matt Webb wrote on his blog.
Zoom out: AI vendors have made big promises about a future in which autonomous AI agents will get things done for us with minimal supervision.
- But few of these agents can reliably handle everyday work right now, outside of a small number of very specific technical environments.
Zoom in: MCP offers a quick-and-dirty way to bridge the world of generative AI models with the web and mobile apps that most of us rely on today to do real work.
- If you want ChatGPT to access your data in Notion or Evernote or some similar app, or you'd like Claude to access the files on your computer or in your Dropbox, MCP is an answer.
Reality check: Authentication, security and privacy are obvious trouble spots any time you give one system access to another. Right now MCP is largely a "proceed at your own risk" zone.
Between the lines: A protocol is just a description of an agreed method for one system to access another without knowing every detail of how the other one works.
- Both the internet and the web are built on a foundation of protocols, which allowed them to connect computers and later phones made by different firms using different operating systems.
- MCP's status as an open protocol gives both model-makers and app-builders confidence that they can use it without getting locked into a particular vendor's tool set.
- The protocol approach has prevailed in the past, advocates say, because it's fair, it's pro-competitive and it produces healthy software ecosystems full of choices for users.
- Google recently unveiled its own open protocol, Agent2Agent (A2A), for connecting one AI agent to another.
Yes, but: It's tough to make money directly by creating or adopting an open protocol.
- Microsoft veteran Steven Sinofsky identifies MCP as the latest form of "middleware," a category of software tool — like web browsers — that operates across platforms and often thrives during industrywide platform shifts like the rush into AI.
- Middleware, Sinofsky argues, "never quite lives up to [its] promises in practice."
Our thought bubble: Websites and apps are designed for people to use, so they have "human interfaces" like buttons, search features and dialog boxes. MCP provides a way for AI to bypass that layer.
- We want AI agents to "do things for us," and we imagine them as substitute versions of ourselves going off and arranging travel or gathering research.
- But it's deeply inefficient for an AI bot to "talk" to an app or website as if it were a person.
- The bot is code, the website is code, and they don't need to keep translating their work into human-ese to get things done.
2. New OpenAI models "think" with images
OpenAI yesterday released two new AI models — o3 and o4-mini — designed to handle a broader range of tasks, from coding to visual analysis.
Why it matters: OpenAI says o3 is its most advanced reasoning model yet and the first in its reasoning series to handle web browsing, image generation and visual understanding.
Driving the news: OpenAI said it was immediately releasing o3 as well as o4-mini, which it bills as a "smaller, faster model that delivers impressive results — especially in math, coding, and visual tasks — at lower cost."
- The integration of web browsing and image capabilities into o3 "helps them solve complex, multi-step problems more effectively and take real steps toward acting independently," OpenAI said.
- "OpenAI o3 and o4-mini are our first models that can think with images — meaning they don't just see an image, they can integrate visual information directly into their reasoning chain."
- Both models will be available imminently for ChatGPT Plus, Pro, and Team users, with o3-pro coming "in a few weeks."
- OpenAI is also debuting Codex CLI, which it describes as "a lightweight, open-source coding agent that runs locally" in a computer's terminal app and works with o3 and o4-mini.
Catch up quick: After initially planning a standalone release for o3, OpenAI said in February that it would merge o3 into the release of GPT-5, but then reversed course again earlier this month, with CEO Sam Altman saying o3 and o4-mini would soon be released, with GPT-5 to follow "in a few months."
The intrigue: The models were both evaluated using the revised preparedness framework that OpenAI announced on Tuesday.
- OpenAI said the models were not scored for persuasion capabilities, since that metric is no longer included under the new framework. However, OpenAI stressed that it still looks at model persuasiveness as part of its broader safety work.
3. Training data
- OpenAI is said to be in talks to acquire coding startup Windsurf for around $3 billion. (Bloomberg)
- Tech stocks took another dive after Nvidia and other chipmakers warned of outsized tariff charges. (Axios)
4. + This
Now this is a cool Tide pod challenge. A Massachusetts man broke open a laundry soap pod to provide the needed lubrication to rescue a raccoon that was stuck in a storm drain.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.