Axios AI+

March 24, 2026
Ina here, saying hi from D.C. I'm excited for the AI+ Summit — and for the fortuitous timing of peak cherry blossoms. Today's AI+ is 1,241 words, a 4.5-minute read.
1 big thing: AI health care's reliability problem
Hundreds of AI tools for health care — from transcription and imaging to diagnostics — tout accuracy rates above 90%, but most are tested only in isolation.
Why it matters: Those tools become less reliable when used together, an analysis by Korean AI scientist Kwansub Yun suggests.
Inside the room: Yun and health consultant Claire Hast ran an example scenario in which a patient had a physical transcribed by AI, received a mammogram using AI-assisted imaging and got a diagnosis with help from an AI tool.
Stunning stat: While each tool individually had a reported accuracy rating of 85% or higher, the system as a whole had a reliability score of just 74%.
What he did: Yun used a systems-level analysis to estimate the overall workflow reliability of the three tools used together.
- Drawing on publicly available accuracy data for an imaging tool (90%), a documentation tool (85%) and a diagnostic (97%), Yun arrived at a reliability score of 74%.
- "The formula is a standard reliability engineering heuristic — the same structural logic used to estimate system reliability in aerospace and defense," says Yun.
Between the lines: In practical terms, if erroneous data from one AI tool is fed into another, the secondary tool has no way to flag the unreliable inputs, says Yun.
- "The result looks authoritative, but the chain that produced it was never measured end to end."
Friction point: That's particularly troubling given that the standard regulatory procedure for evaluating the tools involves standalone model performance testing, Hast and Yun say.
- "What no one is currently required to measure is the reliability of the full workflow that model sits inside," Yun says.
The other side: Human doctors are also typically evaluated as individuals, not as part of a broader system — there's no data on how much reliability slips as patients move between providers.
- "If you chain together the probabilities of accuracy for any human making many sequential decisions, you realize how likely you are to get errors," says Mark Sendak, CEO of AI infrastructure and evaluation startup Vega Health.
- "My fear is that we're going to hold AI to a standard of perfection that is clearly not the standard that we hold the existing medical system to," says UC San Francisco department of medicine chair Robert Wachter.
What we're watching: More attention should be paid to the overall performance of what Wachter calls "the human-AI dyad."
- For example, AI tools could be designed to more clearly signal to humans in the loop where their clinical reasoning is needed.
- In such a scenario, AI findings made with 100% confidence could be colored green, while those made with less confidence could be colored yellow or orange.
- Such a setup would better enable regulators and evaluators of such tools to look at "that dyad and its actual outcomes, rather than just assuming the human-in-the-loop adds safety," Wachter says.
The bottom line: When it comes to AI in health care, "we have no data or oversight on the orchestra of it all," says Hast.
2. Exclusive: Labor Department's new AI course
The Labor Department will announce a free AI literacy course today aimed at Americans skeptical of the technology.
Why it matters: Americans are bracing for an AI-driven economy where many jobs may look different or cease to exist, and policymakers are under pressure to show they're responding.
Driving the news: The Trump administration's latest idea is to offer Americans a seven-day course requiring about 10 minutes a day.
- The "Make America AI Ready" course covers AI's core capabilities and how to create clear prompts, among other basics, according to an announcement shared first with Axios.
- Users can enroll by texting "READY" to 20202. Phone numbers used to enroll won't be shared with third parties.
- DOL said that the course is "intentionally designed for Americans who may be a little fearful of or unsure about AI."
Context: The course mirrors the department's recently announced voluntary AI literacy framework.
Between the lines: There have been many big pronouncements about how AI will impact jobs, with some warning of mass layoffs and others promising new opportunities.
- So far, it's largely been the promise of AI — not AI itself — that has led to job loss as companies reorganize around the technology.
- Politicians are looking to calm voter fears that their livelihoods are in jeopardy.
What they're saying: "This initiative will help demystify AI for American workers," Labor Deputy Secretary Keith Sonderling said.
- "The 'Make America AI-Ready' initiative is designed to ensure every American worker has the chance to learn foundational skills so they can benefit from the opportunities that the AI economy presents," Labor Secretary Lori Chavez-DeRemer said.
3. America's next class war: AI fluency
Anthropic just dropped the most granular data yet on who's actually using AI and how — and the findings should rattle anyone thinking the AI revolution will be evenly distributed.
- It won't. In fact, it's creating a new form of economic inequality: AI fluency.
Why it matters: The Anthropic data, out today, reveals something subtler and more consequential than the "robots take your job" narrative.
- The real divide isn't between people who have or use AI and people who don't. It's between people who've learned to use AI well and everybody else.
The fluency gap also poses a growing threat to casual or unsophisticated users who fall behind their more AI-savvy peers, regardless of role or level.
- "Much of the discussion focuses on how AI is something that happens to you," Peter McCrory, Anthropic's head of economics, told us from the company's headquarters in San Francisco.
- "This analysis shows you can develop skills that make you better at getting value out of Claude or whatever large language model you want to use."
Some context: Anthropic's new report, "Anthropic Economic Index: Learning Curves," studied over 1 million conversations on the company's Claude platform last month. The headline finding: "Experienced AI users are dramatically more successful than newcomers" — and the gap isn't explained by what tasks they're doing, what country they're in, or what model they're using.
- People who've used Claude for six months or more have a 10% higher success rate in their conversations with AI. "The longer you've been using it, the stronger this effect," McCrory says.
4. Training data
- Anthropic debuted new computer use options for Claude Code and Cowork. For now the tools are billed as a research preview and limited to the Mac, though the company says a Windows version is in the works. (Engadget)
- It was a big day for big AI lab hiring, with Microsoft nabbing former AI2 CEO Ali Farhadi and several key researchers from the Allen Institute for AI to join Mustafa Suleyman's superintelligence team. (GeekWire)
- Meanwhile, Meta has hired the team behind Dreamer, including former Google and Xiaomi executive Hugo Barra. (Bloomberg)
- And OpenAI has hired former Meta executive Dave Dugan to run its ad sales operation, reporting to COO Brad Lightcap. (WSJ)
- Former IBM and Docusign executive Inhi Cho Suh is taking over as CEO of Niantic Spatial, with John Hanke shifting to executive chairman.
- Aiming to outflank Anthropic at setting up joint ventures with private equity firms, OpenAI is offering better terms and a guaranteed rate of return. (Reuters)
Thanks to Megan Morrone for editing this newsletter and Matt Piper for copy editing.