Axios AI+

March 06, 2025
Sadly, it looks like the Stanford women's hoops team will miss the NCAA Tournament for the first time since 1987.
Yes, but: They have a great recruiting class coming next year, and could appear in the Women's National Invitation Tournament instead.
Today's AI+ is 1,189 words, a 4.5-minute read.
1 big thing: Humans needed — until they're not
There will — and must — always be "humans in the loop," tech leaders reassure the world when they publicly address fears that AI will eliminate jobs, make mistakes or destroy society.
Why it matters: Who these humans are, what the loop is and where exactly the people fit into it remain very much up for grabs. How the industry answers those questions will shape what work looks like in the future.
Here are three ways of thinking about what "humans in the loop" can mean.
1. AI assists humans
Chatbots need us to prompt them or give them instructions in order to work. Agents are also assistants, but they require less supervision from humans.
- As agents' abilities grow, keeping humans in the loop ensures "that AI systems make decisions that align with human judgment, ethics, and goals," Fay Kallel, VP of product and design at Intuit Mailchimp, told Axios in an email.
- "By automating tedious tasks, we create space for creative and strategic work," Kelly Moran, VP of engineering, search and AI at Slack, told Axios.
- "Humans aren't always rowing the boat — but we're very much steering the ship," Paula Goldman, chief ethical and humane use officer at Salesforce, wrote last year.
2. AI hands over the wheel at key moments
As agents grow more common and more capable, systems are likely to build in checkpoints for human involvement.
- In a demo last month, Operator, OpenAI's ChatGPT-based agent for accomplishing online tasks, made dinner reservations, called an Uber and purchased concert tickets.
- But at key moments, Operator switched into a "takeover mode" to let the human user enter login credentials, payment details or other sensitive information.
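The checkpoint pattern Operator demonstrates — an agent pausing to hand control to a person whenever a step is sensitive — can be sketched in a few lines. This is a hypothetical illustration of the general idea, not OpenAI's implementation; the action names and functions are invented for the example:

```python
# Hypothetical sketch of a human-in-the-loop checkpoint in an agent's
# action loop -- illustrative only, not how Operator actually works.

# Actions the agent must never perform on its own.
SENSITIVE = {"enter_credentials", "enter_payment_details"}

def run_agent(steps, ask_human):
    """Execute each step, handing sensitive ones over to the human."""
    results = []
    for action, payload in steps:
        if action in SENSITIVE:
            # "Takeover mode": the agent pauses and the person acts.
            results.append(ask_human(action, payload))
        else:
            results.append(f"agent did {action}")
    return results

# Example: the agent books the table; the human supplies payment details.
plan = [("search_restaurant", "7pm"), ("enter_payment_details", None)]
out = run_agent(plan, ask_human=lambda action, payload: f"human did {action}")
```

The design choice the demo highlights is that the boundary is defined ahead of time: the agent doesn't decide in the moment whether to ask for help — certain categories of action always route to the person.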
3. Humans review AI's final work
Most chatbot users have learned by now that genAI needs a fact-checker.
- Bots can make things up, misinterpret data or make incorrect recommendations. Even as models get smarter, humans are often still required to audit an AI's work.
- "By design, systems must be built with checkpoints for human experience and judgment, allowing for verification when appropriate without losing the efficiency gains AI provides," Allan Thygesen, CEO at Docusign, said in an email.
- Because of "the probabilistic nature of the technology," George C. Lee, co-head of the Goldman Sachs Global Institute, told Axios that the company uses human "checkers," especially for sensitive workflows.
Reality check: The idea of keeping "humans in the loop" assumes that humans are better at making decisions than AI, which isn't always true.
- "We're accustomed to trusting humans," Stefano Soatto, professor of computer science at UCLA and VP at Amazon Web Services, told Axios — but "not all humans are trustworthy."
Between the lines: Providing oversight and knowing which tasks should be handed off to AI are the skills human workers will need in the future, says Kelly Monahan, managing director of the research institute at Upwork.
- As a freelance platform, Upwork can see trends in how work is shifting faster than they would show up at the level of individual jobs.
- Monahan told Axios that Upwork's clients are searching for more "high-value work" and that includes people with "the ability to read context, to be creative, to be empathetic, all those unique qualities that actually make us intelligent."
Zoom in: It's becoming increasingly unclear when human oversight is necessary, when AI should take the lead and what risks we're willing to take along the way.
- OpenAI CEO Sam Altman explored this dilemma from a military AI perspective at a Brookings Institution discussion last year.
- "I've never heard anyone advocate that AI should get to make decisions about launching nuclear weapons. I've also never heard anyone advocate that AI shouldn't be used to intercept inbound missiles where you have to act really quickly," Altman told the moderators.
- "And then there's this whole area in the middle. ... If there's like a plane coming to bomb South Korea and you don't have time to have a human in the loop, and you can make an intercept decision or not, but you're very sure that it's happening, like how sure do you have to be? What would be the expected impact on human life? Where do you draw the line in that gray area?"
- "I hope this is never an OpenAI decision," Altman added.
What we're watching: As AI's abilities improve and the gap between human and machine intelligence narrows, today's "humans in the loop" promise could end up as just a placeholder.
2. Exclusive: Russian disinfo floods AI chatbots

A Russian disinformation effort that flooded the web with false claims and propaganda continues to impact the output of major AI chatbots, according to a new report from NewsGuard, shared first with Axios.
Why it matters: The study, which expands on initial findings from last year, comes amid reports that the U.S. is pausing some of its efforts to counter Russian cyber activities.
Driving the news: NewsGuard says that a Moscow-based disinformation network named "Pravda" (the Russian word for truth) is spreading falsehoods across the web.
- Rather than trying to sway people directly, it aims to influence AI chatbot results.
- The network published more than 3.6 million articles last year, and they found their way into leading Western chatbots, according to the American Sunlight Project.
- "By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information," NewsGuard said in its report.
- NewsGuard said it studied 10 major chatbots — including those from Microsoft, Google, OpenAI, You.com, xAI, Anthropic, Meta, Mistral and Perplexity — and found that a third of the time they recycled false claims made by the Pravda network.
Zoom in: NewsGuard says the Pravda network has spread at least 207 provably false claims, including many related to Ukraine.
- The Pravda network launched in April 2022, following Russia's full-scale invasion of Ukraine, and has since grown to cover 49 countries and dozens of languages, NewsGuard said.
- Of the 150 sites in the network, about 40 are Russian-language sites using domain names referencing various regions of Ukraine.
- Pravda is not producing original content itself, NewsGuard says, but instead is aggregating content from others, including Russian state media and pro-Kremlin influencers.
The big picture: Deliberate falsehoods (disinformation) as well as inadvertent misinformation have both been called out as significant — and pressing — risks of generative AI.
- NewsGuard's findings build on a February report from the American Sunlight Project that warned that the network appeared aimed at influencing chatbots rather than persuading individuals.
Between the lines: NewsGuard said the strategy "was foreshadowed in a talk American fugitive-turned-Moscow-based-propagandist John Mark Dougan gave in Moscow last January at a conference of Russian officials."
- Dougan told the crowd: "By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI."
3. Training data
- Google added a new experimental mode to its search that relies on AI to answer longer queries. (TechCrunch)
- Google Cloud is marketing two DeepMind AI weather forecast models to its enterprise customers. (Axios)
- Volunteers have recreated the Centers for Disease Control website as it appeared prior to Trump's inauguration at a new site: RestoredCDC.org. (Axios)
4. + This
I've been playing sudoku for years, but only just learned how the puzzle gets its name.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing it.
Sign up for Axios AI+