"Humans in the loop" make AI work, for now

Illustration: Lindsey Bailey/Axios
There will — and must — always be "humans in the loop," tech leaders reassure the world when they publicly address fears that AI will eliminate jobs, make mistakes or destroy society.
Why it matters: Who these humans are, what the loop is and where exactly the people fit into it remain very much up for grabs. How the industry answers those questions will shape what work looks like in the future.
Here are three ways of thinking about what "humans in the loop" can mean.
1. AI assists humans
Chatbots need humans to prompt them or give them instructions before they can do anything. Agents are assistants too, but they require less human supervision.
- As agents' abilities grow, keeping humans in the loop ensures "that AI systems make decisions that align with human judgment, ethics, and goals," Fay Kallel, VP of product and design at Intuit Mailchimp, told Axios in an email.
- "By automating tedious tasks, we create space for creative and strategic work," Kelly Moran, VP of engineering, search and AI at Slack, told Axios.
- "Our data shows that AI use leans more toward augmentation (57%) compared to automation (43%)," an Anthropic spokesperson told Axios in an email. "In most cases, AI isn't replacing people but collaborating with them."
- "Humans aren't always rowing the boat — but we're very much steering the ship," Paula Goldman, chief ethical and humane use officer at Salesforce, wrote last year.
2. AI hands over the wheel at key moments
As agents grow more common and more capable, systems are likely to build in checkpoints for human involvement.
- In a demo last month, Operator, OpenAI's ChatGPT-based agent for accomplishing online tasks, made dinner reservations, called an Uber and purchased concert tickets.
- But at key moments, Operator switched into a "takeover mode" to let the human user enter login credentials, payment details or other sensitive information. (The handoff pattern is sketched below.)
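Operator's internals aren't public, so here is a minimal sketch of what such a handoff checkpoint could look like. Every name below (the field list, `run_agent`, `submit`) is invented for illustration, not OpenAI's API:

```python
# A hypothetical "takeover mode" checkpoint. Nothing here comes from
# OpenAI's Operator; it only illustrates pausing an agent so a person
# can enter sensitive information directly.

SENSITIVE_FIELDS = {"password", "credit_card", "login_email"}

def submit(field, value):
    print(f"submitted {field}")

def run_agent(steps):
    """Execute agent steps, handing control to the human for sensitive input."""
    for step in steps:
        if step["field"] in SENSITIVE_FIELDS:
            # Takeover: the agent stops acting and the person types the value.
            value = input(f"[takeover] please enter your {step['field']}: ")
        else:
            value = step["auto_value"]  # the agent fills this in on its own
        submit(step["field"], value)

run_agent([
    {"field": "party_size", "auto_value": "2"},
    {"field": "credit_card", "auto_value": None},  # triggers takeover mode
])
```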
3. Humans review AI's final work
Most chatbot users have learned by now that genAI needs a fact-checker.
- Bots can make things up, misinterpret data or make incorrect recommendations. Even as models get smarter, humans are often still required to audit an AI's work.
- "By design, systems must be built with checkpoints for human experience and judgment, allowing for verification when appropriate without losing the efficiency gains AI provides," Allan Thygesen, CEO at Docusign, said in an email.
- George C. Lee, co-head of the Goldman Sachs Global Institute, told Axios that because of "the probabilistic nature of the technology," the company uses human "checkers," especially for sensitive workflows. (A simple version of that checkpoint is sketched below.)
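Neither company has published how its checkpoints work, so the following is a generic sketch of one common approach, assuming each AI answer arrives with a confidence score. The threshold, function names and example data are all invented:

```python
# A generic human-review checkpoint: AI output ships automatically only
# above a confidence threshold; everything else is routed to a person.
# The threshold and all names here are illustrative, not any vendor's API.

REVIEW_THRESHOLD = 0.90

def human_review(answer: str) -> str:
    verdict = input(f"Approve this answer? [y/n]\n{answer}\n> ")
    return answer if verdict.lower() == "y" else "sent back for correction"

def route_output(answer: str, confidence: float, sensitive: bool) -> str:
    """Decide whether an AI answer ships directly or goes to a human checker."""
    if sensitive or confidence < REVIEW_THRESHOLD:
        return human_review(answer)  # verification when appropriate
    return answer  # the efficiency gains AI provides are preserved

print(route_output("Q2 revenue grew 12%.", confidence=0.72, sensitive=True))
```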
Reality check: The idea of keeping "humans in the loop" assumes that humans are better at making decisions than AI, which isn't always true.
- "We're accustomed to trusting humans," Stefano Soatto, professor of computer science at UCLA and VP at Amazon Web Services, told Axios — but "not all humans are trustworthy."
Between the lines: Providing oversight and knowing which tasks to hand off to AI are the skills human workers will need in the future, says Kelly Monahan, managing director of the research institute at Upwork.
- Because Upwork is a freelance platform, it can see how work is shifting faster than those changes would show up in traditional jobs.
- Monahan told Axios that Upwork's clients are searching for more "high-value work" and that includes people with "the ability to read context, to be creative, to be empathetic, all those unique qualities that actually make us intelligent."
The intrigue: Most modern generative AI systems have been trained, in part, by humans.
- Humans select, clean and label the data used to fine-tune AI models, teaching them how to answer questions or interpret images.
- Humans also decide what a model should achieve and which safeguards and values it should have.
- OpenAI also uses humans to score AI answers, a key way models get better. The technique is known as reinforcement learning from human feedback, or RLHF. (A toy sketch of the idea follows this list.)
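OpenAI's real pipeline is far more involved, but the core reward-modeling step, nudging a model to score human-preferred answers higher, fits in a few lines. Every feature, number and name below is made up for illustration:

```python
import math

# Toy sketch of the reward-modeling step in RLHF. Humans compare two
# answers, and a tiny linear "reward model" is nudged to score the
# preferred answer higher (a Bradley-Terry style pairwise objective).
# The features, data and learning rate are all invented.

w = [0.0, 0.0]  # reward-model weights over two made-up answer features

def reward(features):
    return sum(wi * xi for wi, xi in zip(w, features))

# Each record: (features of the answer a human preferred, features of
# the answer the human rejected).
human_preferences = [
    ([1.0, 0.2], [0.1, 0.9]),
    ([0.8, 0.1], [0.3, 0.7]),
]

LEARNING_RATE = 0.5
for preferred, rejected in human_preferences * 100:
    # Probability the model agrees with the human's choice.
    p = 1 / (1 + math.exp(reward(rejected) - reward(preferred)))
    # Gradient ascent on the log-likelihood of the human's choice.
    for i in range(len(w)):
        w[i] += LEARNING_RATE * (1 - p) * (preferred[i] - rejected[i])

print(w)  # weights now favor the features humans preferred
```

In real systems the reward model is itself a large neural network, and its scores are then used to steer the chatbot through reinforcement learning.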
Zoom in: It's becoming increasingly unclear when human oversight is necessary, when AI should take the lead and what risks we're willing to take along the way.
- OpenAI CEO Sam Altman explored this dilemma from a military AI perspective at a Brookings Institution discussion last year.
- "I've never heard anyone advocate that AI should get to make decisions about launching nuclear weapons. I've also never heard anyone advocate that AI shouldn't be used to intercept inbound missiles where you have to act really quickly," Altman told the moderators.
- "And then there's this whole area in the middle. ... If there's like a plane coming to bomb South Korea and you don't have time to have a human in the loop, and you can make an intercept decision or not, but you're very sure that it's happening, like how sure do you have to be? What would be the expected impact on human life? Where do you draw the line in that gray area?"
- "I hope this is never an OpenAI decision," Altman added.
What we're watching: As AI's abilities improve and the gap between human and machine intelligence narrows, today's "humans in the loop" promise could end up as just a placeholder.
