Bots venture beyond the text box

Illustration: Aïda Amer/Axios
Anthropic's Tuesday announcement that it's giving its Claude model a new "computer use" capability has the AI world buzzing.
Why it matters: This doesn't mean that bots have busted free of the chat box to run loose on the desktop and in the browser — but that day looks much closer, and increasingly inevitable.
State of play: Anthropic's "computer use" lets developers and advanced users tell Claude to go off and do things that make use of other applications on a computer — like collecting data from the web and moving it into a spreadsheet, or building, deploying and debugging a new website from scratch.
- This is one version of what the AI industry means by "agents," and it's not hard to see how powerful it could be.
- "It feels like delegating a task rather than managing one," Wharton professor and AI-use guru Ethan Mollick wrote about Claude's new abilities.
Experts and insiders both foresee a massive multiplier effect in knowledge work as AI keeps adding new abilities.
- In an impromptu onstage demo at the TEDAI conference in San Francisco Tuesday, Mollick showed what that might look like.
- He spun up three separate "assignments" for chatbots (including both ChatGPT and Claude, without the new "computer use" mode) in quick succession: researching a business, building a financial dashboard, and working out the contents of a random folder ("figure out what this is").
- Then he kept talking while the bots showed their work onscreen in separate windows. It was like a plate-spinning circus stunt, only the acrobats were bots.
Yes, but: Anthropic isn't letting Claude go crazy on your laptop or phone in the wild quite yet.
- The desktop Claude works on is a sandboxed virtual machine, a software-only computer running in the cloud within some constraints, as blogger-developer Simon Willison explains.
- Claude's computer use depends on taking screenshots, counting pixels and making mouse-clicks, which means it can work with all the tools available to people — but faces the same limits, too.
- At some point it will make more sense for the AI program to bypass the human interface and just get things done, code-to-code.
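The screenshot-and-click loop those bullets describe can be sketched in miniature. Everything below is illustrative: the function names, the toy "screen," and the stubbed model stand in for Anthropic's actual API, which this sketch does not reproduce. The point is only the control loop, observe the screen, decide, act, repeat.

```python
# Illustrative sketch of a screenshot-driven agent loop (NOT Anthropic's
# real API): the model "sees" pixels, picks a coordinate, and clicks.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # "click" or "done"
    x: int = 0
    y: int = 0

def take_screenshot() -> list[list[str]]:
    # Stub: a 3x3 "screen" where "B" marks a button the agent must press.
    return [[".", ".", "."],
            [".", "B", "."],
            [".", ".", "."]]

def model_decide(screen: list[list[str]]) -> Action:
    # Stub for the model: scan the pixels for the button, return its
    # coordinate; report "done" once no button remains.
    for y, row in enumerate(screen):
        for x, pixel in enumerate(row):
            if pixel == "B":
                return Action("click", x, y)
    return Action("done")

def click(screen: list[list[str]], x: int, y: int) -> None:
    screen[y][x] = "."   # pressing the button makes it disappear

def run_agent(max_steps: int = 10) -> int:
    # A real agent would take a fresh screenshot each step; here the
    # mutated grid plays that role.
    screen = take_screenshot()
    for step in range(max_steps):
        action = model_decide(screen)   # screenshot -> decision
        if action.kind == "done":
            return step                 # steps taken to finish
        click(screen, action.x, action.y)
    return max_steps

print(run_agent())  # finds and clicks the button, then reports done
```

Because the model's only inputs are pixels and its only outputs are clicks and keystrokes, it can drive any tool a person can, and it inherits every limitation of that interface, which is exactly the trade-off the bullets above describe.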
A "beta release" and an "experiment" are how Anthropic is describing the computer use feature.
- Anthropic cautions that it's "imperfect" and "at times cumbersome and error-prone."
- "We're releasing computer use early for feedback from developers, and expect the capability to improve rapidly over time."
Also, it's not free — there's a meter running.
- It cost Exponential View newsletter author Azeem Azhar $4 to run a 15-minute business-research task.
- But costs are likely to decline fast with efficiency improvements.
Between the lines: Anthropic's announcement stole a march on its competition at OpenAI, which is believed to be working on similar technology.
- That's even though Anthropic has positioned itself as the more safety-conscious alternative to OpenAI, which has more speedily and aggressively deployed its innovations.
Our thought bubble: The move suggests a growing agreement between the two firms, despite their rivalry, that the best way to make AI safe is to get it in front of developers and the public quickly to find out how to improve it.
- But keeping a focus on safety will grow increasingly difficult for both companies the more that AI's evolution turns into a race.
- As agents and "computer use" improve and spread, attention will quickly move from their limitations to their potential misuses in realms like spam, cybercrime, harassment and copyright infringement.
The bottom line: Two years ago, AI providers were insistent that for safety and quality control it was vital to "keep humans in the loop" — but the loop is already beginning to squeeze humans out.
