Axios AI+

February 10, 2025
Bonjour from Paris, where I am covering the AI Action Summit, which officially kicked off a few hours ago. Today's AI+ is 1,286 words, a 5-minute read.
1 big thing: Paris AI summit's high stakes
An international AI summit in Paris this week is set to address a broader range of issues than similar past gatherings, but there are growing concerns that the event will lead to little concrete action.
Why it matters: With perhaps only a couple of years left before the tech industry delivers super-powerful AI — sometimes called artificial general intelligence (AGI) — society has precious little time to prepare for its many impacts.
Driving the news: Organized by the governments of France and India, the AI Action Summit is bringing together dozens of heads of state and top executives from OpenAI, Google, Meta, Microsoft and Anthropic as well as representatives from academia and nonprofits.
- Two prior gatherings — in Bletchley Park, England, and Seoul, South Korea — focused largely on the existential risks posed by AI. But this week's event is taking a wider lens, exploring climate impact, income inequality, bias and other issues.
One hoped-for outcome was a communiqué to be agreed upon by as many nations as possible.
- Critics had already assailed a leaked draft as vague and lacking in accountability. Now sources tell Axios that the U.S. is unlikely to agree to sign on based on the current draft.
- The statement was drafted with input from the Biden administration, but the Trump administration has shown a desire to go in new directions, including a fresh call for public input on a national AI strategy.
A couple of other tangible efforts are proceeding as planned.
- They include a new public-private partnership called Current AI, with $400 million in initial funding from a host of entities including the French government, Google, Salesforce and the John D. and Catherine T. MacArthur and Patrick J. McGovern foundations.
- The Current AI effort also has backing from Chile, Finland, Germany, Kenya, Morocco, Nigeria, Slovenia and Switzerland.
- Among Current AI's stated aims are expanding access to high-quality public and private datasets, investing in open-source tools and infrastructure, and developing systems to measure AI's social and environmental impact.
Also announced today was Robust Open Online Safety Tools (ROOST) — an effort to make openly available a set of tools needed to ensure online safety in the AI era.
- Founding partners include Eric Schmidt, Discord, OpenAI, Google, Roblox, the John S. and James L. Knight Foundation, AI Collaborative, the Patrick J. McGovern Foundation and Project Liberty Institute. Organizers say ROOST has raised $24 million to fund its first four years of operations.
Between the lines: While action stemming from the summit may be limited, there are still benefits to gathering the key stakeholders, notably including government officials from both China and the U.S., whose delegation is led by Vice President JD Vance.
Google DeepMind chief Demis Hassabis, in an interview with Axios, said that a lack of international cooperation on AI norms and standards heightens global risks: in particular, countries racing to bar rivals from gaining a technological edge could make choices harmful to humanity as a whole.
- Hassabis said he planned to spend much of his time in Paris seeking common ground and strengthening global ties. But he acknowledged the current geopolitical environment makes such cooperation challenging.
- "It seems to be very difficult for the world to do — just look at climate," he told Axios. "There seems to be less cooperation. So, you know, that doesn't bode well."
- He added that there probably isn't enough talk happening now about the challenges that will come even if AGI is developed safely. "I think there needs to be more time spent by economists ... and philosophers and social scientists on 'What do we want the world to be like?'" Hassabis said.
2. Exclusive: Anthropic's "index" tracks AI use
Today's AI users employ the technology more as a collaborator than as an autonomous helper, according to a new study of real-world AI use by Anthropic, shared exclusively with Axios.
Why it matters: The new Anthropic Economic Index is an ambitious effort to track the impact of AI adoption by directly analyzing anonymized data on how people are using Claude, Anthropic's chatbot.
The big picture: Today, only AI providers have a direct view of what people are actually doing with their tools. The more information AI makers share with the world, the better we'll be able to understand how the new technology is changing our lives.
- Anthropic said in a blog post that the study provides "the clearest picture yet of how AI is being incorporated into real-world tasks across the modern economy."
- "We're in an AI revolution in society. Society needs information about what that is doing to the world, and we see this as a way to contribute data there," Jack Clark, an Anthropic co-founder who is the firm's head of policy, told Axios.
What they found: "AI use leans more toward augmentation (57%), where AI collaborates with and enhances human capabilities, compared to automation (43%), where AI directly performs tasks," Anthropic reports.
- The distinctions "between you fully delegating tasks to a language model — versus, like, batting the ball back and forth — are subtle and emerging right now," Clark says.
- AI adoption was widest among workers in "computer and mathematical" fields — chiefly, software engineering: 37.2% of queries sent to Claude fell into this category, per Anthropic. (That could reflect Claude's popularity among programmers.)
- The next-largest category was "arts, design, sports, entertainment, and media" (10.3% of queries), which Anthropic said "mainly reflected people using Claude for … writing and editing."
How it works: Anthropic uses its own tool called Clio to collect and analyze Claude usage data while preserving users' privacy.
- "It's a sample of around a million conversations over a seven-day period that people are having with Claude AI, and we filter that sample down to only conversations that are actually about work," Deep Ganguli, leader of Anthropic's societal impacts team, told Axios.
What's next: Anthropic plans to run follow-ups every six months to track changes in AI use over time.
- "A challenge in AI is you don't know the full scope of the capabilities of the systems that are being released," Clark says. "It's very different to, you know, cars, where you know how fast the car is that you're bringing out."
Anthropic is publishing all its data so external researchers can review and use it. "We'd love more eyes on this problem," Ganguli says.
- Anthropic is also hoping other companies will follow its lead and release similar information.
- "We want to figure out how the AI industry should make itself legible to the rest of the world," Clark adds. "Some of that comes through statements that companies make, but some of it comes through data."
3. Training data
- China's DeepSeek will provide potentially dangerous content, including information on bioweapons and self-harm, more readily than many other models. (Wall Street Journal)
- "The cost to use a given level of AI falls about 10x every 12 months," OpenAI CEO Sam Altman wrote in a blog post yesterday, and "in a decade, perhaps everyone on earth will be capable of accomplishing more than the most impactful person can today."
- Investors have pledged €109 billion ($113 billion) for AI projects in France, according to an announcement by President Emmanuel Macron yesterday. (Financial Times)
- The Federal Trade Commission is beefing up its staff with new appointees who are hostile to Big Tech. (Axios)
- Christie's is planning an auction dedicated to AI art next month despite concern from artists who complain that their work continues to be used without consent or compensation. (TechCrunch)
- Nokia named Justin Hotard, an Intel exec who ran the company's data center and AI efforts, as its new CEO. (Wall Street Journal)
4. + This
It's a bit of déjà vu for me being back in the Grand Palais — the last time I was here, I was watching Olympic fencing.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing it.
Sign up for Axios AI+