Axios AI+

May 12, 2025
Welcome back! Today's AI+ is 1,204 words, a 4.5-minute read.
1 big thing: How AI made Wikipedia more indispensable
Far from wrecking Wikipedia, the rise of AI has so far just strengthened it, Wikipedia's outgoing leader Maryana Iskander tells Axios in an exclusive interview.
The big picture: Once seen as a possible casualty of the generative AI boom, and more recently a target of the MAGA right, Wikipedia has emerged as an enduring model for how to navigate the latest shifts in politics and technology.
Iskander points to the organization's values as keys to its success — requiring sources, maintaining a neutral point of view and debating transparently.
- "Everybody keeps predicting it's all gonna end one day, and the opposite keeps being true," Iskander said. "It keeps getting stronger."
- While other sites and services are struggling to hold on to traffic as usage of ChatGPT and other AI tools grows, Iskander says Wikipedia's page views and usage have not yet shown signs of decline: "We've just become more and more relevant and more and more important."
Driving the news: Iskander announced last week she will leave her post as CEO of the Wikimedia Foundation, which funds and oversees Wikipedia.
- Under her tenure, the organization has broadened its donor base, expanded its footprint of data centers and built a business model that seeks to keep the entirety of the site free.
- "I do not see us moving away from core principles like free access to knowledge for everybody," Iskander said. "It's about being smart about who needs to access what in what kinds of ways."
Although individuals, nonprofits and others can access Wikipedia without charge, the organization encourages tech companies that make massive use of its entire corpus to pay their fair share.
- Rather than trying to threaten tech companies, Iskander has sought to convince them that they need to support Wikipedia if they want it as a resource, while also providing them improved access.
- "It has taken some creativity to make sure that the large players also are coming to the table," she said.
Between the lines: Iskander also sees lessons in Wikipedia's approach for AI companies as they seek to mitigate bias, reduce errors and ensure a healthy information ecosystem.
- "We've tried to talk about why making the models more open is the right thing to do because we do it," Iskander said. "We've tried to talk about how to keep humans in the loop because we do it. We've tried to talk about why caring about provenance and attribution and who creates is important."
Zoom in: Wikipedia faces growing attacks in the U.S. from those who don't like the information it surfaces.
- While that's disturbing for what it signals about the direction of the country, Iskander says Wikipedia has decades of experience standing up to governments.
- "What's happening in the U.S. feels big because it's the U.S.," she said. "But Wikipedia has been dealing with these issues in an endless number of countries — India, Russia, Pakistan, Turkey — and so I think that's made us better prepared."
Iskander has a suggestion for regulators weighing changes to internet law, such as amending or limiting Section 230 protections: They should employ what Wikipedia founder Jimmy Wales has called the "Wikipedia Test" to make sure proposed changes actually protect the flow of information in the public interest.
- That means asking whether a particular law or rule is good or bad for Wikipedia. Iskander says that's "just a way of thinking through what are the consequences and the impacts" on many similar outfits.
- Well-meaning but poorly thought-out changes, she said, could threaten open source and crowdsourced information sources.
- "Whatever we change, we've got to keep making space for different kinds of models," she said.
2. Exclusive: Anthropic's all-human comms boost
Anthropic is tapping people, not AI, to build out its communications team.
Why it matters: Anthropic, the OpenAI rival that develops the Claude chatbot, plans to triple the size of its comms team by the end of the year, its head of communications Sasha de Marigny told Axios.
The big picture: Last month, Anthropic chief information security officer Jason Clinton told Axios that AI-powered virtual employees will become more common within the next year.
- These AI identities would have a new level of autonomy that would include their own "memories," corporate accounts and passwords, Clinton said.
Yes, but: There are some tasks for which Anthropic's team will not rely solely on AI — namely, strategic storytelling.
What they're saying: "Claude is definitely a prominent team member for everyone, but comms people are sort of like BS detectors," de Marigny said. "I need very strong domain experts who can spot an oversimplified explanation or provide context that the model does not have."
- "Critical thinking is still a huge comparative advantage for humans. I'm looking for excellent strategists — people who understand the new world order and know how to develop holistic plans to cut through to the audiences we care about."
State of play: Anthropic was founded in 2021 by former OpenAI employees seeking to build ethically rigorous AI products. It's currently valued at more than $61 billion.
- The current communications team is made up of about 20 people who oversee Anthropic's policy, corporate, internal, product and research communications, along with editorial, social and brand.
Zoom in: Domain expertise is a priority as the team looks to engage with more subject matter experts and build up its influencer programs, de Marigny says.
- "If you are a biologist who loves to communicate, please get in touch," she said.
Anthropic is also looking to staff up with designers, documentarians, data visualization experts and journalists.
- "It is hard to capture people's attention, but I think if you can communicate in a way that is very intentional, it can resonate with people, and it can actually augment and amplify the message in very powerful ways."
The intrigue: Applicants for jobs at Anthropic are asked to confirm they have not used AI for help but instead relied on "non AI-assisted communication skills."
- "If I wanted Claude to do the job for you entirely, I would probably just use Claude" instead of hiring for the role, de Marigny said.
- When hiring, "you want to get that baseline and then understand that AI will likely augment it. ... We're in the place where the AI plays a huge role, but humans are still in the driver's seat."
- This is particularly true for communicators who are "making judgment calls, reading the room and providing historical context and relational context," she added.
What's next: There are nine open communication and brand roles at the moment, with more to come in global markets.
3. Training data
- The Copyright Office issued preliminary guidance (PDF) suggesting that some amount of training on protected works could be fair use, but not broad scraping for commercial use.
- Also, the White House fired the head of the Copyright Office, a position typically overseen by the legislative branch. (Axios)
- Pope Leo highlighted the risks of AI to society in a speech to gathered cardinals. (CNN)
- OpenAI and Microsoft are locked in a high-stakes negotiation to rewrite the terms of their partnership so OpenAI's planned restructuring into a public benefit corporation can proceed. (Financial Times)
4. + This
Meet LegoGPT, a new large language model specifically optimized for creating working Lego designs from text descriptions.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing.