Axios AI+

March 03, 2025
Look ahead: Our annual AI+ Summits are coming to not one, not two, but three locations this year. We'll be in New York on June 4 and D.C. on Sept. 16, then round out the year in San Francisco on Dec. 4. More info to come.
Today's AI+ is 1,234 words, a 5-minute read.
1 big thing: Predicting AI's future — with dice
A project that's spent six years simulating scenarios of AI's future validates growing alarm among many observers that runaway competition will drive reckless adoption of unsafe technologies.
- These simulations aren't running on some massive supercomputer in the cloud — they're powered by people sitting around a table scattered with cards and dice.
Why it matters: Even some of those who believe powerful AI can be developed safely are worried that viewing the technology's development as a race will push AI makers toward dangerous choices.
State of play: Since 2019, a group of academics has been developing and refining Intelligence Rising, an interactive game that aims to simulate the development of advanced AI, with individual players taking on the roles of government leaders and company executives.
What they found: In a paper published last year, the game's developers warned that "a race dynamic generally emerges between tech firms, with firms emphasizing safety to governments but often deprioritizing it internally."
- Governments that see themselves as falling behind, meanwhile, sometimes resort to military action to prevent the race's leader from "deploying radically transformative AI" — a concern that the game's developers say has been rising over time.
- Other key conclusions: Plenty of troubles, including risks around misinformation and bias, will emerge well before the advent of highly powerful AI. And, overall, this technology can take wildly diverging paths.
How it works: Each player takes on a key actor in the AI race. Typically there are four teams, representing governments such as China and the United States along with tech companies such as OpenAI or DeepSeek. With more players, Google or another tech company can be added.
- Games take about four to five hours, though an abbreviated version can be finished in about three hours.
- "We've spent a lot of time trying to get it into a shorter version, and it just doesn't work," Wichita State University professor Ross Gruetzemacher, one of the creators of Intelligence Rising, told Axios.
The intrigue: Who's in power in the U.S. tends to be one of the biggest variables, Gruetzemacher said.
- "The race conditions, in general, are very sensitive to changes of power in the United States, and that's exactly what we're seeing right now," he said, adding it's "really not looking good at the moment for efforts to responsibly develop AI systems."
- Sometimes AI safety can still be prioritized if it is included as part of a broad definition of national security, he said.
What's next: Gruetzemacher said he would like to get the game in front of a D.C. crowd, especially members of the Trump administration.
- "It would be great to help the administration realize that there has to be some sort of coordination or cooperation on algorithmic development — and that's just so you develop AI responsibly," he said. "It doesn't have anything to do with DEI. It's a national security concern."
2. Untangling AI safety and security
Recent moves by the U.S. and the U.K. to frame AI safety primarily as a security issue could be risky, depending on how leaders ultimately define "safety," experts tell Axios.
Why it matters: A broad definition of AI safety could encompass AI models generating dangerous content, such as weapons-building instructions or inaccurate technical guidance.
- But a narrower approach might leave out ethical concerns, like bias in AI decision-making.
Driving the news: The U.S. and the U.K. declined to sign an international AI declaration, which emphasized an "open," "inclusive" and "ethical" approach to AI development, at last month's Paris summit.
- Vice President JD Vance said at the summit that "pro-growth AI policies" should be prioritized over AI safety regulations.
- The U.K. recently rebranded its AI Safety Institute as the AI Security Institute.
- And the U.S. AI Safety Institute could soon face workforce cuts.
The big picture: AI safety and security often overlap, but where exactly they intersect depends on perspective.
- Experts universally agree that AI security focuses on protecting models from external threats like hacks, data breaches and model poisoning.
- AI safety, however, is more loosely defined. Some argue it should ensure models function reliably — like a self-driving car stopping at red lights or an AI-powered medical tool correctly identifying disease symptoms.
- Others take a broader view, incorporating ethical concerns such as AI-generated deepfakes, biased decision-making, and jailbreaking attempts that bypass safeguards.
Between the lines: It's unclear which AI safety initiatives may be deprioritized as the U.S. shifts its approach.
- In the U.K., some safety-related work — such as preventing AI from generating child sexual abuse materials — appears to be continuing, says Dane Sherrets, AI researcher and staff solutions architect at HackerOne.
- Chris Sestito, founder and CEO of AI security company HiddenLayer, says he's concerned that AI safety will be seen as a censorship issue, mirroring the current debate on social platforms.
- But he says AI safety encompasses much more, including keeping nuclear secrets out of models.
What we're watching: AI researchers and ethical hackers have already been integrating safety concerns into security testing — work that is unlikely to slow down, especially given recent criticisms of AI red teaming in a DEF CON paper.
- But the biggest signals may come from AI companies themselves, as they refine policies on whom they sell to and what security issues they prioritize in bug bounty programs.
3. Study zeroes in on AI's youngest users
Nearly 30% of parents of kids ages 0-8 say their children have used AI for learning, according to new research from Common Sense Media.
Why it matters: Even the youngest of children are experimenting with a rapidly changing technology that could reshape their learning and critical thinking skills in unknown ways.
By the numbers: One in four parents of kids ages 0-8 told Common Sense their children are learning critical thinking skills from using AI.
- 39% of parents said their kids use AI to "learn about school-related material," while only 8% said they use AI to "learn about AI."
- For older children (ages 5-8), nearly 40% of parents said their child has used an app or device with AI to learn.
- 24% of children use AI for "creative content," like writing short stories or making art, according to their parents.
- Common Sense surveyed 1,578 parents of children 8 years old or younger last August.
Yes, but: Many parents said they didn't see a problem with their kids' AI use.
- More than half (61%) of parents of kids ages 0-8 said their kids' use of AI had no impact on their critical thinking skills.
- 60% said there was no impact on their child's well-being.
- 20% said the impact on their child's creativity was "mostly positive."
"The big findings around AI were really the most notable for older kids (ages 5-8)," Supreet Mann, director of research at Common Sense Media, told Axios.
- There were some parents of kids younger than 5 who reported that their children had used AI for learning and in other contexts, but "it's a pretty small percentage of the overall population," Mann said.
Reality check: You're supposed to be 13 or older to use OpenAI's ChatGPT, Google's Gemini, and Meta AI. To use Anthropic's Claude, you're supposed to be 18 or older.
4. Training data
- Apple may take years to deliver its full vision for Apple Intelligence. (Bloomberg)
5. + This
If you've ever failed a CAPTCHA, our editor Megan Morrone says you'll enjoy "I'm Not a Robot," which just won the Oscar for best live action short film. You can watch it in full here.
Thanks to Scott Rosenberg and Megan Morrone for editing this newsletter and Matt Piper for copy editing it.
Sign up for Axios AI+