Seeing the future of AI — through a board game

Illustration: Sarah Grillo/Axios
A project that's spent six years simulating scenarios of AI's future validates growing alarm among many observers that runaway competition will drive reckless adoption of unsafe technologies.
- These simulations aren't running on some massive supercomputer in the cloud — they're powered by people sitting around a table scattered with cards and dice.
Why it matters: Even some of those who believe powerful AI can be developed safely are worried that viewing the technology's development as a race will push AI makers toward dangerous choices.
State of play: Since 2019, a group of academics has been developing and refining Intelligence Rising, an interactive game that aims to simulate the development of advanced AI, with individual players taking on the roles of government leaders and company executives.
What they found: In a paper published last year, the game's developers warned that "a race dynamic generally emerges between tech firms, with firms emphasizing safety to governments but often deprioritizing it internally."
- Governments that see themselves as falling behind, meanwhile, sometimes resort to military action to prevent the race's leader from "deploying radically transformative AI" — a concern that the game's developers say has been rising over time.
- Other key conclusions: Plenty of troubles, including risks around misinformation and bias, will emerge well before the advent of highly powerful AI. And, overall, the technology can take wildly diverging paths.
How it works: Each player represents a key actor in the AI race. Typically there are four teams representing governments like China and the United States as well as tech companies like OpenAI or DeepSeek. If there are more players, they can add Google or another tech company.
- Games take about four to five hours, though an abbreviated version can be finished in about three hours.
- "We've spent a lot of time trying to get it into a shorter version, and it just doesn't work," Wichita State University professor Ross Gruetzemacher, one of the creators of Intelligence Rising, told Axios.
Gruetzemacher and his colleagues also created an online version of the game, but say the results from the board game are more instructive.
- "We find that in-person games are more engaging," Gruetzemacher said. "People on the online games tend to lose interest or work on emails or something."
The intrigue: Who's in power in the U.S. tends to be one of the biggest variables, Gruetzemacher said.
- "The race conditions, in general, are very sensitive to changes of power in the United States, and that's exactly what we're seeing right now," he said, adding it's "really not looking good at the moment for efforts to responsibly develop AI systems."
- Sometimes AI safety can still be prioritized if it is included as part of a broad definition of national security, he said.
- While the Trump administration has so far shown less interest in such issues, Gruetzemacher noted that the U.K. renamed its AI Safety Institute the AI Security Institute. "We will just have to see what happens with that," he said.
Between the lines: The game's makers face the same challenge the rest of society does — the state of the art in AI is advancing faster than they can keep up.
- "The unprecedented pace of technological progress in foundation models presents novel challenges that make it very difficult for experts and non-experts alike to develop a bigger picture perspective," the game's creators wrote in last year's paper.
- Meanwhile, those in the best position to coordinate global cooperation are often the governments and companies that are choosing to accelerate a race instead.
What's next: Gruetzemacher said he would like to get the game in front of a D.C. crowd, especially members of the Trump administration.
- "It would be great to help the administration realize that there has to be some sort of coordination or cooperation on algorithmic development — and that's just so you develop AI responsibly," he said. "It doesn't have anything to do with DEI. It's a national security concern."
