It's time to start preparing for AGI, Google says

Google DeepMind is urging a renewed focus on long-term AI safety planning even as rising hype and global competition drive the industry to build and deploy faster.
Why it matters: With better-than-human-level AI (or AGI) now on many experts' horizon, we can't put off figuring out how to keep these systems from running wild, Google argues in a paper released Wednesday.
Between the lines: The argument spotlights a continuing rift in the AI world.
- The Trump administration, many industry leaders and some policy experts are hellbent on beating China in the race to AGI.
- But many of the researchers and scientists responsible for achieving that goal remain fearful of unleashing potent new tech on the world before we're sure we can control it.
Driving the news: In the 145-page paper, Google DeepMind outlines its strategy to "address the risk of harms consequential enough to significantly harm humanity," dividing the concerns into four main areas:
- Deliberate misuse by humans
- Misalignment (where the AI takes action its programmers didn't intend)
- Mistakes (where the AI causes harm by accident)
- Structural risks (where the interaction of multiple AI agents could result in harm, with no one system at fault)
Zoom in: Google dives into the concerns around each category and offers ways to reduce the risks, including measures AI developers can take as well as broader societal shifts and policy changes.
- The paper is designed as a starting point for conversation rather than a set of step-by-step instructions.
- "It's not like, 'Oh, you just need to do X and you will be fine' or 'Oh, there's nothing you can do,'" Google DeepMind Chief AGI Scientist Shane Legg told Axios.
- Regulation needs to be part of society's response, Legg says. This will be "a very powerful technology, and it can and should be regulated."
The big picture: Google's paper comes as interest in addressing AI's risks has fallen significantly, especially in government circles, where the desire to beat other countries has seemingly supplanted the existential-risk concerns that were a hot topic as recently as last year.
This shift was on full display at the Paris AI Action Summit.
- "The AI future is not going to be won by hand-wringing about safety," Vice President JD Vance said. "It will be won by building — from reliable power plants to the manufacturing facilities that can produce the chips of the future."
- Leaders of European governments echoed the sentiment, as did a number of CEOs.
- "The biggest risk could be missing out," Google CEO Sundar Pichai said on stage. "Every generation worries that the new technology will change the lives of the next generation for the worse. Yet, it's almost always the opposite."
Yes, but: Excitement for AI's possibilities shouldn't overshadow safety concerns, Legg said.
- "I think safety has become a bad word in a certain political sphere," Legg said. "Among researchers, I have not seen this." Indeed, a number of prominent researchers criticized the tone of the Paris gathering.
- "Science shows that AI poses major risks in a time horizon that requires world leaders to take them much more seriously. The summit missed this opportunity," professor and AI pioneer Yoshua Bengio said on X.
- "Greater focus and urgency is needed on several topics given the pace at which the technology is progressing," Anthropic CEO Dario Amodei said in a statement.
The intrigue: It's difficult to predict when AGI will arrive, though many experts have been shortening their timelines.
- Google is now warning that AGI could plausibly arrive by 2030.
Even today's less-than-superintelligent AI offers examples of the kinds of issues Google warns about in its paper.
- A new paper from Anthropic found that today's large language models do more "thinking" than many people — including their creators — realize.
- While these models still output their results one token at a time, Anthropic says it saw evidence of deeper planning, such as when a model composes a poem.
- "We were often surprised by what we saw in the model," Anthropic wrote. "In the poetry case study, we had set out to show that the model didn't plan ahead, and found instead that it did."
- There have been other real-world cases of AI systems finding workarounds when the computing resources they need are missing, a behavior that can be handy but could also lead to unintended consequences.
