Helen Toner on the AI risk "you could not really talk about"

Photo illustration: Axios Visuals. Photo: Courtesy of Helen Toner
Helen Toner, who made waves in the AI world as one of the leaders of the failed effort to oust OpenAI's Sam Altman, is taking the helm of a major D.C. think tank aimed at engaging policymakers on the "fierce debates" around AI.
The big picture: Toner was recently appointed as the interim executive director of Georgetown University's Center for Security and Emerging Technology, which she says is diving deep into what AI "means for society."
This interview has been edited and condensed for clarity.
What should lawmakers in D.C. focus on if they're serious about regulating AI?
The AI policy landscape in general is very fractured and has a lot of disagreement, but something that a lot of people can agree on is that it would be much better to have more transparency and more visibility into these cutting-edge companies.
- That is something that Congress actually can do by just inviting executives to come to hearings and testify.
- That's an important power Congress has: to ask questions. What technologies are they developing? How are they testing them? What is the rate of improvement that they're seeing? What kind of risks are they measuring for and what results are they getting?
How did you react to the Trump administration's AI action plan?
The big question mark for me is going to be around implementation.
- Two of the key White House people on AI, Dean Ball and Lynne Parker, have since stepped out of the Office of Science and Technology Policy.
- So there's a question of whether the individual agencies have the ball and are going to move forward with the priorities outlined in that action plan, or whether there's now going to be a bit of a vacuum of White House coordination capacity, in which case it's not clear what exactly will happen with that action plan.
AI companies are staffing up in D.C. more than ever — what does it mean?
It is a real benefit that they deeply understand the technology, and that they can make sure that proposals that are on the table are actually realistic.
- At the same time, they clearly have a different set of incentives than what we would hope our elected officials and other policymakers are optimizing for, namely the broad public benefit and American interest writ large, rather than the pocketbooks of specific companies.
- There's often a disconnect between what the frontier AI company policy teams seem to be saying and thinking about the technology and what their own researchers and engineers are thinking and saying.
What's the biggest AI risk now compared to a few years ago?
A huge one that is just starting to be taken more seriously is what used to be the dark matter of AI policy: AI companions and people building relationships with AI systems.
- It used to be something you could not really talk about in polite company.
- But as we're starting to see issues around mental health and dependency and some really tragic stories, I think it makes sense to be paying more attention to those issues.
- I hope that it's possible to do that in a measured way, one that takes into account the benefits of people having access to those systems and the overall risk-benefit profile, rather than looking only at the worst cases, even though they are really awful.
