Inside the AI race: A conversation with Meta's Joel Kaplan

A message from: Meta

In a conversation with Axios Publisher Nick Johnston, Meta's chief global affairs officer Joel Kaplan outlined what's at stake as the U.S. faces off with China in the global race for AI dominance. From compute access to pro-innovation policy, Kaplan emphasized that the U.S. must move faster to maintain its edge — and that policymakers' urgency may be the key to winning.
The background: How can the people in our audience today, many of whom may influence tech regulation, be a force for good, as you think about your goals, and America's goals related to AI?
Kaplan: The policies are going to be really important. For anybody to succeed in building personalized tech or building superintelligence, or personal superintelligence, in our case, you're going to need four things.
- Compute – the massive data centers that we talked about – energy, talent and data. Pro-innovation policies that ensure access to each of those things are going to be critical for making sure that we can bring superintelligence to life as quickly as possible, and more quickly than the Chinese.
- The Chinese are all in on AI and superintelligence. They're going to be investing a trillion dollars between now and 2030. They've got massive investments. This is a real race, and it's important that policymakers set the environment and get it right so that American companies are able to compete and make the kind of investments that are going to be necessary to get to where we want to go.
Next steps: When you talk to policy makers, do you think they get the sense of urgency?
Kaplan: I mean, some do, some don't. The AI action plan that the Administration released a couple of months ago is quite impressive at removing obstacles to the types of investment that we're talking about, whether that be energy, energy transmission and access to data.
- I think we have the framework for the U.S. to win. There are challenges at the state level. In 2025, something like 1,100 pieces of legislation were introduced that would regulate AI and create a patchwork of regulation that would be quite difficult for companies to comply with. Despite these challenges, I think people do understand how important this competition is.
Here's what else: What's the message then, if people go back to their agencies or their companies to rise to that challenge? Is it just continuing the sense of urgency? Is there nuance in there that you think the message needs?
Kaplan: I don't think there's nuance around the sense of urgency. I think it is pretty incontestable. When you look at what China is doing, the U.S. is leading, but we're not leading by much, and you can be sure that the Chinese government is going to remove the obstacles that the companies there will face. We saw that at the beginning of the year, when this company came out of nowhere, DeepSeek, and released a highly competitive open-source model. I think that was a wake-up call for a lot of people that the Chinese are serious competitors, and it's going to require a national focus that we haven't really seen on a technology issue in decades.
Okay, but: My sensibility is that there's not a lot that folks agree on on both sides of the aisle, but the urgency around AI might be one of them. Does that jibe with the conversations you have?
Kaplan: I think that's right. There is a consensus on how important this is.
- There are still differences. I wouldn't necessarily view them as partisan differences. There are concerns that policymakers want to make sure that we address things like safety — we're very focused on building in safety from the beginning — and particularly on things that could cause massive risks like chemical, nuclear or biological advances.
- We want to make sure that we can assure policymakers that we are alert to those risks, and the other companies are as well. That's why we've worked to put out safety frameworks that indicate exactly how we take this into account and what we build, and some other companies have as well. But I just think you can't lose the forest for the trees, which is for the U.S. to maintain its economic leadership in the coming decade and beyond. We're just going to have to win the race on AI. It's just that simple.
- I do think there has been a shift in the last seven or eight months reflected in the administration's AI action plan that shows a deep appreciation for that. Just in my time in government, I haven't seen that kind of intense focus on a technology policy issue.
Looking ahead: You mentioned some of the regulatory challenges — if there are 50 different AI regulatory frameworks, that would be a little bit of a challenge. Is there a flip side of that? Are there places that are getting this right, that have figured out how best to approach AI regulation?
Kaplan: I think the place that's gotten it the most right so far is the federal government, and that's unusual. You don't say that very often. There's just a general question about what's the right level at which to regulate AI. Given its importance, as we've talked about, for the economic and national security future of the United States, it just seems clear to us that that should happen at the federal level.
Worth a mention: Are you optimistic that that will continue?
Kaplan: I think they will stick with it, just because of the importance of it. Whether it's on economic interest or national security, it's going to be key to our ability to continue to lead in the world. It comes back to what we said at the top, that sense of urgency and that sense of importance of being at the forefront of this, because losing is not an option. I don't think it's an option for the United States and for our Western allies.
- One of the things that we talk about a lot right now is the transatlantic relationship. This is an area where we have to have a partnership between the U.S. and our allies, because, as we said multiple times, the competition is just so fierce, and we can't afford to be working at cross purposes on this.