Q&A: CSET's Dewey Murdick

Nov 27, 2023

A statue of John Carroll, founder of the school, on the campus of Georgetown University. Photo: Robert Knopes/UCG/Universal Images Group via Getty Images

It's not often a university research center is in the middle of a Silicon Valley boardroom showdown, but that's where Georgetown's Center for Security and Emerging Technology found itself in recent days.

Driving the news: Per media reporting, an October CSET paper, "Decoding Intentions," co-authored by former OpenAI board member Helen Toner, was one source of tension between the board and CEO Sam Altman, who was ousted and later reinstated after massive employee pushback.

What they're saying: Axios spoke with CSET director Dewey Murdick about the center's role in shaping policy and about how Silicon Valley and Washington view AI.

The following conversation has been edited and condensed for clarity.

What is the role of a place like CSET?

We operate in this policy place where we're providing unbiased, nonpartisan, impactful information for policymakers. Our goal is to be grounded in evidence.

  • We don't have an ideology when we start a piece; we're committed to a rigorous, data-driven approach, and we make sure it's really actionable for policymakers.
  • Our incentive is to be helpful to people who are making really tough decisions about the intersection of national and economic security, public policy and governance.

What led to the October paper that's getting so much attention?

This particular paper was about decoding a really interesting problem, where you've got competitors trying to ensure they're doing the right kind of AI system development.

  • Where AI is being developed, if just saying "Trust me, I'm going to do a perfectly good job implementing it" has no cost [to the company] associated with it, then that's a problem.
  • Our authors were trying to say "Here are some options where you could actually signal your intentions [for responsible AI development] that cost something if you back away from it."
  • We were primarily looking at government entities, but we also included a case of where OpenAI and Anthropic and others in the corporate world were doing this kind of signaling; there were examples of where it's gone well and examples where we're not sure. The jury is still out on how well it's working.

The paper argues that some things companies offer to do may appear to be "costly" signals that could slow development, but really aren't. Is that what Altman took issue with in the paper?

I know nothing about what happened with OpenAI governance; it's been a wild watch to see what has been happening.

  • But [Altman] has often pointed to the governance structure at OpenAI as something that should give governments solace, like "Oh, I can be fired by the board because we have a unique governance structure."
  • That's one of the signals, a costly signal, that says we'll be able to handle a crisis and do the right thing. We will find out whether that's relevant.

What do you think comes next for OpenAI and policymakers, along with the role of CSET?

The reality right now is AI is very much in the minds of our policymakers, and they need to have trust that companies are doing the right thing.

  • I think this particular set of events will probably give policymakers and others pause. Maybe there will be greater scrutiny of these firms and how effective their governance is.
  • We're going to continue trying to raise the bar in these discussions, and make sure people are relying on solid evidence about the things they should think about. That probably will keep us out of the news. … But in this case, somehow CSET got inserted into a very live discussion, and this happens from time to time.

Will this particular experience impact how CSET engages with Silicon Valley going forward?

The fact that real people are being impacted by this technology means it's going to continue to be a hot topic. Lots of money has been invested, so people with real resources are going to be concerned with how this plays out.

  • I see no way that CSET will not continue to be engaged in these discussions. Sometimes things will flare up and we might be in a high-profile discussion; other times it will be quiet.

What is your main takeaway from what has unfolded at OpenAI and Toner's departure from the board?

Watching this story [with OpenAI] unfold, it does feel like people who want to maximize profits from the AI products they're developing, to get them out quickly and be commercially relevant, definitely have an edge now.

  • Which is concerning, because we have to be very careful with balancing this tension between innovation and keeping people safe.