Dec 13, 2023 - Technology

Exclusive: Biden team wades into open source AI controversy

Illustration: Aïda Amer/Axios

The Biden Administration has big plans to tackle one of the AI boom's sharpest controversies — whether open source AI models make society safer or put the world at greater risk.

Why it matters: AI's ownership structure and intellectual-property rules will shape how the technology evolves — and the government's choices today to promote or restrict open source AI will set a course for decades to come.

The big picture: The White House's wide-ranging AI executive order this year tasked the National Telecommunications and Information Administration (NTIA) with studying the open source question and recommending actions.

NTIA administrator Alan Davidson tells Axios he wants to hear a wide range of viewpoints before making recommendations in a report due to the White House by next July.

  • "We've been given a pretty clear homework assignment," Davidson said. "We're not trying to prejudge it one way or the other."

State of play: Most of the AI models that power popular services like ChatGPT today — including projects at OpenAI, Google and Microsoft — are proprietary and closely held.

  • Other AI projects are available under full open source licenses, which means anyone can access their code and build on it.
  • Meta has released a number of its AI products under partial open-source terms.
  • Open source software, which often has roots in university labs rather than corporate offices, still underlies most of today's internet.

Between the lines: Some, raising the specter of bioterrorists inventing super-viruses with open source AI, argue that today's most advanced AI models are so powerful that they should be protected the way we guard nuclear secrets. Others maintain that publicly releasing powerful models will allow for more thorough testing and expand access beyond a few powerful companies.

Davidson said he was glad that President Biden's executive order sought further exploration into what role open source AI should play rather than endorsing or rejecting the approach.

  • "I'm glad no one jumped to one conclusion or the other," Davidson said. "That would be easy to do."
  • While reserving judgment, Davidson expressed optimism there are middle paths. "This isn't an either/or [situation]," he said. "We need policies that are going to promote both safety and allow for broad access to AI tools."
  • Middle-ground options could include approaches that vary based on risk, as well as ways to ensure that safety mechanisms can't easily be removed from open source models.
  • "We've been talking to a lot of thoughtful people who see nuance in the approach as a real possibility," Davidson said.

Of note: In terms of risk, Davidson said he is largely focused on present harms, such as bias, security and trust, rather than the existential risk that AI could destroy humanity — though he also said addressing the former could help prevent the latter.

Details: The NTIA, which is part of the Commerce Department, is holding an event online and at the Center for Democracy & Technology in Washington, D.C. on Wednesday to kick off its work.

  • A public comment period is slated to begin early next year, and additional in-person gatherings are also possible before the agency begins drafting its report.

Separately, the NTIA is continuing work on a report, expected early next year, on ways the government can use procurement guidelines and other means to encourage AI systems that are auditable and accountable.
