Feb 27, 2024 - Technology

Stanford study outlines risks and benefits of open AI models


Illustration: Annelise Capossela/Axios

Researchers at Stanford University's Institute for Human-Centered AI have published a paper aimed at creating a more precise understanding of the risks and benefits of open source AI.

Why it matters: The availability of open AI models affects everything from global geopolitics to domestic AI competition.

  • Without a framework for assessing risks stemming from open models, regulatory debates are contentious and hard to manage.

Context: Common definitions of open source software often don't match the reality of how AI is built.

  • The Stanford team used the White House's definition of open foundation models: those with "widely available model weights."
  • Government reactions to the rise of open models have ranged from alarm in the White House over bioterrorism threats to Beijing blacklisting certain types of generative AI training data and the EU offering regulatory exemptions to open models because of their greater transparency.

What they did: The researchers examined open foundation models including Llama 2 and Stable Diffusion XL.

  • They then articulated five distinctive properties of such models: broader access, greater customizability, the potential for local inference (running the model on one's own hardware), the inability to rescind access once a model is released, and weak monitoring of how a model is being used.
  • Any attempt to impose conditions on users of open models is "easy for malicious actors to ignore," the team concluded.

The paper identifies the main benefits as distributing decision-making power, reducing market concentration, increasing innovation, accelerating science and enabling transparency.

  • Open models "allow for greater diversity in defining what model behavior is acceptable" and because they're easily customizable, they "better support innovation across a range of applications."

Threat level: The researchers argue that there often isn't proof that theoretical risks have materialized and that risks such as disinformation, scams and bioterrorism all existed before generative AI.

  • The researchers contend that AI may amplify or accelerate those risks, but does not create them.

The intrigue: Ousted OpenAI board member Helen Toner gave "extensive feedback" to the paper's authors, who include Alondra Nelson, who formerly led the White House Office of Science and Technology Policy, and Rumman Chowdhury, who led Twitter's machine learning ethics team.

The big picture: Open model advocates come out on top because they now have a new way to keep the risks of their preferred models in perspective.

Reality check: While supporters of open models often frame their case in terms of democratizing access to AI, Cornell Tech's David Gray Widder tells Axios that, in earnings calls, "Mark Zuckerberg has said how their choice to open source PyTorch allows them to easily productize and profit from external contributions."
