Oct 24, 2023 - Technology

AI developers are failing on transparency, new index shows

Illustration of a robot holding up a finger to its mouth. (Illustration: Allie Carl/Axios)

A damning assessment of 10 key AI foundation models in a new transparency index is stoking new pressure on AI developers to share more information about their products — and on legislators and regulators to require such disclosures.

Why it matters: The Stanford, MIT and Princeton researchers who created the index say that unless AI companies are more forthcoming about the inner workings, training data and impacts of their most advanced tools, users will never be able to fully understand the risks associated with AI, and experts will never be able to mitigate them.

The big picture: Self-regulation hasn't moved the field toward transparency. In the year since ChatGPT kicked the AI market into overdrive, leading companies have become more secretive, citing competitive and safety concerns.

  • "Transparency should be a top priority for AI legislation," according to a paper the researchers published alongside their new index.

Driving the news: A Capitol Hill AI forum led by Senate Majority Leader Chuck Schumer Tuesday afternoon will put some of AI's biggest boosters and skeptics in the same room, as Congress works to develop AI legislation.

Details: The index measures models based on 100 transparency indicators, covering both the technical and social aspects of AI development, with only 2 of 10 models scoring more than 50% overall.

  • All 10 models have major transparency gaps, and the mean score across the models is 37 out of 100. "None release information about the real-world impact of their systems," one of the co-authors, Kevin Klyman, told Axios. (A rough sketch of the scoring arithmetic follows this list.)
  • Because 82 of the 100 indicators are met by at least one developer, the index authors say developers have dozens of opportunities to copy or build on competitors' practices to improve their own transparency.
  • The researchers urge policymakers to develop precise definitions of transparency requirements. They advise large customers of AI companies to push for more transparency during contract negotiations — or to partner with their peers "to increase their collective bargaining power."
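The index scores each model out of 100 by checking it against 100 transparency indicators. A minimal sketch of that kind of indicator-based scoring is below; the indicator names and example data are hypothetical, and the researchers' actual methodology may group or weight indicators differently.

```python
# Minimal sketch of indicator-based scoring (not the researchers' code).
# The real index uses 100 indicators; the names and data here are hypothetical.
from statistics import mean

INDICATORS = [
    "training_data_disclosed",
    "model_size_disclosed",
    "downstream_impact_reported",
]

# Hypothetical inputs: which indicators each model satisfies.
models = {
    "Model A": {"training_data_disclosed": True, "model_size_disclosed": True},
    "Model B": {"downstream_impact_reported": False},
}

def score(satisfied: dict) -> float:
    """Share of indicators met, scaled to a score out of 100."""
    met = sum(bool(satisfied.get(name)) for name in INDICATORS)
    return 100 * met / len(INDICATORS)

scores = {name: score(flags) for name, flags in models.items()}
print(scores)                 # per-model scores out of 100
print(mean(scores.values()))  # mean across models (37/100 in the published index)
```

Under these assumptions, a model's overall score is simply the count of indicators it satisfies, which matches how the article describes the 100-point scale.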

Between the lines: Developers are least transparent about "the ingredients and processes that go into building a foundation model," per Klyman, including model size and what data a model was trained on.

  • No developer offers a mechanism for redress to harmed users or other affected parties (such as artists), and none provides "externally reproducible or third-party assessments" of its harm reduction efforts.
  • Developers that describe their AI products as open source are more transparent than those that don't — with Meta, Hugging Face and Stability AI taking three of the top four positions in the ranking.

The other side: Amazon, whose Titan model was ranked lowest in the index, said its product was reviewed prematurely.

  • "Titan Text is still in private preview, and it would be premature to gauge the transparency of a foundation model before it's ready for general availability," Nathan Strauss, an Amazon spokesperson told Axios.
  • Meta, which released the top-rated Llama 2 model, declined to comment. OpenAI did not respond to a request for comment.

The intrigue: Developers including OpenAI, Cohere and AI21 Labs have jointly and publicly called for specific transparency actions. But Stanford's Rishi Bommasani, another index co-author, wrote that the developers behind the biggest models are becoming less transparent.

What they're saying: "It's worrying that companies often do not share rigorous evaluations of how their models could be misused," Klyman said, contrasting that approach with the willingness of executives to talk about longer-term and existential AI risks.

  • "We miss out on essential information about negative externalities of companies' most powerful technologies," Klyman said, lamenting the lack of detail about the environmental footprints and overseas workforces contributing to model development.

Be smart: Researchers from Stanford's Center for Research on Foundation Models (CRFM) and Institute on Human-Centered Artificial Intelligence (HAI), MIT Media Lab, and Princeton's Center for Information Technology Policy assessed the models based on publicly available information as of Sept. 15.