NTIA calls for high-risk AI audits

Illustration: Allie Carl/Axios

NTIA is calling for the independent auditing of high-risk AI systems, according to a report released Wednesday.

Why it matters: Regulators and AI companies are turning to auditing as a way to help keep businesses accountable to safety standards, build trust with consumers and drive innovation.

Driving the news: NTIA said the government should require independent audits of high-risk AI systems that directly impact rights and safety.

  • High risk is going to look different depending on the sector, NTIA Administrator Alan Davidson told reporters.

The report's other recommendations for the government and companies include:

  • Implementing "AI nutrition labels" that spell out the training data, limitations, appropriate uses and other characteristics of an AI system.
  • Examining how liability rules can be applied to determine who is held accountable for AI system harms.
  • Funding key initiatives for evaluations of AI systems, including the AI Safety Institute and the National AI Research Resource.
  • Maintaining registries of high-risk AI deployments, incidents and audits.

NTIA is also examining the risks of open-source AI models. Comments on that notice were due Wednesday.

  • The accountability report and open-source request for comment are related, Davidson said on the call with reporters.
  • "Ultimately, if we want to have trust in AI systems, whether open or closed, it's got to be informed by this question of whether we can hold them accountable."

Between the lines: So much of the AI conversation in Washington has been driven by Microsoft and OpenAI, which largely run on closed, proprietary systems.

  • The NTIA process is a chance for competitors like Meta and Google, which have been promoting more open systems, to be heard and accounted for as standards are set.

What they're saying: Tech and AI companies wrote in to NTIA with plenty of thoughts on what the government should do about open AI models, with broad agreement that restrictions should not be arbitrarily placed on open models.

  • Companies argued open models are safe and help with U.S. competitiveness.
  • Meta wrote in its comments: "The focus should be on ensuring the responsible deployment of all models rather than focusing on whether the model weights are released."
  • Nick Clegg, Meta's president of global affairs, told Axios: "If the U.S. were to sort of vacate the field by pulling back on open sourcing, or even constraining the export of open source models, that vacuum will be filled by somebody."
  • Kent Walker, Google's president of global affairs, told Axios in a statement: "We support responsible openness, and release AI systems only when their benefits sufficiently exceed their risks."