Axios Pro Exclusive Content

Scoop: Thune readies AI certification bill

Illustration of a US Capitol dome made out of binary code.

Illustration: Brendan Lynch/Axios

A key senator is quietly shopping a new plan for the federal government to require companies that operate AI systems to self-test and certify them, per a copy of a draft discussion bill seen by Axios.

Why it matters: Sen. John Thune's Artificial Intelligence Innovation and Accountability Act is an early attempt to detail how certification could work.

  • The bill tackles how to define AI's varying threat levels.
  • Defining which types of AI should face a specific set of safety standards is difficult, and the same question has been met with skepticism in the context of a licensing regime, an idea championed by big industry players and some lawmakers.
  • Thune's office thinks self-certification with risk-based guardrails is superior to licensing, since it would create fewer bottlenecks with the federal government and enable more innovation, communications director Ryan Wrasse told Axios.

How it works: Different categories of AI would be subject to different Commerce Department requirements.

1) Critical high-impact AI is defined in the bill as a system that impacts biometric identification, management of critical infrastructure, criminal justice, or fundamental or constitutional rights.

  • Commerce would have to develop a five-year plan for companies to test and certify their own critical high-impact AI systems to comply with government safety standards.

2) High-impact AI would face a separate self-certification and impact-assessment process. It's defined as systems developed to impact housing, employment, credit, education, physical places of public accommodation, healthcare, or insurance in a manner that poses a significant risk to fundamental constitutional rights or safety.

3) Generative AI, defined as a system that generates novel outputs based on a foundation model, would be subject to self-certification requirements only if an application meets the definition of critical high-impact or high-impact.

  • Consumers must be told when a platform is using generative AI; this notification requirement applies only to generative AI.
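For illustration only, the tiering described above can be sketched as a simple rules lookup. The category names and covered domains come from the draft's definitions as summarized here; the function names, variable names, and domain strings are hypothetical, not from the bill text:

```python
# Hypothetical sketch of the draft bill's risk tiers (names illustrative).

# Domains whose systems the draft treats as "critical high-impact."
CRITICAL_DOMAINS = {
    "biometric identification",
    "critical infrastructure",
    "criminal justice",
    "fundamental or constitutional rights",
}

# Domains whose systems can be "high-impact" if they pose significant risk.
HIGH_IMPACT_DOMAINS = {
    "housing", "employment", "credit", "education",
    "public accommodation", "healthcare", "insurance",
}

def classify(domains, significant_risk=False):
    """Return the draft's category for a system touching the given domains."""
    domains = set(domains)
    if domains & CRITICAL_DOMAINS:
        return "critical high-impact"
    if domains & HIGH_IMPACT_DOMAINS and significant_risk:
        return "high-impact"
    return "uncategorized"

def must_notify_consumers(is_generative):
    # The consumer-notification requirement applies only to generative AI.
    return is_generative
```

Under this reading, a generative system would self-certify only when `classify` places it in one of the two impact tiers, while the notification duty tracks only whether the system is generative.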

What they're saying: "We appreciate Sen. Thune's leadership on smart, targeted AI legislation, and particularly his focus on accountability, transparency and industry taking responsibility for their AI innovations," said IBM chief privacy and trust officer Christina Montgomery.

By providing new definitions, lawmakers are acknowledging generative AI's broad applications, from coming up with a cake recipe to creating a deepfake of a world leader.

Of note: The bill places trust in companies self-testing and certifying critical high-impact applications, but Commerce would have enforcement teeth.

  • If either Commerce or the company discovers noncompliance and it is not appropriately remedied, Commerce could bring a civil action against the company.

The bill could address concerns raised by smaller AI players, particularly in the open source and research community, that complying with a certification regime may be too resource-intensive and shut out competition.

  • It exempts platforms that don't employ more than 500 people or collect personal data of more than 1 million people per year, for example.

What's next: Thune's team is having conversations about the bill with members of both parties on the Commerce Committee, Wrasse said, adding that he expects the bill to be formally introduced after the August recess.

  • Defining high risk will be a key topic explored in a series of Senate "insight forums," Majority Leader Chuck Schumer said this week.