Mar 6, 2024 - Technology

Exclusive: New approach to regulating AI

Illustration of a US flag, but the stars are replaced with binary numbers.

Illustration: Maura Losch/Axios

AI can be regulated using templates from industries including financial services, cybersecurity and nuclear energy, a new advocacy group says.

Why it matters: Americans for Responsible Innovation launched Wednesday in an attempt to refute Silicon Valley's criticism that Washington doesn't know how to regulate its revolutionary products.

  • The bipartisan group fills a new niche: It's pushing a comprehensive set of shortcuts to AI rules, while not representing the interests of industry players or partisan groups.
  • ARI says the best way to protect the public while maintaining America's AI competitive advantage is to adapt successful rules from other industries that pose safety risks.

Catch up quick: The organization is founded by former Congressman and senior defense official Brad Carson and tech entrepreneur Eric Gastfriend.

  • ARI's advisory board includes Stanford Digital Economy Lab director Erik Brynjolfsson and Diane Rinaldo, former acting administrator of the National Telecommunications and Information Administration.
  • Douglass Vijay Calidas, a former chief counsel to Sen. Joe Manchin (D-W.Va.) and former chief of staff to Sen. Amy Klobuchar (D-Minn.), will liaise with the federal government and Congress.

The big picture: Trust in AI companies and regulators is dropping, and federal agencies have threadbare funding to ensure safe AI.

  • ARI builds on the approach of President Biden's AI executive order — using existing federal bodies to govern AI — by pinpointing examples that offer shortcuts to AI regulation.

ARI's proposals start with an AI Auditing Oversight Board — similar to the Public Company Accounting Oversight Board, a non-profit established by Congress — to ensure integrity in external AI audits. Other ideas include:

  • More funding for the Commerce Department's National Institute of Standards and Technology, likely making it the leading federal AI regulator.
  • Supply chain coordination through a U.S.-led AI Suppliers Group across democracies, modeled on the Nuclear Suppliers Group formed during the Cold War.
  • "Know Your Customer" regulations — pioneered in banking, these would limit who can use U.S. cloud resources to train AI models.
  • Incident reporting databases — borrowing from the cybersecurity world.

Behind the scenes: Carson said he realized AI would revolutionize warfare while serving as Under Secretary of the Army, but ChatGPT's arrival convinced him to build a bipartisan team outside Silicon Valley to advise Congress and federal agencies on civilian impacts.

Follow the money: The group's seed funding comes from Carson (a Democrat) and Gastfriend (an effective altruist), with Republicans and libertarians on the group's board working to "show that our approach has broad support across the political spectrum," Carson said.

What they're saying: "We regulate drugs differently from airplanes, banks, and nuclear power plants" because of their unique characteristics, "but the regulatory toolkit of government is adaptable and we believe it can work for AI as well," Carson said.

  • "The public interest needs an advocate," and tech companies are too conflicted to offer it, Carson said.

Reality check: U.S. lawmakers have never passed comprehensive regulation of software or digital platforms.

  • Lawmakers and regulators will continue to face armies of Big Tech lobbyists and communicators, in part because AI startups are filled with refugees from the platforms that evaded regulation in recent years.

What's next: ARI will push for modernization and more funding at the Department of Commerce, which is slated for budget cuts this year.
