The battle brewing over AI licensing
Regulating artificial intelligence through a licensing regime is gaining steam in Congress, but some experts warn that could derail innovation and competition.
Driving the news: OpenAI CEO Sam Altman made waves on Capitol Hill in May when he called on lawmakers to regulate his own industry in part by giving an agency the power to issue and revoke licenses.
- Shortly afterward, Microsoft released an AI regulation blueprint saying a new agency would be best suited to issue licenses specifically for "highly capable" AI foundation models.
- But smaller companies and researchers worry a cumbersome licensing regime would shut out players that don't have the resources to comply.
Reality check: Even though Altman and other industry players have expressed enthusiasm for regulation, Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, said he worries that once specific legislation is introduced, companies will argue that every proposal is not quite right, as they have with social media bills.
What they're saying: Sen. Josh Hawley calls for a licensing regime in his AI legislation framework and said in an interview that he has discussed it with Sen. Richard Blumenthal.
- "It would be more of a disclosure regime where you would have some guardrails in order to get the license. I wouldn’t want it to be some hugely onerous regime that only the major corporate players can comply with. That would defeat the purpose."
- "My big concern with AI is it can be a huge transfer of power to the biggest corporations anyway that are already too big and too powerful," Hawley added.
- Blumenthal told Axios that ensuring players with fewer resources can comply is one of the overall challenges of regulation, and that there is a way to do it, though he didn't offer one.
In its blueprint, Microsoft acknowledges that one of the first challenges of setting up a licensing regime would be defining exactly what a highly capable model is.
- "The objective is not to regulate the rich ecosystem of AI models that exists today and should be supported into the future, but rather the small number of AI models that are very advanced in their capabilities and in some cases, redefining the frontier."
- Microsoft says the best way to decide who must obtain a license is to assess either the specific risks a product poses to safety and security or its capabilities.
- Stability AI public policy head Ben Brooks said it's key that regulators know exactly what they're measuring for before thinking about licensing — a difficult task because of the many risk factors out there (including quality of data, how prone a model is to bias and whether it can access the internet or publish content).
- That's why it's important to put in place the infrastructure to measure risk at an organization like the National Institute of Standards and Technology, which needs more funding, he added.
- Because capability-based licensing will need more research and discussion, Microsoft is proposing using compute power as the determining factor in the meantime.
Yes, but: Compute power is just one of many factors that can determine risk — and more power does not necessarily equate to higher risk, making it a faulty form of measurement, experts said.
Meanwhile, Google and IBM say they favor drawing on the sector-specific expertise of various regulatory agencies instead of creating one new one for AI.
- "We recommend that the Administration support an approach to regulating AI that prioritizes empowering every agency to be an AI agency," IBM said in a submission Friday to the Office of Science and Technology Policy.
- IBM warned that licensing would present "an enormous obstacle" to AI's complex value chain and fail to account for a constantly evolving understanding of the technology's risks and capabilities.
Threat level: Some worry that open source models would be the first casualty of a licensing regime: such models are by nature decentralized, while obtaining a license would likely require a single point of control.
Zoom in: R Street Institute's Adam Thierer described Meta's open source large language model, LLaMA, as the "canary in the coal mine." The company has not taken a position on licensing but is preparing to expand LLaMA from research to commercial use.
Zoom out: Brooks said that although Stability AI can be transparent about its model’s performance, limitations and risks, the company can't know exactly how it will be used by the broader community.
- "We are hesitant about the idea of a license or a line in the sand that enables a handful of firms to continue cutting-edge research while making it very difficult or impossible for the rest of the developer ecosystem to participate."
What we're watching: Where giants like Amazon, which has a partnership with Stability AI, and Meta land on this issue and the safeguards they push for will indicate what's to come for the open source community.
- Congressional committees will have to hash out the thorny details, said Sen. Todd Young, who was tapped by Senate Majority Leader Chuck Schumer to help steer the upper chamber's process.
- "It’s not our intention to develop solutions for each of these matters but instead provide a general framework and then allow the committees of jurisdiction to work their will."