Commerce Department starts requiring AI model operators to report key security data
AI developers whose models pose risks to national security are now starting to report "vital information," including safety test results, to the Department of Commerce before releasing their models to the public, the White House said Monday.
Why it matters: This marks the start of the first formalized safety and security information-sharing program between some of the most powerful AI model developers and the federal government.
Driving the news: The White House hosted the first meeting of its new AI Council on Monday.
- The council, which includes top officials from a range of federal offices, met to discuss the progress they've made in implementing President Joe Biden's AI executive order.
Details: Part of that progress has been implementing a new requirement for certain AI model developers to share key details about their models with the federal government.
- The Commerce Department is exercising this authority under the Defense Production Act, a 1950 law that gives the president certain domestic economic controls, including the power to require companies to prioritize national defense contracts.
The other side: Industry groups have pushed back against this new requirement, arguing that a government review process will slow down innovation.
Meanwhile, the Commerce Department also released a proposed rule that would require cloud providers to report information about non-U.S. customers who use their services to train AI models.
Yes, but: A Commerce Department spokesperson declined to share which companies are required to comply with these new rules.