AI giants pledge to share new models with feds
Illustration: Sarah Grillo/Axios
OpenAI and Anthropic will give a U.S. government agency early access to major new model releases under agreements announced on Thursday.
Why it matters: Governments around the world have been pushing for measures — both legislative and otherwise — to evaluate the risks of powerful new AI algorithms.
Driving the news: Anthropic and OpenAI have each signed a memorandum of understanding to allow formal collaboration with the U.S. Artificial Intelligence Safety Institute, a part of the Commerce Department's National Institute of Standards and Technology.
- In addition to early access to models, the agreements pave the way for collaborative research around how to evaluate models and their safety as well as methods for mitigating risk.
Between the lines: The U.S. AI Safety Institute was set up as part of President Biden's AI executive order.
- Anthropic already shares some of its models with the UK AI Safety Institute.
What they're saying: "Safety is essential to fueling breakthrough technological innovation," U.S. AI Safety Institute director Elizabeth Kelly said in a statement. "These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI."
- Meanwhile, the AI companies also praised the arrangement. "Safe, trustworthy AI is crucial for the technology's positive impact," Anthropic co-founder and head of policy Jack Clark said. "Our collaboration with the U.S. AI Safety Institute leverages their wide expertise to rigorously test our models before widespread deployment."
- OpenAI chief strategy officer Jason Kwon echoed Clark's sentiments. "We believe the institute has a critical role to play in defining U.S. leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on."
