
Anthropic has proposed a policy framework for AI transparency that can be applied at the federal, state or international level, per an announcement shared with Axios.
Why it matters: Leading AI companies like Anthropic have a loud voice in the Trump administration, which is focused on AI competition and not interested in strict rules.
- Mandating or encouraging "transparency" so people have an idea of how the AI systems around them are working is a light-touch way to approach policy.
What they're saying: "Rigid government-imposed standards would be especially counterproductive given that evaluation methods become outdated within months due to the pace of technological change," reads a blog post from Anthropic.
Anthropic proposes some tenets for AI transparency, including:
- Transparency requirements should apply only to the largest and most capable frontier AI models, defined by thresholds such as computing power or annual revenue (for example, $100 million).
- Developers of such frontier models should have a "Secure Development Framework" laying out how they assess major risks, and make that framework available to the government, researchers and users.
- It should be a "violation of law for a lab to lie about its compliance with its framework."
What we're watching: We'll be eager to see how many ideas born of AI companies themselves end up in the Trump administration's upcoming AI action plan.
