AI standards, please, tech industry tells NIST
Leading tech companies working on AI know how complicated and costly it gets when governments around the world set different rulebooks for an emerging technology.
- That's why the industry is urging the U.S. to make guidelines for generative AI that at least aim to work around the world, per comments submitted to the National Institute of Standards and Technology.
- President Biden's executive order on AI ordered NIST to develop a "companion resource" to the existing AI Risk Management Framework specifically for generative AI, along with resources on development practices, evaluation and testing.
Why it matters: NIST is leading the way in creating frameworks for generative AI that the industry will be expected to follow closely, and these comments illuminate companies' principles and approaches to generative AI.
Quick take: No one company will get everything it wants out of NIST's efforts.
- But when the stakes are this high, with the government creating rules that could impact company behavior (and ultimately, profits) around a groundbreaking technology, there's a sense that industry sees government cooperation (and help) as key to U.S. success on AI.
- That's doubly true as Europe speeds ahead on AI, with member countries reaching a deal on the EU AI Act last week.
The big picture: Here's our snapshot of the big themes the tech industry shared with the government in its comments:
- Whatever you do, make it easy to adapt the rules to what we are required to do elsewhere in the world.
- Don't make the rules too strict or prescriptive.
- Work with us, the experts, to craft the rules, and please use some of what you already have.
- And maybe shout out what we're already doing (watermarking, internal auditing, open-source code sharing) that you like.
What they're saying: NIST, already heavily burdened with duties around AI and in desperate need of more funding, has 202 comments to work through before deciding how to proceed.
- OpenAI pointed to its own internal testing and risk auditing in its NIST comments and urged the government to partner with third-party domain experts.
- Google said a risk management framework for generative AI should provide a general roadmap of rules that work with other global standards currently being developed.
- Salesforce wrote that any framework shouldn't rely on watermarking as the primary way to detect AI-generated content, urging NIST to study other methods as well, including retrieval. Salesforce also agreed on the need for global interoperability.
- IBM also emphasized the importance of global harmony of standards: "We follow long-held principles of trust and transparency that make clear the role of AI is to augment, not replace, human expertise and judgement," the company's chief privacy and trust officer Christina Montgomery wrote.
- TechNet, a major lobbying group for tech CEOs, zeroed in on the existing legal protections that apply to the use of AI, urging NIST to build on that foundation.
- Meta said NIST should focus on filling gaps around generative AI and leverage standard-setting processes and partnerships already in the works across the industry.
- Amazon made similar comments: "NIST should ensure that any guidance it develops pursuant to the executive order on AI is informed by relevant technical standards that currently exist."
- Anthropic focused on benchmarks, writing that NIST should focus its "limited resources" on "building a robust and standardized benchmark for generative AI systems" that private companies can adhere to beyond their internal evaluations.