February 05, 2024
Welcome back, Pro readers. Ready for another week?
1 big thing: AI standards, please, tech industry tells NIST
Illustration: Aïda Amer/Axios
Leading tech companies working on AI know how complicated and costly it is when governments around the world set different rule books for an emerging technology, Ashley writes in her column today.
- That's why the industry is urging the U.S. to make guidelines for generative AI that at least aim to work around the world, per comments submitted to the National Institute of Standards and Technology.
- President Biden's executive order on AI ordered NIST to develop a "companion resource" to the existing AI Risk Management Framework specifically for generative AI, along with resources on development practices, evaluation and testing.
Why it matters: NIST is leading the way in creating frameworks for generative AI that the industry will have to adhere to closely, and these comments illuminate company principles and approaches around generative AI.
Quick take: No one company will get everything it wants out of NIST's efforts.
- But when the stakes are this high, with the government creating rules that could impact company behavior (and ultimately, profits) around a groundbreaking technology, there's a sense that industry sees government cooperation (and help) as key to U.S. success on AI.
- That's doubly true as Europe speeds ahead on AI, with member countries reaching a deal on the EU AI Act last week.
The big picture: Here's our snapshot of the big themes the tech industry shared with the government in its comments:
- Whatever you do, make it easy to adapt the rules to what we are required to do elsewhere in the world.
- Don't make the rules too strict or prescriptive.
- Work with us experts to craft rules, and please use some of what you already have.
- And maybe shout out what we're already doing (watermarking, internal auditing, open-source code sharing) that you like.
What they're saying: NIST, already heavily burdened with duties around AI and in desperate need of more funding, has 202 comments to work through before deciding how to proceed.
- OpenAI pointed to its own internal testing and risk auditing in its NIST comments and urged the government to partner with third-party domain experts.
- Google said a risk management framework for generative AI should provide a general roadmap of rules that work with other global standards currently being developed.
- Salesforce wrote that any framework shouldn't rely on watermarking as the primary way to detect AI-generated content, urging NIST to study other methods as well, including retrieval. Salesforce also agreed on the need for global interoperability.
- IBM also emphasized the importance of global harmony of standards: "We follow long-held principles of trust and transparency that make clear the role of AI is to augment, not replace, human expertise and judgement," the company's chief privacy and trust officer Christina Montgomery wrote.
- TechNet, a major lobbying group for tech CEOs, zeroed in on the existing legal protections that apply to the use of AI, urging NIST to build on that foundation.
- Meta said NIST should focus on filling gaps around generative AI and leverage standard-setting processes and partnerships already in the works across the industry.
- Amazon made similar comments: "NIST should ensure that any guidance it develops pursuant to the executive order on AI is informed by relevant technical standards that currently exist."
- Anthropic focused on benchmarks, writing that NIST should focus its "limited resources" on "building a robust and standardized benchmark for generative AI systems" that private companies can adhere to beyond their internal systems.
2. Exclusive: BSA The Software Alliance's 2024 plans
Illustration: Gabriella Turrisi/Axios
After a year of learning about artificial intelligence on Capitol Hill, a global software industry group is telling lawmakers to take action.
Driving the news: BSA The Software Alliance, which represents giants including Microsoft and IBM, is calling on Congress to pass laws related to bias and discrimination risks of AI, according to its 2024 agenda shared exclusively with Maria.
- BSA staff in recent weeks have been meeting with members of the House Energy and Commerce Committee and the Senate Commerce Committee to discuss the path forward for AI and privacy.
Details: BSA wants lawmakers to write legislation that requires impact assessments of high-risk AI use. Other items on the to-do list:
- Finally pass a strong federal privacy law.
- Secure critical infrastructure, consumer and company systems against cyber threats.
- To enable U.S. companies to thrive abroad, enforce digital trade agreements that foster cross-border data flows.
- Improve access to STEM for historically marginalized groups.
What they're saying: The group notes it will be a year of "digital transformation" as the adoption of AI modernizes agriculture, updates government IT, advances manufacturing and makes health care more accessible.
- "BSA is urging policymakers to make this a year of action to support the responsible development and use of artificial intelligence," BSA's U.S. government relations vice president Craig Albright said in a statement.
- "We will continue to work with lawmakers and be a loud voice calling for meaningful legislation."
Our thought bubble: BSA from the jump has focused its Hill efforts on the intersection of AI and privacy, providing granular input on the AI provisions of E&C's American Data Privacy and Protection Act.
- It's a strategy that recognizes the inextricable nature of the two tech issues and could help transcend Senate and House disagreements on what should be prioritized.
Yes, but: As much as BSA wishes for action this year, it's a long shot given the elections and government funding fights sucking all the air out of the room.
- States and the White House are likely to continue paving the way on AI.
Meanwhile, BSA is also steadily applying pressure in the EU, where it has a permanent office and where U.S. staff recently traveled.
- AI is the foremost issue there as EU AI Act implementation gets underway.
- GDPR updates and the EU-U.S. data privacy framework were also on the agenda in recent meetings with officials.
3. Hill hearing watch
Illustration: Tiffany Herring/Axios
Here's everything we're keeping an eye on this week on the Hill.
1. Science plus AI: Tomorrow at 10am ET, two House Science subcommittees hold a joint hearing examining federal agencies and the role of AI in driving new scientific discoveries.
- Tess deBlanc-Knowles, NSF's special assistant to the director for AI, Anthropic's Jack Clark and Oak Ridge National Laboratory's Georgia Tourassi are among those testifying before lawmakers.
2. Health care plus AI: Senate Finance meets Thursday at 10am ET for a hearing on algorithms and AI systems in health care.
3. "Zuckerbucks": On Wednesday at 9am ET, the Committee on House Administration gathers for a hearing titled "American Confidence in Elections: Confronting Zuckerbucks, Private Funding of Election Administration."
- As CRS notes in its report on the topic, Mark Zuckerberg and his wife "reported committing up to $419.5 million in the 2020 election cycle for grants to be distributed by two nonprofit organizations."
4. Global security: It's safe to expect tech policy to come up at the Senate Armed Services' Thursday hearing at 9:30am ET on "global security challenges and U.S. strategy."
✅ Thank you for reading Axios Pro Policy, and thanks to editors Mackenzie Weinger and David Nather and copy editor Brad Bonhall.
- Do you know someone who needs this newsletter? Have them sign up here.


