November 09, 2023
It's Thursday. We're not publishing on Veterans Day, so we'll be back in your inbox Monday.
1 big thing: D.C. meets SF on AI
Yesterday's AI+ Summit in San Francisco. Photo: Chris Constantine/Axios
Ashley and Maria led a wide-ranging conversation on artificial intelligence innovation and regulation yesterday at Axios' inaugural AI+ Summit in San Francisco.
Why it matters: Policy conversations are often siloed in Washington, D.C., but the decisions made on Capitol Hill, the White House and across government agencies will be heavily felt in places like the Bay Area.
- The discussion featured tech industry staffers, academics, advocates for online privacy protections, and state government officials.
Here are five takeaways from the Pro Tech Policy team:
1. Government procurement that prioritizes AI companies of varying sizes will be a key driver of competition.
- Jonathan Porat, chief technology officer for the state of California, said governments can shape procurement processes in a way that gives smaller companies a better chance of landing lucrative government contracts.
- Generally, there was sentiment in the room that the biggest players in AI were calling too many shots in the regulatory conversation and smaller companies were being boxed out.
2. Unlike other technological advancements, AI will permeate so many aspects of a company that having a chief AI officer reporting directly to CEOs will be essential for companies' culture and operations.
- Yes, but: Others said such positions would eventually become obsolete.
- "I've always been a believer that it's the means, and not the end. Once you start centralizing this stuff in some individual with a title, I always worry that it takes on a life of its own," said Kareem Yusuf, senior vice president for product management and growth at IBM Software.
3. Companies think regulating the outcomes of AI applications, rather than the techniques used to build them, is the best way to regulate without stifling innovation.
- Attendees described it as focusing on "deployment" instead of "development."
- "There are some regulatory approaches that might unintentionally block open source innovation, like some of the licensing conversation, or requiring all AI models to have watermarking capabilities," said Daniel Zhang, senior manager for policy initiatives at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
4. AI's impact on energy and water consumption is less frequently discussed than its potential to address big societal issues like climate change.
- Panelists discussed the need for governments to address the industry's use of energy for compute power and the impact on electric grids.
- Some panelists thought the technology would correct the issue itself in time. Others suspected it would be the next big policy conversation about AI to bubble up.
5. AI can help address equity issues as long as conversations around it are made more approachable for everyone.
- "Students can persist in colleges using tools in a way they've never been able to previously," said Claire Fisher, senior director of the Foundation for California Community Colleges.
- Fisher said it's important for people not directly involved in tech to understand what is possible with the use of AI: "So I think starting from an opportunistic, positive framing, and then thinking about the carrot rather than the stick approach in some capacities."
2. November bill roundup
Illustration: Gabriella Turrisi/Axios
Here's a useful roundup of the tech bills introduced in the past few weeks.
1. Federal Artificial Intelligence Risk Management Act: Sens. Jerry Moran and Mark Warner introduced legislation that would require federal agencies to incorporate the NIST framework into their AI management efforts.
- Rep. Ted Lieu plans to introduce companion legislation.
2. Tech Safety for Victims of Domestic Violence, Sexual Assault and Stalking Act: Reps. Anna G. Eshoo and Debbie Lesko are proposing legislation that would establish more DOJ clinics to support people who have been harassed, controlled or stalked through technology.
- Sen. Ron Wyden is leading efforts in the upper chamber.
3. National Quantum Initiative Reauthorization Act: House Science Chair Frank Lucas and Ranking Member Zoe Lofgren unveiled their bipartisan NQIA reauthorization bill, which would expand the scope of the law to help quantum technology move into early-stage applied research.
- The act, which boosted research and development underway at the National Science Foundation, the National Institute of Standards and Technology, and the Energy Department, expired Sept. 30.
4. Government Surveillance Reform Act: Sens. Ron Wyden and Mike Lee, along with Reps. Lofgren, Warren Davidson and Andy Biggs, this week introduced a bill that would reauthorize Section 702 of the Foreign Intelligence Surveillance Act for four years.
- Axios cybersecurity reporter Sam Sabin has more on the legislative push.
3. Catch me up: Chips, political ads and more
Illustration: Shoshana Gordon/Axios
🚧 NEPA/chips update: House Natural Resources Chair Bruce Westerman opposes a semiconductor permitting provision being debated in the NDAA, he told our Axios Pro Energy colleague Nick Sobczyk.
- Why it matters: Westerman could potentially sink a bipartisan effort to speed environmental reviews for projects funded through the CHIPS and Science Act.
📎 FTC staff news: Elizabeth Wilkins, Lina Khan’s chief of staff and director of the office of policy planning, will leave this month, MLex’s Khushita Vasant reported, and White House competition official Hannah Garden-Monheit will join the agency in December.
🧐 Meta on political ads: Meta announced that in 2024, advertisers will have to disclose if AI is used to digitally alter or create "a social issue, electoral, or political ad."
- Sen. Amy Klobuchar said the move is "a step in the right direction," but "we can't solely rely on voluntary commitments."
- Klobuchar has introduced bipartisan legislation, the Protect Elections From Deceptive AI Act, to ban the use of AI to generate deceptive political ads and another bill, the REAL Political Ads Act, to require a disclaimer on political ads that use AI-generated images or video.
🥸 Microsoft on political ads: Microsoft said it will offer its services, including a new tool to help crack down on deepfakes, to help candidates and campaigns ahead of next year's elections around the world.
- Microsoft's Brad Smith and Teresa Hutson endorsed the Protect Elections From Deceptive AI Act in a blog post, adding that the company will "use our voice" to support legislative and legal changes to protect elections.
✒️ NIST input: NIST is extending until Dec. 22 the public comment period for the White House's National Standards Strategy for Critical and Emerging Technology.
📢 NSTC news: The Commerce Department has reached an agreement with a new nonprofit org called SemiUS to operate the future National Semiconductor Technology Center.
✅ Thank you for reading Axios Pro Policy, and thanks to editors Mackenzie Weinger and David Nather and copy editor Brad Bonhall.
- Do you know someone who needs this newsletter? Have them sign up here.