Tech Policy

May 16, 2023

Axios Pro Exclusive Content

Good afternoon! It's all AI, all the time today — let's get started.

1 big thing: Altman enjoys Hill honeymoon phase

Altman testifying before senators today. Photo: Eric Lee/Bloomberg via Getty Images

Senators used OpenAI CEO Sam Altman's first time testifying on Capitol Hill today as an opportunity to address the generally agreed-upon risks and rewards of artificial intelligence, Maria and Ashley report.

The intrigue: Usually tech executives face a combative grilling from both parties when they come to Congress, but Altman is being welcomed as an entrepreneur keeping the U.S. competitive in a global tech race — at least for now.

  • As the heads of social media giants like Meta's Mark Zuckerberg eventually learned, lawmakers might change their tone toward Altman and other AI leaders as the impacts of the technology become clearer.

The big picture: Lawmakers showcased an understanding of AI dangers, including the weaponization of disinformation, disproportionate harms to marginalized groups, and a consolidation of power among tech companies.

  • They also recognized the potential of the technology to cure diseases, fight climate change and create more equity.

Key policy themes that emerged:

1) Social media regrets: Senators on the Judiciary panel said they don't want to botch policymaking on AI as they did with social media companies.

  • "Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real," said Richard Blumenthal, chair of the privacy, technology and the law subcommittee.
  • Altman: "I think it’s tempting to use the frame of social media. But this is not social media. This is different. And so the response we need is different."
  • Reality check: Congress is still trying to pass a baseline online privacy law, update liability rules for the internet age and establish protections for children online. And that was before the AI craze.

2) Transparency: "Nutrition labels" for AI products were floated as a way to describe the quality or trustworthiness of information being presented and detail where it came from.

  • Altman, noting the elections next year, said people need to know if they're talking to AI or seeing a fake image.
  • People were able to adapt to Photoshop, and AI "will be like that but on steroids," he said.
  • Reality check: It was only after Russian adversaries meddled in the 2016 elections that Congress began to grapple with social media's impact on elections. Companies have grown more sophisticated since then, but the risks are not gone, and homegrown tactics to mislead people online about voting have grown exponentially.

3) Jobs: Blumenthal said his "biggest nightmare" is the impact of AI on employment.

  • Altman responded there will be more and better jobs on the other side of this AI hype cycle.
  • AI is not a "creature" but a "tool" that will be used for tasks, according to Altman.
  • Reality check: Lawmakers won't hesitate to abandon their goodwill toward AI CEOs like Altman if job losses hit their districts.

4) Data privacy: Subcommittee ranking member Josh Hawley highlighted AI's potential to supercharge "the war of attention" through personal data collection.

  • OpenAI does not currently hyper-target ads through its AI models.
  • But NYU professor Gary Marcus, who was also testifying, questioned how OpenAI's business model might change now that the company is bolstering its partnerships with Microsoft.
  • Reality check: OpenAI and other companies competing in generative AI say they want smart regulation, but the product and business sides tell a different story. They're all moving as fast as they can.

5) Licensing: This is one area in which the differences between OpenAI and IBM, the other company testifying before lawmakers, were made crystal clear.

  • In a dicey exchange with Sen. Lindsey Graham, Altman said an agency should be created with the power to issue and revoke licenses.
  • IBM's Christina Montgomery said there shouldn't be a new agency to regulate the technology and that licenses should "potentially" be required only for some types of AI.
  • Reality check: A new agency may struggle with the staffing and funding issues that existing agencies have repeatedly called on Congress to remedy.

6) Liability: Section 230 came up a lot, with lawmakers agreeing it gave tech platforms too much latitude in avoiding liability. But there seems to be consensus that the statute should not apply to AI, so ideas around new liability regimes could crop up.

  • Hawley asked whether Altman believes OpenAI should be held liable for harming individuals.
  • Altman said he thinks there are current laws under which OpenAI can be sued, but "if the question is, are clearer laws about the specifics of this technology and consumer protections a good thing, I would say definitely yes."
  • Reality check: Altman may believe Section 230 doesn't apply to generative AI models. But that doesn't mean he would accept full legal liability for his company's technology.

2. AI bill roundup

Illustration of legislation being pierced with fountain pens. Illustration: Aïda Amer/Axios

Lawmakers on Capitol Hill want to avoid getting in the way of artificial intelligence's potential. But they're also looking to put guardrails in place for the rapidly evolving technology.

What we're watching: The American Data Privacy and Protection Act, which is expected to be reintroduced soon, includes AI language that could serve as a basic structure for national guardrails.

  • AI falls under the bill's definition of covered algorithms, which would be subject to detailed and comprehensive impact assessments, and companies would be required to mitigate harms before deploying the technology.

Risk-targeted approaches:

1) Rep. Yvette Clarke's REAL Political Advertisements Act would require campaigns to disclose when they use generative AI in political ads.

  • Sens. Amy Klobuchar, Cory Booker and Michael Bennet introduced companion legislation yesterday.

2) Reps. Ted Lieu, Don Beyer and Ken Buck introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act, which would bar federal funds from being used to let an automated system launch nuclear weapons without meaningful human control.

Government's own use:

1) Bennet has introduced a number of AI-related bills, including:

  • The ASSESS AI Act, which would create a task force to give recommendations on responsible AI use to Congress.
  • The Overseeing Emerging Technology Act, which would require certain federal agencies to "designate a senior official able to advise on the responsible use of emerging technologies like artificial intelligence."

2) Lieu has suggested creating a federal agency to govern the use of AI.

3) Sens. Gary Peters and Mike Braun introduced a bill this week to establish AI training programs for the federal workforce, specifically for federal supervisors and management officials.

4) Senate Majority Leader Chuck Schumer has also called for AI regulation, putting forth ideas that would serve as the groundwork for legislation, as Axios previously reported.

Of note: On the heels of a White House meeting with AI executives this month, the administration said the Office of Management and Budget would release draft policy guidance on the federal government's use of AI systems.

Thank you for reading Axios Pro Policy, and thanks to editors Mackenzie Weinger and David Nather and copy editor Brad Bonhall.

  • Do you know someone who needs this newsletter? Have them sign up here.