Altman enjoys Hill honeymoon phase
Senators on Tuesday used OpenAI CEO Sam Altman's first time testifying on Capitol Hill as an opportunity to address the generally agreed-upon risks and rewards of artificial intelligence.
The intrigue: Usually tech executives face a combative grilling from both parties when they come to Congress, but Altman is being welcomed as an entrepreneur keeping the U.S. competitive in a global tech race — at least for now.
- As the heads of social media giants like Meta's Mark Zuckerberg eventually learned, lawmakers might change their tone toward Altman and other AI leaders as the impacts of the technology become clearer.
The big picture: Lawmakers showcased an understanding of AI dangers, including the weaponization of disinformation, disproportionate harms to marginalized groups, and a consolidation of power among tech companies.
- They also recognized the potential of the technology to cure diseases, fight climate change and create more equity.
Key policy themes that emerged:
1) Social media regrets: Senators on the Judiciary panel said they don't want to botch policymaking on AI as they did on social media companies.
- "Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and the risks become real," said Richard Blumenthal, chair of the privacy, technology and the law subcommittee.
- Altman: "I think it’s tempting to use the frame of social media. But this is not social media. This is different. And so the response we need is different."
- Reality check: Congress is still trying to pass a baseline online privacy law, update liability rules for the internet age and establish protections for children online. And that was before the AI craze.
2) Transparency: "Nutrition labels" for AI products were floated as a way to describe the quality or trustworthiness of information being presented and detail where it came from.
- Altman, noting the elections next year, said people need to know if they're talking to AI or seeing a fake image.
- People were able to adapt to Photoshop, and AI "will be like that but on steroids," he said.
- Reality check: It was only after Russian adversaries meddled in the 2016 elections that Congress began to grapple with social media's impact on elections. Companies have grown more sophisticated since then, but the risks are not gone, and homegrown tactics to mislead people online about voting have grown exponentially.
3) Jobs: Blumenthal said his "biggest nightmare" is the impact of AI on employment.
- Altman responded there will be more and better jobs on the other side of this AI hype cycle.
- AI is not a "creature" but a "tool" that will be used for tasks, according to Altman.
- Reality check: Lawmakers won't hesitate to abandon their goodwill toward AI CEOs like Altman if job losses hit their districts.
4) Data privacy: Subcommittee ranking member Josh Hawley highlighted AI's potential to supercharge "the war of attention" through personal data collection.
- Hyper-targeting of ads through AI models is not something OpenAI does now.
- But NYU professor Gary Marcus, who was also testifying, questioned how OpenAI's business model might change now that the company is bolstering its partnerships with Microsoft.
- Reality check: OpenAI and other companies competing in generative AI say they want smart regulation, but the product and business sides tell a different story. They're all moving as fast as they can.
5) Licensing: This is one area in which the differences between OpenAI and IBM, the other company testifying before lawmakers, were made crystal clear.
- In a dicey exchange with Sen. Lindsey Graham, Altman said a new agency should be created with the power to issue and revoke licenses.
- IBM's Christina Montgomery said there shouldn't be a new agency to regulate the technology and that licenses should "potentially" be required only for some types of AI.
- Reality check: A new agency may struggle with the staffing and funding issues that existing agencies have repeatedly called on Congress to remedy.
6) Liability: Section 230 came up a lot, with lawmakers agreeing it gave tech platforms too much latitude in avoiding liability. But there seems to be consensus that the statute should not apply to AI, so ideas around new liability regimes could crop up.
- Hawley asked whether Altman believes OpenAI should be held liable for harming individuals.
- Altman said he thinks there are current laws under which OpenAI can be sued, but "if the question is, are clearer laws about the specifics of this technology and consumer protections a good thing, I would say definitely yes."
- Reality check: Altman may believe Section 230 doesn't apply to generative AI models. But he is unlikely to accept sweeping legal liability either.
What's next: Tuesday's hearing is the first in a series that will feature more tech executives. Altman, meanwhile, is slated to brief House members, including leadership, later in the day.
Be smart: Lawmakers want to make sure they have a good grasp on such a novel technology before they begin to regulate it, but Congress continues to lag on certain issues that apply to AI, such as privacy protections.
- Around 60 lawmakers dined with Altman on Monday night, peppering him with questions for more than two hours.
- AI Caucus co-chair Rep. Anna Eshoo said after the dinner that "members are learning a great deal from Sam Altman. He's so forthcoming. There isn't anything that is menacing [about] him."