Anthropic says AI should be made "boring"
Jack Clark, co-founder of leading AI company Anthropic, says he knows too much about what AI can do, good and bad. He's in Washington this week to pass some of that knowledge on to lawmakers.
- Axios sat down with Clark, who formerly worked as a journalist covering AI, amid his meetings on Capitol Hill.
- Anthropic, which competes closely with OpenAI but is seen as the slower-paced and more measured of the two, got a pledge of up to $2 billion in investment from Google last year.
What he's saying: "We generally want to be quite honest brokers, to generate information and pass it on," Clark said.
- "Part of why we're doing that is to give [lawmakers] a sense of how important it is for the government itself to develop and understand this technology."
Details: Some of the things Clark thinks are essential to ensure that AI is being used responsibly and the U.S. can continue to lead:
- Clarity on fair use and copyright rules for training AI systems.
- Ample funding for the National Institute of Standards and Technology, the National Artificial Intelligence Research Resource and the U.S. AI Safety Institute for standard setting and testing.
- Legislation that would guarantee people a right to know when they're speaking to an AI system.
Clark said that when he visits Washington, "I am made uneasy by how much information we have about this technology" relative to what lawmakers have.
Yes, but: "What's encouraging is that people in Washington have put AI on the agenda as something people view as legitimate and worth spending time on," he said.
- "Where things have been slightly less good is … getting ahead of ourselves and trying to come up with perfect, intricate regulatory regimes when we're starting from close to zero on AI."
- The industry and government should figure out how to make AI "boring," Clark said, by adopting the kind of uniform regulatory and testing regimes that are standard in other industries.
State of play: With the 2024 election approaching, Anthropic does not allow its Claude chatbot to be used for political campaigning, a position similar to that of other major AI companies.
- But never say never, Clark said, if standards and testing improve.
- "We're taking a somewhat conservative position … but if you look over history, things that would seem really confusing tend to then become boring and standardized."
The bottom line: The U.S. is leading on AI development, Clark said, but it's fragile. "It can change very quickly," he said. "You don't want to lose."
- "If there's a bad misuse or accident it'll set the industry back decades, and we can't afford that, so having government regulation is essential," he said.