
The organizers of a high-profile open letter last March calling for a "pause" in work on advanced artificial intelligence lost that battle, but they could be winning a longer-term fight to persuade the world to slow AI down.
The big picture: Almost exactly six months after the Future of Life Institute's letter — signed by Elon Musk, Steve Wozniak and more than 1,000 others — called for a six-month moratorium on advanced AI, that work is still charging ahead. But the massive debate the letter set off has deepened public unease with the technology.
- The letter helped normalize the open expression of deep fears about AI. Voters began voicing those worries to pollsters, the White House pushed tech CEOs to make voluntary safety commitments, and regulators from Europe to China raced to finish AI regulations.
Between the lines: In recent months, the AI conversation around the world has focused intensely on the social, political and economic risks of generative AI.
Driving the news: The British government is gathering a who's who of deep thinkers on AI safety at a global summit Nov. 1-2. The event is "aimed specifically at frontier AI," U.K. Deputy Prime Minister Oliver Dowden told a conference in Washington on Thursday afternoon.
- OpenAI, Meta and regulators have typically used the term "frontier AI" to distinguish the largest and potentially riskiest AI models from less capable technologies.
- "You can never in one summit change the world, but you can take a step forward" and "create an institutional framework" for AI safety, Dowden said.
What they're saying: Anthony Aguirre, executive director of the Future of Life Institute, which organized the "pause" letter, told Axios in an interview that he's "pretty hopeful" the U.K. process is now the best bet for "slowing down" AI development — a subtle reframing of his original "six-month pause" goal.
- Aguirre thinks it's "absolutely critical" that China play a leading role in the U.K. summit.
- While acknowledging the surveillance implications of AI regulation by Beijing, Aguirre noted that China's approval process for products built on foundation models, such as chatbots, is proof that governments can slow the release of AI if they want to.
- "Rushing, rushing, rushing to fundamental disruption is not necessarily a race you want to win," he said, adding, "The general public doesn't want runaway technologies."
- Aguirre dismisses the White House-brokered voluntary safety commitments as "not nearly up to the task," but is hopeful that U.S. AI legislation will pass in 2024.
The other side: Inflection AI co-founder Reid Hoffman believes that whatever public attention the letter may have generated, the authors undermined their credibility with the AI developer community — which they will need to achieve their aims.
- Hoffman told Axios the letter's authors were "virtue signaling" and implying that only they truly cared about humanity. "It hurts the cause," he said.
Flashback: The original letter described a "dangerous race to ever-larger unpredictable black-box models" and urged AI labs to draw the line at the then-new GPT-4, hinting that AI might one day destroy humanity.
- But another camp of AI critics insisted the letter's signatories were guilty of AI hype, inflating the capabilities of the current generation of large language models.