Key lines from Elon Musk, others' call to pause AI development
Dozens of scientists, experts and tech leaders, including Twitter and Tesla CEO Elon Musk, recently signed a letter calling on labs developing artificial intelligence (AI) to slow down so potential risks can be studied.
Driving the news: Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang and more than 1,000 others signed an open letter to AI labs urging them to "immediately pause" training of AI models more powerful than GPT-4 — OpenAI's most recent text-generating model — for at least six months.
- "This does not mean a pause on AI development in general," the letter states, but rather "a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."
- "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," adds the letter, which comes from the Future of Life Institute, a nonprofit that campaigns for responsible use of artificial intelligence.
Context: The letter specifically mentions GPT-4, OpenAI's latest generative AI model, considered more powerful than the model originally behind ChatGPT.
- GPT-4 can pass most AP exams and score in the 90th percentile of the standard bar exam taken by lawyers. One study found it can also spout misinformation.
Here are some key lines from the open letter:
- On misinformation: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?"
- On replacing humans: "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
- On purpose of AI: "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
The letter urged AI labs and experts to work together "to jointly develop and implement" safety protocols for AI design and development, which should then be "audited and overseen by independent outside experts."
Our thought bubble via Axios' Peter Allen Clark: Few, if any, tech advancements are coupled with the level of forethought and even-mindedness the letter’s authors request. In the U.S., market forces have long been the primary driver for the growth of specific innovations.
- Furthermore, appeals to policymakers seem likely to fall on deaf ears. U.S. lawmakers are woefully behind on how technological advancements impact the country — they’re still struggling to deal with the advent of social media.