
Yejin Choi, speaking at TED2023. Photo: TED
When the TED audience was asked Tuesday whether they were excited by artificial intelligence, most people raised their hands. When asked whether they were scared of AI, most people also raised their hands.
Why it matters: This enthusiastic ambivalence reflects society's broader split over a rapidly advancing technology that both tantalizes and terrifies.
Driving the news: AI has dominated the early part of this year's conference in Vancouver, with talks highlighting both the technology's exciting promise and the potentially apocalyptic future it could portend.
The positive case
- OpenAI co-founder Greg Brockman showed a series of demos of what's coming in the near future. In one, ChatGPT suggested a post-TED meal, used a DALL-E plug-in to visualize the meal, created a shopping list on Instacart and then tweeted out that list.
- In another example, ChatGPT dissected a spreadsheet, suggested several ways to display the data within the program and then graphed them.
- Sal Khan showed off Khan Academy's work to turn GPT into a useful tutor and teacher's assistant, a subject we covered in Login recently.
Sounding the alarm
- Eliezer Yudkowsky, who argues modern AI development needs to be shut down, highlighted the existential threat posed by the imminent arrival of human-built machines with superhuman intelligence that act in ways humans don't fully understand.
- Even if we don't know exactly how such systems might cause human extinction, he says the risk is high.
- "I suspect we could figure out with unlimited time and unlimited retries," he said, but insists that's not the situation. "We do not get to learn from our mistakes and try again."
AI is a mixed bag
- University of Washington professor Yejin Choi argued that AI systems need to be re-architected and taught both common sense and human values, both of which she said are severely lacking in even the latest large language models.
- Choi believes that building ever bigger models alone won't solve these fundamental limitations. "You don’t reach to the moon by making the tallest building in the world one inch taller at a time," she said.
- Scale AI CEO Alexandr Wang made an impassioned case that the U.S. and its allies must harness AI for military use faster than their adversaries. AI, he said, is already changing the nature of warfare, from weaponized drones to disinformation and cyberattacks on infrastructure.
- "The AI war will define the future of our world," Wang said. "We cannot sit on the sidelines and watch the rise of an authoritarian regime. We must fight for the world we want to live in."
Between the lines: Less addressed during Monday's talks were the subtler challenges posed by AI, including how it will affect jobs and its potential to disproportionately advantage the already powerful and wealthy while harming those already marginalized.
Be smart: AI's benefits could extend well beyond the automation of mundane tasks to help address challenges that humans have struggled with, from climate change to curing disease.
- But this is powerful technology, and it will inevitably be used for both good and bad. Identifying the good that AI can do won't, by itself, make any of its harms disappear.
The bottom line: Smart people in the field say they share both the optimism and fears being voiced. "We hear from people who are excited. We hear from people who are concerned," Brockman said. "Honestly that’s how we feel."