Illustration: Aïda Amer/Axios
Get everyone in the room: That's the new mantra for AI researchers, nervous about the potential for their technology to decimate jobs and perpetuate human biases. They're pulling in experts from every academic background, including seemingly incompatible ones, to help steer the course.
Driving the news: But sparks flew last evening in a star-studded on-stage conversation between prominent AI researcher Fei-Fei Li and celebrity author–philosopher Yuval Noah Harari. Li, who is behind an enormous multidisciplinary project at Stanford to inject human values into AI research, often calls for closer collaboration between disciplines. But she and Harari were frequently at odds on stage.
- From the outset, Harari opened fire: "We're seeing questions that used to be the bread and butter of the philosophy department being moved to the engineering department," he said. But while philosophers have the patience to debate them for thousands of years, engineers — and their investors — won't wait.
- "I'm very envious of philosophers now," Li lobbed back. "They can propose questions in crisis, but they don't have to answer them."
As the conversation turned to debate, Harari warned that AI is setting off an arms race worse than the Cold War — and it threatens to renew colonialism, he said. Li responded that international collaboration and cross-disciplinary efforts will save humans.
What's going on: We've reported on how top universities and Big Tech are asking experts in ethics and the humanities for help directing big decisions. Whether they're listening remains to be seen.
- Several companies have published AI ethics guidelines, and Google convened a short-lived board for internal oversight. Critics argue that these self-policing measures are "ethics theater" rather than real guardrails.
- MIT, like Stanford, recently announced a $1 billion interdisciplinary center to study AI from every angle, with a focus on ethics.
The big picture: These efforts come with language barriers.
- Talking across academic fields is inherently difficult, says Nicole Coleman, an AI expert at Stanford Libraries.
- "Our disciplinary training within the academy is intended to circumscribe what you can talk about, who you can talk to, the language you can use," Coleman tells Axios. Every field has its own goals, approaches, and measures of success.