Welcome back to Future. Thanks for subscribing. Consider inviting your friends and colleagues to sign up.
Let's start with ...
A much larger Future
Future is expanding in almost every way.
- Since we launched a little over a year ago, our aim has been to open a discussion of a superlatively fast-changing — and unnerving — period, and of the clues to a future that seems likely to be transformatively different from today.
- We've looked at work, technology, business, economics, society, science, cities, geopolitics, sports and more. That's been a lot to stuff into what began as a once-a-week, and is now a twice-a-week, publication.
Starting September 3, we move to five days a week. We hope to continue to hear from you as we do so.
There is more. We are also deepening our ability to dig into two of the major trends we follow — the artificial intelligence and e-commerce revolutions.
- I'm excited to announce that Kaveh Waddell, just back in the U.S. from a year reporting in Beirut, joins Future to cover AI, robotics, and how they are affecting humans, politics and the future writ large. Write him at email@example.com.
- And I'm equally excited to announce that Erica Pandey moves from her China focus on the Axios newsdesk to the Future team. Erica will cover global e-commerce, including Amazon, China's gargantuan Alibaba, and their impact on society, cities, economies and politics. Write her at firstname.lastname@example.org.
We will continue our usual obsessions, while adding — as we have in recent editions — coverage of quantum computing, the future of heat and fire, demography, and more.
- Let me know what else interests you, and what you think we can do better. Just hit reply to this email or shoot me a message at email@example.com.
1 big thing: Confronting AI's demons
By temperament, computer science researchers prefer to leave it to philosophers and policymakers to interpret the societal repercussions of their work.
But, in a shift that's roiling typically cocooned computer scientists, some researchers — uneasy in part about the role of technology in the 2016 election — are urging colleagues to determine and mitigate the societal impact of their peer-reviewed work before it's published.
Axios' Kaveh Waddell writes: The push — meant to shake computer scientists out of their labs and into the public sphere — comes as academics and scientists are suffering the same loss of popular faith as other major institutions. "We need to regain that trust by showing we're conscious of the impact of what we do," Jack Clark, strategy and communications director at OpenAI, tells Axios.
- The highest-profile push has come from a group of scientists in the Association for Computing Machinery, a major professional organization that publishes dozens of academic journals. The group has proposed that peer reviewers assess whether papers adequately consider their negative implications.
- The proposal has provoked a lengthy, combative discussion thread on Hacker News.
- Wrote one commenter: "Engineers are not philosophers and should not be placed in this role. We do not have the tools to do it."
Researchers who oppose greater oversight say it's not possible to guess whether their work will be repurposed for ill, or to prevent it from being misused.
- "Bad guys always can do bad things by using new technologies," said Zhedong Zheng, a PhD candidate at the University of Technology Sydney. "We cannot control them."
But Clark argues that the reluctance to engage with the ethical repercussions of research is an "abdicating of responsibility that is frankly shocking."
2. ... and suppressing the most dangerous work
Sometimes, a computer science researcher produces a paper whose findings, if published, might lead to societal harm. Now, some experts are questioning the default course of action: publishing the paper anyway, potential damage be damned.
Why it matters: The call to suppress some research challenges decades-old principles in computer science and could slow work in a field that drives the economy, helps define the future of work and is the subject of intense global competition.
Kaveh writes: If the field does decide to withhold some work, it would join several other scientific disciplines, including nuclear, military and intelligence research, where sensitive findings are often kept under wraps.
"A very core principle in the computer science community has been that openness is a fundamental good," said Brent Hecht, a Northwestern professor who co-authored a proposal for how the field should address potentially harmful research. But he said "recent events have made me and my colleagues question that value."
- Potentially harmful research should be published, says Hecht, but should include a discussion of "complementary technologies, policy, or other interventions that could mitigate the negative broader impacts."
The other side: No, it shouldn't be published, at least in rare cases, say Jack Clark, strategy and communications director at OpenAI, and Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security.
- One reason to suppress a finding would be if it were difficult to discover but easy to reproduce, Scharre says.
Would research be set back by selective openness?
- Definitely, says Clark, but it's a worthwhile tradeoff. Choosing not to pump the brakes is like "saying scientific progress is more important than societal stability," he said.
3. Anti-vaxxers in Italy
Experts worry that a bill to suspend compulsory vaccination of children in Italy could spread the anti-vaxx movement across borders, posing a serious global health threat. Pushed by Italy's populist government, the bill will become law if approved by the lower house of its parliament.
Axios' Eileen Drage O'Reilly writes: Vaccinations have helped to eradicate a dozen major childhood diseases and are praised as a key advance of the 20th century. But the anti-establishment wave in Europe and the U.S., plus the ability of social media to spread any opinion, have put new impetus behind the opposition to mandatory inoculation.
The backstory: The anti-vaxx movement in Italy and elsewhere goes back to the 1998 publication of a study in The Lancet on a 12-person trial that linked the measles vaccine (MMR) and autism.
- The study was found to be fraudulent and retracted by the journal, and its author, Andrew Wakefield, was stripped of his medical license.
- But the damage was done.
Driving the news: Last week, Italy's upper house of parliament voted to suspend mandatory inoculation of schoolchildren against 10 diseases. The bill attempts to reverse a law passed last year increasing the number of mandatory vaccinations after a measles outbreak infected nearly 5,000 people in Italy, killing four.
- "What is concerning is the possibility that this might strengthen anti-vaccination sentiment elsewhere," said Naomi Smith, a sociology program leader at Federation University Australia's School of Arts, Humanities and Social Science.
- Smith says, "For example, anti-vaccination proponents might point to Italy and say, 'Well the Italians think vaccination shouldn’t be mandatory, why don’t we have fewer requirements around vaccination too?'"
- Anthony Fauci, director of the U.S. National Institute of Allergy and Infectious Diseases, tells Axios, "I'm really concerned, obviously, about the move Italy made. ... It's very ill-advised."
4. Worthy of your time
- The future of the movies — in old-fashioned theaters (Sara Fischer - Axios)
- DEF CON hackers fighting for a clean election (Hannah Kuchler - FT)
- Real history of the liberal world order (Michael Mazarr - Foreign Affairs)
- Elon, Elon, come what may (Michael Wursthorn, Asjylyn Loder - WSJ)
- Incompatibility of string theory and dark energy (Natalie Wolchover - Quanta)
- IBM Watson's cancer program doesn't work (Daniela Hernandez, Ted Greenwald - WSJ)
5. 1 fun thing: Food delivery robots
Heading back to Axios' San Francisco office after a meeting with a Berkeley professor, Kaveh nearly collided with an icebox-sized tub with wheels and a flagpole, sporting Cal colors.
He writes: The sidewalk robot is one of around two dozen that roam UC Berkeley and nearby parts of town, delivering food to students and residents. Kiwi, the Berkeley-based company behind this bot, has already made more than 10,000 deliveries, TechCrunch reports.
The details: Place an order from a participating restaurant through the Kiwi app, and a human driver delivers a robot by car to your area.
- The bot then picks up multiple orders from different restaurants, and a human packs it into an autonomous tricycle (yep!), which pedals the food toward the delivery location.
- When it's near your home, the trike deploys the delivery bot, which takes the food the rest of the way, sending a notification when it's arrived. You then unlock the bot with the Kiwi app.
- The delivery fee is less than $4.00. With Kiwi Prime, $14.99 a month will get you unlimited 99-cent deliveries.
TechCrunch has more in this video.
What's next: Kiwi did not respond to interview requests. But expect more of these bots in more places. They've hit regulatory hurdles in some cities — San Francisco temporarily banned them last year and has not yet issued new permits — but they're popping up in other parts of the region, including San Jose and Stanford, and on the East Coast in D.C. and NYC.