Welcome back to Future. Thanks for subscribing. Consider inviting your friends and colleagues to sign up.
Let's start with ...
Photo: George Rinhart/Corbis/Getty Images
Future is expanding in almost every way.
Starting September 3, we move to five days a week. We hope to continue to hear from you as we do so.
There is more. We are also deepening our ability to dig into two of the major trends we follow — the artificial intelligence and e-commerce revolutions.
We will continue our usual obsessions, while adding — as we have in recent editions — coverage of quantum computing, the future of heat and fire, demography, and more.
Illustration: Sarah Grillo/Axios
By sensibility, computer science researchers prefer to leave it to philosophers and policymakers to interpret the societal repercussions of their work.
But, in a shift that's roiling typically cocooned computer scientists, some researchers — uneasy in part about the role of technology in the 2016 election — are urging colleagues to determine and mitigate the societal impact of their peer-reviewed work before it's published.
Axios' Kaveh Waddell writes: The push — meant to shake computer scientists out of their labs and into the public sphere — comes as academics and scientists are suffering the same loss of popular faith as other major institutions. "We need to regain that trust by showing we're conscious of the impact of what we do," Jack Clark, strategy and communications director at OpenAI, tells Axios.
Researchers who oppose greater oversight say it's not possible to predict whether their work will be repurposed for ill, or to prevent it from being misused.
But Clark argues that the reluctance to engage with the ethical repercussions of research is an "abdicating of responsibility that is frankly shocking."
Illustration: Sarah Grillo/Axios
Sometimes, a computer science researcher produces a paper whose findings, if published, might lead to societal harm. Now, some experts are questioning the default course of action: publishing the paper anyway, potential damage be damned.
Why it matters: The call to suppress some research challenges decades-old principles in computer science and could slow work in a field that drives the economy, helps define the future of work and is the subject of intense global competition.
Kaveh writes: If the field does decide to withhold some work, it would join several scientific disciplines, including nuclear, military and intelligence research, whose findings are often kept under wraps.
"A very core principle in the computer science community has been that openness is a fundamental good," said Brent Hecht, a Northwestern professor who co-authored a proposal for how the field should address potentially harmful research. But he said "recent events have made me and my colleagues question that value."
The other side: No, it shouldn't be published, at least in rare cases, say Jack Clark, strategy and communications director at OpenAI, and Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security.
Would research be set back by selective openness?
When Italy mandated vaccinations last year, opponents responded, "It doesn't end here." Photo: NurPhoto/Getty Images
Experts worry that a bill to suspend compulsory vaccination of children in Italy could spread the anti-vaxx movement across borders, posing a serious global health threat. Pushed by Italy's populist government, the bill will become law if approved by the lower house of its parliament.
Axios' Eileen Drage O'Reilly writes: Vaccinations have helped to eradicate a dozen major childhood diseases and are praised as a key advance of the 20th century. But the anti-establishment wave in Europe and the U.S., plus the ability of social media to spread any opinion, have put new impetus behind the opposition to mandatory inoculation.
The backstory: The anti-vaxx movement in Italy and elsewhere goes back to the 1998 publication of a study in The Lancet on a 12-person trial that linked the measles vaccine (MMR) and autism.
Driving the news: Last week, Italy's upper house of parliament voted to suspend mandatory inoculation of schoolchildren against 10 diseases. The bill attempts to reverse a law passed last year increasing the number of mandatory vaccinations after a measles outbreak infected nearly 5,000 people in Italy, killing four.
Photo: Bob Berg/Getty Images
A Kiwi sidewalk robot at UC Berkeley. Photo: Kaveh Waddell/Axios
Heading back to Axios' San Francisco office after a meeting with a Berkeley professor, Kaveh nearly collided with an icebox-sized tub with wheels and a flagpole, sporting Cal colors.
He writes: The sidewalk robot is one of around two dozen that roam UC Berkeley and nearby parts of town, delivering food to students and residents. Kiwi, the Berkeley-based company behind this bot, has already made more than 10,000 deliveries, TechCrunch reports.
The details: Place an order through the Kiwi app from one of the participating restaurants, and a human will drive a robot to your area, where it completes the delivery.
TechCrunch has more in this video.
What's next: Kiwi did not respond to interview requests. But expect more of these bots in more places. They've hit regulatory hurdles in some cities — San Francisco temporarily banned them last year and has not yet issued new permits — but they're popping up in other parts of the region, including San Jose and Stanford, and on the East Coast in D.C. and NYC.