Illustration: Sarah Grillo/Axios
In a shift that is roiling typically cocooned computer scientists, some researchers — uneasy in part about the role of technology in the 2016 election — are urging colleagues to determine and mitigate the societal impact of their peer-reviewed work before it's published.
The big picture: The push — meant to shake computer scientists out of their labs and into the public sphere — comes as academics and scientists are suffering the same loss of popular faith as other major institutions.
What's going on: By sensibility, computer science researchers prefer to leave it to philosophers and policymakers to interpret the societal repercussions of their work. At most, their papers cite a few potential positive impacts of the findings — and avoid mentioning the negatives altogether.
But now they need to take a more active role to renew public faith, says Jack Clark, strategy and communications director at OpenAI. "We need to regain that trust by showing we're conscious of the impact of what we do," he tells Axios.
- The highest-profile push has come from a group of scientists in the Association for Computing Machinery, a major professional organization that publishes dozens of academic journals. The group has proposed that peer reviewers assess whether papers adequately consider their negative implications.
- The proposal has provoked a lengthy, combative discussion thread on Hacker News.
- Wrote one commenter: "Engineers are not philosophers and should not be placed in this role. We do not have the tools to do it."
Researchers who oppose greater oversight say it's not possible to guess whether their work will be repurposed for ill, or to prevent it from being misused.
- "Bad guys always can do bad things by using new technologies," said Zhedong Zheng, a PhD candidate at the University of Technology Sydney. "We cannot control them."
- Technology that's benign on its face can be adapted for surveillance, propaganda, misinformation, cyberattacks, or any number of other uses.
- Zheng's own research focuses on re-identification, a surveillance technology that follows people from one camera feed to another. Among its ramifications, it could allow a paranoid government to create a digital panopticon — or help a mother find a lost child at an amusement park.
But the proposal's 12 co-authors say the need to try harder is palpable. There is "a serious and embarrassing intellectual lapse" between researchers' positive outlook on their own work and reality, they wrote.
"Probably like those in the oil and tobacco industries, we can tell our kids and grandkids that throughout our careers, we were just doing our jobs. … Or we can tell our kids and grandkids that ours was the generation in computing to take responsibility for the effects of our innovations."— Brent Hecht, lead author of the proposal, in a July speech
Why now? Several researchers said the tipping point was a combination of concern over advances in artificial intelligence and the 2016 election.
- Part of the growing concern is that half-baked or biased AI algorithms could harm society.
- And computer scientists watching tech take center stage — via email servers and leaks, Twitter and Facebook — underwent a "sea change" regarding their own ethical responsibility, according to Hecht, a professor at Northwestern.
Clark says the reluctance to engage with the ethical repercussions of research is an "abdicating of responsibility that is frankly shocking." And Joseph Redmon, a PhD candidate at the University of Washington, called it a sign of moral laziness.
- In a recent paper describing updates to a computer-vision algorithm, Redmon wrote, "As researchers, we have a responsibility to at least consider the harm our work might be doing and think of ways to mitigate it. We owe the world that much."
- Redmon said he decided to include the note after a member of the military approached him at a conference to express excitement about using the algorithm to analyze drone footage.
Go deeper: Computer science's public safety question.