Artificial intelligence, already a much-discussed field in recent years, moved to the center of public conversation in 2017. Leading tech voices continued to sound the alarm about the potential for a super-intelligent AI to take over the world in a very unpleasant way; others said those fears are vastly exaggerated. The latter camp gained more converts, not least because AI is nowhere near super-human intelligence at the moment.
We surveyed the community, asking: What was the most important AI story of 2017? Their answers follow.
Rodney Brooks, founder, Rethink Robotics
My most important AI story for 2017 is an advertisement I saw on broadcast TV on Sunday, Dec. 17, 2017, during an NFL game. It was an ad by the NFL saying that it is now using machine learning to reveal insights for fans. The end of the ad showed that the NFL is hosting its ML on AWS (Amazon Web Services). Here is a story about this effort from three weeks ago. The significance is that the hype about ML/AI is now so widespread that it is expected to carry cachet with NFL fans.
Andrew Ng, CEO, Landing.AI
AlphaGo demonstrated the power of computing and data. But Libratus, Carnegie Mellon's poker-playing program, took much more innovation. From a technical standpoint, it was a delightfully surprising result.
Andrew Moore, dean, Carnegie Mellon's School of Computer Science
The victory of the Libratus AI over four top professional poker players. This victory in no-limit Texas Hold 'em heralds a new kind of game in which the AI has to take into account that its opponent might be deliberately misleading. In a world of increasing scrutiny of what information is real or unreal, it is amazing that we are seeing the emergence of a new generation of AI that is more skeptical about raw facts.
Geoffrey Hinton, University of Toronto
I think that 2017 saw a lot of progress on many fronts, but there wasn't a breakthrough as spectacular as, for example, the use of neural nets for machine translation in 2014 or AlphaGo in 2016. The most impressive advances, in my opinion, were the following:
- Neural architecture search: This uses neural networks to automate the black art of designing neural networks, and it's beginning to work.
- Machine translation that uses attention to avoid the need for recurrence or convolutions.
- AlphaZero for chess: This quickly learns to play chess in the style of a person but at a level well beyond the best chess engines.
Greg Diamos, senior researcher, Baidu
This year I was extremely impressed by the team of researchers at Stanford University who developed AI diagnostic systems that can detect heart arrhythmias and better inform human doctors. I think medical applications of AI will become very visible and surprising to many people as the technology develops.
Azeem Azhar, founder Peer Index, curator The Exponential View
I would choose two works that looked at the question of the responsible implementation of AI. Both help us to tackle all-too-easy-to-ignore downsides of this powerful technology.
- The first was a talk by Kate Crawford (of Microsoft Research), who described how machine learning algorithms can go wrong, reinforcing and amplifying existing prejudices.
- The second is a paper by Adrian Weller (of the University of Cambridge), on building algorithmic systems that map to our intuitions of fairness. It is essential that we manage the downsides addressed by Kate and Adrian in order to spur the acceptance of the tech.
Terah Lyons, executive director, Partnership on AI
This year has brought us a series of heart-wrenching, watershed moments of understanding about marginalization. Kristian Lum's recent personal account of appalling behavior experienced at the hands of machine-learning colleagues was one of many such wake-up calls, which should make apparent to the AI field that the diversity issue is not a sideshow. The rampant and pernicious sexism of the technology industry has catastrophic ramifications in AI, not least because exclusionary design brings with it a host of other problems when the technology so easily has the potential to amplify and perpetuate the very worst of human biases. It is incumbent upon all of us to prioritize inclusion as a primary principle of innovation, especially in a field with such potential to bring tremendous benefits. Of all the grand challenges that the AI field attempts to tackle in 2018, inclusion needs to be number one.
Been Kim, research scientist, Google Brain
The biggest trend that I welcomed this year was the overwhelming interest in interpretability, meaning methods that help humans understand an AI model's answers.
This year, the International Conference on Machine Learning hosted its first tutorial on interpretability, as well as two related workshops. At the 2017 NIPS conference, there were also a couple of oral presentations on interpretability, in addition to a symposium and two workshops. The trend looks set to continue into next year: the CVPR conference is holding tutorials on interpretability, as is the FATML conference.
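To make the idea of interpretability concrete: one of the simplest families of interpretability methods attributes a model's output to its inputs by perturbing them. The sketch below is a toy permutation-importance example in Python, not any specific system presented at the conferences mentioned above; the model and data are hypothetical stand-ins chosen only to illustrate the technique.

```python
import random

# Toy "model": a fixed scoring function of two features.
# Feature a dominates the output; feature b barely matters.
def model(a, b):
    return 3.0 * a + 0.1 * b

# Small synthetic dataset (hypothetical values, for illustration only).
random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]

def permutation_importance(feature_index):
    """Mean absolute change in the model's output when one feature
    is shuffled across the dataset, breaking its link to the output."""
    shuffled = [row[feature_index] for row in data]
    random.shuffle(shuffled)
    total = 0.0
    for (a, b), s in zip(data, shuffled):
        a2, b2 = (s, b) if feature_index == 0 else (a, s)
        total += abs(model(a2, b2) - model(a, b))
    return total / len(data)

# A large score means the feature drives the output; a small score
# means the model mostly ignores it.
print(permutation_importance(0))  # feature a: large importance
print(permutation_importance(1))  # feature b: small importance
```

In this toy setup, shuffling feature `a` changes the output far more than shuffling feature `b`, which matches the coefficients baked into the model; real interpretability methods apply the same logic to learned models whose internals are opaque.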
Richard Socher, chief scientist, Salesforce
Perhaps the most important theme of 2017 came out of the NIPS conference earlier this month. Ethics was a core theme amid the impressive innovation coming from the research community, serving as an important reminder that the success of AI depends on core values of trust, transparency and equality.
Alison Snyder contributed reporting to this post.
Note: this post has been updated with Geoffrey Hinton's contribution.