Dec 4, 2017 - Technology

Behind the boom in machine learning


Machine learning is a core technology driving advances in artificial intelligence. This week, some of its earliest practitioners and many of the world's top AI researchers are in Long Beach, CA, for the field's big annual gathering: the Neural Information Processing Systems (NIPS) conference. In all, some 7,700 people are expected to attend what has become AI's version of high tech's glitzy South by Southwest and the electronics industry's even bigger annual CES.

This is NIPS' 31st year. The conference originally drew just a few hundred participants: computer scientists, physicists, mathematicians, and neuroscientists, all interested in AI. Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies and president of the NIPS Foundation, spoke with Axios about growth in the field and what's next.

How machine learning has grown since NIPS' start in the '80s: "Over that period, what happened was a convergence of a number of different factors, one of them being that computers got a million times faster. Back then we could only study little toy networks with a few hundred units. But now we can study networks with millions of units. The other thing was the training sets: you need examples of what it is you're trying to learn. The internet made it possible for us to get millions of training examples relatively easily, because there are so many images, abundant speech examples, and so forth, that you can download. Finally, there were breakthroughs along the way in the algorithms we used, to make them more efficient. We understood them a lot better in terms of something called regularization, which is how to keep the network from memorizing; you want it to generalize."
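Sejnowski's point about regularization can be made concrete with a small sketch. One common form is an L2 (weight-decay) penalty added to the training loss, which pulls weights toward zero and discourages the model from memorizing its training examples. The toy data, model size, and penalty strength below are illustrative assumptions, not details from the interview:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: 20 noisy examples of a 10-dimensional
# linear rule (illustrative data, small enough to memorize).
X = rng.normal(size=(20, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.1 * rng.normal(size=20)

lam = 0.1   # L2 penalty strength (hypothetical value)
lr = 0.05   # learning rate
w = np.zeros(10)

for _ in range(500):
    # Gradient of mean squared error plus the penalty lam * ||w||^2.
    grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
    w -= lr * grad

# With lam > 0, the weights are pulled toward zero, trading a bit of
# training accuracy for better generalization to unseen examples.
print("weight norm with L2 penalty:", np.linalg.norm(w))
```

Setting lam to zero in the same loop recovers plain gradient descent, which on so few examples fits the noise as readily as the signal; that tendency to memorize is exactly what regularization is meant to curb.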

The role of hardware: "By far, the most exciting hardware development right now is special-purpose digital chips that speed up learning. The bottleneck right now for learning is that you have to give the network many examples. Basically, it's applying the same simple operation over and over again.

"What's happened is that Nvidia and Intel, and a dozen other startup companies, are designing special purpose learning chips. Actually, Google already has one that's called TPU, tensor processing unit, which they're using in the cloud because of the fact that it's much more efficient in terms of the energy use and the speed. Without it they wouldn't have been able to roll out services using deep learning — things like language translation. There's literally billions of dollars that are being invested right now in digital hardware.

"The problem though is that, of course, if you want to put it into a cell phone you have to make it very low power. The next generation will be even lower power chips using analog VLSI [very-large-scale-integration]. That's being driven by the applications and the technology. The cell phone market is huge, we're talking about billions of chips out there that can be put into cell phones."

The coming challenge: "We can now build a network with a million units and a billion connections and train it to do something. If you look in the brain, that's about five square millimeters of cortex. What will eventually happen, and it is happening, is that we know each part of the cortex is specialized for a different function. Each tiny patch is dedicated to a different function; these are separate networks doing separate tasks, which is kind of a modular approach.

"If you look into the way the cortex works it's really interesting because all these areas are interconnected with each other. It's not like they're isolated from each other. Right? There are long range, and there are short range connections. The big challenge is the global organization of all of these, right now, modular networks that have been designed for one task each network. It'll happen. It's beginning to happen but it will require theoretical advances for how to organize all the information that is distributed over the entire cortex.

"This is a very exciting area in neuroscience right now because we have tools and techniques, like brain imaging, that we can actually see that happening, both during learning and also during memory consolidation during sleep. As we learn more about how the cortex organizes information globally, it should be possible to translate that into a global workplace built out of all of these chips that are being designed for all these special applications."

"There are a lot of other problems too, but it seems to me that the integration problem may be the key to general intelligence. That's something people like to talk about. They say, 'Oh, you solved these little applications but you haven't figured out how to get much more flexible behavior that involves integrating all that information.' My guess is that if we could figure out how the brain solves the global integration problem, we'll be on our way to understanding a little bit more about general intelligence."

Go deeper: NIPS is one of the AI hiring events of the year (Business Insider), AI pioneer Geoff Hinton is presenting a new approach to image recognition (Science News), and MIT Technology Review reports on a daydreaming AI.
