Big data got us here, but small data will get us the rest of the way. That's the mantra coming from AI researchers at the forefront of their field, who are casting about for the next big breakthrough.
Details: Inspired by how children learn, they are experimenting with methods that would let them train AI systems on a tiny fraction of the data required today — and then set those systems loose on problems they've never seen before.
Background: The deafening fuss around AI is driven by deep learning, a technique that allows machines to pick out subtle patterns from enormous datasets.
- It's great for all sorts of lucrative and interesting tasks, like driving cars and reading brain scans. And it can get better and better as it eats up more data.
- But amassing and labeling vast amounts of data is cumbersome and slow — or even impossible, when there's just not much information available.
The next frontier is AI that learns on its own, rather than being explicitly fed information, and algorithms that can take what they know in one arena and apply it to another — like kids learning how the world works.
Driving the news: A panel of leading AI scientists laid out the state of the art at Stanford on Monday, at the launch of the university's Institute for Human-Centered AI. Among the various stabs at solving the data problem:
- Curiosity-based AI, which would find gaps in its knowledge and gather the missing data itself — like a two-year-old finding her way about the world, according to Berkeley psychology professor Alison Gopnik.
- Transfer learning, the long-sought but still out-of-reach principle that an AI system can apply what it's learned in one domain to a similar one.
- "Just like children, we think that to learn things about the world properly you need to be an active learner," said DeepMind CEO Demis Hassabis.
- "I really think that's the direction we need to be going in as the field: How do we actually build more general systems that can take … a new task and do well on that," said Jeff Dean, head of Google AI.
- Compositional knowledge, the idea that computers can put together disparate experiences and pieces of information into a larger whole.
- "The kind of thinking that Daniel Kahneman refers to as 'thinking slow' — that's the kind of thinking that we haven't really worked out how to get artificial intelligence to do," said Stanford computer scientist Christopher Manning.
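The curiosity idea can be seen in miniature. The sketch below is a toy illustration only — the target function, the pool of inputs, and the distance-to-nearest-label proxy for "a gap in knowledge" are all invented for this example, not drawn from any panelist's actual system. A "curious" learner repeatedly asks for a label at the input it knows least about, instead of sampling at random:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden function the learner wants to model (hypothetical).
def target(x):
    return np.sin(3 * x)

pool = np.linspace(-2, 2, 200)            # unlabeled candidate inputs
labeled_x = list(rng.choice(pool, 3))     # start with 3 random labels
labeled_y = [target(x) for x in labeled_x]

def uncertainty(x, xs):
    # Distance to the nearest labeled point: a crude proxy for a
    # "gap in knowledge" that a curious learner would fill first.
    return min(abs(x - xi) for xi in xs)

# Curiosity loop: always query the input the learner knows least about.
for _ in range(10):
    query = max(pool, key=lambda x: uncertainty(x, labeled_x))
    labeled_x.append(query)
    labeled_y.append(target(query))

# The queried labels end up spread across the whole input space,
# rather than clustered wherever the random seed happened to land.
gaps = np.diff(np.sort(labeled_x))
print("largest unexplored gap:", round(float(gaps.max()), 2))
```

With 13 labels chosen this way, no stretch of the input space is left unexplored — the same coverage from purely random sampling would typically take many more labels, which is the data-efficiency argument in a nutshell.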
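Transfer learning is also easy to demonstrate at toy scale. The sketch below is a minimal, hypothetical setup (the tasks, numbers, and plain gradient-descent trainer are all assumptions for illustration, not any lab's method): a model pretrained on a data-rich task is fine-tuned on a related task with only 5 labeled examples, and beats an identical model trained on those 5 examples from scratch:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w=None, b=0.0, lr=0.1, epochs=200):
    """Plain full-batch gradient descent for a linear model."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        err = X @ w + b - y
        w = w - lr * X.T @ err / len(y)
        b = b - lr * err.mean()
    return w, b

# Task A: plenty of noisy data for y = 2x + 1.
X_a = rng.uniform(-1, 1, size=(1000, 1))
y_a = 2 * X_a[:, 0] + 1 + rng.normal(0, 0.05, size=1000)
w_a, b_a = train(X_a, y_a)            # pretrain on the data-rich task

# Task B: a *related* task (y = 2.2x + 0.9) with only 5 labeled points.
X_b = rng.uniform(-1, 1, size=(5, 1))
y_b = 2.2 * X_b[:, 0] + 0.9

# Same tiny budget of updates: fine-tune from pretrained weights
# vs. train from a cold start.
w_t, b_t = train(X_b, y_b, w=w_a.copy(), b=b_a, epochs=20)
w_s, b_s = train(X_b, y_b, epochs=20)

def test_mse(w, b):
    X_test = np.linspace(-1, 1, 50).reshape(-1, 1)
    y_test = 2.2 * X_test[:, 0] + 0.9
    return float(np.mean((X_test @ w + b - y_test) ** 2))

print("fine-tuned:", test_mse(w_t, b_t), " from scratch:", test_mse(w_s, b_s))
```

The fine-tuned model starts near a good solution, so 5 examples suffice; the cold-start model, given the same 5 examples and update budget, is left far worse off. Scaling this idea from a two-parameter linear model to deep networks — without the tasks needing to be this closely related — is the still-open part.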