Small, narrow — and revolutionary AI

Illustration: Aïda Amer/Axios

Talk of artificial intelligence gravitates almost inevitably toward the very large. Companies, it is said, must embrace big data, along with future human-like AI, or be lost to history. But some tech leaders are going the other way: urging businesses to start thinking small.

What's happening: This growing mantra touts the upside of narrow AI ambitions and small data. But by thinking small, these leaders don't mean small results. AI and data minimalism, they say, could be what revolutionizes business, industry, war and more.

General intelligence — the broad human capacity to cook, do a crossword, work at a computer, and carry on conversations with friends, all seamlessly, one after the other — is still a distant dream for AI researchers. And that's not even contemplating super-human intelligence, the holy grail.

  • Instead, today’s best AI algorithms are one-trick ponies, each taught a single, extremely useful trick, Andrew Ng, an AI pioneer at Stanford, tells Axios.
  • Examples are algorithms that drive cars, read chest x-rays, and translate languages.
  • None can do anything else; each performs only its one task.

But that's not something to sneer at, because these ponies have already created more than a trillion dollars of business value, according to Gartner — with many trillions more up for grabs for those who can figure out how to apply the young technology.

  • We’ve written of some of the problems that confound the best AI systems.
  • But in many areas, AI can outperform humans.
  • "Almost anything you can do with less than a second of mental thought, we can probably now automate" with AI, says Ng, who co-founded Google Brain and Baidu's AI program.

Driving the optimism is deep learning, the technique that has in recent years been the jet fuel accelerating AI applications, with few signs of slowing.

  • "We're in year two or year three of a good, 40-year run," says Frank Chen, a partner at Andreessen Horowitz, a prominent Silicon Valley VC firm.
  • Even if, by some dictatorial decree, all deep learning research stopped today, developers would be writing new software using today’s technology for decades, says Chen. "We have a long way to go just harnessing the existing techniques."

But deep learning has its own drawbacks. Among the most constraining is its never-ending hunger for more and more data.

  • Since the technology learns by finding faint patterns, it needs an enormous amount of information to reliably find meaningful ones.
  • Too little data can cause weird failures. One team that trained an algorithm to differentiate between huskies and wolves with a limited set of images ended up with a system that flagged every photo with a white, snowy background as "wolf."
  • Problems like this can often be solved by throwing more data at the machine. If the algorithm sees more wolves in different settings, it will adjust to no longer lean on the snow crutch.
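The husky-vs.-wolf failure can be reproduced in miniature. In this hypothetical sketch (the features and data are invented for illustration), a learner that simply picks the single most predictive feature latches onto the snowy background, because every wolf in the small training set happens to be photographed on snow:

```python
# Toy illustration with made-up data: in a small, biased training set,
# the "snowy background" feature perfectly separates the labels, so a
# naive learner uses the background instead of the animal.

# Each example: (has_pointed_ears, snowy_background) -> label
train = [
    ((1, 1), "wolf"),   # every wolf photo happens to be on snow
    ((1, 1), "wolf"),
    ((1, 0), "husky"),  # every husky photo is on grass
    ((1, 0), "husky"),
]

def best_single_feature(data):
    """Return the feature index that best separates the labels."""
    n_features = len(data[0][0])
    best, best_acc = 0, 0.0
    for i in range(n_features):
        # accuracy of the rule: predict "wolf" when feature i == 1
        acc = sum((x[i] == 1) == (y == "wolf") for x, y in data) / len(data)
        if acc > best_acc:
            best, best_acc = i, acc
    return best

snow_idx = best_single_feature(train)  # picks feature 1: the snow
# A husky photographed on snow is then misclassified:
husky_on_snow = (1, 1)
prediction = "wolf" if husky_on_snow[snow_idx] == 1 else "husky"
```

Adding husky photos taken on snow (or wolves on grass) would break the perfect background correlation and force the learner back onto the animal itself, which is the "more data" fix the article describes.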

But some problems don’t come with a lot of data. Ng offered some examples:

  • A factory that wants to detect defective parts as they roll off the manufacturing line may have just a handful of examples with which to train an algorithm.
  • A hospital that wants to detect diseases from medical imaging may only have a small number of scans on hand for a particularly rare disease.
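One common way practitioners stretch a handful of examples like these is data augmentation: generating rotated and mirrored variants of each image so the learner sees more diversity than the raw dataset contains. A minimal sketch, using a hypothetical 2x2 "defect image" as a stand-in:

```python
# Hedged sketch: multiplying a tiny defect dataset via augmentation.
# The 2x2 pixel grid below is a hypothetical stand-in for a real scan.

def rotate90(img):
    """Rotate a square image (a list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the original plus its three rotations and a mirror."""
    variants = [img]
    for _ in range(3):
        variants.append(rotate90(variants[-1]))
    variants.append([row[::-1] for row in img])  # horizontal flip
    return variants

defect = [[0, 1],
          [1, 0]]
training_set = augment(defect)  # 1 example becomes 5 variants
```

Augmentation helps, but it only restates the information already in the few examples, which is part of why Ng says small-data problems demand more skilled teams.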

"I think big data is over-hyped," says Ng.

  • Many problems will have to be solved using small data. The catch? It takes "a much more skilled team to do things with small data," he says.
  • One way to overcome the challenge is to pair deep learning, which uses correlations to make predictions, with explicit rules that can clear up ambiguity, writes Virginia Dignum, a computer science professor at the Delft University of Technology.
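The pairing Dignum describes can be sketched as a thin rule layer wrapped around a learned model. Everything below is hypothetical (the rule, the threshold, and the stand-in model are invented for illustration), but it shows the shape of the idea: the model supplies a correlation-based guess, and explicit rules veto guesses that contradict known facts:

```python
# Minimal sketch of a hybrid system: a learned model's prediction is
# checked against explicit, hand-written domain rules. All names and
# values here are hypothetical.

def model_predict(scan):
    """Stand-in for a trained classifier returning (label, confidence)."""
    return scan["model_label"], scan["model_confidence"]

RULES = [
    # Explicit knowledge that correlations alone can miss, e.g. a
    # pediatric diagnosis cannot apply to an adult patient.
    lambda scan, label: not (label == "pediatric_condition"
                             and scan["age"] >= 18),
]

def hybrid_predict(scan, threshold=0.8):
    label, conf = model_predict(scan)
    if conf < threshold:
        return "refer_to_human"   # too ambiguous for the model alone
    if not all(rule(scan, label) for rule in RULES):
        return "refer_to_human"   # an explicit rule vetoes the guess
    return label

case = {"age": 42, "model_label": "pediatric_condition",
        "model_confidence": 0.95}
result = hybrid_predict(case)  # rule fires despite high confidence
```

The design choice here is that rules never generate answers; they only constrain the statistical model, clearing up the ambiguity that small datasets leave behind.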
