Aug 27, 2017

Automating really big ideas

Illustration: Rebecca Zisser / Axios

We have many problems, few apparent solutions, and could use some novel ideas about what to do next. Among inventors, the flash of genius comes not from nowhere, but usually by analogy — one thing is so, so why not another? In the 1940s, this was how Italian microbiologist Salvador Luria, watching a slot machine work, conceived his Nobel Prize-winning extension of Darwinism to bacteria. In 1666, Isaac Newton saw an apple fall from a tree and originated his theory of gravity.

The trouble with this approach to invention is the unpredictability of a good analogy — you simply have to wait for that spark. But in a new paper, researchers at Carnegie Mellon and Hebrew University say they've made an advance toward automating the process of finding and melding wholly unconnected things into big ideas.

To get started, says Carnegie Mellon's Aniket Kittur, the researchers recruited a group of crowdsourced workers. Their assignment: attach analogy labels to hundreds of products.

  • These descriptions were fed into a neural network, a machine-learning system, which trained on them.
  • The machine, after ingesting far more material, spat out what it judged to be related analogies.
  • Those were handed back to the crowd-source group, which used them to suggest new products.
  • The result: the human-AI team produced the most innovative ideas, the researchers said.
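The four steps above can be sketched as a toy retrieval loop. Everything here is invented for illustration — the product names, the labels, and the bag-of-words matching, which stands in for the trained neural network the researchers actually used:

```python
# Toy sketch of the crowd-then-machine loop: crowd-labeled products go in,
# the system retrieves candidate analogies, and people ideate from those.
# All data and names below are hypothetical.
from collections import Counter
from math import sqrt

# Step 1: crowd workers attach analogy labels (here, purpose phrases)
# to product descriptions.
labeled_products = {
    "adjustable phone stand": "hold a device at a viewing angle",
    "easel": "hold a canvas at a working angle",
    "bottle opener": "remove a cap from a bottle",
}

def bow(text):
    """Bag-of-words vector as a word-count dictionary."""
    return Counter(text.split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Steps 2-3: for a new purpose, retrieve the most similar labeled
# purposes -- a crude stand-in for the model's analogy suggestions.
def suggest_analogies(query, k=2):
    scored = sorted(labeled_products.items(),
                    key=lambda item: cosine(bow(query), bow(item[1])),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Step 4: the crowd would take these suggestions and propose new products.
print(suggest_analogies("hold a book at a reading angle"))
```

A real system would replace the word-count matching with learned representations, so that "hold" and "support" count as the same purpose even though they share no surface words.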

A first step: "Analogy has driven human progress," Kittur tells Axios. "This doesn't solve the whole thing. But it is the first step showing the practical benefits of finding analogies at scale."

The context: People have tracked papers that can be called scientific by today's definition back to 1650. Since then, researchers have produced about 70 million of them, and their numbers double more or less every nine years, according to a 2014 paper in Nature.

  • Against that backdrop, it's easy to see why striking an analogy — making an original observation by connecting the deep meaning in far-removed facts, and skipping their surface appearances — is so hard and rare. "It's impossible for any scientist to stay on top of his own field, much less where there might be connections," says Kittur. He calls this problem the "analogy gap."
  • Researchers have tried for decades to figure out how to bridge that gap — to speed up the process computationally and "find analogies at scale," Kittur says. He himself had been attempting to use crowd-sourcing to short-circuit the pathway there when he teamed up with Hebrew University's Dafna Shahaf, who had been working on a computerized approach.
  • They combined the two, working with two graduate students — Tom Hope and Joel Chan — and the result is the paper, presented Aug. 17 at a conference in Halifax, Nova Scotia.

How it works: The primary problem is that computers don't quite understand the nuances of words. So you need to clearly define your purpose — what you are seeking in an analogy — and then associate it with at least one mechanism for accomplishing it.

  • A purpose would be getting a nail into a wall.
  • A hammer is a mechanism for doing so.

Conceptually speaking, you are training the neural network to recognize what a purpose and a mechanism look like, which is a lot harder than it sounds. That's why the researchers began with crowd-sourced examples of hundreds of products with their purposes and mechanisms labeled — a solid foundation on which to train the neural network.
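The intuition being trained toward can be shown with a minimal scoring sketch: a good analogy shares a purpose but uses a different mechanism. The products and their two-number "embeddings" below are invented stand-ins for what a trained network would actually learn:

```python
# Minimal sketch: score analogy candidates by similar purpose but
# different mechanism. Vectors are hypothetical, hand-picked values.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Invented embeddings: (purpose vector, mechanism vector).
products = {
    "hammer":      ([1.0, 0.0], [1.0, 0.0]),   # drive a nail / swung mass
    "nail gun":    ([0.9, 0.1], [0.0, 1.0]),   # drive a nail / compressed air
    "screwdriver": ([0.2, 0.9], [0.8, 0.2]),   # drive a screw / twisted handle
}

def analogy_score(a, b):
    """High when purposes match and mechanisms differ."""
    (pa, ma), (pb, mb) = products[a], products[b]
    return cosine(pa, pb) - cosine(ma, mb)

# A nail gun shares the hammer's purpose but not its mechanism, so it
# should score higher as an analogy source than a screwdriver does.
print(analogy_score("hammer", "nail gun") > analogy_score("hammer", "screwdriver"))
# → True
```

The subtraction is only one way to combine the two signals; the point is that purpose similarity and mechanism similarity are scored separately, which is what the labeled training data makes possible.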

Bottom line: "We are not in the business of building an AI that would take over the entire scientific process," Kittur said. "But we think we could enhance scientific creativity. We hope there will be some pearls in there."
