Dec 7, 2019 - Technology

An obstacle course to make AI better

Illustration: Aïda Amer/Axios

AI is better at recognizing objects than the average human — but only under super-specific circumstances. Even a slightly unusual scene can cause it to fail.

Why it matters: Image recognition is at the heart of frontier AI products like autonomous cars, delivery drones and facial recognition. But these systems are held back by serious problems interpreting the messy real world.

Driving the news: Scientists from MIT and IBM will propose a new benchmark for image recognition next week at NeurIPS, the premier academic conference on AI.

  • It takes aim at a big problem with existing tests, which generally show objects in ordinary habitats, like a kettle on a stove or floss in a bathroom.
  • But they don't test for all-important edge cases: rare situations that humans can still interpret in an instant, even if they're confounding — like a kettle in a bathroom or floss in a kitchen.
  • To get at those cases, this new dataset is made up of 50,000 images compiled by crowdsourced workers. They photographed 313 household objects at 50 different angles with varying backgrounds and perspectives.

The goal is to put object recognition through more realistic paces.

  • "We’re not intentionally mean to computer systems and we should start doing that," Andrei Barbu of MIT tells Axios.
  • "We don’t want them to only recognize what is very common," says MIT's Boris Katz of robots and automated vehicles. "We want [a robot] to recognize a chair that is upside down on the floor and not say it is a backpack. In order to do that they need to be able to generalize."

The big picture: Ten years ago, image recognition got a huge boost from a humble source: a free database with millions of pictures of everyday things, each paired with a label.

  • Scientists began using that dataset, ImageNet, to train algorithms to tell cats from dogs and trees from people, using thousands of labeled examples from each category.
  • But the gold-standard dataset is limited, despite its scale: its photos tend to show objects the way people typically photograph them, in familiar poses and settings.

Key stat: Tested against the new MIT/IBM benchmark, ObjectNet, the accuracy of leading image-recognition systems dropped by 40–45 percentage points.

  • "This says that we have spent tons of our resources overfitting on ImageNet," says Dileep George, cofounder of the AI company Vicarious.
  • Overfitting is AI-speak for teaching to the test: it refers to a system that can pass a specific benchmark but can't perform nearly as well in the real world (a rough sketch of that kind of benchmark check follows below).
  • "I don't think we're anywhere near the finish line," says Rayfe Gaspar-Asaoka, a VC investor at Canaan.

What's next: The creators of the new benchmark hope that more realistic tests will prod much-needed changes to image-recognition systems.

  • Now, they're showing the images to humans to understand the compromises the brain makes in processing objects.
  • Katz says the ultimate goal is to create detectors that can employ the same patterns of errors as the brain — and generalize as humans do.
  • "What are the assumptions our human brain is making about the world?" asks George of Vicarious.

Go deeper: Teaching robots to see — and understand
