Illustration: Aïda Amer/Axios
AI is better at recognizing objects than the average human — but only under super-specific circumstances. Even a slightly unusual scene can cause it to fail.
Why it matters: Image recognition is at the heart of frontier AI products like autonomous cars, delivery drones and facial recognition. But these systems are held back by serious problems interpreting the messy real world.
Driving the news: Scientists from MIT and IBM will propose a new benchmark for image recognition next week at the premier academic conference on AI.
- It takes aim at a big problem with existing tests, which generally show objects in ordinary habitats, like a kettle on a stove or floss in a bathroom.
- But they don't test for all-important edge cases: rare situations that humans can still interpret in an instant, even if they're confounding — like a kettle in a bathroom or floss in a kitchen.
- To get at those cases, this new dataset is made up of 50,000 images compiled by crowdsourced workers. They photographed 313 household objects at 50 different angles with varying backgrounds and perspectives (a rough evaluation sketch follows this list).
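Evaluating on a dataset like this is conceptually simple: run a standard pretrained classifier over the photos and check whether its prediction matches the labeled category. The sketch below is purely illustrative, not the researchers' code; it assumes a pretrained torchvision ResNet-50, a hypothetical folder layout, and a hypothetical "objectnet_to_imagenet.json" file mapping each category to acceptable ImageNet class indices (the two label sets only partially overlap).

```python
# Rough sketch: scoring a pretrained ImageNet classifier on photos of
# household objects in unusual poses. Paths, file extension and the
# label-mapping file are hypothetical stand-ins.
import json
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

# Standard preprocessing for a torchvision ResNet-50 pretrained on ImageNet.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True).eval()

# Assumed mapping: ObjectNet-style category name -> ImageNet class indices
# that count as a correct answer.
mapping = json.loads(Path("objectnet_to_imagenet.json").read_text())

correct = total = 0
for category, valid_ids in mapping.items():
    for path in Path("objectnet/images", category).glob("*.png"):
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            pred = model(image).argmax(dim=1).item()
        correct += int(pred in set(valid_ids))
        total += 1

print(f"Top-1 accuracy on unusual-pose photos: {correct / total:.1%}")
```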
The goal is to put object recognition through more realistic paces.
- "We’re not intentionally mean to computer systems and we should start doing that," Andrei Barbu of MIT tells Axios.
- "We don’t want them to only recognize what is very common," says MIT's Boris Katz of robots and automated vehicles. "We want [a robot] to recognize a chair that is upside down on the floor and not say it is a backpack. In order to do that they need to be able to generalize."
The big picture: Ten years ago, image recognition got a huge boost from a humble source — a free database with millions of pictures of everyday things, paired with captions.
- Scientists began using that dataset, ImageNet, to train algorithms to tell cats from dogs and trees from people, using thousands of labeled examples from each category.
- But the gold-standard dataset is limited, despite its scale.
Key stat: Tested against the new MIT/IBM benchmark, ObjectNet, the performance of leading image-recognition systems dropped 40–45%.
- "This says that we have spent tons of our resources overfitting on ImageNet," says Dileep George, cofounder of the AI company Vicarious.
- Overfitting is AI-speak for teaching to the test: It refers to a system that can pass a specific benchmark but can't perform nearly as well in the real world (a rough illustration follows this list).
- "I don't think we're anywhere near the finish line," says Rayfe Gaspar-Asaoka, a VC investor at Canaan.
What's next: The creators of the new benchmark hope that more realistic tests will prod much-needed changes to image-recognition systems.
- Now, they're showing the images to humans to understand the compromises the brain makes in processing objects.
- Katz says the ultimate goal is to create detectors that make the same patterns of errors as the brain — and generalize as humans do.
- "What are the assumptions our human brain is making about the world?" asks George of Vicarious.
Go deeper: Teaching robots to see — and understand