Dec 7, 2019

An obstacle course to make AI better

Illustration: Aïda Amer/Axios

AI is better at recognizing objects than the average human — but only under super-specific circumstances. Even a slightly unusual scene can cause it to fail.

Why it matters: Image recognition is at the heart of frontier AI products like autonomous cars, delivery drones and facial recognition. But these systems are held back by serious problems interpreting the messy real world.

Driving the news: Scientists from MIT and IBM will propose a new benchmark for image recognition next week at the premier academic conference on AI.

  • It takes aim at a big problem with existing tests, which generally show objects in their usual settings, like a kettle on a stove or floss in a bathroom.
  • But they don't test for all-important edge cases: rare situations that humans can still interpret in an instant, even if they're confounding — like a kettle in a bathroom or floss in a kitchen.
  • To get at those cases, this new dataset is made up of 50,000 images compiled by crowdsourced workers. They photographed 313 household objects at 50 different angles, with varying backgrounds and perspectives.

The goal is to put object recognition through more realistic paces.

  • "We’re not intentionally mean to computer systems, and we should start doing that," Andrei Barbu of MIT tells Axios.
  • "We don’t want them to only recognize what is very common," says MIT's Boris Katz of robots and automated vehicles. "We want [a robot] to recognize a chair that is upside down on the floor and not say it is a backpack. In order to do that they need to be able to generalize."

The big picture: Ten years ago, image recognition got a huge boost from a humble source — a free database with millions of pictures of everyday things, each paired with a label.

  • Scientists began using that dataset, ImageNet, to train algorithms to tell cats from dogs and trees from people, using thousands of labeled examples from each category.
  • But the gold-standard dataset is limited, despite its scale.
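The ImageNet-era recipe the bullets describe — thousands of labeled examples in, a classifier that maps pixels to categories out — can be sketched in a few lines. This is a hedged toy illustration, not ImageNet itself: it uses scikit-learn's small built-in digits dataset as a stand-in for millions of labeled photos.

```python
# Supervised image classification in miniature: learn categories
# from labeled examples, then score on held-out images.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labeled 8x8 grayscale images, 10 classes

# Hold out 30% of the labeled examples to measure generalization.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

# Train a simple classifier on the labeled training split.
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"held-out accuracy: {acc:.2f}")
```

The catch, as the next section shows, is that the held-out split comes from the same distribution as the training data — exactly the assumption ObjectNet is designed to break.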

Key stat: Tested against the new MIT/IBM benchmark, ObjectNet, the performance of leading image-recognition systems dropped 40–45%.

  • "This says that we have spent tons of our resources overfitting on ImageNet," says Dileep George, cofounder of the AI company Vicarious.
  • Overfitting is AI-speak for teaching to the test: It refers to a system that can pass a specific benchmark but can't perform nearly as well in the real world.
  • "I don't think we're anywhere near the finish line," says Rayfe Gaspar-Asaoka, a VC investor at Canaan.
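The "teaching to the test" gap George describes can be made concrete with a toy sketch — this is not the ObjectNet evaluation itself, just a hedged illustration on synthetic scikit-learn data of a model that aces the benchmark it memorized while faring far worse on data it hasn't seen.

```python
# Overfitting in miniature: an unconstrained decision tree memorizes
# its noisy training set, then drops sharply on held-out examples.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic "benchmark": two classes with 20% label noise (flip_y).
X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           flip_y=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# A fully grown tree fits every training example, noise included.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
train_acc = model.score(X_train, y_train)  # perfect on its own benchmark
test_acc = model.score(X_test, y_test)     # much lower on unseen data
print(f"train accuracy: {train_acc:.2f}")
print(f"test accuracy:  {test_acc:.2f}")
```

Swap "training set" for ImageNet and "held-out data" for ObjectNet's kettles in bathrooms, and the 40–45% performance drop is the same phenomenon at scale.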

What's next: The creators of the new benchmark hope that more realistic tests will prod much-needed changes to image-recognition systems.

  • Now, they're showing the images to humans to understand the compromises the brain makes in processing objects.
  • Katz says the ultimate goal is to create detectors that can employ the same patterns of errors as the brain — and generalize as humans do.
  • "What are the assumptions our human brain is making about the world?" asks George of Vicarious.

Go deeper: Teaching robots to see — and understand


A tug-of-war over biased AI

Illustration: Eniola Odetunde/Axios

The idea that AI can replicate or amplify human prejudice, once argued mostly at the field's fringes, has been thoroughly absorbed into its mainstream: Every major tech company now makes the necessary noise about "AI ethics."

Yes, but: A critical split divides AI reformers. On one side are the bias-fixers, who believe the systems can be purged of prejudice with a bit more math. (Big Tech is largely in this camp.) On the other side are the bias-blockers, who argue that AI has no place at all in some high-stakes decisions.

Dec 14, 2019

Law enforcement's rising problem

Illustration: Sarah Grillo/Axios

The latest and greatest tool for law enforcement has an existential problem.

Driving the news: A major federal study found "Asian and African American people were up to 100 times as likely to be misidentified than white men," per the Washington Post. It also found "high error rates for 'one-to-one' searches of Asians, African Americans, Native Americans and Pacific Islanders."

Dec 19, 2019

AI is the new co-writer

A recently released AI program that generates hyper-realistic writing has become a powerful tool for storytelling, hinting at a new genre of computer-aided creativity.

What's happening: Inventive programmers are using it to generate poetry, interactive text adventures, and even irreverent new prompts for the popular game Cards Against Humanity.

Dec 7, 2019