Sep 22, 2019

The world through AI's eye

How ImageNet sees me. On the left: "beard" / On the right: "Bedouin, Beduin"

Maybe you've seen images like these floating around social media this week: photos of people with lime-green boxes around their heads and funny, odd or in some cases super-offensive labels applied.

What's happening: They're from an interactive art project about AI image recognition that doubles as a commentary about the social and political baggage built into AI systems.

Why it matters: This experiment — which will only be accessible for another week — shows one way that AI systems can end up delivering biased or racist results, which is a recurring problem in the field.

  • It scans uploaded photos for faces and sends them to an AI object-recognition program that uses ImageNet, the gold-standard dataset for training such programs.
  • The program matches the face with the closest label from WordNet, a project that started in the 1980s to map out word relationships throughout the English language, and applies it to the image. (A minimal code sketch of this detect-then-classify pattern follows this list.)
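For the curious, here's a rough sketch of that pattern in Python. It's an illustration, not the project's actual code: it pairs a stock OpenCV face detector with a torchvision ResNet-50 pretrained on the standard 1,000 ImageNet classes (each of which is a WordNet synset), whereas ImageNet Roulette used its own model trained on ImageNet's person categories. The file name is a placeholder.

```python
# Sketch of the detect-then-classify pipeline described above. Assumptions:
# a stock OpenCV Haar-cascade face detector and a torchvision ResNet-50
# pretrained on the standard 1,000 ImageNet classes (each class is a WordNet
# synset) stand in for the project's own person-category model.
import cv2
import torch
from torchvision import models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# Step 1: find faces in the uploaded photo ("photo.jpg" is a placeholder).
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Step 2: crop each face and report the classifier's closest label.
for (x, y, w, h) in faces:
    crop = cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
    batch = preprocess(torch.from_numpy(crop).permute(2, 0, 1)).unsqueeze(0)
    with torch.no_grad():
        idx = model(batch).softmax(dim=1).argmax().item()
    print(weights.meta["categories"][idx])  # the nearest WordNet-synset label
```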

Some people got generic results, like "woman" or "person." Others received hyper-specific labels, like "microeconomist." And many got some pretty racist stuff.

"The point of the project is to show how a lot of things in machine learning that are conceived of as technical operations or mathematical models are actually deeply social and deeply political," says Trevor Paglen, the MacArthur-winning artist who co-developed the project with Kate Crawford of the AI Now Institute.

  • The experiment and accompanying essay reveal the assumptions that go into building AI systems.
  • Here, the system depends on judgment calls from the people who originally labeled the images — some straightforward, like "chair"; others completely unknowable from the outside, like "bisexual."
  • From those image–label pairs, AI systems can learn to label new photos that they've never seen before (see the training sketch after this list).
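To make that last point concrete, here is a hedged sketch of the supervised-learning step: fitting a classifier on human-labeled image folders so it can label photos it has never encountered. The directory layout, model choice and hyperparameters are placeholder assumptions, not anything from ImageNet itself.

```python
# Sketch of learning from image-label pairs. Assumes a placeholder directory
# "photos/" with one subfolder per human-chosen label (photos/chair/*.jpg, ...).
import torch
from torch import nn
from torchvision import datasets, models, transforms

train = datasets.ImageFolder(
    "photos/",
    transform=transforms.Compose(
        [transforms.Resize((224, 224)), transforms.ToTensor()]),
)
loader = torch.utils.data.DataLoader(train, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(train.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for images, labels in loader:  # one pass over the labeled pairs
    loss = nn.functional.cross_entropy(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained model can now score photos it has never seen, but only against
# the labels (and judgment calls) the original annotators baked in.
```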

But, but, but: This is an art project, not an academic takedown of ImageNet, which is mostly intended to detect objects rather than people. Some AI experts have criticized the demonstration for giving a false impression of the dataset.

This week, ImageNet responded to the project, which Paglen says is currently being accessed more than 1 million times per day.

  • The ImageNet team says it's making changes to person-related image labels, in part by removing 600,000 potentially sensitive or offensive images — more than half of the images of people in the dataset.

Bonus: When Axios' Erica Pandey uploaded a photo of herself, the ImageNet experiment classified her as a "flibbertigibbet," which is disrespectful but a great word.

Go deeper

Training real AI with fake data

Illustration: Aïda Amer/Axios

AI systems have an endless appetite for data. For an autonomous car's camera to identify pedestrians every time — not just nearly every time — its software needs to have studied countless examples of people standing, walking and running near roads.

Yes, but: Gathering and labeling those images is expensive and time-consuming, and in some cases impossible. (Imagine staging a huge car crash.) So companies are teaching AI systems with fake photos and videos, sometimes also generated by AI, that stand in for the real thing.
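In practice, the mixing can be as simple as the sketch below: two placeholder image folders, one real and one synthetic, combined into a single training set. The folder names are hypothetical, and both are assumed to share the same class subdirectories.

```python
# Sketch of blending real and synthetic training data. "data/real/" and
# "data/synthetic/" are hypothetical folders with matching class subfolders,
# so both datasets map labels to the same indices.
import torch
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
real = datasets.ImageFolder("data/real/", transform=tfm)
fake = datasets.ImageFolder("data/synthetic/", transform=tfm)

# Downstream training code never needs to know which images were staged.
combined = torch.utils.data.ConcatDataset([real, fake])
loader = torch.utils.data.DataLoader(combined, batch_size=64, shuffle=True)
```

The appeal is that the synthetic half of the mix can cover rare or dangerous cases, like that staged crash, that no one can photograph at scale.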

Go deeper (Oct 12, 2019)

Revenge of the deepfake detectives

Illustration: Sarah Grillo/Axios

Tech giants, startups and academic labs are pumping out datasets and detectors in hopes of jump-starting the effort to create an automated system that can separate real videos, images and voice recordings from AI forgeries.

Why it matters: Algorithms that try to detect deepfakes lag behind the technology that creates them — a worrying imbalance given deepfakes' potential to stir chaos in an election or an IPO.

Go deeper (Sep 28, 2019)

Automating humans with AI

Illustration: Eniola Odetunde/Axios

Most jobs are still out of reach of robots, which lack the dexterity required on an assembly line or the social grace needed on a customer service call. But in some cases, the humans doing this work are themselves being automated as if they were machines.

What's happening: Even the most vigilant supervisor can only watch over a few workers at one time. But now, increasingly cheap AI systems can monitor every employee in a store, at a call center or on a factory floor, flagging their failures in real time and learning from their triumphs to optimize an entire workforce.

Go deeper (Oct 12, 2019)