Jan 11, 2019 - Technology

AI's accountability gap

Illustration: Sarah Grillo/Axios

Applicants usually don't know when a startup has used artificial intelligence to triage their resumes. When Big Tech deploys AI to tweak a social feed and maximize scrolling time, users often can't tell, either. The same goes when the government relies on AI to dole out benefits — citizens have little say in the matter.

What's happening: As companies and the government take up AI at a dizzying pace, it's increasingly difficult to know what they're automating — or to hold them accountable when they make mistakes. If something goes wrong, those harmed never had a chance to weigh in on their own fate.

Why it matters: AI tasked with critical choices can be deployed rapidly, with little supervision — and it can fall dangerously short.

The big picture: Researchers and companies are bound by no fixed rules, or even specific professional guidelines, for AI. As a result, companies that have tripped up have suffered little more than a short-lived PR fuss.

  • Last February, MIT researchers found that facial recognition systems often misidentified the gender of women of color. Some of the companies involved revised their software.
  • In October, Amazon pulled an internal AI recruiting tool when it found that the system favored men over women.
  • "Technology is amplifying the inequality built into the current market," says Frank Pasquale, an expert on AI law at the University of Maryland.

The absence of rules of the road stems in part from industry hands casting tech regulation as troglodytic, says Meredith Whittaker, co-founder of the AI Now Institute at New York University. In addition, many AI systems, and the companies that make them, are opaque. "Technocratic smokescreens have made it difficult or intimidating for a lot of people to question the implications of these technologies," Whittaker tells Axios.

These and other tech-sector behaviors have stoked some people's suspicions.

  • The industry has made matters worse by testing rough-around-the-edges products on unsuspecting people: pedestrians in the case of autonomous vehicles, patients in the case of health care AI, and students in the case of educational software.
  • In 2016, Cambridge Analytica quietly used Facebook data to sway Americans’ political opinions.
  • In 2012, Facebook researchers quietly manipulated some users' news feeds — emphasizing positive posts for one group and negative ones for another — and measured the users' emotional responses.

"This is a repeated pattern when market dominance and profits are valued over safety, transparency, and assurance," write Whittaker and her co-authors in an AI Now report published last month.

Go deeper: AI makers get political