Illustration: Sarah Grillo/Axios

Businesses facing unprecedented demands during the coronavirus pandemic have boosted their use of artificial intelligence in some of society's most sensitive areas.

Why it matters: Algorithms and the data they rely on are prone to automating preexisting biases — and are more likely to do so when they're rushed into the field without careful testing and review.

Driving the news:

  • Twitter and Facebook have been relying far more on AI to moderate content. Many of the contractors who normally handle such tasks can't go into the office, and the companies won't allow the work to be done remotely because they want to keep close tabs on sensitive user data.
  • Walmart associates have voiced concern that the AI being used at self-checkout is flagging appropriate behavior as potential wrongdoing and missing actual theft.
  • The need for fast results in the earliest days of the pandemic spurred novel uses of AI to track the virus' spread and speed its diagnosis. But health care data leaves out big parts of the population and has historically been rife with bias.

The big picture: Beyond these examples, experts worry that the economy's sudden halt has driven resource-strapped companies and institutions to increasingly rely on algorithms to make decisions in housing, credit, employment and other areas.

Key areas of accelerating AI adoption:

  • Employment: Algorithms are already used to screen applicants, which is concerning enough, but a newer worry is that companies will also use AI to decide who gets cut when they reduce staff. Amazon came under fire in the past for using an algorithm to decide which warehouse workers should be terminated for low productivity.
  • Policing: As nationwide protests shine a spotlight on abuses in policing, AI algorithms for predictive policing are being increasingly deployed in the field, even though critics say they worsen and codify racial profiling and other problems.
  • Housing: AI-driven algorithms are playing a greater role in housing decisions such as landlords' choice of tenants and banks' approval of loans. As in many areas, AI holds potential to aid people of color and others who have historically faced discrimination on this front, but only if enough care is taken with both the algorithms and their training data.
  • COVID-19 itself: AI is playing a role in the response to the disease in everything from vaccine trials to the selection of populations for public outreach to decisions over who can be safely treated at home via telehealth services. AI can help speed care, but providers need to pay attention to which groups are likely to be underrepresented in the data used to train algorithms, along with other patterns of inequality embedded in existing systems of care.

Between the lines: If you are going to use AI in making meaningful decisions, experts recommend making sure a diverse group of people is involved in reviewing everything from the algorithm design to the training data to the way the system will be deployed and evaluated.

  • Experts also caution that using pre-COVID data to make decisions today could produce flawed results, given how much the world has changed.
  • "Some data is still relevant, other data isn’t," says McGill University professor Matissa Hollister.

Yes, but: Hollister notes that adding humans to the mix isn't a cure-all, either, given that humans have plenty of bias as well.

Meanwhile: Amazon, Microsoft and IBM have all hit the pause button on police use of their AI-driven face recognition systems, with IBM getting out of the commercial face recognition business entirely.

What's next: Expect a wave of lawsuits from consumers contending that they were discriminated against by AI systems, especially in key areas such as hiring.

  • "The law is very clear you cannot discriminate in employment decisions," Vogel said.
  • While that principle hasn't been widely applied to AI programs yet, Vogel said, that's largely because the technology is so new. "People can fully expect the lawyers are going to get up to speed," Vogel added.
