Illustration: Sarah Grillo/Axios

Businesses facing unprecedented demands during the coronavirus pandemic have boosted their use of artificial intelligence in some of society's most sensitive areas.

Why it matters: Algorithms and the data they rely on are prone to automating preexisting biases — and are more likely to do so when they're rushed into the field without careful testing and review.

Driving the news:

  • Twitter and Facebook have been relying far more on AI to moderate content. Many of the contractors who normally handle such tasks are unable to go into the office, and the companies don't want the work done remotely, so that they can keep close tabs on sensitive user data.
  • Walmart associates have voiced concern that the AI being used at self-checkout is flagging appropriate behavior as potential wrongdoing and missing actual theft.
  • A need for fast results in the earliest days of the pandemic pushed adoption of novel uses of AI in tracking the virus' spread and speeding its diagnosis. But health care data leaves out big parts of the population and has historically been rife with bias.

The big picture: Beyond these examples, experts worry that the economy's sudden halt has driven resource-strapped companies and institutions to increasingly rely on algorithms to make decisions in housing, credit, employment and other areas.

Key areas of accelerating AI adoption:

  • Employment: It's concerning enough that algorithms are used to screen job applicants, but a newer worry is that companies might also use AI to decide who gets cut when they reduce staff. Amazon came under fire in the past for using an algorithm to decide which warehouse workers should be terminated for low productivity.
  • Policing: As nationwide protests shine a spotlight on abuses in policing, AI algorithms for predictive policing are being increasingly deployed in the field, even though critics say they worsen and codify racial profiling and other problems.
  • Housing: AI-driven algorithms are playing a greater role in housing decisions, like landlords' choice of tenants and banks' approval of loans. As in many areas, AI holds potential to aid people of color and others who have historically faced discrimination on this front — but only if enough care is taken with both algorithms and training data.
  • COVID-19 itself: AI is playing a role in the response to the disease in everything from vaccine trials to the selection of populations for public outreach to decisions over who can be safely treated at home via telehealth services. AI can help speed care, but providers need to pay attention to which groups are likely to be underrepresented in the data used to train algorithms, along with other patterns of inequality embedded in existing systems of care.

Between the lines: If you are going to use AI in making meaningful decisions, experts recommend making sure a diverse group of people is involved in reviewing everything from the algorithm design to the training data to the way the system will be deployed and evaluated.

  • Experts also caution that using pre-COVID data to make decisions today could produce flawed results, given how much the world has changed.
  • "Some data is still relevant, other data isn’t," says McGill University professor Matissa Hollister.

Yes, but: Hollister notes that adding humans to the mix isn't a cure-all, either, given that humans have plenty of bias as well.

Meanwhile: A number of companies, including Amazon and Microsoft, have hit the pause button on police use of their AI-driven face recognition systems, while IBM is getting out of the commercial face recognition business entirely.

What's next: Expect a wave of lawsuits from consumers contending that they were discriminated against by AI systems, especially in key areas such as hiring.

  • "The law is very clear you cannot discriminate in employment decisions," Vogel said.
  • While that principle hasn’t been widely applied to AI programs yet, Vogel said, that's largely because the technology is so new. "People can fully expect the lawyers are going to get up to speed."

Tech firms blast Trump's extended H-1B visa restrictions

Illustration: Lazaro Gamio/Axios

Tech companies reacted quickly and negatively Monday to news that the Trump administration is extending a ban on entry for holders of H-1B and certain other work visas through the end of the year. Among those speaking out against the move are Facebook, Amazon, Google, Intel and Twitter, along with several tech trade groups.

The big picture: The Trump administration argues that visas like the H-1B widely used in the tech industry are responsible for taking jobs that American citizens could fill. Tech companies say they rely on these visas to fill positions with skilled workers from overseas when they've tapped out the American workforce.

Mercedes and Nvidia design a car that gets better with age

Photo illustration courtesy of Nvidia

Mercedes-Benz is teaming up with Nvidia to create a perpetually upgradable computing platform for vehicles that will allow cars to add automated driving functions over time, becoming smarter and more valuable the longer they are on the road.

Why it matters: Self-driving technology won't arrive in a snap. Instead, it will roll out gradually through periodic software updates, similar to the way people refresh their smartphones. It's a fundamental shift in thinking that will extend the life of cars, and allow even used-car buyers to get the latest technologies.

Amazon's big new $2 billion climate VC fund

Amazon founder and CEO Jeff Bezos. Photo: Andrej Sokolow/picture alliance via Getty Images

Amazon is creating a $2 billion venture fund that will back companies working on climate-friendly technologies in transportation, storage, food, power generation, waste and more, the tech giant said Tuesday.

Why it matters: The new fund will help Amazon and other companies meet the "climate pledge" that Amazon announced last year to reach net-zero emissions by 2040.