Jun 15, 2020

Axios Login

By Ina Fried

Hey there. So, on Friday I promised on Twitter that any new subscribers who signed up would get a funny intro today. Oh well, sorry. No refunds.

Today's Login is 1,395 words, a 5-minute read.

1 big thing: Fresh concerns about AI bias in the age of COVID-19

Illustration: Sarah Grillo/Axios

Businesses and institutions facing unprecedented demands during the coronavirus pandemic have boosted their use of artificial intelligence in some of society's most sensitive areas.

Why it matters: Algorithms and the data they rely on are prone to automating preexisting biases — all the more so when they're rushed into the field without careful testing and review.

Driving the news:

  • Twitter and Facebook have been relying far more on AI to moderate content. Many of the contractors who normally handle such tasks are unable to go into the office, and the companies don't want the work done remotely, in order to keep close tabs on sensitive user data.
  • Walmart associates have voiced concerns that the AI being used at self-checkout is flagging appropriate behavior as potential wrongdoing and missing actual theft.
  • A need for fast results in the earliest days of the pandemic pushed adoption of novel uses of AI in tracking the virus' spread and speeding its diagnosis. But health care data leaves out big parts of the population and has historically been rife with bias.

The big picture: Beyond these examples, experts worry that the economy's sudden halt has driven resource-strapped companies and institutions to increasingly rely on algorithms to make decisions in housing, credit, employment and other areas.

  • "It's the next pandemic," said Miriam Vogel, CEO of the nonprofit EqualAI, who has written on the topic.

Key areas of accelerating AI adoption:

  • Employment: Concerns have been raised about algorithms being used both to screen applicants and to decide who gets cut when companies are reducing staff.
  • Policing: Law enforcement is increasingly deploying AI for predictive policing, even though critics say it worsens and codifies racial profiling and other problems.
  • Housing: AI-driven algorithms are playing a greater role in housing decisions like landlords’ choice of tenants and banks’ approval of loans. AI holds potential to aid people who have long faced discrimination on these fronts — but only if enough care is taken with both algorithms and training data.
  • COVID-19: AI is playing a role in everything from vaccine trials to targeting public outreach to decisions over who can be safely treated at home via telemedicine. But providers need to watch which groups are likely to be underrepresented in the data used to train algorithms, along with patterns of inequality in existing health systems.

Between the lines: If you are going to use AI in making meaningful decisions, experts recommend making sure a diverse group of people is involved in reviewing everything from the algorithm design to the training data to the way the system will be deployed and evaluated.

Yes, but: McGill University professor Matissa Hollister notes that adding humans to the mix isn't a cure-all, either, given that humans have plenty of bias as well.

Meanwhile: Amazon and Microsoft have paused police use of their AI-driven facial recognition, while IBM is getting out of that business entirely. Such systems have historically been significantly less accurate when dealing with people of color.

What's next: Expect a wave of lawsuits from consumers contending that they were discriminated against by AI systems, especially in key areas such as hiring.

  • "The law is very clear you cannot discriminate in employment decisions," Vogel said.

2. Airbnb's new plan to measure discrimination

Airbnb will set out to identify and measure racial discrimination experienced by users of its service through a new research project in partnership with Color of Change and other advisers.

Why it matters: "The reason we're doing this is because we have not achieved our goal of reducing all bias and discrimination on our platform," Airbnb co-founder and CEO Brian Chesky tells Axios' Kia Kokalitcheva.

The big picture: Users of Airbnb's home-sharing service have faced discrimination for years. Frustrating and heartbreaking anecdotes of subtle and overt racism would occasionally surface on social media, but affected customers had little other recourse.

  • "This is not a new issue," says Chesky. "Presumably, this issue has existed so long as people had the tools to be discerning of who they want to stay with; and then, therefore, they could discriminate."
  • "We get gaslit around whether or not something was racist or not," says Color of Change president Rashad Robinson. "People say it was something else."

Details: Project Lighthouse will seek to measure discrimination that travelers and hosts face based on the perception of their race.

  • "When people deny somebody a home, they don't ask them their background, their nationality, their heritage. They look at them, they make a snap judgment, and the two things that we identified were photo and name," says Chesky.
  • The study will cover the reservation process, reviews, and interactions with Airbnb's customer support.

Yes, but: A study is one thing, but what Airbnb does afterwards will be the true test of its commitment to combat discrimination.

  • More broadly, Airbnb will have to address discrimination that happens offline, when guests and hosts meet in person — as well as how its service affects the neighborhoods and cities where it operates, potentially deepening existing inequities.

Flashback: In 2016, the company enlisted former attorney general Eric Holder to investigate and assemble a report on the issue.

3. Why contact-tracing apps have slow uptake in U.S.

Illustration: Sarah Grillo/Axios

For all the attention on Apple and Google's joint effort to help track COVID-19 exposure, adoption of the technology in the U.S. has been limited, especially compared to other countries.

Why it matters: The companies' exposure notification technology could augment the labor-intensive work of contact tracing that experts say is key to controlling the spread of a disease for which there is no treatment or cure.

As NBC News reported Sunday, even some of the states that expressed support for the project have yet to move forward with apps, with others saying they have no plans to leverage the technology.

There are two main reasons adoption in the U.S. has been slow:

  1. As with many other aspects of addressing the coronavirus crisis, federal health authorities have left the choice whether and how to use exposure notification technology to individual states.
  2. Handling things at the state level forces each state to at least partially reinvent the wheel, all at a time when scarce tech resources are stretched thin.

Between the lines: It's unclear how many Americans would voluntarily use such apps, given a cultural aversion to government tracking, as well as the significant portion of the population that doesn't believe COVID-19 is a serious threat and refuses to wear masks or take other precautions.

  • That's despite the fact that Google and Apple have made the technology as simple and privacy-preserving as possible.

Go deeper: Apple, Google deliver test code for virus-exposure tracking

4. Crisis Text Line CEO ousted after racism accusations

Crisis Text Line founder and former CEO Nancy Lublin. Photo: Steven Ferdman/Getty Images

Crisis Text Line CEO Nancy Lublin was ousted by the nonprofit's board Friday, in response to allegations of racism and mistreatment of staff, Axios' Orion Rummler reports.

The big picture: The crisis hotline has emerged as a key mental health resource — particularly for younger people — amid the coronavirus pandemic, as individuals grapple with how to cope in a drastically changed world.

  • Lublin's termination — effective immediately, per the board — follows a stream of Twitter posts that used the hashtag #notmycrisistextline alongside allegations of micro-aggressions and abuse.

What they're saying: "We take full accountability and are ready to address these issues head-on," the board wrote in a letter to staff Friday. "No form of racism or bullying of any kind will be tolerated at Crisis Text Line."

  • Anti-racist trainings for board members will begin in July, the board told staff.
  • In light of nationwide Black Lives Matter protests against police brutality and misconduct, the company is also examining alternatives to sending law enforcement to help those in crisis who use the hotline.

My thought bubble: It is hard to bring this news, as I have highlighted both Lublin and the group's work multiple times, including in a recent interview for Axios on HBO. But that's also why it is important that I do so.

Go deeper: Crisis Text Line fills added role in coronavirus pandemic

5. Take Note

Trading Places

  • The Internet Association hired Dylan Hoffman to serve as the group's head lobbyist in Sacramento, California. Hoffman previously served as legislative director for State Assemblyman Jesse Gabriel.

6. After you Login

Check out what happened as Portland State University graduate Madisen Hallberg was recording the national anthem earlier this month, to be played at the school's commencement ceremony. Really, it's worth watching, especially if you need a little lift going into the week.

Ina Fried