Illustration: Sarah Grillo/Axios

Identity-verification startup Onfido is training its machine-learning system to reduce the bias that leads AI to make more facial recognition errors with dark-complexioned customers than with light-skinned ones.

Why it matters: The pandemic-driven boom in telemedicine and fintech has made accurate remote identity-verification technology increasingly important, but these systems will only work fairly if they can identify customers of all races and ethnicities.

How it works: Onfido provides remote identity verification by analyzing the face on a government-issued ID document and comparing it to a freshly captured selfie or video.

  • The company's face-matching algorithm uses image recognition to determine whether the face in the selfie is the same as the one on the ID document, confirming identity for remote banking, admission to an event and more (a generic sketch of this kind of matching appears after this list).
  • "Essentially, we're replicating what happens in-person in a bank branch and making it digital," says Husayn Kassai, Onfido's CEO.
  • That service has become more valuable as the pandemic has pushed such interactions online.
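
For readers curious how this kind of selfie-to-document matching typically works under the hood, here is a minimal, hypothetical sketch of the standard embedding-and-threshold approach. It is not Onfido's actual pipeline: the toy pixel-based embedding, the cosine-similarity comparison and the threshold value are all illustrative assumptions standing in for a trained face-embedding model.

```python
# Illustrative sketch only -- not Onfido's pipeline. A real system would use a
# trained face-embedding model; a toy pixel-based "embedding" stands in here so
# the comparison logic can run end to end.
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a face-embedding model: flatten and L2-normalize."""
    vec = image.astype(np.float64).ravel()
    return vec / (np.linalg.norm(vec) + 1e-12)

def is_same_person(id_photo: np.ndarray, selfie: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Compare the face on an ID document with a freshly captured selfie.

    Both images are mapped into the same vector space and compared with
    cosine similarity; scores above the threshold count as a match. The
    threshold sets the trade-off between false acceptances (matching two
    different people) and false rejections (rejecting the right person).
    """
    a, b = embed_face(id_photo), embed_face(selfie)
    similarity = float(np.dot(a, b))  # vectors are unit-length, so dot == cosine
    return similarity >= threshold

# Usage with random stand-in images of the same shape:
rng = np.random.default_rng(0)
id_photo = rng.random((112, 112))
print(is_same_person(id_photo, id_photo))  # identical images -> True
```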

By the numbers: Onfido has a market-leading false acceptance rate of 0.01%, which means there's only a 1 in 10,000 chance of incorrectly matching a selfie with an ID.

  • But while ID holders of European nationalities have a false acceptance rate of 0.019% and those in the Americas 0.008%, ID holders of African nationalities have a rate of 0.038% (the quick conversion after this list puts those odds side by side).
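
To make the gap concrete, the percentages above convert to odds as follows. This is simple arithmetic on the figures reported above, not new data.

```python
# Back-of-the-envelope conversions of the rates quoted above (percent -> odds).
rates = {"overall": 0.01, "European": 0.019, "Americas": 0.008, "African": 0.038}

for group, pct in rates.items():
    one_in = round(100 / pct)  # 0.01% means 1 incorrect match in 10,000 attempts
    print(f"{group}: {pct}% is about 1 in {one_in:,}")

# Disparity between the best- and worst-served groups:
print(f"African vs. Americas: {rates['African'] / rates['Americas']:.2f}x higher")
```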

Yes, but: Onfido's rate for African nationalities still represents a 60-fold improvement from a year ago — and that improvement required deliberate training.

  • Because Onfido has a much larger customer base in Europe, the dataset used to train the algorithm was unbalanced. With more light-skinned faces to learn from, the algorithm unsurprisingly performed best with light-skinned users.
  • To reduce bias, says Onfido's director of product Susana Lopes, the company "changed the way it trained the algorithm to help it learn from an unbalanced dataset" (one generic version of that idea is sketched after this list).
  • Onfido is working with the U.K.'s Information Commissioner's Office to directly tackle facial recognition bias.
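
Onfido hasn't published the specifics of how it retrains on unbalanced data, but a common, generic approach is to weight each training example by the inverse of its group's frequency so that under-represented groups contribute proportionally more to the loss. The sketch below illustrates that standard technique with made-up group labels and counts; it is not a description of Onfido's method.

```python
# Generic handling of an unbalanced dataset: weight each example inversely to
# how common its group is, so rarer groups carry more weight in training.
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Return a per-example weight, higher for examples from rarer groups."""
    counts = Counter(group_labels)
    total = len(group_labels)
    return [total / (len(counts) * counts[g]) for g in group_labels]

# Hypothetical training set dominated by one region:
groups = ["Europe"] * 800 + ["Americas"] * 150 + ["Africa"] * 50
weights = inverse_frequency_weights(groups)
# Each Europe example gets weight ~0.42, each Africa example ~6.7, so the three
# groups contribute equally to a weighted training loss overall.
```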

The bottom line: AI bias is almost invariably the result of bias in the real world. If companies offering AI solutions want to change that, says Kassai, they need to specifically "focus on fairness."
