Axios Future


August 19, 2020

Welcome to Axios Future, where we hope you had "firenado" on your 2020 apocalypse bingo card.

1 big thing: How an AI grading system missed the mark

Illustration: Eniola Odetunde/Axios

A huge controversy in the U.K. over an algorithm used to substitute for university-entrance exams highlights problems with the use of AI in the real world.

Why it matters: From bail decisions to hate speech moderation, invisible algorithms are increasingly making recommendations that have a major impact on human beings. If they're seen as unfair, what happened in the U.K. could be the start of an angry pushback.

What's happening: Every summer, hundreds of thousands of British students sit for advanced-level qualification exams, known as A-levels, which help determine which students go to which universities.

  • Because of the coronavirus pandemic, however, the British government canceled A-levels this year. Instead, the government had teachers give an estimate of how they thought their students would have performed on the exams.
  • Those predicted grades were then adjusted by Ofqual, England's regulatory agency for exams and qualifications, using an algorithm that weighted the scores based on the historic performance of individual secondary schools.
  • The idea was that the algorithm would compensate for the tendency of teachers to inflate the expected performance of their students and more accurately predict how test-takers would have actually performed.
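Ofqual's actual model was considerably more elaborate, but the basic idea described above, pulling a teacher's prediction toward the school's historical results, can be sketched very roughly like this (the weighting and numbers are illustrative assumptions, not Ofqual's):

```python
# Illustrative sketch only, not Ofqual's published model.
# Idea: blend a teacher's predicted grade with the school's historical
# average, with the school's past record carrying most of the weight.

def adjust_grade(teacher_prediction, school_history, history_weight=0.7):
    """Pull a predicted grade toward the school's historical average.

    teacher_prediction: the grade the teacher expects, on a 0-100 scale
    school_history: past cohort averages for the same school
    history_weight: how heavily the school's record counts (assumed value)
    """
    school_average = sum(school_history) / len(school_history)
    return (1 - history_weight) * teacher_prediction + history_weight * school_average

# A high-achieving student at a historically low-scoring school gets
# marked down regardless of individual ability.
print(adjust_grade(teacher_prediction=90, school_history=[55, 60, 58]))  # about 67.4
```

Even in this toy version, the design choice is visible: the heavier the weight on the school's history, the less the individual student's own performance matters.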

The catch: It didn't quite work out that way.

  • When students received their predicted A-level results last week, many were shocked to discover that they had "scored" lower than they had expected based on their previous grades and performance on earlier mock exams.
  • Around 40% of the predicted grades were downgraded, while only 2% were raised. The biggest victims were high-achieving students from less-advantaged schools, who were more likely to see their scores marked down, while students from wealthier schools were more likely to see theirs go up.

Be smart: The testing algorithm essentially reinforced the economic and societal bias built into the U.K.'s schooling system, leading to results that a trio of AI experts, writing in The Guardian, called "unethical and harmful to education."

What's new: After days of front-page controversies, the British government on Monday abandoned the algorithmic results, instead deciding to accept teachers' initial predictions.

Yes, but: British students may be relieved, but the A-level debacle showcases major problems with using algorithms to predict human outcomes. "It's not just a grading crisis," says Anton Ovchinnikov, a professor at Queen's University's Smith School of Business who has written about the situation. "It's a crisis of data abuse."

  • To students on the outside, the algorithms used to adjust their grades appeared to be an unexplained black box — a frequent concern with AI systems. It wasn't clear how students could appeal predicted scores that often made little sense.
  • Putting what Ovchinnikov notes was a "disproportionately large weight" on schools' past performance meant that students — especially those from disadvantaged backgrounds — lost the chance to be treated as individuals. That matters because scoring high on A-levels and going to an elite university is arguably one of the best opportunities a person gets to improve their lot in life.
  • To avoid such disasters in the future, authorities need to "be more inclusive and diverse in the process of creating such models and algorithms," says Ed Finn, an associate professor at Arizona State University and the author of "What Algorithms Want."

The bottom line: Bias, positive and negative, is a fact of human life — a fact that AI systems are often meant to counter. But poorly designed algorithms risk entrenching a new form of bias that could have impacts that go well beyond university placement.

2. Beating AI bias in facial recognition

Illustration: Sarah Grillo/Axios

Identity-verification startup Onfido is training its machine-learning system to reduce the bias that leads AI to make more facial recognition errors with dark-complexioned customers than those with lighter skin.

Why it matters: The pandemic-driven boom in telemedicine and fintech has made accurate remote identity-verification technology increasingly important, but these systems will only work fairly if they can identify customers of all races and ethnicities.

How it works: Onfido provides remote identity verification by analyzing the face on a government-issued ID document and comparing it to a freshly captured selfie or video.

  • The company's face-matching algorithm uses image recognition to determine whether the face in the selfie is the same as the one on the ID document, confirming identity for remote banking, admission into an event and more (a rough sketch of how this kind of matching typically works follows this list).
  • "Essentially, we're replicating what happens in-person in a bank branch and making it digital," says Husayn Kassai, Onfido's CEO.
  • That service has become more valuable as the pandemic has pushed such interactions online.
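Onfido hasn't published how its matcher works under the hood, but systems like this typically convert both photos into numerical "embeddings" with a neural network and then check how similar the two vectors are. Here is a minimal sketch of that general approach, in which the embedding source and the threshold are placeholder assumptions rather than Onfido's:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def faces_match(id_embedding, selfie_embedding, threshold=0.8):
    """Declare a match if the two embeddings are similar enough.

    In a real system the embeddings come from a deep network trained on
    face images, and the threshold is tuned to trade false accepts
    against false rejects. Both are assumptions in this sketch.
    """
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold

# Toy usage with made-up 128-dimensional embeddings:
rng = np.random.default_rng(0)
id_vec = rng.normal(size=128)
selfie_vec = id_vec + rng.normal(scale=0.1, size=128)  # nearly the same face
print(faces_match(id_vec, selfie_vec))  # True: the vectors are very close
```

The threshold is what sets the false acceptance rate discussed next: raise it and fewer impostors slip through, but more legitimate customers get rejected.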

By the numbers: Onfido has a market-leading false acceptance rate of 0.01%, which means there's only a 1 in 10,000 chance of incorrectly matching a selfie with an ID.

  • But while ID holders of European nationalities have a false acceptance rate of 0.019% and those in the Americas 0.008%, ID holders of African nationalities have a false acceptance rate of 0.038%.

Yes, but: Onfido's rate for African nationalities still represents a 60-fold improvement from a year ago — and that improvement required deliberate training.

  • Because Onfido has a much larger customer base in Europe, the dataset used to train the algorithm was unbalanced. With more light-skinned faces to learn from, the algorithm unsurprisingly performed best with light-skinned users.
  • To reduce bias, says Onfido's director of product Susana Lopes, the company "changed the way it trained the algorithm to help it learn from an unbalanced dataset" (one common way to do that is sketched after this list).
  • Onfido is working with the U.K.'s Information Commissioner's Office to directly tackle facial recognition bias.
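Lopes doesn't detail the technique, but a standard way to learn from an unbalanced dataset is to weight underrepresented groups more heavily in the training loss, so a scarce group still shapes the model as much as a plentiful one. A generic sketch of that idea, with made-up group labels and no connection to Onfido's actual code:

```python
from collections import Counter

def group_weights(group_labels):
    """Weight each group inversely to its frequency in the training data."""
    counts = Counter(group_labels)
    total = len(group_labels)
    return {group: total / (len(counts) * n) for group, n in counts.items()}

def weighted_loss(per_example_losses, group_labels, weights):
    """Average loss with each example scaled by its group's weight."""
    scaled = [loss * weights[group] for loss, group in zip(per_example_losses, group_labels)]
    return sum(scaled) / len(scaled)

# Toy training set that is 80% group "A" and 20% group "B":
labels = ["A"] * 8 + ["B"] * 2
print(group_weights(labels))  # {'A': 0.625, 'B': 2.5}
```

In this toy case each "B" example counts four times as much as each "A" example, compensating for the 4-to-1 imbalance in the data.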

The bottom line: AI bias is almost invariably the result of bias in the real world. If companies offering AI solutions want to change that, says Kassai, they need to specifically "focus on fairness."

3. More seafood on the menu by 2050

Juvenile salmon in a hatchery in Russia. Photo: Yuri Smityuk/TASS via Getty Images

New research charts out how improvements in aquaculture and sustainable fishing could significantly increase food production from the sea by midcentury.

Why it matters: Global demand for food and particularly protein is projected to rise in step with human population growth. With little new land available to be sustainably opened for farming, our best bet may be the oceans — provided we can better manage that resource.

What's new: In a paper published today in Nature, researchers led by Christopher Costello of the University of California, Santa Barbara, argue that the right policies could increase annual global production of food from the sea by up to 44 million tonnes by 2050.

  • That would account for a quarter of the increase in all meat required to feed a projected 9.8 billion people by midcentury.

Background: Costello's projections may sound surprising, given years of reports that overfishing would essentially empty the oceans over the next several decades.

  • But he says that while seafood production may indeed collapse if we "fail to implement sound mariculture policies," the world is trending toward improved fishery management. If those trends continue, we can both produce "substantially more food than today" and do so in a more sustainable fashion.

While right now most ocean seafood comes from wild-caught fisheries, Costello foresees a shift toward mariculture — fish farming at sea.

  • That will require diversifying the diet of farmed fish away from other fish species — a practice that contributes to overfishing — and toward sustainable sources like insects, algae and microbes.
  • Consumers will also need to diversify their tastes, meaning fewer salmon and tuna fillets and more oysters and mussels.

The bottom line: If the future means more steamed mussels with white wine and garlic, I'm all for it.

4. A new push to deploy carbon-sucking tech in the U.S.

Illustration: Rebecca Zisser/Axios

This morning brought new information about a proposal to build a large plant in Texas oil country that would directly pull carbon dioxide from the atmosphere, my Axios colleague Ben Geman writes.

Driving the news: Occidental Petroleum has teamed up with Rusheen Capital Management to advance plans by Canada-based Carbon Engineering to build a direct air capture plant in the Permian Basin — and eventually facilities elsewhere, too.

Occidental subsidiary Oxy Low Carbon Ventures and Rusheen, a private equity firm, have formed a company called 1PointFive to "finance and deploy" Carbon Engineering's technology in the U.S.

Why it matters: It's a step toward building a plant that the companies say would be the world's largest direct air capture (DAC) facility, with the capacity to remove up to 1 million metric tons of atmospheric CO2 annually.

  • More broadly, the new licensing deal between 1PointFive and Carbon Engineering for the Permian plant in Texas is the "first step toward their aspiration to deliver this technology on an industrial scale throughout the United States," they said.

Where it stands: DAC is among the nascent negative-emissions technologies attracting more attention as a way to help avoid runaway global warming. But that's if — if! — it can eventually be deployed at a major scale (1 million metric tons annually is a drop in the bucket next to the tens of billions of tons of CO2 the world emits each year).

Go deeper.

5. Worthy of your time

Near misses at UNC Chapel Hill’s high-security lab illustrate risk of accidents with coronaviruses (Allison Young and Jessica Blake — ProPublica)

  • As new technology allows scientists to genetically engineer dangerous viruses, the catastrophic risks posed by lab accidents like those documented here are only growing.

A radical new model of the brain illuminates its wiring (Grace Huckins — Wired)

  • "Network neuroscience" offers a new picture of the ever-mysterious brain.

Philosophers on GPT-3 (Daily Nous)

The millions being made from cardboard theft (Jo Harper and Will Smale — BBC)

  • How the rise of e-commerce led to a global crime ring dedicated to stealing your recycling.

6. 1 law and order thing: AI criminology

Not included on the list: Time-traveling robot terminators. Photo: Yoshikazu Tsuno/AFP via Getty Images

Deepfakes, blackmail and deliberate autonomous vehicle crashes are among the most worrying ways AI could be used for criminal activity over the next 15 years, according to new research.

Why it matters: Any technology as transformative as AI has the potential to be used for evil as well as good, which is why we should identify dangerous uses now — pre-Skynet.

By the numbers: In the paper, funded by the Dawes Centre for Future Crime at University College London, researchers used academic studies, news reports and works of fiction to rank the top 20 potential criminal uses of AI. The top ones include:

  • Audio/video impersonation: As AI improves, it will be increasingly difficult to tell authentic media from deepfakes generated by machines. Researchers expect the technology to be used mostly for financial crime, with political manipulation as a sideline.
  • Driverless vehicles as weapons: Motor vehicles are already used as weapons of terror, but autonomous vehicles would increase the threat by reducing the need to recruit drivers and enabling a single terrorist to direct multiple attacks at once.
  • Large-scale blackmail: The real threat from AI isn't new forms of crime so much as the turbocharging of existing patterns of criminal behavior. Fake evidence generated at massive scale could lower the bar to blackmail by making it possible to target countless victims at once.

Least worrying: Tiny burglar bots that fit through keyholes or letterboxes could help human robbers gain entry into locked homes, but researchers consider the risk they pose and the damage they could do to be low.

The bottom line: AI may enable new forms of criminal behavior, but it will still be human beings breaking the law — for now, at least.