Illustration: Lazaro Gamio/Axios

The first step to ethical artificial intelligence is teaching the computer to explain its decision making, something known in the field as explainable AI.

Why it matters: Right now, many deep learning algorithms don't make it clear how they arrive at their predictions or conclusions. That lack of visibility into the data, steps and calculations behind an outcome makes it hard to root out bias or other algorithmic errors that could affect results like who gets a loan or how much a factory should produce.

What's happening: Explainable AI, also sometimes called transparent AI, has become a top priority for nearly all the big companies in the AI field, including Microsoft, Google, Intel, IBM and Oracle. The topic is also expected to come up in Thursday's White House meeting on AI.

  • IBM's guidelines put it pretty simply: "Companies must be able to explain what went into their algorithm’s recommendations. If they can’t, then their systems shouldn’t be on the market."

That sounds straightforward, even obvious. But it actually isn't a feature built into many of the deep learning systems that are currently available.

No one size fits all: AI was a huge topic at Google's I/O developer conference this week, with some focus on explainability as well.

  • One very basic example is the new personalized scores for locations in Google Maps. In that case, Google's AI can also say why it thinks a restaurant is a good fit by showing a few of the factors that led to the recommendation.
  • By contrast, doctors wanting to know why an AI system made a clinical diagnosis are going to want a far more detailed explanation.
  • "The kind of information you need in different scenarios really is fundamentally different," Google principal scientist Greg Corrado told Axios. "It's not possible to have one uniform standard for what constitutes explainability."

How it works: There are different ways for an AI system to explain itself.

  • One is to show which variables led to a decision and how heavily each was weighted. In the Google restaurant example, the program might explain its recommendation by noting that the user strongly prefers kid-friendly restaurants and often goes to pizza places.
  • In other instances, it could be enough to let people adjust different pieces of data and see whether that changes the conclusion (a toy sketch of both approaches follows this list).
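
To make those two styles concrete, here is a minimal Python sketch. It is not Google's actual system; the feature names, weights and recommendation threshold are invented for illustration. It shows how a simple weighted scorer can report each variable's contribution to a decision, and how perturbing one input reveals whether the decision flips.

```python
# Toy example of two explanation styles for a linear recommendation scorer.
# All features, weights, and the threshold below are invented for illustration;
# real recommendation or diagnostic models are far more complex.

FEATURE_WEIGHTS = {
    "kid_friendly": 2.0,   # the user strongly prefers kid-friendly places
    "serves_pizza": 1.5,   # the user often goes to pizza places
    "distance_km": -0.3,   # farther away counts against a place
    "avg_rating": 1.0,
}
RECOMMEND_THRESHOLD = 3.0

def score(features):
    """Weighted sum of the input variables."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

def explain(features):
    """Style 1: report each variable's contribution, largest effects first."""
    contributions = {name: FEATURE_WEIGHTS[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

def what_if(features, name, new_value):
    """Style 2: change one input and report whether the recommendation flips."""
    before = score(features) >= RECOMMEND_THRESHOLD
    after = score({**features, name: new_value}) >= RECOMMEND_THRESHOLD
    return before, after

restaurant = {"kid_friendly": 1, "serves_pizza": 1, "distance_km": 2.0, "avg_rating": 0.9}
print("recommend:", score(restaurant) >= RECOMMEND_THRESHOLD)   # True
print("top factors:", explain(restaurant))
print("still recommend if not kid-friendly?", what_if(restaurant, "kid_friendly", 0))
```

A deep learning model isn't this transparent on its own, which is why researchers build attribution and what-if tooling around trained models rather than reading the explanation straight out of the network.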

What is explainable enough? Microsoft Research director Eric Horvitz likens the problem to getting a car fixed. You don't have to know exactly how the carburetor on a car works, as long as your mechanic does. That opens up the question of just how explainable AI needs to be for specific users and specific purposes.

"I think we need to do more research on what is a satisfying answer to a human being," he said.

It's not just Big Tech: DARPA, the Defense Department's advanced research arm, has a program on Explainable AI. The stakes are obviously huge when AI is helping guide decisions of who to attack, how and when.

"The DoD has to have, I would argue, a much higher bar ," DARPA's Brian Pierce told Axios.

Government's role: IBM hopes to raise the topic at today's White House AI summit.

“If the government’s going to do anything in terms of encouraging or even potentially regulating AI, the main focus has got to be on this issue of explainability," said Chris Padilla, who leads IBM's government affairs efforts.

What else: Explainability is a necessary ingredient for ethical AI, but it's really just a start. Another key is eliminating bias, both in the data used to "train" the programs and in the algorithms themselves (a rough example of a data-level check appears after the list below).

  • There is a separate question concerning what tasks should be reserved only for humans.
  • Many believe, for example, that allowing AI-powered autonomous weapons systems is a bad idea.
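
As a rough illustration of what a data-level bias check can look like, here is a generic disparate-impact ratio in Python. This is not a method described in the article; the records and the common four-fifths (0.8) rule of thumb are used only as an example.

```python
# Generic data-level bias check: compare favorable-outcome rates across groups
# in a training set. The records and the 0.8 threshold are illustrative only.

def approval_rate(records, group):
    rows = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in rows) / len(rows)

def disparate_impact(records, group_a, group_b):
    """Ratio of favorable-outcome rates; values well below 1.0 flag possible bias."""
    return approval_rate(records, group_a) / approval_rate(records, group_b)

training_data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

ratio = disparate_impact(training_data, "B", "A")
print(f"disparate impact ratio: {ratio:.2f}")   # 0.50, well under the 0.8 rule of thumb
```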

Go deeper: Here are several more looks at the need for (and means of creating) explainable AI.
