Oct 19, 2019

In AI we trust — too much

AI systems intended to help people make tough choices — like prescribing the right drug or setting the length of a prison sentence — can instead end up effectively making those choices for them, thanks to human faith in machines.

How it works: These programs generally offer new information or a few options meant to help a human decision-maker choose more wisely. But an overworked or overly trusting person can fall into a rubber-stamping role, unquestioningly following algorithmic advice.

Why it matters: Over-reliance on potentially faulty AI can harm the people whose lives are shaped by critical decisions about employment, health care, legal proceedings and more.

The big picture: This phenomenon is called automation bias. Early studies focused on autopilot for airplanes — but as automation technology becomes more complex, the problem could get much worse with more dangerous consequences.

  • AI carries an aura of legitimacy and accuracy, burnished by overeager marketing departments and underinformed users.
  • But AI is just fancy math. Like any equation, if you give it incorrect inputs, it will return wrong answers. And if it learns patterns that don't reflect the real world, its output will be equally flawed.

Automation bias caused by simpler technologies has already been blamed for real-world disasters.

  • In 2016, a patient was prescribed the wrong medication when a pharmacist chose a similarly named drug from a list on a computer. A nurse noticed — but administered the meds anyway, assuming the electronic record was correct. The patient had heart and blood pressure problems as a result.
  • In 2010, a pipeline dumped nearly 1 million gallons of crude oil into Michigan wetlands and rivers after operators repeatedly ignored "critical alarms." They were desensitized because of previous false alarms, according to a 2016 post-mortem report — showing another threat from over-reliance on machines.

"When people have to make decisions in relatively short timeframes, with little information — this is when people will tend to just trust whatever the algorithm gives them," says Ryan Kennedy, a University of Houston professor who researches trust and automation.

  • "The worst-case scenario is somebody taking these algorithmic recommendations, not understanding them, and putting us in a life or death situation," Kennedy tells Axios.

Now, institutions are pushing AI systems further into high-stakes decisions.

  • In hospitals: A forthcoming study found that Stanford physicians "followed the advice of [an AI] model even when it was pretty clearly wrong in some cases," says Matthew Lungren, a study author and the associate director of the university's Center for Artificial Intelligence in Medicine and Imaging.
  • At war: Weapons are increasingly automated, but usually still require human approval before they shoot to kill. In a 2004 paper, Missy Cummings, now the director of Duke University's Humans and Autonomy Lab, wrote that automated aids for aviation or defense "can cause new errors in the operation of a system if not designed with human cognitive limitations in mind."
  • On the road: Sophisticated driver assists like Tesla's Autopilot still require people to intervene in dangerous situations. But a 2015 Duke study found that humans lose focus when they're just monitoring a car rather than driving it.

And in the courtroom, human prejudice mixes in.

What's next: More information about an algorithm's confidence level can give people clues about how much to lean on it; a rough sketch of that idea follows the bullets below. Lungren says the Stanford physicians made fewer mistakes when they were given a recommendation along with an accuracy estimate.

  • In the future, machines may adjust to a user's behavior: showing their work when a person is trusting the advice too readily, or backing off if the user seems tired or stressed, since fatigue and stress can make people less critical.
  • "Humans are good at seeing nuance in a situation that automation can't," says Neera Jain, a Purdue professor who studies human–machine interaction. "[We are] trying to avoid those situations where we become so over-reliant that we forget we have our own brains that are powerful and sophisticated."


The hidden costs of AI

Illustration: Eniola Odetunde/Axios

In the most exclusive AI conferences and journals, AI systems are judged largely on their accuracy: How well do they stack up against human-level translation or vision or speech?

Yes, but: In the messy real world, even the most accurate programs can stumble and break. Considerations that matter little in the lab, like reliability or computing and environmental costs, are huge hurdles for businesses.

Oct 26, 2019

Expert Voices Live: AI in 2050

Joshua New, Senior Policy Analyst at the Center for Data Innovation, on Thursday morning. Photo: Chuck Kennedy for Axios

The big picture: On Thursday morning, Axios' Cities Correspondent Kim Hart and Emerging Technology Reporter Kaveh Waddell hosted a roundtable conversation to discuss the future of AI, with a focus on policy and innovation.

The conversation touched on how to balance innovation with necessary regulation, create and maintain trust with users, and prepare for the future of work.

The relationship between the public and private sectors

As AI grows more sophisticated and more widely used, a focal point of the discussion was how to provide regulatory guardrails while still encouraging innovation.

  • Rep. Jerry McNerney (D-CA) stressed the importance of regulators being more informed about new technology: "How can we best use resources? We need the expertise within the government to manage these developments as they come."
  • Dr. Mona Siddiqui, Chief Data Officer at HHS, on the existing gaps at the federal level: "Investment and infrastructure is lacking. A lot of departments need the support to build that."
  • Collin Sebastian, Head of Software Products and Engineering at SoftBank Robotics America, on how the government can serve as an effective partner to the private sector: "One of the best ways the government can help without stifling innovation is to provide direction...If you give me a specific problem to address, that’s going to guide my development without having to create new legislation."

Attendees discussed balancing regulation and innovation in the context of global competition, particularly with China.

  • Rob Strayer, Deputy Assistant Secretary of State for Cyber and International Communications Policy at the State Department, on the challenges of regulation in the context of international competition in AI development: "We need to not impede growth of AI technologies and...[be] aware of a competitive international environment. Other countries won’t put [these] guardrails in."

Preparing for the future of work

The conversation also highlighted who is most affected by advances in AI and the importance of future-proofing employment. Because AI touches every industry, participants repeatedly stressed the need to center the human experience when crafting solutions.

  • William Carter, Deputy Director and Fellow at the Technology Policy Program at the Center for Strategic & International Studies, highlighted the importance of future-proofing systems: "Creating trust is more than regulation and mediating algorithmic risk. [People want to feel that] AI can be a part of the world in which they can participate. [We should be] creating incentives for companies to retrain workers who are displaced."
  • Molly Kinder, David Rubenstein Fellow with the Metropolitan Policy Program at the Brookings Institution, on the importance of having a clear picture of who is most at risk to be adversely affected by AI job displacement:
    • "We’re finding that...the least resilient are the ones who are least likely to be retrained. Our insights suggest that we as a country are not equipped to help working adults."
    • "Latina women are the most at-risk group for AI [job displacement]...We need to make sure we’re human-centered in developing our solutions...[and that] we update our sense of who the workers are that are most being affected."

Creating trust with users

With the accelerating development of AI, creating and maintaining trust with users, consumers, and constituents alike was central to the discussion.

  • Kristin Sharp, Senior Fellow at New America and Partner at Entangled, on how keeping people informed can create trust: "People tend to be worried about their privacy when they don’t know what the end-use case is for the data that’s being collected."
  • Lindsey Sheppard, Associate Fellow at the Center for Strategic & International Studies, on the importance of seeing AI as part of social, economic, and educational systems that also need future-proofing: "You’re not let off the hook if you’re not using AI. You need that infrastructure whether or not you’re using AI. You still need skilled workers that have those software and data skills."

Thank you SoftBank Group for sponsoring this event.

Oct 25, 2019

The unanswered questions in America's AI strategy

Illustration: Sarah Grillo/Axios

Three years after the White House first publicly considered the U.S. government's role as a shepherd of artificial intelligence research, pivotal unanswered questions are still holding back a coherent strategy for boosting the critical technology at home.

Why it matters: China's authoritarian system, largely untroubled by deliberative holdups, has been pouring money into its AI sector.

Nov 6, 2019