Oct 5, 2019

Wrestling over secret AI algorithms

Illustration: Sarah Grillo/Axios

We're seeing the beginnings of a tug-of-war at the highest levels of government over how much access people should have to AI systems that make critical decisions about them.

What's happening: Life-changing determinations, like the length of a criminal's sentence or the terms of a loan, are increasingly informed by AI programs. These can churn through oodles of data to detect patterns invisible to the human eye, potentially making more accurate predictions than before.

Why it matters: The systems are so complex that it can be hard to know how they arrive at answers — and so valuable that their creators often try to restrict access to their inner workings, making it potentially impossible to challenge their consequential results.

Driving the news: Two recent proposals are pulling in opposite directions.

  • A bill from Rep. Mark Takano, a California Democrat, would block companies that design AI systems for criminal justice from withholding details about their algorithms by claiming they’re trade secrets.
  • A proposal from the Department of Housing and Urban Development (HUD) would protect landlords, lenders and insurers that want to use algorithms for important determinations, shielding them from claims that the algorithms unintentionally have a more negative impact on certain groups of people.

These are among the earliest attempts to set down rules and definitions for algorithmic transparency. How they shake out could set rough precedents for how the government will approach the many future questions that will emerge.

Proponents of more access say it's vital to test whether walled-off systems are making serious mistakes or unfair determinations — and argue that the potential for harm should outweigh companies' interest in protecting their secrets.

  • Developers regularly invoke trade-secret rights to keep their algorithms — used for key evidence like DNA matches or bullet traces — away from the accused, says Rebecca Wexler, a UC Berkeley law professor who consulted on Takano's bill.
  • "We need to give defendants the rights to get the source code and [not] allow intellectual property rights to be able to trump due process rights," Takano tells Axios. His bill also asks the government to set standards for forensic algorithms and test every program before it is used.

The HUD proposal would require someone to show that an algorithmic decision was based on an illegal proxy, like race or gender, in order to succeed in a lawsuit. But critics say that can be impossible to determine without understanding the system (see the sketch after this list).

  • "By creating a safe harbor around algorithms that do not use protected class variables or close proxies, the rule would set a precedent that both permits the proliferation of biased algorithms and hampers efforts to correct for algorithmic bias," says Alice Xiang, a researcher at the Partnership on AI.
  • HUD is soliciting comments on the proposal until later this month.
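
To make Xiang's point concrete, here is a minimal sketch of how a model that never sees a protected attribute can still produce disparate outcomes through a correlated proxy. The data, feature names and classifier here are entirely hypothetical, not drawn from any real lending system; the point is only that measuring the disparity requires access to the system's inputs and outputs.

```python
# Hypothetical illustration: a proxy feature can reintroduce a protected
# attribute that was deliberately excluded from the model's inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute -- never given to the model.
group = rng.integers(0, 2, size=n)

# "zip_score" stands in for a proxy feature (e.g., neighborhood)
# that is strongly correlated with group membership.
zip_score = group + rng.normal(0.0, 0.3, size=n)

# Historical outcomes that were themselves correlated with group --
# the kind of biased training data critics worry about.
approved = (rng.normal(0.0, 1.0, size=n) + 1.0 - group) > 0.5

# Train only on the proxy; the protected attribute is excluded.
model = LogisticRegression().fit(zip_score.reshape(-1, 1), approved)
pred = model.predict(zip_score.reshape(-1, 1))

# An auditor with access to the system can measure the gap anyway.
print(f"approval rate, group 0: {pred[group == 0].mean():.2f}")
print(f"approval rate, group 1: {pred[group == 1].mean():.2f}")
```

On a typical run the two approval rates diverge sharply even though "group" was never an input. Without access to the model and its data, a plaintiff would have no way to surface that gap, which is the crux of the objection to the safe harbor.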

The other side: "The goal here is to bring more certainty into this area of the law," said HUD General Counsel Paul Compton in an August press conference. He said the proposal "frees up parties to innovate, take risks and meet the needs of their customers without the fear that their efforts will be second-guessed through statistics years down the line."

Go deeper

The hidden costs of AI

Illustration: Eniola Odetunde/Axios

In the most exclusive AI conferences and journals, AI systems are judged largely on their accuracy: How well do they stack up against human-level translation or vision or speech?

Yes, but: In the messy real world, even the most accurate programs can stumble and break. Considerations that matter little in the lab, like reliability or computing and environmental costs, are huge hurdles for businesses.


In AI we trust — too much

AI systems intended to help people make tough choices — like prescribing the right drug or setting the length of a prison sentence — can instead end up effectively making those choices for them, thanks to human faith in machines.

How it works: These programs generally offer new information or a few options meant to help a human decision-maker choose more wisely. But an overworked or overly trusting person can fall into a rubber-stamping role, unquestioningly following algorithmic advice.


Expert Voices Live: AI in 2050

Joshua New, Senior Policy Analyst at the Center for Data Innovation, on Thursday morning. Photo: Chuck Kennedy for Axios

The big picture: On Thursday morning, Axios' Cities Correspondent Kim Hart and Emerging Technology Reporter Kaveh Waddell hosted a roundtable conversation to discuss the future of AI, with a focus on policy and innovation.

The conversation touched on how to balance innovation with necessary regulation, create and maintain trust with users, and prepare for the future of work.

The relationship between the public and private sectors

With AI growing more sophisticated and more widely used, how to provide regulatory guardrails while still encouraging innovation was a focal point of the discussion.

  • Rep. Jerry McNerney (D-CA) stressed the importance of regulators being more informed about new technology: "How can we best use resources? We need the expertise within the government to manage these developments as they come."
  • Dr. Mona Siddiqui, Chief Data Officer at HHS, on the existing gaps at the federal level: "Investment and infrastructure is lacking. A lot of departments need the support to build that."
  • Collin Sebastian, Head of Software Products and Engineering at SoftBank Robotics America, on how the government can serve as an effective partner to the private sector: "One of the best ways the government can help without stifling innovation is to provide direction...If you give me a specific problem to address, that’s going to guide my development without having to create new legislation."

Attendees discussed balancing regulation and innovation in the context of global competition, particularly with China.

  • Rob Strayer, Deputy Assistant Secretary of State for Cyber and International Communications Policy at the State Department, on the challenges of regulation in the context of international competition in AI development: "We need to not impede growth of AI technologies and...[be] aware of a competitive international environment. Other countries won’t put [these] guardrails in."

Preparing for the future of work

The conversation also highlighted who is most affected by AI-driven technological change and the importance of future-proofing employment. Because AI touches every industry, participants repeatedly stressed centering the human experience in designing solutions.

  • William Carter, Deputy Director and Fellow at the Technology Policy Program at the Center for Strategic & International Studies, highlighted the importance of future-proofing systems: "Creating trust is more than regulation and mediating algorithmic risk. [People want to feel that] AI can be a part of the world in which they can participate. [We should be] creating incentives for companies to retrain workers who are displaced."
  • Molly Kinder, David Rubenstein Fellow with the Metropolitan Policy Program at the Brookings Institution, on the importance of having a clear picture of who is most at risk of being adversely affected by AI job displacement:
    • "We’re finding that...the least resilient are the ones who are least likely to be retrained. Our insights suggest that we as a country are not equipped to help working adults."
    • "Latina women are the most at-risk group for AI [job displacement]...We need to make sure we’re human-centered in developing our solutions...[and that] we update our sense of who the workers are that are most being affected."

Creating trust with users

With the accelerating development of AI, creating and maintaining trust with users, consumers, and constituents alike was central to the discussion.

  • Kristin Sharp, Senior Fellow at New America and Partner at Entangled, on how keeping people informed can create trust: "People tend to be worried about their privacy when they don’t know what the end-use case is for the data that’s being collected."
  • Lindsey Sheppard, Associate Fellow at the Center for Strategic & International Studies, on the importance of seeing AI as part of social, economic, and educational systems that also need future-proofing: "You’re not let off the hook if you’re not using AI. You need that infrastructure whether or not you’re using AI. You still need skilled workers that have those software and data skills."

Thank you, SoftBank Group, for sponsoring this event.
