Sep 28, 2019

Insurers deploy AI against California's wildfire crisis

Illustration: Aïda Amer/Axios

To keep up with California's unrelenting wildfire threat, some insurers are now turning to AI to predict fire risk with unprecedented, structure-by-structure detail.

Why it matters: This will allow them to cover homes in areas that they would otherwise have passed over — but potentially at the cost of hiking rates for those who can least afford it.

The big picture: Spooked by a recent surge in destructive fires that shows no sign of cooling off, insurers have backed away from underwriting in the most flammable parts of the state. They say the risk is sky-high, and there's too much uncertainty about where fire will strike next and what it will consume.

Now, some insurers are getting creative. They are trying to pack in as much data as possible: information from building permits, records and codes — and, increasingly, satellite photos and aerial imagery from drones and aircraft.

  • Automatically analyzing super-detailed, top-down images helps insurers understand crucial, property-specific risk factors.
  • Some important ones: how close a house is to vegetation, how flammable that brush is and what the house's roof is made of. (A simplified sketch of how factors like these might be combined appears below.)
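
For illustration only, here is a minimal sketch, in Python, of how property-specific factors like these could be rolled into a single score. Every feature name, weight and threshold below is hypothetical; this is not Zesty.ai's model or any insurer's actual methodology.

```python
# Toy property-level wildfire risk score built from the kinds of features the
# article describes. All weights, thresholds and roof factors are invented.
from dataclasses import dataclass

# Rough relative flammability of common roof materials (hypothetical values).
ROOF_FACTOR = {"wood shake": 1.0, "asphalt shingle": 0.5, "metal": 0.2, "tile": 0.15}

@dataclass
class Property:
    distance_to_vegetation_m: float  # e.g. measured from aerial imagery
    brush_flammability: float        # 0 (low) to 1 (high), e.g. from fuel maps
    roof_material: str

def risk_score(p: Property) -> float:
    """Return a 0-100 score; higher means more exposed to wildfire."""
    # Defensible space: risk falls off as vegetation sits farther from the house.
    proximity = max(0.0, 1.0 - p.distance_to_vegetation_m / 30.0)
    roof = ROOF_FACTOR.get(p.roof_material, 0.5)  # default for unknown roofs
    # Weighted blend of the three factors (weights chosen only for illustration).
    return 100 * (0.5 * proximity + 0.3 * p.brush_flammability + 0.2 * roof)

home = Property(distance_to_vegetation_m=5, brush_flammability=0.8, roof_material="wood shake")
print(f"Risk score: {risk_score(home):.0f}/100")
```

In practice, the article notes, models like Zesty.ai's are built on decades of wildfire data rather than hand-set weights; the point here is only the shape of the input, a handful of property-level features extracted from imagery and records.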

Driving the news: MetLife announced this week that it's working with a Bay Area startup, Zesty.ai, to use this type of data for property-level scoring.

  • Zesty.ai predicts risk based on building information, aerial imagery, patterns gleaned from examining decades of wildfires and data from fire scientists.
  • MetLife has been extra conservative in fire areas, VP Carol Anderson tells Axios, in part because it has relied on traditional maps to assess risk.
  • When MetLife implements the new scoring system early next year, it hopes to bring in new customers and retain old ones, Anderson says.

Several startups have popped up to sell these new models to insurers, and some of the old guard are developing them, too.

  • About half a dozen companies have already met with the California Department of Insurance (CDI) about their new models, according to Ken Allen, deputy commissioner for rate regulation.
  • And insurers, at an inflection point in how they approach fire risk, are increasingly interested, says Janet Ruiz of the Insurance Information Institute, an industry association.

But, but, but: Experts worry that property-level scoring can result in higher premiums for people living in high-risk areas, who are often on low or fixed incomes.

  • Low-income homeowners may be unable to afford property updates that would drive risk factors down, like replacing roofing or clearing trees. And if rates go up, their property values could go down in response.
  • New, more granular models could drive a bigger wedge between premiums, says Allen of the Department of Insurance, "so high risk pay more and lower risk pay less." (The toy example after this list shows that spread.)
  • "We are seeing insurance companies over-rely on technology, and the consumers are paying the price," says Emily Rogan of United Policyholders, a nonprofit that advocates for insurance customers.

What they're saying: "MetLife follows standard actuarial principles for ratemaking to ensure our rates are not excessive, inadequate or unfairly discriminatory," a spokesperson told Axios.

  • Zesty.ai founder Attila Toth argues that it ultimately falls on regulators, not his company, to make sure that its risk models don't discriminate. But in a report last year, CDI said it "does not have the necessary authority to regulate how insurers underwrite residential property insurance."
  • After this story was published, Toth shared the summary of an outside auditor's report that analyzed the Zesty.ai fire risk model and concluded that, if applied properly, it would not "result in rates that are unfairly discriminatory."

The bottom line: "Moving to risk-based rates is overall a positive thing to do, but it could have a negative effect on people currently in these high-risk areas," says Lloyd Dixon, a RAND researcher who last year published a detailed study of wildfire's impact on insurance in California.

Go deeper: Deciding whether to rebuild after fire

Editor's note: This story has been updated with details on the Zesty.ai actuarial report.

Go deeper

The hidden costs of AI

Illustration: Eniola Odetunde/Axios

In the most exclusive AI conferences and journals, AI systems are judged largely on their accuracy: How well do they stack up against human-level translation or vision or speech?

Yes, but: In the messy real world, even the most accurate programs can stumble and break. Considerations that matter little in the lab, like reliability or computing and environmental costs, are huge hurdles for businesses.

Go deeper (Oct 26, 2019)

New York's very expensive surprise medical billing solution

New York's surprise billing law — which providers hope will become the model for a national solution — has resulted in providers receiving some very high payments, according to a new analysis by the USC-Brookings Schaeffer Initiative for Health Policy.

Why it matters: Surprise medical bills affect two groups of people: the patients directly responsible for paying them, and the rest of us, who pay higher premiums as a result of their existence.

Go deeper (Oct 25, 2019)

Expert Voices Live: AI in 2050

Joshua New, Senior Policy Analyst at the Center for Data Innovation, on Thursday morning. Photo: Chuck Kennedy for Axios

The big picture: On Thursday morning, Axios' Cities Correspondent Kim Hart and Emerging Technology Reporter Kaveh Waddell hosted a roundtable conversation to discuss the future of AI, with a focus on policy and innovation.

The conversation touched on how to balance innovation with necessary regulation, create and maintain trust with users, and prepare for the future of work.

The relationship between the public and private sectors

As AI becomes more sophisticated and more widely used, how to provide regulatory guardrails while still encouraging innovation was a focal point of the discussion.

  • Rep. Jerry McNerney (D-CA) stressed the importance of regulators being more informed about new technology: "How can we best use resources? We need the expertise within the government to manage these developments as they come."
  • Dr. Mona Siddiqui, Chief Data Officer at HHS, on the existing gaps at the federal level: "Investment and infrastructure is lacking. A lot of departments need the support to build that."
  • Collin Sebastian, Head of Software Products and Engineering at SoftBank Robotics America, on how the government can serve as an effective partner to the private sector: "One of the best ways the government can help without stifling innovation is to provide direction...If you give me a specific problem to address, that’s going to guide my development without having to create new legislation."

Attendees discussed balancing regulation and innovation in the context of global competition, particularly with China.

  • Rob Strayer, Deputy Assistant Secretary of State for Cyber and International Communications Policy at the State Department, on the challenges of regulation in the context of international competition in AI development: "We need to not impede growth of AI technologies and...[be] aware of a competitive international environment. Other countries won’t put [these] guardrails in."

Preparing for the future of work

The conversation also highlighted who is most affected by advances in AI and the importance of future-proofing employment. Because AI touches every industry, participants repeatedly stressed the importance of centering the human experience in creating solutions.

  • William Carter, Deputy Director and Fellow at the Technology Policy Program at the Center for Strategic & International Studies, highlighted the importance of future-proofing systems: "Creating trust is more than regulation and mediating algorithmic risk. [People want to feel that] AI can be a part of the world in which they can participate. [We should be] creating incentives for companies to retrain workers who are displaced."
  • Molly Kinder, David Rubenstein Fellow with the Metropolitan Policy Program at the Brookings Institution, on the importance of having a clear picture of who is most at risk of being adversely affected by AI-driven job displacement:
    • "We’re finding that...the least resilient are the ones who are least likely to be retrained. Our insights suggest that we as a country are not equipped to help working adults."
    • "Latina women are the most at-risk group for AI [job displacement]...We need to make sure we’re human-centered in developing our solutions...[and that] we update our sense of who the workers are that are most being affected."

Creating trust with users

With the accelerating development of AI, creating and maintaining trust with users, consumers, and constituents alike was central to the discussion.

  • Kristin Sharp, Senior Fellow at New America and Partner at Entangled, on how keeping people informed can create trust: "People tend to be worried about their privacy when they don’t know what the end-use case is for the data that’s being collected."
  • Lindsey Sheppard, Associate Fellow at the Center for Strategic & International Studies, on the importance of seeing AI as part of social, economic, and educational systems that also need future-proofing: "You’re not let off the hook if you’re not using AI. You need that infrastructure whether or not you’re using AI. You still need skilled workers that have those software and data skills."

Thank you to SoftBank Group for sponsoring this event.

Keep reading (Oct 25, 2019)