Welcome back to Future. Let me know what you think we should cover. Just hit reply or send a note to firstname.lastname@example.org. Erica, who writes Future on Wednesdays, is at email@example.com.
Today's issue is ~1,500 words, or a 5ish-minute read. Let's get to it…
Illustration: Aïda Amer/Axios
To keep up with California's unrelenting wildfire threat, some insurers are now turning to AI to predict fire risk with unprecedented, structure-by-structure detail.
Why it matters: This will allow them to cover homes in areas that they would otherwise have passed over — but potentially at the cost of hiking rates for those who can least afford it.
The big picture: Spooked by a recent surge in destructive fires that shows no sign of cooling off, insurers have backed away from underwriting in the most flammable parts of the state. They say the risk is sky-high, and there's too much uncertainty about where fire will strike next and what it will consume.
Now, some insurers are getting creative. They are trying to pack in as much data as possible: information from building permits, records and codes — and, increasingly, satellite photos and aerial imagery from drones and aircraft.
Driving the news: MetLife announced this week that it's working with a Bay Area startup, Zesty.ai, to use this type of data for property-level scoring.
Several startups have popped up to sell these new models to insurers, and some of the old guard are developing them, too.
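For a sense of what "property-level scoring" means in practice, here is a minimal, purely hypothetical sketch in Python. Neither MetLife nor Zesty.ai has published how its model works; the features, weights and thresholds below are invented for illustration only, standing in for the kinds of signals pulled from permits, building records and aerial imagery.

```python
# Hypothetical sketch of property-level wildfire scoring. All features and weights
# are illustrative assumptions, not any insurer's actual actuarial model.

from dataclasses import dataclass

@dataclass
class Property:
    roof_material: str             # e.g. drawn from permit records (assumed feature)
    vegetation_within_30m: float   # fraction of ground cover, e.g. from aerial imagery (assumed)
    slope_degrees: float           # terrain slope near the structure (assumed)
    distance_to_wildland_m: float  # distance to the nearest wildland edge (assumed)

# Illustrative roof-type risk factors, not actuarial ones.
ROOF_RISK = {"wood shake": 1.0, "composite": 0.5, "metal": 0.2, "tile": 0.2}

def wildfire_risk_score(p: Property) -> float:
    """Return a 0-1 score; higher means more exposed. Purely illustrative."""
    roof = ROOF_RISK.get(p.roof_material, 0.5)
    vegetation = min(p.vegetation_within_30m, 1.0)
    slope = min(p.slope_degrees / 45.0, 1.0)
    proximity = max(0.0, 1.0 - p.distance_to_wildland_m / 1000.0)
    return 0.3 * roof + 0.3 * vegetation + 0.2 * slope + 0.2 * proximity

# A home with a wood-shake roof, heavy nearby vegetation and close wildland scores high.
print(wildfire_risk_score(Property("wood shake", 0.8, 20.0, 150.0)))
```

The point of a sketch like this is the granularity: instead of pricing an entire ZIP code as one risk pool, each structure gets its own score, which is what makes both the new coverage and the premium hikes described below possible.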
But, but, but: Experts worry that property-level scoring can result in higher premiums for people living in high-risk areas, who are often on low or fixed incomes.
What they're saying: "MetLife follows standard actuarial principles for ratemaking to ensure our rates are not excessive, inadequate or unfairly discriminatory," a spokesperson told Axios.
The bottom line: "Moving to risk-based rates is overall a positive thing to do, but it could have a negative effect on people currently in these high-risk areas," says Lloyd Dixon, a RAND researcher who last year published a detailed study of wildfire's impact on insurance in California.
Go deeper: Deciding whether to rebuild after fire
Editor's note: This story has been updated with details on the Zesty.ai actuarial report.
Illustration: Sarah Grillo/Axios
Tech giants, startups and academic labs are pumping out datasets and detectors in hopes of jump-starting the effort to create an automated system that can separate real videos, images and voice recordings from AI forgeries.
Driving the news: Dessa, the AI company behind the hyper-convincing fake Joe Rogan voice from earlier this summer, published a tool today for detecting deepfake audio — the kind that recently scammed a CEO out of $240,000.
The big picture: There's an all-hands scramble for better detectors. Building them generally requires a large stock of high-quality deepfake examples, which researchers use to train algorithms that can tell whether a piece of media was created by AI.
Unlike those example datasets, which let researchers cook up their own detectors, Dessa is releasing a pre-baked system. That comes with both advantages and risks.
But, but, but: Dessa's Ragavan Thurairatnam acknowledged that an open-source detector could help a particularly determined troll create new audio fakes that fool it. That's because generative AI systems can be trained to trick a specific detector.
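To make that training loop concrete, here is a minimal, hypothetical sketch of how a detector gets built from a labeled dataset of real and synthetic clips. It is not Dessa's system; the file names, the feature choice (MFCCs) and the simple classifier are assumptions for illustration only.

```python
# Sketch of the general approach (not Dessa's detector): train a binary classifier
# on labeled examples of real and AI-generated audio, then score new clips.

import numpy as np
import librosa                                  # common audio-feature library
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Summarize a clip as the mean of its MFCC frames (a deliberately simple feature)."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical dataset layout: file paths labeled 0 = real, 1 = synthetic.
real_clips = ["real_0.wav", "real_1.wav"]
fake_clips = ["fake_0.wav", "fake_1.wav"]

X = np.stack([clip_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))

detector = LogisticRegression(max_iter=1000).fit(X, y)

# Probability that a new clip is synthetic.
print(detector.predict_proba(clip_features("suspect.wav").reshape(1, -1))[0, 1])
```

The same loop is what makes the spoofing risk real: once a detector like this is fixed and public, a generator can be trained against its scores until the fakes it produces slip through.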
A Russian serviceman with a drone. Photo: Valery Matytsin/TASS/Getty
About half of the world's militaries are now flying drones, according to a sweeping new study published this week, revealing the swift spread of a critical technology that used to be too expensive or sophisticated for most countries.
Why it matters: Increasingly robot-crowded skies mean that clashes involving drones — like the recent attack on a Saudi oil facility that the U.S. has blamed on Iran — are likely to become commonplace.
What's happening: From cheap, off-the-shelf quadcopters to enormous, missile-toting aircraft, flying drones are not only proliferating widely, but they're becoming steadily more integrated into militaries, according to the report from Dan Gettinger, co-founder of the Center for the Study of the Drone at Bard College.
Between the lines: The study's focus on training and R&D programs in addition to drone arsenals — all gleaned from public information — reveals some militaries' deeper preparations for drone warfare.
What to watch: Big R&D efforts are underway in several countries to develop drone swarms — groupings of drones that can be flown by one remote operator, or even autonomously.
An image of Osma.ai, an augmented reality art project in which a terrarium is watered based on how many likes its AI-generated selfies get on Instagram. Photo: Ina Fried/Axios
Preparing for an augmented reality future (Ina Fried - Axios)
Disinformation campaigns found in 70+ countries (Oxford Internet Institute)
Medical images left up online (Jack Gillum, Jeff Kao & Jeff Larson - ProPublica)
The gender gap in 6 charts (Gretchen Gavett & Matt Perry - HBR)
Revenge of the English major (David Deming - NYT)
Photo: Alexandre Schneider/Getty
A group of five "AI musicians" released an album yesterday.
The "performers" are programs that generate music based on patterns found in human tunes and reading material — "from Atwood to articles about teenage life and growth in metropolitan areas," according to a press release that itself may have been written by a robot.
What's happening: It's not very good.
But, but, but: The company that created the "musicians" got a $200,000 investment to make a new AI-generated album every month.
What's next: If synthetic media is the future, we're not quite there yet.