
Illustration: Aïda Amer/Axios

One of the oddest ways that an AI system can fail is by falling prey to an adversarial attack — a cleverly manipulated input that makes the system behave in an unexpected way.

Why it matters: Autonomous car experts worry that their cameras are susceptible to these tricks: It's been shown that a few plain stickers can make a stop sign look like a "Speed Limit 100" marker to a driverless vehicle. But other high-stakes fields — like medicine — are paying too little attention to this risk.

That's according to a powerhouse of researchers from Harvard and MIT, who published an article today in Science arguing that these attacks could blindside hospitals, pharma companies, and big insurers.

Details: Consider a photo of a mole on a patient's skin. Research has shown that it can be manipulated in a way that's invisible to the human eye, but still changes the result of an AI system's diagnosis from cancerous to non-cancerous.
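The kind of manipulation described above is typically produced with gradient-based methods such as the fast gradient sign method (FGSM). A minimal sketch below, using a toy logistic-regression "classifier" rather than any real diagnostic model — the weights, input, and perturbation budget are all illustrative assumptions, not details from the Science paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # toy "diagnostic model" weights
x = w * (0.5 / (w @ w))            # toy "mole image" the model scores as cancerous

p = sigmoid(w @ x)                 # original prediction: > 0.5 means "cancerous"

# FGSM: nudge every pixel by a fixed tiny amount in the direction that
# most increases the model's loss for the true label (y = 1, cancerous).
y = 1.0
grad = (p - y) * w                 # d(cross-entropy)/dx for logistic regression
epsilon = 0.05                     # per-pixel budget: an imperceptibly small change
x_adv = x + epsilon * np.sign(grad)

p_adv = sigmoid(w @ x_adv)         # prediction flips to "non-cancerous"
```

No pixel moves by more than `epsilon`, which is why the change can be invisible to a human while still flipping the model's output — the tiny per-pixel nudges all push the decision score in the same direction and add up.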

The big question: Why would anyone want to do this?

  • For Samuel Finlayson, an MD–PhD candidate at Harvard and MIT and the lead author of the new paper, it’s a question of incentives. If someone sending in data for analysis has a different goal than the owner of the system doing the analysis, there's a potential for funny business.
  • We're not talking about a malicious doctor manipulating cancer diagnoses — "There's way more effective ways to kill a person," Finlayson says — but rather an extension of existing dynamics into a near future where AI is involved in billing, diagnosis, and reading medical scans.

Doctors and hospitals already game the insurance billing system — these could be considered proto-adversarial attacks, Finlayson tells Axios.

  • They often bill for more expensive procedures than they actually performed in order to make more money, or avoid billing for procedures they know would land a huge bill in the patient's lap.
  • Insurance companies are already hiring tech firms to put a stop to the practice, often with AI tools. Finlayson sees a future where basic adversarial attacks are used to fool the AI systems into continuing to accept fraudulent claims.
  • Despite this possibility, hospitals and the pharma industry are flying blind, he says. "Adversarial attacks aren't even on the map for them."

But, but, but: These hypotheticals are a bit far-fetched for Matthew Lungren, associate director of the Stanford Center for Artificial Intelligence in Medicine and Imaging.

  • "There are a lot of easier ways to defraud the system, frankly," he tells Axios.
  • But there is an urgent need, Lungren says, to test medical AI systems more rigorously before they're released into the world. Protecting against adversarial attacks is one of the ways experts should shore up algorithms before using them on patients.

Go deeper: Scientists call for rules on evaluating predictive AI in medicine
