Oct 23, 2023 - Health

Study: Some AI chatbots provide racist health info

Illustration: Maura Losch/Axios (AI elements emerging from a doctor's suit where the head should be)

Some of the most high-profile artificial intelligence chatbots churned out responses that perpetuated false or debunked medical information about Black people, a new study found.

Why it matters: As AI takes off, chatbots are already being incorporated into medicine — with little to no oversight. These new tech tools, if fueled by false or inaccurate data, have the potential to worsen health disparities, experts have warned.

Details: This spring and summer, researchers led by doctors at Stanford University ran nine questions through four AI chatbots — including OpenAI's ChatGPT and Google's Bard — that are trained on large amounts of internet text.

  • All four models used debunked race-based information when asked about kidney function and lung capacity, the study published Friday in npj Digital Medicine found. Two of the models also repeated the false claim that Black people have different muscle mass.
  • To varying degrees, the models appeared to be using race-based equations for kidney and lung function, which the medical establishment increasingly recognizes could lead to misdiagnosis or delayed care for Black patients (one such equation is sketched below).
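For context, the best-known formula of this kind is the 2009 CKD-EPI creatinine equation, which multiplied a patient's estimated kidney function by 1.159 if they were recorded as Black; that coefficient was removed in the equation's 2021 revision. The sketch below is illustrative only and is not taken from the study; the function name and the example lab values are assumptions.

    # Illustrative sketch of the 2009 CKD-EPI creatinine equation, one of the
    # debunked race-based formulas of the sort the study describes. The function
    # name and example inputs are hypothetical, not drawn from the study.

    def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
        """Estimated GFR (mL/min/1.73 m^2) under the 2009 CKD-EPI equation."""
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        egfr = (141
                * min(scr_mg_dl / kappa, 1.0) ** alpha
                * max(scr_mg_dl / kappa, 1.0) ** -1.209
                * 0.993 ** age)
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159  # race-based multiplier, dropped in the 2021 revision
        return egfr

    # The same lab values produce a roughly 16% higher estimate for a Black
    # patient, which can delay referral for specialist care or a transplant.
    print(round(egfr_ckd_epi_2009(1.4, 60, female=False, black=False)))  # ~54
    print(round(egfr_ckd_epi_2009(1.4, 60, female=False, black=True)))   # ~63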

What they're saying: "There are very real-world consequences to getting this wrong that can impact health disparities," Stanford University assistant professor Roxana Daneshjou, a faculty adviser for the paper, told the Associated Press. "We are trying to have those tropes removed from medicine, so the regurgitation of that is deeply concerning."

  • Google and OpenAI told the AP they're working to reduce bias in their tools.

Flashback: The World Health Organization in May called for ethical oversight of AI chatbots in medicine and warned that data used to train the technologies may be biased.
