AI in health care spurs privacy and data concerns
The adoption of AI across health care is hastening calls from some House leaders for a national data privacy standard that would help protect personal health information and more closely police online marketing.
Driving the news: House Energy and Commerce Chair Cathy McMorris Rodgers used a Wednesday hearing on AI in health settings to promote privacy legislation that she and ranking member Frank Pallone advanced through the committee last year.
- McMorris Rodgers and health subcommittee chair Brett Guthrie also expressed concern about quality-adjusted life years (QALYs) being factored into data sets and potentially excluding people from care.
- QALYs measure how many extra years of life a drug adds, weighted by the patient's quality of life during those years, to help determine whether the drug is worth its price.
- "If AI is reliant on QALYs or other similar measures when assisting in clinical decision-making, our most vulnerable will be left behind," McMorris Rodgers said.
- "No one here wants to advocate for discrimination. We need to be conscious of how federal programs and AI technologies incorporate these types of biases."
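The QALY weighting described above can be sketched in a few lines. This is an illustrative calculation using the standard definition (extra years multiplied by a 0-to-1 quality-of-life weight); the figures are hypothetical, not from the hearing.

```python
def qalys(extra_years: float, quality_weight: float) -> float:
    """QALYs for a treatment: extra life-years weighted by quality of life.

    quality_weight runs from 0 (worst imaginable health) to 1 (full health).
    """
    if not 0.0 <= quality_weight <= 1.0:
        raise ValueError("quality weight must be between 0 and 1")
    return extra_years * quality_weight

# A drug adding 4 years at a 0.5 quality weight scores the same as one
# adding 2 years in full health -- the weighting lawmakers worry could
# disadvantage patients whose baseline quality of life scores lower.
print(qalys(4, 0.5))  # 2.0
print(qalys(2, 1.0))  # 2.0
```

Because two very different outcomes can produce the same score, critics argue that decision systems leaning on QALYs may systematically undervalue treatments for chronically ill or disabled patients.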
Experts at the hearing said the success of AI systems rides partly on the quality of the data fed into them, but they were still wary of endorsing specific regulations on AI in health care.
- "The clinical data which are in electronic notes actually have lots of errors and problems in them," said David Newman-Toker, a neurologist at Johns Hopkins.
- "To some extent that's a key focal point where we should be making sure that we are not over-relying on faulty data sources as we try to move forward."