Safety-first AI comes to health care
As companies rush AI-powered products and services to market, some non-profits and safety-focused companies are pushing to slow them down in higher-risk use cases such as health care.
Why it matters: Since faulty AI can be a matter of life and death in health settings, health care providers need support in building, buying and using the technology safely.
- Bias along gender, racial and economic lines can be amplified by poorly designed AI algorithms.
- The market for AI in health care is projected to grow roughly sevenfold to around $102 billion by 2028, by one forecast.
- AI efficacy is hard to measure, opening the way for scams, particularly in the adjacent wellness industry.
What's happening: An explosion of generative AI-powered services prompted the World Health Organization to warn in May about the need to demonstrate evidence-based benefits before services are offered to patients and consumers.
- In Taylor v. Intuitive Surgical, the Washington state Supreme Court found that manufacturers of dangerous medical products — in this case an AI-powered robotic surgical device — have a duty to warn hospitals about dangers, such as which patients could be poor candidates for surgery.
- Virtual therapists and AI companions are promoted as ways to combat loneliness but may come with long-term risks.
Brain wave technology — such as The Crown, a $2,000 wellness wearable — claims that measuring your gamma brain waves while playing music refined over time by AI can increase your concentration "by over 25%."
- But if you connect it to ChatGPT, your brain data is also helping to train its AI model.
Driving the news: A new transatlantic Responsible AI in Healthcare consortium, organized by the Austin-based Responsible AI Institute, launched on Wednesday at Cambridge University with the aim of helping hospitals and other health providers use AI more safely.
- The consortium is forming as companies such as Dandelion Health and System scale up, aiming to help health care providers weed out biased data and unsafe products.
- Physician-founded Hippocratic AI launched in May with $50 million in seed funding for its large language model honed for health care uses.
The details: The Responsible AI in Healthcare consortium is backed by Harvard Business School and the U.K. National Health Service — the world's largest government health agency, with over 1.2 million staff.
- The consortium aims to influence policymakers and investors, in addition to health care providers.
- Its first product will be a Responsible Generative AI Safety Index that scores AI systems, with the goal of providing a rating as clear and easy to read as a credit score.
- Consortium members will engage in collective learning and "actively experiment with and refine responsible generative technologies in a real-world health care context," per a consortium statement.
What they're saying: "The responsible use of AI in the health care industry has immense potential," said Hatim Abdulhussein, national clinical lead for AI at NHS England, who wants to see guardrails developed by cross-organization work.
- "Health care organizations need to be able to innovate with generative AI without putting their organization or customers at risk," Seth Dobrin, CEO of Trustwise, a founding member of the consortium, told Axios.