White House's AI "Bill of Rights" enters crowded field
The White House issued a call for artificial intelligence systems to be developed with built-in protections Tuesday, even as the tech industry barrels forward in an AI free-for-all.
Why it matters: Automated systems can influence or even determine important aspects of Americans' lives, including healthcare, employment, housing and education. In the U.S., government regulations covering the new technology remain minimal or nonexistent.
Driving the news: The Blueprint for an AI Bill of Rights, released Tuesday by the Office of Science and Technology Policy, describes five principles that should be incorporated into AI systems to ensure their safety and transparency, limit the impact of algorithmic discrimination, and give users control over data.
The report details real-world consequences of failures to put such principles into practice.
- A model meant to predict the likelihood of sepsis in hospitalized patients underperformed and caused "alert fatigue" with false warnings.
- A hiring tool that "learned" the company's employees were predominantly men downgraded women applicants, penalizing resumes that included language like "women's chess club captain" when ranking candidates.
Our thought bubble: The White House is late to this party. Many others have already weighed in with recommendations on AI best practices, including other governments, human rights groups and tech companies.
- IBM produced a set of principles in 2017, calling for, among other things, AI that can explain itself. "Companies must be able to explain what went into their algorithm’s recommendations. If they can’t, then their systems shouldn’t be on the market," IBM said at the time.
- The EU released its list of guidelines back in 2019.
- Even the Vatican has published on the topic, issuing in 2020 what it dubbed an "algor-ethical" framework saying that AI systems need to be designed to protect "the rights and the freedom of individuals so they are not discriminated against by algorithms."
Flashback: In 2020, the Trump administration outlined 10 regulatory principles for agencies writing rules for the technology, warning against over-regulating the systems.
What they're saying: In a call with reporters, senior administration officials described the principles as part of President Biden's commitment to tech accountability.
- "These technologies are causing real harms in the lives of Americans," a senior administration official said. "Harms that run counter to our core democratic values, including the fundamental right to privacy, freedom from discrimination and our basic dignity."
Yes, but: The principles don't carry the force of law.
- The White House wants technologists to integrate the safeguards into their products and incorporate them into new designs.
- Senior administration officials expect enforcement to happen sector by sector, with, for example, the Department of Health and Human Services focusing on AI in healthcare or the Department of Housing and Urban Development investigating algorithmic discrimination in housing prices.
The big picture: The tech industry is divided between some companies that say they are seeking to develop AI responsibly and others that believe in advancing the technology as quickly as possible regardless of potential problems.
- In this dynamic, the fast deployers effectively foreclose the possibility that voluntary guardrails will work.
Details: The AI Bill of Rights blueprint includes these five points:
- Automated systems should be safe and effective. Designers should test the systems before they are deployed, consult with diverse communities during development and continually monitor the systems to ensure they are safe and effective.
- Users should not experience algorithmic discrimination. AI systems should both be designed to protect individuals and communities from discrimination, and continually monitored and assessed to ensure discrimination does not occur.
- Users should be protected from abusive data practices and have control over how their data is used. Automated systems should keep data private by design and default.
- Users should know when an automated system is being used and understand how it affects them. The person or organization responsible for an automated system should be identified, along with a plain-language description of the role automation plays.
- Users should be able to opt out and talk to a human where possible. Users should be able to contact a real person if an automated system fails or produces an error, or when they want to challenge a decision.
What's next: The White House says it will lead by example — with a dozen federal agencies announcing actions related to automated systems, including the development of a new federal policy for the procurement and use of AI systems.