Inside today’s AI-human rights hearing
Lawmakers want to educate themselves on artificial intelligence before trying to regulate it — but according to some experts, viable policy solutions are within reach now.
Driving the news: A Senate Judiciary Committee panel Tuesday holds a hearing on the impact of artificial intelligence on human rights.
- While AI is rapidly evolving, there are known impacts of automation on privacy, peaceful assembly, freedom of expression and equal treatment before the law.
Surveillance: Law enforcement's use of facial recognition has been flagged by human rights advocates as ripe for abuse, as seen in Iran and China. In the U.S., Black Lives Matter protesters have been identified using the technology.
- Beyond surveillance concerns, misidentification can lead to people being wrongfully accused of crimes — and because of algorithmic bias in facial recognition technology, people of color and women are more likely to be misidentified.
- While Congress lags, more than a dozen states have enacted limits on facial recognition over the past five years.
- At the hearing, Center for Democracy & Technology CEO Alexandra Reeve Givens will outline federal policy solutions, including requiring law enforcement to obtain a warrant before using facial recognition, limiting the technology's use to serious offenses and imposing software accuracy standards.
Misinformation: Generative AI can take online misinformation to a whole new level by making it faster and cheaper to produce convincing but misleading text, images and video.
- Human rights advocates warn that such content can undermine democratic systems.
- Reeve Givens will point to bills introduced in the last Congress, including the Algorithmic Accountability Act and the American Data Privacy and Protection Act's algorithmic assessment language, as a good place to start.
- MIT Professor Aleksander Mądry will testify about how AI enables highly personalized influence campaigns targeted at individuals.
- "The hook to get you will not be some post that came across your social media," Mądry said in an email. "But, rather, it may be a Facebook ‘friend’ that is actually an AI-driven agent impersonating a human — a friend that subtly mixes political commentary or product endorsements into your engaging conversations."
China: The global risks of China's AI ambitions and U.S. companies' role in human rights abuses will be another feature of the hearing.
- Panel ranking member Marsha Blackburn will warn in her opening remarks against regulating AI out of existence and guaranteeing China becomes the leader in the technology.
- "But we do need to think carefully about how we deploy AI technologies in the absence of a national privacy law, as well as how we identify and stop unauthorized uses of AI, whether to surveil or to scam unsuspecting people," she'll add, noting China's use of Apple's iPhones to track Uyghur Muslims.
- Foundation for American Innovation senior fellow Geoffrey Cain will say the U.S. should leverage UN bodies to build democratic AI principles, pointing to a 2021 global agreement on AI ethics reached by 193 countries under UNESCO calling for a "do no harm" principle, personal data protection, and measures for fairness and nondiscrimination.
- Cain will also contend U.S. companies that "help build China's oppressive AI system" should be held accountable: "So far, American technology giants have faced no punishment for their involvement in China’s surveillance state."
- He will suggest the subcommittee "consider drafting a bill that requires public corporations to publish their due diligence reports on their activities in China and the risks they have encountered with regard to human rights there."
- Further, Chinese software companies should be compelled to separate their American businesses, and global supply chains for advanced AI logic chips should be secured, Cain will say.
Of note: On Tuesday, CDT and the Leadership Conference on Civil and Human Rights will send a letter signed by 40-plus civil society organizations to the White House, urging the administration to make the AI Bill of Rights binding policy, ensure coordinated follow-through by federal agencies, and launch sustained public engagement to combat algorithmic discrimination.