March 19, 2024
Good afternoon ... We're back with the latest installment in our series AI Meets Equity.
🔮 Ashley will interview House GOP Conference Vice Chair Blake Moore at Axios' What's Next Summit this afternoon. Tune in to livestream the event here.
💼 Situational awareness: The Senate Intel and Commerce committees will receive a classified briefing on TikTok from intelligence officials tomorrow, sources confirmed to us.
1 big thing: Concerns mount over AI-powered surveillance tech
Illustration: Gabriella Turrisi/Axios
Civil rights advocates are sounding the alarm over what they call the Biden administration's flimsy language regulating facial recognition technology, Maria reports.
Why it matters: The government is increasingly turning to AI-powered facial recognition technology for law and immigration enforcement.
- But the technology is prone to error and can misidentify individuals, especially those in communities of color, who are disproportionately racially profiled and surveilled.
What we're watching: The Office of Management and Budget has until the end of March to issue guidance to federal agencies on how to use AI and manage its risks.
- Draft guidance states that agency chief AI officers can waive civil rights and privacy risk assessments when those assessments would impede critical operations, such as those concerning law enforcement and national security.
- Agencies are required to report to OMB the scope, justifications and supporting evidence of granting a waiver within 30 days.
"We think that's a huge mistake," said UnidosUS senior director Laura MacCleery, referring to the exemptions.
- "Those are the places where uses of technology could be most undemocratic."
- "They need to embrace rather than run away from the applicability of basic rights across every domain, but particularly around policing and immigration enforcement."
- Privacy advocates say the waivers should be removed so that companies have an incentive to design the technology to protect people's personal data.
The waivers section in its current form "basically allows agencies to avoid the safeguards set out in the OMB guidance at their own discretion," said Brennan Center senior director Faiza Patel.
- "We think that is contrary to what the White House has been trying to do here and what civil society has been calling for, which is greater transparency and safeguards."
The other side: DHS science and technology undersecretary Dimitri Kusnezov told Axios that privacy and civil rights are at the core of DHS work and that the OMB guidance won't "get us to relax."
- Asked whether he's worried about a future administration's approach to the waivers, Kusnezov said: "I have no idea how to predict anything after the end of this year."
Catch up quick: DHS and DOJ are working to improve their use of facial recognition technology, following a GAO report showing the agencies were using it without adequate staff training or specific civil rights policies.
The big picture: Regulators around the world are grappling with how to balance protecting civil rights and upholding national security when using surveillance technology.
- "The guardrails are stuff that you can easily finagle from the start, and then it's basically an 'ask forgiveness, not permission' kind of deal," New America's Open Technology Institute senior policy analyst David Morar said of Europe's approach with the AI Act.
You can read the previous stories in our AI Meets Equity series here and here.
2. Exclusive: Senators unveil AI data consent bill
Illustration: Eniola Odetunde/Axios
Sens. Peter Welch and Ben Ray Luján today will introduce a bill that would require companies to get people's consent before using their data to train AI models, Ashley and Maria report.
Why it matters: Americans' personal data is fueling the explosion of both generative and predictive AI in a country where there are no federal privacy protections in place.
The AI CONSENT Act, shared first with Ashley, would require online platforms to obtain a person's express, informed consent before using personally identifiable data to train an AI system.
- Companies would have to explain how a person's data will be used and make clear that it's within their right to withhold consent.
- The FTC would enforce the law, taking into consideration the impact of "consumer fatigue" from too much exposure to the disclosures.
"We cannot allow the public to be caught in the crossfire of a data arms race, which is why these privacy protections are so crucial," Welch said in an email.
- Luján: The use of personal data by online platforms already poses "great risks to our communities, and artificial intelligence increases the potential for misuse."
✅ Thank you for reading Axios Pro Policy, and thanks to editors Mackenzie Weinger and David Nather and copy editor Brad Bonhall.
- Do you know someone who needs this newsletter? Have them sign up here.

