AI is the future of discrimination — and fairness
- Ina Fried, author of Axios Login

Illustration: Caresse Haaser, Rebecca Zisser / Axios
Nonprofits that work to fight discrimination are increasingly turning their attention to AI amid rising concern over algorithmic bias.
Why it matters: With human bias, each generation brings an opportunity to break through stereotypes. With algorithms, bias can reinforce itself and grow harder to detect over time, so it is critical to address these issues while the technologies are still in their infancy.
The latest: Several nonprofits are among the newest members of the Partnership on AI, a group established to address the ethical and other challenges presented by artificial intelligence. The effort began with big companies like Apple, Google, Facebook and Microsoft, but its roster now includes a growing number of academic institutions and nonprofit groups alongside some of the biggest names in tech.
LGBT rights organization GLAAD is among the new members, as is the Joint Center for Political and Economic Studies, which was set up in 1970 as a think tank serving black elected officials.
Joint Center president Spencer Overton said his organization is especially concerned with how AI could disproportionately affect employment in communities of color. Overton said that 27% of black workers are concentrated in just 30 jobs at high risk of automation.
"These are your cashiers, your retail sales people, your truck drivers, your security guards your fast food workers. We think the existing debate is somewhat limited because it focuses on whether there will be more jobs or fewer jobs. It is very possible that we could see economic growth but also see displacement of particular communities such as African Americans and Latinos. This is not just about income or education. There are unique factors such as implicit bias and residential segregation that make race relevant in labor market transitions."— Spencer Overton
The ACLU was among the first organizations to join the partnership, recognizing the potential civil rights and civil liberties issues raised by machine learning.
"In some cities, police are using artificial intelligence to predict where crimes might occur and to deploy officers and surveillance technologies accordingly. Courts in many states are using algorithms to set lengths of incarceration. Disfavored communities and people of color who historically have been targeted for government scrutiny too often bear the brunt of dangers posed by these new technologies.
Worse, the data and algorithms used to make fateful decisions about people’s lives often are hidden from public oversight, making it difficult to test for bias and needed correction."— Carol Rose, executive director, ACLU of Massachusetts and Partnership on AI board member
Other new members include Deutsche Telekom, PayPal, MIT's Media Lab and the Wikimedia Foundation.
The partnership aims to spark discussion and collect best practices in areas ranging from fair and transparent AI to AI's impact on the economy to its use in safety-critical systems.
What's next: Despite its high-profile launch and growing membership roster, the organization is still in its infancy, with just five full-time staffers working out of a largely empty new office in San Francisco.
The organization has been working to build a more global membership. It currently has members from Europe, India, Japan and North America, but none yet from China.