Photo: Noah Berger / AP
It is concerning enough that Facebook's ad system was letting buyers target people with self-described categories like "Jew hater" until ProPublica brought the issue to the social network's attention. (Facebook has now temporarily disabled the self-reported education and employer targeting fields, where most of the offending categories appeared.)
- The big question: What happens when Facebook's and Google's algorithms are good enough to know that an advertiser wants to reach bigots without the advertiser ever typing "Jew hater"?
- Why this is a concern: We only know about the current problem because the targeting system is still manual enough that buyers select audiences by keyword. Once the system can infer the kind of person an advertiser is trying to reach, there will be no obviously offensive keyword to catch.
It's issues like these that the tech industry needs to confront now, at the dawn of the machine-learning era, before our biases become codified.