It is concerning enough that Facebook's ad system was letting buyers target people with self-described categories like "Jew hater" until ProPublica brought the issue to the social network's attention. (Facebook has now temporarily disabled the self-reported education and employer targeting fields, which is where most of the offenses occurred.)
- The big question: What happens when Facebook's and Google's algorithms are good enough to know that an advertiser wants to target bigots without the advertiser ever needing to type in "Jew hater"?
- Why this is a concern: We know about the current problem only because Facebook's system is still manual enough that buyers must target by keyword. But a smarter system could infer the kind of person an advertiser is trying to reach without anyone typing in such obviously offensive terms, making the bias far harder to detect.
It's issues like these that the tech industry needs to confront now, at the dawn of the machine-learning era, before our biases become codified into the systems themselves.