Nov 2, 2018 - Technology

"Extreme vetting" for hiring

A job interview in 1935. Photo: Ullstein Bild via Getty Images

Once, a misguided tweet or racist Facebook post from years ago might have escaped a hiring manager's notice. But now, artificial intelligence is leaving no tweet unread in the search for job candidates' bad online behavior.

Why it matters: Companies are placing applicants under a high-powered microscope as they seek to avoid hiring employees who might create a toxic environment or harm the firm's image. But the practice is also narrowing opportunities for job seekers.

In February, The New York Times fired writer Quinn Norton within hours of announcing her hiring, after several old tweets surfaced in which she had used or amplified racist and homophobic slurs.

  • The Times said it had not previously seen the tweets — several needles in a haystack of Norton’s nearly 90,000 messages.
  • This is the sort of public fiasco companies would rather avoid. But while HR can’t sift through a candidate’s entire Twitter timeline, a machine can.

Enter the algorithms. Several companies offer to set the pattern-matching power of AI on the social feeds of job hopefuls to uncover posts that are racist, sexist, violent, or otherwise objectionable.

  • Fama contracts with about 100 companies, each with more than 1,000 employees. For each online check, it returns a report with links to offending posts.
  • Last month, Predictim began offering a similar service to parents seeking babysitters. The company goes further than Fama, asking candidates for permission to access their private posts and comments in addition to public ones. The resulting report — a sample is available here — assigns risk scores ranging from 1 to 5, overall and for categories that include drug use, bullying, and "bad attitude."
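
Neither vendor publishes its models, but the underlying technique (train a text classifier on posts labeled objectionable or benign, then score every post in a candidate's timeline) can be sketched in a few lines. The toy example below uses a generic TF-IDF and logistic-regression pipeline with invented posts, labels, and threshold; it illustrates the general approach, not either company's actual system.

```python
# Toy sketch of a social-media screening classifier. The training posts,
# labels, and threshold are invented; this is NOT either vendor's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = objectionable, 0 = benign.
train_posts = [
    "people like that should all just disappear",
    "great meetup with the data science group tonight",
    "everyone from that country is lazy and worthless",
    "excited to start my new job on monday",
]
train_labels = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: crude pattern matching on word use.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_posts, train_labels)

# Score every post in a candidate's public timeline and flag the risky ones.
candidate_posts = [
    "had a wonderful weekend hiking with friends",
    "those people are lazy and worthless",
]
scores = model.predict_proba(candidate_posts)[:, 1]
for post, score in zip(candidate_posts, scores):
    label = "FLAG" if score > 0.5 else "ok"
    print(f"{label} ({score:.2f}): {post}")
```

A real report would link each flagged post back to its source, as Fama's does. The hard part, as the ACLU's Jay Stanley notes below, is that this kind of word-level pattern matching has no sense of context, so sarcasm or quoted speech can score as badly as the real thing.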

Privacy advocates worry that such systems can make basic mistakes with lasting effects.

"The automated processing of human speech, including social media, is extremely unreliable even with the most advanced AI. Computers just don’t get context. I hate to think of people being unfairly rejected from jobs because some computer decides they have a 'bad attitude,' or some other red flag."
— Jay Stanley, senior policy analyst at the American Civil Liberties Union

AI-powered hiring systems can be extremely susceptible to bias.

  • If a system is trained on historical hiring data in which male candidates were hired more often than female ones, it can learn to favor men, as happened with an AI recruiting program Amazon tested.
  • To avoid this trap, Fama and Predictim withheld sensitive information like gender and race from the training data, so their AI systems evaluate only the contents of social media posts.
  • Both companies’ CEOs told Axios they work to minimize bias by carefully choosing training data, using a diverse group of people to label it, and regularly testing outputs for fairness.
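
Neither company has published its training pipeline, but the two safeguards described in the bullets above can be illustrated roughly: strip sensitive fields before the model ever sees the data, then audit flag rates across groups afterward. The record format, field names, and numbers below are invented for the sake of the sketch.

```python
# Rough illustration of the safeguards described above; the record format
# and field names are invented, not either vendor's actual data schema.
from collections import defaultdict

records = [
    {"post": "that whole group is worthless", "gender": "m", "label": 1},
    {"post": "volunteering at the shelter again this weekend", "gender": "f", "label": 0},
]

# Safeguard 1: drop sensitive attributes so the classifier only sees text.
texts = [r["post"] for r in records]
labels = [r["label"] for r in records]
# (a classifier would be trained on texts/labels only, as in the earlier sketch)

# Safeguard 2: audit the outputs by comparing flag rates across groups the
# model never saw, to catch bias that leaks in through word choice or topics.
def flag_rate_by_group(records, predictions, group_key="gender"):
    totals, flags = defaultdict(int), defaultdict(int)
    for rec, pred in zip(records, predictions):
        totals[rec[group_key]] += 1
        flags[rec[group_key]] += pred
    return {group: flags[group] / totals[group] for group in totals}

# Dummy predictions stand in for a trained model's output on these posts.
print(flag_rate_by_group(records, [1, 0]))  # {'m': 1.0, 'f': 0.0}
```

A large gap between groups would be a signal to re-examine the labels or rebalance the training data; the regular fairness testing both CEOs describe is, in essence, a more rigorous version of this kind of check.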

Users of Predictim, the month-old babysitter-checking service, have already run about 300 scans, said CEO Sal Parsa. Of those, 10% were flagged as moderately risky or higher, and 2.7% as very risky.
