A job interview in 1935. Photo: Ullstein Bild via Getty Images
Once, a misguided tweet or racist Facebook post from years ago might have escaped a hiring manager's notice. But now, artificial intelligence is leaving no tweet unread in the search for job candidates' bad online behavior.
Why it matters: Companies are placing applicants under a high-powered microscope as they seek to avoid hiring employees who might create a toxic environment or harm the firm's image. But the same scrutiny is also narrowing applicants' opportunities.
In February, the NYT fired writer Quinn Norton within hours of announcing her hiring. In that time, several old tweets surfaced in which she had used or amplified racist and homophobic slurs.
- The Times said it had not previously seen the tweets — several needles in a haystack of Norton’s nearly 90,000 messages.
- This is the sort of public fiasco companies would rather avoid. But while HR can’t sift through a candidate’s entire Twitter timeline, a machine can.
Enter the algorithms. Several companies offer to set the pattern-matching power of AI on the social feeds of job hopefuls to uncover posts that are racist, sexist, violent, or otherwise objectionable.
- Fama contracts with about 100 companies, each with more than 1,000 employees. For each online check, it returns a report with links to offending posts.
- Last month, Predictim began offering a similar service to parents seeking babysitters. The company goes further than Fama, asking candidates for permission to access their private posts and comments in addition to public ones. The resulting report — a sample is available here — assigns risk scores ranging from 1 to 5, overall and for categories that include drug use, bullying, and "bad attitude." (A toy sketch of this style of scoring appears below.)
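Neither company has published its model internals, so as a toy illustration only, a post-flagging pipeline of this general kind might look like the following sketch. The training posts, labels, and the mapping to a 1–5 scale are all invented for this example:

```python
# Toy sketch: a tiny text classifier that flags posts and buckets the
# worst flag probability into a 1-5 risk score. All training data and
# the scoring rule are hypothetical, not Fama's or Predictim's models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_posts = [
    "everyone on that team is an idiot and deserves the worst",  # flagged
    "had a great weekend hiking with friends",                   # benign
    "going to get wasted again tonight",                         # flagged
    "excited to start my volunteering gig tomorrow",             # benign
]
train_labels = [1, 0, 1, 0]  # 1 = objectionable, 0 = benign

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(
    vectorizer.fit_transform(train_posts), train_labels
)

def risk_score(posts):
    """Score a feed from 1 (low) to 5 (high) based on the single
    most objectionable-looking post."""
    probs = model.predict_proba(vectorizer.transform(posts))[:, 1]
    return min(5, 1 + int(probs.max() * 5))

print(risk_score(["lovely day at the park", "I hate all of you idiots"]))
```

In practice, a real service would train on far larger labeled corpora and score each category (drug use, bullying, and so on) separately, but the basic shape — per-post probabilities rolled up into a coarse risk band — is the same.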
Privacy advocates worry that such systems can make basic mistakes with lasting effects.
"The automated processing of human speech, including social media, is extremely unreliable even with the most advanced AI. Computers just don’t get context. I hate to think of people being unfairly rejected from jobs because some computer decides they have a 'bad attitude,' or some other red flag."— Jay Stanley, senior policy analyst at the American Civil Liberties Union
AI-powered hiring systems can be extremely susceptible to bias.
- If a system is fed hiring data and learns that male candidates are hired more often than female ones, it can start to favor men, as happened with an experimental recruiting tool Amazon built and later scrapped.
- To avoid this trap, Fama and Predictim withheld sensitive information like gender and race from the training data, so their AI systems evaluate only the contents of social media posts.
- Both companies' CEOs told Axios they work to minimize bias by carefully choosing training data, using a diverse group of people to label it, and regularly testing outputs for fairness. (A rough sketch of that kind of audit follows below.)
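Testing outputs for fairness typically means comparing flag rates across groups the model itself never sees. As a rough illustration of what such an audit could look like — the data, groups, and tolerance here are invented, not either company's methodology:

```python
import pandas as pd

# Hypothetical audit data: the screening model scores only post text;
# the gender column is held out and used solely to check its outputs.
results = pd.DataFrame({
    "gender":  ["F", "F", "F", "M", "M", "M"],
    "flagged": [1,   1,   0,   0,   1,   0],
})

# Compare the share of flagged candidates in each group.
flag_rates = results.groupby("gender")["flagged"].mean()
gap = flag_rates.max() - flag_rates.min()
print(flag_rates)

# An invented tolerance: a large gap suggests the text features are
# acting as a proxy for the withheld attribute, so the training data
# and labels need another look.
if gap > 0.10:
    print(f"flag-rate gap of {gap:.2f} exceeds tolerance; review labels")
```

The key point the CEOs are gesturing at: withholding a sensitive column doesn't by itself guarantee fairness, because text can proxy for it, which is why the after-the-fact output testing matters.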
Users of Predictim, the month-old babysitter-checking service, have already run about 300 scans, said CEO Sal Parsa. Of those, 10% were flagged as moderately risky or higher, and 2.7% as very risky.