While GPT-3 has earned ecstatic reviews from many experts for its capabilities, some critics have pointed out clear issues around bias.
Why it matters: As AI becomes more powerful and more integrated into daily life, it becomes even more important to root out the persistent problem of bias and fairness.
What's happening: Researchers at OpenAI noted in the paper introducing GPT-3 that "internet-trained models have internet-scale biases." A model trained on the internet like GPT-3 will share the biases of the internet, including stereotypes around gender, race and religion.
- As the table above from the paper shows, females were more often described with appearance-associated adjectives, while males were more often described with adjectives that spanned a wider spectrum of descriptions.
- The paper also found that GPT-3 associated different races with different degrees of sentiment, with Black ranking consistently low.
In a Twitter thread, Facebook AI head Jerome Pesenti raised concerns that GPT-3 can "easily output toxic language that propagates harmful biases."
- OpenAI CEO Sam Altman responded that he shared those concerns, and he argued that part of the reason the nonprofit was starting off GPT-3 in a closed beta was to do safety reviews before it went fully live.
- He noted that OpenAI had introduced a new toxicity filter that was on by default.
- The original paper also found that GPT-3 seemed less prone to bias than earlier, smaller models, offering some preliminary hope that size could help minimize the problem.
What to watch: A system that can generate near-human quality writing could be used for misinformation, phishing and other hacking efforts. And while malicious humans already do all of those things, GPT-3 and future AI systems could effectively scale those efforts up.
The bottom line: If AI produces racist or sexist content, it's because the system learned it by watching us. That puts the onus on programmers to curb their creations.
Go deeper: Rooting out AI bias