Groups spotlight machine learning's role in misinformation
In a memo released Tuesday, Firefox maker Mozilla and a group of advocacy organizations highlight ways Big Tech platforms' use of machine learning allows misinformation to flourish.
Why it matters: Big Tech companies rely on AI and machine learning to decide which content to promote and to flag problematic posts.
The systems operate with little transparency or accountability, according to the memo, written by New America's Open Technology Institute, the Anti-Defamation League, Avaaz, Decode Democracy and Mozilla.
While automated systems can increase efficiency and help limit human moderators' exposure to harmful content, they open the door to other problems.
- "Decreased human oversight increases the risk of errors from automated systems, which can result in the amplification of hate, extremism, systemic biases, discrimination, and misleading information," the groups wrote.
The big picture: While focused on misinformation, the memo highlights a range of other issues tied to over-reliance on machine learning, including content policies that allow politicians to lie and advertising algorithms that select viewers in ways that can lead to discrimination in housing and employment.