Tech platforms have built the heart of their businesses around secretive computer algorithms, and lawmakers and regulators now want to know just what's inside those black boxes, Axios' Ashley Gold and I report.
Why it matters: Algorithms, formulas for computer-based decision-making, are responsible for what we get shown on Facebook, Twitter and YouTube — and, increasingly, for choices companies make about who gets a loan or parole or a spot at a college.
How it works: When posts "go viral," algorithms are usually why. Often, they work by detecting small blips in user interest and amplifying them.
- Algorithms' complexity and obscurity have helped tech firms make the case that they are neutral platforms. They also allow companies to duck responsibility for decisions about promoting and demoting content.
- But, at their core, algorithms are a set of priorities decided by humans.
- Users and critics, increasingly aware of the power of these systems, now want to hold companies more responsible for the outcomes their code produces.
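The "detect small blips and amplify them" idea can be sketched in a few lines of code. This is purely illustrative — no platform publishes its actual ranking formula, and the field names and scoring here are assumptions — but it shows the basic mechanic: posts whose recent engagement spikes relative to their own baseline get pushed to the top of the feed.

```python
# Toy illustration of engagement-based amplification (NOT any real
# platform's formula): a post whose recent engagement rate jumps
# relative to its usual baseline gets a boosted ranking score.

def amplification_score(recent_engagements, baseline_rate, hours=1.0):
    """Score a post by how sharply recent engagement exceeds its baseline."""
    recent_rate = recent_engagements / hours
    if baseline_rate <= 0:
        return recent_rate  # brand-new post: rank on raw recent rate
    # A ratio above 1 is a "blip": interest is growing faster than usual.
    return recent_rate / baseline_rate

def rank_feed(posts):
    """Order posts so spiking ('viral') content is shown first."""
    return sorted(
        posts,
        key=lambda p: amplification_score(p["recent"], p["baseline"]),
        reverse=True,
    )

feed = rank_feed([
    {"id": "a", "recent": 50, "baseline": 40},  # big but steady
    {"id": "b", "recent": 30, "baseline": 2},   # small but spiking
    {"id": "c", "recent": 5,  "baseline": 10},  # fading
])
print([p["id"] for p in feed])  # the spiking post "b" ranks first
```

Note that in this sketch the human-chosen priorities are explicit: what counts as "engagement," how far back the baseline looks, and whether to sort by the ratio at all are all editorial decisions baked into the code.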
Driving the news: At a hearing on "Algorithms and Amplification," executives from YouTube, Twitter and Facebook, along with Harvard researcher Joan Donovan and ethicist Tristan Harris, will testify Tuesday before the Senate Judiciary Committee's privacy, technology and law subcommittee.
Our thought bubble: The conversation in policy circles has long concentrated on the outer limits of content decisions — decisions about what gets removed and who gets banned. Those are what software people call "edge cases." What gets recommended, and why, is the center of the issue.
Between the lines: Platforms have long used their algorithms to boost business metrics, such as the amount of time spent on their site. Increasingly, though, they are also acknowledging and tapping the power of algorithms to limit the spread of misinformation or hate speech that doesn't merit an outright ban.
- Companies were reluctant to use their algorithms in these ways lest they be seen as putting their thumbs on the scale. But their inaction allowed problems to grow unchecked — including election interference, proliferation of conspiracy theories, vaccine hesitancy and COVID-19 misinformation.
What to watch: Democratic aides told Axios they see today's hearing as a chance to reset the conversation about algorithms and their role in public discourse, a topic that has often been politicized and devolved into partisan squabbling.
- Aides are looking forward to homing in on YouTube's recommendation algorithm, which serves up suggested videos for users based on their history. One question that may come up is how often YouTube users are recommended content that is later found to be in violation of YouTube's policies, an aide said.
Tech firms see their algorithms as a kind of trade secret and are reluctant to expose their inner workings, both to keep them from competitors and to make it harder for users to game their systems.
For their part, Facebook, Twitter and Google are expected to focus on the steps they are already taking, from offering the option of purely chronological feeds to better explaining how their systems work to allowing people more ways to signal the type of content they want to see.
- They also insist that showing harmful content isn't in their long-term business interests, either.