Apr 27, 2021 - Technology

Congress drags algorithms out of the shadows

[Illustration: the Capitol Dome surrounded by algorithmic pathways that read things like "input" and "error." Credit: Sarah Grillo/Axios]

Tech platforms have built the heart of their businesses around secretive computer algorithms, and lawmakers and regulators now want to know just what's inside those black boxes.

Why it matters: Algorithms, formulas for computer-based decision making, are responsible for what we get shown on Facebook, Twitter and YouTube — and, increasingly, for choices companies make about who gets a loan or parole or a spot at a college.

How it works: When posts "go viral," algorithms are why. Often, they work by detecting small blips in user interest and amplifying them.
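In code, that amplification loop might look something like the minimal Python sketch below. The `Post` type, its fields and the thresholds are all illustrative assumptions, not any platform's real system; production rankers use far richer signals, but spike-then-boost is the core idea.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float         # relevance score from other ranking signals
    hourly_engagements: list  # likes/shares/replies counted per hour

def amplification_boost(post: Post, window: int = 3,
                        spike_threshold: float = 2.0) -> float:
    """Boost posts whose recent engagement outpaces their own baseline."""
    history = post.hourly_engagements
    if len(history) <= window:
        return post.base_score  # too little history to detect a blip

    recent = sum(history[-window:]) / window
    baseline = sum(history[:-window]) / (len(history) - window)
    if baseline == 0:
        return post.base_score

    spike_ratio = recent / baseline
    if spike_ratio >= spike_threshold:
        # A small blip in interest earns a ranking boost, which earns more
        # impressions, which often earns more engagement: the viral loop.
        return post.base_score * spike_ratio
    return post.base_score
```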

  • Algorithms' complexity and obscurity have helped tech firms make the case that they are neutral platforms. They also allow companies to stand at one remove from responsibility for decisions about promoting and demoting content.
  • But users and critics, increasingly aware of the power of these systems, now want to hold companies more responsible for the outcomes their code produces.

Driving the news: At a hearing on "Algorithms and Amplification," executives from YouTube, Twitter and Facebook, along with Harvard researcher Joan Donovan and ethicist Tristan Harris, will testify Tuesday before the Senate Judiciary Committee's privacy, technology and law subcommittee.

  • The subcommittee is chaired by Sen. Chris Coons (D-Del.), with Sen. Ben Sasse (R-Neb.) as ranking member.

The big picture: Government agencies around the world are starting to take up issues related to algorithms and machine learning.

Our thought bubble: The conversation in policy circles has long concentrated on the outer limits of content decisions — decisions about what gets removed and who gets banned. Those are what software people call "edge cases." What gets recommended, and why, is the center of the issue.

Between the lines: Platforms have long used their algorithms to boost business metrics, such as the amount of time users spend on their sites. Increasingly, though, they are also acknowledging and tapping the power of algorithms to limit the spread of misinformation or hate speech that doesn't merit an outright ban (a rough sketch of such demotion follows the examples below).

  • Twitter has used its algorithmic powers in a variety of ways, including putting potentially offensive content behind a warning flag and limiting retweets of information that it isn't ready to ban entirely.
  • YouTube has changed its recommendation engine to steer away from conspiracy theories and other "borderline content," after criticism that its suggestions led people down rabbit holes that were good for time spent on the site but bad for society.
  • Facebook has changed the way it recommends groups to users in an effort to reduce radicalization and tweaked its algorithm to show more news from credible sources, among other changes.
  • Platforms long tried to avoid such measures for fear of being seen as putting their thumbs on the scale. But their inaction allowed problems to grow unchecked, including election interference, the proliferation of conspiracy theories, vaccine hesitancy and COVID-19 misinformation.
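None of the platforms disclose how these systems work, but the common pattern across the examples above is demotion instead of deletion. Here's a rough, hypothetical Python sketch of that idea, where a `borderline_score` from some upstream classifier maps to a reduced distribution weight and, near the top of the range, a warning label:

```python
from typing import NamedTuple

class ModerationDecision(NamedTuple):
    distribution_multiplier: float  # scales how widely the post is shown
    warning_label: bool             # interstitial warning instead of removal

def moderate(borderline_score: float) -> ModerationDecision:
    """Map a hypothetical classifier score in [0, 1] to a soft intervention.

    Content that clearly violates policy is handled elsewhere by removal;
    this path only limits the reach of content that doesn't merit a ban.
    """
    if borderline_score >= 0.9:
        # Near-violating: sharply demote and put it behind a warning,
        # loosely analogous to Twitter's flags or YouTube's de-recommendation.
        return ModerationDecision(distribution_multiplier=0.1, warning_label=True)
    if borderline_score >= 0.6:
        # Borderline: stays up and searchable, but isn't amplified.
        return ModerationDecision(distribution_multiplier=0.5, warning_label=False)
    return ModerationDecision(distribution_multiplier=1.0, warning_label=False)
```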

Yes, but: Some politicians see these efforts as another sign of platforms' unfairness.

  • On the right, complaints about "shadow banning" stem from cases where platforms have limited the spread of users' content rather than banning it outright.

What to watch: Democratic aides told Axios they see today's hearing as a chance to reset the conversation about algorithms and their role in public discourse, a debate that has often been politicized and has devolved into partisan squabbling.

  • Aides are looking forward to homing in on YouTube's recommendation algorithm, which serves up suggested videos based on a user's viewing history (roughly as in the sketch after this list). One question that may come up is how often YouTube users are recommended content that is later found to violate YouTube's policies, an aide said.
  • Polarization around the 2020 presidential election and social media's role in the Jan. 6 Capitol riot make the hearing especially important to have now, the aides said.
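As background for that line of questioning, here is a toy Python version of history-based recommendation; the topic-overlap scoring is a stand-in assumption for the learned models YouTube actually uses.

```python
from collections import Counter

def recommend(watch_history, candidates, k=5):
    """Rank candidate videos by overlap with topics in the watch history."""
    topic_counts = Counter(v["topic"] for v in watch_history)
    scored = sorted(candidates,
                    key=lambda v: topic_counts.get(v["topic"], 0),
                    reverse=True)
    return scored[:k]

history = [{"topic": "chess"}, {"topic": "chess"}, {"topic": "cooking"}]
candidates = [{"id": 1, "topic": "chess"}, {"id": 2, "topic": "news"}]
print(recommend(history, candidates, k=1))  # the chess video ranks first
```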

Tech firms see their algorithms as a kind of trade secret and are reluctant to expose their inner workings, both to keep them from competitors and to make it harder for users to game their systems.

  • Recent efforts at transparency from companies are encouraging, committee aides said. But much of the information about what content stays up, and how algorithms weigh different factors when ranking posts, is still kept secret, an aide said.

The tech companies are expected to focus on steps they are already taking:

  • offering chronological feeds as an alternative to algorithmic ones (sketched below);
  • limiting the spread of "borderline content";
  • and providing ways for individuals to more clearly signal the type of content they want to see.
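For a sense of what the first option means in practice, here's a toy Python comparison; the field names and the `user_interest` weighting are illustrative assumptions, not any platform's actual ranking logic.

```python
from datetime import datetime, timezone

def chronological_feed(posts):
    """Reverse-chronological: newest first, no engagement signals at all."""
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def algorithmic_feed(posts, user_interest):
    """Ranked feed: engagement prediction times a per-user interest weight."""
    def score(p):
        # user_interest maps a topic to a weight the user has signaled,
        # e.g. by following topics or hiding ones they don't want to see.
        return p["predicted_engagement"] * user_interest.get(p["topic"], 1.0)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"created_at": datetime(2021, 4, 26, 9, tzinfo=timezone.utc),
     "predicted_engagement": 0.8, "topic": "politics"},
    {"created_at": datetime(2021, 4, 27, 8, tzinfo=timezone.utc),
     "predicted_engagement": 0.2, "topic": "science"},
]
print(chronological_feed(posts)[0]["topic"])                   # science (newest)
print(algorithmic_feed(posts, {"politics": 1.5})[0]["topic"])  # politics (weighted)
```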

Be smart: When Congress gets a bunch of tech executives to testify, the conversation can quickly devolve into individual lawmakers' beefs with companies.

  • But a spate of recent hearings has shown more bipartisan interest than in the past in actually figuring out how to regulate tech, and algorithms are of interest to regulators around the world.