Aug 12, 2019 - Technology

Trump's pretzel-logic tech policy

Illustration: Sarah Grillo/Axios

The Trump administration's policy toward big tech moved in opposite directions late last week, as the White House sought the big platforms' help in predicting mass shootings even as it reportedly drafted plans to punish them for perceived bias.

Driving the news: On Friday, the administration enlisted the help of Google, Facebook and other companies to detect and deter mass shooters before they act.

  • At a White House meeting, administration officials sought ideas from tech representatives in response to Trump's Monday call for developing "tools that can detect mass shooters before they strike," per the Washington Post's Tony Romm.
  • Tech platforms hold vast troves of user data, but experts are skeptical that AI can replace the painstaking work of real-world threat assessment, and they warn that algorithmic threat detection could generate false positives at scale.

Meanwhile, the White House has circulated a draft executive order that would impose new restrictions on tech platforms' freedom to moderate the content users contribute, according to CNN's Brian Fung.

  • The move follows months of complaints and hearings in which conservatives have derided Facebook and Google (with little actual evidence) for censoring the right.
  • The draft order would put the Federal Communications Commission in charge of determining whether large online platforms are moderated in a politically neutral fashion.
  • Negative findings could result in the companies losing the legal protection of Section 230 of the 1996 Communications Decency Act, which allows them to moderate user contributions without taking on the liabilities of a traditional publisher.

The catch: The draft order on platform moderation wasn't on the agenda at Friday's White House meeting, and the topic didn't come up at all, according to Axios' reporting.

  • Another contradiction: As the Wall Street Journal reported last week, the FBI is seeking private-sector proposals to build a vast dragnet of social media data intended "to proactively identify and reactively monitor threats to the United States and its interests." That push comes just as Facebook has agreed to pay a $5 billion settlement to the Federal Trade Commission for violating its users' privacy rights.

Between the lines: One reason the administration wants to collaborate with social platforms to identify mass shooters is that doing so lets it respond to events like the El Paso and Dayton shootings without offending gun-rights supporters or taking firmer, more explicit action against specific brands of extremism.

But, but, but: Today, the U.S.'s most urgent domestic terror threat springs from white nationalists, neo-Nazis and other groups that sit at the far right of the ideological spectrum, law enforcement researchers have found.

  • Yet when tech platforms take action against right-wing extremists, typically for violating hate-speech policies or inciting violence against specific groups, the companies are dragged before Congress and accused of political bias.

Our thought bubble: Some of the most inflammatory speech in social media today comes straight from the Oval Office. But if tech companies tried to take action against Trump's incitements, they'd face even louder shouts of censorship.

The bottom line: The Trump strategy of "We want to work with you, but we will attack you until you get nicer" has yet to pay off in the international sphere (see China, Iran). It's hard to imagine things playing out any differently in tech.
