Oct 4, 2017

A resistance against Big Tech

Illustration: Lazaro Gamio / Axios

Big Tech's main virtue — its very bigness — is getting it in trouble. Facebook, Google and Twitter are on the hot seat for their credulousness about Russian intelligence operations. And Amazon always seems on the verge of swallowing up another retail sector. Now, there are calls to break them up over their perceived domination of technologies of the future, chiefly artificial intelligence.

Enter the resistance: LiftIgniter, a San Francisco-based AI startup, says it has devised a predictive algorithm that lets smaller players perform one of the main functions that make Big Tech so financially successful: forecasting in real time what a user wants to click on next, and then serving it up. This "flywheel effect" is what keeps people returning to click on Google and Facebook, CEO Indraneel Mukherjee tells Axios. But if Big Tech's dominance in AI isn't curbed, he says, "the rest of the Internet will stop existing."

How it works: The program Mukherjee's team built takes account of a user's search habits, along with those of many other people who have clicked on the same things, and quickly assesses what that person wants to see or read next. They call this "personalization," and their main clients are media companies, which, once a user is on their site, can keep suggesting new headlines to hold the person's attention and earn more ad dollars, said co-founder Adam Spector.
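To make the idea concrete, here is a minimal "readers who clicked this also clicked that" recommender. It is a generic item-to-item co-click sketch of the kind of personalization the article describes, not LiftIgniter's published algorithm; the user names and article IDs are invented for illustration.

```python
# Illustrative sketch only, not LiftIgniter's code: count which articles tend to be
# clicked by the same readers, then suggest the items most associated with whatever
# the current reader is viewing.
from collections import Counter, defaultdict

# Hypothetical click histories: user -> list of article IDs they clicked.
click_histories = {
    "user_a": ["politics-1", "tech-4", "tech-7"],
    "user_b": ["tech-4", "tech-7", "sports-2"],
    "user_c": ["tech-4", "politics-1"],
}

# Count how often each pair of items is clicked by the same user.
co_clicks = defaultdict(Counter)
for items in click_histories.values():
    for a in items:
        for b in items:
            if a != b:
                co_clicks[a][b] += 1

def recommend_next(current_item, k=3):
    """Return the k items most often co-clicked with the item the reader is on now."""
    return [item for item, _ in co_clicks[current_item].most_common(k)]

# The headlines most associated with "tech-4" by other readers' behavior.
print(recommend_next("tech-4"))
```

A production system would also weight recency and the individual user's own history, but the co-click signal above is the core of the "flywheel": the more people click, the better the next suggestion gets.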

This makes the program an equalizer: "I know I can go to YouTube and find content relevant to me," Spector said. "I keep going back there and uploading my own content. It's positive reinforcement." Smaller players can achieve the same connection with users, he said, and thus capture the flywheel effect for themselves.

But isn't there a new trap: Having loosened Big Tech's grip on Internet users, doesn't Mukherjee's company go on to contribute to the country's echo chamber, in which people, aided by algorithms built by that same Big Tech, read and listen only to voices with which they already agree?

Mukherjee agreed that is a risk. But an algorithm can also be programmed to provide regular, diverse content, he said: half the suggested material chosen for the clicks, and the other half drawn from different "clusters of topics." "Technology is powerful enough to do the right thing," he said. "It's being pushed to emphasize something not completely moral."
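A blend like the one Mukherjee describes could look like the following minimal sketch. The function name, its parameters, and the sample headlines are assumptions made for illustration, not LiftIgniter's actual system: half the slots go to the engagement model's top picks, the other half to items pulled from topic clusters the click model would not surface.

```python
# Illustrative sketch only: a 50/50 blend of click-optimized picks and diverse items.
import random

def blended_feed(click_ranked, diverse_pool, n_slots=8):
    """Fill half the slots with the top click-optimized items and the other half
    with items sampled from topic clusters outside the reader's usual habits."""
    half = n_slots // 2
    click_half = click_ranked[:half]
    diverse_half = random.sample(diverse_pool, min(half, len(diverse_pool)))
    return click_half + diverse_half

# Example: four headlines picked for engagement, four drawn from other topics.
feed = blended_feed(
    click_ranked=["tech-7", "tech-9", "tech-4", "politics-1", "tech-2"],
    diverse_pool=["science-3", "arts-5", "world-8", "health-6"],
)
print(feed)
```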

"Ultimately the power is in the hands of the people," Mukherjee said. "If we get corrupted, it will be a bad world. But it's unlikely that all the small media companies will get corrupted. You need AI for everyone, not just the powers like Fox."
