May 14, 2019

A contest to beat geopolitical "superforecasters"

[Photo: a man having his palm read. Caption: "A former forecasting method." Credit: Topical/Hulton/Getty]

Four years ago, a team of researchers based at the University of Pennsylvania wowed the U.S. intelligence community by producing a superior new way to forecast geopolitical events. They were dubbed the "Superforecasters."

Driving the news: At a time when the science of professional prognostication is sorely battered, the radical innovation arm of the U.S. intelligence services is looking to best the UPenn team with a fresh big-money prize competition.

What's happening: IARPA, the research arm for the U.S. director of national intelligence, is offering $250,000 in prize money in a contest to forecast geopolitical events such as elections, disease outbreaks and economic indicators.

  • The contestants will have access to all of IARPA's data on the winning UPenn methodology.
  • They can do anything they want — use a computer, or no computer, and any methodology.
  • Over the coming eight or so months, they will be asked hundreds of closed-ended questions, such as: "How many missile test events will North Korea conduct in August 2019?" and "What will be the daily closing price of gold in June 2019 in USD?"

I asked Seth Goldstein, the IARPA program manager running the contest, about his dream outcome: "I am hoping to find the next Swiss patent clerk doing forecasting in his spare time," he says.

Background:

  • There is substantial rigor to current forecasting, but it is still more an art than a science: "It is largely practiced by an informed elite predicated on gut instinct and intuition, and a range of confirmation biases," says Samuel Brannen, director of the Risk and Foresight Group at the Center for Strategic and International Studies.
  • "It’s one thing for algorithms to time market moves and execute trades faster than humans would be capable, and quite another for them to forecast the outcome of U.S.-China trade negotiations, in which the variables are nearly infinite and perhaps impossible to identify," Brannen says.

But from 2011 to 2015, IARPA ran another forecasting contest. That's the one that was won by the UPenn team, led by social scientist Philip Tetlock.

  • Tetlock, who had been studying forecasting methods for decades, pushed his team of some 2,000 volunteers to be open-minded, and he championed outsiders with no particular subject matter expertise.
  • Then he culled out a small group that seemed to be preternaturally terrific prognosticators. He called them superforecasters (and in 2015 co-authored a book about the methodology called "Superforecasting: The Art and Science of Prediction").
  • Tetlock declined to comment for this story.

Goldstein calls Tetlock's performance the "gold standard." But now he wants to do better.

  • The contestants will be pitted against another team of expert forecasters randomly assigned to work with machines on prognostications.
  • The maximum $153,000 prize can be won if a team comes in first place and performs at least 20% better than the benchmark set by Tetlock's superforecasters. "They get a really nice chunk" of money "plus all of the glory," Goldstein says.
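The article doesn't spell out how that "20% better" gap would be measured. A minimal sketch, assuming the comparison uses mean Brier scores (the accuracy metric used in earlier IARPA forecasting tournaments, where lower is better); the function names and example numbers here are illustrative, not IARPA's actual scoring code:

# Hypothetical sketch: comparing a team's forecast accuracy to a benchmark
# using mean Brier scores (lower is better). Illustrative only.

def brier_score(forecast_probs, outcome_index):
    """Brier score for one multiple-choice question.

    forecast_probs: probabilities assigned to each answer option (sums to 1).
    outcome_index: index of the option that actually occurred.
    """
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast_probs)
    )

def mean_brier(forecasts, outcomes):
    """Average Brier score across all resolved questions."""
    scores = [brier_score(f, o) for f, o in zip(forecasts, outcomes)]
    return sum(scores) / len(scores)

def beats_benchmark_by_20_percent(team_score, benchmark_score):
    """True if the team's mean Brier score is at least 20% lower (better)
    than the benchmark's -- the assumed top-prize condition."""
    return team_score <= 0.8 * benchmark_score

# Example: two questions, each with two answer options.
team_forecasts = [[0.9, 0.1], [0.3, 0.7]]       # team's probabilities
benchmark_forecasts = [[0.6, 0.4], [0.5, 0.5]]  # benchmark's probabilities
outcomes = [0, 1]                               # index of the option that occurred

team = mean_brier(team_forecasts, outcomes)          # 0.10
bench = mean_brier(benchmark_forecasts, outcomes)    # 0.41
print(beats_benchmark_by_20_percent(team, bench))    # True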

Members of Tetlock's group have added to their forecasting renown. Regina Joseph, a Tetlock superforecaster, built a cybersecurity forecasting training program for the Dutch National Cyber Security Centre, in addition to a tournament that launched last year and is still underway.

  • In terms of what forecasting advances might come next, Joseph tells Axios that she would like to see "longitudinal tournaments," in which subject matter experts are pitted against generalist elite forecasters.
  • "So far, results suggest good forecasting skill can still beat subject matter expertise in niche areas," she says.
