When it comes to predicting world events, the most consequential decisions are fraught with uncertainty: Will this national economy stabilize or crash? Will that country follow through on its promises to halt production of WMDs? Will these public demonstrations lead to democratic change or violent revolt?
“Governments rely routinely and heavily on intuitive beliefs about high-stakes outcomes,” write psychology researchers Barbara Mellers, Philip Tetlock, Don Moore, and colleagues.
Yet training the people who make these intuitive judgments is difficult, because little scientific research exists to guide it.
Mellers and colleagues found a unique opportunity to compare forecasting methods in a two-year forecasting tournament sponsored by IARPA, the Intelligence Advanced Research Projects Activity.
Forecasters from around the world were recruited via professional societies, research centers, alumni associations, science blogs, and word of mouth. They were invited, through the website www.goodjudgmentproject.com, to estimate the likelihood that each of 199 geopolitical events would occur.
The first year began with 2,246 participants, who were randomly assigned to conditions that crossed type of training (no training, scenario training, or probability training) with type of group influence (independent, crowd-belief, team, or prediction-market forecasting).
Forecasters had to answer questions like, “Will Italy’s Silvio Berlusconi resign, lose re-election/confidence vote, or otherwise vacate office before 1 January 2012?” They predicted the likelihood that the event would occur, on a scale from 0% (certain it will not occur) to 100% (certain it will occur), and they were encouraged to update their estimates as many times as they wished up until the question was closed.
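Forecasts of this kind are typically evaluated with the Brier score, the squared error between the stated probability and the realized outcome. As a rough illustration (not necessarily the tournament's exact scoring rule), it can be sketched as:

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast (0.0-1.0) and the
    realized outcome (1 if the event occurred, 0 if it did not).
    Lower is better; 0.0 is a perfect forecast."""
    if not 0.0 <= forecast <= 1.0:
        raise ValueError("forecast must be a probability between 0 and 1")
    return (forecast - outcome) ** 2

# A forecaster who said 70% for an event that did occur
# scores (0.70 - 1) ** 2 = 0.09.
print(round(brier_score(0.70, 1), 2))
```

Because forecasters could update their estimates until a question closed, a well-timed revision toward the eventual outcome directly lowers this error.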
The results showed that probability training was more effective than scenario training, which, in turn, was more effective than receiving no training at all.
The results also showed that team forecasters were more accurate than crowd-belief forecasters, who were, in turn, more accurate than independent forecasters.
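For intuition about why pooling beliefs can help at all, the simplest form of crowd aggregation is an unweighted average of individual probabilities. This is only an assumed baseline for illustration; the study's actual aggregation methods were more sophisticated:

```python
def aggregate_crowd(forecasts: list[float]) -> float:
    """Unweighted mean of individual probability forecasts.

    A deliberately simple baseline: averaging lets individual errors
    partially cancel. Real aggregation schemes often weight
    forecasters by track record or push the mean toward the extremes.
    """
    if not forecasts:
        raise ValueError("need at least one forecast")
    return sum(forecasts) / len(forecasts)

# Three forecasters on the same question:
print(round(aggregate_crowd([0.60, 0.75, 0.90]), 2))
```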
“Greater accuracy in teams was due to members who gathered and shared information, encouraged one another, and discussed issues,” Mellers and colleagues explain.
Intriguingly, the best forecasters at the end of the first year were grouped together in “elite teams” for the second year of the tournament. These “superforecasters” were far more accurate in their forecasts than the other groups in Year 2.
“Results strongly disconfirm the expectations of pro-independence theorists,” the researchers note.
“Our psychological interventions reduced the errors in individual forecasts for events ranging from military conflicts and global leadership changes to international negotiations and economic shifts.”
“To improve geopolitical forecasts, one needs insights from both statistics and psychology.”
You can learn more about IARPA and the Good Judgment Project at the “Exploring the Optimal Forecasting Frontier” symposium taking place at the 26th APS Annual Convention in San Francisco, CA, USA. The symposium will feature study co-authors Don Moore, Lyle Ungar, Pavel Atanasov, and Samuel A. Swift.
Mellers, B., Ungar, L., Baron, J., Ramos, J., Gurcay, B., Fincher, K., Scott, S., Moore, D., Atanasov, P., Swift, S., Murray, T., Stone, E., & Tetlock, P. (2014). Psychological strategies for winning a geopolitical forecasting tournament. Psychological Science, 25(5), 1106–1115. DOI: 10.1177/0956797614524255