Two Lessons for Improving Forecasts

by John Sides on May 22, 2012

in Experimental Analysis, Other social science

For each of four weeks, participants made probabilistic forecasts in four domains relating to domestic and international politics and economics: the Dow Jones Industrial Average stock index, the national unemployment rate, Obama’s presidential job approval ratings, and the price of crude oil. I randomly assigned 308 participants to one of three groups. The base rate group received information about how frequently changes of various magnitude in these variables occurred in the previous year; the performance feedback group received information about how far off their predictions were the previous week; the control group received no extra information. I also recruited an “expert” subgroup of people with backgrounds in finance and economics in order to look at the effects of expertise on accuracy of Dow predictions, and I distributed these 72 experts evenly among the three groups.
The results are very encouraging. Both strategies significantly improved forecasting accuracy. On average, participants who received base rate information or performance feedback were 10 to 15 percent more accurate than those who did not.
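The post doesn't say how accuracy was scored, but a standard way to evaluate probabilistic forecasts like these is the Brier score (mean squared error between the stated probability and the 0/1 outcome, lower is better). A minimal sketch, with purely illustrative numbers that are not from the study:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes (lower = better)."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical weekly forecasts of whether the Dow rises, for a control
# participant and a feedback-group participant; outcomes are what happened.
control  = [0.5, 0.5, 0.6, 0.4]
feedback = [0.7, 0.3, 0.8, 0.2]
outcomes = [1, 0, 1, 0]

# Relative improvement in Brier score of the feedback group over control.
improvement = 1 - brier_score(feedback, outcomes) / brier_score(control, outcomes)
```

Under a score like this, "10 to 15 percent more accurate" would correspond to a 10 to 15 percent reduction in forecasting error relative to the control group.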

From research by Dartmouth undergraduate Kelsey Woerner.  See more at Jay Ulfelder’s place.
