The Forecasts and The Outcome

[Figure: midtermforecasts5.png — chart comparing the September and Election Eve forecasts to the outcome]

[UPDATE, 4:11 pm: Mark Blumenthal of Pollster notified me via email of Doug Rivers’ forecast here, which was quite close to the outcome. I have added it to the chart.]

In earlier posts (here and here), I presented some forecasting models of the Democrats’ number of House seats dating from early to mid-September. I have updated that graph, adding a September forecast from Doug Hibbs (pdf), updated forecasts from 538 and Stochastic Democracy, and Election Eve forecasts from Jay Cost, Sam Wang, Election Projection, Electoral Vote (http://www.electoral-vote.com/evp2010/Senate/Maps/Nov02-s.html#4), Larry Sabato, and Pollster. The September and November forecasts appear in different colors to distinguish them.

The provisional outcome is a 65-seat GOP gain, which is Nate Silver’s estimate. If that proves off by a couple of seats, it won’t change my conclusions.

I did not include forecasters who published a range rather than a single point estimate, including Real Clear Politics, Charlie Cook, and Stuart Rothenberg. Rothenberg forecast a “Likely Republican gain of 55-65 seats, with gains at or above 70 seats possible.” Cook forecast “a Democratic net loss of 50 to 60 seats, with higher losses possible.”

The reason that I compiled this graph is not to evaluate the accuracy of individual forecasting models and declare one model the winner. All of these models have substantial uncertainty in their predictions. So I do not think there is conclusive evidence that anybody “wins.”

More important are the lessons we can learn from these models. The central question is why nearly all of the forecasts were, if anything, underestimates. (Cook and Rothenberg both hedged somewhat, but the actual seat loss still sits at the uppermost end of their ranges.) Here are some related observations:

  • The GOP gains are greater than we would expect based solely on the economy and presidential approval. The Cuzan and Lewis-Beck and Tien models rely on those two factors, and their estimates are clearly smaller (indeed, the smallest). The economy and presidential approval are the most important background factors in this election: they created a climate that likely encouraged higher-quality GOP candidates to run, facilitated GOP fundraising, and thereby helped GOP candidates win. But they cannot explain the outcome by themselves. That’s important.
  • Moreover, GOP gains are greater than models that rely on “generic ballot” figures would predict (e.g., Abramowitz, Bafumi et al.). Whatever the generic ballot was capturing—e.g., generalized dissatisfaction with the Democratic party—it wasn’t enough to bring the forecasts in line with the outcome. (UPDATE, 8:20 pm: I should have added that Bafumi et al. began forecasting in July, not September, and rely on both the generic ballot and the party of the president. Their model forecast considerable movement toward the GOP but, as Chris Wlezien put it in an email to me, “not enough”!)
  • Other models that incorporate additional factors were no different. Jacobson’s incorporates the political experience or “quality” of the candidates. Campbell’s incorporates Cook’s ratings. Election Projection’s and Wang’s models are based on polling numbers. Silver’s incorporates these factors and more. In general, many of these models were a little closer to the outcome, but not so much so that we would declare their methodologies superior. Relatedly, there is no evidence that simply incorporating more information into the model leads to significantly better predictions. 538’s model draws on a lot more information than Campbell’s, but their predictions were nearly identical.

So where does that leave us? First, the general tenor of the models was on the mark: most of them led us to expect a GOP takeover of the House, and that’s what we got. I don’t think we should expect models to get the answer precisely correct. That’s certainly not why I paid attention to them. (See my comments on forecasting in this earlier post on 538.)

Second, because the GOP gains were greater than what the models typically predicted, we know the factors included in the models aren’t sufficient to explain the outcome. Other ingredients mattered. And here is where speculation comes in. Perhaps the models didn’t fully account for the “enthusiasm gap.” Perhaps the balance of fundraising and spending was important. Perhaps turnout operations were key. Perhaps there were idiosyncratic attributes of candidates, districts, or races.

We’ll likely never know for sure. But evaluating elections against these kinds of predictions is a good way to begin to understand the magnitude of the results and to identify possible reasons for the outcome.

5 Responses to The Forecasts and The Outcome

  1. Paul G. November 3, 2010 at 4:08 pm #

    John, or the one that most acknowledge hampered the models in 1994: a change in the “normal” seats for each party.

    I.e. it may not be idiosyncratic at all, but a shift in the fundamentals.

  2. Seth November 3, 2010 at 6:57 pm #

    Scott Lemieux was pretty close, too.

  3. TreeTop November 3, 2010 at 8:26 pm #

    I just ran across your post from June predicting 87% reelection rate in the House.

    http://www.themonkeycage.org/2010/06/feel_the_anger_people.html

    According to my calculations, you got it exactly right. Of the 394 Representatives running for reelection, 343 were reelected . . . 87%

  4. belegoster November 3, 2010 at 11:53 pm #

    Perhaps what should be speculated about is not what the forecasters left out of their models, but what they included. The most jarring outcome of Tuesday night is that the consensus of generic ballot polls performed very well in predicting the final vote share (R+7), but the translation into seats didn’t match up to historical standards. Specifically, the GOP overperformed in seats (56%) relative to its vote share (53%), when as a minority party the reverse is supposed to happen (see Kastellec 2006, 2008). Many of these models include an incumbency correction that accords an advantage to the incumbent party (requiring less than 50% of the vote share to hold 218 seats)–might this upside-down result suggest that the incumbency advantage was actually negative this year? That would well and truly be shocking.

Trackbacks/Pingbacks

  1. Democrats will take back the House, Romney will win Pennsylvania and other such silliness | Saint Petersblog - September 25, 2012

    […] they will gain 11 seats.  We can also look at expectations of handicappers, which are often quite accurate.  For example, take Larry Sabato’s House forecasts.  If we assume that the Democrats win every […]