Forecasting Follow-up to Jay Cost

by John Sides on April 25, 2012


Jay Cost responds to the Washington Post’s forecasting model that I helped put together:

I find Klein’s model to be particularly unpersuasive, but all these models seem to share a similar problem: they take the blowout elections of 1952, 1956, 1964, 1972, 1980, and 1984 not as historical peculiarities with little relevance to today, but as central tendencies. To put this in plain English, the three variables Klein elaborates (or, for that matter, the variables in any model I’ve ever seen) cannot account for the wide gulf between Eisenhower v. Stevenson and Clinton v. Dole. Those were campaigns waged in different ages, yet the models never acknowledge that and end up basically forcing square pegs into round holes.

Here are some thoughts.

Eisenhower-Stevenson.  This is a small point.  I’m not sure about other models, but the Post model’s prediction for 1952 is within 1 point of the outcome and its prediction for 1956 is almost exactly equal to the outcome.  Those are in-sample predictions.  If we drop the 1952 and 1956 elections from the model one by one, reestimate the model, and then calculate an out-of-sample prediction for each election, we get similar results: errors that are, in absolute value, equal to 1.1 and 0.2 points, respectively.  As I’ve said, any individual prediction has a lot of uncertainty, which is why we don’t focus on point predictions and errors in presenting the models’ results.  But ultimately I’m not sure why Cost thinks those elections are so hard to predict.
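For readers who want to see what that drop-one-election exercise looks like mechanically, here is a minimal sketch in Python. It assumes a simple OLS model of incumbent-party vote share; the predictors and the toy numbers are hypothetical stand-ins, not the Post model’s actual specification or data. The point is only the procedure: drop an election, refit, and predict the held-out year out of sample.

```python
# A minimal sketch of the leave-one-out check described above, assuming a
# simple OLS model. The predictors and numbers are hypothetical stand-ins.
import numpy as np

# Hypothetical inputs per election year: [presidential approval (%), GDP growth (%)]
X = np.array([
    [65.0, 3.0],
    [68.0, 2.5],
    [40.0, 0.2],
    [55.0, 4.1],
    [50.0, 2.0],
    [38.0, 1.4],
])
# Hypothetical incumbent-party share of the two-party vote (%)
y = np.array([57.0, 55.5, 46.0, 54.0, 51.0, 48.5])

def fit(X, y):
    """OLS coefficients for y ~ 1 + X."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Leave one election out, refit on the rest, then predict the held-out year.
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    beta = fit(X[keep], y[keep])
    pred = beta[0] + X[i] @ beta[1:]
    print(f"election {i}: out-of-sample error = {pred - y[i]:+.1f} points")
```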

The Fundamentals.  I completely agree with the thrust of Cost’s post.  He says that because of party identification, 90% of the electorate is “locked in.”  I’ve been blogging about that for a while—e.g., here or here or here.  And I agree with Cost when he says:

That’s why I’m keeping an eye on the fundamentals – rather than the horse race polls – until relatively late in the cycle.

One of the fundamentals, according to Cost elsewhere, is presidential approval.  I would guess, hopefully correctly, that he thinks the economy is also a fundamental, since he writes:

It is not like the experts are predicting the economy is going to take off between now and Election Day. Instead, we are going to get more of the same muddled growth at roughly 2-2.5 percent, far less than what is needed to reduce the deficit or create jobs.

Basically, Cost seems to think that the election will depend a lot on presidential approval and economic growth, which are the central components of the model.  In other words, his model of how voters think—a big effect of party identification plus some referendum voting by weaker partisans or independents—is my model too.

This gets at why we build those models in the first place.  We do so not purely as exercises in prediction, but because we think we’re testing theories of voter choice.  You can predict or at least explain elections with lots of things.  Here, Cost even gives you an example.  But generally the forecasting models center on variables that have a theoretical basis, not just predictive value.  In fact, it’s probably more interesting to me to build models that will occasionally fail at prediction, if only because those failures will help us build better theories.

Ultimately, Cost and I agree on the fundamentals, but when those fundamentals are put into a model, Cost and others are skeptical.

Problems with models. If I’m interpreting Cost’s skepticism correctly, he thinks that “times have changed” and forecasting models based on historical elections don’t account for this.  In his account, party loyalty is on the rise, limiting the winning candidate’s margin of victory even when economic conditions or other fundamentals should really favor that winning candidate.  So we’d expect, then, that the model would overestimate the winning candidate’s margin of victory in recent elections.

Which recent elections?  I’m not quite sure where Cost thinks the models would start to generate overestimates.  He mentions 1984 as the last blowout but also seems to think that the model could not account for Clinton’s victory over Dole in 1996.  So let’s just take 1988 as the starting point.  In 1988, the model actually underestimates Bush’s vote share by about 2 points in-sample, and 3 points out of sample.  In 1992, it overestimates Bush’s vote share and even predicts a narrow victory for Bush.  This is the one year in which the model calls the winner incorrectly.  In 1996, the model overestimates Clinton’s vote share by 1.3 points (1.5 points out of sample).  In 2000, it overestimates Gore’s vote share by 3.4 points (4.5 points out of sample).  In 2004, it overestimates Bush’s share of the vote by 1 point, both in and out of sample.  In 2008, it overestimates Obama’s vote share by 3.8 points (5.5 points out of sample).  And as I wrote in my earlier post, the model’s estimates for 2012 (e.g., that Obama is a near-certain winner with 50% approval and 2% GDP growth) feel a bit too optimistic for the incumbent too.

What do I see here?  The model has overestimated the incumbent party’s vote share in 1992-2008.  That, I think, confirms Cost’s notion.  I can even suggest a reason why: partisan biases have gotten larger.  With regard to presidential approval, see Gary Jacobson’s book.  With regard to economic perceptions, see this forthcoming paper (pdf) by Peter Enns, Paul Kellstedt, and Greg McAvoy.  Because of these biases, the fundamentals—good or bad—may sway fewer votes.

At the same time, it’s possible to generate alternative explanations.  1992 was a weird economy (GDP growth but high unemployment and thus negative news coverage), plus there was Ross Perot.  2000 was Gore’s failing as a candidate.  2008 was Obama’s racial identity.  With so few elections, it’s difficult to know whether the outcomes reflect Cost’s theory (stronger partisanship makes the fundamentals less consequential) or are idiosyncratic.  The implication of this is important: the small number of elections that are typically modeled (16) not only complicates estimates of the fundamentals but also complicates theories that critique fundamentals-based models.  Only with time will we know whether Cost’s hypothesis is true.  To be sure, I think it has a lot of plausibility.  I just want it acknowledged that small samples aren’t a problem only for fundamentals-based models.

Does all of the above mean that the Post model is wrong in recent elections?  Not really.  If you care about estimating vote share (which I don’t), there are clearly cases between 1992 and 2008 where the model overestimates the incumbent party’s vote share by 3-5 points, and also a couple of cases (in particular, 1996 and 2004) where the difference between the model’s prediction and what actually happened is small.  If you care about picking the winner (and this is what I’m more attentive to), in only one case (1992) did the model call the popular vote winner incorrectly.  (Of course, I would never rely on only one model anyway.  Averages of models are more likely to be accurate.)
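As an aside on that last parenthetical, averaging forecasts is easy to illustrate. A toy sketch follows; the model names and the numbers are made up for the example, not actual published forecasts.

```python
# A toy illustration of averaging several models' point predictions rather
# than leaning on any single one. Names and numbers are invented.
forecasts = {
    "fundamentals_model_a": 51.8,  # predicted incumbent-party vote share (%)
    "fundamentals_model_b": 50.4,
    "polls_plus_economy": 49.6,
}

ensemble = sum(forecasts.values()) / len(forecasts)
print(f"ensemble prediction: {ensemble:.1f}% of the two-party vote")

# Individual models can miss by several points in either direction, but their
# idiosyncratic errors tend to cancel in the average, which is why an ensemble
# usually calls the winner more reliably than any one model.
```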

Takeaways.  First, this model is doing a good enough job at calling winners.  In combination with other models, you’d probably get an even more definitive sense.

Second, it’s not necessarily true that the models will get worse at calling winners.  If Cost is right and fewer votes are in play, then with successive elections the effects of economic conditions and presidential approval will grow somewhat smaller.  But these effects will likely remain greater than zero.  After all, there will still be some slice of voters who think of the election as a referendum on the incumbent party.  And if those voters decide the outcome, then these sorts of models will still tend to predict the winner correctly.

Third, Cost’s critique illustrates how you can use these models in just the way Klein suggested:

So sure, perhaps this year will be different — that’s what my gut tells me, and the model has me thinking about the ways in which that could be true.

That is, you can use the model to start thinking about ways in which its predictions might be false.  But as Klein also notes:

But the reality is, everyone always thinks “this year” will be different, and they’re usually wrong.

To be determined.

Comments

Andy Rudalevige April 25, 2012 at 6:50 pm

As a small addendum, it should be remembered that 1980 was an odd sort of “blowout” for vote-share purposes (obviously the electoral college majority was, but that’s another model). President Reagan won just 50.75% of the popular vote – almost exactly the same as George W. Bush in 2004. Jimmy Carter and John Anderson between them won most of the rest, of course, with the remaining 1.3 million(!) votes as rounding error…

William Ockham April 25, 2012 at 6:51 pm

The problem with the “90% of voters are locked in” notion is that it is irrelevant to the actual decision that voters make. Almost no one is truly undecided about who to vote for, even the folks who think they are. The real decision that everyone makes is whether or not to vote. When you think about it, that makes the model much more believable. The idea that voters are changing their minds about who to vote for based on the most recent economic data is pretty absurd, but thinking of it as affecting the motivation of the most marginal voters is closer to what we know about decision-making generally.

Kendall Kaut April 25, 2012 at 8:59 pm

The model predicts that if Obama’s approval is at 38% and economic growth averages 1.4%, he will still win in a majority of simulations. This seems to fit the problem with the model: it is very hard for the incumbent to lose even with low, non-negative growth. You have a strong bias built in, as far as I can tell, in favor of the incumbent. Explaining away Bush in ’92 because of Perot voters ignores that those voters said they were split over who they would have supported in 1992. I generally agree with Cost’s and Silver’s criticisms of these models. There are a variety of other factors that might have caused Bush to win in 2004. It was during a war period and social issues were the number one issue for most voters.

Michael April 26, 2012 at 12:10 am

Kendall – “It was during a war period and social issues were the number one issue for most voters.”

Huh? Unless you consider Iraq and/or terrorism a social issue, that statement doesn’t fit with a ten second Google inquiry.

http://www.cnn.com/ELECTION/2004/pages/results/states/US/P/00/epolls.0.html

I don’t do American politics, so I’m willing to believe you’re right, but can you provide evidence for your assertion?

Kendall Kaut April 26, 2012 at 3:32 am

Sure. The exit poll lists moral issues as the #1 issue for voters: 22% cited it as the key issue, a plurality. Bush won that category 80-18, the same margin of victory John Kerry garnered from the 20% who listed the economy as the #1 issue. What I’m trying to claim is that categories like terrorism and Iraq may have diluted the normal prominence of the economy as a determining variable in the 2004 election (you are correct that 34% list terrorism and Iraq as the #1 issue). However, moral issues rank higher than the economy as the #1 issue alone. That is my problem with models that rely on just the economy: they ignore that other issues may sometimes be more important, which accounts for why the models sometimes have huge errors in their predictions. Nate Silver seems correct when he says it’s more luck than skill that these models have been right so often when the margins are so wildly off. The 1992 explanation and the model’s prediction of an Obama victory with .5% growth and a 38% approval rating seem odd.

Kendall Kaut April 26, 2012 at 3:34 am

Sorry, I meant to type 1.4% growth and not .5%, but that number is still quite low.

charles sutcliffe April 26, 2012 at 11:01 am

The problem with the model is that incumbency is not the determining factor; first-time party control is. Piles of incumbents lose (Bush, Ford, Carter). If you take a look at incumbent parties (at the national level) in rich democracies, they win over 90% of first-term re-elections. America has re-elected 7 of 8 candidates running for a first term over the lifetime of everyone who will be voting in 2012. The Anglo sample of the UK, Canada, Australia, and New Zealand is higher. Absent the 80-year-old NZ Labour candidate, all the Anglo defeats arose from the OPEC price increases: UK 1974, Canada 1980, USA 1980. Anglo voters over 80 years have always refused to change their minds at the first hurdle. It is more than an Anglo thing: of the countries rated full democracies by The Economist, all 100% re-elected on the last change of power. Absent a new event, Obama will win. Change arguments will be met with “you just wait your turn.”

To a non-American, it has appeared odd that the Republicans have chosen Romney. A cross-country analysis demonstrates the folly of choosing a 65-year-old, and it shows that a 250-million-dollar man is not a wise choice. To compound all this with a disparagement of the word “change” (as in “Obama promised change”) when you will need that word in October is really odd.

It is not in the interests of the industrial/political complex to enlighten the public on the fact that almost half the elections in the full democracies could be cancelled as a cost-cutting measure. Each country needs its own mad punditocracy to add to the faux excitement and create uncertainty where there really is none!

