Is Nate Silver’s popularity good or bad for quantitative political science?

by Joshua Tucker on November 7, 2012

in Campaigns and elections, Political science, Public opinion

The following is a guest post from political scientists Adam Berinsky (MIT), Neil Malhotra (Stanford) and Erik Snowberg (Caltech).

******

Nate Silver’s work at 538.com has undoubtedly drawn positive attention to quantitative political science. A rigorous approach to aggregating polling results in election forecasting is a huge improvement over the finger-in-the-air prognostications employed by most political pundits in both the broadcast and print media. Further, Silver’s analyses have confirmed the general finding in political science that the impact of campaign events such as debates, conventions, advertisements, and gaffes is largely overblown.

As Silver’s work has increased in popularity, so has the scientific approach to studying politics, which is good news for political scientists who want to share their work with a broader audience. Much like the publication of Moneyball sent shivers down the spines of traditional baseball scouts who feared for their jobs, the traditional pundit class knows the more quantitative approach is catching on and has therefore been highly defensive (as witnessed by the attacks by David Brooks and Peggy Noonan).

Silver draws a mixed reaction from an informal poll of political scientists. While we don’t fear for our jobs, there is no doubt some professional jealousy at Silver’s success. But there are good reasons for healthy skepticism, even as we respect Silver’s work ethic and his flair for explaining statistics to the public.

A major concern is that Silver’s aggregation model is not publicly and completely presented, making it impossible to replicate any given projection, whether at the state or national level. Silver does provide some information about his methodology, but it would be unlikely to pass the replication review at the Quarterly Journal of Political Science, for example. On the other hand, his public forecasts can be rigorously evaluated and compared to other techniques, if only after the fact, for example in this paper.
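To make concrete what a fully disclosed aggregation rule involves, here is a minimal sketch in Python. It is not Silver’s model: the polls, the sample-size weighting, and the recency half-life are all invented for illustration, but a published methodology would need at least this level of specificity before anyone could replicate its projections.

```python
# Hypothetical polls: (candidate_share_pct, sample_size, days_old)
polls = [
    (50.8, 1200, 3),
    (49.5,  900, 5),
    (51.6,  600, 8),
]

def aggregate(polls, half_life=7.0):
    """Combine polls with weights from sample size and recency.

    Each poll is weighted by its sample size (a rough proxy for inverse
    sampling variance) times an exponential decay in poll age.
    """
    num = den = 0.0
    for share, n, age in polls:
        weight = n * 0.5 ** (age / half_life)
        num += weight * share
        den += weight
    return num / den

print(f"weighted average share: {aggregate(polls):.2f}%")
```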

This may seem like a quibble, but it has important implications. How can we, or the public, evaluate Noonan’s claim that Silver is biased against Romney? Or our own belief that he consistently overstated Romney’s chance of victory? We could begin by looking at other models (e.g. those of Sam Wang, Drew Linzer, and Simon Jackman) with publicly disclosed methodologies, but without Silver’s model it is difficult to know where the disagreements lie, and whose side to take. Instead, we are left to speculate about motivations: the media prefers close races where the “horse race” style of coverage draws the most viewers. Given that Silver is paid by a major newspaper, these incentives should also theoretically apply to him. Of course, right before the election his incentive is to get the outcome right—and it seems that his model has converged with others over time. However, as with public opinion polling, there are no benchmarks that can tell us which model is correctly forecasting the race in September and October.

This point is especially important given the controversies around political polling that erupted this election cycle. In the last few months, we have seen accusations from conservative commentators that the polls were wrong because they had too many Democrats. One website went so far as to “unskew” the polls to show a large Romney lead. By not making his methodology fully transparent, Silver opens the door to (false) insinuations that he is “adjusting” his predictions to fit his personal political views (see, for example, this). We should like political prognosticators because we respect their methods. We shouldn’t respect someone merely because we like their conclusions – a warning that applies both to the right and the left. Without transparency, poll aggregation sites become just one more forum for partisan bickering.

Overall, we are grateful to Silver for the attention he has drawn to rigorous political science. But the discipline might be best served by keeping three things in mind. First, predictions from any model have uncertainty attached to them. A prediction that Obama has a 51% chance of winning California is not a prediction that got it “right”, as it would be wrong 49% of the time. Indeed, that is just proof that the model got it wrong by being too conservative. Second, we should choose our forecasters based on a track record of forecasts and their methods, with the latter being especially important when there is no way to assess projections. Third, and finally, we should create professional incentives around tenure and promotion for producing high-quality work of contemporary political relevance, and should not be content to outsource projection to any one source. Political scientists have the ability to raise the level of public discussion around opinion polls. As a profession, we should make sure that we put our best foot forward.
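One way to make the first point concrete is to score probabilistic forecasts after the fact. The sketch below uses the Brier score; the probabilities and outcomes are invented for illustration and are not drawn from 538.

```python
# Invented probabilities and outcomes, purely for illustration.
forecasts = [0.51, 0.90, 0.75, 0.30]   # predicted P(candidate wins)
outcomes  = [1,    1,    1,    0]      # 1 = candidate won, 0 = candidate lost

# Brier score: mean squared difference between probability and outcome.
# Lower is better; a 0.51 forecast for a race the candidate wins easily
# contributes a large penalty of (0.51 - 1)**2, about 0.24, on its own.
brier = sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")
```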

{ 9 comments }

Owen Marsden November 7, 2012 at 7:44 pm

I think Silver has addressed a lot of the criticisms you raise about his transparency. He has explained at length why his model’s results differ from those of other models.

Regarding his differences with Wang’s model, as well as many of the other models you mentioned, the primary difference is that Silver accounts for the fact that outcomes across states, especially demographically similar states, are linked rather than independent. So a shift in support in Pennsylvania should show up, at some point, as a similar move in a state like Ohio, and a collapse in Obama’s Pennsylvania numbers on Election Night would almost certainly mean he lost Ohio. Silver accounts for this; the other models don’t.
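To see why that correlation matters, here is a toy Monte Carlo sketch in Python. It is not Silver’s model; the polling leads, error sizes, and correlation structure are invented, but it shows how a shared national error raises the chance that two similar states tip together.

```python
import random

random.seed(0)

def prob_carries_both(n_sims=100_000, correlated=True):
    """Toy estimate of P(candidate carries BOTH Pennsylvania and Ohio).

    Each state's margin = hypothetical polling lead + national error
    (+ state-specific error). With a shared national error the two
    states tend to move together; without it they are independent.
    """
    lead_pa, lead_oh = 2.0, 1.5             # invented polling leads, in points
    both = 0
    for _ in range(n_sims):
        national = random.gauss(0, 2.5) if correlated else 0.0
        state_sd = 1.5 if correlated else 2.9   # keeps total variance similar
        pa = lead_pa + national + random.gauss(0, state_sd)
        oh = lead_oh + national + random.gauss(0, state_sd)
        both += (pa > 0) and (oh > 0)
    return both / n_sims

print("correlated errors:  ", prob_carries_both(correlated=True))
print("independent errors: ", prob_carries_both(correlated=False))
```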

As for his results converging with the other models as time went on, that wasn’t Silver tinkering with his model in order to fall in line. Instead, the certainty of his forecast naturally increases as the date of the election approaches, as the risk of an “October surprise” is reduced with, well, no October surprise occurring and time passing. The other models are static, and so they tend to show the outcome if the election were today. Silver’s “now-cast” does the same thing, and was more in line with the others.
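A rough way to see the forecast/now-cast distinction: if the model adds extra variance for the time remaining before the election, the same polling lead yields a less certain forecast months out and a more certain one near Election Day. The parameters below are invented for illustration, not taken from 538.

```python
from math import erf, sqrt

def win_prob(lead, poll_sd=2.0, drift_per_day=0.3, days_left=0):
    """Toy win probability for a given polling lead (in points).

    Total uncertainty combines polling error with a drift term that
    grows with the days remaining, so the same lead gives a less
    certain "forecast" months out and a more certain "now-cast" at 0 days.
    """
    sd = sqrt(poll_sd ** 2 + (drift_per_day ** 2) * days_left)
    return 0.5 * (1 + erf(lead / (sd * sqrt(2))))

for days in (60, 30, 7, 0):
    print(f"{days:>2} days out: P(win) = {win_prob(2.0, days_left=days):.3f}")
```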

W Mason November 7, 2012 at 9:59 pm

“A prediction that Obama has a 51% chance of winning California is not a prediction that got it “right”, as it would be wrong 49% of the time”

This passage, in and of itself, clearly demonstrates your lack of a grasp of forecasting and prediction. In this example, if Obama lost California 49% of the time, the prediction would be 100% correct. It wouldn’t be wrong 49% of the time.

Lawrence Zigerell November 7, 2012 at 11:00 pm

I think the presumption is that Obama would win California in 100 out of 100 elections, so a prediction that Obama has a 51 percent chance of winning California understates his chances by 49 percentage points. Not perfectly stated, of course.

Henry Farrell November 7, 2012 at 10:04 pm

I disagree with the basic premise here. Yes, it would be nice to see the innards of the model, but two points. First, your suggestion that

By not making his methodology fully transparent, Silver opens the door to (false) insinuations that he is “adjusting” his predictions to fit his personal political views (see, for example, this). We should like political prognosticators because we respect their methods. We shouldn’t respect someone merely because we like their conclusions – a warning that applies both to the right and the left. Without transparency, poll aggregation sites become just one more forum for partisan bickering.

seems to me to be demonstrably incorrect. If there are partisans out there who don’t like his results, or the results of any pollster, providing them with greater transparency isn’t going to stop false insinuations. It is merely going to change the nature of those insinuations. Ask the folks in climate science whether providing access to data and methodology stopped the insinuations. Second, as you sort-of-acknowledge, the proof of the pudding is in the eating. It should certainly be possible to detect bias ex post. Given that Silver’s business model centers on the trustworthiness of his results rather than a willingness to skew them, this should surely provide good incentives towards honesty. There is obviously a lot of value to standard scientific practices of data sharing – schemes like Victoria Stodden’s for preserving reproducibility of analysis techniques are great too. If 538 started encouraging sloppier practices among academics, then this would be a problem. But I’m unaware of any evidence that it’s doing this – and as long as it is operating in the commercial space, I am not sure whether it is realistic to expect it to observe the same standards of transparency that academics should rightly be expected to observe. Perhaps I’m wrong here, or misinterpreting your argument, but there you go …

LFC November 8, 2012 at 12:39 am

I don’t think I looked at 538 once during the entire campaign. Indeed I’m not sure I’ve ever been to Silver’s site.

Why not?

First, I couldn’t be bothered to obsess every day (or even every week) over polls and forecasts, aggregated, sophisticated, simplistic or whatever. Such obsessing reinforces, albeit from another angle, the ‘horse race’ aspect of the campaign that pundits and journalists are criticized for paying too much attention to. Whether you’re Nate Silver or an innumerate pundit, the basic question for you is still: Who’s ahead? Who’s behind? Who’s going to win? I find that exclusive focus boring. It drains whatever substance a campaign might have — often precious little to begin with — right out.

Second, I was keeping one eye on The Monkey Cage. So why bother with Silver when you can read this blog now and then? It’s not that I am an unqualified fan of this blog — I am not — but if what you want is people who know their quant stuff, it would seem this blog is sufficient, and indeed more than sufficient, for the non-quant person. If I had had to read both Silver *and* The Monkey Cage, I think I might have gone insane (figuratively speaking).

Finally, I find it hard to get too excited about the fortunes of quantitative political science, partly no doubt because I’m not a quantitative political scientist.

Ted Craig November 8, 2012 at 12:44 pm

Silver’s problem is he’s both a stat guy and a vocal partisan. It’s hard for many people to separate the data from the man. Let’s say Silver were a Yankees fan who also presented stats showing the Yankees field the best team. Even if the data were accurate, fans of other teams would claim bias.

ORSAMatic November 8, 2012 at 1:49 pm

Code or it didn’t happen.

ORSAMatic November 8, 2012 at 1:56 pm

While I do want to see his code just to validate what he’s doing, I’m also, very self-interestedly, interested in the Gibbs sampler for STATA that he may have coded up. He appears to use STATA for at least part of his model, judging from the output presented on his blog, and he claims to be a Bayesian, so…

Will Jennings November 9, 2012 at 10:13 pm

Nate Silver has done a lot for popularizing the art/science* of election forecasting. Broadly speaking I am pro-Silver, at least when the alternative is a vacuous punditocracy (which the UK suffers from at times too). There are reasons to be cautious about the hero worship accorded him, though. Successful forecasters can, paradoxically, start to become seen as infallible (anti-probabilist) sources of knowledge about elections. And when Silver calls an election wrong (due to data failure or a breakdown in his model), as he surely will if he stays in the game long enough, the scorn heaped on election forecasting will potentially do great damage to the political science profession and its standing. The pundits will jump all over the guy who has been making them look foolish and have their revenge. Indeed, Nate Silver has been wrong before: when it came to predicting the UK 2010 general election. After a fairly belligerent intervention on the merits of ‘proportional swing’ models over uniform swing, Silver’s model performed poorly in contrast to alternatives. A relative lack of knowledge of the UK no doubt was a factor. But still, even brilliant forecasters can get it wrong.

If you’re interested, my colleague Rob Ford’s response to Nate Silver about forecasting the UK 2010 election is posted here:
http://www.pollster.com/blogs/ford_response_to_nate_silver.php?nr=1

*delete as appropriate
