
Is Nate Silver’s popularity good or bad for quantitative political science?

- November 7, 2012

The following is a guest post from political scientists Adam Berinsky (MIT), Neil Malhotra (Stanford), and Erik Snowberg (Caltech).

******

Nate Silver’s work at 538.com has undoubtedly drawn positive attention to quantitative political science. A rigorous approach to aggregating polling results in election forecasting is a huge improvement over the finger-in-the-air prognostications employed by most political pundits in both the broadcast and print media. Further, Silver’s analyses have confirmed the general finding in political science that the impact of campaign events such as debates, conventions, advertisements, and gaffes is largely overblown.
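Silver’s exact model is not public, but the core logic of poll aggregation is straightforward to sketch. Below is a minimal illustration in Python, with invented polls; it uses simple inverse-variance weighting, so larger (more precise) polls get more influence, and it omits the house-effect corrections, time trends, and state fundamentals a real forecasting model would add.

```python
import math

def aggregate_polls(polls):
    """Combine poll results via inverse-variance weighting.

    Each poll is (candidate_share, sample_size). Weighting by the
    inverse of each poll's sampling variance gives more influence
    to larger, more precise surveys.
    """
    weighted_sum = 0.0
    total_weight = 0.0
    for share, n in polls:
        variance = share * (1 - share) / n  # binomial sampling variance
        weight = 1.0 / variance
        weighted_sum += weight * share
        total_weight += weight
    estimate = weighted_sum / total_weight
    std_error = math.sqrt(1.0 / total_weight)  # SE of the combined estimate
    return estimate, std_error

# Invented national polls: (two-party Obama share, sample size)
polls = [(0.52, 800), (0.50, 1200), (0.51, 600)]
estimate, se = aggregate_polls(polls)
print(f"Aggregate: {estimate:.3f} +/- {1.96 * se:.3f}")
```

Even this toy version shows why aggregation beats any single poll: the combined standard error shrinks as polls accumulate.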

As Silver’s work has increased in popularity, so has the scientific approach to studying politics, which is good news for political scientists who want to share their work with a broader audience. Much as the publication of Moneyball sent shivers down the spines of traditional baseball scouts who feared for their jobs, the traditional pundit class knows that the more quantitative approach is catching on and has therefore been highly defensive (witness the attacks by David Brooks and Peggy Noonan).

Silver draws a mixed reaction in an informal poll of political scientists. While we don’t fear for our jobs, there is no doubt some professional jealousy of Silver’s success. But there are also good reasons for healthy skepticism, even as we respect Silver’s work ethic and his flair for explaining statistics to the public.

A major concern is that Silver’s aggregating model is not publicly and completely presented, making it impossible to replicate any given projection, whether at the state or national level. Silver does provide some information about his methodology, but it would be unlikely to pass the replication review at the Quarterly Journal of Political Science, for example. On the other hand, his public forecasts can be rigorously evaluated and compared to other techniques, if only after the fact, for example in this paper.
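That after-the-fact evaluation can be sharp even when a model is a black box. Here is a minimal sketch of one standard approach, the Brier score (the mean squared error of probabilistic forecasts; lower is better), using invented state-level forecasts and outcomes that do not reflect any forecaster’s actual numbers.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and
    realized 0/1 outcomes; lower is better."""
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

# Invented forecasts from two models for the same set of states,
# and the realized outcomes (1 = Obama win).
model_a = [0.90, 0.75, 0.60, 0.30, 0.10]
model_b = [0.80, 0.70, 0.55, 0.45, 0.25]
outcomes = [1, 1, 1, 0, 0]

print("Model A:", brier_score(model_a, outcomes))
print("Model B:", brier_score(model_b, outcomes))
```

Proper scoring rules like this can rank competing models once outcomes are known, which is why published, dated forecasts matter even when methodology is withheld.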

This may seem like a quibble, but it has important implications. How can we, or the public, evaluate Noonan’s claim that Silver is biased against Romney? Or our own belief that he consistently overstated Romney’s chances of victory? We could begin by looking at other models (e.g. those of Sam Wang, Drew Linzer, and Simon Jackman) with publicly disclosed methodologies, but without Silver’s model it is difficult to know where the disagreements lie, and whose side to take. Instead, we are left to speculate about motivations: the media prefers close races, where the “horse race” style of coverage draws the most viewers. Given that Silver is paid by a major newspaper, these incentives should in theory apply to him as well. Of course, right before the election his incentive is to get the outcome right, and his model does seem to have converged with the others over time. However, as with public opinion polling, there are no benchmarks that can tell us which model is correctly forecasting the race in September and October.

This point is especially important given the controversies around political polling that erupted this election cycle. In the last few months, we have seen accusations from conservative commentators that the polls were wrong because they included too many Democrats. One website went so far as to “unskew” the polls to show a large Romney lead. By not making his methodology fully transparent, Silver opens the door to (false) insinuations that he is “adjusting” his predictions to fit his personal political views (see, for example, this). We should like political prognosticators because we respect their methods; we shouldn’t respect someone merely because we like their conclusions, a warning that applies to both the right and the left. Without transparency, poll aggregation sites become just one more forum for partisan bickering.
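To see mechanically what “unskewing” amounts to, one can post-stratify a poll’s topline to an assumed party-ID mix. The sketch below uses entirely invented numbers; its point is that the “corrected” result is driven by the analyst’s assumed electorate, which is exactly why undisclosed modeling choices invite suspicion.

```python
def reweight_topline(cells, target_shares):
    """Post-stratify a poll's topline to an assumed party-ID mix.

    cells: {party: (share_of_sample, candidate_support)}
    target_shares: {party: assumed share of the electorate}
    """
    return sum(target_shares[p] * support for p, (_, support) in cells.items())

# Invented poll: party-ID composition and Obama support by party.
poll = {"Dem": (0.38, 0.92), "Rep": (0.32, 0.07), "Ind": (0.30, 0.48)}

as_reported = sum(share * support for share, support in poll.values())
# "Unskewed" to an assumed electorate with more Republicans.
unskewed = reweight_topline(poll, {"Dem": 0.33, "Rep": 0.37, "Ind": 0.30})

print(f"As reported: {as_reported:.3f}")  # ~0.516
print(f"Unskewed:    {unskewed:.3f}")     # ~0.473
```

Swapping one assumed electorate for another moves the topline by several points with no new data at all, so the dispute is really over the assumption, not the poll.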

Overall, we are grateful to Silver for the attention he has drawn to rigorous political science. But the discipline might be best served by keeping three things in mind. First, predictions from any model have uncertainty attached to them. A model that gives Obama a 51% chance of winning California should not be credited with getting it “right” when Obama wins; his victory there was close to certain, so such a prediction is evidence that the model was miscalibrated, erring by being far too conservative (a sketch of this point follows below). Second, we should choose our forecasters based on both their track record and their methods, with the latter being especially important when projections cannot yet be assessed against outcomes. Third, and finally, we should create professional incentives around tenure and promotion for producing high-quality work of contemporary political relevance, and should not be content to outsource projection to any one source. Political scientists have the ability to raise the level of public discussion around opinion polls. As a profession, we should make sure that we put our best foot forward.
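To make the first point concrete: a well-calibrated forecaster’s stated probabilities should match realized frequencies. The small sketch below, with invented forecasts, bins predictions and compares each bin’s average stated probability with the observed win rate; a model that habitually says 55% about near-certainties looks “right” every time while being badly miscalibrated.

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes, width=0.2):
    """Group forecasts into probability bins and compare each bin's
    average stated probability with the realized win rate."""
    bins = defaultdict(list)
    n_bins = int(round(1 / width))
    for p, y in zip(forecasts, outcomes):
        b = min(int(p / width), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[b].append((p, y))
    for b in sorted(bins):
        pairs = bins[b]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        rate = sum(y for _, y in pairs) / len(pairs)
        print(f"stated ~{mean_p:.2f}  realized {rate:.2f}  (n={len(pairs)})")

# Invented forecasts: near-certain wins called at ~55% are
# vindicated by the outcome each time, yet miscalibrated.
forecasts = [0.55, 0.57, 0.54, 0.56, 0.58, 0.90, 0.15]
outcomes = [1, 1, 1, 1, 1, 1, 0]
calibration_table(forecasts, outcomes)
```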