A Second Look at National Pollster Accuracy

by John Sides on November 7, 2012

Here are two more takes on it. The first, courtesy of UNC Ph.D. student Brice Acree, takes my original plot and adds an underlying measure of uncertainty: essentially, the margin of error for the estimated margin of victory.
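For readers curious about the mechanics, here is a minimal sketch of one way to compute a margin of error for a poll's estimated margin of victory, assuming a simple random sample. The function name and the example numbers are mine, not Brice's.

```python
import math

def margin_of_error_of_margin(p_obama, p_romney, n, z=1.96):
    """Approximate 95% margin of error for the Obama-minus-Romney margin
    in a single poll, treated as a simple random sample.  Because the two
    shares come from the same respondents, Var(p_O - p_R) =
    [p_O + p_R - (p_O - p_R)^2] / n, the usual multinomial formula."""
    diff = p_obama - p_romney
    variance = (p_obama + p_romney - diff ** 2) / n
    return z * math.sqrt(variance)

# Hypothetical example: a 1,000-person poll with Obama at 49% and Romney at 48%
print(margin_of_error_of_margin(0.49, 0.48, 1000))  # about 0.061, i.e., +/- 6 points
```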

Second, spurred on by a friend who is also a pollster, I calculated a different quantity (called “A” in the piece by Martin, Traugott, and Kennedy) that also captures predictive accuracy, is more robust to how polls treat undecided voters, and allows me to calculate a confidence interval.  (UPDATE: This quantity was first calculated in the analysis by Costas Panagopoulos that I mentioned in my first post.  However, I mistakenly overlooked this fact.  Thus, what I present here is an unintentional replication of what he did.)  Here, I included only polls that left the field on November 3 or later, in an effort to exclude pollsters who surveyed earlier and may simply have missed late movement (as opposed to being inaccurate per se).  I averaged the two PPP polls during this period.
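For those who want to see what A looks like, the sketch below is my reading of the Martin, Traugott, and Kennedy measure: the natural log of the odds ratio of the Romney-to-Obama split in a poll versus the actual vote. The vote shares in the example are placeholders, not the final certified totals.

```python
import math

def predictive_accuracy_A(romney_poll, obama_poll, romney_vote, obama_vote):
    """Martin/Traugott/Kennedy-style accuracy measure: the natural log of the
    odds ratio of the Romney-to-Obama split in the poll versus the actual vote.
    A > 0 indicates a pro-Romney tilt, A < 0 a pro-Obama tilt, and A = 0 a poll
    that matched the two-candidate split exactly.  Undecided voters drop out
    because only the ratio of the two candidates' shares enters the formula."""
    return math.log((romney_poll / obama_poll) / (romney_vote / obama_vote))

# Hypothetical example: a poll at Romney 49 / Obama 48, against a national
# vote of roughly Obama 51 / Romney 47
print(predictive_accuracy_A(0.49, 0.48, 0.47, 0.51))  # about 0.10, a pro-Romney tilt
```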

Although A has no intuitive metric, the results are sensible.  Rasmussen and Gallup, for example, had a larger pro-Romney bias.  (“Bias” here is a statistical term of art and does not imply any partisan agenda on their part.)  PPP, YouGov, and Ipsos/Reuters had almost no bias.  Other polls, like ABC/Washington Post and NBC/WSJ, had very minimal bias.

But note that because we are looking mostly at individual polls without very large samples, the underlying uncertainty is large.  [UPDATE: Note that this is what Panagopoulos observed as well—“but none of the 28 national pre-election polls I examined had a significant partisan bias.”]
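To give a sense of how that uncertainty can be expressed, here is a rough sketch of a confidence interval for A, under the simplifying assumption that the official vote counts are so large they contribute essentially nothing to the variance. The numbers continue the hypothetical poll in the sketches above.

```python
import math

def A_confidence_interval(romney_poll, obama_poll, n, A, z=1.96):
    """Rough 95% confidence interval for A, using the standard log-odds-ratio
    standard error based on the number of respondents backing each candidate
    and treating the official vote totals as fixed."""
    se = math.sqrt(1.0 / (n * romney_poll) + 1.0 / (n * obama_poll))
    return A - z * se, A + z * se

# Hypothetical example: the 1,000-person poll from the sketch above
low, high = A_confidence_interval(0.49, 0.48, 1000, 0.1023)
print(low, high)         # roughly (-0.02, 0.23)
print(low <= 0 <= high)  # True: the interval covers zero, so no significant bias
```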

 

Comments

El Criador de Gorilas November 7, 2012 at 6:14 pm

So the three most accurate pollsters use the least expensive (and supposedly weaker) methods: web and IVR.

J November 7, 2012 at 6:36 pm

YouGov certainly isn’t a weak method, especially when Doug Rivers is your CEO and your last poll has an N of 36,000. It should also be said that these numbers are national numbers and not CD or state numbers, where IVR seems to have worse luck.

El Criador de Gorilas November 7, 2012 at 7:08 pm

The “supposedly” tried to convey that it is not a weak method…

Andrew Gelman November 7, 2012 at 6:58 pm

John:

I luv your graphs. Graphs graphs graphs. Graphs.

MarkS November 7, 2012 at 8:56 pm

Looking only at national polls does not yield much information. Drew Linzer’s analysis of state polls leads to a significantly more robust conclusion:

http://votamatic.org/another-look-at-survey-bias

John Sides November 7, 2012 at 11:03 pm

MarkS: Maybe, just maybe, Drew and I are doing a bit of coordination on this. Stay tuned.

Robert de Neufville November 7, 2012 at 10:10 pm

Great post. What I would really love to know is what the distribution of error was. Are these polls, taken collectively, off by about what we would expect as a result of random chance? Or does the distribution of error suggest some bias toward conservative voters was built in? What would it look like if we excluded, say, web polls? Is the bias actually worse?

David Marcus November 9, 2012 at 1:59 am

In the graphs comparing the forecast O-R margin to the actual, what value did you use for the actual? With millions of absentees still to be counted in heavily Democratic California, the true O-R margin may be as much as 1% larger than the election night margin. Already (as of 6 pm PDT 11/8) the reported national margin (per CBS, after accounting for 1.9 million third-party votes) is up to about 2.55% from an election night value of 2.4%.

John Sides November 9, 2012 at 8:05 am

It’s a provisional graph and I will update with new vote totals as they come in.

David Marcus November 9, 2012 at 12:20 pm

Great. Thanks.
