Robo-Polls: A Thumb on the Scales?

by John Sides on September 26, 2012 · 7 comments

in Campaigns and elections, Methodology, Public opinion

In the 2012 presidential primaries, a funny thing happened with the polls.  By and large, the errors of pre-election polls—that is, the difference between the polls and the primary election outcomes—did not depend much on whether the polls used live interviewers or recorded voices.  That is, robo-polls had no more error than polls with live interviewers.

However, the errors of the robo-polls were much lower when a live-interviewer poll had already been conducted in a particular state.  In other words, the robo-polls were more accurate when there was a previous live-interviewer poll that may have served as a benchmark.

That is the conclusion of a new paper by political scientists Joshua Clinton and Steve Rogers.  Their analysis controls for a number of other factors and still finds that a robo-poll had about 3-4 points less error when it was conducted after a live-interviewer poll.  Their conclusion, with appropriate caveats:

Pollsters know their results are being compared to the results of prior polls, and polls created for public consumption have incentives to ensure that their results are roughly consistent with the narrative being told in the press if they want to garner public attention. Pollsters also have further financial incentives to get it right which may make them leery of ignoring the information contained in other polls. The results we find are consistent with what we would expect if IVR polls took cues from the results of more established methodologies – IVR polls do as well as traditional human polls when both are present, but they do worse when there are no other polls to cue off of. However, the nature of the investigations means that our results are necessarily suggestive rather than definitive. Beyond the implications for interpreting IVR polls, the larger point here is that if polls take cues from one another, then the hundreds of polls being reported are not really as informative as the number of polls would imply.
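To make the comparison concrete, here is a rough sketch, on simulated data, of the kind of regression that would pick up this pattern: absolute poll error regressed on poll mode, on whether a live-interviewer poll had already been released in the state, and on their interaction. This is an illustration only; it is not Clinton and Rogers' dataset or their actual specification, which controls for additional factors.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration (NOT the Clinton/Rogers data): IVR ("robo") polls with
# no prior live-interviewer poll carry roughly 3-4 points more absolute error.
rng = np.random.default_rng(0)
n = 300
ivr = rng.integers(0, 2, n)          # 1 = IVR poll, 0 = live-interviewer poll
prior_live = rng.integers(0, 2, n)   # 1 = a live-interviewer poll already existed
abs_error = 4 + 3.5 * ivr * (1 - prior_live) + rng.gamma(2.0, 1.0, n)

df = pd.DataFrame({"abs_error": abs_error, "ivr": ivr, "prior_live": prior_live})
fit = smf.ols("abs_error ~ ivr * prior_live", data=df).fit()
print(fit.params)  # the negative ivr:prior_live term is the "benchmarking" pattern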

{ 7 comments }

Mark B. September 26, 2012 at 10:48 am

This suggests a lack of independence, which I imagine should change how people aggregate polls?
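One way to see why that matters: if poll errors are positively correlated because pollsters cue off one another, averaging n polls buys much less precision than n independent polls would. A back-of-the-envelope sketch, assuming for simplicity that every pair of polls shares the same error correlation rho:

def effective_n(n_polls, rho):
    # With equal pairwise correlation rho among poll errors,
    # Var(mean) = sigma^2 / n * (1 + (n - 1) * rho),
    # so n correlated polls are only "worth" this many independent ones.
    return n_polls / (1 + (n_polls - 1) * rho)

print(effective_n(20, 0.0))  # 20 independent polls are worth 20
print(effective_n(20, 0.3))  # modest herding: 20 polls are worth about 3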

matoko_chan September 26, 2012 at 12:29 pm

In organic systems, this behavior is called flocking or schooling.

Craigo September 26, 2012 at 12:57 pm

“12 out of 17 of these IVR polls were conducted by a single firm.”

Every single pollster in America is saying “Gee, I wonder who that could be…Scott, do you have any idea?”

James Kelley September 27, 2012 at 9:17 am

Hi John,

This is my first post on the Monkey Cage; hopefully there'll be more to come. Before entering grad school two years ago I worked for a DC-based political polling firm. It seems to me that if I were running a robo-poll and were concerned about being consistent with existing CATI surveys, I would weight my demographics (race, party ID, etc.) according to the results of the previous polls. My hypothesis is that this is what's going on here. A test might be to compare the demographics from robo-polls conducted before CATI surveys with those conducted after. I imagine we would see much greater consistency, with robo-polls weighting accordingly when they have a baseline to weight to.

I actually don't think this is a terrible thing, if we believe that the sample of those willing to answer a CATI (live-interviewer) survey is more representative of the electorate than those willing to respond to a robo-poll. Obviously there's a tremendous amount of selection bias no matter which method you are using. This also might be more meaningful for the clients of these robo-polls, as they are probably more interested in movement among the existing sample than in a new read from an altered sample. If I were a client, I would expect my firm to weight according to a representative sample.
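A minimal sketch of the re-weighting James describes, with toy numbers and a hypothetical party-ID benchmark rather than any firm's actual targets or procedure:

import pandas as pd

# Toy robo-poll sample: respondents' party ID and whether they back Obama.
robo = pd.DataFrame({
    "party_id":   ["Dem", "Dem", "Rep", "Rep", "Rep", "Ind"],
    "vote_obama": [1,     1,     0,     0,     0,     1],
})

# Hypothetical party-ID shares taken from an earlier live-interviewer poll.
target = {"Dem": 0.38, "Rep": 0.34, "Ind": 0.28}

sample_share = robo["party_id"].value_counts(normalize=True)
robo["weight"] = robo["party_id"].map(lambda g: target[g] / sample_share[g])

raw = robo["vote_obama"].mean()
weighted = (robo["weight"] * robo["vote_obama"]).sum() / robo["weight"].sum()
print(f"raw: {raw:.2f}  weighted to benchmark: {weighted:.2f}")

A robo-poll weighted to a live-interviewer baseline in this way would mechanically move toward the earlier poll's topline, which is one way the "cueing" pattern could arise without anyone literally copying numbers.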

John Sides September 27, 2012 at 9:41 am

Thanks, James. I’ve passed this along to Clinton and Rogers.

jimmy the one October 11, 2012 at 4:38 pm

It's been pretty transparent that Rasmussen has been 'going to school' off of Gallup the last month or two, keeping within a 'parent poll' margin of error so as to garner credibility. They did this well, cleverly, largely because Gallup was down-polling Obama due to Gallup's (then) demographics.
Remember much earlier this year when a Bloomberg poll gave Obama a wide lead of roughly 8-10 points, and Rasmussen countered by giving Romney a similarly big lead so as to offset it; Rasmussen's duplicity is easy to spot for some of us. Flash: Scott Rasmussen wants Romney to win the election, really and truly. The rationale is older than the hills: get fence-sitters to vote for Romney so as to be on the 'winning side according to the poll' (even if it's a manipulated poll). That's fine for those with few political convictions who just think they're going to get 'more money' if they vote for the winner.

There are a couple of other pollsters I hadn't encountered before, and I've been following polls for 20 years now (pardon the spelling): Gravis, We Ask America. As well, there are a couple that lean right, such as IBD and the Battleground polls (well, they used to; was it 'Portrait of America'?), and I've read that Gravis is right-wing biased, but I don't know, and of course Rasmussen.
By the way, I believe RealClearPolitics is 51% owned by the rightwingster Forbes, which is likely why Rasmussen is even included in RCP's averaging. A truly unbiased poll of polls would not include Rasmussen, especially not a daily tracking poll. The other poll of polls is Polling Report, which does not include Rasmussen.
Which is why I often refer to RCP as 'real queer politics'.

Sean October 19, 2012 at 1:07 am

John,

I read through that entire study, and its conclusions are…not robust. At all. The study is actually complete garbage.

Let’s start with the conclusion and work backwards. The authors conclude that robo-pollsters tend to benchmark their poll results to other non-robo polls in the field. To come to that conclusion, they need to have a control — namely, robo-polls that were conducted without non-robos in the field.

Now, they did that. But how big was their sample? 17. Their entire thesis is based off of 17 data points. Would you trust a poll of only 17 people? Me neither.

But it gets worse. Of those 17 data points, 12 of them were from a single pollster. 12! Over 70 percent of the sample that is the basis of their entire paper came from one firm — PPP.

It is mathematical and statistical malpractice to boldly assert, as they did, that robo-pollsters cook their data based on a sample of 17 data points, of which 12 came from one firm.

If you tried to publish a paper in any economics or finance journal with a conclusion based on 17 pieces of data, you’d get laughed out of the room. It would be nice to see the same treatment for the type of pseudo-science represented by this paper.
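One way to put a number on the clustering concern: treat firms, not individual polls, as the independent units and bootstrap at the firm level. With made-up errors and hypothetical firm labels, the interval around the mean error comes out very wide because there are only a handful of independent clusters:

import numpy as np

rng = np.random.default_rng(1)

# Made-up data: 17 robo-poll errors with no prior live-interviewer poll,
# 12 of them from a single hypothetical firm "A".
firms = np.array(["A"] * 12 + ["B", "C", "D", "E", "F"])
errors = rng.normal(6.0, 2.0, size=17)

def cluster_bootstrap_ci(errors, firms, n_boot=2000):
    # Resample whole firms with replacement, since polls from one firm
    # are not independent observations.
    labels = np.unique(firms)
    means = []
    for _ in range(n_boot):
        drawn = rng.choice(labels, size=len(labels), replace=True)
        sample = np.concatenate([errors[firms == f] for f in drawn])
        means.append(sample.mean())
    return np.percentile(means, [2.5, 97.5])

print(cluster_bootstrap_ci(errors, firms))  # 95% interval from only 6 clusters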

