In the 2012 presidential primaries, a funny thing happened with the polls. By and large, the errors of pre-election polls (that is, the differences between the polls and the primary election outcomes) did not depend much on whether the polls used live interviewers or recorded voices: robo-polls had no more error than polls with live interviewers.
However, the errors of the robo-polls were much lower when a live-interviewer poll had already been conducted in a particular state. In other words, the robo-polls were more accurate when there was a previous live-interviewer poll that may have served as a benchmark.
That is the conclusion of a new paper by political scientists Joshua Clinton and Steve Rogers. Their analysis controls for a number of other factors and still finds that a robo-poll had about 3-4 points less error when it was conducted after a live-interviewer poll. Their conclusion, with appropriate caveats:
Pollsters know their results are being compared to the results of prior polls, and polls created for public consumption have incentives to ensure that their results are roughly consistent with the narrative being told in the press if they want to garner public attention. Pollsters also have further financial incentives to get it right which may make them leery of ignoring the information contained in other polls. The results we find are consistent with what we would expect if IVR polls took cues from the results of more established methodologies – IVR polls do as well as traditional human polls when both are present, but they do worse when there are no other polls to cue off of. However, the nature of the investigations means that our results are necessarily suggestive rather than definitive. Beyond the implications for interpreting IVR polls, the larger point here is that if polls take cues from one another, then the hundreds of polls being reported are not really as informative as the number of polls would imply.
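Both halves of that argument can be illustrated with a back-of-the-envelope simulation. The numbers below are made up for illustration, not drawn from the paper: a 52 percent true vote share, 600-person samples, an assumed extra 3-point "house effect" for robo-polls, and herding modeled crudely as a robo-poll splitting the difference with an earlier live-interviewer benchmark. Under those assumptions, a cued robo-poll beats a stand-alone one, but twenty robo-polls that all lean on the same benchmark average out to a worse estimate than twenty independent ones, because their errors are correlated.

```python
import random
import statistics

# Toy simulation of poll herding. All parameters are illustrative
# assumptions, not estimates from Clinton and Rogers' paper.
random.seed(0)

TRUE_SUPPORT = 0.52    # assumed true vote share
N_RESPONDENTS = 600    # assumed sample size per poll
ROBO_EXTRA_SD = 0.03   # assumed extra house-effect noise for robo-polls
TRIALS = 20_000

# Normal approximation to binomial sampling error for a poll of this size.
SAMPLING_SD = (TRUE_SUPPORT * (1 - TRUE_SUPPORT) / N_RESPONDENTS) ** 0.5

def poll(extra_sd=0.0):
    """One poll estimate: sampling error plus optional extra house-effect noise."""
    total_sd = (SAMPLING_SD ** 2 + extra_sd ** 2) ** 0.5
    return TRUE_SUPPORT + random.gauss(0.0, total_sd)

solo_err, cued_err, indep_avg_err, herd_avg_err = [], [], [], []
for _ in range(TRIALS):
    live = poll()                    # live-interviewer benchmark poll
    robo = poll(ROBO_EXTRA_SD)       # noisier stand-alone robo-poll
    cued = 0.5 * robo + 0.5 * live   # robo-poll that herds toward the benchmark
    solo_err.append(abs(robo - TRUE_SUPPORT))
    cued_err.append(abs(cued - TRUE_SUPPORT))

    # Many polls: 20 independent robo-polls vs. 20 that all cue off one benchmark.
    robos = [poll(ROBO_EXTRA_SD) for _ in range(20)]
    herded = [0.5 * r + 0.5 * live for r in robos]
    indep_avg_err.append(abs(statistics.mean(robos) - TRUE_SUPPORT))
    herd_avg_err.append(abs(statistics.mean(herded) - TRUE_SUPPORT))

print(f"robo-poll alone:          {statistics.mean(solo_err):.4f} mean abs error")
print(f"robo-poll with benchmark: {statistics.mean(cued_err):.4f}")
print(f"avg of 20 independent:    {statistics.mean(indep_avg_err):.4f}")
print(f"avg of 20 herded:         {statistics.mean(herd_avg_err):.4f}")
```

The individual cued poll looks better, which matches the 3-4 point pattern in spirit, but the herded polls share the benchmark's error, so averaging more of them stops helping: exactly the sense in which hundreds of polls are "not really as informative as the number of polls would imply."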