
Evolution, Pundits and Pollsters

- November 13, 2012

“David Lazer”:http://blogs.iq.harvard.edu/netgov/2012/11/the_extraordinary_delusions_of.html has an interesting piece on why the pollsters did so well this year, while so many pundits did so badly.

bq. What is important is how well pollsters did in the face of increased obstacles to doing a good job: response rates to surveys have plummeted, and increasing numbers of individuals rely exclusively on (hard to reach) mobile phones. Despite these challenges, in aggregate surveys are more accurate than ever, almost spot on in 2012. How is this possible? This is worth far more reflection than a blog entry can offer, because not all communities face challenges like these so effectively. … Here I will simply speculate that it reflects three things. The first is that there is real world feedback as to the effectiveness of methods to address these challenges. … Third, there is a collective process of sifting through best practices. While there is certainly some desire to keep the secrets to success private, in fact there is a certain necessary degree of transparency in methods; and this is a small world of professional friendships where knowledge is semi-permeable, allowing a certain degree of local innovation providing short run advantage, while allowing good practices to disseminate. That is, there may be (as I have written about elsewhere) a good balance between exploration (development of new solutions) and exploitation (taking advantage of what is known to work) in this system.

bq. …The system of pollsters might be contrasted with that of pundits. Do you expect a Darwinian culling of the right leaning pundits who missed the outcome? The answer is surely not. Nor will there be an adjustment of practices on the part of “pundits who largely served up a mix of anecdotal pablum to their readers”:http://blogs.wsj.com/peggynoonan/2012/11/05/monday-morning/. … And how did the right get it so wrong? How could the Romney campaign of successful political professionals, in part embedded in the same epistemic community as the broader set of pollsters, not have seen an Obama victory as a plausible (put aside likely) outcome? This was not a near miss on their part. Consider: at last count, you could have subtracted 4.7 points (!) from Obama’s margin in every state and he would still have won … . Romney’s campaign, and many commentators on the right, were living in a parallel world, one with fewer minority and young voters than in ours. Again, I don’t know the answer to this question. Likely key ingredients: an authentic ambiguity in how to handle the aforementioned challenges; a strong desire to see a Romney victory; an informational ecosystem today that provides the opportunity for producing plausible sounding arguments to rationalize any wishful thoughts one might have; and the relevant subcommunity was small, centralized, and deferential enough so that a few opinion leaders could trigger a bandwagon.
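Lazer’s 4.7-point claim is a uniform-swing counterfactual, and it is easy to check mechanically. Here is a minimal sketch; the margins below are my rounded approximations of Obama’s 2012 leads in his closest states, so treat the exact numbers as illustrative rather than certified results.

bc.. # Uniform-swing check of the "subtract 4.7 points from every state" claim.
# Margins are rounded approximations of Obama's 2012 leads, for illustration.
OBAMA_EV = 332   # Obama's actual electoral votes
NEEDED = 270     # majority of the Electoral College

close_states = {  # state: (electoral votes, approx. Obama margin in points)
    "FL": (29, 0.9), "OH": (18, 3.0), "VA": (13, 3.9),
    "CO": (9, 5.4), "PA": (20, 5.4), "NH": (4, 5.6),
    "IA": (6, 5.8), "NV": (6, 6.7), "WI": (10, 6.9),
}

def ev_after_swing(swing):
    """Electoral votes Obama keeps if `swing` points shift uniformly to Romney."""
    lost = sum(ev for ev, margin in close_states.values() if margin < swing)
    return OBAMA_EV - lost

print(ev_after_swing(4.7))            # 272 on these numbers
print(ev_after_swing(4.7) >= NEEDED)  # True: still a win

p. On these rounded numbers a 4.7-point uniform shift costs Obama Florida, Ohio and Virginia – 60 electoral votes – and still leaves him at 272, which is what makes Lazer’s “not a near miss” point so striking.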

As David is suggesting, this is a specific case of a more general problem – how does one build forms of “collective cognition”:http://masi.cscs.lsa.umich.edu/~crshalizi/notabene/collective-cognition.html that generate useful information rather than garbage? The only thing that I would add is that it might be useful to think slightly more explicitly about the incentives that these different communities face. As he notes, there is likely a fair degree of intellectual exchange happening among professional pollsters, producing something that roughly approximates the kinds of exploration-of-different-alternatives-by-actors-communicating-with-each-other that he has studied through simulations, and that “Mason and Watts have studied”:http://www.pnas.org/content/109/3/764 through experiments (a toy version of that exploration/exploitation dynamic is sketched below). There _may_ be some tendencies towards isomorphism, but they look to be relatively mild.

In contrast, professional pundits are in the business of entertaining and producing counter-intuitive claims, rather than of being right. As @jimcramer rather revealingly describes the perceived incentives he faces, “No one will recall who picks Obama by 10 electorals if it turns out to be 150 margin. Believe me.” Such pundits are indifferent between wild guesses that are wrong and safe guesses that are right – neither is likely to be remembered. Hence, they have strong incentives to make wild guesses rather than sober ones – there’s no downside to being wrong, and much upside to being right.

Finally, the problems for pollsters in a campaign don’t only have to do with wishful thinking and the bandwagoning power of a few leaders. They also likely have to do with commenters’ desire _not_ to be seen as deviating from the collective consensus among their ideological community. Their problem is precisely the opposite of professional pundits’ – deviants and iconoclasts from the prevailing wisdom are likely to be cast out if they are wrong, whereas both those who are wrong and those who are right are likely to continue to be employed (and to have reasonable employment chances in other campaigns) as long as they do not stray from the herd.
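To make that exploration/exploitation mechanism concrete, here is a minimal epsilon-greedy sketch – my own illustration, not anything from Lazer’s or Mason and Watts’s papers. A community of pollsters mostly exploits the survey method it currently believes is best, occasionally explores an alternative, and updates its beliefs from real-world feedback; every accuracy and parameter below is invented.

bc.. # Minimal epsilon-greedy sketch of the exploration/exploitation balance.
# Illustrative only: the three "methods" and their accuracies are invented.
import random

random.seed(1)
TRUE_ACCURACY = [0.60, 0.75, 0.90]  # unknown quality of three rival survey methods
EPSILON = 0.1                       # share of effort spent exploring

estimates = [0.0, 0.0, 0.0]         # community's running estimate of each method
counts = [0, 0, 0]
for _ in range(2000):
    if random.random() < EPSILON:   # explore: try a method at random
        m = random.randrange(3)
    else:                           # exploit: use the best method found so far
        m = max(range(3), key=lambda i: estimates[i])
    # Real-world feedback: did the method call the election correctly?
    feedback = 1.0 if random.random() < TRUE_ACCURACY[m] else 0.0
    counts[m] += 1
    estimates[m] += (feedback - estimates[m]) / counts[m]  # incremental mean

print([round(e, 2) for e in estimates], counts)  # effort should pile onto method 2

p. With even a small exploration rate, effort concentrates on the genuinely best method; set EPSILON to zero and the community can lock in on an inferior one, which is one way to read the contrast with the pundits.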

In short, professional pollsters have (most of the time) good incentives to be right. Professional pundits have good incentives to guess wildly, regardless of whether they are wrong or right. Political hacks have good incentives to guess safely, regardless of whether they are wrong or right. And that, arguably, is why we are where we are.
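This summary is really a claim about payoff structures, and a back-of-the-envelope expected-value calculation makes the asymmetry explicit. The sketch below is purely illustrative – every probability and payoff in it is invented rather than estimated from anything.

bc.. # Toy expected-payoff model of the incentive structures sketched above.
# Every probability and payoff here is invented, purely for illustration.

def expected(p_correct, payoff_right, payoff_wrong):
    """Expected payoff of a call that is correct with probability p_correct."""
    return p_correct * payoff_right + (1 - p_correct) * payoff_wrong

# Pundit: only a bold call that comes true is remembered; all else fades.
pundit_bold = expected(0.1, payoff_right=100, payoff_wrong=0)      # 10.0
pundit_safe = expected(0.8, payoff_right=0, payoff_wrong=0)        #  0.0

# Campaign hack: staying with the herd keeps you employed win or lose;
# a deviant is cast out if wrong, and only modestly rewarded if right.
hack_herd = expected(0.5, payoff_right=10, payoff_wrong=10)        # 10.0
hack_deviant = expected(0.5, payoff_right=20, payoff_wrong=-100)   # -40.0

print(pundit_bold > pundit_safe)  # True: the wild guess dominates for pundits
print(hack_herd > hack_deviant)   # True: herding dominates for hacks

p. Any payoffs with this shape – remembered only if bold and right, employed only if herd-bound or lucky – reproduce the same ranking, whatever the particular numbers.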

Update: Also “this, from Cosma Shalizi”:http://masi.cscs.lsa.umich.edu/~crshalizi/bulletin/logic-of-diversity.html way back in 2005:

bq. When political scientists, say, come up with dozens of different models for predicting elections, each backed up by their own data set, the thing to do might not be to try to find the One Right Model, but instead to find good ways to combine these partial, overlapping models. The collective picture could turn out to be highly accurate, even if the component models are bad, and their combination is too complicated for individual social scientists to grasp.
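Shalizi’s conjecture has a familiar statistical core: averaging forecasts whose errors are independent shrinks the ensemble error roughly with the square root of the number of models. A minimal simulation with synthetic numbers (no real election models involved):

bc.. # Why combining mediocre models can beat any single one: independent errors
# partly cancel in the average. All numbers here are synthetic.
import random

random.seed(0)
TRUE_MARGIN = 3.9   # the quantity every model is trying to predict (invented)
N_MODELS = 25

# Each synthetic "model" is the truth plus its own idiosyncratic error.
forecasts = [TRUE_MARGIN + random.gauss(0, 2.0) for _ in range(N_MODELS)]

mean_individual_error = sum(abs(f - TRUE_MARGIN) for f in forecasts) / N_MODELS
ensemble = sum(forecasts) / N_MODELS

print(f"mean individual error: {mean_individual_error:.2f}")
print(f"ensemble error:        {abs(ensemble - TRUE_MARGIN):.2f}")

p. The cancellation only works while the component errors are mostly independent; the herding and bandwagon dynamics described above are exactly the conditions under which it fails, because shared biases survive the averaging.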