
4 Steps for Improving News Coverage of Election Polls

- November 8, 2012

I am speaking on a panel this morning at the National Press Club.  The subject is the election polls.  Given the setting, I am going to focus my remarks on how we can improve news coverage of the election polls.

1) Reconsider methodological standards. Some news outlets — e.g., the New York Times — will not report on certain kinds of polls, including internet polls.  Real Clear Politics will not include these polls in its averages.  In the context of election polls, however, there may be less and less reason to make such distinctions.  In 2012, an internet pollster, YouGov, a “robo-pollster,” Public Policy Polling, and a “traditional” pollster, ABC/Washington Post, which calls both cell phones and landlines, had final national polls that were among the most accurate.  Gallup, which also calls both landlines and cell phones, was among the least accurate, as was Rasmussen, which combines robo-polls and an internet sample.  I am not suggesting that we evaluate polls solely in terms of whether they accurately predict elections (in part, for this reason).  I am simply suggesting that, in the course of reporting pre-election polls specifically, it may make less sense to make hard-and-fast decisions based on methodology.  (Moreover, my suggestion #4 below will guard against the idiosyncrasies of any individual pollster’s methodology.)

2) No more “some polls say.”  I saw this time and time again in 2012 — e.g., here (“With some polls suggesting that Mr. Romney is closing the gap”).  It was particularly common in the “Romney momentum” stories which, as Brendan Nyhan and I pointed out, had no real basis in fact.  Saying “some polls say” is the equivalent of the old journalistic saw “some people say.”  Reporters can do better.  Which polls indicate tightening?  Which do not?  Is any “tightening” distinguishable from movement based on sampling error?  Be specific.
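The last question — whether apparent “tightening” exceeds sampling error — is easy to check. Here is a minimal sketch, with hypothetical poll numbers and sample sizes, of the standard margin of error for the difference between two independent polls:

```python
import math

def diff_moe(p1: float, n1: int, p2: float, n2: int, z: float = 1.96) -> float:
    """95% margin of error for the difference between two proportions
    estimated from independent polls of sizes n1 and n2."""
    return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical example: a candidate at 49% in one 1,000-person poll,
# then 47% in a later 1,000-person poll. Is the 2-point drop real?
change = 0.49 - 0.47
moe = diff_moe(0.49, 1000, 0.47, 1000)
print(round(moe, 3))  # ~0.044: the margin of error dwarfs the 2-point change
```

By this back-of-the-envelope test, a 2-point shift between two typical national polls is well within sampling error, which is exactly why reporters should say which polls moved and by how much.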

3) If you must report on campaign internals, assume spin. The day before Election Day, Joseph Weisenthal of Business Insider linked to this Daily Mail story (“Exclusive: Romney UP one point in Ohio and TIED in Pennsylvania and Wisconsin, according to his campaign’s internal polling”) and tweeted “What is the explanation for internal polls always being more optimistic than public polls?” Matt Glassman quickly responded: “selection bias in releasing polls.”  Which means, whatever the merits of campaign polling methodology, you must assume that campaigns only “release” (that is, discuss) polls that favor their candidate.  Period.  Period period.  Here is proof.

4) Poll averages, poll averages, poll averages.  I hope this HuffPo frontpage lives in infamy:

As I said at the time, this constant attention to individual polls is hurting America.  It gives the misleading impression that a candidate is surging.  It invites amateur analysis of every poll’s innards.  It elevates the importance of outliers.  It creates the sense that “polls are all over the place” even when that’s just due to sampling error.
An antidote to this is to rely more on poll averages in election reporting.  Reporters then won’t go chasing after stories that aren’t really stories.  And, most important, they’ll be more likely to get the story right.  Which is what most reporters want to do.  Polling averages aren’t necessarily “the truth” — since we can never know the precise fraction of Americans who support the Democratic or Republican candidate in the months leading up to an election.  But they are more likely to be correct about that fraction than is any individual poll.  They can take into account the idiosyncrasies of individual pollsters (“house effects”).  They have less uncertainty associated with them.  In the end, they are often remarkably accurate.
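The “less uncertainty” point can be made concrete. Here is a minimal sketch, with made-up poll numbers, of an unweighted average (real aggregators also weight by recency, sample size, and house effects):

```python
import math

def poll_average(polls):
    """Unweighted average of poll results, plus the standard error of the
    pooled estimate. polls: list of (candidate_share, sample_size)."""
    avg = sum(share for share, _ in polls) / len(polls)
    total_n = sum(n for _, n in polls)
    se = math.sqrt(avg * (1 - avg) / total_n)
    return avg, se

# Hypothetical final-week polls for one candidate:
polls = [(0.50, 1000), (0.48, 800), (0.49, 1200), (0.51, 900)]
avg, se = poll_average(polls)
print(round(avg, 3), round(se, 4))  # 0.495 0.008
# A single 1,000-person poll at 49.5% has a standard error of about 0.016;
# pooling ~3,900 interviews roughly halves it.
```

The individual polls here “disagree” by three points, yet the average pins the estimate down far more tightly than any one of them — which is the whole argument for leading with the average rather than the latest outlier.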
I realize that these suggestions would drain some of the drama from campaign reporting.  So be it.