I am speaking on a panel this morning at the National Press Club. The subject is the election polls. Given the setting, I am going to focus my remarks on how we can improve news coverage of the election polls.
1) Reconsider methodological standards. Some news outlets—e.g., the New York Times—will not report on certain kinds of polls, including internet polls. Real Clear Politics will not include these polls in its averages. In the context of election polls, however, there may be less and less reason to make such distinctions. In 2012, an internet pollster (YouGov), a “robo-pollster” (Public Policy Polling), and a “traditional” pollster that calls both cell phones and landlines (ABC/Washington Post) had final national polls that were among the most accurate. Gallup, which also calls both landlines and cell phones, was among the least accurate, as was Rasmussen, which combines robo-polls and an internet sample. I am not suggesting that we evaluate polls solely in terms of whether they accurately predict elections (in part, for this reason). I am simply suggesting that, in the course of reporting pre-election polls specifically, it may make less sense to make hard-and-fast decisions based on methodology. (Moreover, my suggestion #4 below will guard against the idiosyncrasies of any individual pollster’s methodology.)
2) No more “some polls say.” I saw this time and time again in 2012—e.g., here (“With some polls suggesting that Mr. Romney is closing the gap”). It was particularly common in the “Romney momentum” stories which, as Brendan Nyhan and I pointed out, had no real basis in fact. Saying “some polls say” is the equivalent of the old journalistic saw “some people say.” Reporters can do better. Which polls indicate tightening? Which do not? Is any “tightening” distinguishable from movement based on sampling error? Be specific.
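That last question—whether “tightening” is distinguishable from sampling error—is a simple arithmetic check that reporters could do themselves. Here is a minimal sketch using the standard normal-approximation formulas, with hypothetical poll numbers:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

def change_is_significant(p1, n1, p2, n2, z=1.96):
    """True if the shift between two independent polls exceeds the
    combined sampling error of the difference in proportions."""
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p2 - p1) > z * se_diff

# Hypothetical polls: a candidate at 49% (n=1,000), then 47% (n=900).
print(round(margin_of_error(0.49, 1000), 3))          # 0.031 (about +/- 3 points)
print(change_is_significant(0.49, 1000, 0.47, 900))   # False
```

On these (made-up) numbers, a two-point “drop” is well within the sampling error of the difference—exactly the kind of movement that should not be reported as momentum.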
3) If you must report on campaign internals, assume spin. The day before Election Day, Joseph Weisenthal of Business Insider linked to this Daily Mail story (“Exclusive: Romney UP one point in Ohio and TIED in Pennsylvania and Wisconsin, according to his campaign’s internal polling”) and tweeted “What is the explanation for internal polls always being more optimistic than public polls?” Matt Glassman quickly responded: “selection bias in releasing polls.” Which means, whatever the merits of campaign polling methodology, you must assume that campaigns only “release” (that is, discuss) polls that favor their candidate. Period. Period period. Here is proof.
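Glassman’s point about selection bias can be illustrated with a toy simulation (all numbers here are hypothetical): even a campaign whose candidate is genuinely behind will look competitive if it only releases internals that favor its candidate.

```python
import random

def simulate_internals(true_margin=-0.02, noise_sd=0.03, n_polls=50, seed=1):
    """Simulate a campaign that polls repeatedly but only 'releases'
    results where its candidate is tied or ahead (margin >= 0)."""
    rng = random.Random(seed)
    all_polls = [true_margin + rng.gauss(0, noise_sd) for _ in range(n_polls)]
    released = [m for m in all_polls if m >= 0]
    return all_polls, released

all_polls, released = simulate_internals()
avg_all = sum(all_polls) / len(all_polls)            # close to the true -2 points
avg_released = sum(released) / len(released)         # rosier, by construction
```

The released polls are accurate surveys in this simulation; the bias comes entirely from which ones see daylight.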
4) Poll averages, poll averages, poll averages. I hope this HuffPo frontpage lives in infamy:
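The virtue of an average is easy to demonstrate: pooling polls shrinks the noise of any single survey and blunts any one pollster’s house effect. A minimal sketch of a sample-size-weighted poll average, with hypothetical numbers:

```python
def poll_average(polls):
    """Sample-size-weighted average of a candidate's share across polls.
    Each poll is a (share, sample_size) pair."""
    total_n = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_n

# Hypothetical final-week polls for one candidate:
polls = [(0.50, 1200), (0.48, 900), (0.49, 1000)]
print(round(poll_average(polls), 3))  # 0.491
```

Real aggregators use more elaborate weights (recency, pollster track record), but even this simple pooling beats cherry-picking the latest outlier.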