
Should We Get Rid of Polls?

- July 14, 2009

In graduate school, one of my advisors gave a presentation that included analysis of a survey. During the question-and-answer session, a person asked something along the lines of “But are surveys really well-suited for getting at this question and aren’t there problems with surveys anyway etc.” My advisor paused and sighed and then said, “Let’s see, which answer do I want to give to this question today?”

That’s how I feel reading Conor Clarke’s proposal to “get rid of polls” in _The Atlantic Monthly_. I want to believe his proposal has a certain Swiftian spirit, but he really seems sincere.

bq. Polls are as integral to the American political tradition as sex scandals or earmarks. Yet it’s not clear that they serve any beneficial purpose. When a new one is published in The Times or The Post, I — along with everyone else — read it. But it seems to me that polls are qualitatively different from the rest of the content that fills the papers. News organizations are supposed to provide information that holds government accountable and helps the citizenry make informed decisions on Election Day. Polls turn that mission on its head: they inform people and government of what the people already think. It’s time to do away with them.

But how is “inform[ing]…the government of what the people already think” not holding government accountable? If there is to be any democratic accountability, we need to know what the people think. Otherwise, what should we rely on? What politicians tell us “the American people” think?

Clarke then argues that polls have three problems:

bq. First, constant polling uncomfortably expands the domain of democracy. There are, of course, lots of ways in which the U.S. might be able to use a little more democracy. (Think the Senate.) But the value of the referendum has its limits. (Think California.) Writers have been whining about the “tyranny of the majority” since Tocqueville for a reason: getting the input of the citizenry at regularly appointed intervals has real benefits–among them stability and reliability and the chance for a politician or policy to succeed or fail within reasonable time constraints. Poll-testing every decision, on the other hand, disturbs the balance between democratic legitimacy and democratic effectiveness.

But polls aren’t referenda. Answering a pollster’s questions is a far cry from voting on policy. The California analogy is flawed. Moreover, polls can measure lots of things — broader values, for example — that have nothing to do with specific policy proposals.

And even if polls are about policy, how does that inevitably compromise “democratic effectiveness”? And what is that, anyway? Leaders are so cautious about public opinion that they aren’t making effective policy, or making any policy at all? I don’t see it. In fact, some political science research suggests exactly the opposite: politicians use polling data not to act as beleaguered delegates of the people, buffeted to and fro by the winds of the capricious public, but instead to find out how best to sell their ideas. In other words, they behave like Frank Luntz. A book to consult is _Politicians Don’t Pander_, by Lawrence Jacobs and Robert Shapiro. Or see Michael Graetz and Ian Shapiro’s account of the campaign to repeal the estate tax. Of course, Luntzian wordsmithery is hardly good news for democratic accountability. But it suggests that polls cut different ways, depending on how they are used.

Finally, Clarke endorses elections, but elections are not necessarily the best way to ensure accountability. They are blunt instruments at best. They occur only episodically, even as government policy is being made and re-made continually. And they rarely provide true accountability for policy. Candidates tend to take vague policy stands. Voters tend to vote based on other considerations — party, most importantly — that have less to do with policy. Even the effects of the economy and other “fundamentals” on election outcomes do not imply a good accountability mechanism: leaders have only a modest ability to affect these things, and so holding them accountable for a recession holds them to too high a standard.

bq. Second, many polls are wrong. Which isn’t to say that the opinions the American people express in polls are factually incorrect, though that’s sometimes true. What I mean is that polls are a terrible indicator of the citizenry’s actual preferences. Part of the problem is that many people have a tendency to say one thing (“stated preference”) and then do another (“revealed preference”). Another part of the problem is that the public is sometimes simply confused. My favorite recent example of such confusion pertains to cap and trade. According to one poll, three-quarters of Americans think the U.S. should regulate greenhouse-gas emissions, with a slight majority saying they would support a cap-and-trade program of the type now being considered in the Senate. But as another poll makes clear, most Americans don’t even know what cap and trade is: slightly fewer than one-quarter of respondents could even identify it as having something to do with the environment.

There are two points here. One is that people tell pollsters one thing, but then do another. Sure: some people do, sometimes. Some say they go to church, and don’t. Some say they voted, and didn’t. All that tells us is to be cautious in interpreting polls. As Howard Schuman has written, no poll is going to provide you with some sacrosanct estimate of the number of people who believe X or do Y:

bq. The tendency to take too literally single-variable distributions of responses (the “marginals”) is essentially the same as believing that answers come entirely from respondents, forgetting that they are also shaped by the questions we ask.

So what do we do? We triangulate using different polls, perhaps taken at different points in time or with different question wordings. We supplement polls with other data — such as voter files or aggregate turnout statistics. Polls can tell us some things that other data cannot, and vice versa.
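To make the triangulation point concrete, here is a minimal sketch (in Python, with hypothetical poll numbers) of its simplest form: pooling several polls that ask the same question, weighting each by its sample size. Real-world aggregation also adjusts for question wording and pollster “house effects,” which is exactly why comparing differently worded polls is informative.

```python
import math

# Hypothetical polls asking the same question,
# recorded as (share answering "support", sample size).
polls = [(0.52, 1001), (0.48, 756), (0.55, 1203)]

# Pool the estimates, weighting each poll by its sample size.
total_n = sum(n for _, n in polls)
pooled = sum(share * n for share, n in polls) / total_n

# Rough 95% margin of error for the pooled share,
# assuming simple random sampling within each poll.
moe = 1.96 * math.sqrt(pooled * (1 - pooled) / total_n)

print(f"Pooled estimate: {pooled:.1%} +/- {moe:.1%}")
```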

The second point is that people can be “confused.” This is correct, but we only know whether people are truly confused by doing polls in the first place. We ask different questions and find evidence of inconsistency. We measure the extent to which people don’t have strong opinions, or any opinion at all, and this is important as well. The things that Clarke seems to see as pathologies I would call “findings.” Do people mistake survey responses for measured opinions? Yes. But again, this is a problem with how polls are interpreted, not with polls as instruments. In the right hands, conclusions about confusion can be properly drawn.

In fact, let’s take Clarke’s discussion of cap-and-trade polling as an example of improperly drawn conclusions. The first poll he cites, which was conducted by ABC News and the Washington Post, asked this question:

bq. There’s a proposed system called “cap and trade.” The government would issue permits limiting the amount of greenhouse gases companies can put out. Companies that did not use all their permits could sell them to other companies. The idea is that many companies would find ways to put out less greenhouse gases, because that would be cheaper than buying permits. Would you support or oppose this system?

52% supported this system. The second poll he cites, conducted by Rasmussen, asked this question:

bq. Does the cap-and-trade legislation address health care reform, environmental issues, or regulatory reform for Wall Street?

24% could correctly identify cap-and-trade as addressing the environment. Clarke snarks about this as confusion. But there’s no contradiction between these polls whatsoever. When given a piece of policy jargon completely out of context, most people don’t know what it means. When the policy is actually explained in digest form, most people can supply an assessment of it. Both are relevant pieces of data.

bq. Third, and of perhaps greatest concern: the outcome of one poll can affect future polls and behavior. As behavioral scientists and economists are fond of pointing out–in books like Nudge and Predictably Irrational–popular behavior can snowball. Public-health campaigns emphasizing how few teenagers smoke are more effective in deterring teen smoking than those that emphasize lung cancer or bad breath. Likewise, the perception that a candidate or political position is popular today will make the candidate or position more popular in the future. As Cass Sunstein and Richard Thaler put it in Nudge, “Nothing is worse than a perception that voters are leaving a candidate in droves.” Voters should be free to switch allegiances whenever they want, but they should do so for substantive reasons, not because they’re following the flock.

bq. Most everyone acknowledges the problem with polls when it comes to Election Day: exit polls are frowned upon and in some cases banned, because early ones have been shown to influence the behavior of people who haven’t yet made their way to the voting booths. If we can see that it’s a problem on Election Day, shouldn’t we acknowledge that it’s a problem the rest of the year as well?

It’s truly frustrating that this exit-polls-as-demobilizers myth lives on. I’ve posted on this before. There is no evidence for this effect.

But let’s address the substance here: do voters mindlessly follow polls? The short answer is no. First, Clarke overestimates the extent to which people actually know what the polls say. Maybe in a hard-fought presidential election with a clear frontrunner, a substantial fraction of Americans (a number greater than chance) could identify the frontrunner — although I would still expect to see some bias among the underdog’s partisans (“The polls are wrong!”). But, really: Clarke would have us believe that the public is too confused to know what cap-and-trade is, but attentive enough to politics to know what the public thinks about cap-and-trade. It just doesn’t jibe. Simply put, most people don’t pay that much attention to polls.

As such, the sorts of tipping points or cascades that Clarke describes are rarely in evidence. Most change in public opinion, as measured in polls, comes about gradually, sometimes over generations. See _The Rational Public_ by Benjamin Page and Robert Shapiro.

The best example of rapidly changing preferences occurs in some presidential primaries. But even there, the role of polls is unclear. Some candidates never get off the ground: they don’t have money, endorsements, news coverage, or support in the polls. But what’s the cause of their failure? Is it because people saw the polls and said, “Well, no one supports that guy, so I certainly won’t”? No, the whole constellation of self-reinforcing factors is to blame.

But what about when candidates go down in flames? Surely that’s a snowball, right? Howard Dean? The Scream? Here again, the polls are really epiphenomenal. Primary candidates lose because they lose _elections_ — i.e., they perform below “expectations” in some key contest. Voters in other states are responding to that outcome, and the consequent media coverage.

The same is true of candidates who streak to the front of the pack. The bandwagon comes about because of an election, because other candidates drop out, because of new media coverage, etc. Polls per se don’t matter all that much.

Furthermore, if there’s this all-powerful bandwagon, why are there so many tenacious minorities? Surely by now people can see that large majorities support, say, interracial marriage. Right? So where do the 17% who oppose such marriages come from? People’s attitudes derive much more from other factors — broader changes in culture, media coverage, etc. — than from a blind obeisance to the latest polls.

Clarke is right about this: we are awash in polls. The imperative for journalists and others is to become more discerning interpreters. The imperative for citizens is to become more discerning consumers. When polls are conducted and interpreted intelligently, we learn much more from them than we would otherwise. And our politics is better for it.