With no small amount of ceremony, the Asia Foundation released its eighth annual “Survey of the Afghan People” last week. Drawing on over 6,000 respondents in all 34 provinces of Afghanistan, the Survey concluded that “public optimism about the overall direction of Afghanistan is currently at its highest point since 2006.” Even this tepidly optimistic conclusion was immediately derided by bloggers and pundits alike as fanciful at best.
Sarah Chayes, a long-time observer of Afghanistan now at the Carnegie Endowment for International Peace, has offered perhaps the harshest critique of the Asia Foundation’s work. One-third of the selected villages could not be surveyed, she noted, due in part to security concerns. Only 67% of respondents agreed to participate when approached, and these were not the same individuals approached in previous years, making it impossible to track trends over time.
Outright fraud, too, cannot be dismissed: “I have rented out offices to polltakers,” she writes, “and watched them sit at the desks and fill in the answers they were supposed to be getting from respondents.” Finally, Afghans, she writes, are “survivors,” and have become highly adept at providing the answers that enumerators and their funders find “pleasing.” The conclusion?
Polling, a notoriously complex art, is almost impossible to conduct meaningfully in Afghanistan…. It is time to stop deluding ourselves with such patently distorted information, and using it as a basis for analysis or for placating the public with a comforting message. It is dangerous to build strategy on such quicksand.
Many of these criticisms are familiar to those who conduct survey work in dangerous (post)conflict settings. Others could be added: if not carefully managed, the conduct of the survey itself can endanger respondents and enumerators alike.
Yet Chayes errs on the side of throwing the baby out with the bathwater. The solution is not to stop conducting surveys but to design and implement better ones, and more of them. The point of this post is not to defend the Asia Foundation or its survey, but instead to offer some context and correctives to Chayes’ critique of the utility of surveys in these environments. In the past two years, I’ve had the opportunity to design and field four surveys in Afghanistan, the last of which is a 35,000-respondent behemoth currently in the field. (Truth in advertising: two of my four projects are being implemented by the same survey firm that conducted the Asia Foundation’s study.)
Three points can be made in reply to Chayes. First, let’s tackle the mechanics of surveying. Sample attrition (here, the loss of villages due to violence or other factors) can be mitigated by research designs that block-match on important covariates, so that replacements can be readily found (and randomly selected) when problems arise. This is not a show-stopper.
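To make the mechanics concrete, here is a minimal sketch of block-matched replacement sampling. The village names, block labels, and procedure are all hypothetical, not the Asia Foundation’s actual design: villages are pre-grouped into blocks of near-matches on covariates (say, province, population, and remoteness), and when a sampled village proves inaccessible, a replacement is drawn at random from the same block, preserving the sample’s balance.

```python
import random

def sample_with_replacement_pool(blocks, rng):
    """blocks: dict mapping block_id -> list of covariate-matched villages.
    Draws one village per block; the rest form that block's replacement pool."""
    sample, pools = {}, {}
    for block_id, villages in blocks.items():
        chosen = rng.choice(villages)
        sample[block_id] = chosen
        pools[block_id] = [v for v in villages if v != chosen]
    return sample, pools

def replace_inaccessible(sample, pools, block_id, rng):
    """Swap in a randomly selected covariate-matched replacement village."""
    if not pools[block_id]:
        raise ValueError("replacement pool exhausted for block %s" % block_id)
    new_village = rng.choice(pools[block_id])
    pools[block_id].remove(new_village)
    sample[block_id] = new_village
    return new_village

rng = random.Random(42)
blocks = {
    "north-rural-small": ["Village A", "Village B", "Village C"],
    "north-rural-large": ["Village D", "Village E"],
}
sample, pools = sample_with_replacement_pool(blocks, rng)
# Suppose the drawn village in one block turns out to be insecure:
replace_inaccessible(sample, pools, "north-rural-small", rng)
```

Because the replacement comes from the same covariate block, the substituted village resembles the one it replaces on the dimensions the design cares about, which is what keeps attrition from quietly skewing the sample.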
Second, a 67% response rate looks poor until you add context. How many Americans responded to a polling request during the 2012 election? Nine percent (down from 37% in 1997). If response rate were the sole criterion for the quality of a survey, then Nate Silver would be out of a job. And, yes, we’ve all heard about (or witnessed) data falsification. Yet that is a testament to poor quality-control practices, not an inherent flaw of surveying itself. There are several diagnostic tests that can easily be run to detect outright fraud by enumerators.
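One such diagnostic can be sketched in a few lines. The check below is a toy version, with invented enumerator IDs and response data, of a standard falsification test: an enumerator who fills in questionnaires at a desk tends to produce clusters of identical answer patterns, so an unusually high share of exact-duplicate response vectors within one enumerator’s workload is a red flag worth auditing.

```python
from collections import Counter

def duplicate_share(responses):
    """responses: list of tuples, one tuple of answers per interview.
    Returns the fraction of interviews that exactly duplicate another."""
    counts = Counter(responses)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(responses)

def flag_enumerators(workloads, threshold=0.3):
    """workloads: dict enumerator_id -> list of response tuples.
    Flags enumerators whose duplicate share exceeds the threshold."""
    return [e for e, r in workloads.items()
            if duplicate_share(r) > threshold]

workloads = {
    "enum_01": [(1, 3, 2), (2, 3, 1), (1, 2, 2), (3, 1, 2)],  # varied answers
    "enum_02": [(2, 2, 2), (2, 2, 2), (2, 2, 2), (1, 2, 2)],  # suspicious
}
flagged = flag_enumerators(workloads)
# flagged == ["enum_02"]
```

Real quality-control batteries combine several such tests (duplicate detection, interview-duration checks, answer-distribution comparisons across enumerators), but the underlying idea is the same: fabrication leaves statistical fingerprints.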
I suspect, however, that these boring technical issues are not at the heart of Chayes’ criticism. Instead, the deeper objection is that inscrutable Afghans are impervious to social science methods of eliciting truthful answers to sensitive topics. And it’s true: respondents do lie and they do sometimes shade their answers to tell researchers (us) what they want to hear.
Fortunately, however, there’s an entire wing of social science devoted to measuring attitudes on sensitive topics using indirect approaches like list and endorsement experiments. What’s more, researchers have applied these tools to exactly the policy-relevant questions that Chayes would like answered. Studies measuring support for insurgent groups and counterinsurgent forces have recently been conducted in Afghanistan (here and here), Colombia, Mexico, Nigeria (here and here), and Pakistan, for example. These indirect methods have key advantages: they avoid triggering social desirability bias, reduce the incentive to lie outright, and are easier to slip past village stakeholders who might otherwise bar enumerators, since the survey’s actual intent is shielded from stakeholders and respondents alike. No outsiders asking pointed, directed questions are required.
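The core logic of a list experiment is simple enough to sketch with invented numbers. A control group counts how many of J innocuous items apply to them; a treatment group sees the same list plus the sensitive item. No respondent ever reveals which items apply, yet the difference in group means estimates the share endorsing the sensitive item. The counts below are illustrative only.

```python
def list_experiment_estimate(control_counts, treatment_counts):
    """Difference-in-means estimator for a list experiment.
    control_counts: item counts from the J-item (baseline) list.
    treatment_counts: item counts from the (J+1)-item list."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

control   = [1, 2, 2, 3, 1, 2, 2, 1]   # counts out of 3 baseline items
treatment = [2, 3, 2, 3, 2, 3, 2, 2]   # counts out of 4 items
estimate = list_experiment_estimate(control, treatment)
# estimate == 0.625: roughly 62.5% estimated endorsement of the sensitive item
```

Published applications add design checks and variance estimates on top of this, but the privacy protection comes from the aggregation itself: even a respondent who answers honestly discloses nothing individually identifiable about the sensitive item.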
Finally, Chayes’ rejection of surveys suggests she has an alternative. Her solution? Rely on anecdotes: “Recent conversations with ordinary Afghans indicate that weapons are rapidly being bought up, at least in the north… Such factors provide more eloquent indications about prevailing conditions than do opinion surveys.” I don’t doubt that weapons are being bought up in the north; I was there in September 2011. But what’s the half-life of an anecdote? How many people have to repeat the story before it becomes “truth”? We could, of course, ask lots of people about the issue, but then we’d be conducting a survey.
In truth, part of the reason the war in Afghanistan has gone so poorly is that ISAF’s governing logic has been “strategy by anecdote.” A call for greater realism in our debates about Afghanistan and the looming exit would include the recognition that there are no substitutes for survey data. Fewer soldiers conducting fewer patrols mean less data now streaming into ISAF HQ; less security means NGOs are more restrained in their movement and activities; and growing swathes of the countryside are now falling “dark” as researchers’ ability to move becomes similarly constrained. Surveys are not silver bullets, and their findings should always be cross-checked against other metrics and data. But we need more, and better, surveys in Afghanistan (and elsewhere), not fewer.
This post reflects my personal views.