Surveys in Conflict Settings

by Jason Lyall on December 5, 2012 · 9 comments

in Methodology, Public opinion, Violence, War

With no small amount of ceremony, the Asia Foundation released its eighth annual “Survey of the Afghan People” last week. Drawing on over 6,000 respondents in all 34 provinces of Afghanistan, the Survey concluded that “public optimism about the overall direction of Afghanistan is currently at its highest point since 2006.” Even this tepidly optimistic conclusion was immediately derided by bloggers and pundits alike as fanciful at best.

Sarah Chayes, a long-time observer of Afghanistan now at the Carnegie Endowment for International Peace, has offered perhaps the harshest critique of the Asia Foundation’s work. One-third of selected villages could not be surveyed, she noted, due in part to security concerns. Only 67% of respondents agreed to participate when approached, and these were not the same individuals approached in previous years, making it impossible to track trends over time.

Outright fraud, too, cannot be dismissed: “I have rented out offices to polltakers,” she writes, “and watched them sit at the desks and fill in the answers they were supposed to be getting from respondents.” Finally, Afghans, she writes, are “survivors,” and have become highly adept at providing the answers that enumerators and their funders find “pleasing.” The conclusion?

Polling, a notoriously complex art, is almost impossible to conduct meaningfully in Afghanistan…. It is time to stop deluding ourselves with such patently distorted information, and using it as a basis for analysis or for placating the public with a comforting message. It is dangerous to build strategy on such quicksand.

Many of these criticisms are familiar to those who conduct survey work in dangerous (post)conflict settings. Others could be added: if not properly designed and implemented, a survey can endanger respondents and enumerators alike.

Yet Chayes errs on the side of throwing the baby out with the bathwater. The solution is not to stop conducting surveys but to design and implement better ones—and more of them. The point of this post is not to defend the Asia Foundation or its survey, but instead to offer some context and correctives to Chayes’ critique of the utility of surveys in these environments. In the past two years, I’ve had the opportunity to design and field four surveys in Afghanistan, the last of which is a 35,000-respondent behemoth currently in the field. (Truth in advertising: two of my four projects are being implemented by the same survey firm that conducted the Asia Foundation’s study.)

Three points can be made in reply to Chayes. First, let’s tackle the mechanics of surveying. Sample attrition—here, the loss of villages due to violence or other factors—can be mitigated by research designs that block-match on important covariates so that replacements can be readily found (and randomly selected) when problems arise. This is not a show-stopper.
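To make the logic concrete, here is a minimal sketch of how block-matched replacement might work. The village names, districts, and covariates below are invented for illustration; an actual sampling frame would block on whatever covariates the design deems important.

```python
import random

# Hypothetical sampling frame: villages blocked on coarse covariates
# (district, population band) so that an inaccessible sampled village
# can be replaced by a random draw from the same block.
villages = [
    {"name": "A", "district": "Panjwai", "pop_band": "small"},
    {"name": "B", "district": "Panjwai", "pop_band": "small"},
    {"name": "C", "district": "Panjwai", "pop_band": "large"},
    {"name": "D", "district": "Arghandab", "pop_band": "small"},
]

def block_key(v):
    """Covariates that define a village's block."""
    return (v["district"], v["pop_band"])

def replacement_for(lost, frame, already_sampled):
    """Randomly pick an unsampled village from the same covariate block."""
    pool = [v for v in frame
            if block_key(v) == block_key(lost)
            and v["name"] not in already_sampled]
    return random.choice(pool) if pool else None

# Village A becomes inaccessible; its block contains one other village.
sub = replacement_for(villages[0], villages, already_sampled={"A"})
print(sub["name"])  # "B": the only unsampled village in A's block
```

The point of blocking is that the replacement resembles the lost village on observables, so attrition does not quietly skew the sample toward safer, richer, or more accessible places.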

Second, a 67% response rate looks poor until you add context. How many Americans responded to a polling request during the 2012 election? Just 9 percent, down from 37% in 1997. If response rate were the sole criterion for the quality of a survey, then Nate Silver would be out of a job. And, yes, we’ve all heard about (or witnessed) data falsification. Yet that is a testament to poor quality-control practices, not an inherent flaw of surveying itself. There are several diagnostic tests that can easily be run to detect outright fraud by enumerators.
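Two of the cheapest such diagnostics are worth sketching: flagging enumerators whose interviews repeat identical answer patterns (a fabricator's shortcut), and flagging "straight-lined" interviews where every answer is the same. The data and thresholds below are invented for the sketch; real audits combine many more checks.

```python
from collections import Counter

# Invented interview data: each tuple is one respondent's answers
# to four questions, grouped by enumerator.
interviews = {
    "enum_1": [(3, 2, 5, 1), (4, 1, 2, 5), (3, 3, 4, 2)],
    "enum_2": [(2, 2, 2, 2), (2, 2, 2, 2), (2, 2, 2, 2)],  # identical rows
}

def duplicate_share(rows):
    """Fraction of an enumerator's interviews that repeat an earlier pattern."""
    counts = Counter(rows)
    dupes = sum(n - 1 for n in counts.values())
    return dupes / len(rows)

def straightlined(row):
    """True if every answer in an interview is identical."""
    return len(set(row)) == 1

for enum, rows in interviews.items():
    flagged = duplicate_share(rows) > 0.5 or all(straightlined(r) for r in rows)
    print(enum, "FLAG" if flagged else "ok")  # enum_1 ok, enum_2 FLAG
```

Flagged enumerators then get back-checked: a supervisor revisits a sample of their respondents to verify the interviews actually took place.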

I suspect, however, that these boring technical issues are not at the heart of Chayes’ criticism. Instead, the deeper objection is that inscrutable Afghans are impervious to social science methods of eliciting truthful answers to sensitive topics. And it’s true: respondents do lie and they do sometimes shade their answers to tell researchers (us) what they want to hear.

Fortunately, however, there’s an entire wing of social science devoted to measuring attitudes on sensitive topics using indirect approaches like list and endorsement experiments. What’s more, researchers have applied these tools to exactly the policy-relevant questions that Chayes would like answered. Studies measuring support for insurgent groups and counterinsurgent forces have recently been conducted in Afghanistan (here and here), Colombia, Mexico, Nigeria (here and here), and Pakistan, for example. These indirect methods have key advantages: they avoid triggering social desirability biases, reduce incentives to lie outright, and are easier to slip past village stakeholders who might otherwise bar enumerators, since the survey’s actual intent is shielded from stakeholders and respondents alike. No outsiders asking pointed, directed questions are required here.
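The intuition behind a list experiment is simple enough to sketch. A control group counts how many items on a short list they agree with; a treatment group sees the same list plus the sensitive item. No one reveals *which* items they endorsed, yet the difference in mean counts estimates the sensitive item's prevalence. The counts below are invented for illustration.

```python
# Invented data: each number is how many items a respondent endorsed.
control_counts = [1, 2, 2, 3, 1, 2]    # 3 innocuous baseline items
treatment_counts = [2, 3, 2, 3, 2, 3]  # same 3 items + the sensitive item

def list_experiment_estimate(treatment, control):
    """Difference-in-means estimate of the sensitive item's prevalence."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment) - mean(control)

est = list_experiment_estimate(treatment_counts, control_counts)
print(round(est, 3))  # 0.667: roughly two-thirds endorse the sensitive item
```

Because the respondent never singles out the sensitive item, an enumerator (or an eavesdropping stakeholder) cannot infer any individual's answer; only the aggregate is identified.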

Finally, Chayes’ rejection of surveys suggests she has an alternative. Her solution? Rely on anecdotes: “Recent conversations with ordinary Afghans indicate that weapons are rapidly being bought up, at least in the north…Such factors provide more eloquent indications about prevailing conditions than do opinion surveys.” I don’t doubt that weapons are being bought up in the North; I was there in September 2011. But what’s the half-life of an anecdote? How many people have to repeat the story before it becomes “truth”? We could, of course, ask lots of people about the issue—but then we’d be conducting a survey.

In truth, part of the reason the war in Afghanistan has gone so poorly is that ISAF’s governing logic has been “strategy by anecdote.” A call for greater realism in our debates about Afghanistan and the looming exit would include the recognition that there are no substitutes for survey data. Fewer soldiers conducting fewer patrols mean less data now streaming into ISAF HQ; less security means NGOs are more restrained in their movement and activities; and growing swathes of the countryside are now falling “dark” as researchers’ ability to move becomes similarly constrained. Surveys are not silver bullets, and their findings should always be cross-checked against other metrics and data. But we need more, and better, surveys in Afghanistan (and elsewhere), not fewer.

This post reflects my personal views.


RobC December 5, 2012 at 6:34 pm

I agree that a 67% response rate would be nothing short of terrific, but where does Chayes say that 67% of Afghans agreed to participate when approached or criticize a 67% response rate? I couldn’t find any such comments in the article you linked.

Either my eyes fail me, or you’re battling a straw (wo)man.

Paul Meinshausen December 5, 2012 at 6:51 pm

@RobC – she wrote the following: “In the case of the 2012 Asia Foundation survey, perhaps the most significant flaw is that nearly one-third of the planned “sampling points” could not be accessed by interviewers, largely for security reasons.”

She just didn’t use the number 67.

I think this is such an important point: “In truth, part of the reason the war in Afghanistan has gone so poorly is that ISAF’s governing logic has been “strategy by anecdote.”

When I was working at ISAF HQ in 2010, it wasn’t just strategy by anecdote – it was ardent antipathy to anything but anecdote, and only anecdotes from recognized insiders were acceptable. Hence the line in her post: “The findings, which defy logic, never fail to amaze Afghans or Westerners who spend significant time in Afghanistan.” With that line she sets up an easy dismissal of anyone who hasn’t “spent significant time” (significance left conveniently undefined) and simultaneously rhetorically enforces a uniformity among those who ARE qualified to say anything about Afghanistan (apparently everyone who has spent a significant amount of time there agrees with her).

Chayes came through a couple of times when I was there, and she was always open about her antipathy towards anything that she considered an attempt to “quantify Afghan culture” (whatever that means). I see she remains so opposed. And that’s very saddening.

Of course what I’m saying is just anecdote. The value of anecdote is that it suggests the kind of questions that one might attempt to answer empirically and systematically. I wish it wasn’t just the Afghan population that is surveyed. I wish someone would survey ISAF and US analysts and decision-makers and supposed “experts” on Afghanistan. I wish we could begin to systematically explore and describe the approaches ISAF personnel have taken to understanding Afghanistan, and the mental models ISAF-affiliated personnel have brought to their analyses of the country and the conflict. It would be amazing to see documented how widespread the reliance on anecdote really is among the ISAF community.

RobC December 5, 2012 at 6:59 pm

One third of the sampling points couldn’t be accessed because they were the least safe, which of course could skew the responses from the remaining sample, which was her criticism. That doesn’t mean that everyone approached at the remaining sampling points agreed to be interviewed. Bottom line: Chayes didn’t say there was a 67% response rate, much less criticize such a response rate.

Rigor.

Jason Lyall December 5, 2012 at 9:50 pm

Dear RobC,

Thanks for taking the time to read the post. You are correct; Sarah Chayes does not cite the number 67%. That number is provided by the Asia Foundation in its methodological section (p. 188 of the report). One of Chayes’ criticisms is that because the same sampling points and individuals cannot be surveyed from wave to wave (known as a “panel design”), these surveys cannot be used to draw trends. Part of her argument therefore hinges on the fact that there are multiple sources of attrition, including at the individual level. That’s why the response rate is so important here. But I added the number, not Sarah. I don’t believe that adding the actual number renders her argument a “strawman,” but your mileage may vary.

JB December 5, 2012 at 7:50 pm

Check the second graf again folks.

dsp December 5, 2012 at 9:54 pm

Didn’t one of your Yale colleagues recently say: “An anthropologist goes in and tries to have as few prejudices as possible and be as open as possible to where the world leads you, whereas a political scientist would go in with a questionnaire.”

surveyscientist December 6, 2012 at 7:52 am

There’s a difference between bias due to non-response (which is what Nate Silver would encounter) and bias due to coverage issues. In Afghanistan, coverage bias seems to be the bigger issue, and in this case seems to be more insidious than non-response bias would be.

idiot December 6, 2012 at 10:48 am

“It is time to stop deluding ourselves with such patently distorted information, and using it as a basis for analysis or for placating the public with a comforting message.”

According to the latest Asia Foundation poll, 31% of Afghans believe that Afghanistan is going in the wrong direction. That’s 31% of ~30 million people living in Afghanistan, or over 9 million Afghans who are pessimistic about the future of Afghanistan. 30% of Afghans feel some sort of sympathy towards armed groups, and 10% claim to have a lot of sympathy. 10% of ~30 million is 3 million people, more than enough to sustain an insurgency against the Afghan regime.

48% of Afghans fear for their own personal safety, which, I must point out, is more than the 31% who think Afghanistan is on the wrong track. The biggest problems facing Afghanistan are “insecurity” (28%), unemployment (27%), and corruption (25%). Though overall crime has decreased, reported suicide bombings have increased and have reached their highest level since 2007.

And this is all ‘comforting’?!

Brian Schmidt December 6, 2012 at 9:31 pm

“There are several diagnostic tests that can be easily run to detect outright fraud by enumerators.”

What types of results do you get when you run those tests – what percent fraudulent?

Note these tests won’t solve the problem of surveyors failing to follow protocol in order to get more responses, mainly trying to interview people who are similar to themselves in age/class/gender/ethnicity, possibly combining real answers with faked demographics.
