They’d rather be rigorous than right

by Andrew Gelman on January 10, 2013 · 6 comments

in Methodology, Political Economy

Following up on my post responding to his question about that controversial claim that high genetic diversity, or low genetic diversity, is bad for the economy, Kyle Peyton writes:

I’m happy to see you’ve articulated similar gripes I had w/ the piece, which makes me feel like I’m not crazy. I remember discussing this with colleagues (I work at a research institute w/ economists) and only a couple of them shared any concern. It seems that by virtue of being published in ‘the AER’ the results are unquestionable. I agree that the idea is interesting and worth pursuing but as you say it’s one thing to go from that to asserting ‘causality’ (I still don’t know what definition of causality they’re using?). All the data torture along the way is just tipping the hat to convention rather than serving any scientific purpose.

Some researchers are so uptight about identification that, when they think they have it, all their skepticism dissolves, even in a case like this where the causal interpretation makes little sense. Also there’s the problem of economists who don’t listen to experts in other fields who could set them right (or, maybe I should say, the more general problem of researchers in field X ignoring important work in field Y).

Remember what I wrote:

The way I see this work, the authors have an interesting idea and want to explore it. But exploration won’t get you published in the American Economic Review. Instead of the explore-and-study paradigm, Ashraf and Galor are going with assert-and-defend. . . .


MikeM January 11, 2013 at 12:18 am

There’s a certain arrogance to all too many economists. They (the arrogant ones) seem to feel that they are above the peons working in the trenches who generate the data that they, with their sophisticated quantitative armamentarium, know how to analyze — and thus provide us working stiffs with spectacular insights that were not apparent until they came on the scene.

In reality, as you pointed out in your earlier post, they don’t understand the context. They should remember the words of wisdom from The Music Man, “you’ve got to know the territory.”

ricketson January 11, 2013 at 2:14 am

This tendency to jump to conclusions reminds me of a debate I studied a couple of years back: Is economics a science? Do a web search and you’ll find essays from a variety of public economists (Cowen, DeLong, etc.). Some of the essays indicated that there are two types of economists, the “scientists” and the “engineers”. The scientists approach economic phenomena with the question of “how does this work?”, and presumably they are willing to recognize the limits of their studies. However, those with an engineering mindset are looking for answers that are useful NOW (e.g. to address the current recession). Apparently the economic engineer has been influential since the early days of economics, and many of the original economists were driven by this impetus.

I find this appalling because I don’t believe that economics is precise enough to be used for engineering; social sciences will never have the power of the natural sciences. This is especially true when a field of study is first being established, yet many economists seem to be willing to brush aside concerns about their ignorance.

The arrogance illustrated in the above study is exactly what I’d expect from a field of science that insists on its usefulness even before it has demonstrated any understanding of its subject.

Bill January 11, 2013 at 2:42 pm

Some researchers are so uptight about identification that, when they think they have it, all their skepticism dissolves, even in a case like this where the causal interpretation makes little sense.

Is there some semantic difference between economists and statisticians about the word “identification?” Because, the way economists typically use it, you can’t really have identification while getting the causation all wrong. The thing being identified is the counterfactual causal effect of whatever is under study.

Andrew Gelman January 11, 2013 at 7:58 pm

Bill:

I’m using the economists’ meaning of the term. In this case, I don’t think the authors had identification in any meaningful sense of the word. But they thought they did. I think the problem is that they never thought seriously about what causation would mean in this setting (what would it take to increase genetic diversity in Peru or decrease it in Kenya). All the robustness checks in the world won’t save you if you’ve never fully conceptualized your question. Also, as noted above, I think they were under (self-imposed) pressure to have a big finding, rather than to simply explore and try to learn.
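To make that concrete, here’s a toy simulation, a minimal sketch of the general pitfall rather than anything from Ashraf and Galor’s paper; all the variable names are hypothetical. An unobserved confounder drives both the “treatment” and the outcome, and the estimated effect looks perfectly stable, i.e. “robust,” across control sets even though the true causal effect is zero:

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

u = rng.normal(size=n)      # unobserved confounder (never enters the regressions)
x = u + rng.normal(size=n)  # hypothetical "treatment": driven by u, no effect on y
z1 = rng.normal(size=n)     # irrelevant observed covariates, used only as
z2 = rng.normal(size=n)     # "robustness check" controls
y = u + rng.normal(size=n)  # outcome: driven by u alone, not by x

def coef_on_x(controls):
    # OLS of y on x plus the given controls; return the coefficient on x
    design = np.column_stack([np.ones(n), x] + controls)
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

# The estimate barely moves as controls are swapped in, so it looks "robust,"
# yet it sits near 0.5 while the true causal effect of x on y is exactly 0.
for label, controls in [("no controls", []), ("+ z1", [z1]), ("+ z1, z2", [z1, z2])]:
    print(f"{label:12s} coef on x = {coef_on_x(controls):.3f}")

The stability across specifications tells you nothing here; no battery of checks substitutes for asking what intervention the coefficient is supposed to describe.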

Ken January 12, 2013 at 4:08 pm

This is interesting, but perhaps too harsh on economics. My sense is that work is unlikely to be published in the highest-profile political science journals (e.g. APSR, AJPS) either, unless it is of the “assert-and-defend” variety. So while I agree that there is a bias against “explore-and-study” research in top journals, a bias that then spreads more broadly via the prestige attached to scholars who publish in those journals, I’d suggest that both the bias and the subsequent problems are quite acute in political science as well.

Wheeler's Cat January 13, 2013 at 8:48 am

High genetic diversity is antifragile. Taleb would approve. Meh, in ten years the fractalists will be sneering at the Bayesians like the Bayesians sneer at the frequentists now.
