Professionalization and the Demise of IR Theory

by Erik Voeten on March 28, 2012 · 17 comments

in Academia, International Relations

My friend and colleague Dan Nexon has a long rant (by his own admission) about the poverty of IR theory on the Duck of Minerva. Go read the whole thing; it is provocative and interesting. Also look at the comments, many of which are interesting and come from prominent people in the field. What follows is some inside baseball, so it is going below the fold.

Dan’s core argument is that:

the conjunction of over-professionalization, GLR-style statistical work, and environmental factors is diminishing the overall quality of theorization, circumscribing the audience for good theoretical work, and otherwise working in the direction of impoverishing IR theory.

In the process of advancing this argument he makes a lot of claims, many of which I disagree with. But I want to start with an important area where I think we have some agreement, although I would phrase things differently than he does.


1. The advance of quantitative research diminishes the quality of theorizing in IR

If by “quality of theorizing” we mean developing multi-causal theories of complex phenomena, then I agree with this statement. We are increasingly realizing not only that causal inference is incredibly difficult but also that to do causal inference well, it is very hard to isolate more than one causal factor. This is, of course, not just a problem of quantitative research but of causal inference more generally. “Doing some process tracing” (a popular comment these days in research seminars) does not magically make the fundamental problem of causal inference go away.
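To make this concrete, here is a toy simulation in Python (the variables and numbers are entirely invented for illustration, not drawn from any IR dataset): when two candidate causes move together, as they typically do in observational data, a regression cannot cleanly attribute the outcome to either one, even when both effects are perfectly real.

```python
# Toy sketch: two highly correlated causal factors are hard to disentangle.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Two candidate causes that tend to move together in the observed world.
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)  # nearly collinear with x1

# True data-generating process: both factors matter equally.
y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)

fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(fit.params)  # point estimates wander around the truth...
print(fit.bse)     # ...with huge standard errors: the data cannot say
                   # which factor is doing the work, though both are real
```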

The question is what to do about this. The modal IR article is supposed to offer both a theoretical and an empirical contribution. It is very difficult to offer a sophisticated theoretical contribution when your empirics force you to focus on a single factor. It is also very hard to substantiate a theoretical contribution if we demand that the empirical part of the paper obey the rules of causal inference. Surely there are papers that do both within the word limit of an article, but we cannot expect all articles to do so.

Both causal inference and theoretical development are too important to give up. The only answer, as I see it, is that we have to increase our valuation of empirical work that provides new evidence that matters to existing theoretical or (importantly) policy debates but that offers no theoretical innovation of its own. Conversely, we ought to increase the valuation of theoretical work that is innovative and has potential empirical applications but that offers no more than suggestive evidence.

Changing our valuation of certain types of research is, of course, easier said than done. Yet my sense is that the discipline is slowly moving in this direction. Journals like International Theory and the European Journal of International Relations offer increasingly esteemed venues for theoretical work, while journals like International Organization now offer an outlet for shorter research notes that primarily make empirical contributions. The very real danger, of course, is that these two sides of the discipline will not speak to each other. Much remains to be done here, and I’ll try to offer some more thoughts on this at a later time.

2. “over-professionalization of graduate students is an enormous threat to the vibrancy and innovativeness of International Relations (IR)”

What Dan means by this is the insistence that graduate students are now required to have some prior mathematical and statistical training and should develop more during their graduate careers. I would defend such requirements strongly. Understanding statistics is an enormous asset for anyone who endeavors to understand social life. Creative and innovative minds understand this, regardless of whether they intend to use it in their own research. The very best qualitatively minded students also tend to do very well in quantitative methods classes. If by professionalization we mean demanding that all IR PhDs have some understanding of statistics, know at least one foreign language (a requirement in almost any program), and so on, then I am all for professionalization.

3. All the professional rewards go to quantitative scholars

Dan is rightly being taken to task in the comments for being hyperbolic on this. Many people point to the TRIP survey of IR scholars and other studies showing that a majority of IR scholars, and of published research, is still qualitative. I am a little disappointed that Dan is making this claim so strongly. In our own Georgetown department we have some 25 IR scholars, 4 to 6 of whom are primarily quantitative. Another issue is the critical theory/constructivism angle, but I would posit that this has far less to do with professionalization or quantitative methods than Dan implies. We can have endless debates about what a just division of professional rewards would look like (and no one would agree), but we cannot claim that the IR field is a monoculture. Indeed, compared to other professional fields it is and remains extremely diverse (as it should be). That does not mean that everyone is happy with which types of research appear to get rewarded more.

4. The monoculture of quantitative research

There is an underlying tone to Dan’s comments that makes quantitative scholars seem like unimaginative and uncreative number crunchers who simply tweak existing datasets in order to bypass lazy gatekeepers and reap professional rewards. As is true everywhere, there is tremendous variation in the quality of quantitative work. Yet there is also tremendously creative work going on in terms of research design and data collection. For example, the new generation of scholars engaged in field experiments do more soaking and poking than most qualitative researchers ever get to do.

There are a number of other issues I could get into, for example, the sense that there was this mythical time when all was well, everyone could think big thoughts, and tremendous progress was made in our understanding of IR (see Erik Gartzke’s comment on Dan’s post). I don’t want to claim that all is now well with IR, but I do not see the professional demands we make of our graduate students as a core part of the problem.


Comments (17)

Dan Nexon March 28, 2012 at 10:17 am

Erik, these are excellent points and deserve a longer response. But I do want to push back on your reading of my post.

1. I think it a mistake to disaggregate my claims. At least, I hope it is, because I can’t get behind them as independent points. My argument is configurational: that the combination of factors (e.g., over-professionalization, received templates for success, conflation of science with neopositivism) is problematic. Indeed, I support greater mathematical and computational training for graduate students, as well as for undergraduates.

2. By “over-professionalization” I emphatically do not mean “the insistence that graduate students are now required to have some prior mathematical and statistical training and should develop more during their graduate careers.” I mean the relentless backwards pressure on graduate students to orient their entire student life toward the job market, and my fear (supported only by stylized facts) that this encourages graduate students to look to their professors as cooks who are there to provide the right “recipe” for job market success.

3. As PTJ better explains in comments, the relevant “monoculture” concerns what he calls “neopositivist” criteria of explanation, not quantification. As I’ve said elsewhere, a great deal of what we do in IR can be done best with mathematics and with electronic computation. However, I am very disturbed by the growing evidence that doing some kind of multivariate regression in the dissertation (and therefore de facto adopting a particular style of explanation) is a necessary condition for getting most of the “prestige” jobs in IR.

4. To the extent that I paint quantitative scholars as unimaginative — and I think that’s a fair reading of some of my rhetoric — I apologize, as this wasn’t my intent. I see tremendous creativity among all my quantitative colleagues and our graduate students at Georgetown, let alone in the broader field. But because that vast creativity is channelled into particular styles of explanation and kinds of problems, it is important that we don’t foreclose other domains of creativity.

5. I agree that there was no golden age. I’m well aware of how much discrimination mathematically-oriented scholars received in the early 1990s and before. I think here there’s more of a reaction to arguments made by others than by myself.

Matt March 28, 2012 at 10:38 am

I perhaps hew too close to my clichéd biases as a statistician, but this sounds like whining of the old guard to me. “Theoretical” IR (and poli sci) is just an excuse to tell stories without any necessary basis in reality. That you might have to provide at least some suggestive evidence from somewhere other than your imagination to support fanciful story time is taken as an affront? Massaging one or two historical anecdotes into some grand new theory does not count, and thinking that it should just betrays the sad truth that too much of the field desperately doesn’t want to leave the 19th century behind.

Dan Nexon March 28, 2012 at 11:16 am

Matt: I’d respectfully submit that: (1) the choice between statistical inference and “massaging one or two historical anecdotes” is a false and biased one, (2) that the relationship between data and theory does not flow only forward, and (3) not all important theoretical claims (cf. Weber) take a form best adjudicated via the (currently) dominant forms of statistical analysis in IR. Our challenge is to allow multiple channels of creativity while maintaining approach-appropriate rigor. The work you’re implicitly criticizing failed the latter challenge, but not necessarily because it didn’t conform to particular kinds of tests.

Moby Hick March 28, 2012 at 12:35 pm

I have forgotten enough that I don’t have anything substantive to say. Hooray.

Matt March 28, 2012 at 2:54 pm

Dan: Please excuse my slightly pissy tone. I myself work mostly in computational biology, where, despite many challenges, interesting results or hypotheses can be experimentally validated, an opportunity obviously lacking in the social sciences. I thus tend to view pretty much all theoretical constructs in fields such as political science and IR as pure story time, unfalsifiable (at least within the next generation or two).

The frustration evident in my tone stems from the fact that I also do some work with my wife, who works in public policy research. An example of the kind of attitude that frustrates me in social research, one I have more direct experience with than IR, is in education. I’ve seen so many articles where the claims for a particular program or style or theory of education are purely a (sometimes disguised) argument from authority: the gist is that some respected educational theorist declared very forcefully, but with zero rigorous evidence, that children develop psychologically in such and such a way, and thus said theorist’s proposed theory of effective education is the only correct one.

I’d be interested in more expansion of your argument (1) above. Experimentation in IR is of course impossible, and IR is such a macro field, where each actor is dependent on a huge array of elements that are purely local to that actor, that I would posit even a natural experiment is of limited value in terms of supporting grand theories. What is available other than the noting of associations (backed by more or less rigorous statistical analysis) and then either humble acknowledgements of the limits of proof or pure story time?

Also, my guess would be that a lot of the problem is part of what you note in (3) above. All the social sciences seem to have fads in statistical technique (e.g., instrumental variables in economics starting in the 90s). My response to that is probably different from yours, but that is definitely limiting and annoying regardless.

Moby Hick March 28, 2012 at 8:36 pm

Glad to see you went from slightly prissy to all-in-prissy. Now I have something to say.
Having both studied IR and done statistical work for medical research, I’m not that impressed by differences in the ease of finding or formulating testable hypotheses. You make the same kind of trade-offs among differing research priorities. You can pretend biology is different by pretending that looking at processes in isolation from an actual organism is anything but prep work for people doing something useful.

Hein Goemans March 28, 2012 at 9:44 pm

Here’s a debate that I finally do want to join, not least because I so highly respect the individuals behind it. I’ve posted on Dan’s blog and do have an ISA paper to finish, so I’ll be brief. I’m actually somewhat surprised at finding myself so deeply disagreeing with both Erik (V) and Dan. I do *not* think that graduate school is all about training; I think that graduate school is the once-in-a-lifetime opportunity for individuals to start building a framework of analysis (Rosenau) for how the world works. If they do manage to develop such a framework, this more or less guarantees — at least in my mind — that they’ll produce a set of interesting and coherent publications. I strongly believe, and tell prospective students, that all programs have their strengths and weaknesses; the best predictor of future success is whether students have the wits and initiative to compensate for their program’s weaknesses.

Finally, I’m deeply bothered by the fetishization of “theory”. I’m pretty sure that both Dan and Erik would be with me on this, but also pretty sure that 95% of the discipline isn’t: what’s wrong with a well-executed statistical analysis of carefully and thoughtfully collected data that generates some “surprising” and at least interesting patterns of behavior, but makes no grandiose claims to have a “theory” to explain these patterns? In my opinion, NOTHING. But any such article would be (almost??!) impossible to publish.

Mike March 29, 2012 at 9:00 am

Concerning theory fetishism, I think Stinchcombe had it exactly right:

“I usually assign students in a theory class the following task: Choose any relation between two or more variables which you are interested in; invent at least three theories, not known to be false, which might explain these relations; choosing appropriate indicators, derive at least three empirical consequences from each theory, such that the factual consequences distinguish among the theories. This I take to be the model of social theorizing as a practical scientific activity. A student who has difficulty thinking of at least three sensible explanations for any correlation that he is interested in should probably choose another profession.”

And I would venture a guess that many people object to pure stats papers for the same reason that many people object to pure history papers: they (rightly) see that without the explanatory part everything would simply grind to a halt, since that’s where new knowledge enters the picture (cf. C.S. Peirce on abduction).

Having the discipline overrun by people who think explanations are the devil and that everything would be much better if we’d just let the facts speak for themselves (*cough* like Gelman *cough*) would be a fate of comparable horror to having it overrun by French literary theorists. The gatekeeping is warranted.

Ves March 29, 2012 at 10:52 am

“We are increasingly realizing not only that causal inference is incredibly difficult but also that to do causal inference well, it is very hard to isolate more than one causal factor.”

This is indicative of the greater poverty of IR–that IR is “increasingly realizing” what is already obvious to most others studying politics.

Matt March 29, 2012 at 1:46 pm

Moby Hick: I do not pretend there are no problems of causal analysis in biology, but in the study of genetics and cellular processes, where I work, there is actually a physical system that can be experimented on. Yes, there are further issues in translating findings at that level up to the level where conclusions are actionable when considering entire organisms (say, humans) rather than cells or nuclei, but it is simply dishonest or misguided to pretend that the challenges and constraints there are not of an entirely different order, with vastly fewer limitations, than the constraints and limitations of making testable causal or theoretical claims in a field like IR.

Mike: My issue is that most correlations discoverable in a field like IR would plausibly be supported by a large number of theories between which the available data cannot distinguish. Gelman’s view seems to me not just an opinion but a fact. A little story time in which a researcher explores what is explicitly presented as his/her (probably untestable) view of the causal or theoretical basis for the observed correlations may be fine, but such story time is rarely presented that way and is then essentially dishonest research.
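To sketch the point in code (a toy Python simulation with made-up numbers, not real IR data): two mutually incompatible causal stories can generate observational data that look statistically identical, so no amount of correlation-hunting can tell them apart.

```python
# Two incompatible "theories" that an observer of (X, Y) alone cannot
# distinguish: (A) X causes Y directly; (B) a hidden factor Z drives both
# and X does nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Theory A: X -> Y.
x_a = rng.normal(size=n)
y_a = 0.5 * x_a + np.sqrt(0.75) * rng.normal(size=n)

# Theory B: Z -> X and Z -> Y; X has no causal effect on Y at all.
z = rng.normal(size=n)
x_b = np.sqrt(0.5) * z + np.sqrt(0.5) * rng.normal(size=n)
y_b = np.sqrt(0.5) * z + np.sqrt(0.5) * rng.normal(size=n)

# Both worlds yield standard-normal X and Y with the same correlation (~0.5),
# so the observational data cannot adjudicate between the theories.
print(np.corrcoef(x_a, y_a)[0, 1])
print(np.corrcoef(x_b, y_b)[0, 1])
```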

Moby Hick March 29, 2012 at 9:54 pm

…than the constraints and limitations of making testable causal or theoretical claims in a field like IR.

I thus tend to view pretty much all theoretical constructs in fields such as political science and IR as pure story time, unfalsifiable (at least within the next generation or two).

Can you form a hypothesis as to why those statements are so different as to yield different responses?

Matt March 30, 2012 at 3:40 pm

Those two sentences are making the same point in slightly different ways. Or, more precisely, the second is a consequence of the first. With so many limits on the ability to make testable (ie, not just plausible given observed histories and correlations) causal or theoretical claims, the vast majority of grand causal or theoretical claims are 100% story time.

Or do you just not have any response that can refute the points I and other critical posters have made, so you’re descending to childish snark?

Moby Hick March 31, 2012 at 9:52 pm

You defend the claim that IR has no falsifiable hypotheses first.

Moby Hick March 31, 2012 at 9:53 pm

Actually, your first statement was that all of political science was unfalsifiable. Given advances in opinion polling, that’s obviously wrong.

Matt April 2, 2012 at 7:13 pm

You’re still not really engaging with any points I’ve made. When points are made you can’t refute (points that have been made by myself and a number of others here) you ignore them and try to change the subject or counterattack. What gives? Isn’t responding to criticism and debate in a convincing way essentially your job?

But yes, I’ll still respond to your obfuscating redirection of the debate.

If we’re talking about grand unified theories of some large topic in politics, whether they describe a side of politics that would be classified as political science or IR, then yes, there are of course falsifiable theories (e.g., “ET controls who wins every election” is a falsifiable theory of politics, but doesn’t help your argument much). Still, I would defend the idea that there is always a large number of competing theories that are plausible given history and the data, and that are all unfalsifiable.

Without experimentation, in all but the rarest (and luckiest) cases no definitive causal claim can be made. In other words, in trying to answer almost any question of poli sci or IR you still only have observational data, so any causal claim you make that is plausibly consistent with the correlations you observe will be unfalsifiable but also unprovable by the available information.

The opinion polling riposte is essentially off topic. A standard political opinion poll gives lots of interesting correlations, but doesn’t have anything to say about causal or theoretical relationships.

Moby Hick April 2, 2012 at 9:40 pm

A standard political science article using survey data tests theoretical relationships or it wouldn’t be published. It does this very explicitly and often uses “natural experiments” based on questions asked across time both before and after historical events. Many explicitly test competing causal theories against each other. This is not a randomized controlled experiment. There are few experiments outside of political psychology (not that I think it is fair to ignore political psychology). But using a well-designed observational study to test a hypothesis isn’t very different from medical sciences. There are a whole host of questions that cannot be answered by experimentation for either ethical or practical reasons. This type of analysis is how I earn a living.
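A stylized sketch of that kind of design, with invented numbers (here a difference-in-differences comparison between a group the theory predicts should react to the event and one it predicts should not):

```python
# Sketch of a before/after survey comparison (all numbers are made up).
import numpy as np

rng = np.random.default_rng(2)

# Simulated approval scores (0-100) from surveys fielded around an event.
exposed_before   = rng.normal(50, 10, 500)  # group predicted to react
exposed_after    = rng.normal(44, 10, 500)  # ...drops about 6 points
unexposed_before = rng.normal(50, 10, 500)  # comparison group
unexposed_after  = rng.normal(49, 10, 500)  # drifts only slightly

# Difference-in-differences nets out the common drift; under the usual
# parallel-trends assumption it isolates the event's effect on the exposed.
did = (exposed_after.mean() - exposed_before.mean()) \
    - (unexposed_after.mean() - unexposed_before.mean())
print(round(did, 1))  # ~ -5: support for the hypothesized reaction
```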

With practice, you might become a good troll. Stick to it, but I’m done.

Matt April 5, 2012 at 3:02 pm

I appreciate a paragraph of response instead of snark.

Maybe I’m talking about something different from what you’re thinking of? Natural experiments are very useful when talking about, e.g., some economic or public policy questions. There is no such thing as a natural experiment exploring, say, whether a neo-realist or constructivist formulation of the nature of international relations is closer to the truth.

And regardless of exactly what kind of theoretical question you’re investigating, looking at survey data before and after historical events gives one almost no power to honestly assess causal relationships. No matter how many variables you control for, the fact that there are significant differences in poll results before and after some historical event only slightly decreases the number of causal claims one could make. Pretending that kind of data can prove a single specific causal claim is, again, dishonest or self-deceiving, or displays ignorance of the basic logic of causation.
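To sketch why (again with invented numbers): the very same before/after estimate falls out of a rival story in which the event does nothing and a concurrent shock does all the work.

```python
# Counter-sketch: an identical diff-in-diff estimate from a different cause.
import numpy as np

rng = np.random.default_rng(3)

# Here the event has zero effect; a concurrent shock (say, an economic
# downturn) happens to hit the "exposed" group harder than the comparison.
exposed_before   = rng.normal(50, 10, 500)
exposed_after    = rng.normal(44, 10, 500)  # the shock, not the event
unexposed_before = rng.normal(50, 10, 500)
unexposed_after  = rng.normal(49, 10, 500)  # shock barely touches them

did = (exposed_after.mean() - exposed_before.mean()) \
    - (unexposed_after.mean() - unexposed_before.mean())
print(round(did, 1))  # ~ -5 again: same estimate, different causal story,
                      # because the parallel-trends assumption silently failed
```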

