Does Research Collaboration Pay Off?

Lee Sigelman Jul 16 '09

Collaborative (multi-investigator) research continues to spread from the natural and formal sciences to the social sciences, and it’s supposed to be a good thing. Among the much-ballyhooed virtues of collaboration are that it’s supposed to enable researchers to divide their labor in ways that expand the range of pertinent literatures with which any one of them is familiar, to bring new methodological skills into play, and to produce synergies even when a team consists of researchers with very similar backgrounds and skill sets.

That all sounds great, but there’s distressingly little solid evidence that collaboration actually pays off. It’s pretty clear that researchers who work in groups tend to turn out more papers than lone wolves do, but are their papers really “better” in any meaningful sense? Collaboration has costs as well as benefits: because it often necessitates compromise, it may reduce risk-taking and innovation, leading to papers that are technically proficient but stale; and the outlooks, approaches, and preferences of the members of a multi-member research team may not meld well, yielding a patchwork product rather than an integrated whole.

In an article (gated; pre-publication version shown) in the current issue of PS: Political Science and Politics, I put one aspect of the alleged payoff of collaborative research – the publishability of the papers produced via collaboration – to the test, drawing on the record of submissions to the American Political Science Review during my six-year (fall 2001-summer 2007) stint as editor.

During that period, 7.5% of the papers that were submitted for review were ultimately accepted for publication. 55% of the submitted papers were single-authored; the rest were by teams ranging in size from two to nine. The overall acceptance rates of single- and multiple-authored papers were essentially identical (7.5% and 7.4%, respectively). That doesn’t tell the whole story, though, because acceptance rates varied among papers from different subfields of political science and from different disciplines. With those differences taken into account, the following picture emerged:
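The comparison above boils down to simple arithmetic: accepted papers divided by submissions, computed separately for each group. A minimal sketch, with the caveat that the submission counts below are invented for illustration (the article reports only the rates, not the counts I use here):

```python
def acceptance_rate(accepted: int, submitted: int) -> float:
    """Acceptance rate as a percentage, rounded to one decimal place."""
    return round(100 * accepted / submitted, 1)

# Hypothetical counts chosen only to reproduce the reported rates
# (7.5% for single-authored, 7.4% for multiple-authored papers).
single = {"submitted": 1000, "accepted": 75}
multiple = {"submitted": 820, "accepted": 61}

print(acceptance_rate(single["accepted"], single["submitted"]))     # 7.5
print(acceptance_rate(multiple["accepted"], multiple["submitted"])) # 7.4
```

As the article notes, such near-identical overall rates can still mask differences once submissions are broken out by subfield and discipline, which is exactly the pattern described next.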

(1) Overall, whether a paper was submitted by a single author or by two or more authors had no bearing on its chances of being accepted.

(2) But within that general pattern, a more specific pattern stood out. Single-authored papers did no worse than multiple-authored ones as long as at least one of the authors of the latter was a political scientist. Papers submitted by a single “outsider” fared much more poorly than those submitted by a single political scientist, multiple political scientists, or mixed teams with at least one political scientist. As I noted in the article, typically the single “outsider” was an economist, and their lack of success is consistent with the unflattering stereotype of economic imperialists “marching into neighboring disciplines without making much effort to acquaint themselves with those disciplines’ research literatures. Shortchanging contributions from outside of one’s own discipline might matter little when a paper is being considered by a journal in one’s own discipline. But seeking acceptance of one’s work without paying due heed to prior research in the journal’s own discipline … is likely to be self-defeating.”

This is a topic on which more research needs to be done, across a wide variety of journals and with a wide variety of measures of the “payoff” of collaboration.