Is Peer Review Broken?

A very discomfiting article (gated):

A growing interest in and concern about the adequacy and fairness of modern peer-review practices in publication and funding are apparent across a wide range of scientific disciplines. Although questions about reliability, accountability, reviewer bias, and competence have been raised, there has been very little direct research on these variables.

The present investigation was an attempt to study the peer-review process directly, in the natural setting of actual journal referee evaluations of submitted manuscripts. As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.

With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected. Sixteen of the 18 referees (89%) recommended against publication and the editors concurred. The grounds for rejection were in many cases described as “serious methodological flaws.” A number of possible interpretations of these data are reviewed and evaluated.


[UPDATE: A commenter draws my attention to the publication date of this article: 1982.  That’s what happens when you learn about something from Twitter: you assume it just happened today.  Sorry.  But this raises an interesting question: would the same thing happen again if this study were done today?]

16 Responses to Is Peer Review Broken?

  1. Peace Keeping Grad Student May 24, 2013 at 1:35 pm #

    Well if it is broken, it’s been broken for a very long time and no one has done anything about it. This article was published over thirty years ago.

  2. Kuze May 24, 2013 at 1:41 pm #

    See also Mahoney 1977 http://people.stern.nyu.edu/wstarbuc/Writing/Prejud.htm

  3. Kevin May 24, 2013 at 2:38 pm #

    It seems likely that the figure behind “only three (8%) detected the resubmissions” would be higher if the study were redone. It is a lot less costly in 2013 to hunt around when the “didn’t I see something like this?” question pops up in the reviewer’s head (a quick Google / Google Scholar search vs. a massive stack of old dusty journals on the bookshelf). However, the core problem is probably the same.

  4. Eliot May 24, 2013 at 7:13 pm #

    Many publishers now examine all new manuscript submissions with CrossCheck (http://www.ithenticate.com/products/crosscheck/) or something similar, and more will soon. So this particular type of plagiarism is not likely to survive for very long.
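
    For intuition, here is a minimal sketch of the kind of check such a tool might run. To be clear, this is not CrossCheck’s actual algorithm (its internals are not described here), and the sample strings and 0.8 threshold below are made up; it only illustrates the generic technique of flagging heavy text overlap between a submission and already-published work:

    ```python
    # Toy near-duplicate check: Jaccard similarity over k-word shingles.
    # Not CrossCheck's algorithm; just one common technique for the job.

    def shingles(text, k=5):
        """Set of overlapping k-word windows from the text."""
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 0))}

    def similarity(doc_a, doc_b, k=5):
        """Jaccard similarity of shingle sets: 0 (disjoint) to 1 (identical)."""
        a, b = shingles(doc_a, k), shingles(doc_b, k)
        return len(a & b) / len(a | b) if a and b else 0.0

    published = "We recruited 120 undergraduates from a large university and ..."
    resubmitted = "We recruited 120 undergraduates from a large university and ..."

    # A resubmission with only names and affiliations swapped scores near 1.0;
    # unrelated manuscripts score near 0.
    if similarity(resubmitted, published) > 0.8:
        print("flag for editor: likely duplicate of prior publication")
    ```

    Real services compare against large full-text databases rather than a single candidate, but the upshot for the 1982 design is the same: a verbatim resubmission would be trivially flagged today.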

  5. John Griffin May 26, 2013 at 12:15 am #

    Yes, it would be more difficult to re-run the experiment today. But the bigger point is whether there is any reason at all to believe that the outcome would be different. Is there any reason to believe that these biases are less evident in political science than in psychology? Any reason to believe that editors and reviewers are less influenced by pedigree than by product in 2013 than they were in 1982?

    Perhaps the most benign defense I can think of for the result reported in the study is that publication is always somewhat arbitrary, even when a manuscript is (arguably) excellent. Now, if the same reviewers had recommended rejecting the manuscripts they had previously recommended be published, that would be almost fatal evidence against the value of peer review. In this case, there was continuity of editor, but not of reviewers, which may be the first nail in the coffin of peer review.

  6. Dude May 26, 2013 at 12:25 pm #

    More recent research has shown plenty of evidence suggesting that reviewers have biases – a quick Google search led me here, though I am sure this is just the tip of the iceberg.

    http://blogs.nature.com/peer-to-peer/2008/01/doubleblind_peer_review_reveal.html

    I don’t know if anyone has examined whether the prestige of the author’s affiliation matters, but I would bet that it does as well.

  7. Jessica May 26, 2013 at 2:32 pm #

    If not peer review, then what? Is there any suitable improvement to our system?

  8. DN May 26, 2013 at 6:24 pm #

    Many, many disciplines study their own peer-review processes. This kind of experiment is a dime a dozen in a number of social and natural sciences. Political scientists, on the other hand, seem comparatively uninterested in the fact that we rely on a problematic system to allocate prestige and evaluate careers.

  9. Peace Keeping Grad Student May 26, 2013 at 8:12 pm #

    I think a lot of people commenting are missing the main point. I do not think the main point is “lol, fooled the reviewers by resubmitting the same article” (which is certainly a problem!); rather, it’s that the same manuscripts were rejected when less elite names and institutions were attached to them.

    To the commenter who asked what the alternative to peer review is: I say, is peer review needed at all? I am confident in my ability to judge for myself whether an article is interesting or important; I do not really need someone else to do that for me. So instead of journals, as we are increasingly seeing, scholars can use blogs, SSRN, and email listservs to distribute their work. If something gains steam and a journal wants to publish it, cool.

    • Dan May 26, 2013 at 11:32 pm #

      Peer review in some form is useful in establishing a collective judgment through a kind of crowd-sourcing. This collective judgment has the potential to be better than any individual judgment, because the “wisdom of crowds” tends to wash out individual biases. Even the best of us have those biases — I don’t necessarily trust myself to judge an individual article alone. It’s not about interest or importance, but rather validity and objectivity. Methodology needs to be verified by those knowledgeable about the methodology, and so on.

      Perfection is not an option here, but if anything is broken in this case, it seems more useful to try to find a way to fix it than to simply abandon it. A little good peer review goes a long way in weeding out garbage, above and beyond good editorial capacity, which nevertheless cannot be deeply expert in everything that is published in a highly siloed and specialized society.
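
      To make the “wisdom of crowds” point concrete, here is a toy simulation; all of the numbers are hypothetical, chosen only to illustrate the mechanism. Averaging reviewers shrinks each reviewer’s independent noise roughly as 1/sqrt(n), but it does nothing to a bias the reviewers share (a prestige effect, say), and at the n of 3 or 4 typical of journal review even the noise is barely reduced:

      ```python
      # Toy model: reviewer score = true quality + shared bias + own noise.
      # All numbers are hypothetical, chosen only to illustrate the mechanism.
      import random
      import statistics

      TRUE_QUALITY = 5.0   # score an ideal, unbiased reviewer would give
      SHARED_BIAS = -1.0   # bias common to all reviewers (e.g., prestige effects)
      NOISE_SD = 2.0       # each reviewer's independent idiosyncratic error

      def panel_score(n):
          """Average score from a panel of n reviewers."""
          return statistics.mean(
              TRUE_QUALITY + SHARED_BIAS + random.gauss(0, NOISE_SD)
              for _ in range(n)
          )

      random.seed(0)
      for n in (3, 30, 300):
          scores = [panel_score(n) for _ in range(10_000)]
          print(f"n={n:3d}: mean {statistics.mean(scores):.2f}, "
                f"spread {statistics.stdev(scores):.2f}")

      # The spread (noise) falls as the panel grows, but every mean stays near
      # 4.0, not 5.0: bigger crowds wash out idiosyncrasy, not shared bias.
      ```

      The model is a caricature, of course; the point is only to separate the failure mode that more reviewers can fix (idiosyncratic noise) from the one they cannot (a bias the whole pool shares).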

      • Peace Keeping Grad Student May 27, 2013 at 9:04 pm #

        I think crowd-sourcing would be more effective outside the constraints of peer review. Peer review will typically consist of three reviewers plus the editor. That is obviously not enough to wash out the biases (see the article posted here!) and might even exacerbate them: those at elite institutions with light course loads are more likely to have the time to formally serve as referees. Outside of those constraints, the crowd is the whole academic community interested in the subject of the article. If something is good, it will get passed along by the community in forums such as the Monkey Cage, email groups, and general communication with colleagues. If something is bad or methodologically flawed in some way, it will be passed over.

        • Jessica May 27, 2013 at 10:03 pm #

          Reddit for research articles. You heard it here first!

          • Peace Keeping Grad Student May 28, 2013 at 9:34 am #

            Seriously, I would not have a problem with that. What do you think the problems would be?

        • Nate May 30, 2013 at 1:39 pm #

          Crowd-sourcing with n <= 4 seems like a contradiction in terms. Given the number of people producing work, n <= 4 seems too small.

  10. W.M. May 26, 2013 at 8:20 pm #

    Maybe this blog post needed to be peer reviewed. 😛

  11. Fran May 30, 2013 at 3:49 pm #

    And what about some kind of arXiv for the social sciences?