Clairvoyance and Bad Journal Policies

Carl Zimmer’s article in the Sunday New York Times on how hard it is for scientists to correct erroneous published results is interesting throughout, but this part struck me in particular:

In March, for instance, Daryl Bem, a psychologist at Cornell University, shocked his colleagues by publishing a paper in a leading scientific journal, The Journal of Personality and Social Psychology, in which he presented the results of experiments showing, he claimed, that people’s minds could be influenced by events in the future, as if they were clairvoyant.

Three teams of scientists promptly tried to replicate his results. All three teams failed. All three teams wrote up their results and submitted them to The Journal of Personality and Social Psychology. And all three teams were rejected — but not because their results were flawed. As the journal’s editor, Eliot Smith, explained to The Psychologist, a British publication, the journal has a longstanding policy of not publishing replication studies. “This policy is not new and is not unique to this journal,” he said.

As a result, the original study stands.


Wow! I just can’t think of a single good excuse for refusing replications as a matter of policy. No one would think of publishing an experiment showing that people are not clairvoyant in the absence of evidence to the contrary. Suppose I were a believer and wished to spread the word. Even if I ran a kosher experiment, I would have a five percent chance of accidentally getting a result that shows a “statistically significant” effect of clairvoyance (and no one would stop me from running many such experiments and shelving the null results). Under these journal policies, once the improbable result was in print, no one could get on the record showing that my results were BS (or, phrased more kindly, the consequence of random noise). Yes, getting working papers out is great, but professional rewards are allocated based on publications in top journals, not on making the newspaper for showing that people aren’t clairvoyant after all.
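
A quick simulation makes the five-percent point concrete. This is a minimal sketch of my own, not anything from Bem’s paper: the 10,000 experiments, the 50 subjects apiece, and the one-sample t-test are made-up illustration, using the usual alpha = 0.05 cutoff.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.05

    # 10,000 hypothetical experiments in a world with no clairvoyance.
    # Each subject's score is their deviation from chance guessing, so
    # under the null hypothesis the true mean is zero.
    scores = rng.normal(0.0, 1.0, size=(10_000, 50))
    pvals = stats.ttest_1samp(scores, 0.0, axis=1).pvalue
    print(f"share 'significant' at alpha={alpha}: {(pvals < alpha).mean():.3f}")  # ~0.05

    # The file drawer: run k experiments, publish only the "hits".
    for k in (1, 5, 10, 20):
        print(f"P(at least one false positive in {k} tries) = {1 - (1 - alpha) ** k:.2f}")

The last loop is the file drawer in miniature: a believer who runs twenty experiments and reports only the hits has about a 64 percent chance of landing at least one publishable false positive.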

I don’t know if any political science journals have similar policies but I sure hope not.

P.S. The original clairvoyance piece has nine experiments, and I really can’t judge its quality, just its a priori improbability coupled with the failure of three research teams to replicate the findings. The point I am trying to make is more general.
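
A back-of-the-envelope calculation on those three failed replications, assuming (and the 80 percent figure is purely my assumption, not anything reported) that each replication attempt had an 80 percent chance of detecting a real effect:

    # Assumed, not reported: each replication has 80% power against a real effect.
    power, alpha = 0.80, 0.05

    p_all_fail_if_real = (1 - power) ** 3  # 0.008: three misses are rare if the effect is real
    p_all_fail_if_null = (1 - alpha) ** 3  # ~0.857: three misses are expected if it isn't

    # How strongly three failures favor "no effect" over "real effect"
    print(p_all_fail_if_null / p_all_fail_if_real)  # ~107

Under those assumptions, three straight failures shift the odds against clairvoyance by a factor of about a hundred, on top of whatever a priori improbability you started with.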

6 Responses to Clairvoyance and Bad Journal Policies

  1. Jon Baron June 26, 2011 at 6:55 pm #

    The journal Judgment and Decision Making, which I edit, welcomes “attempted replications of surprising results”. We have published several already.

  2. Bill Clark June 26, 2011 at 7:33 pm #

    I don’t know of any political science journals that have that as a policy, but in practice I don’t see political science journal editors as eager to publish papers that “merely” demonstrate that papers they have published are incorrect. Some years ago, for example, my co-authors and I received a review from the APSR saying that the APSR should “be a journal of ideas, not statistical details.” In this case the detail was whether the coefficient in the article we were criticizing, which the authors interpreted as overturning 40 years’ worth of research, was positive or negative. The editors (not the current editors) did not distance themselves from this comment in their rejection letter.

  3. ricketson June 26, 2011 at 7:55 pm #

    If it is simply a matter of replication, then does it require an entire article, or is a short “response” sufficient (as I’ve frequently seen in Science)?

  4. Matt G. June 27, 2011 at 8:06 am #

    Bill, if you remember, the second review said that the original article was so obviously wrong that it was unnecessary to publish a replication showing this. The original article now has close to 150 citations!

  5. jacob June 27, 2011 at 8:30 am #

    If 9 experiments all found an effect, the chance of that happening by luck alone is far lower than 5%: at a 5% false-positive rate per test, nine independent false positives would occur with probability 0.05^9, about 2 in a trillion. In practice, if I found the same results after 9 tries, I would certainly believe the results, no matter how strange they were. So something other than random chance probably explains these bizarre results.

  6. Ben Hayden June 27, 2011 at 12:08 pm #

    What I don’t get is this: did JPSP’s editors think about the long-term consequences for their journal?

    When they accepted the article, they must have known there would be failed replications (unless they actually believed the results). They must have known they would reject those failed replications, and they must have known this information would get out. And that has got to hurt their reputation. What happens the next time they publish an incendiary or counterintuitive result? People will say, “Well, that’s the journal that published the ESP stuff.” I know that if I had possibly controversial results, I would avoid JPSP for that reason.

    Is it possible JPSP’s editors actually believe in ESP?