
Clairvoyance and Bad Journal Policies

June 26, 2011

Carl Zimmer’s article in the Sunday New York Times on the struggles scientists face in correcting erroneous published results is interesting throughout, but this part struck me in particular:

In March, for instance, Daryl Bem, a psychologist at Cornell University, shocked his colleagues by publishing a paper in a leading scientific journal, The Journal of Personality and Social Psychology, in which he presented the results of experiments showing, he claimed, that people’s minds could be influenced by events in the future, as if they were clairvoyant.

Three teams of scientists promptly tried to replicate his results. All three teams failed. All three teams wrote up their results and submitted them to The Journal of Personality and Social Psychology. And all three teams were rejected — but not because their results were flawed. As the journal’s editor, Eliot Smith, explained to The Psychologist, a British publication, the journal has a longstanding policy of not publishing replication studies. “This policy is not new and is not unique to this journal,” he said.

As a result, the original study stands.

Wow! I just can’t think of a single good excuse for refusing replications as a matter of policy. No one would think of publishing an experiment showing that people are not clairvoyant in the absence of evidence otherwise. Suppose I were a believer and wished to spread the word. Even if I ran a kosher experiment, I would have a five percent chance of accidentally getting a result that shows a “statistically significant” effect of clairvoyance (and no one would stop me from running many such experiments and not publishing the null results). Under these journal policies, once the improbable result is in print, no one could get on the record showing that my results were bs (or, phrased more kindly, the consequence of random noise). Yes, getting working papers out is great, but professional rewards are allocated based on publications in top journals, not on making the newspaper for showing that people aren’t clairvoyant after all.
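To make that five percent concrete, here is a minimal simulation sketch (my own illustration, not anything from the Times piece or from Bem’s paper; the sample size, the normally distributed null “precognition scores,” and the one-sample t-test are all assumptions). Every simulated experiment has no real effect, yet about five percent come out “statistically significant” at the conventional 0.05 threshold:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_subjects = 100        # hypothetical sample size per experiment
n_experiments = 10_000  # how many null experiments we simulate

# Each row is one experiment: per-subject "precognition scores" drawn
# from a null distribution with true mean zero (no clairvoyance).
scores = rng.normal(loc=0.0, scale=1.0, size=(n_experiments, n_subjects))

# One-sample t-test of each experiment's mean score against zero.
pvals = stats.ttest_1samp(scores, popmean=0.0, axis=1).pvalue

print(f"fraction 'significant': {(pvals < alpha).mean():.3f}")  # ~0.05

# File-drawer arithmetic: the chance that at least one of k such
# experiments "succeeds" by luck alone is 1 - (1 - alpha)^k.
for k in (1, 5, 9):
    print(f"k = {k}: P(at least one false positive) = {1 - (1 - alpha)**k:.2f}")
```

The last loop is the file-drawer point in numbers: with nine experiments, chance alone gives roughly a 37 percent shot at least one spurious hit, before any selective reporting even starts.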

I don’t know whether any political science journals have similar policies, but I sure hope not.

P.S. The original clairvoyance paper reports nine experiments, and I really can’t judge its quality, just its a priori improbability coupled with the failure of three research teams to replicate its findings. The point I am trying to make is more general.