
Journal of Experimental Political Science

- July 16, 2013

The American Political Science Association is coming out with a new journal:

The Journal of Experimental Political Science features research – be it theoretical, empirical, methodological, or some combination thereof – that utilizes experimental methods or experimental reasoning based on naturally occurring data. We define experimental methods broadly: research featuring random (or quasi-random) assignment of subjects to different treatments in an effort to isolate causal relationships between variables of interest. JEPS embraces all of the different types of experiments carried out as part of political science research, including survey experiments, laboratory experiments, field experiments, lab experiments in the field, natural and neurological experiments.

We invite authors to submit concise articles (around 2500 words) that immediately address the subject of the research (although in certain cases initial submissions can be longer than this limit with the understanding that if accepted the paper will be shortened within the word constraints). We do not require lengthy explanations regarding and justifications of the experimental method. Nor do we expect extensive literature reviews of pros and cons of the methodological approaches involved in the experiment unless the goal of the article is to explore these methodological issues. We expect readers to be familiar with experimental methods and therefore to not need pages of literature reviews to be convinced that experimental methods are a legitimate methodological approach. We also consider more lengthy articles in appropriate cases, as in the following examples: when a new experimental method or approach is being introduced and discussed, when a meta-analysis of existing experimental research is provided, or when new theoretical results are being evaluated through experimentation and the theoretical results are previously unpublished. Finally, we strongly encourage authors to submit null or inconsistent results from well-designed, executed, and analyzed experiments as well as replication studies of earlier experiments.

This looks good to me. There's only one thing I'm worried about. Regular readers of the sister blog will be aware that there's been a big problem in psychology: top journals have been publishing weak papers that generalize to the population from Mechanical Turk samples and college students, with enough researcher degrees of freedom to guarantee that statistical significance will turn up somewhere, and with sample sizes so small that any statistically significant finding is likely to be mostly noise, so there's no particular reason to expect the patterns in the data to generalize to the larger population. A notorious recent example was a purported correlation between ovulation and political attitudes.
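To make that last point concrete, here is a rough simulation sketch (mine, not from the post; the true effect of 0.1, the noise level of 1, and the 20 subjects per group are made-up numbers chosen only for illustration). It shows why, in a low-powered study, the estimates that clear the p < 0.05 bar are not just noisy but necessarily exaggerated, and sometimes even have the wrong sign:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) setup: a small true effect, noisy outcome,
# and a small experiment with n subjects per group.
true_effect = 0.1
sd = 1.0
n = 20
n_sims = 100_000

# Simulate many two-group experiments and look at the "significant" ones.
control = rng.normal(0.0, sd, size=(n_sims, n))
treated = rng.normal(true_effect, sd, size=(n_sims, n))
estimate = treated.mean(axis=1) - control.mean(axis=1)
se = np.sqrt(control.var(axis=1, ddof=1) / n + treated.var(axis=1, ddof=1) / n)
z = estimate / se
significant = np.abs(z) > 1.96

print(f"power (share of results that are significant): {significant.mean():.3f}")
print(f"mean |estimate| among significant results: "
      f"{np.abs(estimate[significant]).mean():.2f}  (true effect is {true_effect})")
print(f"share of significant results with the wrong sign: "
      f"{(estimate[significant] < 0).mean():.3f}")
```

With these made-up numbers the power is only around 6 percent, and an estimate has to be roughly six times the true effect just to cross the significance threshold, so the rare "significant" finding is essentially guaranteed to overstate the effect.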

For some reason I seem to hear more about these sorts of papers in psychology than in poli sci (there was this paper by some political scientists, but it was not published in an actual poli sci journal).

Just to be clear: I’m not saying that the scientific claims being made in these papers are necessarily wrong, it’s just that these claims are not supported by the data. The papers are essentially exercises in speculation, “p=0.05” notwithstanding.

And I’m not saying that the authors of these papers are bad guys. I expect that they mostly just don’t know any better. They’ve been trained that “statistically significant” = real, and they go with that.

Anyway, I’m hoping this new journal of experimental political science will take a hard line and simply refuse to publish small-n experimental studies of small effects. Sometimes, of course, small-n is all you have, for example in a historical study of wars or economic depressions or whatever. But there you have to be careful to grapple with the limitations of your analyses. I’m not objecting to small-n studies of important topics. What I’m objecting to is fishing expeditions disguised as rigorous studies. In starting this new journal, we as a field just have to avoid the trap that the journal Psychological Science fell into, of seeming to feel an obligation to publish all sorts of iffy stuff that happened to combine headline-worthiness with (spurious) statistical significance.

P.S. I wrote this post last night and scheduled it to appear this morning. In the meantime, Josh posted more on this new journal. I hope it goes well.
