
New political science initiative calls for evaluating research before knowing the results

August 19, 2016

Wednesday saw the announcement of the Election Research Preacceptance Competition. The goal of the competition is to use the relatively novel idea of “pre-acceptance” to improve scientific research that uses the 2016 American National Election Study (ANES) to investigate the dynamics of this, um, interesting election.

Many aspects of this competition are unusual and promising. I talked to the organizers, political scientists Arthur Lupia and Brendan Nyhan, for details.

In a nutshell, how does this competition work?

Our competition will reward scholars who publish qualifying articles using ANES data with a $2,000 award and placement on a spotlight panel at an upcoming conference.

To enter, scholars must design a study using ANES data and preregister their analysis before the data are publicly available. They can then submit an article including this design to a participating journal, which includes many of the top outlets in the discipline (American Journal of Political Science, American Political Science Review, American Politics Research, Political Analysis, Political Behavior, Political Science Quarterly, Political Science Research and Methods, Public Opinion Quarterly, State Politics and Policy Quarterly).

These journals have agreed to review competition entries before ANES data are available and to accept them in principle while everyone — authors, reviewers, and editors — is blind to the results.

After the data are publicly available, authors will submit their final articles to the journals, where they will then be checked to ensure they comply with the quality standards of the journal. (See the competition FAQ and rules for more information.)

What is pre-acceptance and why should we do it?

Distortions created by the journal review process are harming the quality of published research across scientific disciplines. Numerous studies have found that published findings are disproportionately likely to be statistically significant, while null results are frequently rejected or relegated to the file drawer. This pattern creates perverse incentives for researchers and contributes to the low rate of success when researchers try to replicate published findings.

One approach to addressing this problem is preregistration, which allows researchers to specify their hypotheses and procedures before they analyze their data. However, this approach does not solve the problem of publication bias — reviewers and editors can still condition their decisions on statistical significance when considering preregistered articles, which in turn encourages authors to relegate null results to the file drawer.

Pre-acceptance allows studies to be reviewed based on the theoretical contribution and the quality of the research design rather than whether the author’s expectations were supported. (We provide a more detailed rationale for pre-acceptance on the competition website; see also Nyhan’s article in PS recommending the approach for political science.)

[interstitial_link url="https://www.washingtonpost.com/blogs/monkey-cage/wp/2015/03/13/how-to-make-scientific-research-more-trustworthy/"]How to make scientific research more trustworthy[/interstitial_link]

Have other journals or scholarly outlets experimented with pre-acceptance? What has been their experience?

Pre-acceptance is starting to come into wider use through the Registered Reports initiative, which has helped popularize the format (disclosure: Nyhan is on the ad hoc committee behind this effort). It remains in limited use overall but is being adopted by a growing number of psychology and neuroscience journals.

Within political science, the only use of the format that we know of to date is in a forthcoming special issue of Comparative Political Studies. The guest editors report that “results-free review encourages much greater attention to theory and research design,” but “raises thorny problems about how to anticipate and interpret null findings” (see this Q&A for further discussion).

We view these results as encouraging — more attention to theory and research design is needed! We hope the competition will help the discipline learn how to most effectively implement this format and encourage other journals to follow the lead of the Journal of Experimental Political Science, whose editor just announced that it will implement a permanent Registered Reports submission track later this year.

Why have this competition using the 2016 ANES in particular? Are there potential pitfalls in analyzing the ANES that pre-acceptance helps to mitigate?

The ANES offers a high-profile opportunity for scholars to design a true out-of-sample study. The questionnaires are finalized long before data are publicly available, allowing scholars to design studies and submit articles using those designs before the data can be examined. Such an approach helps address concerns about data mining, specification searches, and p-hacking that are often raised about any well-known public data set, including the ANES.

In addition, we hope that the contest will renew interest among scholars in using the ANES for confirmatory tests or preregistered analyses that complement existing exploratory studies. The competition rules explicitly allow entrants to use other data; no one is required to use ANES data alone (see our FAQ for more).

There’s a cash prize for published articles, which is a rarity in academic publishing. How did this come to be funded by the Arnold Foundation, and how does it fit with their priorities?

The Arnold Foundation has been a pioneer in funding scientific reform efforts, including our partners at the Center for Open Science, which is running The $1,000,000 Preregistration Challenge with their support. One of us (Lupia) approached Arnold and they were immediately supportive of our idea, which fits with their goal of improving the transparency and reliability of research across the social and natural sciences. We are extremely grateful for their support and hope that the cash prizes will help entice many scholars to enter the competition.