
This new initiative is trying to make scientific research more reliable

June 8, 2017

There has been a remarkable shift in the scientific study of politics over the past 20 years — and important changes in how academic research can inform policy. In the United Kingdom, the Behavioural Insights Team carries out experimental studies of policies aimed at making government initiatives more efficient and effective. Similarly, in the United States, the Office of Evaluation Sciences works with federal agencies to design and analyze rigorous evaluations aimed at improving the implementation of government policies and programs.

What are the barriers that remain, and what are the next steps? The Evidence in Governance and Politics (EGAP) network has been at the forefront of developing new research techniques and funding models to improve the ability of academic research to speak to important policy debates.

This Friday, the EGAP Evidence Summit takes place in Washington, where the results from the Metaketa Initiative will be unveiled. (Registration is still open.) This new effort aims to sponsor coordinated research projects on a single topic — to better inform policy choices. In advance of this meeting, I spoke with two of the organization’s members, University of California at Berkeley professors Susan Hyde and Thad Dunning. What follows is a lightly edited transcript of our conversation:

Joshua Tucker (JT): EGAP has been a vocal advocate of the use of randomized controlled trials (RCTs) in political science research. Could you briefly explain what these are, as well as the advantages of this type of approach to politics-related research?

Susan Hyde (SH): Many who work in international development or focus on promoting democracy have experienced a push for more rigorous evaluations of their programs. RCTs are considered the “gold standard” of research design because of their unparalleled ability to demonstrate cause and effect. Although Thad and I both use multiple methods in our work, we see unrealized opportunities to use field-experimental research on questions that are important for academics and practitioners.

An RCT, as its name suggests, randomizes an important element of the study or program — this makes for greater confidence that a particular intervention caused an outcome. By randomizing who receives a program or intervention (and EGAP also pays careful attention to potential ethical problems), researchers can be very confident that there is no systematic relationship between the assignment of treatment and any other factors. RCTs also have a lot of potential to uncover unanticipated effects of interventions, and to be used iteratively over time.
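To make that logic concrete, here is a minimal sketch in Python of the core of an RCT: random assignment followed by a difference-in-means estimate. The population, effect size and noise below are hypothetical, invented purely for illustration; this is not code from any EGAP study.

```python
# Minimal, illustrative sketch of an RCT's core logic (hypothetical data,
# not EGAP's actual protocol).
import random

random.seed(42)  # for reproducibility

# Hypothetical study population with a background trait ("baseline") that
# could otherwise confound comparisons between groups.
population = [{"id": i, "baseline": random.gauss(50, 10)} for i in range(1000)]

# Random assignment: each unit's treatment status is independent of its
# baseline trait, so the groups are balanced in expectation.
for unit in population:
    unit["treated"] = random.random() < 0.5

# Simulate outcomes with an assumed true treatment effect of +2.0.
for unit in population:
    effect = 2.0 if unit["treated"] else 0.0
    unit["outcome"] = unit["baseline"] + effect + random.gauss(0, 5)

treated = [u["outcome"] for u in population if u["treated"]]
control = [u["outcome"] for u in population if not u["treated"]]

# Because assignment was random, the simple difference in means is an
# unbiased estimate of the average treatment effect.
ate_estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"Estimated average treatment effect: {ate_estimate:.2f} (true: 2.0)")
```

Because assignment is random, the baseline trait is, in expectation, identical across the two groups, so the difference in means isolates the intervention's effect rather than preexisting differences.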

JT: What do you see as the principal shortcomings of RCTs?

Thad Dunning (TD): RCTs are typically carried out one at a time, with too little attention to how knowledge can be built over time. These individual evaluations are excellent at demonstrating whether a particular intervention works at a particular moment in time.

But this is only part of the information researchers and policymakers need. An individual RCT can't necessarily demonstrate the conditions under which a program is most likely to work, whether programs have similar effects in different contexts (such as different countries), when an intervention is most likely to be cost-effective, or, perhaps most importantly, when common interventions do not work as intended. Here's a downside, for instance: Right now, we're drawing major lessons on the effectiveness of community-based monitoring of health workers for health gains, or on the advantages of disseminating information about politicians' corruption, all from single studies.

Other important problems include publication bias; “fishing” for statistically significant findings; an inability to learn from failed studies; and differences in research design that make it very difficult to aggregate findings from related studies.

JT: So what is the Metaketa Initiative, and how will it address existing shortcomings in RCT research?

TD: The Metaketa Initiative aims to produce simultaneous replication of research across multiple contexts. A Metaketa (from the Basque word for "accumulation") is a coordinated, multisite research grant round designed to foster knowledge accumulation: a cluster of integrated studies, carried out with governmental and nongovernmental partners, that addresses a shared larger question. An unusual feature of these grant-making rounds is that innovation is not the primary goal; they focus largely on the consolidation of knowledge.

SH: One important aspect is also the focus on policy-relevant knowledge. The committee leading the inaugural Metaketa (which includes the two of us, as well as Guy Grossman, Macartan Humphreys and Craig McIntosh) worked collaboratively with researchers to identify a tractable research question that has implications for development assistance programming around the world. For this round, the question was whether providing objective information to voters on the performance of governing candidates or political parties before elections changes their behavior.

JT: On Friday you will unveil your findings from this first round of Metaketa Initiative studies. What were the most important substantive findings?

SH: We can’t share that until June 9! We will unveil the joint findings for the first time at the Evidence Summit on June 9 (again, more info here). Our results will also be published next year in a book with Cambridge University Press.

TD: The results are under wraps because we’re using the event to evaluate experimentally some hypotheses about the effects of different ways of presenting accumulated evidence. We hope the Summit will be a useful experience for practitioners, policymakers and researchers alike.

[JT: We will update this post after Friday to add the answer to my original question here… And here it is:

SH: We were surprised at how consistent the findings were across all six of the studies, and the results were not what we expected. Overall, the studies suggest that it is very difficult to influence voters with information in the lead-up to elections. Each of the six studies, as well as the meta-analysis, found no evidence that receiving what we defined as "bad" news about candidates or parties affected voter turnout or vote share. Some of the individual studies conducted secondary analyses and found significant effects of the intervention in more competitive districts, or among certain subsets of the population.

TD: What is really stunning is how these studies collectively lead to a more powerful conclusion than one would get from a single study, or even several uncoordinated studies. With a single study, implementation failures or specificities of context might explain null results. But pooling similar interventions across these different studies gives us a lot of statistical power, and overall there is very little evidence of impact for the common intervention.

SH: It is important to note, however, that even though these results are pretty striking, they do not mean that information never matters. Rather, it may just be very hard to influence voters in the lead-up to elections for a whole host of reasons. But the results do suggest that informational interventions like the ones common to all six studies do not work as a broad, one-size-fits-all programmatic intervention.]
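Thad's point about pooling can be illustrated with a simple fixed-effect meta-analysis: weighting each study's estimate by its precision shrinks the pooled standard error well below any single study's. The effect estimates and standard errors below are hypothetical placeholders, not the actual Metaketa results, and the initiative's real meta-analysis is more involved than this sketch.

```python
# Illustrative inverse-variance (fixed-effect) meta-analysis showing why
# pooling coordinated studies increases power. All numbers are made up.
import math

# (effect estimate, standard error) for six hypothetical studies
studies = [(0.01, 0.04), (-0.02, 0.05), (0.00, 0.03),
           (0.03, 0.06), (-0.01, 0.04), (0.01, 0.05)]

# Weight each study by its precision (1 / variance).
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# The pooled standard error is smaller than any individual study's SE,
# which is what lets a coordinated cluster detect (or rule out) effects
# that a single study cannot.
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:+.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
for est, se in studies:
    print(f"  single study: {est:+.2f} +/- {1.96 * se:.2f}")
```

With per-study standard errors of 0.03 to 0.06, the pooled standard error in this toy example falls to roughly 0.017, so a tight confidence interval around zero becomes informative evidence of no effect rather than an inconclusive null.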

JT: And what did you learn about the methodology?

TD: There are a lot of reasons that coordinated research is difficult. We think that the structure of our initiative — including harmonized interventions, built-in replication, integrated publication and other features — helps to avoid problems such as publication bias, fishing and the inability to learn from failed studies.

Overall, the results really do lead to a more powerful conclusion than any single study would have produced. We also think that the planned integration of the studies allows better synthesis than we would achieve through an equal number of uncoordinated studies.

JT: What does the future look like for the Metaketa Initiative?

SH: EGAP is running three additional clusters of coordinated studies, focused on taxation, natural resource governance and community policing. We just announced the research teams and there is more info on EGAP’s website. The U.K. Department for International Development is our biggest funder for these studies, and we’re hopeful that the model proves useful as a way to build policy-relevant knowledge over time, and ultimately get at important questions such as when programs are most likely to work and the conditions that make them cost-effective.

As with the first Metaketa, each of the new clusters of studies consists of RCTs that are coordinated on underlying theories, research questions, hypotheses, treatment arms and measurement strategy. One feature of the model is both a strength and a potential weakness: We give each research team room to tailor the details of the intervention to the specific time and location at which it takes place. Some scholars might prefer a more centrally controlled model with identical interventions, but we are excited that our model allows otherwise independent research or evaluation teams to coordinate and produce findings that are interesting in their own right, but also add up to something greater than the sum of its parts.

Overall, we hope that the Metaketa Initiative will spark a move toward collaboration and synthesis that leads to more reliable knowledge and cumulative learning. We’d love to see the model used and improved upon by other organizations.

This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the Network is responsible for the article’s specific content. Other posts in the series can be found here.
