Archive | Experimental Analysis

The Military and Presidential Endorsements

With news in Foreign Policy yesterday of Mitt Romney’s military advisory council of 300 retired general officers, we are especially pleased to welcome the following guest post from Jim Golby, Kyle Dropp, and Peter Feaver.*

*****

Presidential campaigns have increasingly competed for high-profile military endorsements in recent years, but do these endorsements persuade voters?

In a Center for New American Security (CNAS) report released Monday, we find evidence that endorsements from military members and veterans disproportionately benefit President Obama. A summary of our study appeared in the New York Times; here we present more details.

In July, we conducted a survey experiment through YouGov. Prior to a standard vote-choice question, we told a nationally representative sample of more than 2,500 registered voters either that “according to recent reports, most members of the military and veterans” supported Obama, that most supported Romney, or nothing at all. While these endorsements do not affect overall support levels, we did find evidence that they matter among pure Independent voters, especially those who pay limited attention to foreign policy news.

Independent voters who were told that military members and veterans supported Obama swung nine points in the President’s direction, and this treatment effect jumps to fourteen points among Independents who do not follow foreign policy news very closely. President Obama also received a small bump when Independent voters received a pro-Romney prompt, but this four-point shift is not statistically significant. Romney, by contrast, received no bump from endorsements compared with a control group that received no prompt (see Table 2 below). Military endorsements do not appear to influence partisans or partisan leaners.
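The subgroup comparisons above are simple differences in proportions between a treatment arm and the control arm. A minimal sketch of that calculation, using a pooled two-proportion z-test with made-up cell counts (the real cell sizes are in the CNAS report and are not reproduced here):

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Difference in proportions between two survey arms, with a
    pooled two-proportion z statistic."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1 - p2, (p1 - p2) / se

# Hypothetical cell counts for pure Independents (illustration only):
diff, z = two_prop_ztest(55, 100, 46, 100)  # pro-Obama prompt vs. control
print(f"swing = {diff:+.0%}, z = {z:.2f}")  # → swing = +9%, z = 1.27
```

A |z| below roughly 1.96 would not be statistically significant at the conventional 5 percent level, which is the sense in which the four-point pro-Romney shift fails to clear the bar.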

We argue that endorsements from military members may disproportionately benefit Obama because voters view members of the military as predominantly conservative. Consequently, a military endorsement of President Obama is surprising, while an endorsement of Governor Romney is not. Moreover, an Obama endorsement may also counteract voters’ historical tendency to distrust Democrats on national security matters and solidify voters’ improving assessments of President Obama’s performance as Commander-in-Chief.

By statute and tradition, however, the military is officially a non-partisan institution. Nevertheless, we find some evidence that such endorsements may affect the trust and confidence voters place in the military, in ways that vary across partisan lines. For example, Republicans who believe most members of the military affiliate with a political party are 10 percentage points more likely to report a great deal of confidence in the institution than Republicans who do not think the military is political; Democrats who see the military as political, by contrast, are nine points less likely to have confidence in the institution than Democrats who do not think the military is partisan.

Even small increases in support can matter in a close election, so we expect campaigns to continue to seek these endorsements, especially since they face no immediate costs for doing so. Nevertheless, we believe that this practice may harm the health of American civil-military relations in the medium to long term. As a result, we suggest several steps that campaigns can take to help establish a taboo against the practice of military endorsements. If campaigns believe that they will face reputational costs for using military surrogates, they may be willing to forgo these endorsements in the future.

*The views expressed in this post are the authors’ own and do not reflect the views of the United States Military Academy, the Army, or the Department of Defense.


Two Lessons for Improving Forecasts

For each of four weeks, participants made probabilistic forecasts in four domains relating to domestic and international politics and economics: the Dow Jones Industrial Average stock index, the national unemployment rate, Obama’s presidential job approval ratings, and the price of crude oil. I randomly assigned 308 participants to one of three groups. The base-rate group received information about how frequently changes of various magnitudes in these variables had occurred in the previous year; the performance-feedback group received information about how far off their predictions were the previous week; the control group received no extra information. I also recruited an “expert” subgroup of 72 people with backgrounds in finance and economics in order to look at the effects of expertise on the accuracy of Dow predictions, and I distributed these experts evenly among the three groups.
The results are very encouraging. Both strategies significantly improved forecasting accuracy: on average, participants who received base-rate or performance-feedback information were roughly 10 to 15 percent more accurate than those who did not.
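The accuracy comparison above can be sketched with a standard scoring rule for probabilistic forecasts. The study’s exact accuracy measure isn’t given in this excerpt, so the Brier score below, with invented forecasts and outcomes, is only an illustration of how such group averages are compared:

```python
def brier(p, outcome):
    """Brier score for one probabilistic forecast of a binary event
    (outcome is 0 or 1); lower means more accurate."""
    return (p - outcome) ** 2

# Invented weekly forecasts (prob. the Dow rises) and observed outcomes:
control_scores  = [brier(p, o) for p, o in [(0.7, 1), (0.6, 0), (0.5, 1)]]
feedback_scores = [brier(p, o) for p, o in [(0.8, 1), (0.4, 0), (0.6, 1)]]

mean_c = sum(control_scores) / len(control_scores)
mean_f = sum(feedback_scores) / len(feedback_scores)
print(f"control {mean_c:.3f} vs. feedback {mean_f:.3f}")
# → control 0.233 vs. feedback 0.120
```

In a design like this one, the treatment effect is simply the gap between the group averages of such scores.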

From research by Dartmouth undergraduate Kelsey Woerner.  See more at Jay Ulfelder’s place.


West Coast Experiments Conference May 11, 2012

The organizers of the West Coast Experiments Conference send along the following announcement:

The fifth annual meeting of the West Coast Experiments Conference will be held at the Claremont Resort, near the campus of UC Berkeley, on Friday, May 11, 2012.

For information on registration and local arrangements, please visit: http://ps-experiments.ucr.edu/conference/western.

We encourage anyone with an interest in experiments to attend; graduate students and those who are new to experimental research are especially welcome. The WCE conference is organized more as a methods “workshop” than as a venue to engage in subfield debates. Instead of standard conference presentations, presenters focus in depth on one or two methodological take-away points of their experimental work. The goal is to give audience members applied, practical advice on methods and design in a way that will help them improve their own experimental research.

The WCE meeting is a single-day meeting, starting at 9 and ending after dinner. Although we do not have the money to cover travel or lodging expenses, we will provide all meals that day, and we promise good conversation.

The tentative agenda is:

Morning Panel
• Claire Adida (UCSD): causal effects of ethnicity on voter attitudes and behaviors
• Jas Sekhon (UC Berkeley) and Pradeep Chhibber (UC Berkeley): causal effects of religious practice on trust

First Afternoon Panel
• Daniel Butler (Yale): a seniority committee assignment lottery in Arkansas
• Edward Miguel (UC Berkeley): development aid field experiments in Sierra Leone
• Jim Fearon (Stanford): “Democratic Institutions and Collective Action Capacity: Results from a Field Experiment in Post-Conflict Liberia”

Second Afternoon Panel
• Adam Berinsky (MIT): “Rumors, Truth, and Reality: A Study of Political Misinformation”
• Justin Grimmer (Stanford): a natural experiment on terror alerts and public opinion

We look forward to seeing everyone in Berkeley.

Best,
Kevin Esterling (UC Riverside)
Sean Gailmard (UC Berkeley)
Taeku Lee (UC Berkeley)
Mat McCubbins (USC)
Jas Sekhon (UC Berkeley)
Laura Stoker (UC Berkeley)

West Coast Experiments Website: http://ps-experiments.ucr.edu/conference/western


Two cures for racism!

It’s an unusual day when we come across two cures for racism at once.

First, from the Columbia political science department’s comparative politics seminar, The End of Prejudice: An Experimental Study of Intergroup Conflict and Cooperation, by Andrej Tusicisny:

This paper develops and tests a new model explaining under what conditions people from different ethnic groups cooperate and under what conditions they discriminate against an outgroup. It also uncovers what may be the true causal mechanism underlying the famous contact hypothesis. 402 subjects sampled from the slums of Mumbai, India, participated in a randomized experiment that tested the theory. The experiment showed that (1) people cooperate if they believe that their partner will reciprocate their cooperative behavior; (2) people use their partner’s ethnicity as an information shortcut to predict how likely reciprocity is; and, most importantly, (3) observation of individuals’ real behavior can change the stereotypical beliefs about groups. Once expectations of reciprocity were successfully manipulated, ethnically heterogeneous groups produced as much – or as little – public goods as the homogenous ones. The experiment demonstrated that it is in fact fairly easy to rationally update deep-rooted stereotypes of outgroups even by a short social interaction. Information updating led not only to more cooperation in public goods games, but also to a radical change in self-reported discriminatory attitudes towards the outgroup as a whole. For example, the number of Hindus who would never accept a Muslim as a neighbor dropped by 56%. Practical implications of the study can guide us in designing better institutions to prevent conflict and increase public goods provision in multiethnic societies.

Second, and on the very same day, from a news report in the Daily Telegraph (found here):

Volunteers given the beta-blocker, used to treat chest pains and lower heart rates, scored lower on a standard psychological test of “implicit” racist attitudes. They appeared to be less racially prejudiced at a subconscious level than another group treated with a “dummy” placebo pill. . . . Experimental psychologist Dr Sylvia Terbeck, from Oxford University, who led the study published in the journal Psychopharmacology, said: “Our results offer new evidence about the processes in the brain that shape implicit racial bias. . . .”

On the other hand,

Dr Chris Chambers, from the University of Cardiff’s School of Psychology, said the results should be viewed with “extreme caution”. He said: “. . . we can’t rule out the possibility that the effects were due to the drug incidentally reducing heart rate. So although interesting, in my view these preliminary results are a long way from suggesting that propranolol specifically influences racial attitudes.”

Also,

The scientists wrote: “The main finding of our study is that propranolol significantly reduced implicit but not explicit racial bias.”

The sample size was 36, and, as we all know, statistical significance doesn’t always mean very much.
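One way to see why a sample of 36 warrants caution: with that few subjects, only quite large effects can reach conventional significance, so any estimate that does clear the bar is, by construction, a large (and possibly noise-inflated) one. A rough back-of-the-envelope sketch, assuming a two-arm design with an even split and outcomes measured in standard-deviation units (details this excerpt does not specify):

```python
import math

# With n subjects per arm and outcomes in standard-deviation (SD)
# units, the standard error of a difference in means is sqrt(2 / n),
# and an effect must exceed roughly 1.96 standard errors to reach
# p < .05 (two-sided).
n_per_arm = 18          # assumption: the 36 volunteers split evenly
se = math.sqrt(2 / n_per_arm)
mde = 1.96 * se         # smallest effect that could look "significant"
print(f"SE = {se:.2f} SD; detectable effect >= {mde:.2f} SD")
# → SE = 0.33 SD; detectable effect >= 0.65 SD
```

Two-thirds of a standard deviation is a big effect for a psychology study, which is one reason to take Dr Chambers’s call for “extreme caution” seriously.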

Just to be on the safe side, maybe it would be best to do a bit of information updating . . . and take the pill.


Sex Scandals and Race

With Herman Cain endorsing Newt Gingrich over the weekend, one can’t help but notice that one of these two had a sex scandal at least partially knock him out of the race, whereas the other one seems to have survived fairly widespread allegations of marital infidelity and kept on going.

While there are of course many differences between Cain’s and Gingrich’s purported affairs—one important one certainly being that Gingrich’s is old news whereas Cain’s was a more recent development—recently published research in the journal Political Behavior suggests another possible factor: the race of the candidates. In the previous US presidential election cycle, Adam Berinsky, Vincent Hutchings, Tali Mendelberg, Lee Shaker, and Nicholas Valentino conducted experiments to examine people’s reactions to stimuli suggesting that either Barack Obama or, ironically enough, John Edwards was potentially guilty of “sexual indiscretion” (p.185; see pp.198-200 for the actual cues). Here’s their summary of the article and its findings:

A growing body of work suggests that exposure to subtle racial cues prompts white voters to penalize black candidates, and that the effects of these cues may influence outcomes indirectly via perceptions of candidate ideology. We test hypotheses related to these ideas using two experiments based on national samples. In one experiment, we manipulated the race of a candidate (Barack Obama vs. John Edwards) accused of sexual impropriety. We found that while both candidates suffered from the accusation, the scandal led respondents to view Obama as more liberal than Edwards, especially among resentful and engaged whites. Second, overall evaluations of Obama declined more sharply than for Edwards. In the other experiment, we manipulated the explicitness of the scandal, and found that implicit cues were more damaging for Obama than explicit ones. (emphasis added)

The full article is available here.



Registration Open for 5th Annual NYU-CESS Conference on Experimental Political Science

Registration is now open for the 5th Annual NYU-CESS Experimental Political Science Conference taking place on March 2-3, 2012. Information about the conference is available here, including the schedule of presentations (Friday and Saturday) and information concerning special rates at local hotels. You can register for the conference here. Registration continues to be free, and includes meals on Friday (breakfast, lunch, dinner) and Saturday (breakfast and lunch).

We have an excellent set of papers including two special panels: Voluntary versus Compulsory Voting and Women in Political Leadership. For the first time, we also have a poster session for graduate students.

Registration will close on February 20th. If you have any questions, please email the CESS Administrator, Caroline Madden, at caroline dot madden at nyu dot edu.

The full schedule of papers is listed after the break.

Annals of Interesting Peer Review Decisions

Tom Bartlett describes the efforts of a team of psychologists to publish replication results for an article that had purported to show that people could use ESP to predict whether they would be shown erotic pictures in the future. The replication found no observable effect but (according to the authors’ account of it) had a difficult time finding a publisher.

Here’s the story: we sent the paper to the journal that Bem published his paper in, and they said ‘no, we don’t ever accept straight replication attempts’. We then tried another couple of journals, who said the same thing. We then sent it to the British Journal of Psychology, who sent it out for review. For whatever reason (and they have apologised, to their credit), it was quite badly delayed in their review process, and they took many months to get back to us.
When they did get back to us, there were two reviews, one very positive, urging publication, and one quite negative. This latter review didn’t find any problems in our methodology or writeup itself, but suggested that, since the three of us (Richard Wiseman, Chris French and I) are all skeptical of ESP, we might have unconsciously influenced the results using our own psychic powers. … Anyway, the BJP editor agreed with the second reviewer, and said that he’d only accept our paper if we ran a fourth experiment where we got a believer to run all the participants, to control for these experimenter effects. We thought that was a bit silly, and said that to the editor, but he didn’t change his mind. We don’t think doing another replication with a believer at the helm is the right thing to do … [the] experimental paradigms were designed so that most of the work is done by a computer and the experimenter has very little to do (this was explicitly because of his concerns about possible experimenter effects).

Although the Bartlett piece doesn’t make this suggestion, I can’t help wondering whether the reviewer was one of the authors of the original piece. Myself, I’ve had a couple of interesting interactions with editors over the years, but nothing that even comes close to matching this.


Ethical Challenges of Embedded Experimentation

Continuing our series of articles from the American Political Science Association’s Comparative Democratization Section newsletter, today we present the following article on the “Ethical Challenges of Embedded Experimentation” by Macartan Humphreys of Columbia University. Since posting the first article from the newsletter on Monday, I have learned that the entire newsletter is freely and publicly available on the website of the National Endowment for Democracy. So you can find the entire Humphreys article there in .pdf format, as well as all the other articles in the newsletter. Humphreys’ piece is part of a symposium in the newsletter on the use of experiments in studying democratization.

********************

Introduction

Consider a dilemma. You are collaborating with an organization that is sponsoring ads to inform voters of corrupt practices by politicians in a random sample of constituencies. The campaign is typical of ones run by activist NGOs, and no consent is sought among populations as to whether they wish to have the ads placed on billboards in their neighborhoods. You learn that another NGO is planning to run a similar campaign of its own in the same area. Worse (from a research perspective), the other organization would like to target “your” control areas so that voters there, too, can make an informed decision about their elected representatives. This would destroy your study, effectively turning it from a study of the effect of political information into a study of the differences in the effects of information interventions as administered by two different NGOs. The organizations ask you whether the new group should work in the control areas (even though it undermines the research) or instead quit altogether (protecting the research but possibly preventing needy populations from having access to important information about their representatives). What should you advise? Should you advise anything?

Consider a tougher dilemma. You are interested in the dynamics of coordination in protest groups. You are contacted by a section of the police that is charged with deploying water cannons to disperse protesters. The police are interested in the effectiveness of water cannons and want to partner with a researcher to advise on how to vary the use of water cannons for some random set of protest events (you could for example propose a design that reduces the use of water cannons in a subset of events and examine changes to group organization). As with the first dilemma there is clearly no intention to seek consent from the subjects—in this case the protesters—as to whether they want to be shot at. Should you partner with the police and advise them on the use of water cannons in order to learn about the behavior of non-consenting subjects?

These seem like impossible choices. But choices of this form arise regularly in the context of a mode of “embedded” experimentation that has gained prominence in recent years in which experimental research is appended to independent interventions by governments, politicians, NGOs, or others, sometimes with large humanitarian consequences.

The particular problem here is that the researcher is taking actions that may have major, direct, and possibly adverse effects on the lives of others. As discussed below, in these embedded field experiments such actions are often taken without the consent of subjects, a situation that greatly magnifies the ethical difficulties.

In this essay I discuss the merits and demerits of “embedded” experimentation of this form that is undertaken without subject consent. I compare the approach to one in which researchers create and control interventions that have research as their primary purpose and in which consent may be more easily attained (in the terminology of Harrison and List these two approaches correspond broadly to the “natural field experiment” and the “framed field experiment” approach).[1]


5th Annual NYU-CESS Experiments in Political Science Conference Call For Papers

The 5th annual NYU-CESS Experiments in Political Science Conference will be held this March 2-3, 2012 at NYU. We are now officially accepting online paper proposals. Paper proposals will be accepted through November 15th and can be submitted here. As in previous years, the only requirement is that the paper be related to political science and feature some type of experimental analysis (e.g., lab, survey, field, neuro, lab-in-the-field, etc.) or else focus on questions related to experimental methodology.

As in past years, there will be an hour for each paper presentation, including presentation of the work, comments by a discussant, and audience feedback. It is an excellent opportunity, therefore, both to publicize your work and to receive feedback on it; I will also highlight the findings from some of the papers here on The Monkey Cage. Thanks to the generosity of NYU’s Department of Politics and the NYU Center for Experimental Social Science, we continue to be able to offer a small stipend to offset travel expenses for out-of-towners who are selected to present a paper at the conference.

Please note as well that we have added a graduate student poster session this year. If you are a graduate student, you will have the option during the submissions process of indicating whether you would only like to be considered for the poster session, or if you would like to be considered for either the poster session or a standard presentation.


Request for Feedback on Experiment: Corruption and Voting

As anyone involved in survey research can tell you, one of the more frustrating comments you can get at talks is that you should have asked one of your questions differently. While the questioner often has a good point, the fact that you’ve already run the survey introduces a minor time-inconsistency problem: the best time to get this kind of feedback is before the survey goes into the field. However, most of the time when someone wants you to give a talk (or when you are applying to present at a conference), people want to see actual results, not research designs. The American National Election Study (ANES) has made a nice step in this direction by setting up an online commons where users can offer comments on the design of proposed new questions for the ANES. Lacking the vast resources of the ANES, I figured I’d turn instead to the one resource I do have at my disposal—this blog and its excellent readers—to see if anyone has any suggestions or critiques for an experiment I’m trying to tweak.

The experiment is designed to assess the relative importance of corruption in influencing vote choice, as opposed to concerns about the state of the economy. Following the economic voting literature, we* conceive of “corruption voting” as having both potential pocketbook components (e.g., what is the effect of actually having had to pay bribes on your voting behavior?) and sociotropic components (e.g., what is the effect of one’s perceptions of the pervasiveness of corruption on one’s voting behavior?). In a paper that we will be presenting at the American Political Science Association’s Annual Meeting in Seattle in a few weeks (Saturday, Sept 1 at 4:15 in Convention Center Room 308), we will report results from a traditional survey in one post-communist country showing much stronger support for “pocketbook corruption voting” than “sociotropic corruption voting,” both in terms of turnout and in terms of voting against the incumbent (i.e., having had to pay a bribe makes you less likely to turn out and less likely to vote for the incumbent). Given the predominance of sociotropic explanations in the economic voting literature, and the received wisdom in post-communist countries that incumbent governments get voted out of office so often partly because people are fed up with overall levels of corruption, this was a pretty surprising finding.

We are now in the process of attempting to see if we can replicate this finding using an experimental research design. We’ve run the experiment once in a different post-communist country and have found results that again suggest pocketbook corruption concerns have more of an effect on voting than sociotropic corruption concerns. However, we have the opportunity to run this experiment in a second country, and we’re hoping to improve the experimental design. As I fear the vast majority of readers of even The Monkey Cage may not be interested in the nuts and bolts of experimental research design, the actual design of the experiment and the question I have about it are found after the break:
