The Hunt for Campaign Effects II

In the 2008 election, did campaign activity matter? In my first post on this question, I was a bit dubious about some back-of-the-envelope analysis by Nate Silver. Now some new evidence is emerging, and Mark Blumenthal summarizes it in this interesting post (see also Andy’s and Brendan Nyhan’s reactions).

A first finding: in counties with higher numbers of new registrants and voter contacts by the Obama campaign and other progressive organizations, Obama did better vs. Kerry. Here is the slide from Catalist:

[Figure: Catalist slide showing the positive county-level associations between campaign activity and Obama's improvement over Kerry]

It’s chartjunky, but the positive associations are clear. Blumenthal describes this subsequent analysis:

Did these campaign activities cause higher support for Obama? To try to get at an answer, [Erik] Brauner [chief scientist of Catalist] used a simple regression model and found that higher levels of personal contact, paid television advertising and new registration predicted higher support for Obama at the county level even after controlling for the most significant demographic variables (race, age, education, marriage, religious adherence and the presence of children in the household). We always need to be careful about assuming causation from correlations, but these results, as Brauner explained, show that personal contact by the Democratic campaign, voter registration activity and paid television advertising were “all acting together and explaining outcomes that are not explained simply by demographic factors.”

(There was, unfortunately, little evidence presented on the precise magnitude of personal contact's effects, or any attempt to parse the respective effects of contact, voter registration, and advertising.)
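The kind of county-level regression Brauner describes can be sketched in a few lines. This is an illustration only: the data below are synthetic, the variable names are hypothetical, and Catalist's actual model and data are not public.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical number of counties

# Synthetic, standardized county-level predictors (illustrative only)
contacts = rng.normal(0, 1, n)       # personal voter contacts
new_reg = rng.normal(0, 1, n)        # new registrations
pct_college = rng.normal(0, 1, n)    # a demographic control

# Simulated outcome: Obama's gain over Kerry's 2004 share, in points,
# driven by both campaign activity and demographics plus noise
obama_gain = (2.0 + 1.5 * contacts + 1.0 * new_reg
              + 0.8 * pct_college + rng.normal(0, 1, n))

# OLS with an intercept: do the campaign-activity measures predict the
# outcome even after controlling for demographics?
X = np.column_stack([np.ones(n), contacts, new_reg, pct_college])
beta, *_ = np.linalg.lstsq(X, obama_gain, rcond=None)
print("coefficients (intercept, contacts, new_reg, pct_college):",
      np.round(beta, 2))
```

With real data one would add the full set of controls Blumenthal lists (race, age, education, marriage, religious adherence, children in the household) as extra columns of `X`; the coefficients on the campaign-activity measures then speak to the "beyond demographics" question, subject to the usual caveat that this is still only correlational.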

This correlational evidence is not ironclad, but it is more persuasive in light of the other evidence accumulated thus far—e.g., Seth Masket’s quickie analysis of the effect of Obama field offices in Colorado, as well as the many field experiments that show how voter contact boosts turnout.

Speaking of field experiments, Blumenthal also reports on this one by the SEIU:

One such experiment involved post election survey work conducted in 11 states by the Service Employees International Union (SEIU) on both experimental and control groups of their members. In this case they held back a random sample “control group” of voters who received no contact from SEIU during the campaign. They then surveyed both the control group of non-contacts and a random sample of all the other voters who received campaign mail and other contact by SEIU.

Here are the results:

[Figure: SEIU post-election survey results comparing candidate evaluations among contacted members and the held-out control group]

The SEIU campaign activity made evaluations of McCain less favorable and evaluations of Obama more favorable. The effects are not large, but campaign effects in presidential elections rarely are. The question is whether the SEIU efforts were large-scale enough to actually affect outcomes in key states. As I’ve said before, I think this connection between individual- and aggregate-level effects is important to demonstrate. The former doesn’t imply the latter.
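Because the SEIU held out a random control group, the effect of its contact program can be estimated with a simple difference in means. Here is a minimal sketch on synthetic data (the response rates are invented; SEIU's actual survey data are not public):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic post-election survey responses: 1 = favorable evaluation
# of the candidate, 0 = not favorable (illustrative data only)
control = rng.binomial(1, 0.55, 800)  # members held out from all contact
treated = rng.binomial(1, 0.60, 800)  # members who received mail/contact

# Difference-in-means estimate of the contact effect, with its standard error
effect = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / treated.size
             + control.var(ddof=1) / control.size)
print(f"estimated effect: {effect:+.3f} (SE {se:.3f})")
```

Randomization is what licenses the causal reading here: because the control group was held out at random, the two groups differ, in expectation, only in whether they were contacted.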

A great next step would be for Catalist, the SEIU, and other organizations to make their 2008 campaign data publicly available. I won’t hold my breath, of course.

7 Responses to The Hunt for Campaign Effects II

  1. Doug Hess January 23, 2009 at 3:23 pm #

    Postings like this are very worthwhile and I hope more come about.

    As an aside: I’m not sure the last sentence is “helpful.” You often have to give to receive (not money). 🙂

  2. Chris Kennedy January 23, 2009 at 4:33 pm #

    Catalist wouldn't be the ones to publish voter contact data – that data is owned by the individual organizations, which will be evaluating their general election programs in the spring and summer.

    Rock the Vote makes all of our field experiments and other research publicly available. For results of text messaging, email, direct mail, etc. on young voter registration and turnout see http://www.rockthevote.com/about/about-young-voters/how-to-mobilize-young-voters/ .

    Stay tuned for more results from the 2008 general election (text message, email, direct mail, phones) – coming summer of 2009. We ran about 15 fully randomized field experiments on voter contact in the 2008 campaign cycle.

    And if a political scientist is looking for organizations to publish their voter contact data, one of the most effective methods in my experience is to offer an in-kind evaluation/analysis of the voter contact program. As Doug hints, data publishing needs to be rational from the organization’s perspective given their limited time and money. I also recommend talking to Todd Rogers (et al) if you haven’t already.

  3. John Sides January 23, 2009 at 8:19 pm #

    Doug: Okay, the last sentence was unnecessary.

    Doug and Chris: I can certainly understand that it takes money and resources to collect these data. But I think the scholarly norm of publicly releasing the original data (not just the results) should still be more operative than it is. For one, it’s not like it “costs” these organizations anything to release the data. Second, knowledge would accumulate faster were the data in the public domain, and thus these organizations would learn a lot more than they would from contracting with individual scholars. Let a thousand flowers bloom, etc. I actually think it’s a win-win situation for both these organizations and scholars.

  4. Doug Hess January 23, 2009 at 10:29 pm #

    There are other factors that explain why organizations do not release data publicly (or at least not immediately):
    1. Strategy
    2. Proprietary value

    Some organizations share information for just the reasons you gave (reasons which undermine 2, above).

    *****
    On another note: I think the campaign effects literature could be turned on its head if the cumulative impact of campaign work were considered. Certainly there's reason to believe much campaign work (where "campaign" refers to candidates' campaigns) would have temporary outcomes, but I think some work could be thought of as cumulative (the simplest example would be registrations in 2006 of people who maintain their registration in 2008).

  5. Chris Kennedy January 24, 2009 at 12:38 am #

    In theory I agree that releasing data publicly is good for organizations because they will receive future benefits. There are three issues that need to be addressed, though.

    First, it is nontrivial to release campaign field experiments – they deal with personally identifiable voter file data (including date of birth and vote history) which would be susceptible to identity theft and public backlash if not anonymized. Anonymization of voter file data takes staff time, is prone to error (think Yahoo search data fiasco), and prevents other researchers from doing their own voter file matching (say on downstream effects). You would probably want to include a codebook as well. So publishing data is probably not costless.

    Second, as Doug says, publishing voter contact data can reveal targeting choices and other strategic, proprietary advantages of a campaign. Not to mention that competing organizations, whether for votes, funding, media attention, etc. can gain an advantage if they have access to the public data. Political organizations are risk-averse and are not prone to providing internal information for purely academic justifications. They’re generally not interested in knowledge generation that does not provide them with tactical benefits.

    Third, there is no guarantee of any results from releasing data publicly. If I'm going to spend a week on data management so I can publish 15 experiments, 3 polls, and 4 side projects, there should be some guarantee of intellectual revenue; otherwise it was a waste of time.

    Given these constraints, I don’t see publicly available data as a no-brainer activity for political organizations.

    I offer the counter-argument that the demand for publicly available data is merely an attempt to minimize the time costs for academics to conduct their own supplemental research, and that any truly beneficial analyses could be conducted nearly as easily by contacting the organization and making a case to gain access to the data. This method encourages knowledge generation while also catering to the concerns of political organizations. What is the argument against emailing the organization to request access to data? Who knows, you might even develop a relationship (think: trust-generating iterated game) with the organization.

    Otherwise, if organizations are truly being irrational by not publicly releasing their data, as you suggest, there is an opportunity for persuasion. Blog posts are only the first step and are not especially helpful in isolation. I would next write up a memo that explains your case and includes existing examples of the model working, addresses organizations’ concerns, and includes the support of other key academics like Green, Gerber, and Nickerson (et al), if not political strategists and funders. Then either make a presentation to the Analyst Institute and/or meet individually with executive directors (or research/political directors) to make your case.

  6. Doug Hess January 24, 2009 at 2:03 pm #

    I agree with everything Chris said and would emphasize that there is a potentially large gap between what is interesting to “academics” and what is interesting to “practitioners.” Thus, publishing the data does not absolutely mean that the organizations will get anything in return.

    Some relatively new and interesting literature on the nature of this gap and ways to overcome it (and the “gap” is probably best looked at as a continuum) exists in the field of public policy (albeit not directly related to the exact topic of election research). I think bridging this divide is an exciting way to further and encourage public intellectual life, and I hope that blogs like this one can help serve that purpose.
