Archive | Political Science and Journalism

Grievances and civil war

I’m late to the Jacqueline Stevens op-ed party, and I don’t think I have much to add on the main issues – about forecasting, capital ‘S’ Science, etc. – discussed by Henry, Erik, and Andrew (or others linked to here). But Stevens also gave an interpretation of the core argument in Laitin’s and my 2003 APSR paper that, for what it’s worth, I think is a misreading. Since I’ve seen this misreading elsewhere over the years, I thought I’d try to speak to it.

Continue Reading

Dart-Throwing Chimps and Op-Eds

When the House passed the Flake amendment to cut NSF funding for political science, The New York Times (and most other newspapers) did not find the event sufficiently interesting to be worthy of valuable newspaper space. So why, then, does the editorial page seem so eager to debunk political science as a “science”? We as political scientists have barely recovered from the alleged inferiority complexes we suffer on account of our apparent inability to overcome “physics envy,” and now we hear that “political scientists are not real scientists because they can’t predict the future.”

One would almost be tempted to think that the message conveyed in these pieces suits the editorial page editors just fine. Indeed, Stevens explicitly writes that policy makers could get more astute insights from reading the New York Times than from reading academic journals. If this were the purpose of placing the op-ed, then the editorial board has been fooled by what can charitably be described as Stevens’ selective reading of the prediction literature, especially Tetlock’s book. Here is how Stevens summarizes this research:

Research aimed at political prediction is doomed to fail. At least if the idea is to predict more accurately than a dart-throwing chimp.

But Tetlock did not evaluate the predictive ability of political science research but that of “experts” whom he “exhorted […] to inchoate private hunches into precise public predictions” (p. 216). As Henry points out, some of these experts have political science PhDs, but they are mostly not political science academics. Moreover, Tetlock’s purpose was not to evaluate the quality of research but the quality of expert opinion that guides public debate and government advice.

Two points are worth emphasizing. The first is that the media, and especially editorial page editors, make matters worse by ignoring the track record of pundits and indeed rewarding precisely those pundits whose personal qualities make them least likely to be successful at prediction. Here is how Tetlock summarizes the implications of his research for the media:

The sanguine view is that as long as those selling expertise compete vigorously for the attention of discriminating buyers (the mass media), market mechanisms will assure quality control. Pundits who make it into newspaper opinion pages or onto television and radio must have good track records; otherwise, they would have been weeded out.

Skeptics, however, warn that the mass media dictate the voices we hear and are less interested in reasoned debate than in catering to popular prejudices. As a result, fame could be negatively, not positively, correlated with long-run accuracy.

Until recently, no one knew who was right, because no one was keeping score. But the results of a 20-year research project now suggest that the skeptics are closer to the truth.

I describe the project in detail in my book Expert Political Judgment: How good is it? How can we know? The basic idea was to solicit thousands of predictions from hundreds of experts about the fates of dozens of countries, and then score the predictions for accuracy. We find that the media not only fail to weed out bad ideas, but that they often favor bad ideas, especially when the truth is too messy to be packaged neatly.

The second point is that simple quantitative models generally do better at prediction than do experts, regardless of their education. This is not because these models are especially accurate, or because experts don’t know anything, but because people are terrible at translating their knowledge into probabilistic assessments of what will happen. This is why a simple model correctly predicts the outcomes of 75% of Supreme Court cases, whereas constitutional law experts (professors) get only 59% right. Since predictive success is not, as Stevens would have it, the gold standard for social science, this has not yet led to a call to do away with constitutional law experts or to allocate their research funds randomly.
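To make concrete what a “simple quantitative model” can look like here, below is a minimal sketch in the spirit of the classification trees used in the Supreme Court forecasting study behind those numbers. To be clear, the feature names and decision rules are hypothetical stand-ins of my own, not the study’s actual tree; the point is only that a handful of blunt, consistently applied cues can beat case-by-case intuition.

    # Illustrative only: a hand-rolled decision rule in the style of a
    # classification tree. Feature names and branches are hypothetical.
    def predict_outcome(case):
        """Predict 'reverse' or 'affirm' from a few coarse case features."""
        if case["lower_court_direction"] == "liberal":
            # e.g., a conservative-leaning Court reverses liberal rulings more often
            return "reverse"
        if case["circuit"] == "9th":
            # e.g., some circuits are reversed at unusually high rates
            return "reverse"
        return "affirm"

    example = {"lower_court_direction": "liberal", "circuit": "2nd"}
    print(predict_outcome(example))  # -> reverse

A model this blunt ignores everything an expert knows about a given case, which is exactly why it is consistent where experts are not.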

Continue Reading

Why the Stevens Op-Ed is Wrong

A rather lengthier response to Jacqueline Stevens’ op-ed, speaking to various of its points in turn.

the government — disproportionately — supports research that is amenable to statistical analyses and models even though everyone knows the clean equations mask messy realities that contrived data sets and assumptions don’t, and can’t, capture.

The claim that real politics is messier than the statistics are capable of capturing is obviously correct. But the implied corollary – that the government shouldn’t go out of its way to support quantitative research – doesn’t follow. Jacqueline Stevens doesn’t do quantitative research. Nor, as it happens, do I. But good qualitative research equally has to deal with messy realities, and equally has to adopt a variety of methodological techniques to minimize bias, compensate for missing data, and so on. Furthermore, it is extremely difficult to do at large scale – this is where the big projects that the NSF funds can be very valuable. I agree that it would be nice to have more qualitative research funded by the NSF – but I also suspect that qualitative scholars like myself are a substantial part of the problem (if we don’t propose projects, they aren’t going to get funded).

It’s an open secret in my discipline: in terms of accurate political predictions (the field’s benchmark for what counts as science), my colleagues have failed spectacularly and wasted colossal amounts of time and money.

The claim here – that “accurate political prediction” is the “field’s benchmark for what counts as science” – is quite wrong. There really isn’t much work at all by political scientists that aspires to predict what will happen in the future – off the top of my head, all that I can think of are election forecasting models (which, as John has noted, are more about figuring out good theories of what drives politics than about prediction as such) and some of the work of Bruce Bueno de Mesquita. It is reasonable to say that the majority position in political science is a kind of soft positivism, which focuses on the search for law-like generalizations. But that is neither a universal benchmark (I, for one, don’t buy into it) nor, indeed, the same thing as accurate prediction, except where strong covering laws (of the kind that few political scientists think are generically possible) can be found.

As best as I can decipher her position from her blog, and from a draft paper that she links to, Stevens’ underlying position is a quite extreme Popperianism, in which probabilistic generalizations (the only kind that social scientists aspire to find) don’t count as real science: even one disconfirming instance is enough to refute a theory. Hence, Stevens argues in her paper that Fearon and Laitin’s account of civil wars has been falsified because a couple of specific cases have been interpreted in ways that disagree with their findings, and that, ergo, the entire literature is useless. I’m not going to get stuck into a debate that others on this blog and elsewhere are far better qualified to discuss than I am, but suffice to say that the Popperian critique of probabilistic social-scientific models is far from a decisive refutation of the social-scientific enterprise. Furthermore, Stevens’ proposed alternative – an attempted reconciliation of Popper, Hegel and Freud – seems to me unlikely in the extreme to provide a useful social-scientific research agenda.

What about proposals for research into questions that might favor Democratic politics and that political scientists seeking N.S.F. financing do not ask — perhaps, one colleague suggests, because N.S.F. program officers discourage them? Why are my colleagues kowtowing to Congress for research money that comes with ideological strings attached?

I’m not quite clear what the issue is here. What does Stevens mean by ‘Democratic politics’? If the claim is that the NSF should be funding social science intended to help the Democrats in their struggle with other political groupings (the usual meaning in the US of the word Democratic with a capital D), that’s not what the NSF is supposed to be doing. If it’s that the NSF doesn’t fund projects that support Stevens’ own ideal understanding of what democratic politics should be, then that’s unfortunate for her – but the onus is on her to demonstrate the broader social-scientific benefits (including to people who don’t share her particular brand of politics) of the projects she has in mind. More generally, the standard of evidence here is unclear. A colleague “suggests” that NSF program officers discourage certain kinds of proposals. Does this colleague have direct experience of this happening? Does he or she have credible information from others that it has happened? Or is the colleague just letting off hot air? Frankly, my money is on the last of these, but I’d be happy to be corrected if wrong.

Many of today’s peer-reviewed studies offer trivial confirmations of the obvious and policy documents filled with egregious, dangerous errors. My colleagues now point to research by the political scientists and N.S.F. grant recipients James D. Fearon and David D. Laitin that claims that civil wars result from weak states, and are not caused by ethnic grievances. Numerous scholars have, however, convincingly criticized Professors Fearon and Laitin’s work. In 2011 Lars-Erik Cederman, Nils B. Weidmann and Kristian Skrede Gleditsch wrote in the American Political Science Review that “rejecting ‘messy’ factors, like grievances and inequalities,” which are hard to quantify, “may lead to more elegant models that can be more easily tested, but the fact remains that some of the most intractable and damaging conflict processes in the contemporary world, including Sudan and the former Yugoslavia, are largely about political and economic injustice,” an observation that policy makers could glean from a subscription to this newspaper and that nonetheless is more astute than the insights offered by Professors Fearon and Laitin.

It would certainly have been helpful if Stevens had made it clear that Cederman, Weidmann and Gleditsch were emphatically not arguing that quantitative approaches to civil war are wrong. Indeed, just the opposite – Cederman, Weidmann and Gleditsch are themselves heavily statistically oriented social scientists. The relationships that they find are not obvious ones that could be “gleaned” from a New York Times subscription – they are dependent on the employment of some highly sophisticated quantitative techniques. The “which are hard to quantify” bit that Stevens interpolates between the two segments of the quote is technically true but rather likely to mislead the casual reader. The contribution that Cederman, Weidmann and Gleditsch seek to make is precisely to quantify the relationship between inequality-driven grievances and civil war outcomes.

The G-Econ data allow deriving ethnic group–specific measures of wealth by overlaying polygons indicating group settlement areas with the cells in the Nordhaus data. Dividing the total sum of the economic production in the settlement area by the group’s population size enables us to derive group-specific measures of per capita economic production, which can be compared to either the nationwide per capita product or the per capita product of privileged groups.
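The computation being described is simple to sketch. Below is a toy version with hypothetical numbers standing in for the G-Econ grid cells and the overlay of group settlement polygons; a real implementation would use spatial tools to perform the overlay itself.

    # Toy sketch of the group-level per capita computation described above.
    # All figures are hypothetical; real work starts from the G-Econ grid
    # and polygons for group settlement areas.

    # Economic production per grid cell (after the spatial overlay step)
    cells = {"c1": 120.0, "c2": 80.0, "c3": 200.0, "c4": 50.0}

    # Which cells fall inside each group's settlement area, and group populations
    group_cells = {"group_a": ["c1", "c2"], "group_b": ["c3", "c4"]}
    group_population = {"group_a": 2.0, "group_b": 5.0}  # millions

    national_per_capita = sum(cells.values()) / sum(group_population.values())

    for group, ids in group_cells.items():
        output = sum(cells[c] for c in ids)            # total production in the settlement area
        per_capita = output / group_population[group]  # group-specific per capita product
        print(group, per_capita, per_capita / national_per_capita)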

This is emphatically not a debate showing that quantitative social science is wrong – it is a debate between two different groups of quantitative social scientists, with different sets of assumptions.

How do we know that these examples aren’t atypical cherries picked by a political theorist munching sour grapes? Because in the 1980s, the political psychologist Philip E. Tetlock began systematically quizzing 284 political experts — most of whom were political science Ph.D.’s — on dozens of basic questions, like whether a country would go to war, leave NATO or change its boundaries or a political leader would remain in office. … Professor Tetlock’s main finding? Chimps randomly throwing darts at the possible outcomes would have done almost as well as the experts.

Under the very kindest interpretation, this is sloppy. Quite obviously, one should not slide from criticisms of quantitative academic political scientists to criticisms of people with political science Ph.D.s without making it clear that these are not at all the same groups of people (lots more people have Ph.D.s in political science than are academic political scientists; there are lots more academic political scientists than quantitatively oriented academic political scientists). Rather worse: Stevens’ presentation of Tetlock’s research is highly inaccurate. As Tetlock himself describes his test subjects (p.40):

Participants were highly educated (the majority had doctorates) and almost all had postgraduate training in fields such as political science (in particular, international relations and various branches of area studies), economics, international law and diplomacy, business administration, public policy and journalism [HF: my emphasis].

In other words, where Stevens baldly tells us that “most of [Tetlock’s experts] were political science Ph.D.s,” Tetlock himself tells us that a majority (not most) of his experts had Ph.D.s in some field or another, and that nearly all of them had postgraduate training in one of a variety of fields, six of which Tetlock names, and one of which was political science. Quite possibly, political science was the best represented of these fields – it’s the first that he thought to name – but that’s the most one can say without access to the de-anonymized data. This is very careless writing on Stevens’ part, and she really needs to retract her incorrect claim immediately. Since it is a linchpin of her argument – in her own words, without it she could reasonably be accused of being a cherry-picking, sour-grape-munching political theorist – her whole piece is in trouble. Tetlock’s book simply doesn’t show what she wants and needs it to show for her argument to be more than impressionistic.

The rest of the piece rehashes the argument from Popper, and proposes that NSF funding be distributed randomly through a lottery, so as to dethrone quantitative social science. Professor Stevens surely knows quite as well as I do that such a system would be politically impossible, so I can only imagine that this proposal, like the rest of her op-ed, is a potshot aimed at perceived enemies in a very specific intra-disciplinary feud. I have some real sympathy with the people on Stevens’ side of this argument – as Marty Finnemore and I have argued, knee-jerk quantificationism has a lot of associated problems. But the solution to these problems (and to the parallel problems of qualitative research) mostly involves clearer thinking about the relationship between theory and evidence, rather than the abandonment of quantitative social science.

Continue Reading

Charles Lane and the Market for Political Science

Charles Lane writes an opinion piece in the Washington Post today, taking issue with posts at the Monkey Cage and arguing that the NSF should not fund political science (or the social sciences more generally). My take (other Monkey Cagers may differ) is that his argument starts in the right place but ends up in the wrong one.

Perhaps it was frivolous to spend $301,000 on a study of gender and political ambition among students, as Flake charges. Or perhaps a report on economic sanctions was a good taxpayer investment, as McCarty and his fellow department chairs insist. The relevant question, however, is whether society could have reaped equal or greater benefits through other uses of the money — and how unreasonable it would be to ask the political scientists to rely on non-federal support. If this research is as valuable as its proponents say, someone other than the U.S. Treasury will pay for it. If anything, Flake’s amendment does not go far enough: the NSF shouldn’t fund any social science.

The private sector chronically underinvests in basic scientific research; the costs and risks are relatively high, and the benefits relatively hard to commercialize. Government support compensates for this “market failure,” enabling society to reap “positive externalities” — economic, environmental or military. Federal funding for mathematics, engineering and other “hard” sciences is appropriate. In these fields, researchers can test their hypotheses under controlled conditions; then those experiments can be repeated by others.

Though quantitative methods may rule economics, political science and psychology, these disciplines can never achieve the objectivity of the natural sciences. Those who study social behavior — or fund studies of it — are inevitably influenced by value judgments, left, right and center. And unlike hypotheses in the hard sciences, hypotheses about society usually can’t be proven or disproven by experimentation. Society is not a laboratory.

Lane’s argument has three parts. First, that we should think about the opportunity costs of funding political science, as opposed to funding other kinds of research. Second, that if the research is “valuable,” then someone other than the government will pay for it. Third, that there is no “market failure” in the social sciences because there is no way to test social science propositions.

Lane is right to say that we should think about funding allocation in terms of opportunity costs. However, as Seth Masket has observed, he is wrong to suggest that social science findings are no better than value judgments gussied up with pretty numbers.1 Indeed, if you think about it for a bit, Lane’s apparent belief that there’s no way to establish reliable truths about politics and society is a radically postmodern one. But what’s most interesting, perhaps, is Lane’s suggestion that we should think about markets and market failure in the social sciences.

As it happens, there is a market for ‘political science,’ even if it’s one that many political scientists don’t usually compare to their own research. It’s mostly supplied by think tanks on the right, left, and center of the political spectrum, as well as by for-profit consultancy firms. These think tanks, to a greater or lesser extent, are market oriented (albeit toward a quite idiosyncratic ‘market’). If there isn’t obvious funding for research on a particular issue, think tanks will avoid it. If funding dries up for an issue, think tanks will drop it. Finally, the arguments and findings of think-tank-sponsored research usually have to fit into some range acceptable to the sponsor. This is not to say that think tank fellows are hacks, or that they cut their opinions to suit their sponsors’ measures. It is to say that some kinds of opinions (those that can attract substantial funding) tend to be over-represented in think-tank research, while others are systematically under-represented.

If think tank funding reflected voters’ best interests, this wouldn’t be a problem. Sadly, it doesn’t. Businesses are responsible to their shareholders rather than to the general public, and their funding decisions are likely to reflect this. Unions are responsible to their members, not to society as a whole. Foundations have their own politics and priorities, and individual funders are pretty quirky. Research by think tanks reflects the priorities of this disparate bunch of funders, not the broader priorities of the US public. Again, this is not to dump on think tanks. Much of their work is good; some of what is not good is at least interesting and provocative. Furthermore, there is a lot that the profession of political science can, and should, learn from them (e.g. how better to engage in public debate). But we shouldn’t rely on think-tank-sponsored research alone, since it usually indirectly reflects funders’ priorities. Still less should we rely on research emanating from professional research consultancies, which are typically purely market driven, and hence prepared to find more or less what their sponsors want them to find.

In short, there is a market for political science – but one that’s imperfect in at least two ways. First, some kinds of research will be systematically underprovided, as per Mancur Olson’s arguments about collective action. For example, large scale social science research, which is of benefit to US society as a whole, but not to individual groups or tendencies within it, will be provided suboptimally, or not provided at all by the ‘market.’

Second, there is a broader problem of truth on the market. If Lane got his druthers and all social science were privately sponsored, some points of view (on the right, left, or center) would be over-represented and some under-represented. People interested in sponsoring research on, say, the cost-effectiveness of economic sanctions are likely to have strong interests, whether in loosening sanctions (so as to get market access) or in strengthening them (because they have strong political objections to the regime being targeted). These interests are likely to be reflected in their choices over whom they fund, and to what end.

This would hurt US democracy. First and most obviously, it would limit the information available to policy makers. All they would know about important social questions would be whatever interested (and often self-interested) private actors chose to provide. Second, and more subtly, this information would be even less useful than it is in the current system. NSF-funded research does two important things. First, it provides widely available datasets on many issues of public importance, datasets that are not systematically skewed to support one interest or another (the NSF likes projects that have social and political relevance – it does not like projects that seem designed to support pre-cooked conclusions). Second, it provides funding for expert researchers to work with these data. This not only produces valuable findings, but helps keep others honest. If someone wants to do sponsored research on, say, why there aren’t more women involved in politics, and they use their own idiosyncratic data rather than broadly available datasets, without good reason, they’re likely to get serious criticism from other researchers. If someone uses commonly available datasets (such as those sponsored by the NSF) but skews their techniques so as to reach a predetermined conclusion, then it’s much easier for others to identify the flaws and to show how better specifications would lead to different results. In short, if we didn’t have a disinterested body such as the NSF meeting the public need for objective research on important social questions, then interested actors would (a) have the field to themselves, and (b) have much greater incentive to cook the books.
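To illustrate the “better specifications” point, here is a toy example (entirely my own construction, not from any study mentioned above): on the same shared data, a specification that omits a relevant control manufactures an effect that disappears once anyone reruns the analysis properly.

    # Toy demonstration of specification sensitivity on shared data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    confounder = rng.normal(size=n)
    treatment = 0.8 * confounder + rng.normal(size=n)
    outcome = confounder + rng.normal(size=n)  # true treatment effect is zero

    def ols(y, xs):
        """OLS coefficients (intercept first) for y on the columns in xs."""
        X = np.column_stack([np.ones(len(y))] + list(xs))
        return np.linalg.lstsq(X, y, rcond=None)[0]

    naive = ols(outcome, [treatment])               # skewed spec: omits the confounder
    honest = ols(outcome, [treatment, confounder])  # better spec: controls for it

    print(f"naive 'effect' of treatment:    {naive[1]:.2f}")   # spuriously large
    print(f"controlled effect of treatment: {honest[1]:.2f}")  # close to the true zero

With a shared dataset, anyone can run both specifications and see which conclusion survives; with proprietary data, no one can.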

Lane is right to think that we should look at the marketplace for the social sciences. He is quite wrong, however, in arguing that social science can’t aspire to objectivity, and hence he is blind to the actual market failures that we would see in the absence of NSF funding. The opportunity costs of abolishing funding for the social sciences are very, very high, precisely because the social sciences provide the best and least biased (albeit still imperfect) knowledge we have about the functioning of politics, markets and society. If we didn’t have this funding, we would see other actors rushing in to fill the gap – and what filled it would be far, far worse than what we have already. This doesn’t let political science off scot-free – as John and I have argued elsewhere, the discipline needs to do a much better job of communicating its findings to a broader public than it does at the moment. More attention to reproducibility, along the lines that Victoria Stodden is pushing, would be nice too. Even so, I’m pretty sure that Charles Lane would miss publicly funded social science much more than he realizes, if it suddenly weren’t there any more.

1 Lane’s claim that “society is not a laboratory” runs directly against my favorite quote from that notorious self-interest-serving special-interest flunky, David Hume. “Mankind are so much the same, in all times and places, that history informs us of nothing new or strange in this particular. Its chief use is only to discover the constant and universal principles of human nature, by showing men in all varieties of circumstances and situations, and furnishing us with materials from which we may form our observations and become acquainted with the regular springs of human action and behaviour. These records of wars, intrigues, factions, and revolutions, are so many collections of experiments, by which the politician or moral philosopher fixes the principles of his science, in the same manner as the physician or natural philosopher becomes acquainted with the nature of plants, minerals, and other external objects, by the experiments which he forms concerning them.”

Continue Reading

Data Journalism

What makes data journalism different to the rest of journalism? Perhaps it is the new possibilities that open up when you combine the traditional ‘nose for news’ and ability to tell a compelling story, with the sheer scale and range of digital information now available.

And those possibilities can come at any stage of the journalist’s process: using programming to automate the process of gathering and combining information from local government, police, and other civic sources, as Adrian Holovaty did with ChicagoCrime and then EveryBlock. Or using software to find connections between hundreds of thousands of documents, as The Telegraph did with MPs’ expenses.

Data journalism can help a journalist tell a complex story through engaging infographics. Hans Rosling’s spectacular talks on visualizing world poverty with Gapminder, for example, have attracted millions of views across the world. And David McCandless’s popular work in distilling big numbers — such as putting public spending into context, or the pollution generated and prevented by the Icelandic volcano — shows the importance of clear design at Information is Beautiful.

Or it can help explain how a story relates to an individual, as the BBC and the Financial Times now routinely do with their budget interactives (where you can find out how the budget affects you, rather than ‘Joe Public’). And it can open up the news gathering process itself, as The Guardian do so successfully in sharing data, context, and questions with their Datablog.

Data can be the source of data journalism, or it can be the tool with which the story is told — or it can be both. Like any source, it should be treated with scepticism; and like any tool, we should be conscious of how it can shape and restrict the stories that are created with it.


From the introduction to The Data Journalism Handbook, available here. I thank Gene Giannotta for the pointer; he notes this could be an avenue for cross-pollination between journalism and social science.
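For readers curious what the “gathering and combining” step the handbook describes looks like in practice, here is a toy sketch. The endpoints and column names are invented for illustration; real civic data portals publish similar CSV exports.

    # Toy sketch: fetch two hypothetical civic data feeds and combine them
    # by neighborhood. URLs and column names are made up for illustration.
    import csv
    import io
    import urllib.request

    def fetch_csv(url):
        """Download a CSV feed and return its rows as dicts."""
        with urllib.request.urlopen(url) as resp:
            return list(csv.DictReader(io.StringIO(resp.read().decode("utf-8"))))

    crimes = fetch_csv("https://example.gov/open-data/crimes.csv")
    permits = fetch_csv("https://example.gov/open-data/permits.csv")

    # Count incidents and building permits per neighborhood, side by side.
    by_hood = {}
    for row in crimes:
        by_hood.setdefault(row["neighborhood"], [0, 0])[0] += 1
    for row in permits:
        by_hood.setdefault(row["neighborhood"], [0, 0])[1] += 1

    for name, (n_crimes, n_permits) in sorted(by_hood.items()):
        print(name, n_crimes, n_permits)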

Continue Reading

Help Design a Syllabus for Political Reporters

John Wihbey of Journalist’s Resource emails:

We’re currently putting together a model political reporting syllabus for journalism schools (both covering governance issues and campaign issues), and it occurred to me that it would be great to reach out to you and see what key articles, studies and materials that every political reporter should read in such a class and ideas that he/she should be familiar with. Any thoughts on this?

I welcome suggestions in comments.

Continue Reading

Most useless college majors

Via Catherynne Valente (novelist – and also the daughter of a political scientist) on teh Twitter, US News and World Report comes up with a new linkbaiting exercise (yes – it worked, sort of), describing “political science and government” as the thirteenth most useless major. Me, if I were trying to categorize the “thirteen most useless professionals in the media industry,” I’d rank as number 13 the person who did the research for this one and identified “political scientis” [sic] as the occupation most plausibly related to a political science degree. Number 12 would be the sub-editor who let the spelling of “political scientis” slip by. The coveted first through eleventh most-useless-professional slots would, of course, be reserved for individuals associated with the steaming methodological turdfest (I use the term here in its narrow technical sense) that is the US News and World Report annual college survey.

Continue Reading

Political Science Is Just Like Leonardo DiCaprio

 

 

If I may draw an historically inspired (if probably inappropriate) analogy, journalism a few years ago was like Kate Winslet in Titanic—lovely, enjoying first class, but lost in its own world and its own problems. It was largely oblivious to political science (Leonardo DiCaprio in this scenario)—smart, wanting to impress journalism, but carrying a bit of a chip on its shoulder. In recent years, though, we’ve begun seeing each other, dancing to Irish music, enjoying the occasional hook-up in the jalopy, enriching both our lives.
Obviously I don’t want to push this metaphor too far, since it results in political science frozen to death at the bottom of the North Atlantic.

From Seth Masket’s discussion of the recent roundtable on political science and journalism, which I discussed here. I myself have often mistaken Seth for DiCaprio. Seth also presents this:

Which is interesting to chew on.  As Seth says, my thought is that it’s possible for political science to be incorporated into an article even if political scientists are not.  So perhaps there are other, better measures of the discipline’s influence.  Still, interesting.

Continue Reading

Political Science and Journalism, Redux

Show up at one of these—and, as best as I can tell, I am the only journalist in America who does routinely—and one may actually come away wiser. You might have second thoughts about some of the media’s ironclad assumptions as we dissect politics, especially in an election year, not to mention learning a lot about very different topics, be it decision-making in the German court system, the long-ago efforts of Spinoza and Locke to liberalize Christianity or psychological roots of somebody self-identifying as a libertarian.

That’s from journalist Jim Warren’s dispatch from the recent annual meeting of the Midwest Political Science Association.  You can find many of the papers in a searchable database.  Warren also discusses a panel that he and I were on, which focused on the relationship between political science and journalism.  Panelists included another reporter, Craig Gilbert of the Milwaukee Journal-Sentinel, and two other political scientists, Seth Masket and Lynn Vavreck.  One interesting tidbit: Gilbert said that he gets JSTOR access through his college alumni association.  (Could this work for other reporters?)

One thing I discussed was the increasing challenge of getting ideas—from polisci or otherwise—to break through the mass of information in the news, in blogs, on social media, etc., and have a lasting impact. (Brian Stelter’s piece today explores a related theme.) I suggested that it might be even more fruitful for political scientists to establish individual relationships with reporters who would be interested in their expertise. So a scholar of Congress would get to know Capitol Hill reporters, for example. It’s also especially helpful to get to know editors and bureau chiefs, who can often steer reporters to scholars when needed. But none of this is likely to happen without some effective self-promotion by scholars and disciplinary associations. As Warren put it, political scientists should “get off their butts and talk.”

There will always be “demand side” issues, of course.  As one audience member noted, news coverage of politics will continue to focus on entertaining ephemera (my words, not his), as it always has.  But even changes at the margins can still be meaningful.

Some earlier thoughts on this subject are here.

Continue Reading

Wouldn’t It Be Nice . . .



. . . if this were how the world worked?


Do swing states’ economies matter?

Posted by Ezra Klein

John Sides says no, and rounds up some evidence showing that voters judge the president based on their perception of the national economy, not the conditions of their local economy. If that’s correct, then you can pretty much disregard my column on Obama’s swing-state problem.


Kudos to John and, especially, to Ezra Klein.

Continue Reading