Can forecasting conflict help to make better foreign policy decisions?

by Erik Voeten on July 28, 2013 · 6 comments

in International Security, Military, Violence, War

Below, Idean Salehyan grapples with the question of whether forecasting conflict can help to make better foreign policy decisions.

******************************************************************

Most social scientists are concerned with explaining the behavior of individuals and groups: Why do some people commit crime? What explains the organizational decisions of firms? Why are some countries more democratic than others?  However, an increasing number of articles on violent conflict have turned their attention to predicting things like war and state collapse rather than simply explaining their occurrence (here are a few examples: 1, 2, 3, and 4).  Forecasts guide many of our decisions, such as what to wear tomorrow and where to invest our money.  But can forecasting conflict and war help to make better foreign policy decisions?

The advocates of conflict forecasting are often explicit about their desire to be ‘policy relevant’ by drawing attention to potential hot spots.  Indeed, some of these efforts are funded by government agencies, such as the US Pentagon, which would like to develop better crystal balls (see here and here).  Conflict scholars have long made arguments about what may transpire in the future, even if they are not explicitly engaged in forecasting.  These assessments about troubled areas and potential violence are certainly useful and have undoubtedly played an important role in policy debates.  Yet, scholars would do well to consider some of the normative issues involved in prognostication.

For now, I will leave aside methodological concerns that arise in debates about forecasting, such as the ‘black swan’ problem (the difficulty of predicting very rare but highly consequential events, such as the ‘Arab Spring’).  Assuming we can devise a method by which we are reasonably confident in our ability to forecast conflict—albeit with some error—what should we do with such knowledge?  What ethical and practical issues arise when using forecasts to guide policy?  I argue that scholars cannot remain aloof from the real-world implications of their work, but must think carefully about the potential uses of forecasts.

First and foremost is the issue of false positives and false negatives.  A false positive—predicting that an event will occur when it does not—is relatively costless if it merely causes you to carry an umbrella on a dry day.  However, what level of precision is needed before making life-and-death decisions?  If it is foretold that there is an 80% chance that North Korea will attack the South in the next year, is this sufficient to launch a preemptive strike?  Is 95% or 99% confidence a better benchmark?  Decisions based upon beliefs about the future always carry a degree of uncertainty, but there is no clear ethical standard for assessing the tradeoff between thousands of lives and the risk of being on the wrong side of the probability distribution.  In addition, actions taken to forestall a conflict or prevent a terrorist attack make ex post assessments extremely difficult: did the event fail to occur because of the policy intervention, or because the forecast was a false positive?  We can never know whether a forecast is truly accurate if knowledge of the future meaningfully shapes that very future.
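
To make that tradeoff concrete, here is a minimal sketch of the classic cost-loss decision rule familiar from weather forecasting, with entirely hypothetical cost figures: the probability threshold at which acting becomes ‘rational’ is simply the ratio of the cost of acting to the cost of the unprevented event, so the benchmark is set by the value judgments fed into the calculation, not by the statistics.

```python
# A minimal cost-loss sketch. Every cost figure below is a
# hypothetical placeholder, not an estimate of any real scenario.

def act_threshold(cost_of_action: float, cost_of_event: float) -> float:
    """Probability above which acting minimizes expected cost.

    Acting incurs cost_of_action for certain (and is assumed to
    prevent the event); not acting incurs cost_of_event with
    probability p. Expected costs are equal at
    p = cost_of_action / cost_of_event.
    """
    return cost_of_action / cost_of_event

# Umbrella case: cheap precaution, modest loss, so a low threshold.
print(act_threshold(cost_of_action=1, cost_of_event=20))  # 0.05

# Preemptive-strike case: both costs are enormous and, hypothetically,
# of similar magnitude, so the threshold lands at 0.6, nowhere near
# the 80%, 95%, or 99% benchmarks asked about above. The benchmark
# encodes an ethical weighting, not a statistical one.
print(act_threshold(cost_of_action=60_000, cost_of_event=100_000))  # 0.6
```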

False negatives—failing to predict events that do occur—raise problems of their own.  If we are too wedded to forecasts, they may cause us to ignore potential problem areas.  To use the weather example again: the forecast may call for sunny skies, but if one sees heavy grey clouds it would be wise to carry an umbrella nonetheless.  In addition, policymakers would do well to pay attention to improbable but significant events.  If there is a 0.01% predicted probability that a terrorist will release a biological agent in New York, killing hundreds of thousands of people, most would agree that some preventative measures should be taken.  Forecasts themselves do not give us a standard for when to take action.
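
The intuition behind that example is simple expected-loss arithmetic. A short sketch, using the 0.01% figure from the text and a placeholder for the “hundreds of thousands” of deaths:

```python
# Expected-loss arithmetic for a rare but catastrophic event.
# The 0.01% probability comes from the example above; the death
# toll is a placeholder standing in for "hundreds of thousands".
p_attack = 0.0001
deaths_if_attack = 300_000

expected_deaths = p_attack * deaths_if_attack
print(expected_deaths)  # 30.0

# A tiny probability times a catastrophic loss still yields a large
# expected cost, which is why "improbable but significant" events can
# warrant preventative measures that the raw forecast probability,
# read in isolation, would seem too small to justify.
```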

Rather than preemptive strikes and interventions, forecasts could simply help policy makers take precautionary measures.  But here we face the potential for self-fulfilling prophecies.  Actions taken to forestall or prevent conflict may set off a feedback loop, a spiral of events that makes conflict more likely than it would otherwise have been.  Say, for example, that in response to a predicted North Korean strike, South Korean, Japanese, and US warships were put on alert.  This would likely be seen as a provocative move, prompting countermeasures by Pyongyang and Beijing.  One can see how such a chain of events could easily lead to missteps and errors, making war more likely.
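
A toy escalation model makes the self-fulfilling-prophecy worry concrete. In the sketch below, where every parameter is invented for illustration, a policy rule triggers an alert whenever the forecast probability of conflict crosses a threshold, and each alert is read as a provocation that pushes the true probability higher:

```python
# Toy self-fulfilling-prophecy loop; all parameters are invented.
p_conflict = 0.20       # initial forecast probability of conflict
alert_trigger = 0.15    # policy rule: go on alert above this level
escalation_bump = 0.08  # effect of each round of countermeasures

for round_num in range(1, 6):
    if p_conflict > alert_trigger:
        # The precautionary alert is read as a provocation; the
        # adversary's countermeasures raise the conflict probability.
        p_conflict = min(1.0, p_conflict + escalation_bump)
    print(f"round {round_num}: p(conflict) = {p_conflict:.2f}")

# The printed probability climbs 0.28, 0.36, 0.44, and so on: the
# forecast that triggered the alert helps bring about the very
# outcome it warned against.
```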

To these concerns, many forecasters would likely respond that false positives and false negatives are unavoidable, but that a precise statistical methodology, which attaches explicit levels of confidence to its estimates, is surely superior to ‘educated guessing’.  This is a convincing point.  Some would add that it is up to others to decide the level of confidence they need before acting and what specific measures to take—the scholar’s job is only to inform policy, not to make normative judgments.  This argument is less satisfying.  The same scientific precision that makes statistical forecasts better than ‘gut feelings’ makes it all the more imperative for scholars to engage in policy debates.  Because statistical forecasts are seen as more scientific and valid, they are likely to carry greater weight in the policy community.  I would expect—indeed hope—that scholars care about how their research is used, or misused, by decision makers.  Yet claims to objectivity and cool-headed scientific detachment make many academics reluctant to advocate for or against a policy position.

Forecasting war is not like forecasting the weather or predicting who will win the next presidential election.  On this issue, science cannot be divorced from morality.  Researchers must be attuned to the real-world implications of their findings and be bold enough to take a stand (see here on the ethics of forecasting).  If social scientists will not use their research to engage in policy debates about when to strike, provide aid, deploy troops, and so on, others will do so for them.  Conflict forecasting should not be seen as value-neutral by the academic community—it will certainly not be seen as such by others.

P.S. Jay Ulfelder has just posted a thoughtful reply.

Comments

Rex Brynen July 29, 2013 at 9:35 am

I’m not at all convinced that “because statistical forecasts are seen as more scientific and valid they are likely to carry greater weight in the policy community.” On the contrary, based on my own experience (having worked in a foreign ministry policy shop, as an intelligence analyst, and as part of a multi-year systematic review of intelligence prediction) I find that statistical forecasts tend to carry far less weight than qualitatively-derived ones in foreign policy-making.

There are, I think, several reasons for this. One is simply tradition, and the extent to which intelligence assessment processes have always been dominated by qualitative approaches. In addition, few analysts, and fewer policy-makers, have the quantitative tools to make sense of statistically-based forecasts. Statistical forecasts are often too general to be of use in making specific policy. To the extent that they fail to identify causal connections and sequences, their policy value also declines, since policy is all about intervening at the level of causes. Similarly, their early-warning value may be limited if they simply report frequentist correlations and lack a robust, sophisticated causal model able to generate ongoing trend and early-warning indicators.

Furthermore, qualitative predictions (which should not be caricatured as “gut feelings”) can actually be quite good. Mandel’s analysis of the predictions made by intelligence analysts shows a roughly 85-90% accuracy rate, with quite impressive levels of discrimination. This is substantially higher than the prediction record of the pundits that Tetlock studied—possibly because of more rigorous attention to method in the IC, possibly because of different methodologies in the two sets of studies, or possibly because IC recruitment tends to select on “foxes” rather than “hedgehogs”. If the latter is part of the answer, it also suggests that the quality of academic prediction (and indeed social science scholarship in general) could be improved by greater attention to cognitive-processing issues among predictors, a point that also seems to have been brought home by the Good Judgment Project (http://www.goodjudgmentproject.com).

Finally, I think it is important for us in the academic world who would like our predictions to have policy impact to recognize that the methodological quality of our predictions has only limited effect on their reception within the policy community. Rather, if one wants academic ideas to have policy influence one needs to also devote particular attention to how they are expressed, packaged, and used within the complex and dynamic policy process—and how the bureaucratic politics of this might best be influenced.

Jay Ulfelder July 29, 2013 at 9:43 am

Thanks for reinvigorating a conversation around this question. I’ve blogged a response that echoes some of the points Rex makes in his great comment above:

https://dartthrowingchimp.wordpress.com/2013/07/29/yes-forecasting-conflict-can-help-make-better-foreign-policy-decisions/

Brian Forst July 29, 2013 at 10:44 am

In no domain is the peril of the self-fulfilling prophecy greater than in predicting warfare. Until an enforceable solution is found to the Prisoner’s Dilemma, prediction models will be incapable of avoiding the self-fulfilling prophecy, biasing the models toward the pre-emptive strike and the associated costs in human suffering. We should be able to find ways to protect ourselves without resorting to such foolishness.

Andreas Beger July 29, 2013 at 1:08 pm

Forecasting conflict (and other things) is not new in the policy world; isn’t this what intelligence agencies are partly for? So the concerns about self-fulfilling prophecies, moral issues, and the dangers of being wrong apply just as well to these non-political-science, and probably non-quantitative, forecasts. They are not unique to political science.

The possibility that, acting on a forecast, we do or recommend things that result in harm is, I think, just part of doing business in the real world. Ideally we will have something to add in terms of accuracy or reliability compared to the more typical kind of prediction that occurs in the policy world (I am only familiar with the defense part of it), but the risk that we will be wrong is just as much part of it. As Jay notes in his response, the decisions will be made whether we contribute to them or not.

Idean Salehyan July 29, 2013 at 1:13 pm

Thanks, Rex and Jay, for correcting me on the claim that statistical forecasts are seen as more “valid” by the policy community. I should have been clearer on this point. While conflict forecasting may not currently be as influential as, say, demographic or economic forecasts, we may get there in the future. As we develop better models and as predictive accuracy increases, such tools MAY eventually have greater influence than they currently do. DARPA and other agencies are placing big bets on our ability to significantly improve the technique.

Sathya July 31, 2013 at 2:18 am

As a novice, one thing I find appealing is that scholars in both camps acknowledge the inherent limitations of both worlds, if I may say so. Some time ago, Jay discussed a few of these limitations from a data perspective, in the context of excessive romanticism about “Big Data”; he explained the limits of accuracy in the domain of social science. A physicist running an experiment in a lab has the opportunity to try different variations of a newly formulated equation, finding, or idea. But in the case of a political scientist, the outputs generated by statistical models can, at best, end up on the table of a policy decision maker, or perhaps prove useful to a newspaper editor.

The argument about false positives and false negatives is highly relevant. What if a false positive becomes a precursor to Cold War II or World War III? I am not sure that forecasts of political events gain much fanfare beyond academic circles; in other words, policy decisions are rarely taken based on forecasts. At the same time, we cannot discount the fact that a decision taken based on a specific forecast will always add value and support to the argument made.

It is helpful to distinguish warfare forecasts from others, because there is a chance of things going terribly wrong even if the forecasts are statistically correct. Observational studies come in handy for many policy decision makers, and they remain an important tool, if not the only one, for decision making; that is still better than taking decisions based on a gut feeling.

http://dartthrowingchimp.wordpress.com/2013/05/14/im-down-with-complexity-and-all-but/
