Author Archive | John Sides

5 Articles on Military Interventions

So maybe we are going to intervene in Syria, and maybe not.  Either way, these articles are right on point.

Thanks to Cambridge University Press for ungating these articles for the next two months.

Oh No, My Research Project on the Culture and Social Norms of Museum Lawyers Is In Big Trouble!


Step One: Look for Questionable Grants
Click here to open the National Science Foundation website. In the “Search Award For” field, try some keywords, such as: success, culture, media, games, social norm, lawyers, museum, leisure, stimulus, etc. to bring up grants. If you find a grant that you believe is a waste of your tax dollars, be sure to record the award number.

Step Two: Submit Award Numbers
Use this form to submit the award numbers of grants that you believe are wasteful; we will publish a report outlining the grants identified by the YouCut community.

Those are the instructions on Rep. Eric Cantor’s “YouCut” page for the National Science Foundation.  I invite readers to enter their names and an award number, and then, in the comments section provided, to tell Rep. Cantor and Rep. Smith why this is a great award.


How the Media Put BPA on the Agenda in the States

This is a guest post by Simon Kiss.

*****

Few chemicals have attracted as much media and scientific attention as bisphenol A (BPA).  This common chemical has been accused of being a cause of everything from obesity to premature onset of puberty to a skewed gender ratio to cardiovascular disease.  Canada, Denmark, France, the EU, and many US states have adopted measures prohibiting polycarbonate baby bottles made with BPA. And yet, reading the fine print on many of these regulatory decisions reveals massive uncertainties underlying the assertions that current exposure to BPA is harming us.  For example, see the World Health Organization’s recent assessment here.

If the science is so conflicted, why did some jurisdictions adopt regulatory bans on products made with BPA while others did not? Before the United States ever took action at the federal level (at the request of the plastics industry, not environmental groups, mind you), Canada and a number of US state legislatures began doing so. When I first got interested in this case, it seemed that the Canadian media were covering the issue far more heavily than media in other countries.  This proved to be the case, but I also realized that the varied legislative paths of the US states offered a natural experiment for testing the effects of media coverage about BPA on whether state legislatures considered or adopted bans on products made with BPA.

The short answer is that media attention to BPA helped initiate and sustain attempts at regulation.  Thus, BPA regulation did not follow a traditional path of diffusion—whereby one state’s actions lead other states to act similarly.  Instead, local news stories within that state helped to produce a response from state lawmakers.

My analysis—recently published here (ungated)—combines information on news stories about BPA in major daily newspapers with information on state legislative activity regarding BPA.  After accounting for several other factors besides media coverage, I find that news stories were consistently linked to the chance that a state legislature would consider legislation banning products made with BPA.  And even though this coverage did not affect the chances that a legislature would actually enact a ban, in 8 of the 9 states that have banned BPA, the ban passed only after a previous legislative session had first considered it.  Thus, news coverage appears to have contributed indirectly to several outright bans.
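To make the design concrete, here is a minimal sketch of the kind of model the paragraph describes: a logistic regression of whether a state legislature considered a BPA ban in a given year on in-state news coverage, with a diffusion control.  This is not the published code; the file name, variable names, and controls are hypothetical stand-ins.

```python
# A sketch of the analysis described above, under assumed data and names.
import pandas as pd
import statsmodels.formula.api as smf

# One row per state-year: a binary indicator for whether a BPA bill was
# considered, a count of BPA stories in major in-state dailies, and controls.
df = pd.read_csv("bpa_state_years.csv")  # hypothetical file

model = smf.logit(
    "considered_ban ~ news_stories + citizen_ideology + dem_legislature"
    " + neighbor_adopted",  # diffusion control: did a neighboring state act?
    data=df,
).fit()
print(model.summary())
```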

These findings have two important lessons.  First, at the state level, the policy response to concerns about BPA is driven more by media coverage than by scientific concern.  This suggests that, second, media coverage of complex risks can drive policy-makers to action even in the absence of scientific consensus.  Thus, journalists must be cautious in describing the research—lest their coverage help to produce an overreaction to uncertain science.

We thank Mass Communication and Society and Taylor & Francis for ungating this article.  Image from North Carolina Health News.


The Promise and Perils of Sharing Work-in-Progress

A junior scholar, whom I’ll call Pat, writes with the following question:

As a young researcher, I am conflicted about sharing working papers on my website or on SSRN. While I certainly understand the importance and possible utility of sharing work and possibly getting feedback, it also seems that there are dangers of posting pre-peer reviewed articles: having it publicly trashed prior to submission, losing the anonymity (kind of) guaranteed during the peer review process.

Since we often discuss working papers on this blog, I’ll hazard a few thoughts.  First, one benefit of sharing, as Pat says, is additional feedback.  But beyond this, you also get the benefit of having your work circulate more widely.  Some significant part of being (viewed as) “successful” in academia is being visible to your peers.  Yes, the actual quality of your work matters more, but visibility counts for something.  In tenure decisions, departments often want evidence that you are “known” in the field.  There are lots of ways to build visibility—going to conferences, networking, etc.—but almost all of them involve sharing your work, even in its early stages.  Pat raises the possibility that your work may get trashed.  That can happen, but I think positive or constructive feedback is more common.

Second, I think there are benefits to science from having working papers shared.  Peer-reviewed journals are valuable for many reasons, but they create pathologies.  A lot of deserving research is not published.  (Acceptance rates at top political science journals are below 10%.)  And not only that, but often it is certain kinds of research that are not published, such as research with null findings (leading to the well-known file drawer problem) and research that replicates an earlier study (which everyone agrees is valuable but few journals seem to want to publish).  At least if working papers are publicly available, there is some chance that such research will achieve visibility even if it is difficult to publish.  Moreover, there is also the extraordinary lag between submitting to a peer-reviewed journal and (if lucky) actually seeing the article in print.  This can take 2 years or more—perhaps reason enough to circulate working papers.  (For more thoughts on the value of having research circulate before peer review, see Paul Krugman.)

What are the problems of sharing working papers?  Pat raises the possibility that it will compromise the anonymity of double-blind peer review.  Of course, Pat qualifies this (“kind of”), which leads to my thought: given the existing ways in which peer review often isn’t blind—such as because papers have already circulated at conferences—I don’t think that sharing working papers on a website or SSRN has much additional effect.  Moreover, given that acceptance rates at journals are already so low, I just don’t think it makes that much difference when a paper’s early circulation ends up making the resulting peer review process less-than-blind.  In a world with acceptance rates of 8-10%, there’s just not much a scholar can do to “game” that outcome for good or ill (other than try to produce better work, in which case we’re back to the value of feedback on working papers).

To me, the more serious challenge with working papers is their possible negative consequences for science as a whole.  The World Bank’s Berk Ozler had a good post about this a couple of years ago.  He points out that findings sometimes change between early and later versions of papers, but people’s interest in the first version of the research often outstrips their interest in the revised version.  Ozler:

People are busy. Most of them had only read the abstract (and maybe the concluding section) of the first draft working paper to begin with. Worse, they had just relied on their favorite blogger to summarize it for them. But, guess what? Their favorite blogger has moved on and won’t be re-blogging on the new version of the working paper. Many won’t even know that there is a more recent version. The newer version, other than for a few dedicated followers of the topic or the author, will not be read by many. They will cling to their beliefs based on the first draft: first impressions matter. By the time your paper is published, it is a pretty good paper – your little masterpiece. The publication will cause an uptick in downloads, but still, for many, all they’ll remember is the sweatshirt, and not the sweat that went into the masterpiece.

And it could get even worse:

There is another problem: people who are invested in a particular finding will find it easier to take away a message that confirms their prior beliefs from a working paper. They will happily accept the preliminary findings of the working paper and go on to cite it for a long time (believe me, well past the updated versions of the working paper and even the eventual journal publication). People who don’t buy the findings will also find it easy to dismiss them: the results are not peer-reviewed. At least, the peer-review process brings a degree of credibility to the whole process and makes it harder for people to summarily dismiss findings they don’t want to believe.

I don’t think these problems mean that Pat or any other specific person shouldn’t share working papers.  They might think through the question “what is likely to change in this paper moving forward?”—and if they feel that the empirics are very solid, they might be more inclined to share the paper publicly.

But the problems Ozler mentions are obviously broader, and they demand a disciplinary response.  But what?  Ozler comes down on the side of speeding up peer review, thereby helping to ensure that any political or policy response to research takes place after peer review.  That’s hard to do—ask any journal editor how easy it is to get peer reviewers (Ozler notes that some journals are actually paying reviewers)—but I support the idea of speeding up in principle.

Ultimately, I’d say that the potential benefits of sharing likely outweigh the costs for any individual researcher.  For disciplines as a whole, the picture is murkier.  Figuring out how to extract the good that comes from sharing working papers while avoiding the bad isn’t easy.

I welcome thoughts in comments.


Potpourri

  • Reporting from the APSA panel on the NSF.  Rep. Lipinski noted that a more immediate concern than the reauthorization of the NSF is Rep. Lamar Smith’s proposed High Quality Research Act.  For more on that, see here.  One set of objections is here.

A Phantom Decline in Militaries?

This is a guest post by Steven Childs.

*****

In his earlier post at The Monkey Cage, James Fearon noted a general decline in militarization across the globe, using regional averages of military spending as a share of Gross Domestic Product (see below in black) as well as the number of soldiers per 1,000 people (red). Dr. Fearon hypothesizes that these declines from 1945 to 2007 are due to the advent of nuclear weapons, which reduced great-power conflict, as well as to the pacifying effects of the spread of democracy.

Although these explanations seem intuitive, there are potential validity issues with using spending and force-size measures as proxies for arms levels or the health of a military. In the case of the former, a nation’s defense spending can be fraught with politically induced inefficiencies (on this count it is worth noting that democracies are no more immune to such pork than autocracies).  In the latter instance, as technologies mature and weaponry becomes more capital-intensive, there is less demand for sheer military “labor” and more demand for fewer, better-trained troops. An illustrative analogue would be assuming that the decline in computer prices today is indicative of a “de-digitalization” of society. Consequently, although the numbers in uniform decline, the military capability of the state can remain static or even increase. For instance, the People’s Republic of China has cut numerous divisions over the last decade, but few would claim that it is less capable militarily. An analysis worth considering would be one that supplements budgeting and manpower measures with one that accounts for hardware.

To flesh out these ideas, I replicated Dr. Fearon’s regional graphs while adding two more measures. The green line is the regional average of the Polity score, a generally accepted academic measure of the degree of a nation’s electoral competition. For ease of interpretation alongside the lines for expenditure (black) and personnel (red), the range has been rescaled so that 0 reflects a complete autocracy and 20 indicates a full democracy. Additionally, I include a measure of arms imports from the Stockholm International Peace Research Institute (SIPRI) Arms Transfers Database. The unit is SIPRI’s proprietary Trend Indicator Value (TIV). The number of imported TIVs for a country is divided by each 1,000 of its military personnel, and is then averaged for each region (purple, scaled on the right Y-axis). There are limits to this measure, most notably that arms imports do not fully capture the military capability of states that produce their own weapons domestically. Nevertheless, the measure is a reasonable approximation of military capability for the majority of the globe, given that the greatest conventional arms producers and exporters are concentrated in the West and the former Soviet Union.
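As a concrete illustration, the sketch below shows how these two added series could be built from a country-year panel: SIPRI TIVs per 1,000 troops averaged by region, and the Polity score shifted from its native -10 to +10 range onto the 0 to 20 scale used in the graphs. The file and column names are hypothetical stand-ins, not Childs’ actual code.

```python
# A minimal sketch, under assumed column names, of the measures described above.
import pandas as pd

df = pd.read_csv("country_years.csv")  # hypothetical country-year panel

# Polity runs from -10 (full autocracy) to +10 (full democracy);
# adding 10 maps it onto the 0-20 range described in the text.
df["polity_rescaled"] = df["polity"] + 10

# Imported TIVs per 1,000 military personnel, then averaged within region-year.
df["tiv_per_1000"] = df["tiv_imports"] / (df["personnel"] / 1000.0)

regional = (
    df.groupby(["region", "year"])[["polity_rescaled", "tiv_per_1000"]]
      .mean()
      .reset_index()
)
```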

From these graphs, two observations are apparent. First, at this generalized level, democracy does not seem to have the uniform pacifying effect that is expected. That is not to say that democratization fosters militarization, but the relationship between the two seems weak. For instance, Latin America and the Caribbean saw noted democratization in the 1980s “Third Wave,” but the militarization measures were relatively static. Sub-Saharan Africa also did not see much change, despite the region’s movement away from autocracy in the 1990s. Again, this is cautionary in that we are looking at aggregates and not country-specific relationships.

Second, and critically, the measure of arms imports does not suggest demilitarization. For all regions except the Middle East and North Africa, arms-import levels have remained fairly static since 1960. Moreover, the regional arms-import data do not necessarily validate the nuclear-revolution hypothesis, which would predict clear reductions in conventional arms imports.

What might account for this difference between declining spending and personnel, on the one hand, and static imports, on the other?  One explanation is the application of Moore’s law to the defense sector. Moore famously predicted that the number of transistors on a chip would double roughly every two years (popularly rendered as computing power doubling every 18 months); an economic side effect is that prices drop for previous generations of hardware. Similarly, as defense technologies improve and states develop economically, they are able to sustain or increase their military capabilities with less expenditure and fewer personnel. The case of Singapore is telling: its military spending as a share of GDP fell from 5.1% in 2002 to 4.2% in 2007. During this period, the state added 6 frigates, 20 fighter jets, 20 attack helicopters, 96 main battle tanks, and scores of precision-guided munitions to its arsenal.

This analysis suggests that the cross-national declines in military expenditures and force sizes have less to do with political demilitarization, and more to do with the increasing technological efficiencies of defense markets. It is worth noting this distinction before making pacifistic inferences about the international security climate, particularly given turmoil in the Middle East and Asia.


Creating More Knowledgeable Americans via Public Broadcasting

This is a guest post by Patrick O’Mahen, a fellow at the University of Michigan’s Weiser Center.

*****

Last week, The Monkey Cage highlighted new research by Stuart Soroka and colleagues, suggesting that watching public broadcasting increases political knowledge. In his comments, John Sides noted that the problem in the United States is that few people watch public broadcasting, limiting any practical benefits. My own research concurs with and extends both Soroka and colleagues’ conclusions and Sides’ practical criticism. Watching public broadcasting not only seems to increase political knowledge, but also reduces knowledge gaps between haves and have-nots. However, historical development of national broadcasting systems awarded first-mover advantages to public broadcasters in most European countries and commercial broadcasters in the United States. As a result, public broadcasting in our country has always faced an impossible uphill fight against established commercial networks.  But I have a modest proposal that might help.

Around the time Soroka and colleagues conducted their study, I independently found that, across 14 western European countries, watching public broadcasting increases correct answers to political knowledge questions by roughly 12 percent, but only in countries that subsidize public broadcasting. That the two studies generated similar results at different points in time, using different data, different countries, and different methods strengthens the argument that public broadcasting increases political knowledge – although questions about correlation vs. causation remain.

But even if public broadcasting increases knowledge, this may be less salutary news if this increase is concentrated among the relatively rich, well-educated people who already are politically knowledgeable. In this case, public broadcasters would actually worsen political inequality – not a catchy slogan for an NPR pledge drive.

Fortunately, I find that watching public broadcasting reduces knowledge gaps between rich and poor people:

Score one for Mr. Snuffleupagus.

However, that happy result leaves the problem that Americans rarely consume public broadcasting. The good news is that there is a proven way to ensure long-term influence and a large audience for public broadcasting. The bad news is that the time to implement the solution was in 1927.

As I argue here, the initial conditions under which broadcasting systems formed in the 1920s and 1930s determined how much sway public broadcasters have nearly a century later. For example, Britain awarded a public national broadcaster a monopoly on the airwaves, which froze out commercial broadcasters from the early development of radio. With a monopoly, the public broadcaster easily dominated early development and gained a massive first-mover advantage in broadcasting.

In contrast, the United States declined the opportunity to develop a national public broadcaster and let commercial broadcasters dominate early development, although thriving non-profit and public interest sectors survived into the late 1920s. When Congress did finally move to regulate the industry under the Radio Act of 1927, the regulations sharply favored commercial broadcasters and banished public broadcasters to the dusty low-power corners of the spectrum.

Canada’s policy found a middle ground. Commercial broadcasters dominated the early development of radio. But when the government regulated the industry in the early 1930s, it moved to counter American cultural influence and to improve service to rural Canada by creating a national broadcaster. However, the commercial broadcasters had enough influence to retain their existing frequencies.

In all three countries, the early move created a self-reinforcing system. Listeners grew used to and supported the status quo. Technical expertise developed within the existing broadcasters, leaving them better able to pioneer new technology, such as television. Finally, the dominant interests in each country were able to influence government officials as they developed new broadcasting policies.

Unsurprisingly, Britain and similar countries now have the highest audience shares for public broadcasting, followed by mixed systems like Canada’s, which Australia and Japan share as well. Lagging behind are the United States and other countries whose policies initially favored commercial broadcasters:

Despite the disadvantages that public broadcasters have faced in the United States, there may still be a way to encourage public broadcasting.  Instead of developing yet another TV channel or website, perhaps we should borrow the philosophy of advertisers. Create a news organization (I’ll call it NewsComm) to research and produce 30- to 90-second story blocks that can run during commercial breaks on television and as pop-up or banner ads on popular websites.

NewsComm would be funded by an endowment raised from one-time donations by charitable organizations, university systems, states, localities and individuals, matched from the proceeds of a temporary federal sales tax on televisions, computers, smart phones and other electronic devices. The organization could be run by a board of governors named in equal proportions by the federal and state governments, non-profit donors and by the journalists employed by the organization. Federal employee scales could set standards for compensation.

NewsComm seems unorthodox, but it builds on political advertising’s success in educating viewers. Colleges, states, and foundations already fund public broadcasters in the United States, while other countries have used a tax on electronic equipment to fund their public broadcasters. The beauty of NewsComm is that the public financing consists of short-term levies to build an endowment, which shields taxpayers over the long term and ensures the organization’s financial independence from the government of the day and from the pressures of commercial advertisers.

Let’s say, for example, that NewsComm amassed a $20 billion endowment (roughly equivalent to the United States spending half of the GDP per capita that the BBC spends annually). Spending 3 percent annually would yield $600 million a year: $150 million for capital needs and employees, leaving $450 million to spend annually on advertising space – roughly the budget of a major presidential campaign.
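The arithmetic behind those figures is simple enough to check in a few lines (these are the post’s illustrative assumptions, not actual budget numbers):

```python
# Back-of-the-envelope check of the endowment figures above (assumptions only).
endowment = 20_000_000_000        # hypothetical $20 billion endowment
annual_draw = 0.03 * endowment    # 3% spending rule -> $600 million per year
operations = 150_000_000          # capital needs and employees
ad_spending = annual_draw - operations
print(f"annual draw: ${annual_draw:,.0f}; ad spending: ${ad_spending:,.0f}")
# annual draw: $600,000,000; ad spending: $450,000,000
```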

True, it’s difficult to present in-depth stories with nuance in 30 to 90 seconds, but in an age of Twitter, these challenges already exist across all news media. They are also partially surmountable – look at the masterful short posts on places like Wonkblog or Economix in traditional media outlets.

The NewsComm method also has several advantages. First, unlike news broadcasts, NewsComm stories can run multiple times across multiple outlets – for days if necessary. Second, because the stories would be produced in advance, they would have to focus on ongoing policy debates instead of chasing the latest scandals and frivolity.

Perhaps NewsComm is gimmicky. But as the fragmenting media market decreases the audience for public broadcasting, we need to find new ways to provide the knowledge that citizens need to hold elected leaders accountable. And if an advertiser can promote one ridiculous trick to cut 15 percent of your belly fat in a week, wouldn’t it be great if we could use this one ridiculous trick to boost citizens’ political knowledge by 15 percent in a year?


Cameron Defeated on Syria by Ghost of Blair

We welcome another guest post by Stephen Benedict Dyson.

*****

The UK parliament has voted against authorizing an attack on Syria, in the most direct challenge to executive authority on foreign policy in recent British history. Britain will not be joining any U.S. action, and has taken the significant step of distancing itself from its superpower ally on the eve of a military strike. Prime Minister David Cameron is left a weakened figure, and the development poses terrific problems for President Obama’s Syria policy.

The high drama is reminiscent of Tony Blair’s troubles during the run-up to the Iraq war, when he won endorsement for his Iraq policy in the teeth of huge parliamentary rebellions by his own backbenchers. Blair’s choices then were repeatedly invoked during the Syria debate. Cameron will be reflecting upon the exquisite irony that it was Blair himself who established the precedent of asking for a parliamentary vote before committing armed forces. The government can do it regardless under royal prerogative, yet Blair was in such a pickle over Iraq that he felt he needed parliamentary backing. Cameron followed Blair’s lead and recalled parliament, to the chagrin of senior Conservative Party colleagues who saw the rebellion coming. After the final Iraq vote in 2003, The Guardian newspaper commented that parliament had been given “the power to stop war before it begins,” although it “did not take that chance, alas.” This time, it did.

Why did Blair win, and Cameron lose? Opponents of action in 2003 and 2013 used similar parliamentary tactics, asking for a vote not on the merits of the action per se but on the narrower question of whether the government had proven its case. Chris Smith was a Labour Member of Parliament who tabled the amendment opposing Blair in 2003. The amendment simply stated that “the case for war has not yet been established.” When I interviewed Smith several years ago for my book on Blair, he told me that the wording had been “very carefully chosen in order to try and unite everyone who had doubts, including some who would never under any circumstances have contemplated going to war, right the way through to some who, if the weapons inspectors had come up with evidence, would probably have voted for war.”

Similarly, in the Syria debate the core of Labour leader Ed Miliband’s critique was that the government was moving too quickly and should follow a multi-stage roadmap of consultation with parliament and the United Nations. Miliband sketched out an elegant if opportunistic position: he was not against the use of force per se, but he opposed precipitate military action before parliament had been consulted and the UN process had been exhausted. Beneath this headline moderation, the Labour leader spoke forcefully about the risks of taking action and raised doubts about whether he was persuadable. The contradictions in Miliband’s argument would have been exposed over time, yet his stance proved durable enough to hold his own party together on the issue and to tar Cameron as over-eager to rush to war.

Cameron’s parliamentary position was much less favorable than that faced by Blair a decade ago. Blair made his decisions on Iraq atop a stonking parliamentary majority of 179. The opposition Conservative Party was fully supportive of intervention in 2003, and so Blair could survive a massive rebellion by his own MPs. Cameron presides over a hung parliament – no political party commands an overall majority. He governs in coalition with the Liberal Democrats, the only major British party to have opposed the Iraq war. Scores of Cameron’s own backbenchers rebelled on Syria, and several Liberal Democrat MPs voted against their own coalition. The Labour leadership took the highly unusual step of opposing the government on a major foreign policy crisis. The composition of parliament this time left Cameron with very few votes to play with.

The scope of the proposed intervention was also very different. It was clear in 2003 that Blair was asking for the commitment of massive forces by air, land, and sea in order to overthrow the Saddam regime. Although Blair profoundly underestimated – or undersold – the cost and duration of the occupation, he was clearly seeking authorization for a major undertaking. This time Cameron was careful to stress the limited aims and means of the intervention. It was not about regime change, invasion, or taking sides in the civil war.

Paradoxically, these limited aims made it harder for Cameron to win the vote. At every stage in the Iraq debate, Blair raised the stakes, casting the issue in stark world-historical terms and threatening to resign the prime ministership if he did not win parliamentary support. Blair outlined a policy of total commitment in service of era-defining goals. By contrast, Cameron found it difficult to specify the mechanisms by which limited military strikes would achieve limited objectives. Upholding the norm against chemical weapons use, or punishing Assad, seemed nebulous aims compared to Blair’s all-in rhetoric. With limited goals and lower stakes, the forensic questioning at which Parliament excels came to the fore: what do we do if Assad uses these weapons again after we have struck him? How will we know if we have been successful in upholding a norm, or punishing a dictator? In 2003, Blair dodged specifics with impassioned appeals to the weight of history and the duties of moral responsibility. Cameron could not.

The thinking of Blair himself is one constant across the years. Possessed of a Manichean worldview, an expansive conception of the UK’s international role, and a healthy regard for his own persuasive powers, Blair made Iraq very much his own war. In 2013, the former prime minister retains the same moralizing, interventionist stance. Syria represents a “crossroads for Western policy,” he has said. The “forces” in Syria are the same as those in Iraq and Afghanistan. “They have to be defeated. We should defeat them, however long it takes, because otherwise they will not disappear. They will grow stronger until, at a later time, there will be another crossroads and this time there will be no choice.” These comments reminded British legislators and the public of the Iraq controversies at the worst possible time for the current prime minister. Cameron, who has in many ways sought to emulate Blair, was doomed to defeat by the long shadow cast by the dominant figure in modern British politics.


How We Wrote The Gamble


We have a new book on the 2012 presidential election, The Gamble, that provides one model for public engagement.  The book was designed to be an accessible academic account of the election, written in real time and published within a year of the election itself — standard timing for books focused on the general public, but an unusually short time frame for a scholarly book. Together with our publisher, Princeton University Press, we structured the project so that we could enter into the ongoing public discussion about the election alongside pundits and journalists — via continuous analysis and writing, serializing the process of peer review, and accelerating the final mechanics of publication.

Our experience writing this book suggests to us that there are underutilized opportunities for both scholars and their publishers to innovate on traditional modes of academic writing and thereby bring scholarly research to a much larger audience. We joked over the past two years that part of “the gamble” was simply writing the book itself.  We believe that this gamble has paid off, and we offer our story in hopes that it might encourage others to roll the dice.  We think this sort of project can benefit scholars, publishers, and the broader public alike.


That is from a piece that Lynn Vavreck and I wrote for Inside Higher Ed.  It discusses the motivation behind our book about the 2012 election, The Gamble, and why we think it offers some lessons—though hardly the only model—for academic researchers who want to bring their expertise to a broader audience.  You can read the piece here.  The Gamble will be released on September 15, and you can pre-order here.
