Needed: peer review for scientific graphics

by Andrew Gelman on September 8, 2013 · 3 comments

in Methodology

Under the heading, “Bad graph candidate,” Kevin Wright points to this article, writing:

Some of the figures use the same line type for two different series.

More egregious are the confidence intervals that are constant width instead of increasing in width into the future.

Indeed. What’s even more embarrassing is that these graphs appeared in an article in the magazine Significance, sponsored by the American Statistical Association and the Royal Statistical Society.

Perhaps every scientific journal could have a graphics editor whose job is to point out really horrible problems and require authors to make improvements.

The difficulty, as always, is that scientists write these articles for free and as a public service (publishing in Significance doesn’t pay, nor does it count as a publication in an academic record), so it might be difficult to get authors to fix their graphs. On the other hand, if an article is worth writing at all, it’s worth trying to convey conclusions clearly.

I’m not angry at the authors for publishing bad graphs (scientists typically don’t get training in how to construct or evaluate graphical displays; indeed I’ve seen stuff just as bad in JASA and other top statistics journals), but it would be good to catch this stuff before it gets out for public consumption.


Cempazúchitl September 8, 2013 at 3:58 pm

Peer reviewing is dead. The future is crowdsourcing.

Andrew Gelman September 9, 2013 at 2:45 am

Sure, but crowdsourcing has problems too, as we discuss here.

Håvard Hegre September 11, 2013 at 5:09 am

Thanks for reminding us that one should never use the same line type for two different series. My collaborators and I in this project normally take care to avoid this, and will try never to do it again! I obviously agree that conclusions should be conveyed clearly. My only defense is that these two series by definition will never cross each other, so “the upper line,” as I write in the paper, is unambiguous.

However, there is a statistical reason that the confidence intervals have roughly constant width: what is plotted here is a simulated proportion of countries in conflict. This aggregate of the model behaves much like a Markov chain, with a strong tendency toward a steady-state distribution. This steady-state distribution is decreasing since the underlying predictors are changing. The underlying predictors for the 2010-2050 period were taken from the 2006 UN projections for population, infant mortality rate, and demographic profile. The 2006 projections did not include an estimated uncertainty for the projections. If they had, the confidence bands would have been widening over time.
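The mechanism Hegre describes can be seen in a toy simulation (all numbers here are made up for illustration, not taken from the paper): each country flips between peace and conflict via a two-state Markov chain with fixed transition probabilities, and the plotted quantity is the share of countries in conflict. Because the chain mixes toward its stationary distribution, the spread of simulated trajectories settles at a roughly constant value rather than growing over time.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_share(p_onset, p_continue, n_countries=170, n_years=40, n_sims=500):
    """Simulate the share of countries in conflict under a two-state Markov
    chain. Transition probabilities and country counts are hypothetical."""
    shares = np.empty((n_sims, n_years))
    for s in range(n_sims):
        # Start with a made-up 15% of countries in conflict.
        state = rng.random(n_countries) < 0.15
        for t in range(n_years):
            # Probability of being in conflict next year depends on current state.
            p = np.where(state, p_continue, p_onset)
            state = rng.random(n_countries) < p
            shares[s, t] = state.mean()
    return shares

shares = simulate_share(p_onset=0.02, p_continue=0.80)
band_width = shares.std(axis=0)  # spread of simulated trajectories per year
# The spread settles near a steady-state value instead of widening,
# which is why bands from this kind of simulation look constant-width.
print(band_width[5], band_width[-1])
```

If the transition probabilities themselves were drawn with uncertainty each simulation (as they would be if the UN projections had carried error estimates), that parameter uncertainty would accumulate and the bands would widen over the forecast horizon.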

Håvard Hegre
