Under the heading, “Bad graph candidate,” Kevin Wright points to this article, writing:
Some of the figures use the same line type for two different series.
More egregious are the confidence intervals that are constant width instead of increasing in width into the future.
Indeed. What’s even more embarrassing is that these graphs appeared in an article in the magazine Significance, sponsored by the American Statistical Association and the Royal Statistical Society.
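To see why constant-width intervals are wrong here, consider the simplest forecasting case: a random walk. (This is a minimal sketch for illustration; the article's actual model is not specified, but the qualitative point, that predictive uncertainty grows with the horizon, holds for essentially any forecast.)

```python
import math

# For a random walk y_t = y_{t-1} + e_t with e_t ~ Normal(0, sigma^2),
# the h-step-ahead predictive sd is sigma * sqrt(h), so a 95% interval
# should widen into the future rather than stay constant.
sigma = 1.0   # innovation sd (illustrative value)
z = 1.96      # approximate 95% normal quantile

def interval_half_width(h, sigma=sigma, z=z):
    """Half-width of the 95% prediction interval h steps ahead."""
    return z * sigma * math.sqrt(h)

# Half-widths at horizons 1..5: strictly increasing in h.
widths = [interval_half_width(h) for h in range(1, 6)]
```

A graph whose bands have the same width at horizon 5 as at horizon 1 is implicitly claiming we know the distant future as precisely as the near future.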
Perhaps every scientific journal could have a graphics editor whose job is to point out really horrible problems and require authors to make improvements.
The difficulty, as always, is that scientists write these articles for free and as a public service (publishing in Significance doesn’t pay, nor does it count as a publication in an academic record), so it might be difficult to get authors to fix their graphs. On the other hand, if an article is worth writing at all, it’s worth trying to convey conclusions clearly.
I’m not angry at the authors for publishing bad graphs. Scientists typically don’t get training in how to construct or evaluate graphical displays; indeed, I’ve seen stuff just as bad in JASA and other top statistics journals. But it would be good to catch this stuff before it gets out for public consumption.