
Scientific research and the theory of countervailing power

Seth reports on a study, funded by the sugar industry, that found bad effects of an artificial sweetener called Splenda.

The background of the study is a delightful tangle. Seth reports:

One of the authors of the Duke study is a professor of psychiatry, Susan Schiffman. An earlier study of hers had pro-Splenda results. . . . Drs. Abou-Donia and Schiffman admitted that some of the results recorded in their report submitted to the court were not actually observed or were based on experiments that had not been conducted. . . .

Results in the report that were based on experiments that had not been conducted . . . that seems pretty bad to me! On the other hand, as Seth points out, maybe “the only way doctors learn about bad side effects of this or that drug is when drug reps selling competing drugs tell them.” In this case, it’s the Sugar Institute, not a drug rep, but maybe the same idea.

It reminds me of what Phil and I said when trying to publicize our work on decision making for home radon exposure. There’s no radon lobby (radon is a radioactive gas that occurs naturally) and so there’s an asymmetry, with various organizations motivated to oversell radon risks and scare people, and not too many people on the other side.

P.S. I’d never actually heard of Splenda before, but I do remember the controversy in the 1970s about saccharin—I seem to recall that rats were getting cancer after being fed the equivalent of 800 bottles of diet soda a day—and then I remember there was something called Nutrasweet, so I guess Splenda is another one of these. It’s pretty funny that I’m so removed from pop culture as to be unfamiliar with Splenda, a substance that I’m assuming is omnipresent, given that Seth discussed it without feeling the need to identify it at all to his readers.

P.P.S. It says in the press release that a trial has been set for January 2009, so maybe there’s more news on this.


More stories of corrections to scientific articles

Lee points to this article by physicist Rick Trebino describing his struggles to publish a correction in a peer-reviewed journal. It’s pretty frustrating, and by the end of it—hell, by the first third of it—I share Trebino’s frustration. It would be better, though, if he’d link to his comment and the original article that inspired it. Otherwise, how can we judge his story? Somehow, by the way that it’s written, I’m inclined to side with Trebino, but maybe that’s not fair—after all, I’m only hearing half of the story.

Anyway, reading Trebino’s entertaining rant (and I mean “rant” in a good way, of course) reminded me of my own four stories on this topic. Rest assured, none of them are as horrible as Trebino’s.

1. I did some research with Terry Speed, we published an article in a top journal, the article was cited a bunch of times, and a few years later I got a letter (yes, this was in the days of letters) from a researcher pointing out a counterexample to our theorem. I looked at the example carefully. The theorem was false and there was no way around it, no simple condition to add to make the theorem true, no way out. So I wrote a brief correction. In its entirety:

With regard to the theorem in the paper, the second part is, in general, false, and the proof, given in Section 4.2, is in error. Dr K. W. Ng and Professor A. P. Dawid have pointed out the following simple counter-example for two binary random variables x1, x2: P(0,0) = 0.3, P(0,1) = 0.2, P(1,0) = 0.2 and P(1,1) = 0.3. This joint density is uniquely specified by P(x1|x2) and P(x1), in contradiction to the second part of the stated theorem.
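If you want to check the counterexample yourself, here’s a small sketch (my own illustration, not part of the published correction). Any joint with conditional P(x1|x2) can be written as P(x1|x2)m(x2) for some marginal m on x2, so matching the x1-marginal reduces to a small linear system in m; solving it shows the joint is pinned down uniquely, setting aside degenerate marginals that put zero mass on a value of x2:

```python
import numpy as np

# The stated joint: rows index x1 in {0,1}, columns index x2 in {0,1}.
p = np.array([[0.3, 0.2],
              [0.2, 0.3]])

cond = p / p.sum(axis=0)   # P(x1 | x2): each column sums to 1
marg = p.sum(axis=1)       # P(x1)

# Any joint with conditional `cond` is cond[x1, x2] * m[x2] for some
# marginal m on x2; requiring the x1-marginal to equal `marg`, plus
# sum(m) = 1, gives an overdetermined linear system in m.
A = np.vstack([cond, np.ones(2)])
b = np.append(marg, 1.0)
m, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)

print(rank)                # 2: full column rank, so m is unique
q = cond * m               # reconstruct the joint from cond and m
print(np.allclose(q, p))   # True: it is exactly the stated joint
```

The full-rank check is what carries the uniqueness claim: an exact solution exists (the original joint supplies one), and full column rank means it is the only one.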

2. I read an article in The American Statistician many years ago. I can’t remember who wrote the article or what year it was, but it demonstrated Bayesian computation using a really ugly and complicated method. I feared that an article like this would just turn readers off from Bayes, so I wrote a letter detailing the mistakes and showing that the problem could be solved much more simply. I received a letter from the editor thanking me for my submission and saying that it would be sent out for review. This was already a surprise to me—I had no idea that letters to the editor were peer-reviewed; I had always assumed they’d just be sent to the associate editor who handled the original paper. Anyway, in due course I received a letter from the editor saying, I think, that the author of the original paper didn’t think my letter was worth responding to, so my letter didn’t appear. No big deal, but I thought I was doing a service in writing the letter—I certainly wasn’t going to get fame, fortune, or tenure for letters to the editor of The American Statistician—so it was a little annoying to feel that my time had been wasted.

3. A few years ago I somehow heard about some articles by some sociologist in London—I think it was actually a reporter who called or emailed me asking for comments—and, well, if you read my other blog, you know the rest of that story. . . . I did actually have to revise my letter to the editor in response to reviewers’ suggestions, but these comments were fair enough, and they allowed me to make the letter stronger.

4. Once I refereed an article and really hated it. The associate editor still wanted to run it, but the editor of the journal agreed with me, so he allowed me to publish a brief comment alongside the article. Other times I’ve reviewed an article I liked so much that I suggested it be run as a discussion article, and then I ended up writing one of the discussions.


Does Research Collaboration Pay Off?

Collaborative (multi-investigator) research continues to spread from the natural and formal sciences to the social sciences, and it’s supposed to be a good thing. Among the much-ballyhooed virtues of collaboration are that it enables researchers to divide their labor in ways that expand the range of pertinent literatures with which any one of them is familiar, that it brings new methodological skills into play, and that it produces synergies even when a team consists of researchers with very similar backgrounds and skill sets.

That all sounds great, but there’s distressingly little solid evidence that collaboration actually pays off. It’s pretty clear that researchers who work in groups tend to turn out more papers than lone wolves do, but are their papers really “better” in any meaningful sense? Collaboration has costs as well as benefits: because it often necessitates compromise, it may reduce risk-taking and innovation, leading to papers that are technically proficient but stale; and the outlooks, approaches, and preferences of the members of a multi-member research team may not meld well, yielding a patchwork product rather than an integrated whole.

In an article (gated; a pre-publication version is available) in the current issue of PS: Political Science & Politics, I put one aspect of the alleged payoff of collaborative research – the publishability of the papers produced via collaboration – to the test, drawing on the record of submissions to the American Political Science Review during my six-year (fall 2001–summer 2007) stint as editor.

During that period, 7.5% of the papers submitted for review were ultimately accepted for publication. 55% of the submitted papers were single-authored; the rest were by teams ranging in size from two to nine. The overall acceptance rates of single- and multiple-authored papers were essentially identical (7.5% and 7.4%, respectively). That doesn’t tell the whole story, though, because acceptance rates varied across subfields of political science and across authors’ home disciplines. With those differences taken into account (one standard way of making such an adjustment is sketched after the findings below), the following picture emerged:

(1) Overall, whether a paper was submitted by a single author or by two or more authors had no bearing on its chances of being accepted.

(2) But within that general pattern, a more specific pattern stood out. Single-authored papers did no worse than multiple-authored ones as long as at least one of the authors of the latter was a political scientist. Papers submitted by a single “outsider” fared much more poorly than those submitted by a single political scientist, multiple political scientists, or mixed teams with at least one political scientist. As I noted in the article, typically the single “outsider” was an economist, and their lack of success is consistent with the unflattering stereotype of economic imperialists “marching into neighboring disciplines without making much effort to acquaint themselves with those disciplines’ research literatures. Shortchanging contributions from outside of one’s own discipline might matter little when a paper is being considered by a journal in one’s own discipline. But seeking acceptance of one’s work without paying due heed to prior research in the journal’s own discipline … is likely to be self-defeating.”
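To make the adjustment step concrete: the article doesn’t hinge on any one method, but a standard way to compare acceptance rates across team types while holding subfield constant is a logistic regression with subfield indicators. The sketch below runs that regression on simulated data; the variable names, subfield labels, and effect sizes are all my own invention for illustration, not the actual APSR records:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000

# Simulated submissions: subfield and team composition are made up,
# with base acceptance rates that differ by subfield and a penalty
# for lone "outsiders," echoing the article's finding (2).
subfield = rng.choice(["american", "comparative", "ir", "theory"], size=n)
team = rng.choice(["single_polisci", "multi_polisci", "single_outsider"],
                  size=n, p=[0.5, 0.4, 0.1])
base = {"american": -2.4, "comparative": -2.6, "ir": -2.7, "theory": -2.2}
logit_p = np.array([base[s] for s in subfield]) - 0.8 * (team == "single_outsider")
accepted = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))

df = pd.DataFrame({"accepted": accepted.astype(int),
                   "team": team, "subfield": subfield})

# Logistic regression of acceptance on team type, adjusting for
# subfield; single-author political scientists are the reference group.
fit = smf.logit("accepted ~ C(team, Treatment('single_polisci')) + C(subfield)",
                data=df).fit(disp=False)
print(fit.summary())   # the single_outsider coefficient comes out clearly negative
```

Raw acceptance rates can mask or manufacture differences when team types are unevenly distributed across subfields with different base rates; the subfield terms in the regression are what guard against that.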

This is a topic on which more research needs to be done, on a wide variety of journals and with a wide variety of measures of the “payoff” of collaboration.


I won’t touch that except to say that I’d have paid a lot to see Wolfram and Jacques Derrida go one-on-one

Not having been able to get onto the new Wolfram site, I won’t comment further on it, but I do want to take the opportunity to point to one of the great savage book-review takedowns I’ve had the pleasure to read, Cosma Shalizi’s excoriation of Wolfram’s A New Kind of Science. Wolfram doesn’t come out of it looking well, either as a scientist, or, indeed, as a human being.

There is one new result in this book which is genuinely impressive, though not so impressive as Wolfram makes it out to be. This is a proof that one of the elementary CAs, Rule 110, can support universal computation. … The real problem with this result, however, is that it is not Wolfram’s. … This was done rather by one Matthew Cook, while working in Wolfram’s employ under a contract with some truly remarkable provisions about intellectual property. In short, Wolfram got to control not only when and how the result was made public, but to claim it for himself. In fact, his position was that the existence of the result was a trade secret. Cook, after a messy falling-out with Wolfram, made the result, and the proof, public at a 1998 conference on CAs. (I attended, and was lucky enough to read the paper where Cook goes through the construction, supplying the details missing from A New Kind of Science.) Wolfram, for his part, responded by suing or threatening to sue Cook (now a penniless graduate student in neuroscience), the conference organizers, the publishers of the proceedings, etc. … to deny Cook any authorship, and to threaten people with lawsuits to keep things quiet, is indeed very low. Happily, the suit between Wolfram and Cook has finally been resolved, and Cook’s paper has been published, under his own name, in Wolfram’s journal Complex Systems.
So much for substance. Let me turn to the style, which is that of monster raving egomania, beginning with the acknowledgments. Conventionally, this is your chance to be modest, to give credit to your sources, friends, and inevitably long-suffering nearest and dearest. Wolfram uses it, in five point type, to thank his drudges (including Matthew Cook for “technical content and proofs”), and thank people he’s talked to, not for giving him ideas and corrections, but essentially for giving him the opportunity to come up with his own ideas, owing nothing to them.
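For readers who haven’t run into elementary cellular automata: Rule 110 is a one-dimensional, two-state automaton whose entire update rule is an eight-entry lookup table (the number 110, read in binary, encodes the eight outputs), which is what makes Cook’s universality result so striking. A minimal simulation, just for flavor:

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbors; 110 = 0b01101110 gives the output bit for each of the
# eight possible three-cell neighborhoods.
RULE = 110

def step(cells):
    """Advance one generation, treating the row as a ring (wraparound)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
cells = [0] * 63 + [1] + [0] * 16
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Run it and you get the irregular, interacting triangles that Cook’s proof turns into signals and logic; that so trivial-looking a table supports universal computation is the one result Shalizi concedes is genuinely impressive.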