Author Archive | Andrew Gelman
In an ideal world, research articles would be open to criticism and discussion in the same place where they are published, in a sort of non-corrupt version of Yelp. What is happening now is that the occasional paper or research area gets lots of press coverage, and this inspires reactions on science-focused blogs. The trouble here is that it’s easier to give off-the-cuff comments than detailed criticisms.
Here’s an example. It started a couple of years ago with this article by Ryota Kanai, Tom Feilden, Colin Firth, and Geraint Rees, on brain size and political orientation:
In a large sample of young adults, we related self-reported political attitudes to gray matter volume using structural MRI. We found that greater liberalism was associated with increased gray matter volume in the anterior cingulate cortex, whereas greater conservatism was associated with increased volume of the right amygdala. These results were replicated in an independent sample of additional participants. Our findings extend previous observations that political attitudes reflect differences in self-regulatory conflict monitoring . . .
My reaction was a vague sense of skepticism, but I didn’t have the energy to look at the paper in detail so I gave a sort of sideways reaction that did not criticize the article but did not take it seriously either:
Here’s my take on this. Conservatives are jerks, liberals are wimps. It’s that simple. So these researchers can test their hypotheses by more directly correlating the brain functions with the asshole/pussy dimension, no?
A commenter replied:
Did you read the paper? Conservatives are more likely to be cowards/pussies as you call it – more likely to jump when they see something scary, so the theory is that they support authoritarian policies to protect themselves from the boogieman.
The next month, my coblogger Erik Voeten reported on a similar paper by Darren Schreiber, Alan Simmons, Christopher Dawes, Taru Flagan, James Fowler, and Martin Paulus. Erik offered no comments at all, I assume because, like me, he did not actually read the paper in question. In our blogging, Erik and I were publicizing these papers and opening the floor for discussion, although not too much discussion actually happened.
A couple years later, the paper by Schreiber et al. came out in a journal and Voeten reblogged it, again with no reactions of his own. This time there was a pretty lively discussion with some commenters objecting to interpretations of the results, but nobody questioning the scientific claims. (The comment thread eventually became occupied by a troll, but that’s another issue.)
More recently, Dan Kahan was pointed to this same research article on “red and blue brains,” blogged it, and slammed it to the wall:
The paper reports the results of an fMRI—“functional magnetic resonance imaging”— study that the authors describe as showing that “liberals and conservatives use different regions of the brain when they think about risk.” . . .
So what do I think? . . . the paper supplies zero reason to adjust any view I have—or anyone else does, in my opinion—on any matter relating to individual differences in cognition & ideology.
Ouch. Kahan writes that Schreiber et al. used a fundamentally flawed statistical approach in which they basically went searching for statistical significance:
There are literally hundreds of thousands of potential “observations” in the brain of each study subject. Because there are constantly varying activation levels going on throughout the brain at all times, one can always find “statistically significant” correlations between stimuli and brain activation by chance. . . .
Schreiber et al. didn’t discipline their evidence-gathering . . . They did initially offer hypotheses based on four precisely defined brain ROIs in “the right amygdala, left insula, right entorhinal cortex, and anterior cingulate.” They picked these, they said, based on a 2011 paper [the one mentioned at the top of the present post] . . .
But contrary to their hypotheses, Schreiber et al. didn’t find any significant differences in the activation levels within the portions of either the amygdala or the anterior cingulate cortex singled out in the 2011 Kanai et al. paper. Nor did Schreiber et al. find any such differences in a host of other precisely defined areas (the “entorhinal cortex,” “left insula,” or “Right Entorhinal”) that Kanai et al. identified as differing structurally among Democrats and Republicans in ways that could suggest the hypothesized differences in cognition.
In response, Schreiber et al. simply widened the lens, as it were, of their observational camera to take in a wider expanse of the brain. “The analysis of the specific spheres [from Kanai et al.] did not appear statistically significant,” they explain, “so larger ROIs based on the anatomy were used next.” . . .
Even after resorting to this device, Schreiber et al. found “no significant differences . . . in the anterior cingulate cortex,” but they did manage to find some “significant” differences among Democrats’ and Republicans’ brain activation levels in portions of the “right amygdala” and “insula.”
And it gets worse. Here’s Kahan again:
They selected observations of activating “voxels” in the amygdala of Republican subjects precisely because those voxels—as opposed to others that Schreiber et al. then ignored in “further analysis”—were “activating” in the manner that they were searching for in a large expanse of the brain. They then reported the resulting high correlation between these observed voxel activations and Republican party self-identification as a test for “predicting” subjects’ party affiliations—one that “significantly out-performs the longstanding parental model, correctly predicting 82.9% of the observed choices of party.”
This is bogus. Unless one “use[s] an independent dataset” to validate the predictive power of “the selected . . . voxels” detected in this way, Kriegeskorte et al. explain in their Nature Neuroscience paper, no valid inferences can be drawn. None.
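Kahan’s point about circular selection is easy to check on pure noise. Here is a minimal sketch (illustrative sizes and a hypothetical median-split classifier, not the study’s actual method or data): select the voxels most correlated with a random group label, then “predict” that label, first on the same subjects and then on fresh ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_voxels = 60, 20_000           # subjects and candidate voxels: pure noise
X = rng.standard_normal((n, n_voxels))
y = rng.integers(0, 2, n)          # "party" label, unrelated to X by construction

# Correlate every voxel with the label, then keep the 10 strongest.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
r = Xc.T @ yc / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
top = np.argsort(np.abs(r))[-10:]

# "Double dipping": score the same subjects with voxels chosen on those subjects.
score = X[:, top] @ np.sign(r[top])
acc_in = np.mean((score > np.median(score)) == y)

# Honest check: the same selected voxels, applied to fresh subjects.
X2 = rng.standard_normal((n, n_voxels))
y2 = rng.integers(0, 2, n)
score2 = X2[:, top] @ np.sign(r[top])
acc_out = np.mean((score2 > np.median(score2)) == y2)

print(acc_in, acc_out)  # in-sample looks impressive; out-of-sample hovers near 0.5
```

The in-sample number looks like a publishable finding; the out-of-sample number is what an independent validation set would have reported.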
Kahan also picks up on one of my favorite points, the way in which multiple-comparisons corrections exacerbate the statistical significance filter:
Pushing a button in one’s computer program to ramp up one’s “alpha” (the p-value threshold, essentially, used to avoid “type 1” errors) only means one has to search a bit harder; it still doesn’t make it any more valid to base inferences on “significant correlations” found only after deliberately searching for them within a collection of hundreds of thousands of observations.
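The arithmetic behind this point is easy to verify by simulation. A minimal sketch (illustrative sizes, numpy only): correlate a couple hundred thousand pure-noise “voxels” with a random group label and count how many clear even a stringent p < 0.001 threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_voxels = 60, 200_000          # subjects and voxel-level tests, all pure noise
X = rng.standard_normal((n, n_voxels))
y = rng.integers(0, 2, n).astype(float)   # random "party" label

# Correlation of every voxel with the label, vectorized (center in place).
X -= X.mean(axis=0)
yc = y - y.mean()
r = X.T @ yc / (np.linalg.norm(X, axis=0) * np.linalg.norm(yc))

# |r| > 0.42 is roughly a two-sided p < 0.001 threshold at n = 60.
hits = int(np.sum(np.abs(r) > 0.42))
print(hits)  # typically a couple hundred "significant" voxels, all of them noise
```

Tightening the threshold thins the list but never empties it; with enough comparisons, some spurious correlations always survive, and those are exactly the ones a search will find.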
Wow. Look what happened. Assuming Kahan is correct here, we all just accepted the claimed results. Nobody actually checked to see whether the claims made sense.
I thought a bit and left the following comment on Kahan’s blog:
Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on arXiv. PLOS-One publishes some good things (so does arXiv) but it’s the place people place papers that can’t be placed. We can deduce that the paper was rejected by Science, Nature, various other biology journals, and maybe some political science journals as well.
I’m not saying you shouldn’t criticize the paper in question, but you can’t really demand better from a paper published in a bottom-feeder journal.
Again, just because something’s in a crap journal doesn’t mean it’s crap; I’ve published lots of papers in unselective, low-prestige outlets. But it’s certainly no surprise if a paper published in a low-grade journal happens to be crap. They publish the things nobody else will touch.
Some of my favorite papers have been rejected many times before finally reaching publication. So I’m certainly not saying that appearance in a low-ranked journal is definitive evidence that a paper is flawed. But, if it’s really been rejected by 3 journals before getting to this point, that could be telling us something.
One of the problems with traditional pre-publication peer review is that it’s secret. What were the reasons that those 3 journals (I’m guessing) rejected the paper? Were they procedural reasons (“We don’t publish political science papers”), or irrelevant reasons (“I just don’t like this paper”), or valid criticisms (such as Kahan’s noted above)? We have no idea.
As we know so well, fatally flawed papers can appear in top journals and get fawning press; the pre-publication peer-review process is far from perfect. Post-publication peer review seems like an excellent idea. But, as the above story indicates, it’s not so easy. You can get lots of “Andy Gelmans” and “Erik Voetens” who just post a paper without reading it, and only the occasional “Dan Kahan” who gives it a detailed examination.
P.S. The above post is unfair in three ways.
1. It’s misleading to call PLOS-One a “crap journal.” Yes, it publishes articles that other journals won’t publish. But that doesn’t make it crap. As various commenters have pointed out, PLOS-One has a different publication model compared to traditional journals. “Different” doesn’t mean “crap.”
2. I have no particular reason to think that the paper above was rejected by others before being submitted to Plos-One.
3. Just because the methods in this paper have problems, that doesn’t mean its conclusions are wrong. The data analysis provides some support for the conclusions, even if the evidence isn’t quite as strong as claimed.
I recognize that this sort of study is difficult and costly, and I have a great respect for researchers who work in this area. If I can contribute some statistical scrutiny, it is not to shoot all this down but to help these resources be used more effectively.
This seems to be the week for us to plug our new books (and I’m eagerly awaiting John and Lynn’s The Gamble), so I thought I’d say a few words about my own new book (at the time of this writing, still available at 40% off if you pre-order from Amazon):
Ashok Rao writes:
Paul Krugman’s pet insult – “Very Serious Person” – is more important to understanding America’s policy failures than most people realize, and goes well beyond economic illiteracy. More than anything, without understanding VSPness (henceforth “vispy”) – one can never comprehend how the Democratic Party screwed up so much in the past five years. . . .
The Democrats are vertically infected with vispiness in a way the Republican party is not. While many often talk about the GOP as a more “hierarchical” party (considering the nature of their primary selection process) – Republicans are freer and more iconoclastic. . . . the only way to become a Republican champion is iconoclastic flair. Rand Paul, Ted Cruz, and even Sarah Palin are hardly “establishment” in the sense of representing prestigious ideas.
Rao argues that leadership in the Republican party is attained via pursuing “fresh and different ideas: ranging all the way from Chris Christie’s loud personality to Paul Ryan’s nutty-nutty budget.”
For the purposes of argument, I will accept Rao’s assessment of the structures of the two parties. The question then arises: Why? After all, basic stereotypes would suggest that Republicans, not Democrats, would be the stodgy ones. One story is that the Democrats are working on “maintaining the ’90s status-quo” (in Rao’s words). But I think it goes back earlier than that. After all, Reagan was an extremist for his time, whereas Clinton was always a moderate.
My theory (which maybe I’ve blogged before, I can’t remember) revolves around the role of the news media. The media are a liberal, Democratic-leaning institution. This can be seen, for example, from surveys of journalists (the last one I saw showed Democratic reporters outnumbering Republicans 2-1) or political endorsements or various other studies. It is my impression that the news media lean left but the public-relations industry leans right.
Anyway, my point here is that the Republican party has a lot of resources, including much of big business, military officers, and organized religion. They don’t need the news media in the way that the Democrats do. And I suspect one reason why Very Serious People are important for Democrats is that they are respected by the media. The Republicans can put together a budget that is mocked by major newspapers and nobody cares. But if the Democrats lose the support of the New York Times, they’re in trouble. Hence the asymmetry in seriousness. One might say that the Republicans are hurt by a similar asymmetry with regard to social issues, in that they can’t afford to lose the support of the religious right or talk radio. This is a bit different, though: the so-called Very Serious People pull the Democrats toward the center, while social-issue groups pull the Republicans to the right.
To put it another way, each party has a coalition of financial interests and political activists that are important in staffing the party and shaping its goals. The Democratic party’s balance has changed in recent decades: with the decline of labor unions, various segments of industry such as high-tech have become important, along with doctors, lawyers, and newspapers. These are all groups that tend to favor the centrist, status-quo policies Krugman might call “very serious.”
I think this could/should be studied more systematically (ideally in some sort of comparative analysis with data from many countries).
Mark Hansen, statistician and professor of journalism at Columbia University, writes that they’re looking to hire a director for a new program teaching journalists about data and computing.
Columbia Journalism School is creating a new post-baccalaureate program aimed at preparing college graduates who have little or no quantitative or computational background to be successful applicants to master’s and doctoral degree programs that require skills in those areas. This is being done in consultation with a consortium of faculty from across the University, many from newly computational fields such as the digital humanities and the computational social sciences, which face the same disconnect between student preparation and emerging data- and computing-based research practices. As far as we know, this program would be the first of its kind in the country.
We came to this project because, as part of the work of the Tow Center for Digital Journalism, we recently started a dual degree program in journalism and computer science. We have found it challenging to recruit young journalists to the program because they find the prospect of immediately enrolling in graduate-level computer science programs daunting. We also know that colleagues across the university want to increase the computational competency of those pursuing graduate study in their disciplines. We are thus leading a group of colleagues in creating a new program to address these needs, which we call Year Zero (so-named to suggest the portion of a graduate degree program that occurs before its first official year).
In the first semester, students will be introduced to a core series of concepts, taught in the context of the artifacts and practices of journalism, the digital humanities, and computational social science, and often with pairs of instructors, one from computer science and one from these other fields. In the second term, students who plan to apply to our dual master’s degree will take computer science courses, while potential candidates for advanced study in other fields will choose from a variety of other computational courses.
We are now seeking a Program Director to work with the Directors of the Tow Center and the Brown Institute to create course offerings for, lead courses in, and help recruit students for Year Zero. This is a full-time, two-year position, renewable based on performance and the success of the program.
This sounds a lot like the Quantitative Methods in the Social Sciences (QMSS) M.A. program we started up a decade and a half ago at Columbia. QMSS was immediately successful and became more so, and I have every expectation that Mark’s program will become a similar success.
Reposted because it’s newsworthy.
And our graphs (based on data from the late 1990s):
Wow! And this made its way into PNAS and NYT.
Symposium magazine (“Where Academia Meets Public Life”) has some fun stuff this month:
Learning to Read All Over Again
What produces better students – reading in print or reading on-line? The answer is both.
The Elusive Quest for Research Innovation
Claude S. Fischer
Much of what is considered “new research” has actually been around for a while. But that does not mean it lacks value.
Science Journalism and the Art of Expressing Uncertainty
It is all too easy for unsupported claims to get published in scientific publications. How can journalists address this?
A Scientist Goes Rogue
Can social media and crowdfunding sustain independent researchers?
Still Waiting for Change
Sylvia A. Allegretto
Economists and policymakers alike are ignoring a huge class of workers whose wages have been effectively frozen for decades.
One Professor’s Spirited Enterprise
A burgeoning distilling program has successfully combined science and business at Michigan State University.
Slow and Fast Learning in the Digital Age
The proliferation of online learning tools requires us to take a closer look at how we think, teach and learn.
The authors of these articles include a professor of German and film studies, a sociologist, a reporter/novelist, an economist, a food writer, and a professor of arts management. Enjoy.
This one (from Brian Nosek, Jeffrey Spies, and Matt Motyl) is so great that all quantitative political scientists (and sociologists, and economists, and public health researchers, . . .) should read it too. Right now.