Do social scientists and others with little mastery of mathematics find research findings more persuasive when you just add a little math? Yes, suggests this article by Kimmo Eriksson in a recent issue of Judgment and Decision Making. Eriksson gave 200 participants abstracts of two published papers. Half of these abstracts were randomly enriched with a sentence and equation from an entirely unrelated paper in mathematics (“A mathematical model (*T*_{PP} = *T*_{0} − *fT*_{0}*d*_{f}^{2} − *fT*_{P}*d*_{f}) is developed to describe sequential effects.”) The respondents were then asked to judge the quality of the research.

The bottom line of the findings is that those with degrees in math and the sciences were not more impressed by the abstract with the nonsense sentence, but those with degrees in the humanities and social sciences and (disturbingly) the medical sciences were.

One can interpret this finding as stressing the need for more math training in the social sciences. Or one could emphasize that mathematically oriented articles have an undue advantage in the peer review process. These conclusions are not mutually exclusive. More math training could lead to less deference to pointless math. Unfortunately, the experiment does not allow us to differentiate between the humanities and various social sciences so we can’t quite be sure who is being fooled here (the mathematically minded economists or the historians?). I would like to see this replicated with a more homogeneous group of scholars evaluating scholarship in their area of expertise.

P.S. My description of the participants as “scholars” is misleading. As pointed out in the comments, the participants were recruited via Amazon Turk and mostly have master’s degrees. An interesting study, but at best a pilot study for drawing deeper conclusions about academia (as per the last sentences of my post).

{ 5 comments }

Participants in the study were 200 American adults on Amazon Turk who claimed to have postgraduate degrees, and who were to receive fifty cents for participating in what they were told would take five minutes of effort. One might speculate that differences between disciplines in how persuasive their members found papers with mathematical formulas could be a result of how badly people in particular disciplines needed fifty cents. Before we change the amount of math training in the social sciences or modify the peer review process, we might want a little confirmation of the findings with a somewhat more reliable sample. Maybe they could re-run the test on 200 American undergraduate students who receive extra course credit for participating, because we know that’s the gold standard for social science research.

Awesome (in a depressing sort of way)

I’m not even sure if this means that mathematical articles or even mathematical abstracts (since that’s all they saw) have undue influence. If the only information you have for judging an article is the abstract and one abstract appears to have developed a model while the other didn’t, it would seem reasonable for people–especially social scientists who like models–to think that the “mathy” abstract might be a bit more rigorous if only because the model might indicate that the authors of the abstract made their assumptions and the implications of those assumptions explicit and clear. Even though I’m not a modeler, I don’t see why this reflects poorly on any discipline.

If you look at figure 1, you’ll see that the Math/Science people didn’t really think the article was all that interesting to begin with relative to the other respondents. This might indicate that those with science degrees were not interested in the topic addressed in the abstracts since they dealt with social science issues. Thus, whether it had a model or not, it didn’t look like what they knew to be rigorous science–e.g. controlled experiments or mathematical theorizing–so they rated both abstracts poorly.

Hmm…. On second look, I think I misread figure 1. But the broader point that the science/math people didn’t find the topic or approach appealing still remains. In addition, if they’re not familiar with social science work, they might not see why having a model in a paper might make it more rigorous than one without it.

Anecdotally, I’ve found this to be true. For example, a couple of political scientists I’ve encountered (I’m a mathematician) are easily snowed by a helping of mathematics, regardless of its quality or relevance. Moreover, they’ve pretended to understand said mathematics when it’s obvious they haven’t a clue what it’s about. I would be willing to bet that there is a significant chunk of social scientists (yes, Ph.D.s) for whom this would be true. [Note: I'm not saying all, but some.] Of course, there’s always Lang versus Huntington….
