This discussion arose in the context of statistics teaching:

April Galyardt writes:

I’m teaching my first graduate class this semester. It’s intro stats for graduate students in the college of education. Most of the students are first-year PhD students, though there are a number of master’s students who are primarily in-service teachers. The difficulties with teaching an undergraduate intro stats course are still present, in that mathematical preparation and phobia vary widely across the class.

I’ve been enjoying the class and the students, but I’d like your take on an issue I’ve been thinking about. How do I balance teaching the standard methods, like hypothesis testing, that these future researchers have to know because they are so standard, with discussing the problems with those methods (e.g., the p-value as a measure of sample size, and the decline effect, not to mention multiple testing and other common mistakes)? It feels a bit like saying “OK, here’s what everybody does, but really it’s broken,” and then there’s not enough time to talk about other ideas.
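The “p-value as a measure of sample size” complaint can be made concrete with a quick simulation: hold a tiny, practically negligible effect fixed and watch the p-value shrink as n grows. This is an illustrative sketch (Python, with made-up effect size and sample sizes), not anything from the original discussion:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

effect = 0.1  # a tiny, practically negligible true mean difference (assumed)
medians = {}
for n in (100, 1000, 10000):
    # Median p-value of a one-sample t-test over repeated samples of size n
    pvals = [stats.ttest_1samp(rng.normal(loc=effect, size=n), 0.0).pvalue
             for _ in range(200)]
    medians[n] = float(np.median(pvals))
    print(f"n = {n:6d}: median p = {medians[n]:.4f}")
```

The same negligible effect goes from “not significant” to “highly significant” purely because the sample grew, which is the point of the complaint: the p-value is telling you as much about n as about the effect.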

My reply: One approach is to teach the classical methods in settings where they are appropriate. I think some methods are just about never appropriate (for example, so-called exact tests), but in chapters 2-5 of my book with Jennifer, we give lots of applied examples of basic statistical methods. One way to discuss the problems of a method is to show an example where the method makes sense and an example where it doesn’t.

But I imagine the same sort of thing must arise in political science courses all the time. Do any of you have the experience of having to teach something that you think is misleading or wrong? What do you think of the suggested strategy, “show an example where the method makes sense and an example where it doesn’t”?

It’s not about the professor.

It’s about the intersection between the professor, the student, and the content of the course. If it is an empirical or objective question and the prof is rational, he probably agrees with the most persuasive argument but can understand and acknowledge the weaker one. Present both arguments that way, along with the evidence. If it is a matter of taste or values, then the professor’s preferences are irrelevant, and the students should be guided to a place where they confront and wrestle with what they believe and why. There, your job is to open their minds to fresh ideas.

The best compliment I ever got came at the end of a public policy course when the student asked me if I was a Democrat or Republican because “I could not tell.” Awesome. I did my job.

I try hard to avoid the problem altogether. For undergrads – and that’s all I teach – I’ve given up trying to get inferential stats across at any deep level. I instead concentrate on descriptive stats, especially various measures of correlation and basic regression models and how to interpret them. I do discuss significance testing, of course, but only with an eye to getting across what the “p-values” are trying to tell us about how reliable the coefficients are. I also try to introduce bootstrapping – I use the R Commander for the main app and add in the bootstrap routines. I find it a lot easier for students to understand than traditional significance reasoning, and it’s a lot more defensible for most of the data used in political science research. In short, it’s a data analysis course.
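For readers who haven’t seen it, the bootstrap idea being described fits in a few lines. This is a minimal sketch (in Python rather than the R Commander the commenter uses, with fabricated data): resample the observations with replacement, refit, and read an interval off the resampled estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake data for illustration: y is roughly 2 + 0.5*x plus noise
n = 100
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

# Case resampling: redraw (x, y) pairs with replacement and refit each time
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)  # resampled row indices
    boot.append(slope(x[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
est = slope(x, y)
print(f"slope = {est:.2f}, 95% bootstrap interval = [{lo:.2f}, {hi:.2f}]")
```

The appeal for teaching is that the uncertainty interval comes from something students can watch happen – repeated resampling – rather than from a formula whose derivation they have to take on faith.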

Sounds limited, doesn’t it? But, if I can believe the anecdotes, my students are way, way ahead of others in grad school and research positions. They tell me that understanding inference is a lot easier when they have the rest of the project steady under their feet.

I remember a professor I had in undergrad: we could never figure out what his political views were. Every time he said something, he would refute it later. He was a great lecturer, and probably one of my best, because he always made us question everything. I have tried to incorporate his approach in my own teaching.

This is an interesting thread and something I am struggling with now. I recently introduced multiple regression to undergraduate political scientists and a smart student quickly identified the ability of the researcher to manipulate models to produce desired results. We discussed why this was not a good idea (and what reasonable criteria for including variables in models might be) but I thought it was important to indicate that this is in fact a fairly serious problem in the social sciences.
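The student’s worry is easy to demonstrate with a simulation: even when nothing is really going on, trying enough predictors will usually turn up a “significant” one. A hypothetical sketch (all numbers invented; one-at-a-time correlation tests stand in for the model-shopping the student described):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def count_false_positives(n=50, k=20):
    """Regress a pure-noise outcome on each of k noise predictors in turn
    and count how many clear the conventional p < .05 bar."""
    X = rng.normal(size=(n, k))   # candidate predictors: pure noise
    y = rng.normal(size=n)        # outcome: also pure noise
    return sum(stats.pearsonr(X[:, j], y)[1] < 0.05 for j in range(k))

hits = [count_false_positives() for _ in range(200)]
print(f"avg 'significant' noise predictors per 20 tried: {np.mean(hits):.2f}")
print(f"share of datasets with at least one: {np.mean([h > 0 for h in hits]):.0%}")
```

With twenty candidate predictors and a 5% threshold, a researcher fishing for significance will land on something in well over half of all pure-noise datasets – which is exactly why pre-specified criteria for including variables matter.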

My approach, overall, has been that students need to learn the basics of statistical modeling before they can learn the problems with the basics. For undergraduates–who are skeptical (and scared) of this empirical enterprise to begin with–it is just too much to do both. Instead, I hope to give them enough information and inspire enough intrigue that they will take additional courses that will provide them with a more sophisticated understanding.

I feel like you are on the right track, but go farther. Show both sides of the argument. Take a look at one side and what it does and does not have going for it. Afterwards, take a look at the other side and what it does and does not have going for it. This way you will address both sides of the issue without bias, and as a result, the students will be free to believe what they want to believe.

Jacob:

Your proposal sounds great in the context of a substantive political science course. But in a statistics or methods course, all this discussion could take time away from the main material to be covered.

This has been an interesting thread, but the politics pulls it away from something else discussed above: the variance in mathematical background. I’ve taught intro stats for social science undergrads, social science grads, medical residents, undergraduate engineers, and undergrad and grad business students (the biggest challenge), and probably more that I’m forgetting. The most annoying issue is math phobia. But let’s think about the “math” involved in such a course. It is: addition, subtraction, multiplication, and division. Wait! Isn’t that primary school material? Now, we do get a little abstract with probability and hypothesis testing/confidence intervals. The truth is that some students use the “math” issue and the reputation of a required “stats” course as a crutch for low performance. It would be great if we simply relabeled the undergraduate course “data analysis for political science” (or whatever discipline you want) and reduced the pre-course hysteria.

On the other end of the spectrum, is it reasonable to expect social science graduate students to understand the basics of probability, calculus, and matrix algebra? Those programs with “bootcamps” apparently think so. I’m mellower on this than I used to be, and it seems that faculty and PhD students have different expectations of what their program is all about, meaning the folks who teach the *required* intro course usually have to be sensitive. On the other hand, as pointed out in different language above, if these are graduate students who will be working with data as a core part of their lifelong research pursuits, we need to do more than babysit them through Agresti and Finlay (or whatever).

I guess it depends on exactly what the subject is, but as long as it’s nothing especially egregious, just being professional should be enough to teach things you don’t agree with. Most things in political science are contested anyway, so a lot of what we teach is presenting various sides of an argument. Stats courses are a bit of an outlier in this, since some things are not contested but just plain wrong, even if they are fairly common currency. But sometimes you just have to accept your limits and try to do the best for your students. In my case, I accept that I can’t teach every controversy in quantitative social science, so I try to get my undergrads to the point where they can correctly run their own regressions. That gives them a tool that’s useful both in their studies and when they go off into the wide world of work. If they want to know more, further study is necessary, but they at least have a foundation to build on. That might offend some sensibilities, but the raw material I have to work with is social science undergrads, and there’s only so much you can do with them in a one-year course.