# How to teach things we don’t agree with?

November 18, 2012

This discussion arose in the context of statistics teaching:

April Galyardt writes:

I’m teaching my first graduate class this semester. It’s intro stats for graduate students in the college of education. Most of the students are first-year PhD students, though there are also a number of master’s students who are primarily in-service teachers. The difficulties with teaching an undergraduate intro stats course are still present, in that mathematical preparation and math phobia vary widely across the class.

I’ve been enjoying the class and the students, but I’d like your take on an issue I’ve been thinking about. How do I balance teaching the standard methods, like hypothesis testing, that these future researchers have to know because they are so standard, with discussing the problems with those methods (e.g., the p-value acting as a measure of sample size, the decline effect, multiple testing, and other common mistakes)? It feels a bit like saying “Ok, here’s what everybody does, but really it’s broken,” and then there’s not enough time to talk about other ideas.
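The “p-value as a measure of sample size” point can be made concrete with a short simulation. The sketch below (in Python rather than the R typically used in such a course; the function name and the chosen effect size of 0.05 are my own illustrative assumptions, not from the post) holds a tiny, practically negligible true effect fixed and varies only the sample size: the two-sided p-value collapses toward zero as n grows, so “significance” here is telling you mostly about n.

```python
import math

def two_sided_p(effect, n):
    """Two-sided p-value for a one-sample z-test of a standardized
    effect `effect` with sample size n (known unit variance)."""
    z = effect * math.sqrt(n)
    # Standard normal CDF via the error function
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 2 * (1 - cdf)

# A fixed, tiny "true" effect: only n changes, yet p plummets.
for n in [100, 1000, 10000, 100000]:
    print(n, two_sided_p(0.05, n))
```

With n = 100 the result is nowhere near significant; by n = 10,000 the same trivial effect is “highly significant.” Nothing about the substantive importance of the effect has changed.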

My reply: One approach is to teach the classical methods in settings where they are appropriate. I think some methods are just about never appropriate (for example, so-called exact tests), but in chapters 2-5 of my book with Jennifer, we give lots of applied examples of basic statistical methods. One way to discuss the problems of a method is to show an example where the method makes sense and an example where it doesn’t.

But I imagine the same sort of thing must arise in political science courses all the time. Do any of you have the experience of having to teach something that you think is misleading or wrong? What do you think of the suggested strategy, “show an example where the method makes sense and an example where it doesn’t”?

Phil C. November 18, 2012 at 7:22 pm

It’s about the intersection between the professor, the student, and the content of the course. If it is an empirical or objective question and the prof is rational, he probably agrees with the most persuasive argument, but can understand and acknowledge the weaker argument. Present both arguments that way, along with the evidence. If it is a matter of taste or values, then the professor’s preferences are irrelevant and the students should be guided to a place where they confront and wrestle with what they believe and why. In that case, your job is to open their minds to fresh ideas.

The best compliment I ever got came at the end of a public policy course when the student asked me if I was a Democrat or Republican because “I could not tell.” Awesome. I did my job.

Tracy Lightcap November 18, 2012 at 7:27 pm

I try hard to avoid the problem altogether. For undergrads – and that’s all I teach – I’ve given up trying to get inferential stats across at any deep level. Instead I concentrate on descriptive stats, especially various measures of correlation and basic regression models and how to interpret them. I do discuss significance testing, of course, but only with an eye to getting across what the p-values are trying to tell us about how reliable the coefficients are. I also try to introduce bootstrapping – I use R Commander as the main app and add in the bootstrap routines. I find that this makes traditional significance reasoning a lot easier for the students to understand, and it’s a lot more defensible for most of the data used in political science research. In short, it’s a data analysis course.
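The bootstrap idea mentioned above can be shown in a few lines without any special software. This is a minimal sketch (in Python rather than the R Commander setup the comment describes; the toy data are invented for illustration): resample the observed data with replacement many times, recompute the statistic each time, and read a confidence interval off the percentiles of the resampled statistics.

```python
import random
import statistics

random.seed(1)

# Toy data: a small sample whose mean we want to assess.
data = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 2.2, 3.1, 2.6, 3.9]

# Resample with replacement many times, recording the statistic each time.
boot_means = []
for _ in range(5000):
    resample = [random.choice(data) for _ in data]
    boot_means.append(statistics.mean(resample))

# Percentile 95% interval: the middle 95% of the bootstrap distribution.
boot_means.sort()
lo = boot_means[int(0.025 * len(boot_means))]
hi = boot_means[int(0.975 * len(boot_means))]
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The appeal pedagogically is that the whole procedure is visible: students can see exactly where the uncertainty estimate comes from, rather than trusting a formula.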

Sounds limited, doesn’t it? But, if I can believe the anecdotes, my students are way, way ahead of others in grad school and research positions. They tell me that understanding inference is a lot easier when they have the rest of the project steady under their feet.

Bob November 18, 2012 at 8:36 pm

I remember a professor I had in undergrad: we could never figure out what his political views were. Every time he said something, he would refute it later. He was a great lecturer – probably one of the best I had – because he always made us question everything. I have tried to incorporate his approach in my own teaching.

Scott November 18, 2012 at 11:17 pm

This is an interesting thread and something I am struggling with now. I recently introduced multiple regression to undergraduate political scientists and a smart student quickly identified the ability of the researcher to manipulate models to produce desired results. We discussed why this was not a good idea (and what reasonable criteria for including variables in models might be) but I thought it was important to indicate that this is in fact a fairly serious problem in the social sciences.
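The student’s worry can itself be turned into a demonstration. The sketch below is a hypothetical illustration of my own (in Python; the specific numbers of predictors and trials are arbitrary choices): generate an outcome and a batch of candidate predictors that are all pure noise, then see how often a researcher who tries every predictor finds at least one “significant” one. With 20 candidate predictors at the 5% level, roughly 1 − 0.95²⁰ ≈ 64% of such “studies” turn up a spurious result.

```python
import math
import random

random.seed(42)

n = 100       # observations per "study"
k = 20        # candidate predictors, all pure noise
trials = 500  # simulated "studies"

def pearson_r(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Approximate two-sided 5% cutoff for |r| under the null (large-n normal approx.)
cutoff = 1.96 / math.sqrt(n)

hits = 0
for _ in range(trials):
    y = [random.gauss(0, 1) for _ in range(n)]
    # "Try" every candidate predictor and keep any that looks significant.
    found = any(
        abs(pearson_r([random.gauss(0, 1) for _ in range(n)], y)) > cutoff
        for _ in range(k)
    )
    hits += found

print(f"Share of studies with at least one 'significant' predictor: {hits / trials:.2f}")
```

Seeing that the majority of pure-noise datasets yield a publishable-looking coefficient makes the case for pre-specified models far more vividly than an abstract warning about multiple testing.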

My approach, overall, has been that students need to learn the basics of statistical modeling before they can learn the problems with the basics. For undergraduates–who are skeptical (and scared) of this empirical enterprise to begin with–it is just too much to do both. Instead, I hope to give them enough information and inspire enough intrigue that they will take additional courses that will provide them with a more sophisticated understanding.

Jacob November 19, 2012 at 12:17 am

I feel like you are on the right track, but I would go further. Show both sides of the argument: take a look at one side and what it does and does not have going for it, then do the same for the other side. This way you address both sides of the issue without bias, and the students are free to believe what they want to believe.

Andrew Gelman November 19, 2012 at 1:40 pm

Jacob:

Your proposal sounds great in the context of a substantive political science course. But in a statistics or methods course, all this discussion could take time away from the main material to be covered.