“Are Wisconsin Public Employees Underpaid?”

by Andrew Gelman on February 22, 2011

in Methodology, Political Economy

Amy Cohen points me to this blog by Jim Manzi, who writes:

Ezra Klein and a variety of other thoughtful liberal bloggers have been pointing to an Economic Policy Institute analysis that they claim demonstrates that Wisconsin’s public employees, even after adjusting for benefits and hours worked, face a “compensation penalty of 5% for choosing to work in the public sector.” Unfortunately, when you get under the hood, the study shows no such thing. . . . reading the actual paper by Jeffrey H. Keefe is instructive. Keefe took a representative sample of Wisconsin workers, and built a regression model that relates “fundamental personal characteristics and labor market skills” to compensation, and then compared public to private sector employees, after “controlling” for these factors. As far as I can see, the factors adjusted for were: years of education; years of experience; gender; race; ethnicity; disability; size of organization where the employee works; and, hours worked per year. Stripped of jargon, what Keefe asserts is that, on average, any two individuals with identical scores on each of these listed characteristics “should” be paid the same amount. . . .

I’m having difficulty extracting the main points from Manzi’s blog without just pasting it all in, so let me say that Manzi’s key idea is that if private sector employees in Wisconsin are being paid on average 5% more per hour than comparable public sector employees there, it’s not at all clear that this is a “compensation penalty” rather than simply that the private sector workers are worth 5% more, perhaps because they’re doing something more valuable or perhaps because they’re more qualified in ways other than measured by Keefe’s control variables.
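I haven’t seen Keefe’s exact specification, but the general recipe being debated here is familiar: regress (log) compensation on a public-sector indicator plus the control variables, and read the coefficient on the indicator as the “penalty.” Here’s a minimal sketch on simulated data (all variable names and numbers are made up for illustration, not taken from Keefe’s paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated workers: years of education, years of experience,
# and a public-sector indicator (roughly 20% of the sample).
educ = rng.normal(14, 2, n)
exper = rng.uniform(0, 30, n)
public = rng.binomial(1, 0.2, n)

# Simulated log wages with a built-in 5% public-sector gap
# (-0.05 on the log scale) plus noise.
log_wage = 1.5 + 0.08 * educ + 0.01 * exper - 0.05 * public + rng.normal(0, 0.3, n)

# OLS via least squares: the coefficient on `public` is the estimated
# "compensation penalty" after controlling for the listed covariates.
X = np.column_stack([np.ones(n), educ, exper, public])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(f"estimated public-sector gap: {beta[3]:.3f}")  # close to -0.05
```

The sketch recovers the gap here only because the simulation builds it in and the controls are exactly the right ones, which is precisely the assumption Manzi is questioning: in real data, the coefficient on `public` absorbs any systematic public/private difference the controls miss.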

Manzi concludes:

The whole question – as is obvious even to untrained observers – is whether or not there are material systematic differences between the public and private employee that are not captured by the list of coefficients in his regression model. His statistical tests simply assume that there are not.

I don’t know if Wisconsin’s public employees are underpaid, overpaid, or paid just right. But this study sure doesn’t answer the question.

That sounds about right to me. But I don’t think this sort of study is completely useless either. (Just to be clear: I haven’t actually followed the link to read Keefe’s report, so in writing about this study, I’m really writing about this study as described by Manzi.)

From one perspective, sure, I agree that a statistical analysis of the sort described above based on observational data can never be a true direct comparison. (Not to mention the difficulty of classifying people like me who work in the quasi-public sector.) But if you take things from the other direction, this sort of study can be valuable.

What do I mean by “the other direction,” you might ask? I mean, suppose you start, as people do, with raw numbers: Salary plus benefits = X% of the state budget. The state has Y number of employees. Average income of all Wisconsinites is Z. Then you start adjusting for hours worked, ages of the employees, etc etc, and . . . you end up with Keefe’s analysis.

My point is, people are going to make some comparisons. Comparisons aren’t so dumb as long as you realize their limitations. And once you start to compare, it makes sense to try to compare comparable cases. Taking Manzi’s criticism too strongly would leave us in the position of allowing raw numbers, and allowing pure unblemished randomized experiments, but nothing in between.

In summary:

1. Manzi’s right to emphasize that a simplistic interpretation of regression results can be misleading.

2. Regressions of observational data can be a good way of going beyond raw comparisons and averages.

Some of this discussion reminds me of the literature on the wage premium for risk, where people run regressions on salaries for comparable jobs in order to estimate how much people need to be paid to risk death or injury. My reading is that these studies can’t be trusted: if you’re not careful, you can easily estimate the value of life to be negative—after all, the riskiest jobs (lumberjack, etc.) tend to pay poorly, while the best-paying jobs (being Bill Gates, etc.) are pretty safe gigs. With care, you can get those regressions to give reasonable coefficients in the range of $1 million per life, but I don’t really see these numbers as meaning anything at all; they’re just the results of fiddling with the models until something reasonable comes out. I’m not saying that the people who do these analyses are cheating, just that they want reasonable results but the models seem too open-ended to be a good measure of risk premiums.
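The sign flip is easy to reproduce with simulated data. In this sketch (a hypothetical setup, not any published study’s model), the true risk premium is positive, but higher-skill workers sort into safer jobs; omit skill from the regression and the estimated “premium” comes out negative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical sorting: higher-skill workers end up in safer jobs.
skill = rng.normal(0, 1, n)
risk = np.clip(0.5 - 0.4 * skill + rng.normal(0, 0.3, n), 0, None)

# True wage equation: a genuine positive risk premium (+0.2),
# but skill matters far more for pay.
wage = 10 + 1.0 * skill + 0.2 * risk + rng.normal(0, 0.5, n)

def ols_coefs(X, y):
    """Least-squares coefficients with an intercept prepended."""
    Xd = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0]

# Naive regression of wage on risk alone: skill is an omitted variable,
# so the risk coefficient picks up the negative risk-skill correlation.
naive = ols_coefs(risk, wage)[1]

# Controlling for skill recovers a positive risk coefficient.
controlled = ols_coefs(np.column_stack([risk, skill]), wage)[1]

print(f"naive risk coefficient:      {naive:.2f}")   # negative
print(f"controlled risk coefficient: {controlled:.2f}")  # positive
```

The point of the toy example is Gelman’s: whether the estimate looks “reasonable” depends entirely on which confounders you happen to include, so a model open-ended enough to fit can be tuned until the answer looks right.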

P.S. Ezra Klein replies, agreeing with Manzi’s statistical critique but writing that “the burden of proof is on those who say Wisconsin’s public employees make too much money.” I’m sure people can disagree about where the burden of proof should fall, but I think Klein’s point is similar to mine, that if you want to claim that public employees are overpaid, that claim will start with a comparison of some sort, and then you have to go from there.

