As I never tire of saying, I know nothing about macroeconomics, so I’m asking you, the readers, for help.
This indeed seems like a stretch, so I followed the link to the report prepared for the Heritage Foundation by William Beach, Karen Campbell, John Ligon, and Guinevere Nell. (Regular readers will know that I always like to refer to reports by their authors rather than their publication or commissioning agency.)
I searched on “unemployment” and found this:
A simulation of the House Budget Resolution using the U.S. Macroeconomic Model from IHS/Global Insight produced the following results for the period 2012 through 2021 . . . The unemployment rate shrank by an annual average of 2.1 percentage points over the forecast period.
This confused me at first—the unemployment rate certainly can’t shrink by an annual average of 2.1 percentage points for the next 10 years, or it would go negative—but then I realized that what they meant was that they forecast that the unemployment rate, if Ryan’s plan is implemented, would be on average 2.1 percentage points lower than what would happen if his plan were not implemented.
Anyway, the thing I’m wondering is how the simulation worked internally: what was it in the simulation that got the forecasted unemployment rate down to an implausible (according to Krugman) 2.8%? In statistics—even in Bayesian statistics, nowadays!—we’re trained to think that if a model gives an implausible prediction, we should question the model (and also, of course, question our preconceptions that the prediction is implausible). Here’s what Beach et al. write about their simulation:
Congressman Paul Ryan . . . specifically asked the CDA to perform conventional and dynamic budget analysis, or analysis that is based on largely “static” budget models and on economic models with dynamic economic properties. . . .
Center analysts primarily employed the CDA Individual Income Tax Model for its analysis of the effects of tax law changes on a representative sample of taxpayers based on IRS Statistics of Income (SOI) taxpayer microdata.
OK, so far so good. Now let’s see what they say about employment:
The tax and program changes behind the Budget Resolution produce much stronger economic performance when compared to the rate and level of economic activity in the baseline.6 Lower taxes stimulate greater investment, which expands the size of business activity. This expansion fuels a demand for more labor, which enters a labor market that contains workers who themselves face lower taxes. Consequently, significantly higher employment ensues.
OK, and what’s the model that led to these forecasts? I read on, into the description of the dynamic simulation:
Labor Participation Rates. Taxes on labor affect labor-market incentives. Aggregate labor elasticity is a measure of the response of aggregate hours to changes in the after-tax wage rate. These are larger than estimated micro-labor elasticities because they involve not only the intensive margin (more or fewer hours), but also, and even more so, the extensive margin (expanding the labor force). The change in the labor supply variables were adjusted by the macro-labor elasticity of two, which is a middle estimate of the ranges. The adjustment to the add factors allowed the variable to continue to be affected both positively and negatively by other indirect effects. In the final stage of the simulations the add factors were endogenously recalculated in order to take account of the new estimates of the average tax rates mentioned above.
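To make the quoted adjustment concrete, here is a tiny sketch of what scaling labor supply by an aggregate elasticity looks like. To be clear, the function, the variable names, and the 5% wage change are my own illustrative assumptions; only the elasticity of two comes from the report, and I have no idea whether their model applies it this simply.

```python
# Hypothetical sketch of an aggregate labor-supply adjustment of the kind
# described in the report. The names and numbers (other than the elasticity
# of 2) are my assumptions, not taken from the Beach et al. model.

MACRO_LABOR_ELASTICITY = 2.0  # the "middle estimate" cited in the report

def labor_supply_response(baseline_hours, pct_change_after_tax_wage,
                          elasticity=MACRO_LABOR_ELASTICITY):
    """Percent change in aggregate hours = elasticity * percent change
    in the after-tax wage (a log-linear approximation)."""
    return baseline_hours * (1 + elasticity * pct_change_after_tax_wage)

# Example: a 5% rise in after-tax wages with elasticity 2 implies roughly
# a 10% rise in aggregate hours worked.
print(labor_supply_response(100.0, 0.05))
```

Under this kind of rule, a tax cut that raises after-tax wages mechanically produces a large employment response—which may be where much of the forecast improvement comes from.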
Damn! I didn’t catch that. They do write, however, that “Further details of the simulation are available upon request.” So maybe an interested reader can go through and learn more. It would be good to know where inside the model is the parameter that causes the forecast unemployment rate to go down below 3%.
Thinking about uncertainty
I don’t know enough about macroeconomics (hey, I said it again!) to have a sense of the plausibility of a forecast of 2.8% unemployment. But I think I do have something to add to this discussion, as a statistician:
Any forecast has uncertainty. And, except in rare cases, we’d expect a point forecast to be near the center of that uncertainty. Could 2.8% plausibly be near the center of the uncertainty about an unemployment forecast, ten years from now? I don’t think so. I understand that any mechanistic forecast is based on assumptions, so I’ll set aside the possibility of unexpected events that could hurt the economy (hurricanes, earthquakes, oil crises, dramatic changes in policy enacted in future years by Democrats or Republicans). Instead, just suppose everything happens as planned and there are no bumps in the road. Still, there will be some uncertainty now about the unemployment rate in 2021.
Suppose that, on the high end, the unemployment rate in 2021 might be 7%. That’s not a super-high number—some economists argue that the natural rate of unemployment is around 6% in the United States (see, for example, page 4 of these slides which I found in a quick Google search). Considering that the economy might be in a dip in 2021—who knows?—7% seems like a reasonable, even a conservative, upper bound.
OK, if 7% is the upper bound and 2.8% is the point estimate, then the lower bound is . . . hmm, if (7+x)/2 = 2.8, then x = -1.4! A 2.8% point estimate is the center of a forecast interval that goes from -1.4% to 7%. That can’t be right.
OK, sure, the forecast distribution might be skewed. Still, the basic point remains: if the point forecast is 2.8%, you’ll have to have a lot of your uncertainty below 2.8%—and that is not plausible. 2.8% is at the very low end of what anyone thinks might happen, even in a strong economy. In fact, if the unemployment rate ever gets anywhere near 2.8%, I think we can expect a big push to slow the economy down and get inflation under control. Given that unemployment might well be 6% or more in 2021, I don’t see how 2.8% can ever come out as anyone’s point forecast.
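To make the arithmetic above explicit, here is a quick numerical check, including a right-skewed alternative. The lognormal and its parameters are my own illustrative choices, not anyone’s actual forecast distribution:

```python
# A toy check of the interval arithmetic in the text, plus a skewed
# alternative. The lognormal parameters are illustrative assumptions.
import math
import random

# Symmetric case: if 7% is the upper bound and 2.8% the midpoint,
# the implied lower bound is negative.
upper, point = 7.0, 2.8
lower = 2 * point - upper
print(round(lower, 1))  # -1.4: an impossible unemployment rate

# Skewed case: a lognormal with median 2.8% whose upper tail reaches 7%.
# By construction half the mass still lies below 2.8%.
random.seed(1)
mu = math.log(2.8)                     # median of a lognormal is exp(mu)
sigma = (math.log(7.0) - mu) / 1.645   # put 7% at the 95th percentile
draws = [random.lognormvariate(mu, sigma) for _ in range(100_000)]
share_below_point = sum(d < 2.8 for d in draws) / len(draws)
print(round(share_below_point, 2))     # close to 0.5, as for any median
```

So even allowing for skewness, a 2.8% point forecast forces half the probability below 2.8%, which is the implausible part.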
Internal (probabilistic) vs. external (statistical) forecasts
In statistics we talk about two methods of forecasting. An internal forecast is based on a logical model that starts with assumptions and progresses forward to conclusions. To put it in the language of applied statistics: you take x, and you take assumptions about theta, and you take a model g(x,theta) and use it to forecast y. You don’t need any data y at all to make this forecast! You might use past y’s to fit the model and estimate the thetas and test g, but you don’t have to.
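In code, the point that no data y is needed looks like this; the linear form of g and the numbers are purely illustrative:

```python
# A bare-bones "internal" forecast in the notation of the text: no data y
# is needed, only inputs x, assumed parameters theta, and a model g.
# The linear form of g and the numbers are purely illustrative.

def g(x, theta):
    """Model: forecast y as an intercept plus slope * x."""
    intercept, slope = theta
    return intercept + slope * x

theta_assumed = (8.0, -0.5)   # assumptions, not estimates from data
x_input = 4.0                 # some known predictor value
y_forecast = g(x_input, theta_assumed)
print(y_forecast)  # 6.0 -- a forecast made without any observed y
```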
In contrast, an external forecast uses past values of x and y to forecast future y. Pure statistics, no substantive knowledge. That’s too bad, but the plus side is that it’s grounded in data.
A famous example is the space shuttle crash in 1986. Internal models predicted a very low probability of failure (of course! otherwise they wouldn’t have sent that teacher along on the mission). Simple external models said that in about 100 previous launches, 2 had failed, yielding a simple estimate of 2%.
We have argued, in the context of election forecasting, that the best approach is to combine internal and external approaches.
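One standard way to combine the two, sketched here with made-up numbers, is a precision-weighted average that treats each forecast as an estimate with its own standard error. This is just one possible combination rule, not necessarily the one we used in the election-forecasting work:

```python
# Precision-weighted combination of an internal and an external forecast,
# treating each as an independent estimate with its own standard error.
# All the numbers below are made up for illustration.

def combine(est_a, se_a, est_b, se_b):
    """Weight each estimate by its inverse variance (its precision)."""
    w_a, w_b = 1 / se_a**2, 1 / se_b**2
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    se = (w_a + w_b) ** -0.5
    return est, se

internal = (2.8, 2.0)   # model-based forecast with wide uncertainty (made up)
external = (6.0, 1.0)   # data-based forecast (made up)
est, se = combine(*internal, *external)
print(round(est, 2), round(se, 2))
```

The combined estimate is pulled toward whichever forecast is more precise, which is one way the data can discipline an optimistic model.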
Based on the plausibility analysis above, the Beach et al. forecast seems to me to be purely internal. It’s great that they’re using real economic knowledge, but as a statistician I can see what happens when your forecast is not grounded in the data. Short-term, I suggest they calibrate their forecasts by applying them to old data to forecast the past (this is the usual approach). Long-term, I suggest they study the problems with their forecasts and use these flaws to improve their model.
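The “forecast the past” check can be sketched as a rolling backtest: fit on data up to each year, forecast the next year, and compare to what actually happened. The toy model here (the mean of the last three observations) and the series of numbers are stand-ins of my own, not a real unemployment series or anyone’s actual model:

```python
# A minimal sketch of the "forecast the past" calibration check: fit on
# data up to each time point, forecast one step ahead, compare to reality.
# The model and the series below are toy stand-ins, not real data.

def backtest(series, fit, forecast_one):
    """Rolling one-step-ahead backtest; returns (forecast, actual) pairs."""
    results = []
    for t in range(3, len(series)):          # need a few points to fit
        params = fit(series[:t])             # use only the past
        results.append((forecast_one(params, series[:t]), series[t]))
    return results

# Toy model: forecast next value as the mean of the last three observations.
fit = lambda past: None                      # nothing to estimate here
forecast_one = lambda params, past: sum(past[-3:]) / 3

series = [5.0, 4.6, 4.6, 5.8, 6.0, 5.5, 5.1, 4.6, 4.6, 5.8, 9.3]
pairs = backtest(series, fit, forecast_one)
errors = [f - a for f, a in pairs]
print(round(sum(abs(e) for e in errors) / len(errors), 2))  # mean abs error
```

If a model’s backtest errors are large or systematically one-sided, that’s a direct measure of how far to trust its forward-looking point forecasts.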
When a model makes bad predictions, that’s an opportunity to do better.
P.S. Yes, Democrats also have been known to promote optimistic forecasts!