Modeling the Ryan proposal

Thanks to Pete for pointing out that there is modeling behind the Ryan proposal after all. Macroeconomic Advisers provides the kind of in-depth scrutiny of the model results that I love, in The Economic Effects of the Ryan Plan: Assuming the Answer?

You really should read it, but here are some of the juicier excerpts:

Peek-a-boo

There were actually two sets of results. The first showed real GDP immediately rising by $33.7 billion in 2012 (or 0.2%) relative to the baseline, with total employment rising 831 thousand (or 0.6%) and the civilian unemployment rate falling a stunning 2 percentage points, a decline that persisted for a decade. (This path for the unemployment rate is labeled “First Result” in the table.) The decline in the unemployment rate was greeted — quite correctly, in our view — with widespread incredulity. Shortly thereafter, the initial results were withdrawn and replaced with a second set of results that made no mention of the unemployment rate, but not before we printed a hardcopy! (This is labeled “Second Result” in the table.)
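A rough sanity check shows why the incredulity was warranted. Assuming a civilian labor force of roughly 154 million (about the 2012 level; my figure, not MA's) and holding the labor force fixed, an 831 thousand gain in employment moves the unemployment rate by only about half a point:

$$\Delta u \approx -\frac{\Delta E}{L} = -\frac{0.831\text{M}}{154\text{M}} \approx -0.5 \text{ percentage points,}$$

nowhere near the 2 point decline in the first set of results.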

Multiplier Mischief

The simulation shows real federal non-defense purchases down by $37.4 billion in 2012, but real GDP up by $33.7 billion, so the short-run “fiscal multiplier” is negative.[11] As noted above, that analysis was prepared using the GI model of the US economy. We are not intimately familiar with this model but have the impression it is a structural macro model in which near-term movements in GDP are governed by aggregate demand while long-term trends in output are determined by the labor force, the capital stock, and total factor productivity. Obviously we can’t object to this paradigm, since we rely on it, too.

However, precisely because we are so familiar with the characteristics of such systems, we doubt that the GI model, used as intended, shows a negative short-run fiscal multiplier. Indeed, GI’s own discussion of its model makes clear the system does, in fact, have a positive short-run fiscal multiplier.[12] This made us wonder how and on what grounds analysts at Heritage manipulated the system to produce the results reported.
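For the record, the implied short-run multiplier from the quoted figures is

$$\frac{\Delta \text{GDP}}{\Delta G} = \frac{+\$33.7\text{B}}{-\$37.4\text{B}} \approx -0.9,$$

whereas GI's own documentation, as MA notes, implies a positive value.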

Crowding Out Credibility

So, as we parsed the simulation results, we couldn’t see what was stimulating aggregate demand at unchanged interest rates and in the face of large cuts in government consumption and transfer payments…until we read this:

“Economic studies repeatedly find that government debt crowds out private investment, although the degree to which it does so can be debated. The structure of the model does not allow for this direct feedback between government spending and private investment variables. Therefore, the add factors on private investment variables were also adjusted to reflect percentage changes in publicly held debt (MA italics).”

In sum, we have never seen an investment equation specified this way and, in our judgment, adjusting up investment demand in this manner is tantamount to assuming the answer. If Heritage wanted to show more crowding in, it should have argued for a bigger drop in interest rates or more interest-sensitive investment, responses over which there is legitimate empirical debate. These kinds of adjustments would not have reversed the sign of the short-run fiscal multiplier in the manner that simply adjusting up investment spending did.
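To see how such an adjustment can reverse the multiplier's sign, here is a deliberately crude sketch; it is not the GI model, and the multiplier and add-factor strength are invented for illustration:

```python
# A toy illustration (not the GI model) of how an ad hoc add factor tied
# to public debt can flip the sign of the short-run fiscal multiplier.
# All coefficients here are invented for exposition.

def gdp_response(dG, debt_add_factor_strength=0.0, multiplier=1.4):
    """Change in real GDP ($B) from a change dG in government purchases.

    multiplier               : conventional positive short-run fiscal multiplier
    debt_add_factor_strength : strength of the assumed debt -> investment
                               add factor (0 = model used as intended)
    """
    demand_effect = multiplier * dG  # standard demand channel

    # The Heritage-style adjustment: lower spending means lower public
    # debt, and the add factor mechanically boosts private investment in
    # proportion -- with no change in interest rates doing the work.
    d_debt = dG
    investment_boost = -debt_add_factor_strength * d_debt

    return demand_effect + investment_boost

dG = -37.4  # $B cut in real federal non-defense purchases, as in the simulation

print(gdp_response(dG))       # as intended: GDP falls (about -52 $B)
print(gdp_response(dG, 2.3))  # with the add factor: GDP "rises" (about +34 $B)
```

Nothing disciplines the chosen strength of the add factor, so any desired sign and size of the GDP response can be produced.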

Hilarious Housing?

In the simulation, the component of GDP that initially increases most, both in absolute and in percentage terms, is residential investment. This is really hard to fathom. There’s no change in pre-tax interest rates to speak of, hence the after-tax mortgage rate presumably rises with the decline in marginal tax rates even as the proposed tax reform curtails some or all of the mortgage interest deduction. …

The list of problems goes on. Macroeconomic Advisers' bottom line:

In our opinion, however, the macroeconomic analysis released in conjunction with the House Budget Resolution is not relevant to the coming discussion. We believe that the main result — that aggressive deficit reduction immediately raises GDP at unchanged interest rates — was generated by manipulating a model that would not otherwise produce this result, and that the basis for this manipulation is not supported either theoretically or empirically. Other features of the results — while perhaps unintended — seem highly problematic to us and seriously undermine the credibility of the overall conclusions.

This is really unfortunate, both for the policy debate and the modeling profession. Using models as arguments from authority, while manipulating them to produce propagandistic output, poisons the well for all rational inputs to policy debates. Unfortunately, there’s a long history of such practice, particularly in economic forecasting:

Not surprisingly, the forecasts produced by econometric models often don’t square with the modeler’s intuition. When they feel the model output is wrong, many modelers, including those at the “big three” econometric forecasting firms—Chase Econometrics, Wharton Econometric Forecasting Associates, and Data Resources—simply adjust their forecasts. This fudging, or add factoring as they call it, is routine and extensive. The late Otto Eckstein of Data Resources admitted that their forecasts were 60 percent model and 40 percent judgment (“Forecasters Overhaul Models of Economy in Wake of 1982 Errors,” Wall Street Journal, 17 February 1983). Business Week (“Where Big Econometric Models Go Wrong,” 30 March 1981) quotes an economist who points out that there is no way of knowing where the Wharton model ends and the model’s developer, Larry Klein, takes over. Of course, the adjustments made by add factoring are strongly colored by the personalities and political philosophies of the modelers. In the article cited above, the Wall Street Journal quotes Otto Eckstein as conceding that his forecasts sometimes reflect an optimistic view: “Data Resources is the most influential forecasting firm in the country… If it were in the hands of a doom-and-gloomer, it would be bad for the country.”

– John Sterman, A Skeptic’s Guide to Computer Models
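Mechanically, add factoring is nothing exotic: the published forecast is simply the model's output plus a judgmental constant chosen by the forecaster,

$$\hat{y}_t^{\text{published}} = \hat{y}_t^{\text{model}} + a_t,$$

and nothing in the procedure constrains the sign or size of $a_t$.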

As a historical note, GI – Global Insight, maker of the model used by Heritage CDA for the Ryan analysis – is the product of a Wharton/DRI merger, though it appears that the use of the GI model may have been outside Global Insight's own purview in this case.

What’s the cure? I’m not sure there is one, as long as people are cherry-picking plausible-sounding arguments to back up their preconceived notions or narrow self-interest. But assuming that some people do want intelligent discourse, it’s fairly easy to get it by having high standards for model transparency and quality. This means more than peer review, which often entails only weak checks of the face validity of output. It means actual interaction with models, supported by software that makes it easy to identify causal relationships and perform tests under extreme conditions. It also means archiving models and results for long-term replication and quality improvement. And it requires that modelers invest more in testing the limits of their own insights, communicating what they learn and the tools they use, and fostering understanding of principles that help raise the average level of debate.
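As a concrete (and deliberately trivial) example of what such interaction looks like, here is a sketch of an extreme-conditions test; the toy model, numbers, and thresholds are all hypothetical:

```python
# A minimal sketch of an extreme-conditions test, one of the model-quality
# practices argued for above. The toy model, numbers, and thresholds are
# all invented for illustration.

def simulate_gdp(gov_purchases, multiplier=1.4, baseline=15_000.0):
    """Toy GDP path ($B): a baseline plus a demand response to a sequence
    of government-purchase shocks (one entry per year)."""
    return [baseline + multiplier * g for g in gov_purchases]

def test_extreme_conditions():
    # Extreme input: an enormous, sustained spending cut.
    path = simulate_gdp([-3_000.0] * 10)

    # A sane demand-driven model should show GDP falling, not rising,
    # when spending is cut at unchanged interest rates.
    assert all(gdp < 15_000.0 for gdp in path), "negative multiplier?"

    # And no input, however extreme, should drive output below zero.
    assert all(gdp >= 0.0 for gdp in path), "nonsense: negative GDP"

test_extreme_conditions()
print("extreme-conditions checks passed")
```

A model that fails checks like these under ordinary use, or passes them only after off-model adjustments, is telling you something about the credibility of its results.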
