Paul Romer on The Trouble with Macroeconomics

Paul Romer (of endogenous growth fame) has a new, scathing critique of macroeconomics.

For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as “tight monetary policy can cause a recession.” Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes. A parallel with string theory from physics hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.

Notice the Kuhnian finish: “a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.” This is one of the key features of Sterman & Wittenberg’s model of Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution:

The focal point of the model is a construct called “confidence.” Confidence captures the basic beliefs of practitioners regarding the epistemological status of their paradigm—is it seen as a provisional model or revealed truth? Encompassing logical, cultural, and emotional factors, confidence influences how anomalies are perceived, how practitioners allocate research effort to different activities (puzzle solving versus anomaly resolution, for example), and recruitment to and defection from the paradigm. …. Confidence rises when puzzle-solving progress is high and when anomalies are low. The impact of anomalies and progress is mediated by the level of confidence itself. Extreme levels of confidence hinder rapid changes in confidence because practitioners, utterly certain of the truth, dismiss any evidence contrary to their beliefs. ….

The external factors affecting confidence encompass the way in which practitioners in one paradigm view the accomplishments and claims of other paradigms against which they may be competing. We distinguish between the dominant paradigm, defined as the school of thought that has set the norms of inquiry and commands the allegiance of the most practitioners, and alternative paradigms, the upstart contenders. The confidence of practitioners in a new paradigm tends to increase if its anomalies are less than those of the dominant paradigm, or if it has greater explanatory power, as measured by cumulative solved puzzles. Confidence tends to decrease if the dominant paradigm has fewer anomalies or more solved puzzles. Practitioners in alternative paradigms assess their paradigms against one another as well as against the dominant paradigm. Confidence in an alternative paradigm tends to decrease (increase) if it has more (fewer) anomalies or fewer (more) solved puzzles than the most successful of its competitors.
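The mediation effect in the first passage (extreme confidence damping its own rate of change) is easy to see numerically. Here is a toy sketch of that single mechanism only. It is not the Sterman & Wittenberg model; the functional form and parameters are invented for illustration:

```python
# Toy sketch of the mediation effect: confidence responds to the balance
# of puzzle-solving progress and anomalies, but the response is damped at
# the extremes. NOT the actual Sterman & Wittenberg model; the functional
# form and all parameters are invented for illustration.

def confidence_step(confidence, progress, anomalies, gain=0.5, dt=0.1):
    """One step of confidence adjustment on a 0-1 scale."""
    # Openness to evidence vanishes near total doubt or total certainty.
    openness = confidence * (1.0 - confidence)
    dc = gain * (progress - anomalies) * openness
    return min(1.0, max(0.0, confidence + dc * dt))

# Heavy anomalies move a moderately confident field noticeably...
c_mid = confidence_step(0.5, progress=0.1, anomalies=0.9)
# ...but barely dent a field at near-total certainty.
c_high = confidence_step(0.99, progress=0.1, anomalies=0.9)
```

Practitioners "utterly certain of the truth" sit at the flat end of the openness curve, where no amount of anomaly moves them much.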

In spite of its serious content, Romer’s paper is really quite fun, particularly if you get a little Schadenfreude from watching Real Business Cycles and Dynamic Stochastic General Equilibrium take a beating:

To allow for the possibility that monetary policy could matter, empirical DSGE models put sticky-price lipstick on this RBC pig.

But let me not indulge too much in hubris. Every field is subject to the same dynamics, and could benefit from Romer’s closing advice.

A norm that places an authority above criticism helps people cooperate as members of a belief field that pursues political, moral, or religious objectives. As Jonathan Haidt (2012) observes, this type of norm had survival value because it helped members of one group mount a coordinated defense when they were attacked by another group. It is supported by two innate moral senses, one that encourages us to defer to authority, another which compels self-sacrifice to defend the purity of the sacred.

Science, and all the other research fields spawned by the enlightenment, survive by “turning the dial to zero” on these innate moral senses. Members cultivate the conviction that nothing is sacred and that authority should always be challenged. In this sense, Voltaire is more important to the intellectual foundation of the research fields of the enlightenment than Descartes or Newton.

Get the rest from Romer’s blog.

Politics & growth

Trump pledges 4%/yr economic growth (but says his economists don’t want him to promise it). His economists are right – political tinkering with growth is a fantasy:


Source: Maddison

The growth rate of real per capita GDP in the US, and all leading industrial nations, has been nearly constant since the industrial revolution, at about 2% per year. Over that time, marginal tax rates, infrastructure investments and a host of other policies have varied dramatically, without causing the slightest blip.
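A quick arithmetic sketch of what a constant 2%/year rate implies. The 2% figure is from the text; the rest is plain compounding, with "since roughly 1870" as a loose stand-in for the industrial era:

```python
import math

# The 2%/year figure is from the text; everything else is simple compounding.
growth_rate = 0.02
doubling_time = math.log(2) / math.log(1 + growth_rate)  # roughly 35 years
# Since roughly 1870, that's on the order of four doublings of real
# per capita GDP, regardless of who held office:
doublings = (2016 - 1870) / doubling_time
```

A 4% rate would cut the doubling time in half, which is why promising it amounts to promising a break in a 150-year trend.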

On the other hand, there are ways you can screw up, like having a war or revolution, or failing to provide rule of law and functioning markets. The key is to preserve the conditions that allow the engine of growth – innovation – to function. Trump seems utterly clueless about innovation. His view of the economy is zero-sum: that value is something you extract from your suppliers and customers, not something you create. That view, plus an affinity for authoritarianism and conflict, and a neglect of the Constitution, bodes ill for a Trump economy.

Where's my stuff?

I’ve just acquired a pair of 18″ Dell XPS portable desktop tablets. They’re slick pieces of hardware that make my iPad seem about as sexy as a beer coaster.

They came with Win8 installed. Now I know why everyone hates it. It makes a good first impression with pretty colors and a simple layout. But after a few minutes, you wonder, where’s all my stuff? There’s no obvious way to run a desktop application, so you end up scouring the web for ways to resurrect the Start menu.

It’s bizarre that Microsoft seems to have forgotten the dynamics that made it a powerhouse in the first place. It’s basically this:

Software is a big nest of positive feedbacks, producing winner-take-all behavior. A few key loops are above. The bottom pair is the classic Bass diffusion model – reinforcing feedback from word of mouth, and balancing feedback from saturation (running out of potential customers). The top loop is an aspect of complementary infrastructure – the more users you have on your platform, the more attractive it is to build apps for it; the more apps there are, the more users you get.
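The coupled loops described above can be sketched in a few lines. This is an illustrative toy, not a calibrated model; the parameters (`p`, `q`, `app_gain`) and the form of the app-attractiveness effect are all made up:

```python
# Toy sketch of the loops above: classic Bass diffusion (word of mouth
# reinforcing, saturation balancing) plus a hypothetical complementary
# app-attractiveness loop. Parameters are illustrative, not calibrated.

def simulate(pop=1_000_000, p=0.01, q=0.4, app_gain=0.5, steps=30, dt=1.0):
    """Euler-integrate users and apps; returns the (users, apps) path."""
    users, apps = 0.0, 0.0
    history = []
    for _ in range(steps):
        # Top loop: more apps make the platform more attractive (reinforcing).
        attractiveness = 1.0 + app_gain * apps / (apps + 1000.0)
        # Bottom pair: innovation + imitation, limited by remaining
        # potential adopters (saturation).
        adoption = (p + q * users / pop) * (pop - users) * attractiveness
        new_apps = 0.001 * users  # more users -> more apps get written
        users += adoption * dt
        apps += new_apps * dt
        history.append((users, apps))
    return history

hist = simulate()  # users trace the familiar S-shaped adoption curve
```

Run two of these side by side with shared potential adopters and the one that gets ahead early stays ahead, which is the success-to-the-successful point below.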

There are lots of similar loops involving accumulation of knowledge, standards, etc. More importantly, this is not a one-player system; there are multiple platforms competing for users, each with its own reinforcing loops. That makes this a success-to-the-successful situation. Microsoft gained huge advantage from these reinforcing loops early in the PC game. Being the first to acquire a huge base of users and applications carried it through many situations in which its tech was not the most exciting thing out there.

So, if you’re Microsoft, and Apple throws you a curve ball by launching a new, wildly successful platform, what should you do? It seems to me that the first imperative should be to preserve the advantages conferred by your gigantic user and application base.

Win8 does exactly the opposite of that:

  • Hiding the Start menu means that users have to struggle to find their familiar stuff, effectively chucking out a vast resource, in favor of new apps that are slicker, but pathetically few in number.
  • That, plus other decisions, enrages committed users and causes them to consider switching platforms, when a smoother transition would have kept them comfortably loyal.

This strategy seems totally bonkers.

The Beer-TV loop

We recently discovered – after 8 years without TV – that we actually can get reception. We watched a bit of the Olympics. My sons were amused and amazed by the ads, which they otherwise seldom see.

That led them to postulate the beer-TV feedback loop, which is a self-reinforcing descent into ignorance and drunken sloth: TV watching -> + beer ad viewing -> + beer drinking -> – cognitive capacity, motivation -> TV watching.

The loop makes a cameo appearance in this CLD we dreamed up during a conversation about education, skill and motivation:

It’s a good thing we don’t get Fox, or they’d probably have a lot more to say.

Why ask why?

Why ask why?

Forward causal inference and reverse causal questions

Andrew Gelman & Guido Imbens

The statistical and econometrics literature on causality is more focused on “effects of causes” than on “causes of effects.” That is, in the standard approach it is natural to study the effect of a treatment, but it is not in general possible to determine the causes of any particular outcome. This has led some researchers to dismiss the search for causes as “cocktail party chatter” that is outside the realm of science. We argue here that the search for causes can be understood within traditional statistical frameworks as a part of model checking and hypothesis generation. We argue that it can make sense to ask questions about the causes of effects, but the answers to these questions will be in terms of effects of causes.

I haven’t had a chance to digest this yet, but it’s an interesting topic. It’s particularly relevant to system dynamics modeling, where we are seldom seeking only y = f(x), but rather an endogenous theory where x = g(y) also.
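A minimal sketch of the difference: in a one-way model you estimate y = f(x), but in an endogenous system x and y determine each other, so the observed point is a fixed point of the loop. The coefficients below are arbitrary illustrations:

```python
# Toy illustration of endogeneity: y = f(x) and x = g(y) hold
# simultaneously, so the observed point is a fixed point of the loop.
# Coefficients are arbitrary; convergence requires |a*c| < 1.

def equilibrium(a=0.5, b=2.0, c=0.3, d=1.0, iterations=100):
    """Iterate y = a*x + b and x = c*y + d to their joint fixed point."""
    x = y = 0.0
    for _ in range(iterations):
        y = a * x + b
        x = c * y + d
    return x, y

x_eq, y_eq = equilibrium()
# A regression of y on x alone would miss that x also responds to y.
```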

See also: Causality in Nonlinear Systems

h/t Peter Christiansen.

Greenwash labeling

I like green labeling, but I’m not convinced that, by itself, it’s a theoretically viable way to get the economy to a good environmental endpoint. In practice, it’s probably even worse. Consider Energy Star. It’s supposed to be “helping us all save money and protect the environment through energy efficient products and practices.” The reality is that it gives low-quality information a veneer of authenticity, misleading consumers. I have no doubt that it has some benefits, especially through technology forcing, but it’s soooo much less than it could be.

The fundamental signal Energy Star sends is flawed. Because it categorizes appliances by size and type, an energy hog still gets a star, as long as it’s also big and of a less-efficient design (like a side-by-side refrigerator/freezer). Here’s the size-energy relationship of the federal energy performance standard (which Energy Star fridges must beat by 20%):


Notice that the standard for a 20 cubic foot fridge is anywhere from 470 to 660 kWh/year.
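To see how a size-indexed standard produces this outcome, here’s a hypothetical example. The real federal standards are per-category linear formulas in adjusted volume; the slope and intercept below are invented purely to show the shape of the problem:

```python
# Hypothetical sketch of the size-indexed standard critique. Real federal
# standards are per-category linear formulas in adjusted volume; the slope
# and intercept here are made up to show the shape of the problem.

def max_kwh(volume_cuft, slope=16.0, intercept=350.0):
    """Allowed annual energy use grows linearly with fridge volume."""
    return slope * volume_cuft + intercept

def earns_star(actual_kwh, volume_cuft):
    """Energy Star requires beating the size-indexed standard by 20%."""
    return actual_kwh <= 0.8 * max_kwh(volume_cuft)

# A big fridge using 550 kWh/yr earns a star (limit: 0.8 * 798 = 638.4)...
big_qualifies = earns_star(550, 28)
# ...while a small fridge using 100 kWh/yr LESS fails (limit: 408).
small_qualifies = earns_star(450, 10)
```

The label rewards relative performance within a size class, not absolute energy use, so the star steers buyers toward efficient hogs rather than frugal fridges.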

Continue reading “Greenwash labeling”