Limits to Growth Redux

Every couple of years, an article comes out reviewing the performance of the World3 model against data, or constructing an alternative, extended model based on World3. Here’s the latest:

Abstract
This study investigates the notion of limits to socioeconomic growth with a specific focus on the role of climate change and the declining quality of fossil fuel reserves. A new system dynamics model has been created. The World Energy Model (WEM) is based on the World3 model (The Limits to Growth, Meadows et al., 2004) with climate change and energy production replacing generic pollution and resources factors. WEM also tracks global population, food production and industrial output out to the year 2100. This paper presents a series of WEM’s projections; each of which represent broad sweeps of what the future may bring. All scenarios project that global industrial output will continue growing until 2100. Scenarios based on current energy trends lead to a 50% increase in the average cost of energy production and 2.4–2.7 °C of global warming by 2100. WEM projects that limiting global warming to 2 °C will reduce the industrial output growth rate by 0.1–0.2%. However, WEM also plots industrial decline by 2150 for cases of uncontrolled climate change or increased population growth. The general behaviour of WEM is far more stable than World3 but its results still support the call for a managed decline in society’s ecological footprint.

The new paper puts economic collapse about a century later than it occurred in Limits. But that presumes that the phrase highlighted above – “with climate change and energy production replacing generic pollution and resources factors” – is a legitimate simplification: GHGs are the only pollutant, and energy the only resource, that matters. Are we really past the point of concern over PCBs, heavy metals, etc., with all future chemical and genetic technologies free of risk? Well, maybe … (Note that climate integrated assessment models generally indulge in the same assumption.)

But quibbling over dates is to miss a key point of Limits to Growth: the model, and the book, are not about point prediction of collapse in year 20xx. The central message is about a persistent overshoot behavior mode in a system with long delays and finite boundaries, when driven by exponential growth.

We have deliberately omitted the vertical scales and we have made the horizontal time scale somewhat vague because we want to emphasize the general behavior modes of these computer outputs, not the numerical values, which are only approximately known.

Pindyck on Integrated Assessment Models

Economist Robert Pindyck takes a dim view of the state of integrated assessment modeling:

Climate Change Policy: What Do the Models Tell Us?

Robert S. Pindyck

NBER Working Paper No. 19244

Issued in July 2013

Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.

Freepers seem to think that this means the whole SCC enterprise is GIGO. But this is not a case where uncertainty is your friend. Bear in mind that the deficiencies Pindyck discusses, discounting welfare and ignoring extreme outcomes, create a one-sided bias toward an SCC that is too low. Zero (the de facto internalized SCC in most places) is one number that’s virtually certain to be wrong.

The IAMs that ate the poor

Discounting has long been controversial in climate integrated assessment models (IAMs), with prevailing assumptions less than favorable to future generations.

The evidence in favor of aggressive discounting has generally been macro in nature – observed returns appear to be consistent with discounting of welfare, so that’s what we should do. To swallow this, you have to believe that markets faithfully reveal preferences and that only on-market returns count. Even then, there’s still the problem of confounding of time preference with inequality aversion. Given that this perspective is contradicted by micro behavior, i.e. actually asking people what they want, it’s hard to see a reason other than convenience for its upper hand in decision making. Ultimately, the situation is neatly self-fulfilling. We observe inflated returns consistent with myopia, so we set myopic hurdles for social decisions, yielding inflated short-term returns.

It gets worse.

Back in 1997, I attended a talk on an early version of the RICE model, a regional version of DICE. In an optimization model with uniform utility functions, there’s an immediate drive to level incomes across all the regions. That’s obviously contrary to the observed global income distribution. A “solution” is to use Negishi weights, which apply weights to each region’s welfare in proportion to the inverse of the marginal utility of consumption there. That prevents income leveling, by explicitly assuming that the rich are rich because they deserve it.

This is a reasonable practical choice if you don’t think you can do anything about income distribution, and you’re not worried that it confounds equity with human capital differences. But when you use the same weights to identify an optimal emissions trajectory, you’re baking the inequity of the current market order into climate policy. In other words, people in developed countries are worth 10x more than people in developing countries.
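To make the mechanics concrete, here’s a minimal Python sketch of Negishi weighting for two stylized regions. The consumption figures and log utility are assumptions for illustration, not values from RICE or any other IAM.

    # Minimal sketch of Negishi weighting for two stylized regions
    # (hypothetical numbers, not from any particular IAM).
    # Log utility (unit inequality aversion): marginal utility u'(c) = 1/c.

    consumption = {"rich": 40_000, "poor": 4_000}  # per-capita consumption, $/yr

    def marginal_utility(c, eta=1.0):
        """CRRA marginal utility u'(c) = c**-eta."""
        return c ** -eta

    # Each region's Negishi weight is proportional to the inverse of its
    # marginal utility of consumption (i.e., proportional to consumption when eta = 1).
    raw = {r: 1 / marginal_utility(c) for r, c in consumption.items()}
    total = sum(raw.values())
    negishi = {r: w / total for r, w in raw.items()}

    for r, c in consumption.items():
        # Weighted marginal welfare of an extra dollar in each region: with
        # Negishi weights it is equalized, so the optimizer has no incentive
        # to level incomes across regions.
        print(r, round(negishi[r], 3), round(negishi[r] * marginal_utility(c) * total, 2))

With log utility the weights are simply proportional to consumption, so a marginal dollar is valued the same everywhere; replace them with equal weights and the same dollar is worth ten times as much in the poor region, which is exactly the 10x asymmetry described above.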

Way back when, I didn’t have the words at hand to gracefully ask why it was a good idea to model things this way, but I sure wish I’d had the courage to forge ahead anyway.

The silly thing is that there’s no need to make such inequitable assumptions to model this problem. Elizabeth Stanton analyzes Negishi weighting and suggests alternatives. Richard Tol explored alternative frameworks some time before. And there are still more options, I think.

In the intertemporal optimization framework, one could treat the situation as a game between self-interested regions (with Negishi weights) and an equitable regulator (with equal weights to welfare). In that setting, mitigation by the rich might look like a form of foreign aid that couldn’t be squandered by the elites of poor regions, and thus I would expect deep emissions cuts.

Better still, dump notions of equilibrium and explore the problem with behavioral models, reserving optimization for policy analysis with fair objectives.

Thanks to Ramon Bueno for passing along the Stanton article.

Hair of the dog that bit you climate policy

Roy Spencer on reducing emissions by increasing emissions:

COL: Let’s say tomorrow, evidence is found that proves to everyone that global warming as a result of human released emissions of CO2 and methane, is real. What would you suggest we do?

SPENCER: I would say we need to grow the economy as fast as possible, in order to afford the extra R&D necessary to develop new energy technologies. Current solar and wind technologies are too expensive, unreliable, and can only replace a small fraction of our energy needs. Since the economy runs on inexpensive energy, in order to grow the economy we will need to use fossil fuels to create that extra wealth. In other words, we will need to burn even more fossil fuels in order to find replacements for fossil fuels.

via Planet 3.0

On the face of it, this is absurd. Reverse a positive feedback loop by making it stronger? But it could work, given the right structure – a relative quit smoking by going into a closet to smoke until he couldn’t stand it anymore. Here’s what I can make of the mental model:

Spencer’s arguing that we need to run reinforcing loops R1 and R2 as hard as possible, because loop R3 is too weak to sustain the economy while renewables (or, more generally, non-emitting sources) remain too expensive. R1 and R2 provide the wealth to drive R&D, in a virtuous cycle R4 that activates R3 and shuts down the fossil sector via B2. There are a number of problems with this thinking.

  • Rapid growth around R1 rapidly grows environmental damage (B1) – not only climate, but also local air quality, etc. It also contributes to depletion (not shown), and with depletion comes increasing cost (weakening R1) and greater marginal damage from extraction technologies (not shown). It makes no sense to manage the economy as if R1 exists and B1 does not. R3 looks much more favorable today in light of this.
  • Spencer’s view discounts delays. But there are long delays in R&D and investment turnover, which will permit more environmental damage to accumulate while we wait for R&D.
  • In addition to the delay, R4 is weak. For example, if economic growth is 3%/year, and all technical progress in renewables is from R&D with a 70% learning rate, it’ll take 44 years to halve renewable costs (see the sketch after this list).
  • A 70% learning curve for R&D is highly optimistic. Moreover, a fair amount of renewable cost reductions are due to learning-by-doing and scale economies (not shown), which require R3 to be active, not R4. No current deployment, no progress.
  • Spencer’s argument ignores efficiency (not shown), which works regardless of the source of energy. Spurring investment in the fossil loop R1 sends the wrong signal for efficiency, by depressing current prices.
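Here’s the sketch promised in the R&D bullet above: a back-of-envelope check of the 44-year figure. It assumes the 70% learning curve is a progress ratio (cost falls to 70% of its value for each doubling of the cumulative R&D stock) and that the stock grows at the same 3%/year as the economy; both are interpretations on my part, not Spencer’s numbers.

    import math

    # Back-of-envelope check of the R&D learning arithmetic, under assumed
    # interpretations: cost falls to 70% per doubling of the cumulative R&D
    # (knowledge) stock, and the stock grows at 3%/yr along with the economy.

    growth = 0.03                                # stock growth rate, 1/yr
    progress_ratio = 0.7                         # cost multiplier per doubling
    b = math.log(progress_ratio) / math.log(2)   # learning exponent, ~ -0.51

    # Cost ~ stock**b, so cost halves when the stock grows by a factor 0.5**(1/b).
    stock_ratio = 0.5 ** (1 / b)                 # ~3.8x growth in the stock
    years_to_halve_cost = math.log(stock_ratio) / growth

    print(round(stock_ratio, 2), round(years_to_halve_cost, 1))  # ~3.85, ~45 years

Roughly 45 years under these assumptions, consistent with the ~44-year figure above.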

In truth, these feedbacks are already present in many energy models. Most of those are standard economic stuff – equilibrium, rational expectations, etc. – assumptions which favor growth. Yet among the subset that includes endogenous technology, I’m not aware of a single instance that finds a growth+R&D led policy to be optimal or even effective.

It’s time for the techno-optimists like Spencer and Breakthrough to put up or shut up. Either articulate the argument in a formal model that can be shared and tested, or admit that it’s a nice twinkle in the eye that regrettably lacks evidence.

And so it begins…

A kerfuffle is brewing over Richard Tol’s FUND model (a recent installment). I think this may be one of the first instances of something we’ll see a lot more of: public critique of integrated assessment models.

Integrated Assessment Models (IAMs) are a broad class of tools that combine the physics of natural systems (climate, pollutants, etc.) with the dynamics of socioeconomic systems. Most of the time, this means coupling an economic model (usually dynamic general equilibrium or an optimization approach; sometimes bottom-up technical or a hybrid of the two) with a simple to moderately complex model of climate. The IPCC process has used such models extensively to generate emissions and mitigation scenarios.

Interestingly, the IAMs have attracted relatively little attention; most of the debate about climate change is focused on the science. Yet, if you compare the big IAMs to the big climate models, I’d argue that the uncertainties in the IAMs are much bigger. The processes in climate models are basically physics and many are even subject to experimental verification. We can measure quantities like temperature with considerable precision and spatial detail over long time horizons, for comparison with model output. Some of the economic equivalents, like real GDP, are much slipperier even in their definitions. We have poor data for many regions, and huge problems of “instrumental drift” from changing quality of goods and sectoral composition of activity, and many cultural factors are not even measured. Nearly all models represent human behavior – the ultimate wildcard – by assuming equilibrium, when in fact it’s not clear that equilibrium emerges faster than other dynamics change the landscape on which it arises. So, if climate skeptics get excited about the appropriate centering method for principal components analysis, they should be positively foaming at the mouth over the assumptions in IAMs, because there are far more of them, with far less direct empirical support.

Last summer at EMF Snowmass, I reflected on some of our learning from the C-ROADS experience (here’s my presentation). One of the key points, I think, is that there is a huge gulf between models and modelers, on the one hand, and the needs and understanding of decision makers and the general public on the other. If modelers don’t close that gap by deliberately translating their insights for lay audiences, focusing their tools on decision maker needs, and embracing a much higher level of transparency, someone else will do that translation for them. Most likely, that “someone else” will be much less informed, or have a bigger axe to grind, than the modelers would hope.

With respect to transparency, Tol’s FUND model is further along than many models: the code is available. So, informed tinkerers can peek under the hood if they wish. However, it comes with a warning:

It is the developer’s firm belief that most researchers should be locked away in an ivory tower. Models are often quite useless in unexperienced hands, and sometimes misleading. No one is smart enough to master in a short period what took someone else years to develop. Not-understood models are irrelevant, half-understood models treacherous, and mis-understood models dangerous.

Therefore, FUND does not have a pretty interface, and you will have to make a real effort to let it do something, let alone to let it do something new.

I understand the motivation for this warning. However, it leaves the modeler-consumer gulf gaping. The modelers have their insights into systems, the decision makers have their problems managing those systems, and ne’er the twain shall meet – there just aren’t enough modelers to go around. That leaves reports as the primary conduit of information from model to user, which is fine if your ivory tower is secure enough that you need not care whether your insights have any influence. It’s not even clear that reports are more likely to be understood than models: there have been a number of high-profile instances of ill-conceived institutional press releases and misinterpretation of conclusions and even raw data.

Also, there’s a hint of danger in the very idea of building dangerous models. Obviously all models, like analogies, are limited in their fidelity and generality. It’s important to understand those limitations, just as a pilot must understand the limitations of her instruments. However, if a model is a minefield for the uninitiated user, I have to question its utility. Robustness is an important aspect of model quality; a model given vaguely realistic inputs should yield realistic outputs most of the time, and a model given stupid inputs should generate realistic catastrophes. This is perhaps especially true for climate, where we are concerned about the tails of the distribution of possible outcomes. It’s hard to build a model that’s only robust to the kinds of experiments that one would like to perform, while ignoring other potential problems. To the extent that a model generates unrealistic outcomes, the causes should be traceable; if it’s not easy for the model user to see inside the black box, then I worry that the developer won’t have done enough inspection either. So, the discipline of building models for naive users imposes some useful quality incentives on the model developer.

IAM developers are busy adding spatial resolution, technical detail, and other useful features to models. There’s comparatively less work on consolidation of insights, with translation and construction of tools for wider consumption. That’s understandable, because there aren’t always strong rewards for doing so. However, I think modelers ignore this crucial task at their future peril.

The elusive MAC curve

Marginal Abatement Cost (MAC) curves are a handy way of describing the potential for and cost of reducing energy consumption or GHG emissions. McKinsey has recently made them famous, but they’ve been around, and been debated, for a long time.

McKinsey MAC 2.0

One version of the McKinsey MAC curve
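For readers new to the format, here’s a minimal sketch of how a bottom-up MAC curve like the one above is assembled: sort abatement options by unit cost and cumulate their potential along the horizontal axis. The options and numbers below are purely illustrative, not McKinsey’s.

    # Toy bottom-up MAC curve: sort options by marginal cost ($/tCO2e) and
    # cumulate abatement potential (MtCO2e/yr). Values are illustrative only.

    options = [
        ("lighting retrofits", -30, 200),
        ("vehicle efficiency", -10, 400),
        ("wind power",          20, 600),
        ("CCS",                 60, 500),
    ]

    cumulative = 0.0
    for name, cost, potential in sorted(options, key=lambda o: o[1]):
        cumulative += potential
        # Each step of the curve: width = potential, height = marginal cost.
        print(f"{name:20s} {cost:>4} $/t   cumulative {cumulative:5.0f} Mt/yr")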

Five criticisms are common:

1. Negative cost abatement options don’t really exist, or will be undertaken anyway without policy support. This criticism generally arises from the question begged by the Sweeney et al. MAC curve below: if the leftmost bar (diesel anti-idling) has a large negative cost (i.e. profit opportunity) and is price sensitive, why hasn’t anyone done it? Where are those $20 bills on the sidewalk? There is some wisdom to this, but you have to drink pretty deeply of the neoclassical economic kool aid to believe that there really are no misperceptions, institutional barriers, or non-climate externalities that could create negative cost opportunities.

Sweeney et al. California MAC curve

Sweeney, Weyant et al. Analysis of Measures to Meet the Requirements of California’s Assembly Bill 32

The neoclassical perspective is evident in AR4, which reports results primarily of top-down, equilibrium models. As a result, mitigation costs are (with one exception) positive:

AR4 WG3 TS fig. TS.9, implicit MAC curves

Note that these are top-down implicit MAC curves, derived by exercising aggregate models, rather than bottom-up curves constructed from detailed menus of technical options.

2. The curves employ static assumptions, that might not come true. For example, I’ve heard that the McKinsey curves assume $60/bbl oil. This criticism is true, but could be generalized to more or less any formal result that’s presented as a figure rather than an interactive model. I regard it as a caveat rather than a flaw.

3. The curves themselves are static, while reality evolves. I think the key issue here is that technology evolves endogenously, so that to some extent the shape of the curve in the future will depend on where we choose to operate on the curve today. There are also 2nd-order, market-mediated effects (related to #2 as well): a) exploiting the curve reduces energy demand, and thus prices, which changes the shape of the curve, and b) changes in GHG prices or other policies used to drive exploitation of the curve influence prices of capital and other factors, again changing the shape of the curve.

4. The notion of “supply” is misleading or incomplete. Options depicted on a MAC curve typically involve installing some kind of capital to reduce energy or GHG use. But that installation depends on capital turnover, and therefore is available only incrementally. The rate of exploitation is more difficult to pin down than the maximum potential under idealized conditions (see the sketch after this list).

5. A lot of mitigation falls through the cracks. There are two prongs to this criticism: bottom-up, and top-down. Bottom-up models, because they employ a menu of known technologies, inevitably overlook some existing or potential options that might materialize in reality (with the incentive of GHG prices, for example). That error is, to some extent, offset by over-optimism about other technologies that won’t materialize. More importantly, a menu of supply and end use technology choices is an incomplete specification of the economy; there’s also a lot of potential for changes in lifestyle and substitution of activity among economic sectors. Today’s bottom-up MAC curve is essentially a snapshot of how to do what we do now, with fewer GHGs. If we’re serious about deep emissions cuts, the economy may not resemble what we’re doing now very much in 40 years. Top down models capture the substitution potential among sectors, but still take lifestyle as a given and (mostly) start from a first-best equilibrium world, devoid of mitigation options arising from human, institutional, and market failures.
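Here’s the sketch referenced in criticism #4: a toy turnover calculation showing why MAC “supply” arrives only incrementally. The 20-year lifetime and 90% target are assumptions, not estimates from any particular study.

    # Toy illustration of criticism #4: if an abatement option is only
    # installed when existing capital retires, penetration is limited by the
    # turnover rate (assumed 20-year average lifetime).

    lifetime = 20        # years
    target_share = 0.9   # fraction of the capital stock converted

    share, years = 0.0, 0
    while share < target_share:
        share += (1 - share) / lifetime  # ~1/lifetime of the stock retires per year
        years += 1

    print(years)  # ~45 years to convert 90% of the stock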

To get the greenhouse gas MAC curve right, you need a model that captures bottom-up and top-down aspects of the economy, with realistic dynamics and agent behavior, endogenous technology, and non-climate externalities all included. As I see it, mainstream integrated assessment models are headed down some of those paths (endogenous technology), but remain wedded to the equilibrium/optimization perspective. Others (including us at Ventana) are exploring other avenues, but it’s a hard row to hoe.

In the meantime, we’re stuck with a multitude of perspectives on mitigation costs. Here are a few from the WCI, compiled by Wei and Rose from partner jurisdictions’ Climate Action Team reports and other similar documents:

WCI partner MAC curves

Wei & Rose, Preliminary Cap & Trade Simulation of Florida Joining WCI

The methods used to develop the various partner options differ, so these curves reflect diverse beliefs rather than a consistent comparison. What’s striking to me is that the biggest opportunities (are perceived to) exist in California, which already has (roughly) the lowest GHG intensity and most stringent energy policies among the partners. Economics 101 would suggest that California might already have exploited the low-hanging fruit, and that greater opportunity would exist, say, here in Montana, where energy policy means low taxes and GHG intensity is extremely high.

For now, we have to live with the uncertainty. However, an adaptive strategy for discovering the true potential for mitigation seems easy enough. No matter who you believe, the cost of the initial increment of emissions reductions is either small (<<1% of GDP) or negative, so just put a price on GHGs and see what happens.

Next Generation Climate Policy Models

Today I’m presenting a talk at an ECF workshop, Towards the next generation of climate policy models. The workshop’s in Berlin, but I’m staying in Montana, so my carbon footprint is minimal for this one (just wait until next month …). My slides are here: Towards Next Generation Climate Policy Models.

I created a set of links to supporting materials on del.icio.us.

Update: Workshop materials are now on a web site here.

Questioning Imbalance

Nature has a review, by economist Dieter Helm, of William Nordhaus’ new book, A Question of Balance. I don’t have the book yet, but I’ll certainly check it out. I like the conclusion of the review:

But it may be naive to assume that substituting for environmental systems is so easy. Feedbacks in the system may be such that as climate change unfolds, the return on capital and hence the discount rate falls. Environmental damage may slow or stop economic growth; if that were the case, we would not be much better off in the future. And if we are not so well off in growth terms, Nordhaus’s slower and more measured policy approach may not be so favourable over taking rapid action now. In other words, Stern’s conclusion might be correct, but not his derivation of it – right answer, wrong analysis.

This is a crucial point. Richard Tol pointed out the substitutability problem back in 1994 but it hasn’t really found its way into formalization in mainstream IAMs. The issue of slowing or stopping growth isn’t limited to climate feedback; oil and gas depletion, the ever-present possibility of conflict, and degradation of other resources also impose constraints.

I have to take issue with one part of the review:

Where A Question of Balance has most power is where it is most controversial. Nordhaus tackles Stern head on. Stern’s case for urgent action, which the DICE model shows would be excessively expensive in the short term, rests upon his radical assumption that the time discount rate should be close to zero. This means that we should value people’s consumption equally regardless of whether they live now or in the future. Nordhaus has little time for this moral philosophy: he takes a much more positivistic position, grounded in market evidence and what people actually do, as reflected in market interest rates. The difference between Nordhaus’s optimal climate change policy and Stern’s policy based on a zero discount rate translates into a tenfold difference in the price of carbon. Stern’s discounting approach, Nordhaus argues, gives too low a rate of return and too big a savings rate on climate-stabilizing investments compared with actual macroeconomic data. Not surprisingly, then, his verdict is damning. [emphasis added]

The Stern discounting critique has been done to death. I recently discussed some of the issues here (in particular, see the presentation on discounting and welfare in integrated assessment modeling, based on the primer I wrote for last year’s Balaton meeting). In a nutshell, the discount rate can be decomposed into two terms: pure time preference and inequality aversion. Ramsey showed that, along an optimal growth path,

    interest rate = pure time preference + inequality aversion × growth rate  (r = ρ + ηg)

Stern has been criticized for choosing discounting parameters that aren’t consistent with observed interest and growth rates. That’s true, but let’s not confuse the map with the territory. Stern’s choice is inconsistent with the optimal growth framework, but is the optimal growth framework consistent with reality? Clearly, market interest rates reflect what people actually do in some sense, but they do it in a rather complex institutional setting, rife with opportunities for biases and misperceptions of feedback. Do market interest rates reflect what people actually want? Unfortunately, the micro foundation of macroeconomics is too wobbly to say.

Notice also that the equation above is underdetermined. That is, for realistic growth and interest rates, a variety of pure time preference and inequality aversion assumptions yield equality. Nordhaus, in his original DICE work, preferred 3%/yr pure time preference (no interest in the grandkids) and unit inequality aversion (doubling my income yields the same increment in happiness as doubling a poor African farmer’s income). Dasgupta prefers zero time preference on ethical grounds (as did Ramsey) and higher inequality aversion. The trouble with Nordhaus’ approach is that, unless the new book cites new research, there is no empirical basis for rates of time preference that heavily discount the future. It is difficult to create a realistic simulated context for such long term decisions, but the experimental evidence I’ve seen suggests quite the opposite, that people express some concern for even the distant future.
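A quick numerical illustration of that underdetermination, using a round 2%/year growth rate and parameter pairs loosely labeled after the positions described above; the exact numbers are assumptions for illustration, not anyone’s published calibration.

    # Several (pure time preference, inequality aversion) pairs reproduce
    # roughly the same interest rate via the Ramsey rule r = rho + eta * g.
    # Growth rate and parameter values are illustrative only.

    g = 0.02  # per-capita consumption growth, 1/yr

    parameter_sets = {
        "Nordhaus-style": (0.03,  1.0),  # high time preference, unit inequality aversion
        "Stern-style":    (0.001, 1.0),  # near-zero time preference
        "Dasgupta-style": (0.0,   2.5),  # zero time preference, higher inequality aversion
    }

    for name, (rho, eta) in parameter_sets.items():
        r = rho + eta * g
        print(f"{name:15s} rho={rho:.3f} eta={eta:.1f} -> r={r:.3f}")

The first and third both imply r of about 5%/year, equally consistent with observed returns but ethically very different; the second implies roughly 2%/year, which is why Stern gets criticized for inconsistency with market rates.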

Thus it’s a mistake to call Nordhaus’ approach “positivistic.” That lends undue authority to what should be recognized as an ethical choice. (Again, this is subject to the caveat that Nordhaus might have new evidence. I await the book.)

The GAO's Panel of Economists on Climate

I just ran across a May 2008 GAO report, detailing the findings of a panel of economists convened to consider US climate policy. The panel used a modified Delphi method, which can be good or evil. The eighteen panelists are fairly neoclassical, with the exception of Richard Howarth, who speaks the language but doesn’t drink the Kool-aid.

First, it’s interesting what the panelists agree on. All of the panelists supported establishing a price on greenhouse gas emissions, and a majority were fairly certain that there would be a net benefit from doing so. A majority also favored immediate action, regardless of the participation of other countries. The favored immediate action is rather fainthearted, though. One-third favored an initial price range under $10/tonCO2, and only three favored exceeding $20/tonCO2. One panelist specified a safety valve price at 55 cents. Maybe the low prices are intended to rise rapidly (or at the interest rate, per Hotelling); otherwise I have a hard time seeing why one would bother with the whole endeavor. It’s quite interesting that panelists generally accept unilateral action, which by itself wouldn’t solve the climate problem. Clearly they are counting on setting an example, with imitation bringing more emissions under control, and perhaps also on first-mover advantages in innovation.
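For a sense of what “rising at the interest rate” would imply, here’s a minimal sketch; the $10/tonCO2 starting point and 5%/year rate are assumptions for illustration, not figures from the GAO panel.

    # Hotelling-style price path: a modest starting carbon price rising at an
    # assumed interest rate. Starting price and rate are illustrative only.

    p0, r, base_year = 10.0, 0.05, 2010  # $/tonCO2, 1/yr, start year

    for year in (2020, 2030, 2040, 2050):
        price = p0 * (1 + r) ** (year - base_year)
        print(year, round(price, 1))  # ~$16, $27, $43, $70 per tonCO2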

Ethics, Equity & Models

I’m at the 2008 Balaton Group meeting, where a unique confluence of modeling talent, philosophy, history, activist know-how, compassion and thirst for sustainability makes it hard to go 5 minutes without having a Big Idea.

Our premeeting tackled Ethics, Values, and the Next Generation of Energy and Climate Modeling. I presented a primer on discounting and welfare in integrated assessment modeling, based on a document I wrote for last year’s meeting, translating some of the issues raised by the Stern Review and critiques into plainer language. Along the way, I kept a running list of assumptions in models and modeling processes that have ethical/equity implications.

There are three broad insights:

  1. Technical choices in models have ethical implications. For example, choices about the representation of technology and resource constraints determine whether a model explores a parameter space where “growing to help the poor” is a good idea or not.
  2. Modelers’ prescriptive and descriptive uses of discounting and other explicit choices with ethical implications are often not clearly distinguished.
  3. Decision makers have no clue how the items above influence model outcomes, and do not in any case operate at that level of description.

My list of ethical issues is long and somewhat overlapping. Perhaps that’s partly because I compiled it with no clear definition of ‘ethics’ in mind, but it also reflects the large gray areas that inevitably arise in practice, accentuated by the fact that the issue doesn’t receive much formal attention.