Questioning Imbalance

Nature has a review, by economist Dieter Helm, of William Nordhaus’ new book, A Question of Balance. I don’t have the book yet, but I’ll certainly check it out. I like the conclusion of the review:

But it may be naive to assume that substituting for environmental systems is so easy. Feedbacks in the system may be such that as climate change unfolds, the return on capital and hence the discount rate falls. Environmental damage may slow or stop economic growth; if that were the case, we would not be much better off in the future. And if we are not so well off in growth terms, Nordhaus’s slower and more measured policy approach may not be so favourable over taking rapid action now. In other words, Stern’s conclusion might be correct, but not his derivation of it: right answer, wrong analysis.

This is a crucial point. Richard Tol pointed out the substitutability problem back in 1994, but it hasn’t really been formalized in mainstream IAMs. The issue of slowing or stopping growth isn’t limited to climate feedback; oil and gas depletion, the ever-present possibility of conflict, and degradation of other resources also impose constraints.

I have to take issue with one part of the review:

 

Where A Question of Balance has most power is where it is most controversial. Nordhaus tackles Stern head on. Stern’s case for urgent action, which the DICE model shows would be excessively expensive in the short term, rests upon his radical assumption that the time discount rate should be close to zero. This means that we should value people’s consumption equally regardless of whether they live now or in the future. Nordhaus has little time for this moral philosophy: he takes a much more positivistic position, grounded in market evidence and what people actually do, as reflected in market interest rates. The difference between Nordhaus’s optimal climate change policy and Stern’s policy based on a zero discount rate translates into a tenfold difference in the price of carbon. Stern’s discounting approach, Nordhaus argues, gives too low a rate of return and too big a savings rate on climate-stabilizing investments compared with actual macroeconomic data. Not surprisingly, then, his verdict is damning. [emphasis added]

The Stern discounting critique has been done to death. I recently discussed some of the issues here (in particular, see the presentation on discounting and welfare in integrated assessment modeling, based on the primer I wrote for last year’s Balaton meeting). In a nutshell, the discount rate can be decomposed into two terms: pure time preference and inequality aversion. Ramsey showed that, along an optimal growth path,

    interest rate = pure time preference + inequality aversion × growth rate

Stern has been criticized for choosing discounting parameters that aren’t consistent with observed interest and growth rates. That’s true, but let’s not confuse the map with the territory. Stern’s choice is inconsistent with the optimal growth framework, but is the optimal growth framework consistent with reality? Clearly, market interest rates reflect what people actually do in some sense, but they do it in a rather complex institutional setting, rife with opportunities for biases and misperceptions of feedback. Do market interest rates reflect what people actually want? Unfortunately, the micro foundation of macroeconomics is too wobbly to say.

Notice also that the equation above is underdetermined. That is, for realistic growth and interest rates, a variety of pure time preference and inequality aversion assumptions yield equality. Nordhaus, in his original DICE work, preferred 3%/yr pure time preference (no interest in the grandkids) and unit inequality aversion (doubling my income yields the same increment in happiness as doubling a poor African farmer’s income). Dasgupta prefers zero time preference on ethical grounds (as did Ramsey) and higher inequality aversion. The trouble with Nordhaus’ approach is that, unless the new book cites new research, there is no empirical basis for rates of time preference that heavily discount the future. It is difficult to create a realistic simulated context for such long term decisions, but the experimental evidence I’ve seen suggests quite the opposite, that people express some concern for even the distant future.
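To see the underdetermination concretely, here’s a minimal sketch (Python, with illustrative parameter values; the 2%/yr growth rate and the specific parameter pairs are my assumptions for illustration, not citations from the book or the literature):

```python
import math

# Ramsey rule along an optimal growth path:
#   interest rate = pure time preference + inequality aversion * growth rate
def ramsey_rate(time_pref, ineq_aversion, growth):
    return time_pref + ineq_aversion * growth

growth = 0.02  # assumed per-capita consumption growth, 2%/yr (illustrative)

# Illustrative parameterizations in the spirit of the text, not exact citations:
params = {
    "Nordhaus-style": dict(time_pref=0.03,  ineq_aversion=1.0),
    "Stern-style":    dict(time_pref=0.001, ineq_aversion=1.0),
    "Dasgupta-style": dict(time_pref=0.0,   ineq_aversion=2.5),
}

horizon = 100  # years
for name, p in params.items():
    r = ramsey_rate(p["time_pref"], p["ineq_aversion"], growth)
    pv = math.exp(-r * horizon)  # present value of $1 of damages a century out
    print(f"{name:15s} r = {r:.1%}/yr, PV of $1 in {horizon} yr = ${pv:.3f}")
```

Note that the Nordhaus-style and Dasgupta-style combinations yield the same interest rate from very different ethical assumptions; that’s the underdetermination. They only come apart when growth changes, which is exactly the contingency Helm raises.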

Thus it’s a mistake to call Nordhaus’ approach “positivistic.” That lends undue authority to what should be recognized as an ethical choice. (Again, this is subject to the caveat that Nordhaus might have new evidence. I await the book.)

The Deal We Ain't Got

Today, Drew Jones and I presented a simple model as part of the Tällberg Forum’s Washington Conversation, ‘The climate deal we need.’ Our goal was to build from some simple points about the bathtub dynamics of the carbon cycle and climate to yield some insights about what’s needed. Our aspirational list of insights to get across included:

  • stabilizing emissions near current levels fails to stabilize atmospheric concentrations any time soon (because emissions now exceed uptake of carbon; stabilization continues that condition, and the residual accumulates in the atmosphere; see the sketch after this list)
  • achieving stabilization of atmospheric CO2 at low levels (Hansen et al.’s 350 ppm) requires very aggressive cuts (for the same reason; if carbon cycle feedbacks from temperature kick in, negative emissions could be needed)
  • current policies are not on track to meaningful reductions (duh)
  • nevertheless, there is a path (Hansen et al.’s “where should humanity aim” paper lays out one option, and there are others)
  • starting soon is essential (the bathtub continues to fill while we delay – a costly gamble)
  • international negotiation dynamics are tricky due to diversity of interests, coupled problem spaces, and difficulty of transfers (simulations shadow this)
  • but everyone has to be on board or little happens (any one major region or sector, uncontrolled, can blow the deal by emitting above natural uptake)
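The first point is easy to see with a minimal stock-flow sketch (my own illustration with rough, made-up parameters; not the model we used in the session):

```python
# Minimal atmospheric CO2 "bathtub" sketch (illustrative parameters):
# the stock keeps rising as long as emissions (inflow) exceed net uptake (outflow).

PPM_PER_GTC = 1.0 / 2.13     # ~2.13 GtC of carbon per ppm of atmospheric CO2
UPTAKE_FRACTION = 0.02       # assumed net uptake: 2%/yr of the excess over preindustrial

def simulate(emissions_path, co2_0=385.0, preindustrial=280.0):
    """Integrate atmospheric CO2 (ppm) given an emissions path in GtC/yr."""
    co2 = co2_0
    for e in emissions_path:
        uptake_ppm = UPTAKE_FRACTION * (co2 - preindustrial)   # outflow
        co2 += e * PPM_PER_GTC - uptake_ppm                    # inflow - outflow
    return co2

years = 100
flat = [9.0] * years                                  # emissions held near ~2008 levels
aggressive = [9.0 * 0.96 ** t for t in range(years)]  # roughly 4%/yr cuts

print("CO2 after 100 yr, flat emissions:  %.0f ppm" % simulate(flat))
print("CO2 after 100 yr, aggressive cuts: %.0f ppm" % simulate(aggressive))
```

Holding emissions flat leaves the inflow above the outflow, so the stock keeps rising; only cuts deep enough to bring emissions below net uptake let the concentration fall.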

A good moment came when someone asked, “Why should we care about staying below some temperature threshold?” (I think a scenario with about 3.5C was on the screen at the time). Jim Hansen answered, “because that would be a different planet.”

The conversation didn’t lead to specification of “the deal we need” but it explored a number of interesting facets, which I’ll relate in a few follow-on posts.

Climate War Game – Recap

I presented a brief review of my involvement in the CNAS wargame at Balaton today. My last few slides focus on some observations from the game. They led to a very interesting conversation about targets for future models and games. We have been planning to continue seeking ways to insert models into negotiations, with the goal of connecting individual parties’ positions to aggregate global outcomes. However, in the conversation we identified a much more ambitious goal: reframing the whole negotiation process.

The fundamental problem, in the war game and the real world, is that nations are stuck in a lose-lose paradigm: who will bear the burden of costly mitigation? No one is willing to forego growth, as long as “growth is good” is an unqualified mantra. What negotiations need is the realization that growth founded on externalizing the costs of pollution and depletion isn’t really good, together with the recognition that fixing the institutional and behavioral factors standing in the way of large low- or negative-cost emissions reductions and cobenefits would be a win-win. That, combined with a serious and equitable accounting of climate impacts within the scope of present activities, and coupling of adaptation and development opportunities to mitigation, could tilt the landscape in favor of a meaningful agreement.

Ethics, Equity & Models

I’m at the 2008 Balaton Group meeting, where a unique confluence of modeling talent, philosophy, history, activist know-how, compassion and thirst for sustainability makes it hard to go 5 minutes without having a Big Idea.

Our premeeting tackled Ethics, Values, and the Next Generation of Energy and Climate Modeling. I presented a primer on discounting and welfare in integrated assessment modeling, based on a document I wrote for last year’s meeting, translating some of the issues raised by the Stern Review and critiques into plainer language. Along the way, I kept a running list of assumptions in models and modeling processes that have ethical/equity implications.

There are three broad insights:

  1. Technical choices in models have ethical implications. For example, choices about the representation of technology and resource constraints determine whether a model explores a parameter space where “growing to help the poor” is a good idea or not.
  2. Modelers’ prescriptive and descriptive uses of discounting and other explicit choices with ethical implications are often not clearly distinguished.
  3. Decision makers have no clue how the items above influence model outcomes, and do not in any case operate at that level of description.

My list of ethical issues is long and somewhat overlapping. Perhaps that’s partly because I compiled it with no clear definition of ‘ethics’ in mind. But I think it’s also because there are inevitably large gray areas in practice, accentuated by the fact that the issue doesn’t receive much formal attention. Here goes: Continue reading “Ethics, Equity & Models”

Climate War Game – Is 2050 Temperature Locked In?

This slide became known as “the Angry Red Future” at the war game:
[Slide: “The Angry Red Future.” Source: ORNL & Pew via Nature In the Field]

After seeing the presentation around it, Eli Kintisch of Science asked me whether it was realistic to assume that 2050 climate is already locked in. (Keep in mind that we were living in 2015.) I guessed yes, then quickly ran a few simulations to verify. Then I lost my train of thought and lost track of Eli. So, for what it’s still worth, here’s the answer.
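Here, for flavor, is a minimal sketch of the sort of back-of-the-envelope check involved (a toy model with illustrative parameters, not the simulations I actually ran): a carbon bathtub feeding CO2 forcing into a one-box temperature model with thermal inertia, comparing 2050 warming when emissions from 2015 onward keep growing versus being cut sharply.

```python
import math

# Toy model, illustrative parameters only (not the model used at the game).
PPM_PER_GTC = 1.0 / 2.13     # ppm CO2 per GtC added to the atmosphere
UPTAKE = 0.02                # assumed net uptake: 2%/yr of the excess over 280 ppm
F2X = 3.7                    # W/m^2 per CO2 doubling
LAM = F2X / 3.0              # feedback parameter for ~3 C equilibrium sensitivity
HEAT_CAP = 40.0              # effective heat capacity, W*yr/m^2/K (sets thermal inertia)

def warming_2050(emissions_gtc):
    co2, temp = 400.0, 0.9   # rough 2015 initial conditions (ppm, deg C vs preindustrial)
    for e in emissions_gtc:
        co2 += e * PPM_PER_GTC - UPTAKE * (co2 - 280.0)          # carbon bathtub
        forcing = F2X * math.log(co2 / 280.0) / math.log(2.0)    # CO2 forcing
        temp += (forcing - LAM * temp) / HEAT_CAP                # thermal inertia
    return temp

t = range(36)  # 2015..2050
growth = [10.0 * 1.02 ** y for y in t]   # emissions keep growing ~2%/yr
cuts   = [10.0 * 0.95 ** y for y in t]   # emissions cut ~5%/yr starting immediately

print("2050 warming with continued growth: %.1f C" % warming_2050(growth))
print("2050 warming with immediate cuts:   %.1f C" % warming_2050(cuts))
```

In this toy version the difference between the scenarios by 2050 is a fraction of a degree, small compared with the warming already in the pipeline and with the divergence that opens up later in the century; that inertia is the sense in which mid-century climate is largely locked in.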

Continue reading “Climate War Game – Is 2050 Temperature Locked In?”

Climate War Game – Model Support

Drew Jones of the Sustainability Institute stumbled on a great opportunity for model-based decision support. There are lots of climate models and integrated assessment models, but they’re almost always used offline. That is, modelers work between negotiations to develop analyses that (hopefully) address decision makers’ questions, but during any given meeting, negotiators rely on their mental models and static briefing materials. It’s a bit like training pilots in a flight simulator, then having them talk on the radio to guide a novice, who flies the real plane without instruments.

Continue reading “Climate War Game – Model Support”

Tangible Models

MIT researchers have developed a cool digital drawing board that simulates the physics of simple systems:

You can play with something like this with Crayon Physics or Magic Pen. Digital physics works because the laws involved are fairly simple, though the math behind one of these simulations might appear daunting. More importantly, they are well understood and universally agreed upon (except perhaps among perpetual motion advocates).

I’d like to have the equivalent of the digital drawing board for the public policy and business strategy space: a fluid, intuitive tool that translates assumptions into realistic consequences. The challenge is that there is no general agreement on the rules by which organizations and societies work. Frequently there is not even a clear problem statement and common definition of important variables.

However, in most domains, it is possible to identify and simulate the “physics” of a social system in a useful way. The task is particularly straightforward in cases where the social system is managing an underlying physical system that obeys predictable laws (e.g., if there’s no soup on the shelf, you can’t sell any soup). Jim Hines and MIT Media Lab researchers translated that opportunity into a digital whiteboard for supply chains, using a TUI (tangible user interface). Here’s a demonstration:

There are actually two innovations here. First, the structure of a supply chain has been reduced to a set of abstractions (inventories, connections via shipment and order flows, etc.) that make it possible to assemble one tinker-toy style using simple objects on the board. These abstractions eliminate some of the grunt work of specifying the structure of a system, enabling what Jim calls “modeling at conversation speed”. Second, assumptions, like the target stock or inventory coverage at a node in the supply chain, are tied to controls (wheels) that allow the user to vary them and see the consequences in real time (as with Vensim’s Synthesim). Getting the simulation off a single computer screen and into a tangible work environment opens up great opportunities for collaborative exploration and design of systems. Cool.
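To give a flavor of the kind of abstraction involved, here’s a minimal sketch in Python (my own illustration, not the Media Lab implementation): a chain of inventory nodes with a simple stock-adjustment ordering rule, where the target coverage is exactly the sort of assumption you’d tie to a wheel on the board.

```python
# Illustrative supply-chain abstraction: each node holds an inventory stock,
# ships what's ordered (limited by stock on hand), and orders upstream using a
# simple stock-adjustment rule whose target coverage is the kind of knob
# you'd put on a physical wheel.

class Node:
    def __init__(self, name, coverage=4.0, adjust_time=2.0, inventory=40.0):
        self.name = name
        self.coverage = coverage        # desired periods of demand on hand (a "wheel")
        self.adjust_time = adjust_time  # periods to correct an inventory gap
        self.inventory = inventory

    def step(self, demand):
        shipments = min(demand, self.inventory)        # can't ship what you don't have
        target = self.coverage * demand                # desired inventory
        correction = (target - self.inventory) / self.adjust_time
        orders = max(0.0, demand + correction)         # replace sales + close the gap
        # orders arrive immediately (no supply-line delay, to keep the sketch minimal)
        self.inventory += orders - shipments
        return orders                                  # becomes demand for the node upstream

# Retailer -> wholesaler -> factory, hit with a step increase in customer demand.
chain = [Node("retailer"), Node("wholesaler"), Node("factory")]
for week in range(20):
    demand = 10.0 if week < 5 else 15.0
    for node in chain:
        demand = node.step(demand)     # each node's orders drive the next one up
    print(week, [round(n.inventory, 1) for n in chain])
```

Even this stripped-down version shows why real-time manipulation matters: nudging the coverage or adjustment-time knobs visibly changes how order swings amplify up the chain.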

Next step: create tangible, shareable, fast tools for uncertain dynamic tasks like managing the social security trust fund or climate policy.

US Regional Climate Initiatives – Model Roll Call

The Pew Climate Center has a roster of international, US federal, and US state & regional climate initiatives. Wikipedia has a list of climate initiatives. The EPA maintains a database of state and regional initiatives, which they’ve summarized on cool maps. The Center for Climate Strategies also has a map of links. All of these give some idea as to what regions are doing, but not always why. I’m more interested in the why, so this post takes a look at the models used in the analyses that back up various proposals.

[Map: EPA State Climate Initiatives]

In a perfect world, the why would start with analysis targeted at identifying options and tradeoffs for society. That analysis would inevitably involve models, due to the complexity of the problem. Then it would fall to politics to determine the what, by choosing among conflicting stakeholder values and benefits, subject to constraints identified by analysis. In practice, the process seems to run backwards: some idea about what to do bubbles up in the political sphere, which then mandates that various agencies implement something, subject to constraints from enabling legislation and other legacies that do not necessarily facilitate the best outcome. As a result, analysis and modeling jump right to a detailed design phase, without pausing to consider the big picture from the top down. This tendency is reinforced by the fact that most models available to support analysis are fairly detailed and tactical, which makes them too narrow or too cumbersome to redirect at the broadest questions facing society. There isn’t necessarily anything wrong with the models; they just aren’t suited to the task at hand.

My fear is that the analysis of GHG initiatives will prove overconstrained and underpowered, and that as a result implementation will crumble when called upon to make real changes (like California’s ambitious executive order targeting 2050 emissions 80% below 1990 levels). California’s electric power market restructuring debacle jumps to mind. I think underpowered analysis is partly a function of history. Other programs, like emissions markets for SOx, energy efficiency programs, and local regulation of criteria air pollutants, have all worked OK in the past. However, these activities have all been marginal, in the sense that they affect only a small fraction of energy costs and a tinier fraction of GDP. Thus they had limited potential to create noticeable side effects that might lead to damaging economic ripples or the undoing of the policy. Given that, it was feasible to proceed by cautious experimentation. Greenhouse gas regulation, if it is to meet ambitious goals, will not be marginal; it will be pervasive and obvious. Analysis budgets of a few million dollars (much less in most regions) seem out of proportion with the multibillion $/year scale of the problem.

One result of the omission of a true top-down design process is that there has been no serious comparison of proposed emissions trading schemes with carbon taxes, though there are many strong substantive arguments in favor of the latter. In California, for example, the CPUC Interim Opinion on Greenhouse Gas Regulatory Strategies states, “We did not seriously consider the carbon tax option in the course of this proceeding, due to the fact that, if such a policy were implemented, it would most likely be imposed on the economy as a whole by ARB.” It’s hard for CARB to consider a tax, because legislation does not authorize it. It’s hard for legislators to enable a tax, because a supermajority is required and it’s generally considered poor form to say the word “tax” out loud. Thus, for better or for worse, a major option is foreclosed at the outset.

With that little rant aside, here’s a survey of some of the modeling activity I’m familiar with:

Continue reading “US Regional Climate Initiatives – Model Roll Call”

More Oil Price Forecasts

The history of long-term energy forecasting is a rather mixed bag. Supply and demand forecasts have generally been half decent, in terms of percent error, but that’s primarily because GDP growth is steady, energy intensity is price-inelastic, and there’s a lot of momentum in energy-consuming and -producing capital. Energy price forecasts, on the other hand, have generally been terrible. Consider the Delphi panel forecasts conducted by the CEC:

[Chart: California Energy Commission Delphi Forecasts]

In 1988, John Sterman showed that energy forecasts, even those using sophisticated models, were well represented by a simple adaptive rule: Continue reading “More Oil Price Forecasts”
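For flavor, here is one common formulation of such an adaptive rule (my own sketch, not necessarily the exact specification in Sterman’s analysis): smooth the recent past, estimate a growth trend from it, and extrapolate that trend over the forecast horizon.

```python
# Rough paraphrase of an adaptive trend-extrapolation forecasting rule
# (illustrative formulation, not necessarily Sterman's exact specification).

def adaptive_forecasts(history, horizon=10, smooth_time=5.0):
    """Yield (year, forecast for year+horizon) pairs, made each year from data seen so far."""
    perceived = history[0]   # smoothed (perceived) level of the input
    trend = 0.0              # smoothed estimate of fractional growth per year
    for t, actual in enumerate(history):
        indicated = (actual - perceived) / (perceived * smooth_time)  # implied growth rate
        trend += (indicated - trend) / smooth_time                    # adapt trend estimate
        perceived += (actual - perceived) / smooth_time               # adapt perceived level
        yield t, perceived * (1.0 + trend) ** horizon                 # extrapolate forward

# Example: prices that ramp up and then collapse; the rule keeps projecting the
# recent boom, so forecasts overshoot for years after the peak.
prices = [20] * 5 + [20 + 5 * i for i in range(1, 8)] + [30] * 8
for t, forecast in adaptive_forecasts(prices):
    print(t, prices[t], round(forecast, 1))
```

Feed such a rule a boom-and-bust price series and the forecasts keep projecting the boom well after it ends, which is the kind of systematic overshoot that tends to show up in historical price-forecast comparisons.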

SRES – We've got a bigger problem now

Recently Pielke, Wigley and Green discussed the implications of autonomous energy efficiency improvements (AEEI) in IPCC scenarios, provoking many replies. Some found the hubbub around the issue surprising, because the assumptions concerned were well known, at least to modelers. I was among the surprised, but sometimes the obvious needs to be restated loud and clear. I believe that there are several bigger elephants in the room that deserve such treatment. AEEI is important, as are other hotly debated SRES choices like PPP vs. MER, but at the end of the day, these are just parameter choices. In complex systems parameter uncertainty generally plays second fiddle to structural uncertainty. Integrated assessment models (IAMs) as a group frequently employ similar methods, e.g., dynamic general equilibrium, and leave crucial structural assumptions untested. I find it strange that the hottest debates surround biogeophysical models, which are actually much better grounded in physical principles, when socio-economic modeling is so uncertain.
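To make that distinction concrete, here’s a toy illustration (entirely my own, with made-up numbers, nothing to do with any particular IAM): two growth structures calibrated to look nearly identical over a short history, extrapolated alongside a generous parameter variation within one of them.

```python
import math

# Toy illustration of parameter vs. structural uncertainty (made-up numbers):
# exponential and logistic growth are calibrated to look alike over a
# 20-year "history", then extrapolated far beyond it.

def exponential(t, y0=1.0, g=0.03):
    return y0 * math.exp(g * t)

def logistic(t, y0=1.0, g=0.055, capacity=3.0):
    return capacity / (1.0 + (capacity / y0 - 1.0) * math.exp(-g * t))

history, future = 20, 80
print("fit over history (should be close):")
print("  exp(20) = %.2f, logistic(20) = %.2f" % (exponential(history), logistic(history)))

print("extrapolation at t = %d:" % future)
print("  exp, g = 2.4%%/yr: %.1f" % exponential(future, g=0.024))   # -20%% on the parameter
print("  exp, g = 3.6%%/yr: %.1f" % exponential(future, g=0.036))   # +20%% on the parameter
print("  logistic:          %.1f" % logistic(future))
```

Varying the growth parameter moves the answer around; switching the structure changes its character entirely. That’s the sense in which untested structural assumptions like general equilibrium deserve at least as much scrutiny as hotly debated parameters like AEEI.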

Continue reading “SRES – We've got a bigger problem now”