Battle of the Bulb II

The White House has announced new standards for lighting. As I’ve said before, I prefer an economic signal to an outright ban. A less-draconian performance standard may have advantages, though. I just visited Erling Moxnes in Norway, who handed me an interesting paper that describes one possible benefit of standards, even when consumers’ preferences are taken at face value.

A frequent argument against efficiency standards is that they prohibit products that represent optimal choices for customers and thus lead to reduced customer utility. In this paper we propose and test a method to estimate such losses. Conjoint analysis is used to estimate utility functions for individuals that have recently bought a refrigerator. The utility functions are used to calculate the individuals’ utility of all the refrigerators available in the market. Revealed utility losses due to non-optimal choices by the customers seem consistent with other data on customer behavior. The same utility estimates are used to find losses due to energy efficiency standards that remove products from the market. Contrary to previous claims, we find that efficiency standards can lead to increased utility for the average customer. This is possible because customers do not make perfect choices in the first place.

The key here is not that customers are stupid and need to be coddled by the government. The method accepts customer utility functions as is (along with possible misperceptions). However, consumers perform limited search for appliances (presumably because search is costly), and thus there’s a significant random component to their choices. Standards help in that case by focusing the search space, at least with respect to one product attribute. They’re even more helpful to the extent that energy efficiency is correlated with other aspects of product quality (e.g., due to use of higher-quality components).
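To make the limited-search argument concrete, here’s a back-of-the-envelope Monte Carlo (not the paper’s conjoint method; all parameters are invented for illustration): consumers who inspect only a few randomly chosen products do better, on average, when a standard prunes the least efficient models from the menu.

```python
# Minimal sketch of the limited-search argument: consumers sample only a few
# products at random and buy the best of that sample. A standard that removes
# the least efficient products narrows the search space. Numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

n_products = 100
efficiency = rng.uniform(0, 1, n_products)       # the regulated attribute
other_quality = rng.uniform(0, 1, n_products)    # everything else consumers value

def avg_utility(products_idx, n_consumers=10000, search_size=3, w_eff=0.3):
    """Average realized utility when each consumer inspects only
    `search_size` randomly chosen products and buys the best one."""
    utilities = w_eff * efficiency + (1 - w_eff) * other_quality
    total = 0.0
    for _ in range(n_consumers):
        sample = rng.choice(products_idx,
                            size=min(search_size, len(products_idx)),
                            replace=False)
        total += utilities[sample].max()
    return total / n_consumers

all_products = np.arange(n_products)
standard = all_products[efficiency > np.quantile(efficiency, 0.3)]  # drop least efficient 30%

print("no standard  :", avg_utility(all_products))
print("with standard:", avg_utility(standard))
```

With these toy numbers, average realized utility rises under the standard, even though the utility function itself is taken as given.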

Estimating customer utility of energy efficiency standards for refrigerators. Erling Moxnes. Journal of Economic Psychology 25, 707-724. 2004.

Waxman-Markey emissions coverage

In an effort to get a handle on Waxman-Markey, I’ve been digging through the EPA’s analysis. Here’s a visualization of covered vs. uncovered emissions in 2016 (click through for the interactive version).

[Interactive visualization: covered vs. uncovered emissions under Waxman-Markey, 2016]

The orange bits above are uncovered emissions – mostly the usual suspects: methane from cow burps, landfills, and coal mines; N2O from agriculture; and other small process or fugitive emissions. This broad scope is one of W-M’s strong points.

Talking to the taxman about math

I ran across this gem in the text of Waxman-Markey (HR 2454):

(e) Trade-vulnerable Industries-

(1) IN GENERAL- The Administrator shall allocate emission allowances to energy-intensive, trade-exposed entities, to be distributed in accordance with section 765, in the following amounts:

(A) For vintage years 2012 and 2013, up to 2.0 percent of the emission allowances established for each year under section 721(a).

(B) For vintage year 2014, up to 15 percent of the emission allowances established for that year under section 721(a).

(C) For vintage year 2015, up to the product of–

(i) the amount specified in paragraph (2); multiplied by

(ii) the quantity of emission allowances established for 2015 under section 721(a) divided by the quantity of emission allowances established for 2014 under section 721(a).

(D) For vintage year 2016, up to the product of–

(i) the amount specified in paragraph (3); multiplied by

(ii) the quantity of emission allowances established for 2015 under section 721(a) divided by the quantity of emission allowances established for 2014 under section 721(a).

(E) For vintage years 2017 through 2025, up to the product of–

(i) the amount specified in paragraph (4); multiplied by

(ii) the quantity of emission allowances established for that year under section 721(a) divided by the quantity of emission allowances established for 2016 under section 721(a).

(F) For vintage years 2026 through 2050, up to the product of the amount specified in paragraph (4)–

(i) multiplied by the quantity of emission allowances established for the applicable year during 2026 through 2050 under section 721(a) divided by the quantity of emission allowances established for 2016 under section 721(a); and

(ii) multiplied by a factor that shall equal 90 percent for 2026 and decline 10 percent for each year thereafter until reaching zero, except that, if the President modifies a percentage for a year under subparagraph (A) of section 767(c)(3), the highest percentage the President applies for any sector under that subparagraph for that year (not exceeding 100 percent) shall be used for that year instead of the factor otherwise specified in this clause.

What we have here is really a little dynamic model, which can be written down in 4 or 5 lines. The intent is apparently to stabilize the absolute magnitude of the allocation to trade-vulnerable industries. In order to do that, the allocation share would have to rise over time, as the total allowances issued falls. After 2026, there’s a 10%-per-year phaseout, but that’s offset by the continued upward pressure on share from the decline in allowances, so the net phaseout rate is about 5%/year, I think. Oops: Actually, I think now that it’s the other way around … from 2017 to 2025, the formula holds the share roughly constant, so the absolute allocation declines at the same rate as the total allowances issued; thereafter, the 10-point-per-year phaseout factor comes on top of that decline. There is no obvious rationale for this strange method.
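For concreteness, here’s one way to write that little model down, following subparagraphs (A) through (F) as quoted above. The amounts specified in paragraphs (2) through (4) and the section 721(a) allowance schedule aren’t in this excerpt, so they appear as placeholders; this is a sketch of my reading, not an authoritative implementation, and it ignores the Presidential adjustment in clause (F)(ii).

```python
# Sketch of the allocation to energy-intensive, trade-exposed entities, per
# subparagraphs (A)-(F) above. A2, A3, A4 stand in for the amounts specified
# in paragraphs (2)-(4), and Q is the sec. 721(a) allowance schedule; both are
# placeholders, not values from the bill. The Presidential adjustment under
# sec. 767(c)(3) is ignored.
def tvi_allocation(year, Q, A2, A3, A4):
    """Upper bound on allowances allocated for a given vintage year."""
    if year in (2012, 2013):
        return 0.02 * Q[year]                          # (A)
    if year == 2014:
        return 0.15 * Q[year]                          # (B)
    if year == 2015:
        return A2 * Q[2015] / Q[2014]                  # (C)
    if year == 2016:
        return A3 * Q[2015] / Q[2014]                  # (D), as quoted
    if 2017 <= year <= 2025:
        return A4 * Q[year] / Q[2016]                  # (E)
    if 2026 <= year <= 2050:
        factor = max(0.0, 0.9 - 0.1 * (year - 2026))   # 90% in 2026, -10 pts/yr
        return A4 * Q[year] / Q[2016] * factor         # (F)
    return 0.0
```

Written this way, the 2017-2025 allocation simply tracks the decline in total allowances, and from 2026 on the phaseout factor is layered on top of that decline.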

Seems to me that if legislators want to create formulas this complicated, they ought to simply write out the equations (with units) in the text of the bill. Otherwise, natural language hopelessly obscures the structure and no ordinary human can participate effectively in the process. But perhaps that’s part of the attraction?

The RPX is up

While the Case-Shiller index is down and the conventional wisdom suggests that housing prices will continue to fall, the RPX composite is up for the first time since 2007. The year-on-year ratio hit bottom in Feb ’09. The RPX has a lot less lag than the CSI, but also a seasonal signal, so this could merely mean that seasonally adjusted prices are falling more slowly; still, it would be nice if it reflected green shoots. I’m not holding my breath though.
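As a side note on why the year-on-year ratio is a reasonable crude deseasonalizer: dividing each month by the same month a year earlier cancels any pattern that repeats every 12 months. A toy example with synthetic numbers (not RPX data):

```python
# Illustration of the year-on-year ratio as a crude seasonal adjustment:
# dividing each month by the same month a year earlier cancels a repeating
# 12-month pattern. Index values here are synthetic, not RPX data.
import numpy as np

months = np.arange(48)
trend = 100 * (1 - 0.01) ** months                    # ~1%/month decline
seasonal = 1 + 0.03 * np.sin(2 * np.pi * months / 12) # 12-month cycle
index = trend * seasonal

yoy = index[12:] / index[:-12]                        # year-on-year ratio
print(yoy.round(3))   # hovers near 0.99**12 ~ 0.886; the seasonal wiggle cancels
```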

The elusive MAC curve

Marginal Abatement Cost (MAC) curves are a handy way of describing the potential for and cost of reducing energy consumption or GHG emissions. McKinsey has recently made them famous, but they’ve been around, and been debated, for a long time.

[Figure: one version of the McKinsey MAC curve (McKinsey MAC 2.0)]
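For readers who haven’t built one: a bottom-up MAC curve is just a menu of abatement options sorted by marginal cost and laid end to end along a cumulative-potential axis. Here’s a toy construction; the options and numbers are invented, not McKinsey’s:

```python
# Toy bottom-up MAC curve: sort abatement options by marginal cost and plot
# them as bars whose widths are their abatement potentials. Options and
# numbers are invented for illustration.
import matplotlib.pyplot as plt

options = [                     # (name, potential MtCO2e/yr, cost $/tCO2e)
    ("LED lighting",        40, -60),
    ("Anti-idling",         10, -45),
    ("Building retrofits",  80, -10),
    ("Wind power",         120,  20),
    ("CCS retrofit",        90,  45),
]
options.sort(key=lambda o: o[2])          # cheapest abatement first

left = 0.0
for name, potential, cost in options:
    plt.bar(left, cost, width=potential, align="edge",
            edgecolor="k", label=name)
    left += potential

plt.axhline(0, color="k", linewidth=0.5)
plt.xlabel("Cumulative abatement potential (MtCO2e/yr)")
plt.ylabel("Marginal cost ($/tCO2e)")
plt.legend(fontsize=8)
plt.show()
```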

Five criticisms are common:

1. Negative-cost abatement options don’t really exist, or will be undertaken anyway without policy support. This criticism generally arises from the question raised by the Sweeney et al. MAC curve below: if the leftmost bar (diesel anti-idling) has a large negative cost (i.e. a profit opportunity) and is price sensitive, why hasn’t anyone done it? Where are those $20 bills on the sidewalk? There is some wisdom to this, but you have to drink pretty deeply of the neoclassical economic Kool-Aid to believe that there really are no misperceptions, institutional barriers, or non-climate externalities that could create negative-cost opportunities.

[Figure: Sweeney et al. California MAC curve. Source: Sweeney, Weyant et al., Analysis of Measures to Meet the Requirements of California’s Assembly Bill 32]

The neoclassical perspective is evident in AR4, which reports results primarily of top-down, equilibrium models. As a result, mitigation costs are (with one exception) positive:

[Figure: implicit MAC curves, AR4 WG3 Technical Summary fig. TS.9]

Note that these are top-down implicit MAC curves, derived by exercising aggregate models, rather than bottom-up curves constructed from detailed menus of technical options.

2. The curves employ static assumptions that might not come true. For example, I’ve heard that the McKinsey curves assume $60/bbl oil. This criticism is true, but could be generalized to more or less any formal result that’s presented as a figure rather than an interactive model. I regard it as a caveat rather than a flaw.

3. The curves themselves are static, while reality evolves. I think the key issue here is that technology evolves endogenously, so that to some extent the shape of the curve in the future will depend on where we choose to operate on the curve today (see the learning-curve sketch after this list). There are also 2nd-order, market-mediated effects (related to #2 as well): a) exploiting the curve reduces energy demand, and thus prices, which changes the shape of the curve, and b) changes in GHG prices or other policies used to drive exploitation of the curve influence prices of capital and other factors, again changing the shape of the curve.

4. The notion of “supply” is misleading or incomplete. Options depicted on a MAC curve typically involve installing some kind of capital to reduce energy or GHG use. But that installation depends on capital turnover, and therefore is available only incrementally. The rate of exploitation is more difficult to pin down than the maximum potential under idealized conditions.

5. A lot of mitigation falls through the cracks. There are two prongs to this criticism: bottom-up, and top-down. Bottom-up models, because they employ a menu of known technologies, inevitably overlook some existing or potential options that might materialize in reality (with the incentive of GHG prices, for example). That error is, to some extent, offset by over-optimism about other technologies that won’t materialize. More importantly, a menu of supply and end-use technology choices is an incomplete specification of the economy; there’s also a lot of potential for changes in lifestyle and substitution of activity among economic sectors. Today’s bottom-up MAC curve is essentially a snapshot of how to do what we do now, with fewer GHGs. If we’re serious about deep emissions cuts, the economy 40 years from now may not much resemble what we’re doing today. Top-down models capture the substitution potential among sectors, but still take lifestyle as a given and (mostly) start from a first-best equilibrium world, devoid of the mitigation opportunities that arise from human, institutional, and market failures.
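Here’s the learning-curve sketch referenced in criticism 3: a toy experience-curve calculation, with invented numbers, showing that an option’s future marginal cost depends on how much of it gets deployed between now and then, so the future MAC curve is partly a consequence of where we choose to operate on today’s curve.

```python
# Toy illustration of endogenous technology (criticism 3): an experience curve
# in which each doubling of cumulative deployment cuts unit cost by a fixed
# fraction (Wright's law). All numbers are invented.
import math

def future_cost(cost_now, cum_now, cum_future, learning_rate=0.15):
    """Unit cost after deployment grows from cum_now to cum_future,
    with `learning_rate` cost reduction per doubling."""
    b = -math.log2(1 - learning_rate)          # experience-curve exponent
    return cost_now * (cum_future / cum_now) ** (-b)

# Same technology, two policy paths over the next decade:
print(future_cost(60, 10, 20))   # modest deployment: one doubling     -> ~$51/t
print(future_cost(60, 10, 80))   # aggressive deployment: 3 doublings  -> ~$37/t
```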

To get the greenhouse gas MAC curve right, you need a model that captures bottom-up and top-down aspects of the economy, with realistic dynamics and agent behavior, endogenous technology, and non-climate externalities all included. As I see it, mainstream integrated assessment models are headed down some of those paths (endogenous technology), but remain wedded to the equilibrium/optimization perspective. Others (including us at Ventana) are exploring different avenues, but it’s a tough row to hoe.

In the meantime, we’re stuck with a multitude of perspectives on mitigation costs. Here are a few from the WCI, compiled by Wei and Rose from partner jurisdictions’ Climate Action Team reports and other similar documents:

[Figure: WCI partner MAC curves. Source: Wei & Rose, Preliminary Cap & Trade Simulation of Florida Joining WCI]

The methods used to develop the various partner options differ, so these curves reflect diverse beliefs rather than a consistent comparison. What’s striking to me is that the biggest opportunities (are perceived to) exist in California, which already has (roughly) the lowest GHG intensity and most stringent energy policies among the partners. Economics 101 would suggest that California might already have exploited the low-hanging fruit, and that greater opportunity would exist, say, here in Montana, where energy policy means low taxes and GHG intensity is extremely high.

For now, we have to live with the uncertainty. However, an adaptive strategy for discovering the true potential for mitigation seems straightforward: no matter whom you believe, the cost of the initial increment of emissions reductions is either small (<<1% of GDP) or negative, so just put a price on GHGs and see what happens.

ABC to air Clout & Climate Change documentary

This just in from CNAS:

ABC News will air Earth 2100, the prime time documentary for which they filmed the war game, on June 2, 2009, at 9:00 p.m. (EST). You can view a promotional short report on the documentary from ABC News online, and hopefully you will all be able to view it on television or via Internet.

In conjunction with the airing of the documentary, CNAS has made the participant briefing book and materials from the game available online. We encourage other institutions to use and cite these materials to learn about the game and to stage their own scenario exercises. I also hope that they will be useful to you for your own future reference.

Finally, we are posting a short working paper of major findings from the game. While the game did not result in the kind of breakthrough agreements we all would have liked to see, this exercise achieved CNAS’s goals of exploring and highlighting the potential difficulties and opportunities of international cooperation on climate change. I know that everyone took away different observations from the game, however, and I hope that you will share your memories and your own key findings of the event with us, and allow us to post them online as a new section of the report.

Visit the Climate Change War Game webpage to view the CNAS report on major findings and background on developing the 2015 world, the participant briefing book, and materials generated from the game.

The only thing worse than cap & trade …

… are Marty Feldstein’s lame arguments against it.

  • He cites CBO estimates of household costs that reflect outlays, rather than real deadweight or welfare losses after revenue recycling.
  • He wants the US to wait for global agreement before moving. News flash: there won’t be a global agreement without some US movement.
  • He argues that unilateral action is ineffective: true, but irrelevant if you aim to solve the problem. However, if that’s our moral philosophy, I think I should be exempted from all laws – on a global scale, no one will notice my murdering and pillaging, and it’ll be fun for me.

There is one nugget of wisdom in Feldstein’s piece: it’s a travesty to overcompensate carbon-intensive firms, and foolish to use allowance allocation to utilities to defeat the retail price signal. I haven’t read the details of the bill yet, so I don’t know how extensive those provisions really are, but it’s definitely something to watch.

Well, OK, lots of things are worse than cap & trade. More importantly, one thing (an upstream carbon tax) could be a lot better than Waxman-Markey. But it’s sad when a Harvard economist sounds like an astroturf skeptic.

Hat tip to Economist’s View.

Good modeling practices

Some thoughts I’ve been collecting, primarily oriented toward system dynamics modeling in Vensim, but relevant to any modeling endeavor:

  • Know why you’re building the model.
    • If you’re targeting a presentation or paper, write the skeleton first, so you know how the model will fill in the answers as you go.
  • Organize your data first.
    • No data? No problem. But surely you have some reference mode in mind, and some constraints on behavior, at least in extreme conditions.
    • In Vensim, dump it all into a spreadsheet, database, or text file and import it into a data model, using the Model>Import data… feature, GET XLS DATA functions, or ODBC.
    • Don’t put data in lookups (table functions) unless you must for some technical reason; they’re a hassle to edit and update, and lousy at distinguishing real data points from interpolation.
  • Keep a lab notebook. An open word processor while you work is useful. Write down hypotheses before you run, so that you won’t rationalize surprises.
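As a small illustration of the “organize your data first” advice: collect raw observations in one tidy table, then pivot to a variable-by-time layout and write a flat file for import. The column names and file layout below are illustrative only; check your tool’s import documentation (e.g., Vensim’s Model>Import data… dialog) for the exact format it expects.

```python
# Sketch of the "organize your data first" step: gather raw observations in a
# tidy (variable, time, value) table, pivot to variable-by-time, and write a
# tab-delimited file for import into a data model. The layout and file name
# are illustrative; verify the format your import tool expects.
import pandas as pd

raw = pd.DataFrame({
    "variable": ["Population", "Population", "GDP", "GDP"],
    "time":     [2000,          2010,         2000,  2010],
    "value":    [6.1e9,         6.9e9,        33e12, 66e12],
})

wide = raw.pivot(index="variable", columns="time", values="value")
wide.to_csv("model_data.tab", sep="\t")
print(wide)
```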