Strategic Excess? The Model

It’s hard to get an intuitive grasp on the strategic reserve design, so I built a model (which I’m not posting because it’s still rather crude, but will describe in some detail). First, I’ll point out that the model has to be behavioral, dynamic, and stochastic. The whole point of the strategic reserve is to iron out problems that surface due to surprises or the cumulative effects of agent misperceptions of the allowance market. You’re not going to get a lot of insight about this kind of situation from a CGE or intertemporal optimization model – which is troubling because all the W-M analysis I’ve seen uses equilibrium tools. That means that the strategic reserve design is either intuitive or based on some well-hidden analysis.

Here’s one version of my sketch of market operations (click to enlarge):
Strategic reserve structure

It’s already complicated, but actually less complicated than the mechanism described in W-M. For one thing, I’ve made some processes continuous (compliance on a rolling basis, rather than at intervals) that sound like they will be discrete in the real implementation.

The strategic reserve is basically a pool of allowances withheld from the market, until need arises, at which point they are auctioned and become part of the active allowance pool, usable for compliance:

m-allowances.png

Reserves auctioned are – to some extent – replaced by recycling of the auction revenue:

m-funds.png

Refilling the strategic reserve consumes international forestry offsets, which may also be consumed by firms for compliance. Offsets are created by entrepreneurs, with supply dependent on market price.

m-offsets.png

Auctions are triggered when market prices exceed a threshold, set according to smoothed actual prices:

m-trigger.png

(Actually I should have labeled this Maximum, not Minimum, since it’s a ceiling, not a floor.)

The compliance market is a bit complicated. Basically, there’s an aggregate firm that emits, and consumes offsets or allowances to cover its compliance obligation for those emissions (non-compliance is also possible, but doesn’t occur in practice; presumably W-M specifies a penalty). The firm plans its emissions to conform to the expected supply of allowances. The market price emerges from the marginal cost of compliance, which has long run and short run components. The LR component is based on eyeballing the MAC curve in the EPA W-M analysis. The SR component is arbitrarily 10x that, i.e. short term compliance surprises are 10x as costly (or the SR elasticity is 10x lower). Unconstrained firms would emit at a BAU level which is driven by a trend plus pink noise (the latter presumably originating from the business cycle, seasonality, etc.).

m-market.png
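As a concrete illustration of the BAU driver described above, here’s a minimal sketch of emissions as an exponential trend perturbed by pink noise, generated as first-order-filtered white noise. The parameter values are purely illustrative, not the ones in my model:

```python
import math
import random

def bau_emissions(years=40, dt=0.25, e0=6.0, trend=0.005,
                  noise_sd=0.03, tau=1.0, seed=1):
    """Unconstrained (BAU) emissions path: an exponential trend perturbed
    by pink noise, generated here as first-order-filtered white noise with
    correlation time tau (years). All parameter values are illustrative."""
    random.seed(seed)
    pink = 0.0
    path = []
    for i in range(int(years / dt)):
        # first-order filter: d(pink)/dt = (white - pink) / tau
        white = random.gauss(0, noise_sd * math.sqrt(tau / dt))
        pink += dt * (white - pink) / tau
        path.append(e0 * math.exp(trend * i * dt) * (1 + pink))
    return path
```

The filter is what makes the noise “pink”: white noise has power at all frequencies, while the first-order smoothing attenuates the high-frequency components, leaving the slower wiggles you’d associate with business cycles.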

So far, so good. Next up: experiments.

Strategic Excess? Simple Math

Before digging into a model, I pondered the reserve mechanism a bit. The idea of the reserve is to provide cost containment. The legislation sets a price trigger at 60% above a 36-month moving average of allowance trade prices. When the current allowance price hits the trigger level, allowances held in the reserve are sold quarterly, subject to an upper limit of 5% to 20% of current-year allowance issuance.
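The quarterly sale rule can be sketched in a few lines. Note that `limit_frac` is a placeholder: the bill specifies the cap as a range of 5% to 20% of current-year issuance, and the price and moving-average inputs would come from the market:

```python
def reserve_auction_quantity(price, avg_36mo, reserve_pool,
                             annual_issuance, limit_frac=0.05):
    """Quarterly reserve sale rule as summarized above: sell only when the
    current price exceeds 160% of the 36-month moving average, capped by
    limit_frac of current-year issuance (5%-20% in the bill) and by
    what's left in the reserve."""
    if price <= 1.6 * avg_36mo:
        return 0.0
    return min(reserve_pool, limit_frac * annual_issuance)
```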

To hit the +60% trigger point, the current price would have to rise above the average through some combination of volatility and an underlying trend. If there’s no volatility, the trigger point permits a very strong trend. If the moving average were a simple exponential smooth, the basis for the trigger would follow the market price with a 36-month lag. That means the trigger would be hit when 60% = (growth rate)*(3 years), i.e. the market price would have to grow 20% per year to trigger an auction. In fact, the moving average is a simple average over a window, which follows an exponential input more closely, so the effective lag is only 1.5 years, and thus the trigger mechanism would permit 40%/year price increases. If you accept that the appropriate time trajectory of prices is more like an increase at the interest rate, it seems that the strategic reserve is fairly useless for suppressing any strong underlying exponential signal.
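A quick numerical check on that back-of-envelope: for a price growing exponentially at rate g, the trailing simple average over a window W equals the current price times (1 − e^(−gW))/(gW), so the growth rate that just reaches the +60% trigger solves gW/(1 − e^(−gW)) = 1.6. A sketch (illustrative only):

```python
import math

def growth_rate_to_trigger(window_years=3.0, trigger_ratio=1.6):
    """For an exponentially growing price p(t) = exp(g*t), the trailing
    simple moving average over window W is p(t) * (1 - exp(-g*W)) / (g*W).
    Solve g*W / (1 - exp(-g*W)) = trigger_ratio for g by bisection
    (the ratio is monotonically increasing in g)."""
    def ratio(g):
        x = g * window_years
        return x / (1 - math.exp(-x))
    lo, hi = 1e-6, 2.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if ratio(mid) < trigger_ratio:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(growth_rate_to_trigger(), 3))  # roughly 0.34, i.e. ~34%/year
```

The exact answer lands between the two rough bounds above (20%/year for a full 3-year lag, 40%/year for a 1.5-year lag), so either way the conclusion stands: only a very steep exponential trend would trip the trigger on its own.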

That leaves volatility. If we suppose that the underlying rate of increase of prices is 10%/year, then the standard deviation of the market price would have to be (60%-(10%/yr*1.5yr))/2 = 22.5% in order to trigger the reserve. That’s not out of line with the volatility of many commodities, but it seems like a heck of a lot of volatility to tolerate when there’s no reason to. Climate damages are almost invariant to whether a ton gets emitted today or next month, so any departure from a smooth price trajectory imposes needless costs (but perhaps worthwhile if cap & trade is really the only way to get a climate policy in place).

The volatility of allowance prices can be translated to a volatility of allowance demand by assuming an elasticity of allowance demand. If elasticity is -0.1 (comparable to short run gasoline estimates), then the underlying demand volatility would be 2.25%. The actual volatility of weekly petroleum consumption around a 1 quarter average is just about twice that:

Weekly petroleum products supplied
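The arithmetic above can be sketched directly; the 2-sigma convention (i.e. treating “hit the trigger” as a two-standard-deviation excursion) is my assumption about the calculation, made explicit as a parameter:

```python
def trigger_sigma(trigger_pct=0.60, trend=0.10, lag_years=1.5, n_sigma=2):
    """Price volatility (s.d.) needed for an n-sigma excursion above the
    trend-adjusted moving average to reach the trigger:
    (60% - 10%/yr * 1.5 yr) / 2 = 22.5%."""
    return (trigger_pct - trend * lag_years) / n_sigma

def demand_sigma(price_sigma, elasticity=-0.1):
    """Translate price volatility to demand volatility via elasticity."""
    return abs(elasticity) * price_sigma

s_p = trigger_sigma()       # 0.225 -> 22.5% price volatility
s_d = demand_sigma(s_p)     # 0.0225 -> 2.25% demand volatility
```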

So, theoretically the reserve might shave some of these peaks, but one would hope that the carbon market wouldn’t be transmitting this kind of noise in the first place.

Strategic Excess?

I’ve been reading the Breakthrough Institute’s Waxman Markey analysis, which is a bit spotty* but raises many interesting issues. One comment seemed too crazy to be true: that the W-M strategic reserve is “refilled” with forestry offsets. Sure enough, it is true:

726 (g) (2) INTERNATIONAL OFFSET CREDITS FOR REDUCED DEFORESTATION- The Administrator shall use the proceeds from each strategic reserve auction to purchase international offset credits issued for reduced deforestation activities pursuant to section 743(e). The Administrator shall retire those international offset credits and establish a number of emission allowances equal to 80 percent of the number of international offset credits so retired. Emission allowances established under this paragraph shall be in addition to those established under section 721(a).

This provision makes the reserve nearly self-perpetuating: at constant prices, 80% of allowances released from the reserve are replaced. If the reserve accomplishes its own goal of reducing prices, more than 80% get replaced (if replacement exceeds 100%, the excess is vintaged and assigned to future years). This got me wondering: does anyone understand how the reserve really works? Its market rules seem arbitrary. Thus I set out to simulate them.
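A minimal sketch of the replacement arithmetic in 726(g)(2), assuming auction proceeds buy offsets at the prevailing offset price:

```python
def allowances_replaced(auctioned, auction_price, offset_price,
                        conversion=0.8):
    """Allowances re-created per Sec. 726(g)(2): auction proceeds buy
    international forestry offsets, which are retired and converted to
    new allowances at 80% (in tons)."""
    proceeds = auctioned * auction_price
    offsets_bought = proceeds / offset_price
    return conversion * offsets_bought

# Constant prices: 80% of auctioned allowances come back.
allowances_replaced(100, auction_price=30, offset_price=30)   # 80.0
# If the auction succeeds in lowering prices, replacement exceeds 80%,
# and the excess is vintaged to future years:
allowances_replaced(100, auction_price=30, offset_price=20)   # 120.0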

First, I took a look at some data. What would happen if the reserve strategy were applied to other commodities? Here’s oil:

Oil prices & moving average cap

Red is the actual US weekly crude price, while purple shows the strategic reserve price trigger level: a 3-year moving average + 60%. With this trajectory, the reserve would be shaving a few peaks, but wouldn’t do anything about the long term runup in prices. Same goes for corn.

Battle of the Bulb II

The White House has announced new standards for lighting. As I’ve said before, I prefer an economic ban to an outright ban. A less-draconian performance standard may have advantages though. I just visited Erling Moxnes in Norway, who handed me an interesting paper that describes one possible benefit of standards, even where consumers are assumed to optimize.

A frequent argument against efficiency standards is that they prohibit products that represent optimal choices for customers and thus lead to reduced customer utility. In this paper we propose and test a method to estimate such losses. Conjoint analysis is used to estimate utility functions for individuals that have recently bought a refrigerator. The utility functions are used to calculate the individuals’ utility of all the refrigerators available in the market. Revealed utility losses due to non-optimal choices by the customers seem consistent with other data on customer behavior. The same utility estimates are used to find losses due to energy efficiency standards that remove products from the market. Contrary to previous claims, we find that efficiency standards can lead to increased utility for the average customer. This is possible because customers do not make perfect choices in the first place.

The key here is not that customers are stupid and need to be coddled by the government. The method accepts customer utility functions as is (along with possible misperceptions). However, consumers perform limited search for appliances (presumably because search is costly), and thus there’s a significant random component to their choices. Standards help in that case by focusing the search space, at least with respect to one product attribute. They’re even more helpful to the extent that energy efficiency is correlated with other aspects of product quality (e.g., due to use of higher-quality components).

Estimating customer utility of energy efficiency standards for refrigerators. Erling Moxnes. Journal of Economic Psychology 25, 707-724. 2004.

Waxman-Markey emissions coverage

In an effort to get a handle on Waxman Markey, I’ve been digging through the EPA’s analysis. Here’s a visualization of covered vs. uncovered emissions in 2016 (click through for the interactive version).


The orange bits above are uncovered emissions – mostly the usual suspects: methane from cow burps, landfills, and coal mines; N2O from agriculture; and other small process or fugitive emissions. This broad scope is one of W-M’s strong points.

Talking to the taxman about math

I ran across this gem in the text of Waxman Markey (HR 2454):

(e) Trade-vulnerable Industries-

(1) IN GENERAL- The Administrator shall allocate emission allowances to energy-intensive, trade-exposed entities, to be distributed in accordance with section 765, in the following amounts:

(A) For vintage years 2012 and 2013, up to 2.0 percent of the emission allowances established for each year under section 721(a).

(B) For vintage year 2014, up to 15 percent of the emission allowances established for that year under section 721(a).

(C) For vintage year 2015, up to the product of–

(i) the amount specified in paragraph (2); multiplied by

(ii) the quantity of emission allowances established for 2015 under section 721(a) divided by the quantity of emission allowances established for 2014 under section 721(a).

(D) For vintage year 2016, up to the product of–

(i) the amount specified in paragraph (3); multiplied by

(ii) the quantity of emission allowances established for 2015 under section 721(a) divided by the quantity of emission allowances established for 2014 under section 721(a).

(E) For vintage years 2017 through 2025, up to the product of–

(i) the amount specified in paragraph (4); multiplied by

(ii) the quantity of emission allowances established for that year under section 721(a) divided by the quantity of emission allowances established for 2016 under section 721(a).

(F) For vintage years 2026 through 2050, up to the product of the amount specified in paragraph (4)–

(i) multiplied by the quantity of emission allowances established for the applicable year during 2026 through 2050 under section 721(a) divided by the quantity of emission allowances established for 2016 under section 721(a); and

(ii) multiplied by a factor that shall equal 90 percent for 2026 and decline 10 percent for each year thereafter until reaching zero, except that, if the President modifies a percentage for a year under subparagraph (A) of section 767(c)(3), the highest percentage the President applies for any sector under that subparagraph for that year (not exceeding 100 percent) shall be used for that year instead of the factor otherwise specified in this clause.

What we have here is really a little dynamic model, which can be written down in 4 or 5 lines. The intent is apparently to stabilize the absolute magnitude of the allocation to trade-vulnerable industries. In order to do that, the allocation share has to rise over time, as the total allowances issued falls. After 2026, there’s a 10%-per-year phaseout, but that’s offset by the continued upward pressure on share from the decline in allowances, so the net phaseout rate is about 5%/year, I think. Oops: Actually, I think now that it’s the other way around … from 2017-2025, the formula decreases the share of allowances allocated at the same rate as the absolute allowance allocation declines. Thereafter, it’s that rate plus 10%. There is no obvious rationale for this strange method.
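Here’s that “little dynamic model” written out as code. The amounts specified in paragraphs (2)-(4) are not in the excerpt, so they appear here as parameters p2, p3, p4, and the Presidential override in (F)(ii) is omitted:

```python
def trade_vulnerable_allocation(year, allowances, p2, p3, p4):
    """Upper bound on allowances allocated to trade-vulnerable industries
    under subparagraphs (A)-(F) above. `allowances` maps vintage year to
    total allowances established under section 721(a); p2, p3, p4 stand
    in for the amounts in paragraphs (2)-(4), which aren't in the excerpt."""
    a = allowances
    if year in (2012, 2013):                      # (A): up to 2.0%
        return 0.02 * a[year]
    if year == 2014:                              # (B): up to 15%
        return 0.15 * a[year]
    if year == 2015:                              # (C)
        return p2 * a[2015] / a[2014]
    if year == 2016:                              # (D): note 2015/2014 again
        return p3 * a[2015] / a[2014]
    if 2017 <= year <= 2025:                      # (E)
        return p4 * a[year] / a[2016]
    if 2026 <= year <= 2050:                      # (F): 10 pt/yr phaseout
        factor = max(0.0, 0.9 - 0.1 * (year - 2026))
        return p4 * a[year] / a[2016] * factor
    return 0.0
```

Ten lines rather than five, but the point stands: the structure is far easier to audit as code (or equations) than as nested statutory cross-references.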

Seems to me that if legislators want to create formulas this complicated, they ought to simply write out the equations (with units) in the text of the bill. Otherwise, natural language hopelessly obscures the structure and no ordinary human can participate effectively in the process. But perhaps that’s part of the attraction?

The elusive MAC curve

Marginal Abatement Cost (MAC) curves are a handy way of describing the potential for and cost of reducing energy consumption or GHG emissions. McKinsey has recently made them famous, but they’ve been around, and been debated, for a long time.

McKinsey MAC 2.0

One version of the McKinsey MAC curve

Five criticisms are common:

1. Negative cost abatement options don’t really exist, or will be undertaken anyway without policy support. This criticism generally arises from the question raised by the Sweeney et al. MAC curve below: if the leftmost bar (diesel anti-idling) has a large negative cost (i.e. profit opportunity) and is price sensitive, why hasn’t anyone done it? Where are those $20 bills on the sidewalk? There is some wisdom to this, but you have to drink pretty deeply of the neoclassical economic Kool-Aid to believe that there really are no misperceptions, institutional barriers, or non-climate externalities that could create negative cost opportunities.

Sweeney et al. California MAC curve

Sweeney, Weyant et al. Analysis of Measures to Meet the Requirements of California’s Assembly Bill 32

The neoclassical perspective is evident in AR4, which reports results primarily of top-down, equilibrium models. As a result, mitigation costs are (with one exception) positive:

AR4 WG3 TS fig. TS.9, implicit MAC curves

AR4 WG3 TS fig. TS-9

Note that these are top-down implicit MAC curves, derived by exercising aggregate models, rather than bottom-up curves constructed from detailed menus of technical options.

2. The curves employ static assumptions that might not come true. For example, I’ve heard that the McKinsey curves assume $60/bbl oil. This criticism is true, but could be generalized to more or less any formal result that’s presented as a figure rather than an interactive model. I regard it as a caveat rather than a flaw.

3. The curves themselves are static, while reality evolves. I think the key issue here is that technology evolves endogenously, so that to some extent the shape of the curve in the future will depend on where we choose to operate on the curve today. There are also 2nd-order, market-mediated effects (related to #2 as well): a) exploiting the curve reduces energy demand, and thus prices, which changes the shape of the curve, and b) changes in GHG prices or other policies used to drive exploitation of the curve influence prices of capital and other factors, again changing the shape of the curve.

4. The notion of “supply” is misleading or incomplete. Options depicted on a MAC curve typically involve installing some kind of capital to reduce energy or GHG use. But that installation depends on capital turnover, and therefore is available only incrementally. The rate of exploitation is more difficult to pin down than the maximum potential under idealized conditions.

5. A lot of mitigation falls through the cracks. There are two prongs to this criticism: bottom-up, and top-down. Bottom-up models, because they employ a menu of known technologies, inevitably overlook some existing or potential options that might materialize in reality (with the incentive of GHG prices, for example). That error is, to some extent, offset by over-optimism about other technologies that won’t materialize. More importantly, a menu of supply and end use technology choices is an incomplete specification of the economy; there’s also a lot of potential for changes in lifestyle and substitution of activity among economic sectors. Today’s bottom-up MAC curve is essentially a snapshot of how to do what we do now, with fewer GHGs. If we’re serious about deep emissions cuts, the economy may not resemble what we’re doing now very much in 40 years. Top down models capture the substitution potential among sectors, but still take lifestyle as a given and (mostly) start from a first-best equilibrium world, devoid of mitigation options arising from human, institutional, and market failures.

To get the greenhouse gas MAC curve right, you need a model that captures bottom-up and top-down aspects of the economy, with realistic dynamics and agent behavior, endogenous technology, and non-climate externalities all included. As I see it, mainstream integrated assessment models are headed down some of those paths (endogenous technology), but remain wedded to the equilibrium/optimization perspective. Others (including us at Ventana) are exploring other avenues, but it’s a hard row to hoe.

In the meantime, we’re stuck with a multitude of perspectives on mitigation costs. Here are a few from the WCI, compiled by Wei and Rose from partner jurisdictions’ Climate Action Team reports and other similar documents:

WCI partner MAC curves

Wei & Rose, Preliminary Cap & Trade Simulation of Florida Joining WCI

The methods used to develop the various partner options differ, so these curves reflect diverse beliefs rather than a consistent comparison. What’s striking to me is that the biggest opportunities (are perceived to) exist in California, which already has (roughly) the lowest GHG intensity and most stringent energy policies among the partners. Economics 101 would suggest that California might already have exploited the low-hanging fruit, and that greater opportunity would exist, say, here in Montana, where energy policy means low taxes and GHG intensity is extremely high.

For now, we have to live with the uncertainty. However, it seems obvious that an adaptive strategy for discovering the true potential for mitigation is easy. No matter who you believe, the cost of the initial increment of emissions reductions is either small (<<1% of GDP) or negative, so just put a price on GHGs and see what happens.

The only thing worse than cap & trade …

… is Marty Feldstein’s lame arguments against it.

  • He cites CBO household costs of policy that reflect outlays, rather than real deadweight or welfare losses after revenue recycling.
  • He wants the US to wait for global agreement before moving. News flash: there won’t be a global agreement without some US movement.
  • He argues that unilateral action is ineffective: true, but irrelevant if you aim to solve the problem. However, if that’s our moral philosophy, I think I should be exempted from all laws – on a global scale, no one will notice my murdering and pillaging, and it’ll be fun for me.

There is one nugget of wisdom in Feldstein’s piece: it’s a travesty to overcompensate carbon-intensive firms, and foolish to use allowance allocation to utilities to defeat the retail price signal. I haven’t read the details of the bill yet, so I don’t know how extensive those provisions really are, but it’s definitely something to watch.

Well, OK, lots of things are worse than cap & trade. More importantly, one thing (an upstream carbon tax) could be a lot better than Waxman Markey. But it’s sad when a Harvard economist sounds like an astroturf skeptic.

Hat tip to Economist’s View.