Allocation Oddity

Mining my hard drive for stuff I did a few weeks back, when the Waxman Markey draft was just out, I ran across this graph:

Waxman-Markey electricity & petroleum prices

It shows prices for electricity and petroleum from the ADAGE model in the June EPA analysis. BAU = business-as-usual; SCN 02 = updated Waxman-Markey scenario; SCN 06 = W-M without allowance allocations for consumer rate relief and a few other provisions. Notice how the retail price signal on electricity is entirely defeated until the 2025-2030 allowance phaseout. On the other hand, petroleum prices are up in either scenario, because there is no rate relief.

Four questions:

  • Isn’t it worse to have a big discontinuity in electricity prices in 2025-2030, rather than a smaller one in 2010-2015?
  • Is your average household even going to notice a 1 or 2 ¢/kWh change over 5 years, given the volatility of other expenses?
  • Since the NPV of the rate relief by 2025 is not much, couldn’t the phaseout happen a little faster? (See the back-of-envelope sketch after this list.)
  • How does it help to defeat the price signal to the residential sector, a large energy consumer with low-hanging mitigation fruit?
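
On the NPV point, a back-of-envelope sketch makes the scale concrete. All numbers here are my own illustrative assumptions (relief of 1.5 ¢/kWh, a 10,000 kWh/yr household, a 5% real discount rate), not values from the EPA/ADAGE analysis:

```python
# Back-of-envelope NPV of electricity rate relief. Every number below is
# an assumption for illustration, not a value from the EPA analysis.

rate_relief = 0.015      # $/kWh held off the retail price (assumed)
household_use = 10_000   # kWh/yr for a typical household (assumed)
discount = 0.05          # real discount rate (assumed)

def pv(start_year, end_year, base_year=2010):
    """Present value, in base_year dollars, of the annual relief stream."""
    return sum(
        rate_relief * household_use / (1 + discount) ** (yr - base_year)
        for yr in range(start_year, end_year + 1)
    )

print(f"Relief 2012-2025, PV in 2010: ${pv(2012, 2025):,.0f}")
print(f"Last five years alone (2021-2025): ${pv(2021, 2025):,.0f}")
```

At these assumptions the whole 2012-2025 relief stream is worth on the order of $1,400 per household in present value, and the final five years only a few hundred dollars – which is why a modestly faster phaseout looks cheap.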

Things might not be as bad as all this, if the goal (not mandate) of serving up rate relief as flat or fixed rebates is actually met. Then the cost of electricity at the margin will go up regardless of allowance allocation, and there would be some equity benefit. But my guess is that, even if that came to pass, consumers would watch their total bills, not the marginal cost, and thus defeat the price signal behaviorally. Also, will people with two addresses and two meters, like me, get a double rebate? Yippee!
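
For concreteness, here’s the flat-rebate arithmetic as a tiny sketch (price and rebate invented for illustration): the rebate shifts the total bill, but the cost of the next kWh is still the full price, which is the signal that matters for conservation decisions – unless, as above, consumers respond to the total rather than the margin.

```python
# Why a flat rebate preserves the marginal price signal.
# bill(q) = price * q - rebate, so the cost of one more kWh is the full
# price no matter how large the rebate is. Numbers are invented.

price = 0.14     # $/kWh with carbon cost passed through (assumed)
rebate = 300.0   # flat annual rebate, $ (assumed)

def bill(kwh):
    return price * kwh - rebate

q = 10_000
marginal = bill(q + 1) - bill(q)   # equals price, independent of the rebate
print(f"Total annual bill at {q:,} kWh: ${bill(q):,.2f}")
print(f"Marginal cost of one more kWh: ${marginal:.2f}")
```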

Constraints vs. Complements

If you look at recent energy/climate regulatory plans in a lot of places, you’ll find an emerging model: an overall market-based umbrella (cap & trade) with a host of complementary measures targeted at particular sectors. The AB32 Scoping Plan, for example, has several options in each of eleven areas (green buildings, transport, …).

I think complementary policies have an important role: unlocking mitigation that’s bottled up by misperceptions, principal-agent problems, institutional constraints, and other barriers, as discussed yesterday. That’s hard work; it means changing the way institutions are regulated, or creating new institutions and information flows.

Unfortunately, too many of the so-called complementary policies take the easy way out. Instead of tackling the root causes of problems, they just mandate a solution – ban the bulb. There are some cases where standards make sense – where transaction costs of other approaches are high, for example – and they may even improve welfare. But for the most part such measures add constraints to a problem that’s already hard to solve. Sometimes those constraints aren’t even targeting the same problem: is our objective to minimize absolute emissions (cap & trade), minimize carbon intensity (LCFS), or maximize renewable content (RPS)?

You can’t improve the solution to an optimization problem by adding constraints. Even if you don’t view society as optimizing (probably a good idea), these constraints stand in the way of a good solution in several ways. Today’s sensible mandate is tomorrow’s straitjacket. Long permitting processes for land use and local air quality make it harder to adapt to a GHG price signal, for example. To the extent that constraints can be thought of as property rights (as in the LCFS), they have high transaction costs or are illiquid. The proper level of the constraint is often subject to large uncertainty. The net result of pervasive constraints is likely to be nonuniform, and often unknown, GHG prices throughout the economy – contrary to the efficiency goal of emissions trading or taxation.
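
The first sentence is just the textbook fact about constrained optimization; stated generically (standard math, not a model of any particular plan):

```latex
% If S is the set of abatement strategies that meet the cap, and
% S' \subseteq S is the subset that also satisfies an added mandate
% (RPS, LCFS, ...), then shrinking the feasible set can only raise
% (never lower) the minimum compliance cost c(x):
\[
  \min_{x \in S'} c(x) \;\ge\; \min_{x \in S} c(x).
\]
```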

My preferred alternative: Start with pricing. Without a pervasive price on emissions, attempts to address barriers are really shooting in the dark – it’s difficult to identify the high-leverage micro measures in an environment where indirect effects and unintended consequences are large, absent a global signal. With a price on emissions, pain points will be more evident. Then they can be addressed with complementary policies, using the following sieve: for each area of concern, first identify the barrier that prevents the market from achieving a good outcome. Then fix the institution or decision process responsible for the barrier (utility regulation, for example), foster the creation of a new institution (to solve the landlord-tenant principal-agent problem, for example), or create a new information stream (labeling or metering, but less perverse than Energy Star). Only if that doesn’t work should we consider a mandate or auxiliary tradable permit system. Even then, we should also consider whether it’s better to simply leave the problem alone, and let the GHG price rise to harvest offsetting reductions elsewhere.
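
For what it’s worth, the sieve can be restated as pseudocode. This is my own compact paraphrase of the paragraph above; the remedy names and predicate functions are illustrative, not an established framework:

```python
def screen(area, barrier_found, remedy_works, mandate_beats_price):
    """Walk one area of concern through the sieve described above.

    barrier_found, remedy_works, and mandate_beats_price are
    caller-supplied judgments (hypothetical predicates), not facts.
    """
    if not barrier_found(area):
        return "no barrier: let the GHG price do the work"
    # Try institutional and informational fixes before any mandate.
    for remedy in ("fix the responsible institution or decision process",
                   "foster a new institution",
                   "create a new information stream"):
        if remedy_works(area, remedy):
            return remedy
    if mandate_beats_price(area):
        return "mandate or auxiliary tradable permit system"
    # Sometimes it's better to leave the problem alone and let the GHG
    # price rise to harvest offsetting reductions elsewhere.
    return "leave it alone: rely on the price signal"

# Screening a hypothetical area with snap judgments:
print(screen("rental housing efficiency",
             barrier_found=lambda a: True,
             remedy_works=lambda a, r: "new institution" in r,
             mandate_beats_price=lambda a: False))
```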

I think it’s reluctance to face transparent prices that drives politics to seek constraining solutions, which hide costs and appear to “stick it to the man.” Unfortunately, we are “the man.” Ultimately that problem rests with voters. Time for us to grow up.

MAC Attack

John Sterman just pointed me to David Levy’s newish blog, Climate Inc., which has some nice thoughts on Marginal Abatement Cost curves: How to get free mac lunches, and Whacking the MAC. They reminded me of my own thoughts on The elusive MAC curve. Climate Inc. also has a very interesting post on the psychology of US and European oil companies’ climate strategies, Back to Petroleum?.

The conclusion from How to get free mac lunches:

Of course, these solutions are not cost free – they involve managerial time, some capital, and transaction costs. Some of the barriers are complex and would require large scale institutional restructuring, requiring government-business collaboration. But one person’s transaction costs are another’s business opportunity (the transaction costs of carbon markets will keep financial firms smiling). The key point here is that there are creative organizational and managerial approaches to unlock the doors to low-cost or even negative-cost carbon reductions. The carbon price is, by itself, an inefficient and ineffective tool – the price would have to be at a politically infeasible level to achieve the desired goal. But we don’t have to rely just on the carbon price or on command and control; a multi-pronged attack is needed.

and Whacking the MAC:

Simply put, it will take a lot more than a market-based carbon price and a handout of free allowances to utilities to unlock the potential of conservation and energy efficiency investments.  It will take some serious innovation, a great deal of risk-taking and capital, and a coordinated effort by policy-makers, investors, and entrepreneurs to jump the significant institutional and legal hurdles currently in the way.  Until then, it will continue to be a real stretch to bend over the hurdles in an effort to reach all the elusive fruit lying on the ground.

Here’s my bottom line on MAC curves:

The existence of negative cost energy efficiency and mitigation options has been debated for decades. The arguments are more nuanced than they used to be, but this will not be settled any time soon. Still, there is an obvious way to proceed. First, put a price on carbon and other externalities. We’d make immediate progress on some fronts, where there are no barriers or misperceptions. In the stickier areas, there would be a financial incentive to solve the institutional, informational and transaction cost barriers that prevented implementation when energy was cheap and emissions were free. Service providers would emerge, and consumers and producers could gang up to push bureaucrats in the right direction. MAC curves would be a useful roadmap for action.
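
Mechanically, a MAC curve is just the available measures sorted by marginal cost and cumulated by abatement potential, with a carbon price reading off which steps are in the money. A minimal sketch, with measures, costs, and potentials entirely invented for illustration:

```python
# Minimal MAC-curve sketch. Costs ($/tCO2) and potentials (MtCO2/yr)
# are invented; negative costs are the contested "free lunch" options.

measures = [
    ("lighting retrofits",  -40,  50),
    ("building insulation", -15,  80),
    ("industrial motors",    -5,  40),
    ("wind power",           20, 150),
    ("CCS",                  60, 100),
]

def mac_curve(measures):
    """Sort by marginal cost; return (name, cost, cumulative abatement)."""
    total = 0
    curve = []
    for name, cost, potential in sorted(measures, key=lambda m: m[1]):
        total += potential
        curve.append((name, cost, total))
    return curve

carbon_price = 30  # $/tCO2, assumed
for name, cost, cum in mac_curve(measures):
    status = "in the money" if cost <= carbon_price else "not triggered"
    print(f"{name:20s} {cost:>4} $/t  cum {cum:>4} Mt/yr  ({status})")
```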

Hottest Day Ever

A few weeks ago, Seattle racked up its hottest day ever, at 103 degrees F. I was there for the fun. Normally I argue that air conditioning in the Pacific Northwest is for wimps, but we weren’t too thrilled about experiencing the record heat in a hotel without functioning AC. The next day (still hot) I was at a hotel that did have AC (the Crowne Plaza), and found this amazing scene:

Crowne Plaza fire

AC on full blast … and people huddled around a gas fire in the lobby?!

Don’t even get me started on the ice machine in a 100-degree closet, with an electric fan venting its waste heat into the hall, only to be expelled to the great outdoors by the building AC…

Incidentally, while it’s been mercifully cool and wet here in Montana, satellite records indicate that July 19 was possibly the hottest day ever recorded worldwide.

Polar Bears & Principles

Amstrup et al. have just published a rebuttal of the Armstrong, Green & Soon critique of polar bear assessments. Polar bears aren’t my area, and I haven’t read the original, so I won’t comment on the ursine substance. However, Amstrup et al. reinforce many of my earlier objections to (mis)application of forecasting principles, so here are some excerpts:

The Principles of Forecasting and Their Use in Science

… AGS based their audit on the idea that comparison to their self-described principles of forecasting could produce a valid critique of scientific results. AGS (p. 383) claimed their principles ‘summarize all useful knowledge about forecasting.’ Anyone can claim to have a set of principles, and then criticize others for violating their principles. However, it takes more than a claim to create principles that are meaningful or useful. In concluding our rejoinder, we point out that the principles espoused by AGS are so deeply flawed that they provide no reliable basis for a rational critique or audit.

Failures of the Principles

Armstrong (2001) described 139 principles and the support for them. AGS (pp. 382–383) claimed that these principles are evidence based and scientific. They fail, however, to be evidence based or scientific on three main grounds: They use relative terms as if they were absolute, they lack theoretical and empirical support, and they do not follow the logical structure that scientific criticisms require.

Using Relative Terms as Absolute

Many of the 139 principles describe properties that models, methods, and (or) data should include. For example, the principles state that data sources should be diverse, methods should be simple, approaches should be complex, representations should be realistic, data should be reliable, measurement error should be low, explanations should be clear, etc. … However, it is impossible to look at a model, a method, or a datum and decide whether its properties meet or violate the principles because the properties of these principles are inherently relative.

Consider diverse. AGS faulted H6 for allegedly failing to use diverse sources of data. However, H6 used at least six different sources of data (mark-recapture data, radio telemetry data, data from the United States and Canada, satellite data, and oceanographic data). Is this a diverse set of data? It is more diverse than it would have been if some of the data had not been used. It is less diverse than it would have been if some (hypothetical) additional source of data had been included. To criticize it as not being diverse, however, without providing some measure of comparison, is meaningless.

Consider simple. What is simple? Although it might be possible to decide which of two models is simpler (although even this might not be easy), it is impossible—in principle—to say whether any model considered in isolation is simple or not. For example, H6 included a deterministic time-invariant population model. Is this model simple? It is certainly simpler than the stationary, stochastic model, or the nonstationary stochastic model also included in H6. However, without a measure of comparison, it is impossible to say which, if any, are ‘simple.’ For AGS to criticize the report as failing to use simple models is meaningless.

A Lack of Theoretical and Empirical Support

If the principles of forecasting are to serve as a basis for auditing the conclusions of scientific studies, they must have strong theoretical and (or) empirical support. Otherwise, how do we know that these principles are necessary for successful forecasts? Closer examination shows that although Armstrong (2001, p. 680) refers to evidence and AGS (pp. 382–383) call the principles evidence based, almost half (63 of 139) are supported only by received wisdom or common sense, with no additional empirical or theoretical support. …

Armstrong (2001, p. 680) defines received wisdom as when ‘the vast majority of experts agree,’ and common sense as when ‘it is difficult to imagine that things could be otherwise.’ In other words, nearly half of the principles are supported only by opinions, beliefs, and imagination about the way that forecasting should be done. This is not evidence based; therefore, it is inadequate as a basis for auditing scientific studies. … Even Armstrong’s (2001) own list includes at least three cases of principles that are supported by what he calls strong empirical evidence that ‘refutes received wisdom’—that is, at least three of the principles contradict received wisdom. …

Forecasting Audits Are Not Scientific Criticism

The AGS audit failed to distinguish between scientific forecasts and nonscientific forecasts. Scientific forecasts, because of their theoretical basis and logical structure based upon the concept of hypothesis testing, are almost always projections. That is, they have the logical form of ‘if X happens, then Y will follow.’ The analyses in AMD and H6 take exactly this form. A scientific criticism of such a forecast must show that even if X holds, Y does not, or need not, follow.

In contrast, the AGS audit simply scored violations of self-defined principles without showing how the identified violation might affect the projected result. For example, the accusation that H6 violated the commandment to use simple models is not a scientific criticism, because it says nothing about the relative simplicity of the model with respect to other possible choices. It also says nothing about whether the supposedly nonsimple model in question is in error. A scientific critique on the grounds of simplicity would have to identify a complexity in the model, and show that the complexity cannot be defended scientifically, that the complexity undermines the credibility of the model, and that a simpler model can resolve the issue. AGS did none of these.

There’s some irony to all this. Armstrong & Green criticize climate predictions as mere opinions cast in overly complex mathematical terms, lacking predictive skill. The instrument of their critique is a complex set of principles, mostly derived from opinions, with undemonstrated ability to predict the skill of models and forecasts.