The rebound delusion

Lately it’s become fashionable to claim that energy efficiency is useless, because the rebound effect will always eat it up. This is actually hogwash, especially in the short term. James Barrett has a nice critique of the super-rebound position at RCE. Some excerpts:

To be clear, the rebound effect is real. The theory behind it is sound: Lower the cost of anything and people will use more of it, including the cost of running energy consuming equipment. But as with many economic ideas that are sound theory (like the idea that you can raise government revenues by cutting tax rates), the trick is in knowing how far to take them in reality. (Cutting tax rates from 100% to 50% would certainly raise revenues. Cutting them from 50% to 0% would just as surely lower them.)

The problem with knowing how far to take things like this is that unlike real scientists who can run experiments in a controlled laboratory environment, economists usually have to rely on what we can observe in the real world. Unfortunately, the real world is complicated and trying to disentangle everything that’s going on is very difficult.

Owen cleverly avoids this problem by not trying to disentangle anything.

One supposed example of the Jevons paradox that he points to in the article is air conditioning. Citing a conversation with Stan Cox, author of Losing Our Cool, Owen notes that between 1993 and 2005, air conditioners in the U.S. increased in efficiency by 28%, but by 2005, homes with air conditioning increased their consumption of energy for their air conditioners by 37%.

Accounting only for the increased income over the timeframe and fixing Owen’s mistake of assuming that every air conditioner in service is new, a few rough calculations point to an increase in energy use for air conditioning of about 30% from 1993 to 2005, despite the gains in efficiency. Taking into account the larger size of new homes and the shift from room to central air units could easily account for the rest.

All of the increase in energy consumption for air conditioning is easily explained by factors completely unrelated to increases in energy efficiency. All of these things would have happened anyway. Without the increases in efficiency, energy consumption would have been much higher.

It’s easy to be sucked in by stories like the ones Owen tells. The rebound effect is real and it makes sense. Owen’s anecdotes reinforce that common sense. But it’s not enough to observe that energy use has gone up despite efficiency gains and conclude that the rebound effect makes efficiency efforts a waste of time, as Owen implies. As our per capita income increases, we’ll end up buying more of lots of things, maybe even energy. The question is how much higher would it have been otherwise.
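Barrett’s decomposition is easy to check with a back-of-envelope calculation. The 28% and 37% figures are from the article; the fleet-turnover share below is a made-up placeholder (Barrett doesn’t report his exact assumptions), there only to illustrate why Owen’s all-new-units assumption matters:

```python
# Back-of-envelope sketch: energy use = service demand / efficiency, so
# observed growth in use UNDERSTATES growth in demand for cooling services.
new_unit_efficiency_gain = 0.28  # 1993-2005 gain for NEW units (from the article)
fleet_turnover_fraction = 0.5    # hypothetical: share of 2005 fleet that is new
fleet_efficiency_gain = new_unit_efficiency_gain * fleet_turnover_fraction

energy_use_growth = 0.37         # observed growth in AC energy use (from the article)

# implied growth in demand for cooling services (income, bigger homes, central air)
service_demand_growth = (1 + energy_use_growth) * (1 + fleet_efficiency_gain) - 1
print(f"implied cooling-service demand growth: {service_demand_growth:.0%}")
```

Whatever the exact turnover share, the implied growth in cooling services exceeds the growth in energy use, which is the point: the drivers are income and home size, not efficiency.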

Why is the rebound effect suddenly popular? Because an overwhelming rebound effect is needed to make sense of proposals to give up on near-term emissions prices and invest in technology, praying for a clean-energy-supply miracle in a few decades.

As Barrett points out, the notion that energy efficiency increases energy use is an exaggeration of the rebound effect. For efficiency to increase use, energy consumption has to be price-elastic (e < −1). I don’t remember ever seeing an economic study that came to that conclusion. In a production function, such values aren’t physically plausible, because they imply zero energy consumption at a finite energy price.
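A quick sketch makes the threshold visible. With constant-elasticity demand for energy services, service demand S ∝ (p/η)^e for energy price p and efficiency η, so energy use E = S/η ∝ η^−(1+e), and an efficiency gain raises energy use only when e < −1 (the elasticities below are arbitrary illustrative values):

```python
def energy_use(eta, e, p=1.0, k=1.0):
    """Energy use under constant-elasticity service demand (illustrative).

    Service demand S = k * (p / eta)**e; energy use E = S / eta.
    """
    service = k * (p / eta) ** e
    return service / eta

# A 25% efficiency gain under three elasticities: inelastic, unit, elastic.
for e in (-0.3, -1.0, -1.5):
    base, improved = energy_use(1.0, e), energy_use(1.25, e)
    print(f"e = {e:4.1f}: 25% efficiency gain changes energy use by "
          f"{improved / base - 1:+.0%}")
```

Only the e = −1.5 case backfires; at unit elasticity the gain is exactly absorbed, and at realistic inelastic values, efficiency cuts energy use.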

Therefore, the notion that pursuing energy efficiency makes the climate situation worse is a fabrication. Doubly so, because of an accounting sleight-of-hand. Consider two extremes:

  1. no rebound effects (elasticity ~ 0): efficiency policies work, because they reduce energy use and its associated negative social externalities.
  2. big rebound effects (elasticity < -1): efficiency policies increase energy use, but they do so because there’s a huge private benefit from the increase in mobility or illumination or whatever private purpose the energy is put to.

The super-rebound crowd pooh-poohs #1 and conveniently ignores the welfare outcome of #2, accounting only for the negative side effects.

If rebound effects are modest, as they surely are, it makes much more sense to guide R&D and deployment for both energy supply and demand with a current price signal on emissions. That way, firms make distributed decisions about where to invest, rather than the government picking winners, and appropriate tradeoffs between conservation and clean supply are possible. The price signal can be adapted to meet environmental constraints in the face of rising income. Progress starts now, rather than after decades of waiting for the discover → apply → deploy → embody pipeline.

If the public isn’t ready for it, that doesn’t mean analysts should bargain against their own good sense by recommending things that might be popular, but are unlikely to work. That’s like a doctor advising a smoker to give to cancer research, without mentioning that he really ought to quit.

Update: there’s an excellent followup at RCE.

Storytelling and playing with systems

This journalist gets it:

Maybe journalists shouldn’t tell stories so much. Stories can be a great way of transmitting understanding about things that have happened. The trouble is that they are actually a very bad way of transmitting understanding about how things work. Many of the most important things people need to know about aren’t stories at all.

Our work as journalists involves crafting rewarding media experiences that people want to engage with. That’s what we do. For a story, that means settings, characters, a beginning, a muddle and an end. That’s what makes a good story.

But many things, like global climate change, aren’t stories. They’re issues that can manifest as stories in specific cases.

… the way that stories transmit understanding is only one way of doing so. When it comes to something else – a really big, national or world-spanning issue, often it’s not what happened that matters, so much as how things work.

…When it comes to understanding a system, though, the best way is to interact with it.

Play is a powerful way of learning. Of course the systems I’ve listed above are so big that people can’t play with them in reality. But as journalists we can create models that are accurate and instructive as ways of interactively transmitting understanding.

I use the word ‘play’ in its loosest sense here; one can ‘play’ with a model of a system the same way a mechanic ‘plays’ around with an engine when she’s not quite sure what might be wrong with it.

The act of interacting with a system – poking and prodding, and finding out how the system reacts to your changes – exposes system dynamics in a way nothing else can.

If this grabs you at all, take a look at the original – it includes some nice graphics and an interesting application to class in the UK. The endpoint of the forthcoming class experiment is something like a data visualization tool. It would be cool if they didn’t stop there, but actually created a way for people to explore the implications of different models accounting for the dynamics of class, as the Climate CoLab and Climate Interactive do with climate models.

Now cap & trade is REALLY dead

From the WaPo:

[Obama] also virtually abandoned his legislation – hopelessly stalled in the Senate – featuring economic incentives to reduce carbon emissions from power plants, vehicles and other sources.

“I’m going to be looking for other means of addressing this problem,” he said. “Cap and trade was just one way of skinning the cat,” he said, strongly implying there will be others.

In the campaign, Republicans slammed the bill as a “national energy tax” and jobs killer, and numerous Democrats sought to emphasize their opposition to the measure during their own re-election races.

Brookings reflects, Toles nails it.

Modelers: you're not competing

Well, maybe a little, but it doesn’t help.

From time to time we at Ventana encounter consulting engagements where the problem space is already occupied by other models. Typically, these are big, detailed models from academic or national lab teams who’ve been working on them for a long time. For example, in an aerospace project we ran into detailed point-to-point trip generation models and airspace management simulations with every known airport and aircraft in them. They were good, but cumbersome and expensive to run. Our job was to take a top-down look at the big picture, integrating the knowledge from the big but narrow models. At first there was a lot of resistance to our intrusion, because we consumed some of the budget, until it became evident that the existence of the top-down model added value to the bottom-up models by placing them in context, making their results more relevant. The benefit was mutual, because the bottom-up models provided grounding for our model that otherwise would have been very difficult to establish. I can’t quite say that we became one big happy family, but we certainly developed a productive working relationship.

I think situations involving complementary models are more common than head-to-head competition among models that serve the same purpose. Even where head-to-head competition does exist, it’s healthy to have multiple models, especially if they embody different methods. (The trouble with global climate policy is that we have many models that mostly embody the same general equilibrium assumptions, and thus differ only in detail.) Rather than getting into methodological pissing matches, modelers should be seeking the synergy among their efforts and making it known to decision makers. That helps to grow the pie for all modeling efforts, and produces better decisions.

Certainly there are exceptions. I once ran across a competing vendor doing marketing science for a big consumer products company. We were baffled by the high R^2 values they were reporting (.92 to .98), so we reverse engineered their model from the data and some slides (easy, because it was a linear regression). It turned out that the great fits were due to the use of 52 independent parameters to capture seasonal variation on a weekly basis. Since there were only 3 years of data (i.e. 3 points per parameter), we dubbed that the “variance eraser.” Replacing the 52 parameters with a few targeted at holidays and broad variations resulted in more realistic fits, and also revealed problems with inverted signs (presumably due to collinearity) and other typical pathologies. That model deserved to be displaced. Still, we learned something from it: when we looked cross-sectionally at several variants for different products, we discovered that coefficients describing the sales response to advertising were dependent on the scale of the product line, consistent with our prior assertion that effects of marketing and other activities were multiplicative, not additive.
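The variance eraser is easy to reproduce on synthetic data. This is a hedged sketch – the seasonal signal and noise levels below are invented for illustration, not the client’s data – but it shows the mechanism: a dummy for every calendar week soaks up noise along with the seasonality, inflating R² relative to a parsimonious seasonal term:

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks = 156  # three years of weekly data, as in the story
week = np.arange(n_weeks) % 52
# true process: a smooth annual cycle buried in substantial noise
y = 1 + 0.3 * np.sin(2 * np.pi * week / 52) + rng.normal(0, 0.5, n_weeks)

# "variance eraser": one dummy per calendar week (52 parameters, 3 points each)
X_dummies = np.eye(52)[week]
# parsimonious alternative: a single sin/cos pair (2 parameters)
X_smooth = np.column_stack([np.sin(2 * np.pi * week / 52),
                            np.cos(2 * np.pi * week / 52)])

def r2(X, y):
    """In-sample R^2 of an OLS fit with intercept (rank-deficiency is OK)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

print(f"52-dummy R^2: {r2(X_dummies, y):.3f}")
print(f"2-term  R^2:  {r2(X_smooth, y):.3f}")
```

The dummy model always wins in-sample, because the true seasonal shape lies in its span along with a third of the noise; the apparent extra fit is variance erased, not structure explained.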

The reality is that the need for models is almost unlimited.  The physical sciences are fairly well formalized, but models span a discouragingly small fraction of the scope of human behavior and institutions. We need to get the cost of providing insight down, not restrict the supply through infighting. The real enemy is seldom other models, but rather superstition, guesswork and propaganda.

There must be a model here somewhere

I ran across a nice interpretation of Paul Krugman’s comments on China’s monetary policy. It’s also a great example of the limitations of verbal descriptions of complex feedbacks:

In order to invest in China you need state permission and the state limits how much money comes in. It essentially has an import quota on Yuan.

This means that while Yuan are loose in the international market and therefore cheap, they are actually tight at home and therefore expensive. Because China is controlling the flow on money across the border it can have a loose international monetary policy but a tight domestic monetary policy.

Indeed, it goes deeper than that. A loose international Yuan bids up foreign demand for Chinese goods. This in turn both increases the quantity of goods China produces and their domestic price. Essentially, foreign consumers are given a price advantage relative to domestic consumers.

However, China doesn’t want domestic consumers to face higher prices. So, it has to tighten the domestic Yuan even tighter. It has to push down domestic demand so that the sum of international demand plus domestic demand is not so high that it produces domestic inflation.

The tight domestic Yuan, therefore, is driving down Chinese consumption at precisely the time in which the world could use more consumption. The loose international Yuan also gives foreigners a price advantage when buying Chinese goods and so it is driving down inflation in the US at precisely the time the Fed is trying to drive it up.

However, the story still gets worse from there – I am really riffing here, half of this is just occurring to me as I type. The loose international Yuan can only be used to produce manufactured goods. Manufacturing requires commodities both as the feedstock for the actual goods and to be used in the construction of new manufacturing facilities.

What does that mean? It should mean that when the Fed loosens policy, China responds by loosening the International Yuan, which in turn gets shunted towards commodities. Thus rather than boosting the consumer price level as we hope, Fed easing actually winds up boosting commodities.

This is because China is offsetting the total increase in worldwide consumer demand by tightening the Yuan at home, and boosting the total increase in commodity demand by loosening the Yuan abroad.

If this is a bit baffling, it helps to get the context from the originals. Still, it begs for a model or at least a diagram. At least the punch line is simple:

Thus this Yuan policy does all the wrong things.
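For what it’s worth, the core of the quoted argument fits in a few lines of accounting. All magnitudes here are arbitrary illustrative units, not estimates of anything:

```python
# Minimal accounting sketch of the quoted argument.
fed_easing = 1.0  # Fed stimulus to world consumer demand (arbitrary units)

# China loosens the international yuan, which flows into manufacturing
# and hence commodity demand...
intl_yuan_loosening = fed_easing
# ...and tightens the domestic yuan to hold consumer prices down at home.
domestic_yuan_tightening = -fed_easing

consumer_demand_change = fed_easing + domestic_yuan_tightening
commodity_demand_change = intl_yuan_loosening

print(f"net consumer demand change:  {consumer_demand_change:+.1f}")
print(f"net commodity demand change: {commodity_demand_change:+.1f}")
```

The offsets net out exactly where the Fed wants traction (consumer prices) and pass through exactly where it doesn’t (commodities), which is the punch line above.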

Meanwhile, in a bizarre parallel universe where climate policy exists in a vacuum, China calls the US a preening pig. Couldn’t they at least wait for Palin to be elected? Seriously, US climate policy is a joke, but Chinese monetary-industrial policy is just as destructive.

Climate CoLab Contest

The Climate CoLab is an interesting experiment that combines three features,

  • Collaborative simulation modeling (including several integrated assessment models and C-LEARN)
  • On-line debates
  • Collective decision-making

Together these create an infrastructure for collective intelligence that gets beyond the unreal rhetoric that pervades many policy debates.

The CoLab is launching its 2010 round of policy proposal contests:

To members of the Climate CoLab community,

We are pleased to announce the launch of a new Climate CoLab contest, as well as a major upgrade of our software platform.

The contest will address the question: What international climate agreements should the world community make?

The first round runs through October 31 and the final round through November 26.

In early December, the United Nations and U.S. Congress will be briefed on the winning entries.

We are raising funds in the hope of being able to pay travel expenses for one representative from each winning team to attend one or both of these briefings.

We invite you to form teams and enter the contest–learn more at http://climatecolab.org.

We also encourage you to fill out your profiles and add a picture, so that members of the community can get to know each other.

And please inform anyone you believe might be interested about the contest.

Best,

Rob Laubacher

The contest leads to real briefings on the hill, and there are prizes for winners. See details.

Technology first?

The idea of a technology-led solution to climate is gaining ground, most recently with a joint AEI-Brookings proposal. Kristen Sheeran has a nice commentary at RCE on the prospects. Go read it.

I’m definitely bearish on the technology-first idea. I agree that technology investment is a winner, with or without environmental externalities. But for high tech to solve the climate problem by itself, absent any emissions pricing, may require technical discontinuities that are less than likely. That makes technology-first the Hail-Mary pass of climate policy: something you do when you’re out of options.

The world isn’t out of options in a physical sense; it’s just that the public has convinced itself otherwise. That’s a pity.

The invisible hand works

… but not always with the intended outcome. This collapsed condo in China, built without significant rebar connecting building to footing, is a nice demonstration of the fact that markets are lousy at providing unobserved goods, like safety and quality.

[image: ChinaCondoCollapse]

Markets are great at decentralizing decisions where there’s rapid outcome feedback, but lousy at dealing with delayed or temporally remote feedback: pollution, emergent disease resistance, risky lending. As long as markets are incomplete, the invisible hand can’t solve such problems on its own any more than the thermostat in a room can prevent it from getting too hot due to a building fire. In this case, the profit motive probably led to a cascade of bad decisions that actively contributed to the toppling of the building.

There are three possible solutions:

  1. regulate the market (government building inspections),
  2. create a market (label buildings for rebar content?), or
  3. let a solution emerge (financiers and buyers learn their lesson).

#3 is the preferred solution of small-government enthusiasts, but I don’t see any evidence that it actually works for nonlocal problems like pollution or low-probability/high-consequence events. That leaves society holding the bag for the fallout of individual decisions. Possibly that was a good deal for all concerned in the 19th century, but it seems like a dubious approach to the 21st.

Fuel economy makeover

The EPA is working on new fuel economy window stickers for cars (you can vote on alternatives). I like this one:

[image: New Fuel Econ Sticker]
hoisted from the comments at jalopnik

There are some things to like about the possible new version. For example, it indicates fuel economy on an absolute scale, so that there’s no implicit allocation of pollution rights to bigger vehicles (unlike Energy Star and the CAFE standard):

[image: New Fuel Econ Scale]

Since the new stickers will indicate fueling costs, emissions taxes on fuels will be a nice complementary policy, as they’ll be more evident on the dealer lot.

Waiting for a miracle at Lake Mead

Lake Mead has dropped another ten feet since I wrote about its open-loop management,

My hypothesis is that the de facto policy for managing water levels is to wait for good years to restore the excess withdrawals of bad years, and that demand management measures in the interim are toothless. That worked back when river flows were not fully subscribed. The trouble is, supply isn’t stationary, and there’s no reason to assume that it will return to levels that prevailed in the early years of river compacts. At the same time, demand isn’t stationary either, as population growth in the west drives it up. To avoid Lake Mead drying up, the system is going to have to get a spine, i.e. there’s going to have to be some feedback between water availability and demand.
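That open-loop structure is easy to caricature in a toy simulation. All numbers below are hypothetical, chosen only to illustrate the feedback: fixed withdrawals drain the reservoir when supply declines, while even a crude curtailment rule tied to storage stabilizes it:

```python
import numpy as np

rng = np.random.default_rng(1)
years = 40
# Nonstationary supply: noisy inflows with a declining mean (units arbitrary)
inflow = rng.normal(9.0, 2.0, years) - np.linspace(0.0, 2.0, years)

def simulate(feedback):
    """Toy reservoir: capacity 26, demand growing 1%/yr from 8."""
    capacity, level, demand = 26.0, 26.0, 8.0
    levels = []
    for t in range(years):
        demand *= 1.01  # population growth drives demand up
        # closed loop: withdrawals are curtailed in proportion to storage
        withdrawal = demand * (level / capacity if feedback else 1.0)
        level = max(0.0, min(capacity, level + inflow[t] - withdrawal))
        levels.append(level)
    return np.array(levels)

open_loop = simulate(feedback=False)
closed_loop = simulate(feedback=True)
print(f"final storage, open loop:   {open_loop[-1]:.1f}")
print(f"final storage, closed loop: {closed_loop[-1]:.1f}")
```

The proportional curtailment rule is a stand-in for any real feedback (prices, reallocated rights); the point is only that without some link from storage to demand, waiting for good years guarantees eventual depletion.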

An article in the Arizona Republic confirms my thinking,

To slow the lake’s years-long decline, river users have built a reservoir west of Yuma to catch unused runoff, paid farmers to leave fields unplanted and are negotiating with Mexico to leave some of its allocation in Lake Mead while its farmers recover from an earthquake.

None of the steps will yield significant amounts of water, but together, they could keep Lake Mead from sinking below the drought triggers, buying time until a wet winter can replenish some of the water lost to drought.

“It’s time that we need,” said David Modeer, general manager of the Central Arizona Project, which moves water from the Colorado River to Phoenix and Tucson. “The reservoirs have shown they’re resilient. After a 12-year drought, they’re still half-full. What we do now will be worth it to stay out of a shortage.”

Managers are assuming that a return to historic rainfall patterns will save their bacon. But if climate models are right, and the Southwest will be on the losing end of trends in precipitation, that won’t happen. Even if they’re wrong, increasing demand can easily overwhelm restored rainfall. At some point, the loop will have to close – the question is how. Will property rights get reallocated and price signals aligned so that people live within the limits of supply? Or will the lake wind up permanently depleted? There are some signs of improved cooperation among states, but Nevada appears to be betting on failure:

If the reservoir fell below elevation 1,050 feet, one of the tunnels Nevada uses to draw water from the lake would sit above the waterline and would be useless. Nevada is working on a new, deeper tunnel.