CAFE and Policy Resistance

In 2011, the White House announced big increases in CAFE fuel economy standards.

The result has been counterintuitive. But before looking at the outcome, let me correct a misconception. The chart above refers to the “fleetwide average” – but this is the new-vehicle fleetwide average, not the average of vehicles on the road. Of course it is the latter that matters for CO2 emissions and other outcomes. The on-the-road average lags the standards by many years, because long vehicle lifetimes make the fleet turn over slowly. It’s worse than that, because actual performance also lags the standards, due to loopholes and measurement issues. The EPA puts the 2017 model year here:

But wait … it’s still worse than that. Notice that the future fleetwide average is closer to the car standard than to the truck standard:

That implies that the market share of cars is more than 50%. But look what’s been happening:

The market share of cars is collapsing. (Longer series suggest this is the continuation of a long slide.) Presumably this is because, faced with consumer appetites guided by cheap gas and a standards gap between cars and trucks, automakers are doing the rational thing: they’re dumping their car fleets and switching to trucks and SUVs. In other words, they’re moving from the upper curve to the less-constrained lower curve:

It’s actually worse than that, because within each vehicle class, EPA uses a footprint methodology that essentially assigns greater emissions property rights to larger vehicles.
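
The slow-turnover point above is easy to quantify. Here’s a minimal first-order fleet model, as a Python sketch (all parameters are assumed round numbers, not EPA figures):

    # Minimal sketch: on-road average fuel economy chasing a new-vehicle
    # standard through first-order fleet turnover. All parameters assumed.
    fleet_mpg = 25.0   # assumed initial on-road average, mpg
    new_mpg = 40.0     # assumed new-vehicle standard, mpg
    lifetime = 15.0    # assumed average vehicle lifetime, years

    for year in range(31):
        if year % 5 == 0:
            print(f"year {year:2d}: on-road average = {fleet_mpg:.1f} mpg")
        # each year, roughly 1/lifetime of the fleet is replaced by new vehicles
        # (averaging mpg directly for simplicity; gallons/mile would be better)
        fleet_mpg += (new_mpg - fleet_mpg) / lifetime

Even after 30 years, the on-road average is still a couple of mpg short of the standard.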

So, while the CAFE standards seemingly require higher performance, they simultaneously incentivize behavioral responses that offset much of the improvement. The NRC actually wondered whether this would happen when it evaluated CAFE about 5 years ago:

Three outcomes related to the size of vehicles in the fleet are possible due to the regulations: Manufacturers could change the size of individual vehicles, they could change the mix of vehicle sizes in their portfolio (i.e., more large cars relative to small cars), or they could change the mix of cars and light trucks.

I think it’s safe to say that yes, we’re seeing exactly these effects in the US fleet. That makes aggregate progress on emissions rather glacial. Transportation emissions have been rising, interrupted only by the financial crisis. That’s because we’re not using all the needed leverage points in the system. We have one rule (CAFE) and technology (EVs), but we’re not doing anything about prices (carbon tax) or preferences (e.g., walkable cities). We need a more comprehensive approach if we’re going to beat the unintended consequences.

Rise of the Watt Guzzler

Overconsumption isn’t green.

Tesla’s strategy of building electric cars that are simply better than conventional cars has worked brilliantly. They harnessed lust for raw power in service of greener tech (with the help of public subsidies – the other kind of green involved).

That was great, but now it’s time to grow up. Not directly emitting CO2 just isn’t good enough. If personal vehicle transport continues to grow exponentially, it will just run into other limits, especially because renewable electricity is not entirely benign.

The trucks on the horizon are perfect examples. The Cybertruck consumes nearly twice the energy per mile of a Model 3 (and presumably still more when heavily loaded, which is kind of the point of a truck). That power is cheap, so anyone who can afford the capital cost can afford the juice, but if it’s to be renewable, it’s consuming scarce power that could be put to greener purposes than stroking drivers’ egos. It’s also consuming more parking and road space and putting more tire rubber into waterways.

When you consider in addition the effects of driving automation on demand, you get a perfect storm of increased depletion, pollution, congestion and other side effects.

The EV transition isn’t all bad – it’s a big climate mitigation enabler. But I think we could find wiser ways to apply technology and public money that don’t simply move the externalities to other areas.

Challenges Sourcing Parameters for Dynamic Models

A colleague recently pointed me to this survey:

Estimating the price elasticity of fuel demand with stated preferences derived from a situational approach

It starts with a review of a variety of studies:

Table 1. Price elasticities of fuel demand reported in the literature, by average year of observation.

This is similar to other meta-analyses and surveys I’ve seen in the past. That means using it directly is potentially problematic. In a model, you’d typically plug the elasticity into something like the following:

Indicated fuel demand 
   = reference fuel demand * (price/reference price) ^ elasticity
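
In code, the same constant-elasticity formulation is a one-liner (the reference values and elasticity here are placeholders, not estimates):

    # Constant-elasticity fuel demand, per the expression above.
    # Reference values and the elasticity are placeholders, not estimates.
    def indicated_fuel_demand(price, ref_price=1.0, ref_demand=100.0,
                              elasticity=-0.3):
        return ref_demand * (price / ref_price) ** elasticity

    # doubling the price with a long-term elasticity of -0.3 cuts demand ~19%
    print(indicated_fuel_demand(2.0))  # ~81.2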

You’d probably have the expression above embedded in a larger structure, with energy requirements embodied in the capital stock, and various market-clearing feedback loops (as below). The problem is that plugging the elasticities from the literature into a dynamic model involves several treacherous leaps.

First, do the parameter values even make sense? Notice in the results above that 33% of the long term estimates have magnitude < .3, overlapping the top 25% of the short term estimates. That’s a big red flag. Do they have conflicting definitions of “short” and “long”? Are there systematic methods problems?

Second, are they robust as you plan to use them? Many of the short term estimates have magnitude <<.1, meaning that a modest supply shock would cause fuel expenditures to exceed GDP. This is primarily a problem with the equation above (though that’s likely similar to what was estimated). A better formulation would allow non-constant elasticity, but most likely the data is not informative about the extremes. One of the long term estimates is even positive – I’d be interested to see the rationale for that. Perhaps fuel is a luxury good?
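
The expenditure problem is easy to check with illustrative numbers: at a constant elasticity of -0.05, expenditures grow almost linearly with price, so a large enough shock pushes fuel spending to an implausible share of GDP.

    # Why magnitude << 0.1 breaks down: expenditure (price * demand)
    # grows almost linearly with price. All values illustrative.
    ref_price, ref_demand = 1.0, 100.0
    elasticity = -0.05   # a "short term" magnitude well below 0.1

    for multiple in (1, 2, 5, 10):
        price = ref_price * multiple
        demand = ref_demand * (price / ref_price) ** elasticity
        spend = price * demand
        print(f"price x{multiple:2d}: demand {demand:5.1f}, "
              f"expenditure x{spend / (ref_price * ref_demand):.1f}")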

Third, are the parameters any good? My guess is that some of these estimates are simply violating good practice for estimating dynamic systems. The real long term response involves a lot of lags on varying time scales, from annual (perceptions of prices and behavior change) to decadal (fleet turnover, moving, mode-switching) to longer (infrastructure and urban development). Almost certainly some of this is ignored in the estimate, meaning that the true magnitude of the long term response is understated.

Stated preference estimates avoid some problems, but create others. In the short run, people have a good sense of their options and responses. But in the long term, likely not: you’re essentially asking them to mentally simulate a complex system, evaluating options that may not even exist at present. Expert judgments are subject to some of the same limitations.

I think this explains why it’s possible to build a model that’s backed up with a lot of expertise and literature at every equation, yet fails to reproduce the aggregate behavior of the system. Until you’ve spent time integrating components, reconciling conflicting definitions across domains, and revisiting open-loop estimates in a closed-loop context, you don’t have an internally consistent story. Getting there is a big part of the value of dynamic modeling.

The Tesla roof is a luxury product

No one buys a Tesla Model S because it’s cheaper than a regular car. But there’s currently a flurry of breathless tweets, rejoicing that a Tesla roof is cheaper than a regular roof. That’s dubious.

When I see $21.85 per square foot for anything associated with a house, “cheap” is not what comes to mind. That’s in the territory for luxury interior surfaces, not bulk materials like roofing. I’m reminded of the old saw in energy economics (I think from the EMF meetings in Aspen) that above 7000 feet, the concept of discount rates evaporates.

So, what are the numbers, really?


ICE Roadkill

Several countries have now announced eventual bans of internal combustion engines. It’s nice that such a thing can now be contemplated, but this strikes me as a fundamentally flawed approach.

Banning a whole technology class outright is inefficient. When push comes to shove, that inefficiency is likely to lead to an implementation that’s complex and laden with exceptions. Bans and standards are better than nothing, but that regulatory complexity gives opponents something real to whine about. Then the loonies come out. At any plausible corporate cost of capital, a ban in 2040 has near-zero economic weight today.
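
On the economic-weight point, here’s a quick present-value check, taking roughly 2017 (when the first bans were announced) as “today” and a range of assumed discount rates:

    # Present value of $1 of compliance cost incurred in 2040, viewed
    # from ~2017, at assumed corporate discount rates.
    years = 2040 - 2017
    for rate in (0.05, 0.10, 0.15, 0.20):
        pv = 1 / (1 + rate) ** years
        print(f"rate {rate:.0%}: PV of $1 in 2040 = ${pv:.3f}")

At typical hurdle rates of 15-20%, a dollar of 2040 cost is worth a few cents today.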

Rather than banning gas and diesel vehicles at some abstract date in the far future, we should be pricing their externalities now. Air and water pollution, noise, resource extraction, the opportunity cost of space for roads and parking, and a dozen other free rides are good candidates. And, electric vehicles should not be immune to the same charges where applicable.

Once the basic price signal points the transportation market in the right direction, we can see what happens, and tinker around the edges with standards that address particular misperceptions and market failures.

Structure First!

One of the central tenets of system dynamics and systems thinking is that structure causes behavior. This is often described as an iceberg, with events as the visible tip and structure as the greater submerged bulk. Patterns of behavior, in the middle, are sequences of events that may signal the existence of the underlying structure.

The header of the current Wikipedia article on the California electricity crisis is a nice illustration of the difference between event and structural descriptions of a problem.

The California electricity crisis, also known as the Western U.S. Energy Crisis of 2000 and 2001, was a situation in which the United States state of California had a shortage of electricity supply caused by market manipulations, illegal[5] shutdowns of pipelines by the Texas energy consortium Enron, and capped retail electricity prices.[6] The state suffered from multiple large-scale blackouts, one of the state’s largest energy companies collapsed, and the economic fall-out greatly harmed Governor Gray Davis’ standing.

Drought, delays in approval of new power plants,[6]:109 and market manipulation decreased supply.[citation needed] This caused an 800% increase in wholesale prices from April 2000 to December 2000.[7]:1 In addition, rolling blackouts adversely affected many businesses dependent upon a reliable supply of electricity, and inconvenienced a large number of retail consumers.

California had an installed generating capacity of 45GW. At the time of the blackouts, demand was 28GW. A demand supply gap was created by energy companies, mainly Enron, to create an artificial shortage. Energy traders took power plants offline for maintenance in days of peak demand to increase the price.[8][9] Traders were thus able to sell power at premium prices, sometimes up to a factor of 20 times its normal value. Because the state government had a cap on retail electricity charges, this market manipulation squeezed the industry’s revenue margins, causing the bankruptcy of Pacific Gas and Electric Company (PG&E) and near bankruptcy of Southern California Edison in early 2001.[7]:2-3

The financial crisis was possible because of partial deregulation legislation instituted in 1996 by the California Legislature (AB 1890) and Governor Pete Wilson. Enron took advantage of this deregulation and was involved in economic withholding and inflated price bidding in California’s spot markets.[10]

The crisis cost between $40 and $45 billion.[7]:3-4

This is mostly a dead buffalo description of the event:

[Figure: “dead buffalo” event-level diagram of the California electricity crisis]

It offers only a few hints about the structure that enabled these events to unfold. It would be nice if the article provided a more operational description of the problem up front. (It does eventually get there.) Here’s a stab at it:

[Figure: causal loop structure of the California electricity market]

A normal market manages supply and demand through four balancing loops. On the demand side, in the short run utilization of electricity-consuming devices falls with increasing price (B1). In the long run, higher prices also suppress installation of new devices (B2). In parallel on the supply side, higher prices increase utilization in the short run (B4) and provide an incentive for capacity investment in the long run (B3).

The California crisis happened because these market-clearing mechanisms were not functioning. Retail pricing is subject to long regulatory approval lags, so there was effectively no demand price elasticity in the short run, i.e. B1 and B2 were ineffective. The system might still function if it had surplus capacity, but evidently long approval delays prevented B3 from creating that. Even worse, loop B4 was inverted once Enron amassed sufficient market power: instead of the normal competitive incentive to increase capacity utilization when prices are high, Enron could deliberately lower utilization to extract monopoly prices. If any of B1-B3 had been functioning, Enron’s ability to exploit B4 would have been greatly diminished, and the crisis might not have occurred.
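
Here’s a toy numerical sketch of the inverted B4 incentive. The scarcity-pricing function, the trader’s share, and the revenue accounting are invented for illustration; only the 45 GW and 28 GW figures come from the quote above.

    # Toy sketch: with inelastic demand and scarcity pricing, a supplier
    # with market power earns more by withholding. Not a calibrated model.
    def wholesale_price(demand, online, base=30.0):
        # stylized scarcity pricing: price spikes as the reserve margin vanishes
        margin = max(online - demand, 1.0)
        return base * (demand / margin) ** 2   # $/MWh, arbitrary functional form

    demand, capacity = 28_000, 45_000   # MW, per the figures quoted above
    share = 0.25                        # assumed trader's share of capacity

    for withheld in (0, 2_000, 5_000, 8_000):   # MW the trader takes "offline"
        online = capacity - withheld
        p = wholesale_price(demand, online)
        revenue = p * (share * capacity - withheld)   # $/h on remaining output
        print(f"withhold {withheld:5d} MW: "
              f"price {p:5.0f} $/MWh, revenue {revenue / 1e3:5.0f} k$/h")

Revenue peaks with several GW withheld: when prices spike steeply enough with scarcity, selling less earns more.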

I find it astonishing that deregulation created such a dysfunctional market. The framework for electricity markets was laid out by Caramanis, Schweppe, Tabor & Bohn – they literally wrote the book on Spot Pricing of Electricity. Right in the introduction, page 5, it cautions:

Five ingredients for a successful marketplace are

  1. A supply side with varying supply costs that increase with demand
  2. A demand side with varying demands which can adapt to price changes
  3. A market mechanism for buying and selling
  4. No monopsonistic behavior on the demand side
  5. No monopolistic behavior on the supply side

I guess the market designers thought these were optional?

Missing the point about efficiency rebounds … again

Breakthrough’s Nordhaus and Shellenberger (N&S) spot a bit of open-loop thinking about LED lighting:

ON Tuesday, the Royal Swedish Academy of Sciences awarded the 2014 Nobel Prize in Physics to three researchers whose work contributed to the development of a radically more efficient form of lighting known as light-emitting diodes, or LEDs.

In announcing the award, the academy said, “Replacing light bulbs and fluorescent tubes with LEDs will lead to a drastic reduction of electricity requirements for lighting.” The president of the Institute of Physics noted: “With 20 percent of the world’s electricity used for lighting, it’s been calculated that optimal use of LED lighting could reduce this to 4 percent.”

The problem of course is that lighting energy use would fall from 20% to 4% only if there’s no feedback, so that LEDs replace incandescents 1 for 1 (and of course the multiplier can’t be that big anyway, because CFLs and other efficient technologies already supply a lot of light).
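
The arithmetic, as a quick sketch (holding total electricity use fixed for simplicity):

    # The 20% -> 4% claim implies zero rebound: the full 16-point drop in
    # lighting's share of electricity is realized. With rebound, it isn't.
    lighting_share = 0.20   # of world electricity, per the quote
    potential_drop = 0.16   # 20% -> 4%, the no-feedback claim

    for rebound in (0.0, 0.5, 1.0):
        realized = potential_drop * (1 - rebound)
        print(f"rebound {rebound:4.0%}: lighting share falls to "
              f"{lighting_share - realized:.0%}")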

N&S go on to argue:

But it would be a mistake to assume that LEDs will significantly reduce overall energy consumption.

Why? Because rebound effects will eat up the efficiency gains:

“The growing evidence that low-cost efficiency often leads to faster energy growth was recently considered by both the Intergovernmental Panel on Climate Change and the International Energy Agency.”

“The I.E.A. and I.P.C.C. estimate that the rebound could be over 50 percent globally.”

Notice the sleight-of-hand: the first statement implies a rebound effect greater than 100%, while the evidence they’re citing describes a rebound of 50%, i.e. 50% of the efficiency gain is preserved, which seems pretty significant.

Presumably the real evidence they have in mind is http://iopscience.iop.org/0022-3727/43/35/354001 – authors Tsao & Saunders are Breakthrough associates. Saunders describes a 100% rebound for lighting here http://thebreakthrough.org/index.php/programs/energy-and-climate/understanding-energy-efficiency-rebound-interview-with-harry-saunders

Now the big non sequitur:

But LED and other ultraefficient lighting technologies are unlikely to reduce global energy consumption or reduce carbon emissions. If we are to make a serious dent in carbon emissions, there is no escaping the need to shift to cleaner sources of energy.

Let’s assume the premise is true – that the lighting rebound effect is 100% or more. That implies that lighting use is highly price elastic, which in turn means that an emissions price like a carbon tax will have a strong influence on lighting energy. Therefore pricing can play a major role in reducing emissions. It’s probably still true that a shift to clean energy is unavoidable, but it’s not an exclusive remedy, and a stronger rebound effect actually weakens the argument for clean sources.
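
To sketch that logic: a ~100% rebound is consistent with demand for lighting service having a price elasticity of about -1 with respect to the cost of light, in which case efficacy cancels out of energy use entirely (values illustrative):

    # cost of light   c = p / efficacy        (p = electricity price)
    # service demand  S = S0 * (c / c0)**-1   (elasticity -1, ~100% rebound)
    # energy          E = S / efficacy = S0 * c0 / p   (efficacy cancels)
    # Lighting energy then responds directly to the electricity price,
    # no matter how efficient the bulbs get.
    for price_increase in (0.0, 0.12, 0.25):
        energy = 1.0 / (1.0 + price_increase)   # relative to the base case
        print(f"price +{price_increase:.0%}: lighting energy x{energy:.2f}")

A 12% price increase then cuts lighting energy by about 11%, whatever the bulb technology.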

Their own colleagues point this out:

In fact, our paper shows that, for the two 2030 scenarios (with and without solid-state lighting), a mere 12% increase in real electricity prices would result in a net decline in electricity-for-lighting consumption.

What should the real takeaway be?

  • Subsidizing lighting efficiency is ineffective, and possibly even counterproductive.
  • Subsidizing clean energy lowers the cost of delivering lighting and other services, and therefore will also be offset by rebound effects.
  • Emissions pricing is a win-win, because it encourages efficiency, counteracts rebound effects and promotes substitution of clean sources.

Bulbs banned

The incandescent ban is underway.

Conservative think tanks still hate it:

Actually, I think it’s kind of a dumb idea too – but not as bad as you might think, and in the absence of real energy or climate policy, not as dumb as doing nothing. You’d have to be really dumb to believe this:

The ban was pushed by light bulb makers eager to up-sell customers on longer-lasting and much more expensive halogen, compact fluorescent, and LED lighting.

More expensive? Only in a universe where energy and labor costs don’t count (Texas?) and for a few applications (very low usage, or chicken warming).

[Figure: bulb economics]

Over the last couple of years I’ve replaced almost all lighting in my house with LEDs. The light is better, the emissions are lower, and I have yet to see a failure (unlike cheap CFLs).

I built a little bulb calculator in Vensim, which shows huge advantages for LEDs in most situations. Even with conservative assumptions (a low social price of carbon, minimum-wage labor), it’s hard to make incandescents look good. It’s also a nice example of using Vensim for spreadsheet replacement, on a problem that’s not very dynamic but has natural array structure.
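
For flavor, here’s a back-of-envelope version of the comparison (all prices and parameters are my assumptions, cruder than the model’s):

    # Rough annual cost of light: bulb replacements + energy + labor.
    # All numbers assumed for illustration.
    def annual_cost(bulb_price, watts, life_hours,
                    hours_per_year=1000, elec_price=0.12, labor=0.50):
        replacements = hours_per_year / life_hours
        energy = watts / 1000 * hours_per_year * elec_price
        return replacements * (bulb_price + labor) + energy

    print(f"incandescent: ${annual_cost(1.00, 60, 1000):.2f}/yr")   # ~$8.70
    print(f"LED:          ${annual_cost(3.00, 9, 15000):.2f}/yr")   # ~$1.31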

[Figure: bulb model]

Get it: bulb.mdl or bulb.vpm (uses arrays, so you’ll need the free Model Reader)

Hair of the dog that bit you climate policy

Roy Spencer on reducing emissions by increasing emissions:

COL: Let’s say tomorrow, evidence is found that proves to everyone that global warming as a result of human released emissions of CO2 and methane, is real. What would you suggest we do?

SPENCER: I would say we need to grow the economy as fast as possible, in order to afford the extra R&D necessary to develop new energy technologies. Current solar and wind technologies are too expensive, unreliable, and can only replace a small fraction of our energy needs. Since the economy runs on inexpensive energy, in order to grow the economy we will need to use fossil fuels to create that extra wealth. In other words, we will need to burn even more fossil fuels in order to find replacements for fossil fuels.

via Planet 3.0

On the face of it, this is absurd. Reverse a positive feedback loop by making it stronger? But it could work, given the right structure – a relative of mine quit smoking by going into a closet to smoke until he couldn’t stand it anymore. Here’s what I can make of the mental model:

Spencer’s arguing that we need to run reinforcing loops R1 and R2 as hard as possible, because loop R3 is too weak to sustain the economy, because renewables (or more generally non-emitting sources) are too expensive. R1 and R2 provide the wealth to drive R&D, in a virtuous cycle R4 that activates R3 and shuts down the fossil sector via B2. There are a number of problems with this thinking.

  • Rapid growth around R1 rapidly grows environmental damage (B1) – not only climate, but also local air quality, etc. It also contributes to depletion (not shown), and with depletion comes increasing cost (weakening R1) and greater marginal damage from extraction technologies (not shown). It makes no sense to manage the economy as if R1 exists and B1 does not. R3 looks much more favorable today in light of this.
  • Spencer’s view discounts delays. But there are long delays in R&D and investment turnover, which will permit more environmental damage to accumulate while we wait for R&D.
  • In addition to the delay, R4 is weak. For example, if economic growth is 3%/year, and all technical progress in renewables comes from R&D with a 70% learning rate, it’ll take about 44 years to halve renewable costs (see the arithmetic sketch after this list).
  • A 70% learning curve for R&D is highly optimistic. Moreover, a fair amount of renewable cost reductions are due to learning-by-doing and scale economies (not shown), which require R3 to be active, not R4. No current deployment, no progress.
  • Spencer’s argument ignores efficiency (not shown), which works regardless of the source of energy. Spurring investment in the fossil loop R1 sends the wrong signal for efficiency, by depressing current prices.
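
The arithmetic behind the 44-year figure above, reading the 70% learning rate as a progress ratio (costs fall to 70% of their previous value with each doubling of cumulative R&D, which ultimately grows at the economy’s 3%/year):

    import math

    progress_ratio = 0.7   # cost multiplier per doubling of cumulative R&D
    growth = 0.03          # growth rate of (cumulative) R&D, per year

    doublings = math.log(0.5) / math.log(progress_ratio)   # ~1.94 doublings
    years = doublings * math.log(2) / growth               # doubling time ~23 yr
    print(f"{doublings:.2f} doublings -> {years:.0f} years to halve costs")

That yields about 45 years, in line with the estimate above.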

In truth, these feedbacks are already present in many energy models. Most of those are standard economic stuff – equilibrium, rational expectations, etc. – assumptions which favor growth. Yet among the subset that includes endogenous technology, I’m not aware of a single instance that finds a growth+R&D led policy to be optimal or even effective.

It’s time for the techno-optimists like Spencer and Breakthrough to put up or shut up. Either articulate the argument in a formal model that can be shared and tested, or admit that it’s a nice twinkle in the eye that regrettably lacks evidence.