Challenges Sourcing Parameters for Dynamic Models

A colleague recently pointed me to this survey:

Estimating the price elasticity of fuel demand with stated preferences derived from a situational approach

It starts with a review of a variety of studies:

Table 1. Price elasticities of fuel demand reported in the literature, by average year of observation.

This is similar to other meta-analyses and surveys I’ve seen in the past. That means using it directly is potentially problematic. In a model, you’d typically plug the elasticity into something like the following:

Indicated fuel demand 
   = reference fuel demand * (price/reference price) ^ elasticity

You’d probably have the expression above embedded in a larger structure, with energy requirements embodied in the capital stock, and various market-clearing feedback loops (as below). The problem is that plugging the elasticities from the literature into a dynamic model involves several treacherous leaps.

First, do the parameter values even make sense? Notice in the results above that 33% of the long term estimates have magnitude < .3, overlapping the top 25% of the short term estimates. That’s a big red flag. Do they have conflicting definitions of “short” and “long”? Are there systematic methods problems?

Second, are they robust as you plan to use them? Many of the short term estimates have magnitude <<.1, meaning that a modest supply shock would cause fuel expenditures to exceed GDP. This is primarily a problem with the equation above (but that’s likely similar to what was estimated). A better formulation would consider non-constant elasticity, but most likely the data is not informative about the extremes. One of the long term estimates is even positive – I’d be interested to see the rationale for that. Perhaps fuel is a luxury good?
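To see why near-zero short term elasticities are fragile in the constant-elasticity formulation, it helps to inject a supply shock. The numbers below are illustrative, not taken from the survey:

```python
def demand(price, ref_demand=1.0, ref_price=1.0, elasticity=-0.05):
    """Constant-elasticity fuel demand: D = D0 * (P/P0)^eps."""
    return ref_demand * (price / ref_price) ** elasticity

def clearing_price(supply, ref_demand=1.0, ref_price=1.0, elasticity=-0.05):
    """Invert the demand curve: the price at which demand equals supply."""
    return ref_price * (supply / ref_demand) ** (1.0 / elasticity)

# A 10% supply cut with elasticity -0.05 requires a huge price spike to clear:
p = clearing_price(0.9)   # 0.9 ** -20, roughly 8x the reference price
expenditure = p * 0.9     # spending on fuel rises ~7x
```

With elasticities an order of magnitude smaller still, the implied expenditure share quickly becomes absurd, which is exactly the robustness problem above.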

Third, are the parameters any good? My guess is that some of these estimates are simply violating good practice for estimating dynamic systems. The real long term response involves a lot of lags on varying time scales, from annual (perceptions of prices and behavior change) to decadal (fleet turnover, moving, mode-switching) to longer (infrastructure and urban development). Almost certainly some of this is ignored in the estimate, meaning that the true magnitude of the long term response is understated.
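A rough illustration of the lag problem: if the true response is a first-order adjustment with a long time constant, an estimate drawn from a short observation window sees only part of it. The time constant and window below are made-up numbers for the sketch:

```python
import math

def fraction_adjusted(t_years, tau_years):
    """Fraction of the long-run response realized after t years,
    for first-order exponential adjustment with time constant tau."""
    return 1.0 - math.exp(-t_years / tau_years)

true_long_run_elasticity = -0.6   # hypothetical
tau = 15.0                        # fleet/infrastructure turnover, years
window = 3.0                      # a typical estimation window

# The apparent "long run" elasticity recovered from the short window:
apparent = true_long_run_elasticity * fraction_adjusted(window, tau)
# roughly -0.11: less than a fifth of the true long-run response
```

Real adjustment involves a spectrum of lags, not one, but the direction of the bias is the same: the longer the neglected lags, the more the long term magnitude is understated.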

Stated preference estimates avoid some problems, but create others. In the short run, people have a good sense of their options and responses. But in the long term, likely not: you’re essentially asking them to mentally simulate a complex system, evaluating options that may not even exist at present. Expert judgments are subject to some of the same limitations.

I think this explains why it’s possible to build a model that’s backed up with a lot of expertise and literature at every equation, yet fails to reproduce the aggregate behavior of the system. Until you’ve spent time integrating components, reconciling conflicting definitions across domains, and revisiting open-loop estimates in a closed-loop context, you don’t have an internally consistent story. Getting to that is a big part of the value of dynamic modeling.

The Tesla roof is a luxury product

No one buys a Tesla Model S because it’s cheaper than a regular car. But there’s currently a flurry of breathless tweets, rejoicing that a Tesla roof is cheaper than a regular roof. That’s dubious.

When I see $21.85 per square foot for anything associated with a house, “cheap” is not what comes to mind. That’s in the territory for luxury interior surfaces, not bulk materials like roofing. I’m reminded of the old saw in energy economics (I think from the EMF meetings in Aspen) that above 7000 feet, the concept of discount rates evaporates.

So, what are the numbers, really?

Continue reading “The Tesla roof is a luxury product”

ICE Roadkill

Several countries have now announced eventual bans of internal combustion engines. It’s nice that such a thing can now be contemplated, but this strikes me as a fundamentally flawed approach.

Banning a whole technology class outright is inefficient. When push comes to shove, that inefficiency is likely to lead to an implementation that’s complex and laden with exceptions. Bans and standards are better than nothing, but that regulatory complexity gives opponents something real to whine about. Then the loonies come out. At any plausible corporate cost of capital, a ban in 2040 has near-zero economic weight today.

Rather than banning gas and diesel vehicles at some abstract date in the far future, we should be pricing their externalities now. Air and water pollution, noise, resource extraction, the opportunity cost of space for roads and parking, and a dozen other free rides are good candidates. And, electric vehicles should not be immune to the same charges where applicable.

Once the basic price signal points the transportation market in the right direction, we can see what happens, and tinker around the edges with standards that address particular misperceptions and market failures.

Structure First!

One of the central tenets of system dynamics and systems thinking is that structure causes behavior. This is often described as an iceberg, with events as the visible tip, and structure as the greater submerged bulk. Patterns of behavior, in the middle, are sequences of events that may signal the existence of the underlying structure.

The header of the current Wikipedia article on the California electricity crisis is a nice illustration of the difference between event and structural descriptions of a problem.

The California electricity crisis, also known as the Western U.S. Energy Crisis of 2000 and 2001, was a situation in which the United States state of California had a shortage of electricity supply caused by market manipulations, illegal[5] shutdowns of pipelines by the Texas energy consortium Enron, and capped retail electricity prices.[6] The state suffered from multiple large-scale blackouts, one of the state’s largest energy companies collapsed, and the economic fall-out greatly harmed Governor Gray Davis’ standing.

Drought, delays in approval of new power plants,[6]:109 and market manipulation decreased supply.[citation needed] This caused an 800% increase in wholesale prices from April 2000 to December 2000.[7]:1 In addition, rolling blackouts adversely affected many businesses dependent upon a reliable supply of electricity, and inconvenienced a large number of retail consumers.

California had an installed generating capacity of 45GW. At the time of the blackouts, demand was 28GW. A demand supply gap was created by energy companies, mainly Enron, to create an artificial shortage. Energy traders took power plants offline for maintenance in days of peak demand to increase the price.[8][9] Traders were thus able to sell power at premium prices, sometimes up to a factor of 20 times its normal value. Because the state government had a cap on retail electricity charges, this market manipulation squeezed the industry’s revenue margins, causing the bankruptcy of Pacific Gas and Electric Company (PG&E) and near bankruptcy of Southern California Edison in early 2001.[7]:2-3

The financial crisis was possible because of partial deregulation legislation instituted in 1996 by the California Legislature (AB 1890) and Governor Pete Wilson. Enron took advantage of this deregulation and was involved in economic withholding and inflated price bidding in California’s spot markets.[10]

The crisis cost between $40 to $45 billion.[7]:3-4

This is mostly a dead buffalo description of the event:

ca_elec_dead_buffalo

It offers only a few hints about the structure that enabled these events to unfold. It would be nice if the article provided a more operational description of the problem up front. (It does eventually get there.) Here’s a stab at it:

ca_elec_structure

A normal market manages supply and demand through four balancing loops. On the demand side, in the short run utilization of electricity-consuming devices falls with increasing price (B1). In the long run, higher prices also suppress installation of new devices (B2). In parallel on the supply side, higher prices increase utilization in the short run (B4) and provide an incentive for capacity investment in the long run (B3).

The California crisis happened because these market-clearing mechanisms were not functioning. Retail pricing is subject to long regulatory approval lags, so there was effectively no demand price elasticity response in the short run, i.e. B1 and B2 were ineffective. The system might still have functioned if it had surplus capacity, but evidently long approval delays prevented B3 from creating that. Even worse, the normal operation of B4 was inverted when Enron amassed sufficient market power: instead of the competitive incentive to raise capacity utilization when prices are high, Enron could deliberately lower utilization to extract monopoly prices. If any of B1-B3 had been functioning, Enron’s ability to exploit B4 would have been greatly diminished, and the crisis might not have occurred.
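One way to see the leverage of the broken loops: with a constant-elasticity demand curve, the market-clearing price explodes as demand elasticity goes to zero, which is roughly the condition capped retail prices created. All numbers below are illustrative:

```python
def clearing_price(available_supply, ref_demand=1.0, ref_price=1.0,
                   elasticity=-0.3):
    """Price at which demand D0*(P/P0)^eps equals the supply offered."""
    return ref_price * (available_supply / ref_demand) ** (1.0 / elasticity)

# Withhold 15% of capacity and compare responsive vs. frozen demand:
responsive = clearing_price(0.85, elasticity=-0.3)    # ~1.7x reference price
frozen = clearing_price(0.85, elasticity=-0.02)       # astronomically higher
```

With B1 dead, even modest withholding lets the seller name almost any price, which is consistent with the 20x spot prices in the quote above.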

I find it astonishing that deregulation created such a dysfunctional market. The framework for electricity markets was laid out by Caramanis, Schweppe, Tabor & Bohn – they literally wrote the book on Spot Pricing of Electricity. Right in the introduction, page 5, it cautions:

Five ingredients for a successful marketplace are

  1. A supply side with varying supply costs that increase with demand
  2. A demand side with varying demands which can adapt to price changes
  3. A market mechanism for buying and selling
  4. No monopsonistic behavior on the demand side
  5. No monopolistic behavior on the supply side

I guess the market designers thought these were optional?

Missing the point about efficiency rebounds … again

Breakthrough’s Nordhaus and Shellenberger (N&S) spot a bit of open-loop thinking about LED lighting:

ON Tuesday, the Royal Swedish Academy of Sciences awarded the 2014 Nobel Prize in Physics to three researchers whose work contributed to the development of a radically more efficient form of lighting known as light-emitting diodes, or LEDs.

In announcing the award, the academy said, “Replacing light bulbs and fluorescent tubes with LEDs will lead to a drastic reduction of electricity requirements for lighting.” The president of the Institute of Physics noted: “With 20 percent of the world’s electricity used for lighting, it’s been calculated that optimal use of LED lighting could reduce this to 4 percent.”

The problem of course is that lighting energy use would fall from 20% to 4% only if there’s no feedback, so that LEDs replace incandescents 1 for 1 (and of course the multiplier can’t be that big, because CFLs and other efficient technologies already supply a lot of light).

N&S go on to argue:

But it would be a mistake to assume that LEDs will significantly reduce overall energy consumption.

Why? Because rebound effects will eat up the efficiency gains:

“The growing evidence that low-cost efficiency often leads to faster energy growth was recently considered by both the Intergovernmental Panel on Climate Change and the International Energy Agency.”

“The I.E.A. and I.P.C.C. estimate that the rebound could be over 50 percent globally.”

Notice the sleight-of-hand: the first statement implies a rebound effect greater than 100%, while the evidence they’re citing describes a rebound of 50%, i.e. 50% of the efficiency gain is preserved, which seems pretty significant.
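The distinction matters arithmetically. With a rebound fraction r, net savings are (1 − r) times the engineering gain; savings only turn negative (“backfire”) when r exceeds 100%. Using the shares from the quoted projection:

```python
def net_savings(engineering_savings, rebound):
    """Energy actually saved after rebound: (1 - r) * engineering gain."""
    return engineering_savings * (1.0 - rebound)

lighting_share = 0.20   # of world electricity, per the quote
optimal_share = 0.04    # the claimed no-feedback endpoint
gain = lighting_share - optimal_share   # 16 points of world electricity

half_kept = net_savings(gain, 0.5)   # IEA/IPCC ~50% rebound: 8 points survive
backfire = net_savings(gain, 1.2)    # >100% rebound: use actually grows
```

A 50% rebound still saves 8% of world electricity, which is hard to call insignificant.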

Presumably the real evidence they have in mind is http://iopscience.iop.org/0022-3727/43/35/354001 – authors Tsao & Saunders are Breakthrough associates. Saunders describes a 100% rebound for lighting here http://thebreakthrough.org/index.php/programs/energy-and-climate/understanding-energy-efficiency-rebound-interview-with-harry-saunders

Now the big non sequitur:

But LED and other ultraefficient lighting technologies are unlikely to reduce global energy consumption or reduce carbon emissions. If we are to make a serious dent in carbon emissions, there is no escaping the need to shift to cleaner sources of energy.

Let’s assume the premise is true – that the lighting rebound effect is 100% or more. That implies that lighting use is highly price elastic, which in turn means that an emissions price like a carbon tax will have a strong influence on lighting energy. Therefore pricing can play a major role in reducing emissions. It’s probably still true that a shift to clean energy is unavoidable, but it’s not an exclusive remedy, and a stronger rebound effect actually weakens the argument for clean sources.

Their own colleagues point this out:

In fact, our paper shows that, for the two 2030 scenarios (with and without solid-state lighting), a mere 12% increase in real electricity prices would result in a net decline in electricity-for-lighting consumption.

What should the real takeaway be?

  • Subsidizing lighting efficiency is ineffective, and possibly even counterproductive.
  • Subsidizing clean energy lowers the cost of delivering lighting and other services, and therefore will also be offset by rebound effects.
  • Emissions pricing is a win-win, because it encourages efficiency, counteracts rebound effects and promotes substitution of clean sources.

Bulbs banned

The incandescent ban is underway.

Conservative think tanks still hate it:

Actually, I think it’s kind of a dumb idea too – but not as bad as you might think, and in the absence of real energy or climate policy, not as dumb as doing nothing. You’d have to be really dumb to believe this:

The ban was pushed by light bulb makers eager to up-sell customers on longer-lasting and much more expensive halogen, compact fluorescent, and LED lighting.

More expensive? Only in a universe where energy and labor costs don’t count (Texas?) and for a few applications (very low usage, or chicken warming).

bulb economics

Over the last couple years I’ve replaced almost all lighting in my house with LEDs. The light is better, the emissions are lower, and I have yet to see a failure (unlike cheap CFLs).

I built a little bulb calculator in Vensim, which shows huge advantages for LEDs in most situations. Even with conservative assumptions (low social price of carbon, minimum wage), it’s hard to make incandescents look good. It’s also a nice example of using Vensim for spreadsheet replacement, on a problem that’s not very dynamic but has natural array structure.
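I can’t reproduce the Vensim model here, but a minimal sketch of the economics is easy. The prices, wattages, lifetimes, and rates below are my own rough assumptions, not values from the model:

```python
def lifecycle_cost(purchase_price, watts, lifetime_hours,
                   hours_per_year=1000.0, years=10.0,
                   price_per_kwh=0.12, labor_per_change=1.0):
    """Total cost of light over a horizon: replacements + energy + labor.
    No discounting, for simplicity."""
    total_hours = hours_per_year * years
    replacements = total_hours / lifetime_hours
    energy_kwh = watts / 1000.0 * total_hours
    return (replacements * (purchase_price + labor_per_change)
            + energy_kwh * price_per_kwh)

# Roughly comparable light output (~800 lumens):
incandescent = lifecycle_cost(1.0, 60.0, 1000.0)   # cheap bulb, short life
led = lifecycle_cost(5.0, 9.0, 25000.0)            # pricier, long-lived
# The LED wins easily on energy alone, before counting replacement labor.
```

Only at very low usage hours, or where the waste heat is actually wanted (the chicken-warming case), does the incandescent come close.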

bulbModel

Get it: bulb.mdl or bulb.vpm (uses arrays, so you’ll need the free Model Reader)

Hair of the dog that bit you climate policy

Roy Spencer on reducing emissions by increasing emissions:

COL: Let’s say tomorrow, evidence is found that proves to everyone that global warming as a result of human released emissions of CO2 and methane, is real. What would you suggest we do?

SPENCER: I would say we need to grow the economy as fast as possible, in order to afford the extra R&D necessary to develop new energy technologies. Current solar and wind technologies are too expensive, unreliable, and can only replace a small fraction of our energy needs. Since the economy runs on inexpensive energy, in order to grow the economy we will need to use fossil fuels to create that extra wealth. In other words, we will need to burn even more fossil fuels in order to find replacements for fossil fuels.

via Planet 3.0

On the face of it, this is absurd. Reverse a positive feedback loop by making it stronger? But it could work, if given the right structure – a relative quit smoking by going in a closet to smoke until he couldn’t stand it anymore. Here’s what I can make of the mental model:

Spencer’s arguing that we need to run reinforcing loops R1 and R2 as hard as possible, because loop R3 is too weak to sustain the economy, because renewables (or more generally non-emitting sources) are too expensive. R1 and R2 provide the wealth to drive R&D, in a virtuous cycle R4 that activates R3 and shuts down the fossil sector via B2. There are a number of problems with this thinking.

  • Rapid growth around R1 rapidly grows environmental damage (B1) – not only climate, but also local air quality, etc. It also contributes to depletion (not shown), and with depletion comes increasing cost (weakening R1) and greater marginal damage from extraction technologies (not shown). It makes no sense to manage the economy as if R1 exists and B1 does not. R3 looks much more favorable today in light of this.
  • Spencer’s view discounts delays. But there are long delays in R&D and investment turnover, which will permit more environmental damage to accumulate while we wait for R&D.
  • In addition to the delay, R4 is weak. For example, if economic growth is 3%/year, and all technical progress in renewables is from R&D with a 70% learning rate, it’ll take 44 years to halve renewable costs.
  • A 70% learning curve for R&D is highly optimistic. Moreover, a fair amount of renewable cost reductions are due to learning-by-doing and scale economies (not shown), which require R3 to be active, not R4. No current deployment, no progress.
  • Spencer’s argument ignores efficiency (not shown), which works regardless of the source of energy. Spurring investment in the fossil loop R1 sends the wrong signal for efficiency, by depressing current prices.
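The 44-year figure in the third bullet follows from simple doubling arithmetic: if cumulative R&D grows with the economy at 3%/year, and each doubling of R&D cuts costs to 70% of their previous level, then halving costs takes about two doublings:

```python
import math

def years_to_halve_cost(growth_rate, progress_ratio):
    """Years for cost to halve when cumulative R&D grows exponentially at
    growth_rate and cost falls to progress_ratio per doubling of R&D."""
    doubling_time = math.log(2.0) / growth_rate            # years per doubling
    doublings_needed = math.log(0.5) / math.log(progress_ratio)
    return doubling_time * doublings_needed

years_to_halve_cost(0.03, 0.7)   # ~45 years
```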

In truth, these feedbacks are already present in many energy models. Most of those are standard economic stuff – equilibrium, rational expectations, etc. – assumptions which favor growth. Yet among the subset that includes endogenous technology, I’m not aware of a single instance that finds a growth+R&D led policy to be optimal or even effective.

It’s time for the techno-optimists like Spencer and Breakthrough to put up or shut up. Either articulate the argument in a formal model that can be shared and tested, or admit that it’s a nice twinkle in the eye that regrettably lacks evidence.

Thorium Dreams

The NY Times nails it in In Search of Energy Miracles:

Yet not even the speedy Chinese are likely to get a sizable reactor built before the 2020s, and that is true for the other nuclear projects as well. So even if these technologies prove to work, it would not be surprising to see the timeline for widespread deployment slip to the 2030s or the 2040s. The scientists studying climate change tell us it would be folly to wait that long to start tackling the emissions problem.

Two approaches to the issue — spending money on the technologies we have now, or investing in future breakthroughs — are sometimes portrayed as conflicting with one another. In reality, that is a false dichotomy. The smartest experts say we have to pursue both tracks at once, and much more aggressively than we have been doing.

An ambitious national climate policy, anchored by a stiff price on carbon dioxide emissions, would serve both goals at once. In the short run, it would hasten a trend of supplanting coal-burning power plants with natural gas plants, which emit less carbon dioxide. It would drive some investment into low-carbon technologies like wind and solar power that, while not efficient enough, are steadily improving.

And it would also raise the economic rewards for developing new technologies that could disrupt and displace the ones of today. These might be new-age nuclear reactors, vastly improved solar cells, or something entirely unforeseen.

In effect, our national policy now is to sit on our hands hoping for energy miracles, without doing much to call them forth.

Yep.

h/t Travis Franck

Zombies in Great Falls and the SRLI

The undead are rising from their graves to attack the living in Montana, and people are still using the Static Reserve Life Index.

http://youtu.be/c7pNAhENBV4

The SRLI calculates the expected lifetime of reserves based on constant usage rate, as life=reserves/production. For optimistic gas reserves and resources of about 2200 Tcf (double the USGS estimate), and consumption of 24 Tcf/year (gross production is a bit more than that), the SRLI is about 90 years – hence claims of 100 years of gas.

How much natural gas does the United States have and how long will it last?

EIA estimates that there are 2,203 trillion cubic feet (Tcf) of natural gas that is technically recoverable in the United States. At the rate of U.S. natural gas consumption in 2011 of about 24 Tcf per year, 2,203 Tcf of natural gas is enough to last about 92 years.

Notice the conflation of SRLI as indicator with a prediction of the actual resource trajectory. The problem is that constant usage is a stupid assumption. Whenever you see someone citing a long SRLI, you can be sure that a pitch to increase consumption is not far behind. Use gas to substitute for oil in transportation or coal in electricity generation!

Substitution is fine, but increasing use means that the actual dynamic trajectory of the resource will show greatly accelerated depletion. For logistic growth in exploitation of the resource remaining, and a 10-year depletion trajectory for fields, the future must hold something like the following:

That’s production below today’s levels in less than 50 years. Naturally, faster growth now means less production later. Even with a hypothetical further doubling of resources (4400 Tcf, SRLI = 180 years), production growth would exhaust resources in well under 100 years. My guess is that “peak gas” is already on the horizon within the lifetime of long-lived capital like power plants.
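The static vs. dynamic gap is easy to quantify. If consumption grows exponentially instead of staying constant, exhaustion time is only logarithmic in the SRLI. The 2%/year growth rate below is an assumption for illustration, not a forecast:

```python
import math

def static_reserve_life(reserves, production):
    """SRLI: years of supply at constant production."""
    return reserves / production

def exhaustion_time(reserves, production, growth_rate):
    """Years until cumulative production exhausts reserves when
    production grows exponentially at growth_rate."""
    return math.log(1.0 + growth_rate * reserves / production) / growth_rate

srli = static_reserve_life(2203.0, 24.0)       # ~92 years of gas
dynamic = exhaustion_time(2203.0, 24.0, 0.02)  # ~52 years at 2%/yr growth
```

A logistic production path spreads that out rather than ending in a cliff, but the basic point stands: growth converts “100 years of gas” into something much shorter.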

Limits to Growth actually devoted a whole section to the silliness of the SRLI, but that was widely misinterpreted as a prediction of resource exhaustion by the turn of the century. So, the SRLI lives on, feasting on the brains of the unwary.