C-ROADS & climate leadership workshop

In Boston, Oct. 18-20, Climate Interactive and Seed Systems will be running a workshop on C-ROADS and climate leadership.

Attend to develop your capacities in:

  • Systems thinking: Causal loop and stock-flow diagramming.
  • Leadership and learning: Vision, reflective conversation, consensus building.
  • Computer simulation: Using and leading policy-testing with the C-ROADS/C-Learn simulation.
  • Policy savvy: Attendees will play the “World Climate” exercise.
  • Climate, energy, and sustainability strategy: Reflections and insights from international experts.
  • Business success stories: What’s working in the new low-carbon economy and the implications for you.
  • Networking: Build your network of people who share your aspirations for climate progress.

Save the date.

EIA projections – peak oil or snake oil?

Econbrowser has a nice post from Steven Kopits, documenting big changes in EIA oil forecasts. This graphic summarizes what’s happened:

[Figure: Kopits’ summary of successive EIA oil supply forecasts]
Click through for the original article.

As recently as 2007, the EIA saw a rosy future of oil supplies increasing with demand. It predicted oil consumption would rise by 15 mbpd to 2020, an ample amount to cover most eventualities. By 2030, the oil supply would reach nearly 118 mbpd, or 23 mbpd more than in 2006. But over time, this optimism has faded, with each succeeding year forecast lower than the year before. For 2030, the oil supply forecast has declined by 14 mbpd in only the last three years. This drop is as much as the combined output of Saudi Arabia and China.

In its forecast, the EIA, normally the cheerleader for production growth, has become amongst the most pessimistic forecasters around. For example, its forecasts to 2020 are 2-3 mbpd lower than that of traditionally dour Total, the French oil major. And they are below our own forecasts at Douglas-Westwood through 2020. As we are normally considered to be in the peak oil camp, the EIA’s forecast is nothing short of remarkable, and grim.

Is it right? In the last decade or so, the EIA’s forecast has inevitably proved too rosy by a margin. While SEC-approved prospectuses still routinely cite the EIA, those who deal with oil forecasts on a daily basis have come to discount the EIA as simply unreliable and inappropriate as a basis for investments or decision-making. But the EIA appears to have drawn a line in the sand with its new IEO and placed its fortunes firmly with the peak oil crowd. At least to 2020.

Since production is still rising, I think you’d have to call this “inflection point oil,” but as a commenter points out, it does imply peak conventional oil:

It’s also worth note that most of the liquids production increase from now to 2020 is projected to be unconventional in the IEO. Most of this is biofuels and oil sands. They REALLY ARE projecting flat oil production.

Since I’d looked at earlier AEO projections in the past, I wondered what early IEO projections looked like. Unfortunately I don’t have time to replicate the chart above and overlay the earlier projections, but here’s the 1995 projection:

[Figure: world oil consumption projection, IEO 1995]

The 1995 projections put 2010 oil consumption at 87 to 95 million barrels per day. That’s a bit high, but not terribly inconsistent with reality and the new predictions (especially if the financial bubble hadn’t burst). Consumption growth is 1.5%/year.

And here’s 2002:

[Figure: world oil consumption projection, IEO 2002]

In the 2002 projection, consumption is at 96 million barrels in 2010 and 119 million barrels in 2020 (waaay above reality and the 2007-2010 projections), a 2.2%/year growth rate.
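The implied growth rates are easy to check from the quoted endpoints. A minimal sketch, using only the mbpd figures cited above:

```python
# Implied compound annual growth rate from the IEO projections cited above.

def cagr(start, end, years):
    """Compound annual growth rate between two levels."""
    return (end / start) ** (1.0 / years) - 1.0

# IEO 2002: 96 mbpd in 2010 -> 119 mbpd in 2020
growth_2002 = cagr(96.0, 119.0, 10)
print(f"IEO 2002 implied growth, 2010-2020: {growth_2002:.1%}")  # ~2.2%/yr
```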

I haven’t looked at all the interim versions, but somewhere along the way a lot of optimism crept in (and recently, crept out). In 2002 the IEO oil trajectory was generated by a model called WEPS, so I downloaded WEPS2002 to take a look. Unfortunately, it’s a typical open-loop spreadsheet horror show. My enthusiasm for a detailed audit is low, but it looks like oil demand is purely a function of GDP extrapolation and GDP-energy relationships, with no hint of supply-side dynamics (not even prices, unless they emerge from other models in a sneakernet portfolio approach). There’s no evidence of resources, not even synchronized drilling. No wonder users came to “discount the EIA as simply unreliable and inappropriate as a basis for investments or decision-making.”

Newer projections come from a new version, WEPS+. Hopefully it’s more internally consistent than the 2002 spreadsheet; it does capture stock/flow dynamics and even includes resources, so the EIA appears to be getting better. But there’s still a fundamental problem with the paradigm: too much detail. There just isn’t any point in producing projections for dozens of countries, sectors, and commodities two decades out when uncertainty about basic dynamics renders the detail meaningless. It would be far better to work with simple models capable of exploring the implications of structural uncertainty, in particular relaxing assumptions of equilibrium and idealized behavior.

Update: Michael Levi at the CFR blog points out that much of the difference in recent forecasts can be attributed to changes in GDP projections. Perhaps so. But I think this reinforces my point about detail, uncertainty, and transparency. If the model structure is basically consumption = f(GDP, price, elasticity) and those inputs have high variance, what’s the point of all that detail? It seems to me that the detail merely obscures the fundamentals of what’s going on, which is why there’s no simple discussion of reasons for the change in forecast.
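To make the variance point concrete, here’s a toy Monte Carlo version of consumption = f(GDP, price, elasticity). The elasticities and the GDP spread are illustrative assumptions, not the EIA’s actual inputs; the point is only that modest input uncertainty dominates the output long before country and sector detail matters:

```python
import random

random.seed(0)

# Toy consumption = f(GDP, price, elasticity). All parameter values are
# illustrative assumptions, not taken from WEPS or WEPS+.
income_elasticity = 1.0
price_elasticity = 0.2

def consumption(gdp_index, price_index):
    return gdp_index ** income_elasticity * price_index ** -price_elasticity

# Suppose the 2030 GDP index (vs. today = 1.0) is 1.5 with a 10% sigma.
draws = [consumption(random.gauss(1.5, 0.15), 1.0) for _ in range(10000)]
mean = sum(draws) / len(draws)
sd = (sum((x - mean) ** 2 for x in draws) / len(draws)) ** 0.5
print(f"mean {mean:.2f}, sd {sd:.2f} ({sd/mean:.0%} of mean)")
```

A ~10% spread in the forecast from GDP uncertainty alone swamps anything the sectoral detail could resolve.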

Greenwash labeling

I like green labeling, but I’m not convinced that, by itself, it’s a theoretically viable way to get the economy to a good environmental endpoint. In practice, it’s probably even worse. Consider Energy Star. It’s supposed to be “helping us all save money and protect the environment through energy efficient products and practices.” The reality is that it gives low-quality information a veneer of authenticity, misleading consumers. I have no doubt that it has some benefits, especially through technology forcing, but it’s soooo much less than it could be.

The fundamental signal Energy Star sends is flawed. Because it categorizes appliances by size and type, a hog gets a star as long as it’s also big and of less-efficient design (like a side-by-side refrigerator/freezer). Here’s the size-energy relationship of the federal energy performance standard (which Energy Star fridges must exceed by 20%):

[Figure: federal refrigerator energy performance standard vs. size]

Notice that the standard for a 20 cubic foot fridge is anywhere from 470 to 660 kWh/year.
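Here’s a sketch of why a size-indexed standard can pass an energy hog. The linear coefficients below are hypothetical, chosen only so the 20 cubic foot allowance lands at the low end of the 470–660 kWh/yr range above; the real standard uses adjusted volume and varies by product class:

```python
# Hypothetical size-indexed standard: bigger fridges get bigger allowances.

def federal_allowance(cubic_feet, base=230.0, per_cf=12.0):
    """Hypothetical maximum annual kWh for a fridge of a given size."""
    return base + per_cf * cubic_feet

def energy_star_limit(cubic_feet):
    """Energy Star: must beat the federal standard by 20%."""
    return 0.8 * federal_allowance(cubic_feet)

small, big = 18.0, 28.0
print(energy_star_limit(small), energy_star_limit(big))
```

Under any standard of this shape, a big side-by-side can earn a star while using more absolute energy than a small top-freezer that misses its own, tighter limit.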

When rebates go bad

There’s a long-standing argument over the extent to which rebound effects eat up the gains of energy-conserving technologies, and whether energy conservation programs are efficient. I don’t generally side with the hardline economists who argue that conservation programs fail a cost benefit test, because I think there really are some $20 bills scattered about, waiting to be harvested by an intelligent mix of information and incentives. At the same time, some rebate and credit programs look pretty fishy to me.

On the plus side, I just bought a new refrigerator, using Montana’s $100 stimulus credit. There’s no rebound, because I have to hand over the old one for recycling. There is some rebound potential in general, because I could have used the $100 to upgrade to a larger model. Energy Star segments the market, so a big side-by-side fridge can pass while consuming more energy than a little top-freezer. That’s just stupid. Fortunately, most people have space constraints, so the short run price elasticity of fridge size is low.

On the minus side, consider tax credits for hybrid vehicles. For a super-efficient Prius or Insight, I can sort of see the point. But a $2600 credit for a Toyota Highlander getting 26mpg? What a joke! Mercifully that foolishness has been phased out. But there’s plenty more where that came from.

Consider this Bad Boy:

[Image: the agricultural UTV rebate offer]

The Zero-Emission Agricultural Utility Terrain Vehicle (Agricultural UTV) Rebate Program will credit $1950 in the hope of fostering greener farms. But this firm knows who it’s really marketing to:

[Image: the UTV marketed for turkey hunting]

Is there really good control over the use of the $, or is public funding just mechanizing outdoor activities where people ought to use the original low-emissions vehicle, their feet? When will I get a rebate for my horse?

Independence of models and errors

Roger Pielke’s blog has an interesting guest post by Ryan Meyer, reporting on a paper that questions the meaning of claims about the robustness of conclusions from multiple models. From the abstract:

Climate modelers often use agreement among multiple general circulation models (GCMs) as a source of confidence in the accuracy of model projections. However, the significance of model agreement depends on how independent the models are from one another. The climate science literature does not address this. GCMs are independent of, and interdependent on one another, in different ways and degrees. Addressing the issue of model independence is crucial in explaining why agreement between models should boost confidence that their results have basis in reality.

Later in the paper, they outline the philosophy as follows:

In a rough survey of the contents of six leading climate journals since 1990, we found 118 articles in which the authors relied on the concept of agreement between models to inspire confidence in their results. The implied logic seems intuitive: if multiple models agree on a projection, the result is more likely to be correct than if the result comes from only one model, or if many models disagree. … this logic only holds if the models under consideration are independent from one another. … using multiple models to analyze the same system is a ‘‘robustness’’ strategy. Every model has its own assumptions and simplifications that make it literally false in the sense that the modeler knows that his or her mathematics do not describe the world with strict accuracy. When multiple independent models agree, however, their shared conclusion is more likely to be true.

I think they’re barking up the right tree, but one important clarification is in order. We don’t actually care about the independence of models per se. In fact, if we had an ensemble of perfect models, they’d necessarily be identical. What we really want is for the models to be right. To the extent that we can’t be right, we’d at least like to have independent systematic errors, so that (a) there’s some chance that mistakes average out and (b) there’s an opportunity to diagnose the differences.

For example, consider three models of gravity, of the form F=G*m1*m2/r^b. We’d prefer an ensemble of models with b = {1.9,2.0,2.1} to one with b = {1,2,3}, even though some metrics of independence (such as the state space divergence cited in the paper) would indicate that the first ensemble is less independent than the second. This means that there’s a tradeoff: if b is a hidden parameter, it’s harder to discover problems with the narrow ensemble, but harder to get good answers out of the dispersed ensemble, because its members are more wrong.
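The tradeoff is easy to see numerically. A minimal sketch of the two ensembles of F = G*m1*m2/r^b, evaluated at small and large r (masses and G folded into one constant for simplicity):

```python
# Two ensembles of the gravity law F = G*m1*m2 / r**b from the text.
# Near r = 1 the exponent hardly matters, so both ensembles agree;
# only large perturbations in r expose the dispersed ensemble's spread.

def force(r, b, G_m1_m2=1.0):
    return G_m1_m2 / r ** b

narrow = [1.9, 2.0, 2.1]     # nearly right, hard to tell apart
dispersed = [1.0, 2.0, 3.0]  # more "independent", but mostly more wrong

def spread(ensemble, r):
    values = [force(r, b) for b in ensemble]
    return max(values) - min(values)

for r in (1.1, 10.0):
    print(f"r={r}: narrow spread {spread(narrow, r):.4f}, "
          f"dispersed spread {spread(dispersed, r):.4f}")
```

At r near 1 both ensembles look equally good; at r = 10 the dispersed ensemble disagrees with itself by two orders of magnitude more than the narrow one, which is exactly why extreme conditions tests are diagnostic.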

For climate models, ensembles provide some opportunity to discover systematic errors from numerical schemes, parameterization of poorly-understood sub-grid scale phenomena and program bugs, to the extent that models rely on different codes and approaches. As in my gravity example, differences would be revealed more readily by large perturbations, but I’ve never seen extreme conditions tests on GCMs (although I understand that they at least share a lot with models used to simulate other planets). I’d like to see more of that, plus an inventory of major subsystems of GCMs, and the extent to which they use different codes.

While GCMs are essentially the only source of regional predictions, which are a focus of the paper, it’s important to realize that GCMs are not the only evidence for the notion that climate sensitivity is nontrivial. For that, there are also simple energy balance models and paleoclimate data. That means that there are at least three lines of evidence, much more independent than GCM ensembles, backing up the idea that greenhouse gases matter.

It’s interesting that this critique comes up with reference to GCMs, because it’s actually not GCMs we should worry most about. For climate models, there are vague worries about systematic errors in cloud parameterization and other phenomena, but there’s no strong a priori reason, other than Murphy’s Law, to think that they are a problem. Economic models in the climate policy space, on the other hand, nearly all embody notions of economic equilibrium and foresight which we can be pretty certain are wrong, perhaps spectacularly so. That’s what we should be worrying about.

Green labeling is just a waypoint

Alan Atkisson wonders, “Can a Glass of Orange Juice in Sweden be ‘Climate Smart’?” He concludes that maybe consumer items like this could be labeled “relatively less climate-stupid.” I agree.

For green labeling to actually work, there must be a “green information” system parallel to the money economy, and people must pay attention to it. That’s a booming business right now.

[Image: US $20 bill]

Even if we optimistically assume that all end users have the insight and altruism needed to make the correct environment/money tradeoff, labeling creates tremendous evolutionary pressure on the production system to evade its intent by using cheaper not-so-green alternatives in hidden upstream locations. To paraphrase Groucho, greenness is the key to business success – if you can fake it, you’ve got it made. The evasion need not be so cynical; it simply requires incomplete information, for example sourcing products from places where measurement systems are incomplete. I rather doubt that we’ll ever have life cycle analysis for every product performed with the same stringency now enforced by money auditing systems.

The optimistic assumptions above are probably misplaced. Altruism is great, but I hate to rely on it, as it’s not clear to me that it’s an evolutionarily stable strategy. But insight is probably the real constraint. Life cycle analysis is good stuff, but even if it were practical to pass many attributes through the supply chain, with firm-level attribution, the result is complex information about tradeoffs that’s better suited for engineers than for consumers. Add to that the challenges people already face, like making good decisions about saving for retirement and educating children, and I think it’s hard to do much more than muddle minds.

Just as marketers associate cars with love, green labels foster the paradoxical conclusion that some consumption benefits the environment. That may be true for a few goods, but for the most part, it’s not. We should be using green information to examine our broad patterns of consumption, more than to choose what to put in the shopping cart. That might mean non-consumptive tradeoffs, like having more leisure time and less stuff.

Green labeling is great in many cases today, where prices and other incentives are blatantly misaligned with public goods, but ultimately fixing the incentives will get us a lot farther than labeling. That means pricing resources we value upstream, so that value percolates through supply chains as a price signal. In my ideal world, the price tag itself would be a green label.

Other bathtubs – capital

China is rapidly eliminating old coal generating capacity, according to Technology Review.

[Image: a draining bathtub]

Coal still meets 70 percent of China’s energy needs, but the country claims to have shut down 60 gigawatts’ worth of inefficient coal-fired plants since 2005. Among them is the one shown above, which was demolished in Henan province last year. China is also poised to take the lead in deploying carbon capture and storage (CCS) technology on a large scale. The gasifiers that China uses to turn coal into chemicals and fuel emit a pure stream of carbon dioxide that is cheap to capture, providing “an excellent opportunity to move CCS forward globally,” says Sarah Forbes of the World Resources Institute in Washington, DC.

That’s laudable. However, the inflow of new coal capacity must be even greater. Here’s the latest on China’s coal output:

[Figure: China coal output]

China Statistical Yearbook 2009 & 2009 main statistical data update

That’s just a hair short of 3 billion tons in 2009, with 8%/yr growth from ’07 to ’09, in spite of the recession. On a per capita basis, US output and consumption are still higher, but at those staggering growth rates, it won’t take China long to catch up.
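A quick rule-of-70 check on what 8%/yr means:

```python
import math

# Doubling time at the ~8%/yr coal output growth cited above.
growth = 0.08
doubling = math.log(2) / math.log(1 + growth)
print(f"{doubling:.1f} years")  # ~9 years
```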

A simple model of capital turnover involves two parallel bathtubs, a “coflow” in system dynamics lingo:

[Figure: capital turnover coflow structure]

Every time you build some capital, you also commit to the energy needed to run it (unless you don’t run it, in which case why build it?). If you get fancy, you can consider 3rd order vintaging and retrofits, as here:

[Figure: third-order capital turnover with retrofits]

To get fancier still, see the structure in John Sterman’s thesis, which provides for limited retrofit potential (that Gremlin just isn’t going to be a Prius, no matter what you do to the carburetor).

The basic challenge is that, while it helps to retire old dirty capital quickly (increasing the outflow from the energy requirements bathtub), energy requirements will go up as long as the inflow of new requirements is larger, which is likely when capital itself is growing and the energy intensity of new capital is well above zero. In addition, when capital is growing rapidly, there just isn’t much old stuff around (proportionally) to throw away, because the age structure of capital will be biased toward new vintages.
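The dynamic is easy to demonstrate with a minimal first-order coflow, in the spirit of the bathtub diagram above. The numbers are purely illustrative, not calibrated to China:

```python
# Minimal capital / committed-energy coflow, integrated by Euler steps.
# New capital is cleaner than the existing stock, but the stock grows
# faster than old capital retires, so committed energy still rises.

capital = 100.0        # capacity units
energy_req = 100.0     # energy/yr committed to the existing stock
lifetime = 30.0        # average capital lifetime, years
growth = 0.08          # new construction as a fraction of stock per year
new_intensity = 0.7    # energy per unit of *new* capital (30% cleaner)

dt = 1.0
for year in range(20):
    avg_intensity = energy_req / capital
    building = growth * capital
    retiring = capital / lifetime
    # retiring capital takes its average energy requirement with it
    energy_req += dt * (new_intensity * building - avg_intensity * retiring)
    capital += dt * (building - retiring)

print(round(capital), round(energy_req))
```

After 20 years the average intensity has fallen toward the new-build level, yet total committed energy is nearly double the starting value, because the inflow of new requirements outruns the retirement outflow.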

Hat tip: Travis Franck