EPA gets the bathtub

Eli Rabett has been posting the comment/response section of the EPA endangerment finding. For the most part, the comments are a quagmire of tinfoil-hat pseudoscience; I’m astonished that the EPA found real scientists who could stomach wading through and debunking it all – an important but thankless job.

Today’s installment tackles the atmospheric half-life of CO2:

A common analogy used for CO2 concentrations is water in a bathtub. If the drain and the spigot are both large and perfectly balanced, then the time that any individual water molecule spends in the bathtub is short. But if a cup of water is added to the bathtub, the change in volume in the bathtub will persist even when all the water molecules originally from that cup have flowed out the drain. This is not a perfect analogy: in the case of CO2, there are several linked bathtubs, and the increased pressure of water in one bathtub from an extra cup will actually lead to a small increase in flow through the drain, so eventually the cup of water will be spread throughout the bathtubs leading to a small increase in each, but the point remains that the “residence time” of a molecule of water will be very different from the “adjustment time” of the bathtub as a whole.

Having tested a lot of low-order carbon cycle models – including, I think, all possible linear variants up to 3rd order – I agree with the EPA: anyone who claims that the effective half-life or time constant of CO2 uptake is 10, 20, or even 50 years is bonkers.
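
To make the residence/adjustment distinction concrete, here’s a toy two-box sketch (illustrative round numbers, with a crude buffer factor standing in for ocean carbonate chemistry – not the EPA’s model, nor one of my calibrated ones). Tagged molecules from a 100 GtC pulse mix away on a roughly 10-year residence timescale, while the excess atmospheric stock settles at a persistently elevated level:

    // Toy two-box (atmosphere/ocean) carbon sketch; all numbers illustrative.
    // Tagged molecules exchange with the whole ocean carbon pool, so their
    // atmospheric residence time is short (~stock/gross flux = 10 yr here).
    // Buffering (the Revelle factor) shrinks the ocean's effective capacity
    // for *excess* carbon, so part of the pulse persists in the atmosphere.
    public class ResidenceVsAdjustment {
        public static void main(String[] args) {
            final double ATM = 800;       // GtC, atmospheric stock
            final double OCEAN = 38000;   // GtC, ocean dissolved inorganic carbon
            final double GROSS = 80;      // GtC/yr, gross air-sea exchange
            final double REVELLE = 10;    // buffer factor (illustrative)
            System.out.printf("residence time ~ %.0f yr%n", ATM / GROSS);
            double excess = 100, tagged = 100;        // 100 GtC pulse at t=0
            double oceanExcess = 0, oceanTagged = 0;
            double dt = 0.1;
            for (int step = 0; step <= 2000; step++) {  // 200 years
                if (step % 500 == 0)
                    System.out.printf("t=%5.1f yr  excess atm C=%5.1f GtC  tagged left=%5.1f GtC%n",
                                      step * dt, excess, tagged);
                double tagOut = GROSS * tagged / (ATM + excess);   // gross mixing of tagged molecules
                double tagIn  = GROSS * oceanTagged / OCEAN;
                double net    = GROSS * (excess / ATM - REVELLE * oceanExcess / OCEAN); // net uptake
                tagged += (tagIn - tagOut) * dt;  oceanTagged += (tagOut - tagIn) * dt;
                excess -= net * dt;               oceanExcess += net * dt;
            }
        }
    }

In this toy, the tagged carbon falls to a couple of GtC within decades, but roughly a sixth of the pulse remains aloft indefinitely; adding the slower deep-ocean and sediment bathtubs stretches that tail to centuries and millennia.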

States' role in climate policy

Jack Dirmann passed along an interesting paper arguing for a bigger role for states in setting federal climate policy.

This article explains why states and localities need to be full partners in a national climate change effort based on federal legislation or the existing Clean Air Act. A large share of reductions with the lowest cost and the greatest co-benefits (e.g., job creation, technology development, reduction of other pollutants) are in areas that a federal cap-and-trade program or other purely federal measures will not easily reach. These are also areas where the states have traditionally exercised their powers – including land use, building construction, transportation, and recycling. The economic recovery and expansion will require direct state and local management of climate and energy actions to reach full potential and efficiency.

This article also describes in detail a proposed state climate action planning process that would help make the states full partners. This state planning process – based on a proven template from actions taken by many states – provides an opportunity to achieve cheaper, faster, and greater emissions reductions than federal legislation or regulation alone would achieve. It would also realize macroeconomic benefits and non-economic co-benefits, and would mean that the national program is more economically and environmentally sustainable.


Climate Science, Climate Policy and Montana

Last night I gave a climate talk at the Museum of the Rockies here in Bozeman, organized by Cherilyn DeVries and sponsored by United Methodist. It was a lot of fun – we had a terrific discussion at the end, and the museum’s monster projector was addictive for running C-LEARN live. Thanks to everyone who helped to make it happen. My next challenge is to do this for young kids.

MT Climate Schematic

My slides are here as a PowerPoint show: Climate Science, Climate Policy & Montana (better because it includes some animated builds) or PDF: Climate Science, Climate Policy & Montana (PDF)

Some related resources:

Climate Interactive & the online C-LEARN model

Montana Climate Change Advisory Committee

Montana Climate Office

Montana emissions inventory & forecast visualization


Would you like fries with that?

Education is a mess, and well-motivated policy changes are making it worse.

I was just reading this and this, which got the juices flowing, so my wife and I brainstormed this picture:

Education CLD (click to enlarge)

Yep, it’s spaghetti, like a lot of causal brainstorming efforts. The underlying problem space is messy and hard to articulate quickly, but I think the essence is simple. Educational outcomes are substandard, creating pressure to improve. In at least some areas, outcomes slipped a lot because the response to that pressure was to erode learning goals rather than to improve (the blue loop through the green goal). One benefit of No Child Left Behind testing is to offset that loop, by making actual performance salient and restoring the pressure to improve. Other intuitive responses (red loops) also have some benefit: increasing school hours provides more time for learning; standardization yields economies of scale in materials and may improve the teaching of low-skilled teachers; core curriculum focus aligns learning with measured goals.
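
As an aside, the goal-erosion story is easy to formalize. Here’s a minimal sketch (illustrative parameters, not a calibrated model): performance adjusts toward the goal, but the goal also drifts toward recent performance, so without an external anchor the standard ratchets down to meet mediocre results.

    // Minimal eroding-goals sketch; parameters illustrative, not calibrated.
    // Performance closes the gap to the goal, but the goal drifts toward
    // actual performance. With anchor = 0 the standard erodes; anchor > 0
    // (an external benchmark, like testing) holds the goal up.
    public class ErodingGoals {
        public static void main(String[] args) {
            double performance = 60, goal = 100;  // arbitrary score units
            double improveRate = 0.05;            // fraction of gap closed per year
            double erosionRate = 0.10;            // goal drift toward performance, 1/yr
            double anchor = 0.0;                  // pull toward external standard of 100, 1/yr
            double dt = 0.25;
            for (int step = 0; step <= 160; step++) {  // 40 years
                if (step % 20 == 0)
                    System.out.printf("t=%4.1f  goal=%5.1f  performance=%5.1f%n",
                                      step * dt, goal, performance);
                double gap = goal - performance;
                performance += improveRate * gap * dt;
                goal += (erosionRate * (performance - goal) + anchor * (100 - goal)) * dt;
            }
        }
    }

With erosion twice as fast as improvement, goal and performance converge around 73: the goal falls much further than performance rises.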

The problem is that these measures have devastating side effects, especially in the long run. Measurement obsession eats up time for reflection and learning. Core curriculum focus cuts out art and exercise, so lower student engagement and poorer health diminish learning productivity. Low engagement means more sit-down-and-shut-up, which eats up teacher time and makes teaching unattractive. Increased hours burn out both students and teachers, and long hours and standardization further degrade the attractiveness of teaching, making it hard to attract quality teachers. Students aren’t mindless blank slates; they know when they’re being fed rubbish, and check out. When a bad situation persists, an anti-intellectual culture of resistance to education evolves.

The nest of reinforcing feedbacks within education meshes with one in broader society. Poor education diminishes future educational opportunity, and thus the money and knowledge available to provide future schooling. Economic distress drives crime, and prison budgets eat up resources that could otherwise go to schools. Dysfunction reinforces the perception that government is incompetent, leading to reduced willingness to fund schools, ensuring future dysfunction. This is augmented by the flight of the rich and smart to private schools.

I’m far from having all the answers here, but it seems that standard SD advice on the counter-intuitive behavior of social systems applies. First, any single policy will fail, because it gets defeated by other feedbacks in the system. Perhaps that’s why technology-led efforts haven’t lived up to expectations; high tech by itself doesn’t help if teachers have no time to reflect on and refine its use. Therefore intervention has to be multifaceted and targeted to activate key loops. Second, things get worse before they get better. Making progress requires more resources, or a redirection of resources away from things that produce the short-term measured benefits that people are watching.

I think there are reasons to be optimistic. All of the reinforcing feedback loops that currently act as vicious cycles can run the other way, if we can just get over the hump of the various delays and irreversibilities to start the process. There’s enormous slack in the system, in a variety of forms: time wasted on discipline and memorization, burned-out teachers who could be re-energized, and students with an unmet thirst for knowledge.

The key question is how to get started. I suspect that the conservative approach of privatization half-works: it successfully exploits reinforcing feedback to provide high quality for those who opt out of the public system. However, I don’t want to live in a two-class society, and there’s evidence that high inequality slows economic growth. Instead, my half-baked personal prescription (which we pursue as homeschooling parents) is to make schools more open, connecting students to real-world trades and research. Forget about standardized pathways through the curriculum, because children develop at different rates and have varied interests. Replace quantity of hours with quality, freeing teachers’ time for process improvement and guidance of self-directed learning. Suck it up, and spend the dough to hire better teachers. Recover some of that money, and avoid lengthy review, by using schools year-round. I’m not sure how realistic all of this is as long as schools function as day care, so maybe we need some reform of work and parental attitudes to go along.

[Update: There are of course many good efforts that can be emulated, by people who’ve thought about this more deeply than I. Pegasus describes some here. Two of note are the Waters Foundation and Creative Learning Exchange. Reorganizing education around systems is a great way to improve productivity through learner-directed learning, make learning exciting and relevant to the real world, and convey skills that are crucial for society to confront its biggest problems.]

The real Kerry-Lieberman APA stands up, with two big surprises

The official discussion draft of the Kerry-Lieberman American Power Act is out. My heart sank when I saw the page count – 987. I won’t be able to review this in any detail soon. Based on a quick look, I see two potentially huge items: the “hard price collar” has a soft ceiling, and transport fuels are in the market, despite claims to the contrary.

Hard is soft

First, the summary states that there’s a “hard price collar which binds carbon prices and creates a predictable system for carbon prices to rise at a fixed rate over inflation.” That’s not quite right. There is indeed a floor, set by an auction reserve price in Section 790. However, I can’t find a ceiling as such. Instead, Section 726 establishes a “Cost Containment Reserve” somewhat like the Waxman-Markey strategic reserve, minus the roach-motel moving average price (offsets check in, but they don’t check out). Reserve allowances are available at an escalating ceiling price ($25 + 5%/yr). There’s a much larger initial reserve (4 gigatons) and, I think, a more generous topping off (1.5% of allowances each year initially; 5% after 2030). However, there appears to be no mechanism to provide allowances beyond the set-aside, which means the economy-wide target is in fact binding. If demand eats up the reserve buffer, prices will have to rise above the ceiling in order to clear the market. So the market actually faces a hard target, and the reserve/ceiling mechanism merely provides a temporary respite from price spikes: the ceiling is soft whenever allowance demand at the ceiling price is sufficient to exhaust the buffer.

The mental model behind this design must be that estimated future emission prices are about right, so that one need only protect against short-term volatility. But if those estimates are systematically wrong – if the marginal cost of mitigation persistently exceeds the ceiling – the reserve provides no protection against price escalation.
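
To make the point concrete, here’s a back-of-envelope sketch of the reserve (my reading of the bill’s quantities; the flat annual allowance budget and the excess demand at the ceiling are hypothetical):

    // Cost Containment Reserve sketch; quantities hedged/illustrative.
    // If mitigation persistently costs more than the ceiling, demand for
    // reserve allowances outruns the top-off and the buffer drains.
    public class ReserveSketch {
        public static void main(String[] args) {
            double reserve = 4000;      // MtCO2e, initial reserve (4 Gt)
            double capMt = 4700;        // MtCO2e/yr allowance budget (illustrative, held flat)
            double excessDemand = 300;  // MtCO2e/yr demanded at the ceiling (hypothetical)
            double ceiling = 25;        // $/tCO2e in the first year
            for (int year = 2013; year <= 2050; year++) {
                double topOff = capMt * (year <= 2030 ? 0.015 : 0.05);
                reserve += topOff - excessDemand;
                if (reserve < 0) {
                    System.out.printf("%d: reserve exhausted; price must exceed the ~$%.0f ceiling to clear%n",
                                      year, ceiling);
                    break;
                }
                ceiling *= 1.05;        // ceiling escalates 5%/yr over inflation
            }
        }
    }

With these guesses the buffer lasts under two decades; the numbers move the timing around, but not the qualitative outcome.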

Transport is in the market

The short transport summary asserts:

Since a robust domestic refining industry is critical to our national security, we needed to make a change. We took fuel providers out of the market. Instead of every refinery participating in the market for allowances, we made sure the price of carbon was constant across the industry. That means all fuel providers see the same price of carbon in a given quarter. The system is simple. First, the EPA and EIA Administrators look to historic product sales to estimate how many allowances will be necessary to cover emissions for the quarter, and they set that number of allowances aside at the market price. Then refineries and fuel providers sell fuel, competing as they have always done to offer the best product at the best price. Finally, at the end of the quarter, the refiners and fuel providers purchase the allowances that have been set aside for them. If there are too many or too few allowances set aside, that difference is made up by adjusting the projection for the following quarter. These allowances cannot be banked or traded, and can only be used for compliance purposes.

In fact, transport is in the market, just via a different mechanism. Instead of buying allowances in real time, with banking and borrowing, refiners are price takers who get allowances via a set-aside mechanism. Since nothing about the mechanism creates allowances, the market still has to clear; the set-aside simply introduces a one-quarter delay into the market-clearing process. I don’t see how this additional complication is any better for refiners, and introducing a delay into the negative feedback loops that clear the market could be destabilizing. This is so enticing, I’ll have to simulate it.
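
Pending a real simulation, here’s a caricature of the set-aside rule (hypothetical quantities): the regulator projects from last quarter’s sales and adds back the previous miss, which, after a step change in fuel demand, can lock in a persistent oscillation.

    // Quarterly fuel set-aside caricature; quantities hypothetical.
    // Projection = last observed sales + correction for last quarter's miss.
    // A one-time 5% demand drop makes this naive rule oscillate indefinitely.
    public class SetAsideSketch {
        public static void main(String[] args) {
            double demand = 100;      // allowances actually needed per quarter
            double setAside = 100;    // regulator's projection
            for (int q = 1; q <= 12; q++) {
                if (q == 4) demand = 95;                // demand steps down
                double error = demand - setAside;       // + means too few set aside
                System.out.printf("q%-2d  set-aside=%6.2f  demand=%6.2f  error=%+6.2f%n",
                                  q, setAside, demand, error);
                setAside = demand + error;              // next quarter's projection
            }
        }
    }

After the step, the set-aside bounces between 90 and 100 forever; a fractional correction would damp it, at the cost of slower tracking.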

My analysis is a bit hasty here, so I could be wrong, but if I’m right these two issues have huge implications for the performance of the bill.

Oily balls

The device designed to cut the oil flow after BP’s oil rig exploded was faulty, the head of a congressional committee said on Wednesday … the rig’s underwater blowout preventer had a leak in its hydraulic system and the device was not powerful enough to cut through joints to seal the drill pipe. …

Markey joked about BP’s proposal to stuff the blowout preventer with golf balls, old tires “and other junk” to block the spewing oil.

“When we heard the best minds were on the case, we expected MIT, not the PGA,” said Markey, referring to the professional golfing group. “We already have one hole in the ground and now their solution is to shoot a hole in one?”

Via Reuters

Kerry-Lieberman "American Power Act" leaked

I think it’s a second-best policy, but perhaps the most we can hope for, and better than nothing.

Climate Progress has a first analysis and links to the leaked draft legislation outline and short summary of the Kerry-Lieberman American Power Act. [Update: there’s now a nice summary table.] For me, the bottom line is: what are the emissions and price trajectories, what emissions are covered, and where does the money go?

The target caps emissions at 95.25% of 2005 levels by 2013, 83% by 2020, 58% by 2030, and 17% by 2050, with the six Kyoto gases covered. Entities over 25 MTCO2eq/year are covered. Sector coverage is unclear; the summary refers to “the three major emitting sectors, power plants, heavy industry, and transportation,” which is actually a rather incomplete list. Presumably the implication is that a lot of residential, commercial, and manufacturing emissions get picked up upstream, but the mechanics aren’t clear.

The target looks like this [Update: ignoring minor gases]:

Kerry Lieberman Target

This is not much different from ACES or CLEAR, and like them it’s backwards. Emissions reductions are back-loaded. The rate of reduction (green dots) from 2030 to 2050, 6.1%/year, is hardly plausible without massive retrofit or abandonment of existing capital (or negative economic growth). Given that the easiest reductions are likely to be the first, not the last, more aggressive action should be happening up front. (Actually there are a multitude of reasons for front-loading reductions as much as reasonable price stability allows).
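
That 6.1%/year is just the continuous rate implied by the 2030 and 2050 caps (58% and 17% of 2005):

    r = ln(0.17 / 0.58) / (2050 − 2030) = −1.227 / 20 ≈ −6.1%/yr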

There’s also a price collar:

Kerry Lieberman Price

These mechanisms provide a predictable price corridor, with the expected prices from the EPA’s Waxman-Markey analysis (dashed green) running right up the middle. The silly strategic reserve is gone. Still, I think this arrangement is backwards, in a different sense from the target. The right way to manage the uncertainty in the long-run emissions trajectory needed to stabilize climate, without triggering short-run economic dislocation, is a mechanism that yields stable prices over the short to medium term while providing for adaptive adjustment of the long-term price trajectory to achieve emissions stability. A cap and trade with no safety valve is essentially the opposite: short-run volatility with long-run rigidity, and therefore a poor choice. The price collar bounds short-term volatility to 2:1 (early) to 4:1 (late) price movements, but it does nothing to adapt the emissions target or the collar itself if emissions reductions turn out to be unexpectedly hard, easy, important, etc. It’s likely that the target and collar will be regarded as property rights and hard to change later in the game.

I think we should expect the unexpected. My personal guess is that the EPA allowance price estimates are way too low. In that case, we’ll find ourselves stuck on the price ceiling, with targets unmet. An 83% reduction in emissions at an allowance price corresponding to under $1/gallon for fuel just strikes me as unlikely, unless we’re very lucky technologically. My preference would be an adaptive carbon price, starting at a substantially higher level – high enough to deter investment in new carbon-intensive capital, but not so high initially as to strand those assets – maybe $50/tonCO2. By default, the price should rise at some modest rate, with an explicit adjustment process at longish intervals so that new information can be incorporated. Essentially the goal is to implement feedback control that stabilizes long-term climate without short-term volatility (as here or here and here).
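
For concreteness, here’s the kind of adaptive rule I have in mind, as a sketch (every parameter is illustrative, including the toy demand response): a default drift, plus a periodic correction proportional to the cumulative gap between actual and target emissions – essentially integral feedback control.

    // Adaptive carbon price sketch; all parameters and the demand curve are
    // illustrative. Default drift plus a 5-yearly integral correction keyed
    // to the cumulative emissions gap.
    public class AdaptivePrice {
        public static void main(String[] args) {
            double price = 50;       // $/tCO2 starting point
            double drift = 0.04;     // default escalation, fraction/yr
            double gain = 0.02;      // fractional price bump per GtCO2 of cumulative gap
            double cumActual = 0, cumTarget = 0;
            for (int year = 2013; year <= 2050; year++) {
                double target = 6.0 * Math.exp(-0.03 * (year - 2013)); // GtCO2/yr target path
                double actual = 6.0 * Math.exp(-0.002 * price);        // toy demand response
                cumActual += actual;  cumTarget += target;
                price *= 1 + drift;
                if ((year - 2013) % 5 == 4) {                          // adjust every 5 years
                    price *= 1 + gain * (cumActual - cumTarget);
                    System.out.printf("%d: price -> $%.0f/tCO2 (cumulative gap %.1f GtCO2)%n",
                                      year, price, cumActual - cumTarget);
                }
            }
        }
    }

If emissions run above the cumulative target, the controller escalates the price faster than the default drift; if reductions turn out to be cheap, it backs off.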

Some other gut reactions:

Good:

  • Clean energy R&D funding.
  • Allowance distribution by auction.
  • Border adjustments (I can only find these in the summary, not the draft outline).

Bad:

  • More subsidies, guarantees and other support for nuclear power plants. Why not let the first round play out first? Is this really a good use of resources or a level playing field?
  • Subsidized CCS deployment. There are good reasons for subsidizing R&D, but deployment should be primarily motivated by the economic incentive of the emissions price.
  • Other deployment incentives. Let the price do the work!
  • Rebates through utilities. There’s good evidence that total bills are more salient to consumers than marginal costs, so this at least partially defeats the price signal. At least it’s temporary (though transient measures have a way of becoming entitlements).

Indifferent:

  • Preemption of state cap & trade schemes. Sorry, RGGI, AB32, and WCI. This probably has to happen.
  • Green jobs claims. In the long run, employment is controlled by a bunch of negative feedback loops, so it’s not likely to change a lot. The current effects of the housing bust/financial crisis and the eventual effects of big twin deficits are likely to overwhelm any climate policy signal. The real issue is how to create wealth without borrowing it from the future (e.g., by filling up the atmospheric bathtub with GHGs) or sustaining vulnerability to oil shocks, and on that score this is a good move.
  • State preemption of offshore oil leasing within 75 miles of their shorelines. Is this anything more than an illusion of protection?
  • Banking, borrowing and offsets allowed.

Unclear:

  • Performance standards for coal plants.
  • Transportation efficiency measures.
  • Industry rebates to prevent leakage (does this defeat the price signal?).

Dynamics on the iPhone

Scott Johnson asks about C-LITE, an ultra-simple version of C-ROADS, built in Processing – a cool visually-oriented language.

C-LITE

(Click the image to try it).

With this experiment, I was striving for a couple things:

  • A reduced-form version of the climate model, with “good enough” accuracy and interactive speed, as in Vensim’s Synthesim mode (no client-server latency).
  • Tufte-like simplicity of the UI (no grids or axis labels to waste electrons). Moving the mouse around changes the emissions trajectory and sweeps an indicator line that gives the scale of inputs and outputs.
  • Pervasive representation of uncertainty (indicated by shading on temperature as a start).

This is just a prototype, but it’s already more fun than models with traditional interfaces.

I wanted to run it on the iPhone, but was stymied by problems translating the model to Processing.js (javascript) and had to set it aside. Recently Travis Franck stepped in and did a manual translation, proving the concept, so I took another look at the problem. In the meantime, a neat export tool has made it easy. It turns out that my code problem was as simple as replacing “float []” with “float[]”, so now I have a javascript version here. It runs well in Firefox, but there are a few glitches on Safari and iPhones – text doesn’t render properly, and I don’t quite understand the event model. Still, it’s cool that modest dynamic models can run realtime on the iPhone. [Update: forgot to mention that I used Michael Schieben’s touchmove function modification to processing.js.]

The learning curve for all of this is remarkably short. If you’re familiar with Java, it’s very easy to pick up Processing (it’s probably easy coming from other languages as well). I spent just a few days fooling around before I had the hang of building this app. The core model is just standard Euler ODE code:

initialize parameters
initialize levels
do while time < final time
compute rates & auxiliaries
compute levels

The only hassle is that equations have to be ordered manually. I built a Vensim prototype of the model halfway through, in order to stay clear on the structure as I flew seat-of-the-pants.
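
Concretely, the skeleton above is only a few lines of Processing. Here’s a generic first-order stock (not the actual C-LITE equations):

    // Generic Euler skeleton as a Processing sketch (not the real C-LITE model).
    // Note the manual ordering: rates and auxiliaries are computed from the
    // current levels *before* the levels are updated.
    float dt = 0.25, finalTime = 100;
    float inflow = 10, tau = 20;           // initialize parameters
    float stock = 0;                       // initialize levels
    for (float time = 0; time < finalTime; time += dt) {
        float outflow = stock / tau;       // compute rates & auxiliaries
        stock += (inflow - outflow) * dt;  // compute levels
    }
    println("final stock = " + stock);     // approaches inflow*tau = 200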

With the latest Processing.js tools, it’s very easy to port to javascript, which runs on nearly everything. Getting it running on the iPhone (almost) was just a matter of discovering viewport meta tags and a line of CSS to set zero margins. The total codebase for my most complicated version so far is only 500 lines. I think there’s a lot of potential for sharing model insights through simple, appealing browser tools and handheld platforms.

As an aside, I always wondered why javascript didn’t seem to have much to do with Java. The answer is in this funny programming timeline. It’s basically false advertising.

Complexity is not the enemy

Following the NYTimes’ misguided attack on complex CLDs, a few of us wrote a letter to the editors. Since they didn’t publish it, here it is:

Dear Editors, Systemic Spaghetti Slide Snookers Scribe. Powerpoint Pleases Policy Players

“We Have Met the Enemy and He Is PowerPoint” clearly struck a deep vein of resentment against mindless presentations. However, the lead “spaghetti” image, while undoubtedly too much to absorb quickly, is in fact packed with meaning for those who understand its visual lingo. If we can’t digest a mere slide depicting complexity, how can we successfully confront the underlying problem?

The diagram was not created in Powerpoint. It is a “causal loop diagram,” one of several ways to describe relationships that influence the evolution of messy problems like the war in the Middle East. It’s a perfect illustration of General McMaster’s observation that “Some problems in the world are not bullet-izable.” Diagrams like this may not be intended for public consumption; instead they serve as a map that facilitates communication within a group. Creating such diagrams allows groups to capture and improve their understanding of very complex systems by sharing their mental models and making them open to challenge and modification. Such diagrams, and the formal computer models that often support them, help groups to develop a more robust understanding of the dynamics of a problem and to develop effective and elegant solutions to vexing challenges.

It’s ironic that so many call for a return to pure verbal communication as an antidote for Powerpoint. We might get a few great speeches from that approach, but words are ill-suited to describe some data and systems. More likely, a return to unaided words would bring us a forgettable barrage of five-pagers filled with laundry-list thinking and unidirectional causality.

The excess supply of bad presentations does not exist in a vacuum. If we want better presentations, then we should determine why organizational pressures demand meaningless propaganda, rather than blaming our tools.

Tom Fiddaman of Ventana Systems, Inc. & Dave Packer, Kristina Wile, and Rebecca Niles Peretz of The Systems Thinking Collaborative

Other responses of note:

We have met an ally and he is Storytelling (Chris Soderquist)

Why We Should be Suspect of Bullet Points and Laundry Lists (Linda Booth Sweeney)

Java Vensim helper

MIT’s Climate Collaboratorium has posted Java code that it used to wrap C-LEARN as a web service using the multicontext .dll. If you’re doing something similar, you may find the code useful, particularly the VensimHelper class. The liberal MIT license applies. However, be aware that you’ll need a license for the Vensim multicontext .dll to go with it.
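
If you just want the flavor of what such a wrapper does, here’s a hedged sketch of driving the standard single-context Vensim DLL from Java via JNA. This is not the Collaboratorium’s code; the multicontext DLL’s calls additionally carry a context handle, and the DLL name, model path, and variable name below are hypothetical:

    // Hedged sketch: Vensim DLL from Java via JNA. Not the Collaboratorium's
    // VensimHelper; single-context API shown, names/paths hypothetical.
    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public class VensimSketch {
        public interface VensimDll extends Library {
            int vensim_command(String command);
            int vensim_get_val(String varName, float[] result);
        }
        public static void main(String[] args) {
            VensimDll vensim = Native.load("vendll32", VensimDll.class); // DLL name may differ
            vensim.vensim_command("SPECIAL>LOADMODEL|c-learn.vpm");      // hypothetical model path
            vensim.vensim_command("SIMULATE>RUNNAME|run1");
            vensim.vensim_command("MENU>RUN|O");                         // simulate, overwrite
            float[] val = new float[1];
            vensim.vensim_get_val("Temperature Change", val);            // hypothetical variable
            System.out.println("final value = " + val[0]);
        }
    }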