Bigfoot

There were three surprises when I recently ordered an Apple MacBook Pro. The first was how good the industrial design is compared to any PC laptop I’ve had. The second was getting a FedEx tracking number – straight from Shanghai. The third was how big the carbon footprint of this svelte machine is.

Here it is, perched on a massive granite stair that took prybars, Egyptian pyramid-building techniques, and considerable sweat to place (not to mention the negative contribution to my kids’ vocabulary). The two bigger blocks: about 370kg (over 800 pounds). The Mac’s lifecycle carbon footprint: 350kg (2/3 manufacturing & transport, 1/3 use).
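For scale, those round numbers pencil out as follows (my arithmetic, using only the figures quoted above):

```python
# Splitting the Mac's 350 kg lifecycle footprint per the 2/3 - 1/3
# breakdown above, and comparing it to the ~370 kg of granite.
mac_lifecycle_kg = 350.0
granite_kg = 370.0

manufacturing_and_transport = mac_lifecycle_kg * 2 / 3
use_phase = mac_lifecycle_kg / 3

print(round(manufacturing_and_transport))       # ~233 kg emitted before first boot
print(round(use_phase))                         # ~117 kg over the machine's service life
print(round(mac_lifecycle_kg / granite_kg, 2))  # ~0.95 - nearly the granite's mass
```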

A System Zoo

I just picked up a copy of Hartmut Bossel’s excellent System Zoo 1, which I’d seen years ago in German, but only recently discovered in English. This is the first of a series of books on modeling – it covers simple systems (integration, exponential growth and decay), logistic growth and variants, oscillations and chaos, and some interesting engineering systems (heat flow, gliders searching for thermals). These are high-quality models with units that balance, well documented in the book. Every one I’ve tried runs in Vensim PLE, so they’re great for teaching.
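To give a flavor of the simplest Zoo systems, here’s a minimal logistic growth model – my own Euler-integration sketch in Python of the generic structure, not Bossel’s Vensim code, with illustrative parameter values:

```python
# Logistic growth, dx/dt = r*x*(1 - x/K), integrated by Euler's method.
# Parameter values are illustrative, not taken from System Zoo 1.

def simulate_logistic(x0=1.0, r=0.5, K=100.0, dt=0.25, t_end=40.0):
    """Return a list of (time, level) points for logistic growth."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        x += dt * r * x * (1.0 - x / K)  # net flow: growth damped by saturation
        t += dt
        trajectory.append((t, x))
    return trajectory

# The level grows exponentially at first, then saturates near K:
print(round(simulate_logistic()[-1][1], 1))  # -> 100.0
```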

I haven’t had a chance to work my way through System Zoo 2 (natural systems – climate, ecosystems, resources) and System Zoo 3 (economy, society, development), but I’m pretty confident that they’re equally interesting.

You can get the models for all three books, in English, from the Uni Kassel Center for Environmental Systems Research – it’s now easy to find a .zip archive of the zoo models for the whole series, in Vensim .mdl format, on CESR’s home page: www2.cesr.de/downloads.

To tantalize you, here are some images of model output from Zoo 1. First, a phase map of a bistable oscillator, which was so interesting that I built one with my kids, using legos and neodymium magnets:
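The basic behavior is easy to reproduce outside Vensim, too. Here’s a generic bistable (Duffing-type) oscillator in Python – my own illustration, not the Zoo model itself – with two stable equilibria at x = ±1, where the initial condition decides which well the trajectory settles into:

```python
# Damped bistable oscillator: x'' = -damping*x' + x - x**3.
# The cubic force term creates two potential wells, at x = +1 and x = -1.

def phase_trajectory(x0, v0, damping=0.2, dt=0.01, steps=5000):
    """Integrate with semi-implicit Euler; return the final (x, v) state."""
    x, v = x0, v0
    for _ in range(steps):
        v += dt * (-damping * v + x - x**3)  # update velocity first...
        x += dt * v                          # ...then position
    return x, v

# Starts on opposite sides of the unstable point at x = 0 end in opposite wells:
for x0 in (0.1, -0.1):
    xf, _ = phase_trajectory(x0, 0.0)
    print(round(xf, 2))  # settles near +1, then near -1
```

Recording (x, v) at every step instead of just the endpoint gives the phase-plane picture the book shows.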


Then & Now

Time has an interesting article on the climate policy positions of the GOP front runners. It’s amazing how far we’ve backed away from regulating greenhouse emissions:

Then: Pawlenty signed the Next Generation Energy Act of 2007 in Minnesota, which called for a plan to “recommend how the state could adopt a regulatory system that imposes a cap on the aggregate air pollutant emissions of a group of sources.”
Now: The current Tim Pawlenty line on carbon is that “cap and trade would be a disaster.”

Then: Here’s Mitt Romney in Iowa in 2007, voicing concern about man-made global warming while supporting more government subsidies for new energy sources, new efficiency standards, and a new global carbon treaty.
Now: Romney regularly attacks Barack Obama for pushing a cap and trade system through Congress.

And so on…

I can’t say that I’ve ever been much of a cap and trade fan, and I’d lay a little of the blame for our current sorry state at the door of cap and trade supporters who were willing to ignore what a bloated beast the bills had become. Not much, though. Most of the blame falls to the anti-science and let’s pretend externalities don’t exist crowds, who wouldn’t give a carbon tax the time of day either.

The House Climate Science Hearing

Science’s Eli Kintisch and Gavin Schmidt liveblogged the House hearing on climate science this morning. My favorite tidbits:

Gavin Schmidt:

One theme that will be constant is that unilateral action by the US is meaningless if everyone else continues with business as usual. However, this is not an ethical argument for doing nothing. Edmund Burke (an original conservative) rightly said: “Nobody made a greater mistake than he who did nothing because he could do only a little.” http://www.realclimate.org/index.php/archives/2009/05/the-tragedy-of-climate-commons/

Eli Kintisch:

If my doctor told me I had cancer, says Waxman, “I wouldn’t scour the country to find someone who said I didn’t need [treatment]”

[Comment from Roger Pielke, Jr.:]

Because Congress has granted EPA authority to regulate, and the agency has followed its legislative mandate. If Congress wants to change how EPA operates, fine, but it must do it comprehensively, not by seeking to overturn the endangerment finding via fiat.

[Comment from Steven Leibo, Ph.D.:]

If Republicans thought this hearing would be helpful for their cause, it was surely a big mistake… that from a non-scientist

[Comment from J Bowers:]

There are no car parks or air conditioners in space.

Eli Kintisch:

Burgess: US had popular “revulsion” against the Waxman-Markey bill. “Voting no was not enough… people wanted us to stop that thing dead in its tracks.” No action by India and China…

[Comment from thingsbreak:]

This India and China bashing is perverse, from an emissions “pie slicing” perspective.

Eli Kintisch:

Inslee: it’s an “embarrassment” that there’s a “chronic anti-science” syndrome among Republicans. Colleagues in the GOP won’t believe, he says, “until the entire Antarctic ice sheet has melted or hell has frozen over”

Eli Kintisch:

Rep. Griffith (R-VA): Asks about melting ice caps on Mars. Is the sun getting brighter, he asks?

[Comment from thingsbreak:]

Mars ice caps melting. Drink!

[Comment from Roger Pielke, Jr.:]

Mars ice caps, snore!

Eli Kintisch:

In general I would say this hearing is a disappointment: the issue of whether Congress can/should exercise close control over EPA decisions is at least an interesting one that reasonable people can disagree about.

So far little discussion of that issue at all. 🙁

Maybe because these are scientists the real issue is just not coming up. Weird hearing.

Eli Kintisch:

Waxman: I would hate to see Congress take a position “that the science was false” by passing/marking up HR 910; wants to slow the markup on Tuesday. But Whitfield disagrees; says that the markup on Thursday will proceed and debate will go on then…

Eli Kintisch:

Rush (who is the ranking member on this subcommittee) also asks Whitfield to delay the Thursday markup. “Force… the American people… we should be more deliberative”

Gavin Schmidt:

So that’s that. I can’t say I was particularly surprised at how it went. Far too much cherry-picking, strawman argumentation, and posturing. Is it possible to have substantive discussion in public on these issues?

I think I shouldn’t have peeked into the sausage machine.

Legislators' vision for Montana

This is it: a depleted mining wasteland:


Berkeley Pit, Butte MT, NASA Earth Observatory

The spearhead is an assault on the MT constitution’s language on the environment,

All persons are born free and have certain inalienable rights. They include the right to a clean, and healthful, and economically productive environment and the rights of pursuing life’s basic necessities, enjoying and defending their lives and liberties, acquiring, possessing and protecting property, and seeking their safety, health and happiness in all lawful ways. In enjoying these rights, all persons recognize corresponding responsibilities.

What does “economically productive” add that wasn’t already covered by “pursuing … acquiring … possessing” anyway? Ironically, this could cut both ways – would it facilitate restrictions on future resource extraction, because depleted mines become economically unproductive?

Other bills attempt to legalize gravel pits in residential areas, sell coal at discount prices, and dismantle or cripple any other environmental protection you could think of.

The real kicker is Joe Read’s HB 549, AN ACT STATING MONTANA’S POSITION ON GLOBAL WARMING:

Section 1.  Public policy concerning global warming. (1) The legislature finds that to ensure economic development in Montana and the appropriate management of Montana’s natural resources it is necessary to adopt a public policy regarding global warming.

At least we’re clear up front that the coal industry is in charge!

(2) The legislature finds:

I’m sure you can guess how many qualified climate scientists are in the Montana legislature.

(a) global warming is beneficial to the welfare and business climate of Montana;

I guess Joe didn’t get the memo, that skiing and fishing could be hard hit. Maybe he thinks crops and trees do just fine with too little water and warmth, or too much.

(b) reasonable amounts of carbon dioxide released into the atmosphere have no verifiable impacts on the environment; and

Yeah, and pi is 3.2, just like it was in Indiana in 1897. I guess you could argue about the meaning of “reasonable,” but apparently Joe even rejects chemistry (ocean acidification) and biology (CO2 fertilization) along with atmospheric science.

(c) global warming is a natural occurrence and human activity has not accelerated it.

Ahh, now we’re doing detection & attribution. Legislating the answers to scientific questions is a fool’s errand. How did this text go through peer review?

(3) (a) For the purposes of this section, “global warming” relates to an increase in the average temperature of the earth’s surface.

Well, at least one sentence in this bill makes sense – at least if you assume that “average” is over time as well as space.

(b) It does not include a one-time, catastrophic release of carbon dioxide.

Where did that strawdog come from? Apparently there’s a catastrophic release of CO2 every time Joe Read opens his mouth.

A few parts per million

[Photo: a glass of water tinted blue by a few hundred ppm of food coloring]

There’s a persistent rumor that CO2 concentrations are too small to have a noticeable radiative effect on the atmosphere. (It appears here, for example, though mixed with so much other claptrap that it’s hard to wrap your mind around the whole argument – which would probably cause your head to explode due to an excess of self-contradiction anyway.)

To fool the innumerate, one must simply state that CO2 constitutes only about 390 parts per million, or 0.039%, of the atmosphere. Wow, that’s a really small number! How could it possibly matter? To be really sneaky, you can exploit stock-flow misperceptions by talking only about the annual increment (~2 ppm) rather than the total, which makes things look another 100x smaller (apparently part of the calculation behind Joe Bastardi’s comparison of a human hair’s width to a 1 km bridge span).

Anyway, my kids and I got curious about this, so we decided to put 390ppm of food coloring in a glass of water. Our precision in shaving dye pellets wasn’t very good, so we actually ended up with about 450ppm. You can see the result above. It’s very obviously blue, in spite of the tiny dye concentration. We think this is a conservative visual example, because a lot of the tablet mass was apparently a fizzy filler, and the atmosphere is 1000 times less dense than water, but effectively 100,000 times thicker than this glass. However, we don’t know much about the molecular weight or radiative properties of the dye.
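The arithmetic behind the experiment is easy to check. Here it is with round numbers of my own choosing – a 250 g glass of water about 10 cm deep, and an ~8 km atmospheric scale height – so the figures are assumptions, not measurements from our kitchen:

```python
# How much dye does 450 ppm (by mass) in a glass of water amount to?
glass_water_g = 250.0                   # assumed size of the glass
dye_mg = glass_water_g * 1000 * 450e-6  # grams -> mg, times mass fraction
print(round(dye_mg, 1))                 # ~112.5 mg tints the whole glass

# Relative "mass path" for light: air is ~1000x less dense than water,
# but the atmospheric column is ~100,000x deeper than the glass.
glass_depth_cm = 10.0
scale_height_cm = 8.0e5                 # ~8 km, in cm
relative_column = (scale_height_cm / glass_depth_cm) / 1000.0
print(round(relative_column))           # ~80x the glass's density-times-depth
```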

This doesn’t prove much about the atmosphere, but it does neatly disprove the notion that an effect is automatically small, just because the numbers involved sound small. If you still doubt this, try ingesting a few nanograms of the toxin infused into the period at the end of this sentence.

The alien Hail Mary, and other climate policy plays

Cap & Trade is suspended in Europe and dead in the US, and the techno delusion may not be far behind. Some strange bedfellows have lined up behind the idea of R&D-driven climate policy. But now it appears that clean energy research is not a bipartisan no-brainer after all. Energy committee member Rand Paul’s bill would not only cut energy R&D funding by eliminating DOE altogether, it would cut our ability to even monitor the global environment by gutting NOAA and NASA. That only leaves one option:

13 In the otherwise dull year 2327, mankind successfully contacts aliens. Well, technically their answering machine, as the aliens themselves have gone to Alpha Centauri for the summer.

14 Desperate for help, humans leave increasingly stalker-y messages, turning off the aliens with how clingy our species is.

15 The aliens finally agree to equip Earth with a set of planet-saving carbon neutralizers, but work drags on as key parts must be ordered from a foreign supplier in the Small Magellanic Cloud.

16 The job comes in $3.7 quadrillion above estimate. Humanity thinks it is being taken advantage of but isn’t sure.

“20 things you didn’t know about the future,” in Discover

Seriously, where does that leave us? In terms of what we should do, I don’t think much has changed. As I wrote a while back, the climate policy table needs four legs:

  1. Prices
  2. Technology (the landscape of possibilities on which we make decisions)
  3. Institutional rules and procedures
  4. Preferences, operating within social networks

Preferences and technology are really the fundamentals among the four. Technology represents the set of options available to us for transforming energy and resources into life and play. Preferences guide how we choose among those options. Prices and rules are really just the information signals that allow us to coordinate those decisions.

However, neither preferences nor technology are as fundamental as they look. Models generally take preferences as a given, but in fact they’re endogenous. What we want on a day to day basis is far removed from our most existential needs. Instead, we construct preferences on the basis of technologies we know about, prices, rules, and the preferences and choices of others. That creates norms, fads, marketing, keep-up-with-the-Joneses and other positive feedback mechanisms. Similarly, technology is more than discovery of principles and invention of devices. Those innovations don’t do anything until they’re woven into the fabric of society, guided by (you guessed it) prices, institutions, and preferences. That creates more positive feedbacks, like the chicken-egg problems of alternative fuel vehicle deployment.

If we could all get up in the morning and work out in our heads how to make Pareto-efficient decisions, we might not need prices and institutions, but we can’t, so we do. Prices matter because they’re a primary carrier of information through the economy. Not every decision is overtly economic, so we also have institutions, rules and routinized procedures to guide behavior. The key is that these signals should serve our values (the deeply held ones we’d articulate upon reflection, which might differ from the preferences revealed by transactions), not the other way around.

Preferences clearly can have a lot of direct leverage on behavior – if we all equated driving a big gas guzzler with breaking wind in a crowded elevator, we’d probably see different cars on the lot. However, most decisions are not so transparent. It’s already hard to choose “paper or plastic?” How about “desktop or server?” When you add multiple layers of supply chain and varied national origins to the picture, it becomes very hard to create a green information system paralleling the price system. It’s probably even harder to get individuals and firms to conform to such a system, when there are overwhelming evolutionary rewards to defection. Borrowing from Giraudoux, the secret to success is sustainability; once you can fake that, you’ve got it made.

Similarly, the sheer complexity of society makes it hard to predict which technologies constitute a winning combination for creating low-carbon happiness. A technology-led strategy runs the risk of failing in the attempt to recreate a high-carbon lifestyle with low-carbon inputs. I don’t think anyone has the foresight to select that portfolio. Even if we could do it, there’s no guarantee that, absent other signals, new technologies will be put to their intended uses, or that they will survive the “valley of death” between R&D and commercialization. It’s like airdropping a tyrannosaurus into an arctic ecosystem – sure, he’s got big teeth, but will he survive?

Complexity also militates against a rules-led approach. It’s simply too cumbersome to codify a rich set of tradeoffs in command-and-control regulations, which can become an impediment to innovation and are subject to regulatory capture. Also, systems like the CAFE standard create shadow prices of compliance, rather than explicit prices. This makes it hard to diagnose the effects of constraints and to coordinate them with other policies. There’s a niche for rules, but they shouldn’t be the big stick (on the other hand, eliminating the legacy of some past measures could be a win-win).

That’s why emissions pricing is really a keystone policy. Once you have prices aligned with the long term value of stable climate (and other resources), it’s easier to align the other legs of the table. Emissions prices create huge incentives for private R&D, leaving a smaller gap for government to fill – just the market failures in appropriation of benefits of technology. The points of pain where institutions are inadequate, or stand in the way of progress, will be more evident and easier to correct, and there will be less burden on policy making institutions, because they won’t have to coordinate many small programs to do the job of one big signal. Preferences will start evolving in a low-carbon direction, with rewards to those who (through luck or altruism) have already done so. Most importantly, emissions pricing gets some changes moving now, not after a decade or two of delay.

Concretely, I still think an upstream, revenue-neutral carbon tax is a practical implementation route. If there’s critical mass among trade partners, it could even evolve into a harmonized global system through the pressure of border carbon adjustments. The question is, how to get started?

Knowing Sooner

SEED magazine recently published an article on models for managing complex systems. In it, I talk about the C-ROADS experience. It nicely captures the punchline:

having the capacity to accurately predict the utility of proposed policy—whether it be domestic legislature or multilateral agreements—in real time while discussions are ongoing, opens the door for an entirely new way to enact policy.

I get too much credit for C-ROADS in the article; here are some of the people who really made it happen:

The Climate Interactive team: Travis Franck, Drew Jones, Stephanie McCauley, Phil Sawin, Beth Sawin, and Lori Siegel. Many other partners have also been instrumental, including John Sterman (MIT), Peter Senge (SoL), and really too many others to mention.

And so it begins…

A kerfuffle is brewing over Richard Tol’s FUND model (a recent installment). I think this may be one of the first instances of something we’ll see a lot more of: public critique of integrated assessment models.

Integrated Assessment Models (IAMs) are a broad class of tools that combine the physics of natural systems (climate, pollutants, etc.) with the dynamics of socioeconomic systems. Most of the time, this means coupling an economic model (usually dynamic general equilibrium or an optimization approach; sometimes a bottom-up technical model, or a hybrid of the two) with a simple to moderately complex climate model. The IPCC process has used such models extensively to generate emissions and mitigation scenarios.

Interestingly, the IAMs have attracted relatively little attention; most of the debate about climate change is focused on the science. Yet, if you compare the big IAMs to the big climate models, I’d argue that the uncertainties in the IAMs are much bigger. The processes in climate models are basically physics, and many are even subject to experimental verification. We can measure quantities like temperature with considerable precision and spatial detail over long time horizons, for comparison with model output. Some of the economic equivalents, like real GDP, are much slipperier even in their definitions. We have poor data for many regions, huge problems of “instrumental drift” from the changing quality of goods and sectoral composition of activity, and many cultural factors that are not measured at all. Nearly all models represent human behavior – the ultimate wildcard – by assuming equilibrium, when in fact it’s not clear that equilibrium emerges faster than other dynamics change the landscape on which it arises. So, if climate skeptics get excited about the appropriate centering method for principal components analysis, they should be positively foaming at the mouth over the assumptions in IAMs, because there are far more of them, with far less direct empirical support.

Last summer at EMF Snowmass, I reflected on some of our learning from the C-ROADS experience (here’s my presentation). One of the key points, I think, is that there is a huge gulf between models and modelers, on the one hand, and the needs and understanding of decision makers and the general public on the other. If modelers don’t close that gap by deliberately translating their insights for lay audiences, focusing their tools on decision maker needs, and embracing a much higher level of transparency, someone else will do that translation for them. Most likely, that “someone else” will be much less informed, or have a bigger axe to grind, than the modelers would hope.

With respect to transparency, Tol’s FUND model is further along than many models: the code is available. So, informed tinkerers can peek under the hood if they wish. However, it comes with a warning:

It is the developer’s firm belief that most researchers should be locked away in an ivory tower. Models are often quite useless in unexperienced hands, and sometimes misleading. No one is smart enough to master in a short period what took someone else years to develop. Not-understood models are irrelevant, half-understood models treacherous, and mis-understood models dangerous.

Therefore, FUND does not have a pretty interface, and you will have to make a real effort to let it do something, let alone to let it do something new.

I understand the motivation for this warning. However, it leaves the modeler-consumer gulf gaping. The modelers have their insights into systems, the decision makers have their problems managing those systems, and ne’er the twain shall meet – there just aren’t enough modelers to go around. That leaves reports as the primary conduit of information from model to user, which is fine if your ivory tower is secure enough that you need not care whether your insights have any influence. It’s not even clear that reports are more likely to be understood than models: there have been a number of high-profile instances of ill-conceived institutional press releases and misinterpretation of conclusions and even raw data.

Also, there’s a hint of danger in the very idea of building dangerous models. Obviously all models, like analogies, are limited in their fidelity and generality. It’s important to understand those limitations, just as a pilot must understand the limitations of her instruments. However, if a model is a minefield for the uninitiated user, I have to question its utility. Robustness is an important aspect of model quality; a model given vaguely realistic inputs should yield realistic outputs most of the time, and a model given stupid inputs should generate realistic catastrophes. This is perhaps especially true for climate, where we are concerned about the tails of the distribution of possible outcomes. It’s hard to build a model that’s robust only to the kinds of experiments one would like to perform, while ignoring other potential problems. To the extent that a model generates unrealistic outcomes, the causes should be traceable; if it’s not easy for the model user to see inside the black box, then I worry that the developer won’t have done enough inspection either. So, the discipline of building models for naive users imposes some useful quality incentives on the model developer.

IAM developers are busy adding spatial resolution, technical detail, and other useful features to models. There’s comparatively less work on consolidation of insights, with translation and construction of tools for wider consumption. That’s understandable, because there aren’t always strong rewards for doing so. However, I think modelers ignore this crucial task at their future peril.