… in a climate bathtub:
Via John Sterman.
I’ve written quite a bit about bathtub dynamics here. I got the term from “Cloudy Skies” and other work by John Sterman and Linda Booth Sweeney.
We report experiments assessing people’s intuitive understanding of climate change. We presented highly educated graduate students with descriptions of greenhouse warming drawn from the IPCC’s nontechnical reports. Subjects were then asked to identify the likely response to various scenarios for CO2 emissions or concentrations. The tasks require no mathematics, only an understanding of stocks and flows and basic facts about climate change. Overall performance was poor. Subjects often select trajectories that violate conservation of matter. Many believe temperature responds immediately to changes in CO2 emissions or concentrations. Still more believe that stabilizing emissions near current rates would stabilize the climate, when in fact emissions would continue to exceed removal, increasing GHG concentrations and radiative forcing. Such beliefs support wait and see policies, but violate basic laws of physics.
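The last point in that excerpt is pure stock-and-flow arithmetic: as long as the inflow exceeds the outflow, the stock keeps rising, even if the inflow is held constant. A minimal sketch makes it concrete (the numbers here are illustrative placeholders, not any particular climate model's values):

```python
# Single CO2 "bathtub": constant ("stabilized") emissions that still exceed
# removal keep filling the atmospheric stock. All values are assumed for
# illustration only.

emissions = 10.0        # GtC/year, held constant
removal_frac = 0.01     # fraction of the excess stock removed per year (assumed)
preindustrial = 600.0   # GtC, assumed baseline atmospheric stock
stock = 800.0           # GtC, assumed current atmospheric stock

for year in range(2010, 2101):
    removal = removal_frac * (stock - preindustrial)   # removal lags the stock
    stock += emissions - removal                       # inflow minus outflow

print(round(stock))   # still rising in 2100: the tub fills until removal catches up
```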
The climate bathtubs are really a chain of stock processes: accumulation of CO2 in the atmosphere, accumulation of heat in the global system, and accumulation of meltwater in the oceans. How we respond to those, i.e. our emissions trajectory, is conditioned by some additional bathtubs: population, capital, and technology. This post is a quick look at the first.
I’ve grabbed the population sector from the World3 model. Regardless of what you think of World3’s economics, there’s not much to complain about in the population sector. It looks like this:
People are categorized into young, reproductive age, working age, and older groups. This 4th order structure doesn’t really capture the low dispersion of the true calendar aging process, but it’s more than enough for understanding the momentum of a population. If you think of the population in aggregate (the sum of the four boxes), it’s a bathtub that fills as long as births exceed deaths. Roughly tuned to history and projections, the bathtub fills until the end of the century, but at a diminishing rate as the gap between births and deaths closes:
Notice that the young (blue) peak in 2030 or so, long before the older groups come into near-equilibrium. An aging chain like this has a lot of momentum. A simple experiment makes that momentum visible. Suppose that, as of 2010, fertility suddenly falls to slightly below replacement levels, about 2.1 children per couple. (This is implemented by changing the total fertility lookup). That requires a dramatic shift in birth rates:
However, that doesn’t translate to an immediate equilibrium in population. Instead, population still grows to the end of the century, though it reaches a lower level. Growth continues because the aging chain is internally out of equilibrium (there’s also a small contribution from ongoing extension of life expectancy, but it’s not important here). Because growth has been ongoing, the demographic pyramid is skewed toward the young. So, while fertility is constant per person of child-bearing age, the population of prospective parents grows for a while as the young grow up, and thus births continue to increase. Also, at the time of the experiment, the elderly population has not reached equilibrium, given rising life expectancy and growth down the chain.
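To make that momentum visible in code, here is a minimal aging-chain sketch in the spirit of the structure described above (the cohort sizes, residence times, and the size of the fertility step are rough placeholder assumptions, not the calibrated World3 values):

```python
# Four-cohort aging chain with a sudden 2010 drop in fertility.
# All numbers are illustrative assumptions, not World3's.

young, reproductive, working, older = 1.8, 1.7, 2.0, 0.7   # billions (assumed)
residence = (15, 15, 20, 20)     # years spent in each cohort (assumed)
fertility = 0.09                 # births per reproductive-age person per year (assumed)

totals = {}
for year in range(2010, 2101):
    if year == 2010:
        fertility = 1.0 / residence[1]   # step down to this toy model's replacement level
    births = fertility * reproductive
    aging1 = young / residence[0]
    aging2 = reproductive / residence[1]
    aging3 = working / residence[2]
    deaths = older / residence[3]        # for simplicity, deaths only from the oldest cohort
    young        += births - aging1
    reproductive += aging1 - aging2
    working      += aging2 - aging3
    older        += aging3 - deaths
    totals[year] = young + reproductive + working + older

for y in (2010, 2030, 2060, 2100):
    print(y, round(totals[y], 2))
# Despite the fertility drop, the total keeps rising at a diminishing rate for
# decades, because the older cohorts are still filling toward equilibrium.
```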
Achieving immediate equilibrium in population would require a much more radical fall in fertility, in order to bring births immediately in line with deaths. Implementing such a change would require shifting yet another bathtub – culture – in a way that seems unlikely to happen quickly. It would also have economic side effects. Often, you hear calls for more population growth, so that there will be more kids to pay social security and care for the elderly. However, that’s not the first effect of accelerated declines in fertility. If you look at the dependency ratio (the ratio of the very young and old to everyone else), the first effect of declining fertility is actually a net benefit (except to the extent that young children are intrinsically valued, or working in sweatshops making fake Gucci wallets):
The bottom line of all this is that, like other bathtubs, it’s hard to change population quickly, partly because of the physics of accumulation of people, and partly because it’s hard to even talk about the culture of fertility (and the economic factors that influence it). Population isn’t likely to contribute much to meeting 2020 emissions targets, but it’s part of the long game. If you want to win the long game, you have to anticipate long delays, which means getting started now.
The model (Vensim binary, text, and published formats): World3 Population.vmf World3-Population.mdl World3 Population.vpm
Following up on my earlier post, a few more on the menu:
SiMCaP – A simple tool for exploring emissions pathways, climate sensitivity, etc.
PRIMAP 2C Check Tool – A dirt-simple spreadsheet, exploiting the fact that cumulative emissions are a pretty good predictor of temperature outcomes along plausible emissions trajectories (see the quick sketch after this list).
EdGCM – A full 3D model, for those who feel the need to get physical.
Last but not least, C-LEARN runs on the web. Desktop C-ROADS software is in the development pipeline.
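As promised in the PRIMAP item above, here is a quick sketch of the cumulative-emissions shortcut such a check tool can exploit. The proportionality coefficient and emissions figures below are rough assumptions of mine, not the tool’s numbers:

```python
# Cumulative emissions as a crude predictor of warming: temperature rise is
# taken as roughly proportional to total carbon emitted. Assumed values only.

warming_per_GtC = 1.65 / 1000   # degC of warming per GtC of cumulative emissions (assumed)
annual_emissions = 10.0         # GtC/year, held constant for the example (assumed)
cumulative = 550.0              # GtC already emitted by 2010 (assumed)

for year in range(2010, 2051):
    cumulative += annual_emissions

print(f"~{warming_per_GtC * cumulative:.1f} degC above preindustrial by 2050 in this crude scenario")
```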
I’m too busy to write much, but here are some quick updates.
C-ROADS is in the news, via Jeff Tollefson at Nature News.
Our State of the Global Deal conclusion, that current proposals are not on track, now has more reinforcement:
Check out Drew Jones on TEDx.
Amstrup et al. have just published a rebuttal of the Armstrong, Green & Soon critique of polar bear assessments. Polar bears aren’t my area, and I haven’t read the original, so I won’t comment on the ursine substance. However, Amstrup et al. reinforce many of my earlier objections to (mis)application of forecasting principles, so here are some excerpts:
The Principles of Forecasting and Their Use in Science
… AGS based their audit on the idea that comparison to their self-described principles of forecasting could produce a valid critique of scientific results. AGS (p. 383) claimed their principles ‘summarize all useful knowledge about forecasting.’ Anyone can claim to have a set of principles, and then criticize others for violating their principles. However, it takes more than a claim to create principles that are meaningful or useful. In concluding our rejoinder, we point out that the principles espoused by AGS are so deeply flawed that they provide no reliable basis for a rational critique or audit.
Failures of the Principles
Armstrong (2001) described 139 principles and the support for them. AGS (pp. 382–383) claimed that these principles are evidence based and scientific. They fail, however, to be evidence based or scientific on three main grounds: They use relative terms as if they were absolute, they lack theoretical and empirical support, and they do not follow the logical structure that scientific criticisms require.
Using Relative Terms as Absolute
Many of the 139 principles describe properties that models, methods, and (or) data should include. For example, the principles state that data sources should be diverse, methods should be simple, approaches should be complex, representations should be realistic, data should be reliable, measurement error should be low, explanations should be clear, etc. … However, it is impossible to look at a model, a method, or a datum and decide whether its properties meet or violate the principles because the properties of these principles are inherently relative.
Consider diverse. AGS faulted H6 for allegedly failing to use diverse sources of data. However, H6 used at least six different sources of data (mark-recapture data, radio telemetry data, data from the United States and Canada, satellite data, and oceanographic data). Is this a diverse set of data? It is more diverse than it would have been if some of the data had not been used. It is less diverse than it would have been if some (hypothetical) additional source of data had been included. To criticize it as not being diverse, however, without providing some measure of comparison, is meaningless.
Consider simple. What is simple? Although it might be possible to decide which of two models is simpler (although even this might not be easy), it is impossible, in principle, to say whether any model considered in isolation is simple or not. For example, H6 included a deterministic time-invariant population model. Is this model simple? It is certainly simpler than the stationary, stochastic model, or the nonstationary stochastic model also included in H6. However, without a measure of comparison, it is impossible to say which, if any, are ‘simple.’ For AGS to criticize the report as failing to use simple models is meaningless.
…
A Lack of Theoretical and Empirical Support
If the principles of forecasting are to serve as a basis for auditing the conclusions of scientific studies, they must have strong theoretical and (or) empirical support. Otherwise, how do we know that these principles are necessary for successful forecasts? Closer examination shows that although Armstrong (2001, p. 680) refers to evidence and AGS (pp. 382–383) call the principles evidence based, almost half (63 of 139) are supported only by received wisdom or common sense, with no additional empirical or theoretical support. …
Armstrong (2001, p. 680) defines received wisdom as when ‘the vast majority of experts agree,’ and common sense as when ‘it is difficult to imagine that things could be otherwise.’ In other words, nearly half of the principles are supported only by opinions, beliefs, and imagination about the way that forecasting should be done. This is not evidence based; therefore, it is inadequate as a basis for auditing scientific studies. … Even Armstrong’s (2001) own list includes at least three cases of principles that are supported by what he calls strong empirical evidence that ‘refutes received wisdom’ – that is, at least three of the principles contradict received wisdom. …
Forecasting Audits Are Not Scientific Criticism
The AGS audit failed to distinguish between scientific forecasts and nonscientific forecasts. Scientific forecasts, because of their theoretical basis and logical structure based upon the concept of hypothesis testing, are almost always projections. That is, they have the logical form of ‘if X happens, then Y will follow.’ The analyses in AMD and H6 take exactly this form. A scientific criticism of such a forecast must show that even if X holds, Y does not, or need not, follow.
In contrast, the AGS audit simply scored violations of self-defined principles without showing how the identified violation might affect the projected result. For example, the accusation that H6 violated the commandment to use simple models is not a scientific criticism, because it says nothing about the relative simplicity of the model with respect to other possible choices. It also says nothing about whether the supposedly nonsimple model in question is in error. A scientific critique on the grounds of simplicity would have to identify a complexity in the model, and show that the complexity cannot be defended scientifically, that the complexity undermines the credibility of the model, and that a simpler model can resolve the issue. AGS did none of these.
There’s some irony to all this. Armstrong & Green criticize climate predictions as mere opinions cast in overly-complex mathematical terms, lacking predictive skill. The instrument of their critique is a complex set of principles, mostly derived from opinions, with undemonstrated ability to predict the skill of models and forecasts.
I hadn’t noticed until I heard it here, but Armstrong & Green are back at it, with various claims that climate forecasts are worthless. In the Financial Post, they criticize the MIT Joint Program model,
… No more than 30% of forecasting principles were properly applied by the MIT modellers and 49 principles were violated. For an important problem such as this, we do not think it is defensible to violate a single principle.
As I wrote in some detail here, the Forecasting Principles are a useful seat-of-the-pants guide to good practices, but there’s no evidence that following them all is necessary or sufficient for a good outcome. Some are likely to be counterproductive in many situations, and key elements of good modeling practice are missing (for example, balancing units of measure).
It’s not clear to me that A&G really understand models and modeling. They seem to view everything through the lens of purely statistical methods like linear regression. Green recently wrote,
Another important principle is that the forecasting method should provide a realistic representation of the situation (Principle 7.2). An interesting statement in the MIT report that implies (as one would expect given the state of knowledge and omitted relationships) that the modelers have no idea to what extent their models provide a realistic representation of reality is as follows:
‘Changes in global surface average temperature result from a combination of emissions and climate parameters, and therefore two runs that look similar in terms of temperature may be very different in detail.’ (MIT Report p. 28)
While the modelers have sufficient latitude in their parameters to crudely reproduce a brief period of climate history, there is no reason to believe the models can provide useful forecasts.
What the MIT authors are saying, in essence, is that
T = f(E,P)
and that it is possible to achieve the same future temperature T with different combinations of emissions E and parameters P. Green seems to be taking a leap, to assume that historic T does not provide much constraint on P. First, that’s not necessarily true, given that historic E cannot be chosen freely. It could still be the case that the structure of f(E,P) means that historic T provides a weak constraint on P given E. But if that’s true (as it basically is), the problem is self-diagnosing: estimates of P will have broad confidence bounds, as will forecasts of T. Green completely ignores the MIT authors’ explicit characterization of this uncertainty. He also ignores the fact that the output of the model is not just T, and that we have priors for many elements of P (from more granular models or experiments, for example). Thus we have additional lines of evidence with which to constrain forecasts. Green also neglects to consider the implications of uncertainties in P that are jointly distributed in an offsetting manner (as is likely for climate sensitivity, ocean circulation, and aerosol forcing).
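A toy example makes the equifinality point concrete. In the sketch below (an assumed first-order model of my own construction, not the MIT model), a high-sensitivity parameter set offset by strong aerosol cooling and a low-sensitivity set with a weak offset fit the same fabricated history about equally well, then separate once GHG forcing keeps rising while aerosol forcing levels off:

```python
# Two parameter sets P that match the same "historical" T but diverge in the
# forecast. The one-box model, forcings, and parameter values are all assumed
# for illustration.

def temperature(sensitivity, aerosol_scale, ghg, aerosol, tau=30.0):
    """First-order response: T relaxes toward sensitivity * (GHG - scaled aerosol forcing)."""
    T, path = 0.0, []
    for g, a in zip(ghg, aerosol):
        T += (sensitivity * (g - aerosol_scale * a) - T) / tau
        path.append(T)
    return path

hist_ghg = [2.5 * t / 100 for t in range(100)]                  # GHG forcing ramps up over "history"
hist_aer = [0.4 * g for g in hist_ghg]                          # aerosol forcing tracks GHGs historically
full_ghg = hist_ghg + [2.5 + 3.5 * t / 50 for t in range(50)]   # GHG forcing keeps rising afterward
full_aer = hist_aer + [hist_aer[-1]] * 50                       # aerosol forcing levels off

hot  = dict(sensitivity=1.5, aerosol_scale=1.0)    # high sensitivity, strong aerosol offset
cool = dict(sensitivity=0.95, aerosol_scale=0.1)   # low sensitivity, weak aerosol offset

hist_gap = max(abs(h - c) for h, c in zip(temperature(**hot, ghg=hist_ghg, aerosol=hist_aer),
                                          temperature(**cool, ghg=hist_ghg, aerosol=hist_aer)))
end_gap = abs(temperature(**hot, ghg=full_ghg, aerosol=full_aer)[-1]
              - temperature(**cool, ghg=full_ghg, aerosol=full_aer)[-1])
print(f"max historical mismatch: {hist_gap:.2f} degC; end-of-forecast gap: {end_gap:.2f} degC")
```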
A&G provide no formal method to distinguish between situations in which models yield useful or spurious forecasts. In an earlier paper, they claimed rather broadly,
‘To our knowledge, there is no empirical evidence to suggest that presenting opinions in mathematical terms rather than in words will contribute to forecast accuracy.’ (page 1002)
This statement may be true in some settings, but obviously not in general. There are many situations in which mathematical models have good predictive power and outperform informal judgments by a wide margin.
A&G’s latest paper with Willie Soon, Validity of Climate Change Forecasting for Public Policy Decision Making, apparently forthcoming in IJF, is an attempt to make the distinction, i.e. to determine whether climate models have any utility as predictive tools. An excerpt from the abstract summarizes their argument:
Policymakers need to know whether prediction is possible and if so whether any proposed forecasting method will provide forecasts that are substantively more accurate than those from the relevant benchmark method. Inspection of global temperature data suggests that it is subject to irregular variations on all relevant time scales and that variations during the late 1900s were not unusual. In such a situation, a ‘no change’ extrapolation is an appropriate benchmark forecasting method. … The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers. … We nevertheless demonstrate the use of benchmarking with the example of the Intergovernmental Panel on Climate Change’s 1992 linear projection of long-term warming at a rate of 0.03°C-per-year. The small sample of errors from ex ante projections at 0.03°C-per-year for 1992 through 2008 was practically indistinguishable from the benchmark errors. … Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in their scenario of exponential CO2 growth – the years 1851 to 1975. The errors from the projections were more than seven times greater than the errors from the benchmark method. Relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions.
There are many things wrong here.
How do AG&S arrive at this sorry state? Their article embodies a “sh!t happens” epistemology. They write, “The belief that ‘things have changed’ and the future cannot be judged by the past is common, but invalid.” The problem is, one can say with equal confidence that, “the belief that ‘things never change’ and the past reveals the future is common, but invalid.” In reality, there are predictable phenomena (the orbits of the planets) and unpredictable ones (the fall of the Berlin wall). AG&S have failed to establish that climate is unpredictable or to provide us with an appropriate method for deciding whether it is predictable or not. Nor have they given us any insight into how to know or what to do if we can’t decide. Doing nothing because we think we don’t know anything is probably better than sacrificing virgins to the gods, but it doesn’t strike me as a robust strategy.
I recently wondered whether developing countries were asking for the wrong thing in Bonn. Now Bolivia is barking up the right tree with a proposed “climate debt” concept. The idea’s actually quite old; it’s already well developed in the Greenhouse Development Rights framework.
The trick is, how to achieve an equitable outcome that’s consistent with the physics of climate? Consider this reaction to ideas like climate debt:
Obama’s Global Tax
By INVESTOR’S BUSINESS DAILY | Posted Tuesday, July 29, 2008 4:20 PM PT
Election ’08: A plan by Barack Obama to redistribute American wealth on a global level is moving forward in the Senate. It follows Marxist theology – from each according to his ability, to each according to his need.
…
Obama would give them all a fish without teaching them how to fish. Pledging to cut global poverty in half on the backs of U.S. taxpayers is a ridiculous and impossible goal.
…
We already transfer too much national wealth to the United Nations and its busybody agencies. …
…
If you’re worried about gasoline and heating oil prices now, think what they’ll be like when the U.S. is subjected in an Obama administration to global energy consumption and production taxes. Obama’s Global Poverty Act is the “international community’s” foot in the door.
…
Obama has called on the U.S. to “lead by example” on global warming and probably would submit to a Kyoto-like agreement that would sock Americans with literally trillions of dollars in costs over the next half century for little or no benefit.
“We can’t drive our SUVs and eat as much as we want and keep our homes on 72 degrees at all times . . . and then just expect that other countries are going to say OK,” Obama has said. “That’s not leadership. That’s not going to happen.”
Oh, really? Who’s to say we can’t load up our SUV and head out in search of bacon double cheeseburgers at the mall? China? India? Bangladesh? The U.N.?
I suspect that these sentiments are quite prevalent, at least in the US. I’m even sympathetic in at least one respect: transfers from the global rich to poor are beneficial in principle, but difficult to execute. Transfers from country to country are susceptible to capture by elites. Direct transfers among individuals could be facilitated by a global carbon market with allowances allocated to individuals (one of the few good arguments for emissions trading in my mind), but would undemocratic regimes permit their citizens to participate?
I don’t see agreement on this front any time soon. I could see things going a different way: the US, EU and a few other developed nations move to reduce, then goad developing nations along with a mixture of carrot (offset projects and other transfers) and stick (border carbon adjustments).
I ran across this gem in the text of Waxman Markey (HR 2454):
(e) Trade-vulnerable Industries-
(1) IN GENERAL- The Administrator shall allocate emission allowances to energy-intensive, trade-exposed entities, to be distributed in accordance with section 765, in the following amounts:
(A) For vintage years 2012 and 2013, up to 2.0 percent of the emission allowances established for each year under section 721(a).
(B) For vintage year 2014, up to 15 percent of the emission allowances established for that year under section 721(a).
(C) For vintage year 2015, up to the product of–
(i) the amount specified in paragraph (2); multiplied by
(ii) the quantity of emission allowances established for 2015 under section 721(a) divided by the quantity of emission allowances established for 2014 under section 721(a).
(D) For vintage year 2016, up to the product of–
(i) the amount specified in paragraph (3); multiplied by
(ii) the quantity of emission allowances established for 2015 under section 721(a) divided by the quantity of emission allowances established for 2014 under section 721(a).
(E) For vintage years 2017 through 2025, up to the product of–
(i) the amount specified in paragraph (4); multiplied by
(ii) the quantity of emission allowances established for that year under section 721(a) divided by the quantity of emission allowances established for 2016 under section 721(a).
(F) For vintage years 2026 through 2050, up to the product of the amount specified in paragraph (4)–
(i) multiplied by the quantity of emission allowances established for the applicable year during 2026 through 2050 under section 721(a) divided by the quantity of emission allowances established for 2016 under section 721(a); and
(ii) multiplied by a factor that shall equal 90 percent for 2026 and decline 10 percent for each year thereafter until reaching zero, except that, if the President modifies a percentage for a year under subparagraph (A) of section 767(c)(3), the highest percentage the President applies for any sector under that subparagraph for that year (not exceeding 100 percent) shall be used for that year instead of the factor otherwise specified in this clause.
What we have here is really a little dynamic model, which can be written down in 4 or 5 lines. My first reading was that the intent is to stabilize the absolute magnitude of the allocation to trade-vulnerable industries, which would require the allocation share to rise over time as the total allowances issued falls; on that reading, the 10%-per-year phaseout after 2026 is partly offset by the upward pressure on share from the declining cap, for a net phaseout of roughly 5%/year. On closer inspection, I think it’s actually the other way around: from 2017 to 2025, the formula decreases the share of allowances allocated at the same rate as the absolute allowance allocation declines, and thereafter at that rate plus 10%. There is no obvious rationale for this strange method.
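For the curious, here is one reading of subsections (A) through (F) written out as code. The amounts “specified in paragraph (2)/(3)/(4)” and the section 721(a) allowance schedule are not in the excerpt, so they appear below as placeholders, and the clause allowing the President to modify the factor under section 767(c)(3) is ignored. Treat this as a sketch of the structure, not an authoritative implementation:

```python
# Allocation to trade-vulnerable industries under the subsection quoted above,
# as I read it. Placeholder values stand in for the elided paragraph (2)/(3)/(4)
# amounts and for the sec. 721(a) cap.

AMOUNT_P2 = AMOUNT_P3 = AMOUNT_P4 = 1.0   # placeholder absolute allowance amounts

def Q(year):
    """Placeholder total-allowance cap under sec. 721(a): an assumed ~2%/year decline."""
    return 5000.0 * 0.98 ** (year - 2012)

def trade_vulnerable_allocation(year):
    if year in (2012, 2013):
        return 0.02 * Q(year)                          # (A)
    if year == 2014:
        return 0.15 * Q(year)                          # (B)
    if year == 2015:
        return AMOUNT_P2 * Q(2015) / Q(2014)           # (C)
    if year == 2016:
        return AMOUNT_P3 * Q(2015) / Q(2014)           # (D): the bill really does say 2015/2014 here
    if 2017 <= year <= 2025:
        return AMOUNT_P4 * Q(year) / Q(2016)           # (E)
    if 2026 <= year <= 2050:
        # (F): reading "decline 10 percent for each year" as 10 percentage points per year
        factor = max(0.0, 0.9 - 0.1 * (year - 2026))
        return AMOUNT_P4 * Q(year) / Q(2016) * factor
    return 0.0

base = trade_vulnerable_allocation(2017)
for y in (2017, 2025, 2026, 2030, 2035):
    print(y, round(trade_vulnerable_allocation(y) / base, 3))   # allocation relative to 2017
```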
Seems to me that if legislators want to create formulas this complicated, they ought to simply write out the equations (with units) in the text of the bill. Otherwise, natural language hopelessly obscures the structure and no ordinary human can participate effectively in the process. But perhaps that’s part of the attraction?
This just in from CNAS:
ABC News will air Earth 2100, the prime time documentary for which they filmed the war game, on June 2, 2009, at 9:00 p.m. (EST). You can view a promotional short report on the documentary from ABC News online, and hopefully you will all be able to view it on television or via Internet.
In conjunction with the airing of the documentary, CNAS has made the participant briefing book and materials from the game available online. We encourage other institutions to use and cite these materials to learn about the game and to stage their own scenario exercises. I also hope that they will be useful to you for your own future reference.
Finally, we are posting a short working paper of major findings from the game. While the game did not result in the kind of breakthrough agreements we all would have liked to see, this exercise achieved CNAS’s goals of exploring and highlighting the potential difficulties and opportunities of international cooperation on climate change. I know that everyone took away different observations from the game, however, and I hope that you will share your memories and your own key findings of the event with us, and allow us to post them online as a new section of the report.
Visit the Climate Change War Game webpage to view the CNAS report on major findings and background on developing the 2015 world, the participant briefing book, and materials generated from the game.
Climate Interactive has the story.
Try it yourself, or see it in action in an interactive webinar on June 3rd.