More on Climate Predictions

No pun intended.

Scott Armstrong has again asserted on the JDM list that global warming forecasts are merely unscientific opinions, ignoring my prior objections to that claim. My response follows, slightly expanded here (e.g., with links).

Today would be an auspicious day to declare the death of climate science, but I’m afraid the announcement would be premature.

JDM researchers might be interested in the forecasts of global warming as they are based on unaided subjective forecasts (unaided by forecasting principles) entered into complex computer models.

This seems to say that climate scientists first form an opinion about the temperature in 2100 (or perhaps about climate sensitivity to 2x CO2), then tweak their models to reproduce the desired result. This is a misperception about models and modeling.

First, in a complex physical model there is no direct way for opinions that represent outcomes (like climate sensitivity) to be “entered in.” Outcomes emerge from the specification and calibration process. In a complex, nonlinear, stochastic model it is rather difficult to get a desired behavior, particularly when the model must conform to data. Climate models are not just replicating the time series of global temperature; they must first replicate geographic and seasonal patterns of temperature and precipitation, the vertical structure of the atmosphere, and so on. With a model that takes hours or weeks to execute, it is simply not practical to bend the results to reflect preconceived notions.

Second, not all models are big and complex. Low-order energy balance models can be fully estimated from data, and still yield nonzero climate sensitivity.
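To make the last point concrete, here is a minimal sketch of what “estimated from data” means for a low-order model. It uses a zero-dimensional energy balance equation, C·dT/dt = F(t) − λT, with synthetic forcing and temperature series standing in for observations; the heat capacity, feedback parameter, and forcing ramp are illustrative assumptions of mine, not published values. Sensitivity is not entered in anywhere; it falls out of the estimated feedback parameter.

```python
# Sketch: estimating a zero-dimensional energy balance model,
#   C * dT/dt = F(t) - lambda * T,
# from data. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

C = 8.0          # effective heat capacity (W yr / m^2 / K), assumed
lam_true = 1.2   # feedback parameter (W / m^2 / K), assumed
F2x = 3.7        # forcing from doubled CO2 (W / m^2)

# Synthetic forcing ramp over 150 years plus noise, standing in for data
years = np.arange(150)
F = 0.04 * years + rng.normal(0, 0.1, years.size)

# Integrate the energy balance equation with an annual Euler step
T = np.zeros(years.size)
for t in range(1, years.size):
    T[t] = T[t - 1] + (F[t - 1] - lam_true * T[t - 1]) / C

# Estimate C and lambda by regressing dT/dt on F and T:
#   dT/dt = (1/C) * F - (lambda/C) * T
dTdt = np.diff(T)
X = np.column_stack([F[:-1], T[:-1]])
coef, *_ = np.linalg.lstsq(X, dTdt, rcond=None)
C_hat = 1.0 / coef[0]
lam_hat = -coef[1] * C_hat

# Climate sensitivity emerges from the estimated feedback, not from opinion
ecs = F2x / lam_hat
print(f"estimated lambda = {lam_hat:.2f} W/m^2/K, implied sensitivity = {ecs:.2f} K")
```

Nothing in the estimation step expresses a preference about the answer; the sensitivity is whatever the fitted feedback parameter implies.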

I presume that the backing for the statement above is to be found in Green and Armstrong (2007), on which I have already commented here and on the JDM list. Continue reading “More on Climate Predictions”

On Limits to Growth

It’s a good idea to read things you criticize; checking your sources doesn’t hurt either. One of the most frequent targets of uninformed criticism, passed down from teacher to student with nary a reference to the actual text, must be The Limits to Growth. In writing my recent review of Green & Armstrong (2007), I ran across this tidbit:

Complex models (those involving nonlinearities and interactions) harm accuracy because their errors multiply. Ascher (1978), refers to the Club of Rome’s 1972 forecasts where, unaware of the research on forecasting, the developers proudly proclaimed, “in our model about 100,000 relationships are stored in the computer.” (page 999)

Setting aside the erroneous attributions about complexity, I found the statement that the MIT world models contained 100,000 relationships surprising, as either can be diagrammed on a single large page. I looked up electronic copies of World Dynamics and World3, which have 123 and 373 equations respectively. A third or more of those are inconsequential coefficients or switches for policy experiments. So how did Ascher, or Ascher’s source, get to 100,000? Perhaps by multiplying by the number of time steps over the 200-year simulation period – hardly a relevant measure of complexity.
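The multiplication is easy to check. This is my own back-of-envelope guess at the arithmetic, not anything Ascher states; the solution interval is an assumed value, since the text above gives only the equation count and the simulation horizon.

```python
# Back-of-envelope check: could "100,000 relationships" be
# equation count times time steps? (My conjecture, not Ascher's
# stated method; the time step here is an assumed value.)
equations = 123   # World Dynamics equation count (from the text)
horizon = 200     # simulation horizon in years (from the text)
dt = 0.25         # assumed solution interval in years
steps = horizon / dt
# 123 equations * 800 steps = 98,400 -- close to the quoted 100,000
print(equations * steps)
```

Evaluations repeated at every time step say something about computational effort, but nothing about the number of distinct relationships in the model.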

Meadows et al. tried to steer the reader away from focusing on point forecasts. The introduction to the simulation results reads,

Each of these variables is plotted on a different vertical scale. We have deliberately omitted the vertical scales and we have made the horizontal time scale somewhat vague because we want to emphasize the general behavior modes of these computer outputs, not the numerical values, which are only approximately known. (page 123)

Many critics have blithely ignored such admonitions, and other comments to the effect of, “this is a choice, not a forecast” or “more study is needed.” Often, critics don’t even refer to the World3 runs, which are inconvenient in that none reaches overshoot in the 20th century, making it hard to establish that “LTG predicted the end of the world in year XXXX, and it didn’t happen.” Instead, critics choose the year XXXX from a table of resource lifetime indices in the chapter on nonrenewable resources (page 56), which were not forecasts at all. Continue reading “On Limits to Growth”

Evidence on Climate Predictions

Last year, Kesten Green and Scott Armstrong published a critique of climate science, arguing that there are no valid scientific forecasts of climate. RealClimate mocked the paper, but didn’t really refute it. The paper came to my attention recently when Green & Armstrong attacked John Sterman and Linda Booth Sweeney’s paper on mental models of climate change.

I reviewed Green & Armstrong’s paper and concluded that their claims were overstated. I responded as follows: Continue reading “Evidence on Climate Predictions”