Last year, Kesten Green and Scott Armstrong published a critique of climate science, arguing that there are no valid scientific forecasts of climate. RealClimate mocked the paper but didn’t really refute it. The paper came to my attention recently when Green & Armstrong attacked John Sterman and Linda Booth Sweeney’s paper on mental models of climate change.
I reviewed Green & Armstrong’s paper and concluded that their claims were overstated. I responded as follows:
Scott Armstrong and Kesten Green contended here recently that there is no evidence for dangerous anthropogenic global warming. I reviewed Green & Armstrong (2007) at forecastingprinciples.org to see how they arrive at that conclusion.
To support the claim that there are no valid forecasts, the authors would have to show that:
1. Chapter 8 of the IPCC Fourth Assessment (working group 1) shows that climate scientists do not follow “evidence-based forecasting” methods, as defined by Green & Armstrong (G&A).
2. The underlying models and processes summarized in Chapter 8 also do not follow proper methods.
3. It is impossible to create a valid forecast without following the authors’ methods.
With respect to 1, I think it is fair to grant that Chapter 8 is weak on model validation details. However, it does not follow that the models are invalid, especially because G&A make errors in attributing model performance against their various principles. They apparently did not examine the primary climate literature in any detail, and clearly missed an enormous amount of relevant information, leading to serious misconceptions about models and their application; hence they are in no position to support claim 2. With respect to 3, the authors fail to establish whether climate is a problem domain where models work or chaos reigns, and fail to demonstrate viable alternative forecasts based on their own forecasting principles.
A demonstration of an actual defect in a model or a forecast would be much more convincing than any of the above. The authors include a variety of anecdotes indicating structural omissions and deficiencies in models. However, they provide no evidence that any of these are important or preclude forecasting. The authors did not directly examine models, data, or model output. Had they done so, they would have discovered that, over the last two decades, data supports early climate model predictions and rejects the authors’ suggested no-change alternative hypothesis.
That is not to say that models are perfect. They are not, but this fact is already well documented (point Google Scholar at “model intercomparison project” for examples). Many efforts have been made to assess the uncertainty around models through a variety of means, including testing the effects of omitted feedbacks on outcomes. An assessment of the modeling process is a useful check on such efforts, but only if it reflects the modeling process accurately.
One of G&A’s first forecasting principles is “Make sure forecasts are independent of politics.” The authors should take this one to heart. They have precluded thoughtful communication with climate scientists by creating a publicity circus around their claims at http://theclimatebet.com/. Their site plugs the Heartland Institute’s climate change conference – a veneer of science over a media event (see Revkin in the NYT and RealClimate).
The culmination of all this wisdom is a bet, challenging Al Gore or any other taker to predict temperatures at 10 weather stations over 10 years (see theclimatebet.com). For all their talk of data quality, robust methods, etc., the authors are surely aware that this is a pitifully small sample, guaranteed to hide the signal in the noise. I can’t help but think that, if their conclusions were sound and they wished to be taken seriously by scientists, they would have offered reasonable terms.
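To put rough numbers on the signal-to-noise problem, here is a toy simulation. All parameters are my own ballpark assumptions, not figures from the bet or the paper: a warming trend of ~0.02 °C/yr against ~0.5 °C of year-to-year variability at a single station. Even when the true trend is built into the data, ten annual values are far too few to distinguish it from noise:

```python
import random
import statistics

random.seed(42)

TREND = 0.02   # assumed warming trend, deg C per year (~0.2 C/decade)
SIGMA = 0.5    # assumed year-to-year noise at one station, deg C (ballpark)
YEARS = list(range(10))
TRIALS = 10_000

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

# Rough 95% detection threshold for the slope estimate: ~2 standard errors,
# where SE(slope) = SIGMA / sqrt(sum of squared year deviations)
sxx = sum((y - statistics.fmean(YEARS)) ** 2 for y in YEARS)  # 82.5 for 10 years
threshold = 2 * SIGMA / sxx ** 0.5

positive = significant = 0
for _ in range(TRIALS):
    # Ten annual temperatures: true trend plus station-level noise
    temps = [TREND * t + random.gauss(0, SIGMA) for t in YEARS]
    slope = ols_slope(YEARS, temps)
    positive += slope > 0
    significant += slope > threshold

print(f"fitted slope even positive:   {positive / TRIALS:.0%}")
print(f"fitted slope clearly detected: {significant / TRIALS:.0%}")
```

With these assumed numbers, the fitted slope comes out positive only about two-thirds of the time, and clears the rough detection threshold in only a few percent of trials – about what chance alone would deliver. Averaging ten stations shrinks the noise less than independence would suggest, since stations share correlated regional weather, so the bet’s sample is indeed far too small to reveal the trend.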
I believe it is valuable for researchers to critique decision processes outside their own discipline. However, in such cases the researcher is obligated to learn enough about the subject to make informed judgments. This paper does not meet that burden.
I’ve posted an extended critique of the paper here.