Equation Soup

Most climate skepticism I encounter these days has transparently crappy technical content, if it has any at all. It’s become boring to read.

But every once in a while a paper comes along that is sufficiently complex and free of immediately obvious errors that it becomes difficult to evaluate. One recent example that came across my desk is,

Polynomial cointegration tests of anthropogenic impact on global warming

Climate incentives

Richard Lindzen and many others have long maintained that climate science promotes alarm in order to secure funding. For example:

Regarding Professor Nordhaus’s fifth point that there is no evidence that money is at issue, we simply note that funding for climate science has expanded by a factor of 15 since the early 1990s, and that most of this funding would disappear with the absence of alarm. Climate alarmism has expanded into a hundred-billion-dollar industry far broader than just research. Economists are usually sensitive to the incentive structure, so it is curious that the overwhelming incentives to promote climate alarm are not a consideration to Professor Nordhaus. There are no remotely comparable incentives to the contrary position provided by the industries that he claims would be harmed by the policies he advocates.

I’ve always found this idea completely absurd, but to prep for an upcoming talk I decided to collect some rough numbers. A picture says it all:

[Figure: climate-related spending (US climate science funding, cleantech, environmental NGOs) vs. fossil fuel industry revenues, drawn to a common scale]

Notice that it’s completely impractical to make the scale large enough to see any detail in climate science funding or NGOs. I didn’t even bother to include the climate-specific NGOs, like 350.org and USCAN, because they are too tiny to show up (under $10m/yr). Yet, if anything, my tally of the climate-related activity is inflated. For example, a big slice of US Global Change Research is remote sensing (56% of the budget is NASA), which is not strictly climate-related. The cleantech sector is highly fragmented and diverse, and driven by many incentives other than climate. Over 2/3 of the NGO revenue stream consists of Ducks Unlimited and the Nature Conservancy, which are not primarily climate advocates.

Nordhaus, hardly a tree hugger himself, sensibly responds,

As a fifth point, they defend their argument that standard climate science is corrupted by the need to exaggerate warming to obtain research funds. They elaborate this argument by stating, “There are no remotely comparable incentives to the contrary position provided by the industries that he claims would be harmed by the policies he advocates.”

This is a ludicrous comparison. To get some facts on the ground, I will compare two specific cases: that of my university and that of Dr. Cohen’s former employer, ExxonMobil. Federal climate-related research grants to Yale University, for which I work, averaged $1.4 million per year over the last decade. This represents 0.5 percent of last year’s total revenues.

By contrast, the sales of ExxonMobil, for which Dr. Cohen worked as manager of strategic planning and programs, were $467 billion last year. ExxonMobil produces and sells primarily fossil fuels, which lead to large quantities of CO2 emissions. A substantial charge for emitting CO2 would raise the prices and reduce the sales of its oil, gas, and coal products. ExxonMobil has, according to several reports, pursued its economic self-interest by working to undermine mainstream climate science. A report of the Union of Concerned Scientists stated that ExxonMobil “has funneled about $16 million between 1998 and 2005 to a network of ideological and advocacy organizations that manufacture uncertainty” on global warming. So ExxonMobil has spent more covertly undermining climate-change science than all of Yale University’s federal climate-related grants in this area.

Money isn’t the whole story. Science is self-correcting, at least if you believe in empiricism and some kind of shared underlying physical reality. If funding pressures could somehow overcome the gigantic asymmetry of resources to favor alarmism, the opportunity for a researcher to have a Galileo moment would grow as the mainstream accumulated unsolved puzzles. Sooner or later, better theories would become irresistible. But that has not been the history of climate science; alternative hypotheses have been more risible than irresistible.

Given the scale of the numbers, each of the big 3 oil companies could run a climate science program as big as the US government’s, for 1% of revenues. Surely the NPV of their potential costs, if faced with a real climate policy, would justify that. But they don’t. Why? Perhaps they know that they wouldn’t get a different answer, or that it’s far cheaper to hire shills to make stuff up than to do real science?
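For scale, a quick back-of-the-envelope check (the figures below are my own rough assumptions, apart from the ExxonMobil revenue cited by Nordhaus; I’m taking US federal climate science funding to be very roughly $2 billion a year):

    # Back-of-the-envelope check (illustrative figures, my assumptions)
    exxon_revenue = 467e9        # $/year, as cited in Nordhaus's reply above
    us_climate_science = 2e9     # $/year, assumed rough scale of US federal climate science funding

    share = us_climate_science / exxon_revenue
    print(f"A US-government-scale climate science program = {share:.1%} of revenue")
    # ~0.4% of one company's revenue, i.e. comfortably under the 1% figure above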

Minds are like parachutes, or are they dumpsters?

Open Minds has yet another post in a long series demolishing bizarre views of climate skeptics, particularly those from WattsUpWithThat. Several of the targets are nice violations of conservation laws and bathtub dynamics. For example, how can you believe that the ocean is the source of rising atmospheric CO2, when atmospheric CO2 increases by less than human emissions and ocean CO2 is also rising?
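A minimal conservation check makes the point. The numbers below are round figures I’m assuming for illustration; only their relative sizes matter.

    # Rough annual carbon budget, GtC/year (illustrative round numbers, my assumptions)
    human_emissions = 9.0         # fossil fuel + land use emissions
    atmospheric_increase = 4.5    # observed growth of the atmospheric CO2 stock

    # Mass balance: whatever didn't stay in the air went somewhere else.
    net_natural_flux = atmospheric_increase - human_emissions
    print(f"net natural flux = {net_natural_flux:+.1f} GtC/yr")
    # Negative: the ocean and biosphere are absorbing carbon on net,
    # so they cannot also be the source of the atmospheric rise.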

The alarming thing about this is that, if I squint and forget that I know anything about dynamics, some of the rubbish sounds like science. For example,

The prevailing paradigm simply does not make sense from a stochastic systems point of view – it is essentially self-refuting. A very low bandwidth system, such as it demands, would not be able to have maintained CO2 levels in a tight band during the pre-industrial era and then suddenly started accumulating our inputs. It would have been driven by random events into a random walk with dispersion increasing as the square root of time. I have been aware of this disconnect for some time. When I found the glaringly evident temperature to CO2 derivative relationship, I knew I had found proof. It just does not make any sense otherwise. Temperature drives atmospheric CO2, and human inputs are negligible. Case closed.

I suspect that a lot of people would have trouble distinguishing this foolishness from sense. In fact, it’s tough to precisely articulate what’s wrong with this statement, because it falls so far short of a runnable model specification. I also suspect that I would have trouble distinguishing similar foolishness from sense in some other field, say biochemistry, if I were unfamiliar with the content and jargon.

This reinforces my conviction that words are inadequate for discussing complex, quantitative problems. Verbal descriptions of dynamic mental models hide all kinds of inconsistencies and are generally impossible to reliably test and refute. If you don’t have a formal model, you’ve brought a knife, or maybe a banana, to a gunfight.

There are two remedies for this. We need more formal mathematical model literacy, and more humility about mental models and verbal arguments.
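As a small demonstration of what a formal model buys you, here’s a toy version of the “random walk” claim quoted above (the parameters are my assumptions, chosen only for illustration). A stock with even weak concentration-dependent uptake is mean-reverting: random disturbances do not make its dispersion grow with the square root of time, yet the same stock faithfully accumulates a sustained input like emissions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_years, n_runs = 1000, 200
    tau = 50.0      # years, assumed first-order uptake time constant
    sigma = 1.0     # GtC/yr, assumed random natural disturbance

    # Toy stock of excess atmospheric carbon, GtC: dC/dt = input + noise - C/tau
    def simulate(constant_input):
        C = np.zeros((n_runs, n_years))
        for t in range(1, n_years):
            noise = rng.normal(0.0, sigma, n_runs)
            C[:, t] = C[:, t - 1] + constant_input + noise - C[:, t - 1] / tau
        return C

    natural_only = simulate(0.0)
    with_emissions = simulate(5.0)

    # Dispersion saturates (mean reversion) instead of growing like sqrt(time);
    # a pure random walk's spread would roughly triple between t=100 and t=900.
    print("std of natural-only runs at t=100:", round(natural_only[:, 100].std(), 1))
    print("std of natural-only runs at t=900:", round(natural_only[:, 900].std(), 1))
    # ...yet the stock still accumulates a sustained 5 GtC/yr input:
    print("mean excess with emissions at t=900:",
          round(with_emissions[:, 900].mean()), "GtC (~ input * tau)")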

A natural driver of increasing CO2 concentration?

You wouldn’t normally look at a sink with the tap running and conclude that the water level must be rising because the drain is backing up. Nevertheless, a physically similar idea has been popular in climate skeptic circles lately.

You actually don’t need much more than a mass balance to conclude that anthropogenic emissions are the cause of rising atmospheric CO2, but with a model and some data you can really pound a lot of nails into the coffin of the idea that temperature is somehow responsible.

This notion has been adequately debunked already, but here goes:

This is another experimental video. As before, there’s a lot of fine detail, so you may want to head over to Vimeo to view in full screen HD. I find it somewhat astonishing that it takes 45 minutes to explore a first-order model.
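For readers without Vensim, here’s a drastically simplified stand-in for that kind of first-order model, sketched in Python. The parameters are my rough assumptions, not taken from the model below: atmospheric CO2 is a stock, anthropogenic emissions are the inflow, and net uptake by the ocean and biosphere is a first-order outflow. Even this crude version lands near the observed concentration trajectory.

    import numpy as np

    # Toy first-order model (my stand-in, not the Vensim model below)
    years = np.arange(1900, 2011)
    emissions = 0.5 * 1.027 ** (years - 1900)   # GtC/yr, assumed ~2.7%/yr growth from ~0.5 GtC in 1900
    gtc_per_ppm = 2.13                          # conversion: GtC per ppm of atmospheric CO2

    preindustrial = 280.0   # ppm, reference level for net uptake
    tau = 80.0              # years, assumed first-order net uptake time constant

    c = np.empty(len(years))
    c[0] = 295.0            # ppm, assumed concentration around 1900
    for t in range(1, len(years)):
        inflow = emissions[t - 1] / gtc_per_ppm       # ppm/yr from human emissions
        outflow = (c[t - 1] - preindustrial) / tau    # ppm/yr net natural uptake
        c[t] = c[t - 1] + inflow - outflow

    print(f"modeled CO2 in 2010: {c[-1]:.0f} ppm")
    # Lands near the observed ~390 ppm; a temperature-only driver has no
    # comparable way to supply the carbon without violating the mass balance.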

Here’s the model: co2corr2.vpm (runs in Vensim PLE; requires DSS or Pro for calibration optimization)

Update: a new copy, replacing a GET DATA FIRST TIME call to permit running with simpler versions of Vensim. co2corr3.vpm

Economists in the bathtub

Env-Econ is one of several econ sites to pick up on standupeconomist Yoram Bauman’s assessment, Grading Economics Textbooks on Climate Change.

Most point out the bad, but there’s also a lot of good. On Bauman’s curve, there are 4 As, 3 Bs, 5 Cs, 3 Ds, and one F. Still, the bad tends to be really bad. Bauman writes about one,

Overall, the book is not too bad if you ignore that it’s based on climate science that is almost 15 years out of date and that it has multiple errors that would make Wikipedia blush. The fact that this textbook has over 20 percent of the market shakes my faith in capitalism.

The interesting thing is that the worst textbooks go astray more on the science than on the economics. The worst cherry-pick outdated studies, distort the opinions of scientists, and toss in red herrings like “For Greenland, a warming climate is good economic news.”

I find the most egregious misrepresentation in Schiller’s The Economy Today (D+):

The earth’s climate is driven by solar radiation. The energy the sun absorbs must be balanced by outgoing radiation from the earth and the atmosphere. Scientists fear that a flow imbalance is developing. Of particular concern is a buildup of carbon dioxide (CO2) that might trap heat in the earth’s atmosphere, warming the planet. The natural release of CO2 dwarfs the emissions from human activities. But there’s a concern that the steady increase in man-made CO2 emissions—principally from burning fossil fuels like gasoline and coal—is tipping the balance….

First, there’s no “might” about the fact that CO2 traps heat (infrared radiation); the only question is how much, when feedback effects come into play.  But the bigger issue is Schiller’s implication about the cause of atmospheric CO2 buildup. Here’s a picture of Schiller’s words, with arrow width scaled roughly to actual fluxes:

[Figure: CO2 flows as Schiller describes them, with arrow widths roughly scaled to the actual fluxes]

Apparently, nature is at fault for increasing atmospheric CO2. This is like worrying that the world will run out of air, because people are inhaling it all (Schiller may be inhaling something else). The reality is that the natural flux, while large, is a two way flow:

[Figure: the natural flux redrawn as the two-way flow it actually is]

What goes into the ocean and biosphere generally comes out again. For the last hundred centuries, those flows were nearly equal (i.e. zero net flow). But now that humans are emitting a lot of carbon, the net flow is actually from the atmosphere into natural systems, like this:

[Figure: with human emissions included, the net natural flow runs from the atmosphere into the ocean and biosphere]

That’s quite a different situation. If an author can’t paint an accurate verbal picture of a simple stock-flow system like this, how can a text help students learn to manage resources, money or other stocks?
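To put rough magnitudes on those arrows (round numbers I’m assuming for illustration; only the relative sizes matter):

    # Rough annual carbon fluxes, GtC/yr (illustrative round numbers, my assumptions)
    natural_into_atmosphere = 200.0      # gross outgassing + respiration
    natural_out_of_atmosphere = 205.0    # gross uptake by ocean + land
    human_emissions = 9.0                # fossil fuels + land use change

    net_natural = natural_into_atmosphere - natural_out_of_atmosphere
    atmospheric_change = human_emissions + net_natural

    print(f"gross natural exchange: ~{natural_into_atmosphere:.0f} GtC/yr in each direction")
    print(f"net natural flow:       {net_natural:+.0f} GtC/yr (i.e., into the ocean and biosphere)")
    print(f"atmospheric increase:   {atmospheric_change:+.0f} GtC/yr, attributable entirely to the human inflow")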

Dumb and Dumber

Not to be outdone by Utah, South Dakota has passed its own climate resolution.

They raise the ante – where Utah cherry-picked twelve years of data, South Dakotans are happy with only 8. Even better, their pattern matching heuristic violates bathtub dynamics:

WHEREAS, the earth has been cooling for the last eight years despite small increases in anthropogenic carbon dioxide

They have taken the skeptic claim, that there’s little warming in the tropical troposphere, and bumped it up a notch:

WHEREAS, there is no evidence of atmospheric warming in the troposphere where the majority of warming would be taking place

Nope, no trend here:

[Figure: satellite tropospheric temperature (RSS, TLT)]
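As an aside, eight-year windows prove nothing either way. A quick synthetic sketch (an assumed steady 0.02°C/yr warming trend plus noise of roughly interannual-variability size; both numbers are my assumptions) shows how often short windows of a warming series slope downward anyway:

    import numpy as np

    rng = np.random.default_rng(1)
    n_years, n_runs, window = 30, 500, 8
    trend = 0.02    # degC/yr, assumed underlying warming
    noise = 0.1     # degC, assumed interannual variability (std dev)

    t = np.arange(n_years)
    series = trend * t + rng.normal(0.0, noise, (n_runs, n_years))

    # OLS slope over every 8-year window in every synthetic series
    cooling, total = 0, 0
    for run in series:
        for start in range(n_years - window + 1):
            slope = np.polyfit(t[start:start + window], run[start:start + window], 1)[0]
            cooling += slope < 0
            total += 1

    print(f"{cooling / total:.0%} of 8-year windows slope downward, "
          f"despite a constant underlying warming trend")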


Legislating Science

The Utah House has declared that CO2 is harmless. The essence of the argument in HJR 12: temperature’s going down, climategate shows that scientists are nefarious twits, whose only interest is in riding the federal funding gravy train, and emissions controls hurt the poor. While it’s reassuring that global poverty is a big concern of Utah Republicans, the scientific observations are egregiously bad:

29 WHEREAS, global temperatures have been level and declining in some areas over the
30 past 12 years;
31 WHEREAS, the “hockey stick” global warming assertion has been discredited and
32 climate alarmists’ carbon dioxide-related global warming hypothesis is unable to account for
33 the current downturn in global temperatures;
34 WHEREAS, there is a statistically more direct correlation between twentieth century
35 temperature rise and Chlorofluorocarbons (CFCs) in the atmosphere than CO2;
36 WHEREAS, outlawed and largely phased out by 1978, in the year 2000 CFC’s began to
37 decline at approximately the same time as global temperatures began to decline;

49 WHEREAS, Earth’s climate is constantly changing with recent warming potentially an
50 indication of a return to more normal temperatures following a prolonged cooling period from
51 1250 to 1860 called the “Little Ice Age”;

The list cherry-picks skeptic arguments that rely on a few papers (if that), nearly all thoroughly discredited. There are so many things wrong here that it’s not worth the electrons to refute them one by one. The quality of their argument calls to mind the 1897 attempt in Indiana to legislate that pi = 3.2. It’s sad that this resolution’s supporters are too scientifically illiterate to notice, or too dishonest to care. There are real uncertainties about climate; it would be nice to see a legislative body really grapple with the hard questions, rather than chasing red herrings.

Polar Bears & Principles

Amstrup et al. have just published a rebuttal of the Armstrong, Green & Soon critique of polar bear assessments. Polar bears aren’t my area, and I haven’t read the original, so I won’t comment on the ursine substance. However, Amstrup et al. reinforce many of my earlier objections to (mis)application of forecasting principles, so here are some excerpts:

The Principles of Forecasting and Their Use in Science

… AGS based their audit on the idea that comparison to their self-described principles of forecasting could produce a valid critique of scientific results. AGS (p. 383) claimed their principles ‘summarize all useful knowledge about forecasting.’ Anyone can claim to have a set of principles, and then criticize others for violating their principles. However, it takes more than a claim to create principles that are meaningful or useful. In concluding our rejoinder, we point out that the principles espoused by AGS are so deeply flawed that they provide no reliable basis for a rational critique or audit.

Failures of the Principles

Armstrong (2001) described 139 principles and the support for them. AGS (pp. 382–383) claimed that these principles are evidence based and scientific. They fail, however, to be evidence based or scientific on three main grounds: They use relative terms as if they were absolute, they lack theoretical and empirical support, and they do not follow the logical structure that scientific criticisms require.

Using Relative Terms as Absolute

Many of the 139 principles describe properties that models, methods, and (or) data should include. For example, the principles state that data sources should be diverse, methods should be simple, approaches should be complex, representations should be realistic, data should be reliable, measurement error should be low, explanations should be clear, etc. … However, it is impossible to look at a model, a method, or a datum and decide whether its properties meet or violate the principles because the properties of these principles are inherently relative.

Consider diverse. AGS faulted H6 for allegedly failing to use diverse sources of data. However, H6 used at least six different sources of data (mark-recapture data, radio telemetry data, data from the United States and Canada, satellite data, and oceanographic data). Is this a diverse set of data? It is more diverse than it would have been if some of the data had not been used. It is less diverse than it would have been if some (hypothetical) additional source of data had been included. To criticize it as not being diverse, however, without providing some measure of comparison, is meaningless.

Consider simple. What is simple? Although it might be possible to decide which of two models is simpler (although even this might not be easy), it is impossible – in principle – to say whether any model considered in isolation is simple or not. For example, H6 included a deterministic time-invariant population model. Is this model simple? It is certainly simpler than the stationary, stochastic model, or the nonstationary stochastic model also included in H6. However, without a measure of comparison, it is impossible to say which, if any, are ‘simple.’ For AGS to criticize the report as failing to use simple models is meaningless.

A Lack of Theoretical and Empirical Support

If the principles of forecasting are to serve as a basis for auditing the conclusions of scientific studies, they must have strong theoretical and (or) empirical support. Otherwise, how do we know that these principles are necessary for successful forecasts? Closer examination shows that although Armstrong (2001, p. 680) refers to evidence and AGS (pp. 382–383) call the principles evidence based, almost half (63 of 139) are supported only by received wisdom or common sense, with no additional empirical or theoretical support. …

Armstrong (2001, p. 680) defines received wisdom as when ‘the vast majority of experts agree,’ and common sense as when ‘it is difficult to imagine that things could be otherwise.’ In other words, nearly half of the principles are supported only by opinions, beliefs, and imagination about the way that forecasting should be done. This is not evidence based; therefore, it is inadequate as a basis for auditing scientific studies. … Even Armstrong’s (2001) own list includes at least three cases of principles that are supported by what he calls strong empirical evidence that ‘refutes received wisdom’ – that is, at least three of the principles contradict received wisdom. …

Forecasting Audits Are Not Scientific Criticism

The AGS audit failed to distinguish between scientific forecasts and nonscientific forecasts. Scientific forecasts, because of their theoretical basis and logical structure based upon the concept of hypothesis testing, are almost always projections. That is, they have the logical form of ‘if X happens, then Y will follow.’ The analyses in AMD and H6 take exactly this form. A scientific criticism of such a forecast must show that even if X holds, Y does not, or need not, follow.

In contrast, the AGS audit simply scored violations of self-defined principles without showing how the identified violation might affect the projected result. For example, the accusation that H6 violated the commandment to use simple models is not a scientific criticism, because it says nothing about the relative simplicity of the model with respect to other possible choices. It also says nothing about whether the supposedly nonsimple model in question is in error. A scientific critique on the grounds of simplicity would have to identify a complexity in the model, and show that the complexity cannot be defended scientifically, that the complexity undermines the credibility of the model, and that a simpler model can resolve the issue. AGS did none of these.

There’s some irony to all this. Armstrong & Green criticize climate predictions as mere opinions cast in overly-complex mathematical terms, lacking predictive skill. The instrument of their critique is a complex set of principles, mostly derived from opinions, with undemonstrated ability to predict the skill of models and forecasts.

Unprincipled Forecast Evaluation

I hadn’t noticed until I heard it here, but Armstrong & Green are back at it, with various claims that climate forecasts are worthless. In the Financial Post, they criticize the MIT Joint Program model,

… No more than 30% of forecasting principles were properly applied by the MIT modellers and 49 principles were violated. For an important problem such as this, we do not think it is defensible to violate a single principle.

As I wrote in some detail here, the Forecasting Principles are a useful seat-of-the-pants guide to good practices, but there’s no evidence that following them all is necessary or sufficient for a good outcome. Some are likely to be counterproductive in many situations, and key elements of good modeling practice are missing (for example, balancing units of measure).
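Units balancing deserves a word, because it’s a purely mechanical check that catches a surprising share of model errors. A minimal illustration of the idea, using the pint library (the quantities and names are mine, invented for this sketch):

    import pint

    ureg = pint.UnitRegistry()
    ureg.define("GtC = 1e9 * metric_ton")    # gigatonnes of carbon, defined just for this sketch

    stock = 800 * ureg.GtC                   # e.g., an atmospheric carbon stock
    inflow = 9 * ureg.GtC / ureg.year        # an emissions flow
    dt = 1 * ureg.year

    new_stock = stock + inflow * dt          # dimensionally consistent: GtC plus GtC
    print(new_stock.to(ureg.GtC))            # ~809 GtC

    # A units error is caught immediately instead of silently producing nonsense:
    try:
        stock + inflow                       # GtC + GtC/year -> DimensionalityError
    except pint.DimensionalityError as err:
        print("caught:", err)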

It’s not clear to me that A&G really understand models and modeling. They seem to view everything through the lens of purely statistical methods like linear regression. Green recently wrote,

Another important principle is that the forecasting method should provide a realistic representation of the situation (Principle 7.2). An interesting statement in the MIT report that implies (as one would expect given the state of knowledge and omitted relationships) that the modelers have no idea to what extent their models provide a realistic representation of reality is as follows:

‘Changes in global surface average temperature result from a combination of emissions and climate parameters, and therefore two runs that look similar in terms of temperature may be very different in detail.’ (MIT Report p. 28)

While the modelers have sufficient latitude in their parameters to crudely reproduce a brief period of climate history, there is no reason to believe the models can provide useful forecasts.

What the MIT authors are saying, in essence, is that

T = f(E,P)

and that it is possible to achieve the same future temperature T with different combinations of emissions E and parameters P. Green seems to be taking a leap, to assume that historic T does not provide much constraint on P. First, that’s not necessarily true, given that historic E cannot be chosen freely. It could still be the case that the structure of f(E,P) means that historic T provides a weak constraint on P given E. But if that’s true (as it basically is), the problem is self-diagnosing: estimates of P will have broad confidence bounds, as will forecasts of T. Green completely ignores the MIT authors’ explicit characterization of this uncertainty. He also ignores the fact that the output of the model is not just T, and that we have priors for many elements of P (from more granular models or experiments, for example). Thus we have additional lines of evidence with which to constrain forecasts. Green also neglects to consider the implications of uncertainties in P that are jointly distributed in an offsetting manner (as is likely for climate sensitivity, ocean circulation, and aerosol forcing).
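To see why broad parameter bounds are self-diagnosing rather than disqualifying, here’s a toy illustration (entirely synthetic and of my own construction, not the MIT model): a temperature series generated from GHG and aerosol forcing, then fit over a grid of (sensitivity, aerosol scaling) pairs. Many offsetting combinations fit history about equally well, so the in-sample fit leaves a wide ridge of acceptable parameters.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(150)                    # years of synthetic "history"
    f_ghg = 0.02 * t                      # W/m^2, assumed GHG forcing ramp
    f_aer = -0.01 * t                     # W/m^2, assumed aerosol forcing ramp

    true_lam, true_a = 0.8, 0.7           # "true" sensitivity (degC per W/m^2) and aerosol scale
    temp_obs = true_lam * (f_ghg + true_a * f_aer) + rng.normal(0, 0.1, t.size)

    # Grid search: which (lam, a) pairs fit the history about as well as the truth?
    lams = np.linspace(0.4, 2.0, 81)
    aers = np.linspace(0.0, 1.5, 76)
    rmse = np.empty((lams.size, aers.size))
    for i, lam in enumerate(lams):
        for j, a in enumerate(aers):
            rmse[i, j] = np.sqrt(np.mean((lam * (f_ghg + a * f_aer) - temp_obs) ** 2))

    good = rmse < 1.05 * rmse.min()       # within 5% of the best fit
    lam_good = lams[np.where(good)[0]]
    print(f"sensitivities fitting history about equally well: "
          f"{lam_good.min():.2f} to {lam_good.max():.2f} degC per W/m^2")
    # High-sensitivity/strong-aerosol and low-sensitivity/weak-aerosol combinations
    # are nearly indistinguishable in-sample, so forecasts inherit wide bounds.

The wide range is exactly the self-diagnosis described above: the model doesn’t hide the ambiguity, it reports it.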

A&G provide no formal method to distinguish between situations in which models yield useful or spurious forecasts. In an earlier paper, they claimed rather broadly,

‘To our knowledge, there is no empirical evidence to suggest that presenting opinions in mathematical terms rather than in words will contribute to forecast accuracy.’ (page 1002)

This statement may be true in some settings, but obviously not in general. There are many situations in which mathematical models have good predictive power and outperform informal judgments by a wide margin.

A&G’s latest paper with Willie Soon, Validity of Climate Change Forecasting for Public Policy Decision Making, apparently forthcoming in IJF, is an attempt to make the distinction, i.e. to determine whether climate models have any utility as predictive tools. An excerpt from the abstract summarizes their argument:

Policymakers need to know whether prediction is possible and if so whether any proposed forecasting method will provide forecasts that are substantively more accurate than those from the relevant benchmark method. Inspection of global temperature data suggests that it is subject to irregular variations on all relevant time scales and that variations during the late 1900s were not unusual. In such a situation, a ‘no change’ extrapolation is an appropriate benchmark forecasting method. … The accuracy of forecasts from the benchmark is such that even perfect forecasts would be unlikely to help policymakers. … We nevertheless demonstrate the use of benchmarking with the example of the Intergovernmental Panel on Climate Change’s 1992 linear projection of long-term warming at a rate of 0.03°C-per-year. The small sample of errors from ex ante projections at 0.03°C-per-year for 1992 through 2008 was practically indistinguishable from the benchmark errors. … Again using the IPCC warming rate for our demonstration, we projected the rate successively over a period analogous to that envisaged in their scenario of exponential CO2 growth – the years 1851 to 1975. The errors from the projections were more than seven times greater than the errors from the benchmark method. Relative errors were larger for longer forecast horizons. Our validation exercise illustrates the importance of determining whether it is possible to obtain forecasts that are more useful than those from a simple benchmark before making expensive policy decisions.

There are many things wrong here:

  1. Demonstrating that unforced variability (history) can be adequately forecasted by a naive benchmark has no bearing on whether future forced variability will continue to be well-represented, or whether models can predict future emergence of a signal from noise. AG&S’ procedure is like watching an airplane taxi, concluding that aerodynamics knowledge is of no advantage, and predicting that the plane will remain on the ground forever. (A toy illustration of this point follows the list.)
  2. Comparing a naive forecast for global mean temperature against models amounts to a rejection of a vast amount of information. What is the naive forecast for the joint behavior of temperature, precipitation, lapse rates, sea level, and their spatial and seasonal patterns? These have been evaluated for models, but AG&S do not suggest benchmarks.
  3. A no-change forecast is not necessarily the best naive forecast for a series with unknown variability, if that series has some momentum or structure which can be exploited to do better. The particular no-change forecast selected by AG&S is suboptimal, because it uses a single year as a forecast, unnecessarily projecting annual variation into the future. In general, a stronger naive forecast (e.g., a smoothed value of a few recent years) would strengthen AG&S’ case, so it’s unclear why they’ve chosen an excessively naive benchmark. Fortunately, their base year, 1991, was rather “average”.
  4. The first exhibit presented is the EPICA ice core temperature. Roughly 85% of the data shown has a time interval too long to show century-scale temperature variations, and none of it could be expected to fully reveal decadal-scale variations, so it’s mostly irrelevant with respect to the kind of forecasts they seek to evaluate.
  5. The mere fact that a series has unknown historic variability does not mean that it cannot be forecast [corrected 8/18/09]. The EPICA and Vostok CO2 records look qualitatively much like the temperature record, yet CO2 accumulation in the atmosphere is quite predictable over decadal time scales, and models could handily beat a naive forecast.
  6. AG&S’ method of forecast evaluation unduly weights the short term, like the A&G sucker bet does. This is not strictly a problem, but it does make interpretation of the bounds on AG&S’ alternate forecast (“The benchmark forecast is that the global mean temperature for each year for the rest of this century will be within 0.5°C of the 2008 figure.”) a little tricky.
  7. The retrospective evaluation of the 1990/1992 IPCC projection of 0.3C/decade ignores many factors. First, 0.3C/decade over a century does not imply a smooth trend over short time scales; models and reality have substantial unforced variability which must be taken into account. The paragraph cited by AG&S includes the statement, “The rise will not be steady because of the influence of other factors.” Second, the 1992 report (in the very paragraph AG&S cite) notes that projections do not account for aerosols, so 0.3C/decade can’t be taken as a point prediction for the future, even if contingency on GHG emissions is resolved. Third, the IPCC projection stated approximate bounds – 0.2 to 0.5 C/decade – that should be accounted for in the evaluation, but are not. Still, the IPCC projection beats the naive benchmark.
  8. AG&S’ evaluation of the 0.3C/decade future BAU projection as a backcast over 1851-1975 is absurd. They write, “It is not unreasonable, then, to suppose for the purposes of our validation illustration that scientists in 1850 had noticed that the increasing industrialization of the world was resulting in exponential growth in ‘greenhouse gases’ and to project that this would lead to global warming of 0.03°C per year.” Actually, it’s completely unreasonable. Many figures in the 1990 FAR clearly indicate that the 0.3C/decade projection was not valid on [-infinity,infinity]. For example, figures 6, 8, and 9 from the SPM – just a few pages from material cited by AG&S – clearly show a gentle trend <0.05C/decade through 1950. Furthermore, even the most rudimentary understanding of the dynamics of GHG and heat accumulation is sufficient to realize that one would not expect a linear historic temperature trend to emerge from the emissions signal.
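Here’s the toy illustration promised in point 1 (entirely synthetic and of my own construction, not AG&S’ data): when a trend is still emerging from noise, a no-change benchmark looks competitive at short horizons and falls apart at long ones, so an error comparison dominated by short horizons says little about a model’s long-run value.

    import numpy as np

    rng = np.random.default_rng(3)
    n_years, n_runs = 100, 500
    trend = 0.02       # degC/yr, assumed underlying trend
    noise = 0.1        # degC, assumed interannual noise

    t = np.arange(n_years)
    series = trend * t + rng.normal(0, noise, (n_runs, n_years))

    base = 30                                     # forecast origin
    horizons = np.arange(1, n_years - base)
    actual = series[:, base + horizons]           # shape (n_runs, n_horizons)

    # Naive "no change" benchmark: the single origin-year value, projected forward
    naive = series[:, base][:, None]
    # Trend forecast: extrapolate the (here, known) 0.02 degC/yr trend from the origin year
    trend_fc = series[:, base][:, None] + trend * horizons

    mae_naive = np.abs(actual - naive).mean(axis=0)
    mae_trend = np.abs(actual - trend_fc).mean(axis=0)

    for h in (1, 5, 20, 50):
        print(f"h={h:2d}y  naive MAE={mae_naive[h-1]:.2f}  trend MAE={mae_trend[h-1]:.2f}")
    # At h=1 the two are nearly identical; by h=50 the naive error (~1 degC)
    # dwarfs the trend forecast's, which stays at the noise level.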

How do AG&S arrive at this sorry state? Their article embodies a “sh!t happens” epistemology. They write, “The belief that ‘things have changed’ and the future cannot be judged by the past is common, but invalid.” The problem is, one can say with equal confidence that, “the belief that ‘things never change’ and the past reveals the future is common, but invalid.” In reality, there are predictable phenomena (the orbits of the planets) and unpredictable ones (the fall of the Berlin wall). AG&S have failed to establish that climate is unpredictable or to provide us with an appropriate method for deciding whether it is predictable or not. Nor have they given us any insight into how to know or what to do if we can’t decide. Doing nothing because we think we don’t know anything is probably better than sacrificing virgins to the gods, but it doesn’t strike me as a robust strategy.

The only thing worse than cap & trade …

… is Marty Feldstein’s lame arguments against it.

  • He cites CBO household costs of policy that reflect outlays, rather than real deadweight or welfare losses after revenue recycling.
  • He wants the US to wait for global agreement before moving. News flash: there won’t be a global agreement without some US movement.
  • He argues that unilateral action is ineffective: true, but irrelevant if you aim to solve the problem. However, if that’s our moral philosophy, I think I should be exempted from all laws – on a global scale, no one will notice my murdering and pillaging, and it’ll be fun for me.

There is one nugget of wisdom in Feldstein’s piece: it’s a travesty to overcompensate carbon-intensive firms, and foolish to use allowance allocation to utilities to defeat the retail price signal. I haven’t read the details of the bill yet, so I don’t know how extensive those provisions really are, but it’s definitely something to watch.

Well, OK, lots of things are worse than cap & trade. More importantly, one thing (an upstream carbon tax) could be a lot better than Waxman-Markey. But it’s sad when a Harvard economist sounds like an astroturf skeptic.

Hat tip to Economist’s View.