Unskeptical skepticism

Atmospheric CO2 doesn’t drive temperature, and temperature doesn’t drive CO2. They drive each other, in a feedback loop. Each relationship involves integration: CO2 accumulates carbon fluxes that depend on temperature, through mechanisms like forest growth and ocean uptake, and temperature is the accumulation of heat flux, which is controlled in part by the radiative effects of CO2.

This has been obvious for decades, yet it still eludes many. A favorite counter-argument against an influence of CO2 on temperature has long been the observation that temperature appears to lead CO2 at turning points in the ice core record. Naively, this appears to violate a requirement for establishing causality, that cause must precede effect. But climate is not a simple system with binary states and events, discrete time and single causes. In a feedback system, the fact that X lags Y by some discernible amount doesn’t rule out an influence of X on Y; in fact, such bidirectional causality is essential to even the simplest oscillators.
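To make the point concrete, here’s a minimal sketch in Python (a generic oscillator with made-up parameters, not a climate model): two state variables drive each other in a single feedback loop, yet one reliably leads the other at every turning point.

    # Toy oscillator: x and v each drive the other, yet v "leads" x by a quarter
    # cycle at every turning point. Lead-lag alone cannot settle causality.
    import numpy as np

    dt = 0.01
    t = np.arange(0, 20, dt)
    x = np.zeros_like(t)   # one state variable
    v = np.zeros_like(t)   # the other
    x[0] = 1.0

    for i in range(1, len(t)):
        v[i] = v[i - 1] - x[i - 1] * dt   # x influences v ...
        x[i] = x[i - 1] + v[i] * dt       # ... and v influences x

    # Compare turning points away from the initial condition:
    window = (t > 3) & (t < 8)
    print("v peaks at t =", round(t[window][np.argmax(v[window])], 2))
    print("x peaks at t =", round(t[window][np.argmax(x[window])], 2))

Here v leads x by a quarter cycle, even though neither is “the” cause; both links in the loop are essential.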

A newish paper by Shakun et al. sheds some light on the issue of ice age turning points. It turns out that much of the issue is a matter of data – that ice core records are not representative of global temperatures. But it still appears that CO2 is not the triggering mechanism for deglaciation. The authors speculate that the trigger is northern hemisphere temperatures, presumably driven by orbital insolation changes, followed by changes in ocean circulation. Then CO2 kicks in as amplifier. Simulation backs this up, though it appears to me from figure 3 that models capture the qualitative dynamics, but underpredict the total variance in temperature over the period. To me, this is an interesting step toward a more complete understanding of ice age terminations, but I’ll wait for a few more papers before accepting declarations of victory on the topic.

Predictably, climate skeptics hate this paper. For example, consider Master Tricksed Us! at WattsUpWithThat. Commenters positively drool over the implication that Shakun et al. “hid the incline” by declining to show the last 6,000 years of the proxy temperature/CO2 relationship.

I leave the readers to consider the fact that for most of the Holocene, eight millennia or so, half a dozen different ice core records say that CO2 levels were rising pretty fast by geological standards … and despite that, the temperatures have been dropping over the last eight millennia …

But not so fast. First, there’s no skepticism about the data. Perhaps Shakun et al. omitted the last 6k years for a good reason, like homogeneity. A spot check indicates that there might be issues: series MD95-2037 ends in the year 6838 BP, for example. So, perhaps the WUWT graph merely shows spatial selection bias in the dataset. Second, the implication that rising CO2 and falling temperatures somehow disprove a CO2->temperature link is yet another failure to appreciate bathtub dynamics and multivariate causality.
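To illustrate the multivariate point with invented numbers (mine, not Shakun’s or WUWT’s): if some other forcing, say declining orbital insolation, falls faster than CO2 forcing rises, temperature can drift downward even while CO2 continues to exert a warming influence.

    # Two-cause sketch with made-up numbers: falling temperature alongside rising
    # CO2 is perfectly consistent with CO2 having a warming effect.
    import numpy as np

    millennia = np.arange(0, 9)              # roughly the last 8 kyr
    co2_forcing = 0.02 * millennia           # slowly rising CO2 forcing, W/m^2 (invented)
    orbital_forcing = -0.05 * millennia      # more strongly declining insolation (invented)
    sensitivity = 0.8                        # K per W/m^2 (an assumption)

    temperature = sensitivity * (co2_forcing + orbital_forcing)
    print(np.round(temperature, 2))          # trends downward despite rising CO2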

This credulous fawning over the slightest hint of a crack in mainstream theory strikes me as the opposite of skepticism. The essence of a skeptical attitude, I think, is to avoid early lock-in to any one pet theory or data stream. Winning theories emerge from testing lots of theories against lots of constraints. That requires continual questioning of models and data, but also questioning of the questions. Objections that violate basic physics, like accumulation, or heaps of mutually exclusive objections, have to be discarded like any other failed theory. The process should involve more than fools asking more questions than a wise man can answer. At the end of the day, “no theory is possible” is itself a theory that implies null predictions, which can be falsified like any other theory, if it’s been stated explicitly enough.

Burt Rutan's climate causality confusion

I’ve always thought Burt Rutan was pretty cool, so I was rather disappointed when he signed on to a shady climate op-ed in the WSJ (along with Scott Armstrong). I was curious what Rutan’s mental model was, so I googled and found his arguments summarized in an extensive slide deck, available here.

It would probably take me 98 posts to detail the problems with these 98 slides, so I’ll just summarize a few that are particularly noteworthy from the perspective of learning about complex systems.

Data Quality

Rutan claims to be motivated by data fraud:

In my background of 46 years in aerospace flight testing and design I have seen many examples of data presentation fraud. That is what prompted my interest in seeing how the scientists have processed the climate data, presented it and promoted their theories to policy makers and the media. (here)

This is ironic, because he credulously relies on a good deal of fraudulent data himself. For example, Slide 18 attempts to show that CO2 concentrations were actually much higher in the 19th century. But that’s bogus, because many of those measurements were from urban areas or otherwise subject to large measurement errors and bias. You can reject many of the data points on first principles, because they imply physically impossible carbon fluxes (500 billion tons in one year).
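A rough back-of-envelope check (my arithmetic, not anything in the slides): each ppm of atmospheric CO2 corresponds to roughly 2.13 billion tons of carbon, so the big, fast excursions in those early chemical measurements imply fluxes far beyond any plausible source or sink.

    # Convert a hypothetical ppm excursion into the carbon flux it implies.
    GTC_PER_PPM = 2.13            # ~2.13 GtC of carbon per ppm of atmospheric CO2
    CO2_PER_C = 44.0 / 12.0       # mass ratio of CO2 to carbon

    def implied_flux_gt_co2(delta_ppm):
        """Gt of CO2 that must enter (or leave) the atmosphere for a given ppm change."""
        return delta_ppm * GTC_PER_PPM * CO2_PER_C

    # e.g., a hypothetical 60 ppm spike and recovery within a year or two:
    print(round(implied_flux_gt_co2(60)))    # ~469 Gt CO2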

Slides 32-34 also present some rather grossly distorted comparisons of data and projections, complete with attributions of temperature cycles that appear to bear no relationship to the data (Slide 33, right figure, red line).

Slides 50+ discuss the urban heat island effect and surfacestations.org effort. Somehow they neglect to mention that the outcome of all of that was a cool bias in the data, not a warm bias.

Bathtub Dynamics

Slides 27 and 28 seek a correlation between the CO2 and temperature time series. Failure to find one is taken as evidence that CO2 does not significantly influence temperature. But this is a basic failure to appreciate bathtub dynamics. Temperature is an indicator of the accumulation of heat. Heat integrates radiative flux, which depends on GHG concentrations. So, even in a perfect system where CO2 is the only causal influence on temperature, we would not expect to see matching temporal trends in emissions, concentrations, and temperatures. How do you escape engineering school and design airplanes without knowing about integration?
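Here’s a minimal bathtub sketch (a toy energy balance with assumed parameters, nothing like a real climate model): temperature integrates net radiative flux, so even when CO2 is the only driver, the two series have different shapes, and temperature keeps rising after CO2 stops.

    # Toy energy balance: heat (and thus temperature) integrates net radiative flux.
    import numpy as np

    years = np.arange(0, 200)
    co2 = np.where(years < 100, 280.0 + 1.0 * years, 380.0)   # CO2 rises, then is held flat
    forcing = 5.35 * np.log(co2 / 280.0)                      # logarithmic CO2 forcing, W/m^2

    heat_capacity = 14.0   # W-yr/m^2/K, a rough ocean mixed-layer value (assumption)
    feedback = 1.3         # W/m^2/K, climate feedback parameter (assumption)

    temp = np.zeros(len(years))
    for i in range(1, len(years)):
        net_flux = forcing[i - 1] - feedback * temp[i - 1]
        temp[i] = temp[i - 1] + net_flux / heat_capacity      # the integration (bathtub) step

    # Temperature keeps climbing for decades after CO2 flattens at year 100,
    # because the stock of heat is still adjusting toward equilibrium.
    print("T at year 100:", round(temp[100], 2), " T at year 199:", round(temp[-1], 2))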

Correlation and causation

Slide 28 also engages in the fallacy of the single cause and denying the antecedent. It proposes that, because warming rates were roughly the same from 1915-1945 and 1970-2000, while CO2 concentrations varied, CO2 cannot be the cause of the observations. This of course presumes (falsely) that CO2 is the only influence on temperatures, neglecting volcanoes, endogenous natural variability, etc., not to mention blatantly cherry-picking arbitrary intervals.

Slide 14 shows another misattribution of single cause, comparing CO2 and temperature over 600 million years, ignoring little things like changes in the configuration of the continents and output of the sun over that long period.

In spite of the fact that Rutan generally argues against correlation as evidence for causation, Slide 46 presents correlations between orbital factors and sunspots (the latter smoothed in some arbitrary way) as evidence that these factors do drive temperature.

Feedback

Slide 29 shows temperature leading CO2 in ice core records, concluding that temperature must drive CO2, and not the reverse. In reality, temperature and CO2 drive one another in a feedback loop. That turning points in temperature sometimes lead turning points in CO2 does not preclude CO2 from acting as an amplifier of temperature changes. (Recently there has been a little progress on this point.)
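A toy lead-lag/amplifier sketch (invented parameters, not a paleoclimate model): an orbital-style forcing drives temperature, CO2 follows temperature with a lag, and CO2 in turn adds forcing. Temperature leads CO2 at every turning point, yet switching the CO2 feedback off noticeably shrinks the temperature swings.

    # Temperature leads CO2, but CO2 still amplifies the cycle.
    import numpy as np

    dt = 1.0                                   # time step, kyr
    t = np.arange(0, 400, dt)
    orbital = np.sin(2 * np.pi * t / 100.0)    # idealized 100 kyr forcing cycle

    def simulate(co2_gain):
        temp = np.zeros_like(t)
        co2 = np.zeros_like(t)                 # anomalies around a reference state
        for i in range(1, len(t)):
            # CO2 relaxes toward a temperature-dependent level (slow ocean/biosphere response)
            co2[i] = co2[i - 1] + (temp[i - 1] - co2[i - 1]) * dt / 8.0
            # temperature relaxes toward orbital forcing plus CO2 forcing
            target = orbital[i - 1] + co2_gain * co2[i - 1]
            temp[i] = temp[i - 1] + (target - temp[i - 1]) * dt / 4.0
        return temp, co2

    temp_fb, co2_fb = simulate(co2_gain=0.6)   # with the CO2 amplifier
    temp_no, _ = simulate(co2_gain=0.0)        # amplifier switched off

    window = slice(200, 300)                   # one full cycle, after the transient dies out
    print("temp peak at t =", t[window][np.argmax(temp_fb[window])],
          " CO2 peak at t =", t[window][np.argmax(co2_fb[window])])
    print("temp swing with CO2 feedback:   ", round(np.ptp(temp_fb[window]), 2))
    print("temp swing without CO2 feedback:", round(np.ptp(temp_no[window]), 2))

The CO2 lag is just the slow response of oceans and biosphere; it does nothing to prevent the feedback from enlarging the swing.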

Too small to matter

Slide 12 indicates that CO2 concentrations are too small to make a difference, which has no physical basis, other than the general misconception that small numbers don’t matter.

Computer models are not evidence

So Rutan claims on slide 47. Of course this is true in a trivial sense, because one can always build arbitrary models that bear no relationship to anything.

But why single out computer models? Mental models and pencil-and-paper calculations are not uniquely privileged. They are just as likely to fail to conform to data, laws of physics, and rules of logic as a computer model. In fact, because they’re not stated formally, testable automatically, or easily shared and critiqued, they’re more likely to contain some flaws, particularly mathematical ones. The more complex a problem becomes, the more the balance tips in favor of formal (computer) models, particularly in non-experimental sciences where trial-and-error is not practical.

There’s also no such thing as model-free inference. Rutan presents many of his charts as if the data speak for themselves. In fact, no measurements can be taken without a model of the underlying process to be measured (in a thermometer, the thermal expansion of a fluid). More importantly, even the simplest trend calculation or comparison of time series implies a model. Leaving that model unstated just makes it easier to engage in bathtub fallacies and other errors in reasoning.
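A quick illustration (synthetic data, not any particular chart from the slides): “just computing the trend” already embeds a model, and the answer depends on the functional form and the window you choose.

    # The same synthetic series yields different "trends" depending on the assumed
    # model (here, a straight line) and the chosen fitting window.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(60)
    y = 0.02 * t + 0.3 * np.sin(2 * np.pi * t / 20) + rng.normal(0, 0.1, len(t))

    full_trend = np.polyfit(t, y, 1)[0]                # linear model, full window
    short_trend = np.polyfit(t[-15:], y[-15:], 1)[0]   # same model, cherry-picked window

    print("trend over the full window:", round(full_trend, 3))
    print("trend over the last 15 pts:", round(short_trend, 3))

Both numbers are outputs of a model; neither is raw data speaking for itself.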

The bottom line

The problem here is that Rutan has no computer model. So, he feels free to assemble a dog’s breakfast of data, sourced from illustrious scientific institutions like the Heritage Foundation (slide 12), and call it evidence. Because he skips the exercise of trying to put everything into a rigorous formal feedback model, he’s received no warning signs that he has strayed far from reality.

I find this all rather disheartening. Clearly it is easy for a smart, technical person to be wildly incompetent outside their original field of expertise. But worse, it’s even easier for such a person to assemble a portfolio of pseudoscience that looks like evidence, capitalizing on past achievements to sucker a loyal following.

Strange times for Europe's aviation carbon tax

The whole global climate negotiation process is a bit of a sideshow, in that negotiators don’t have the freedom to actually agree to anything meaningful. When they head to Poznan, or Copenhagen, or Durban, they get their briefings from finance and economic ministries, not environment ministries. Evidently the mandate is that there’s no way most countries will agree to anything like the significant emissions cuts needed to achieve stabilization.

That’s particularly clear at the moment, with Europe imposing a carbon fee on flights using their airspace, and facing broad opposition. And what opponent makes the biggest headlines? India’s environment minister – possibly the person on the planet who should be happiest to see any kind of meaningful emissions policy anywhere.

Clearly, climate is not driving the bus.

A Titanic feedback reversal

Ever get in a hotel shower and turn the faucet the wrong way, getting scalded or frozen as a result? It doesn’t help when the faucet is unmarked or backwards. If a new account is correct, that’s what happened to the Titanic.

(Reuters) – The Titanic hit an iceberg in 1912 because of a basic steering error, and only sank as fast as it did because an official persuaded the captain to continue sailing, an author said in an interview published on Wednesday.

“They could easily have avoided the iceberg if it wasn’t for the blunder,” Patten told the Daily Telegraph.

“Instead of steering Titanic safely round to the left of the iceberg, once it had been spotted dead ahead, the steersman, Robert Hitchins, had panicked and turned it the wrong way.”

Patten, who made the revelations to coincide with the publication of her new novel “Good as Gold” into which her account of events are woven, said that the conversion from sail ships to steam meant there were two different steering systems.

Crucially, one system meant turning the wheel one way and the other in completely the opposite direction.

Once the mistake had been made, Patten added, “they only had four minutes to change course and by the time (first officer William) Murdoch spotted Hitchins’ mistake and then tried to rectify it, it was too late.”

It sounds like the steering layout violates most of Norman’s design principles (summarized here):

  1. Use both knowledge in the world and knowledge in the head.
  2. Simplify the structure of tasks.
  3. Make things visible: bridge the Gulfs of Execution and Evaluation.
  4. Get the mappings right.
  5. Exploit the power of constraints, both natural and artificial.
  6. Design for error.
  7. When all else fails, standardize.

Notice that these are really all about providing appropriate feedback, mental models, and robustness.

(This is a repost from Sep. 22, 2010, for the 100th anniversary of the sinking.)

What a real breakthrough might look like

It’s possible that a techno fix will stave off global limits indefinitely, in a Star Trek future scenario. I think it’s a bad idea to rely on it, because there’s no backup plan.

But it’s equally naive to think that we can return to some kind of low-tech golden age. There are too many people to feed and house, and those bygone eras look pretty ugly when you peer under the mask.

But this is a false dichotomy.

Some techno/growth enthusiasts talk about sustainability as if it consisted entirely of atavistic agrarian aspirations. But what a lot of sustainability advocates are after, myself included, is a high-tech future that operates within certain material limits (planetary boundaries, if you will) before those limits enforce themselves in nastier ways. That’s not really too hard to imagine; we already have a high tech economy that operates within limits like the laws of motion and gravity. Gravity takes care of itself, because it’s instantaneous. Stock pollutants and resources don’t, because consequences are remote in time and space from actions; hence the need for coordination.

The neo-cornucopians, live from planet Deepwater Horizon

On the heels of the 40th anniversary of Limits to Growth, the Breakthrough crowd is still pushing a technical miracle, just around the corner. Their latest editorial paints sustainability advocates as the bad guys:

Stop and think for a moment about the basic elements of the planetary boundaries hypothesis: apocalyptic fears of the future, a professed desire to return to an earlier state of nature, hypocrisy about wealth, appeals to higher authorities. These are the qualities of our worst religions, not our best scientific theories.

Who are these straw dog greenies, getting rich and ruling the world? Anyway, I thought the planetary boundaries were about biogeophysical systems, appealing to “higher authority” in that the laws of physics apply to civilizations too. Ted Nordhaus doesn’t believe it though:

To be sure, there are tipping points in nature, including in the climate system, but there is no way for scientists to identify fixed boundaries beyond which point human civilization becomes unsustainable for the simple reason that there are no fixed boundaries.

The Breakthrough prescription for the ills of growth is more growth.