Forest Tipping in the Rockies

Research shows that some forests in the Rockies aren’t recovering from wildfires.

Evidence for declining forest resilience to wildfires under climate change

Abstract
Forest resilience to climate change is a global concern given the potential effects of increased disturbance activity, warming temperatures and increased moisture stress on plants. We used a multi‐regional dataset of 1485 sites across 52 wildfires from the US Rocky Mountains to ask if and how changing climate over the last several decades impacted post‐fire tree regeneration, a key indicator of forest resilience. Results highlight significant decreases in tree regeneration in the 21st century. Annual moisture deficits were significantly greater from 2000 to 2015 as compared to 1985–1999, suggesting increasingly unfavourable post‐fire growing conditions, corresponding to significantly lower seedling densities and increased regeneration failure. Dry forests that already occur at the edge of their climatic tolerance are most prone to conversion to non‐forests after wildfires. Major climate‐induced reduction in forest density and extent has important consequences for a myriad of ecosystem services now and in the future.

I think this is a simple example of a tipping point in action.

Forest Cover Tipping Points

Using an example from Hirota et al. (from my toy model article above), here's what happens:

At high precipitation, a fire (red arrow, top) takes the forest down to zero tree cover, but regrowth (green arrow, top) restores the forest. At lower precipitation, due to climate change, the forest remains stable, until fire destroys it (lower red arrow). Then regrowth can’t get past the newly-stable savanna state (lower green arrow). No amount of waiting will take the trees from 30% cover to the original 90% tree cover. (The driving forces might be more complex than precipitation and fire; things like insects, temperature, snowpack and evaporation also matter.)
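To make the hysteresis concrete, here's a minimal sketch of a bistable tree-cover model in Python. This is not Hirota et al.'s actual model; the functional form and all parameters are invented purely to reproduce the behavior described above.

```python
import numpy as np

def dcover_dt(c, precip, r=1.0):
    """Toy tree-cover dynamics (illustrative only, not Hirota et al.'s model).

    Wet climate (precip >= 1): only the forest state (~90% cover) is stable,
    so cover regrows from near zero after a fire.
    Dry climate (precip < 1): a savanna state (~30% cover) is also stable,
    and regrowth from near zero stalls there.
    """
    forest = 0.9
    if precip >= 1.0:
        return r * c * (forest - c)                                # single attractor near 90%
    savanna, threshold = 0.3, 0.6
    return r * (c - savanna) * (c - threshold) * (forest - c)      # bistable: 30% and 90% stable

def final_cover(c0, precip, dt=0.01, t_end=60.0):
    """Integrate forward with simple Euler steps and return the settled cover."""
    c = c0
    for _ in range(int(t_end / dt)):
        c = float(np.clip(c + dcover_dt(c, precip) * dt, 0.0, 1.0))
    return c

print(final_cover(0.05, precip=1.2))   # post-fire, wet climate:  recovers to ~0.9
print(final_cover(0.05, precip=0.8))   # post-fire, dry climate:  stuck near ~0.3
print(final_cover(0.90, precip=0.8))   # unburned, dry climate:   remains at ~0.9
```

In the dry case, waiting forever doesn't restore the forest; only moving the threshold (more moisture, or some intervention) does.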

The insidious thing about this is that you can't tell that the forest state has become destabilized until the tipping event happens. That means the complexity of the system defeats any simple heuristic for managing the trees. The existence of healthy, full tree cover doesn't imply that the trees will grow back to the same state after a catastrophe or clearcut.

Climate Bathtub Chartjunk

I just ran across Twist and Shout: Images and Graphs in Skeptical Climate Media, a compendium of cherry picking and other chartjunk abuses.

I think it misses a large class of (often willful) errors: ignoring the climate bathtub. Such charts typically plot CO2 emissions or concentration against temperature, with the implication that any lack of correlation indicates a problem with the science. But this engages in a combination of a pattern matching fallacy and fallacy of the single cause. Sometimes these things make it into the literature, but most live on swampy skeptic sites.

An example, reportedly from John Christy, who should know better:

Notice how we’re supposed to make a visual correlation between emissions and temperature (even though two integrations separate them, and multiple forcings and noise influence temperature). Also notice how the nonzero minimum axis crossing for CO2 exaggerates the effect. That’s in addition to the usual tricks of inserting an artificial trend break at the 1998 El Nino and truncating the rest of history.
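For anyone who wants to see what "two integrations" means in practice, here's a minimal stock-and-flow sketch in Python. The parameters are round, illustrative numbers, not a calibrated climate model.

```python
import numpy as np

years = np.arange(1900, 2101)
emissions = np.clip(0.1 * (years - 1900), 0.0, 10.0)   # GtC/yr: grows, then flat after 2000

airborne_frac = 0.5     # fraction of emissions remaining airborne (assumed)
ppm_per_gtc = 0.47      # ~2.13 GtC per ppm of CO2
lam = 0.8               # equilibrium response, deg C per W/m^2 (assumed)
tau = 30.0              # effective thermal adjustment time, years (assumed)

co2 = np.full(len(years), 290.0)    # ppm
temp = np.zeros(len(years))         # deg C anomaly

for i in range(1, len(years)):
    # Stock 1: atmospheric CO2 accumulates emissions
    co2[i] = co2[i-1] + airborne_frac * ppm_per_gtc * emissions[i-1]
    # Stock 2: temperature adjusts slowly toward the forcing-implied equilibrium
    forcing = 5.35 * np.log(co2[i-1] / 280.0)           # W/m^2
    temp[i] = temp[i-1] + (lam * forcing - temp[i-1]) / tau

# After 2000 the emissions input is flat, yet CO2 keeps climbing and temperature
# keeps rising toward equilibrium -- so eyeballing emissions against temperature
# for a correlation tests nothing about the underlying science.
```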

Silver Lining to the White House Climate Panel?

The White House is reportedly convening a panel to reexamine the scientific consensus on climate. How does that work, exactly? Are they going to publish thousands of new papers to shift the apparent balance of opinion in the scientific literature? And hasn’t analysis of consensus already been done to death, with a null result for the skeptics?

The problem is that there isn’t much for skeptics to work with. There aren’t any models that make useful predictions with very low climate sensitivity. In fact, skeptical predictions haven’t really panned out at all. Lindzen’s Adaptive Iris is still alive – sort of – but doesn’t result in a strong negative feedback. The BEST reanalysis didn’t refute previous temperature data. The surfacestations.org effort used crowdsourcing to reveal some serious weather station siting problems, which ultimately amounted to nothing.

And those are really the skeptics’ Greatest Hits. After that, it’s a rapid fall from errors to nuts. No, satellite temperatures don’t show a negative trend. Yes, Fourier and wavelet analyses are typically silly, but fortunately tend to refute themselves quickly. This list could grow long quickly, though skeptics are usually pretty reluctant to make testable models or predictions. That’s why even prominent outlets for climate skepticism have to resort to simple obfuscation.

So, if there’s a silver lining to the proposed panel, it’s that they’d have to put the alleged skeptics’ best foot forward, by collecting and identifying the best models, data and predictions. Then it would be readily apparent what a puny body of evidence that yielded.

 

Future Climate of the Bridgers

Ten years ago, I explored future climate analogs for my location in Montana:

When things really warm up, to +9 degrees F (not at all implausible in the long run), 16 of the top 20 analogs are in CO and UT, …

Looking at a lot of these future climate analogs on Google Earth, their common denominator appears to be rattlesnakes. I’m sure they’re all nice places in their own way, but I’m worried about my trees. I’ll continue to hope that my back-of-the-envelope analysis is wrong, but in the meantime I’m going to hedge by managing the forest to prepare for change.

I think there’s a lot more to worry about than trees. Fire, wildlife, orchids, snowpack, water availability, …

Recently I decided to take another look, partly inspired by the Bureau of Reclamation’s publication of downscaled data. This solves some of the bias correction issues I had in 2008. I grabbed the model output (36 runs from CMIP5) and observations for the 1/8 degree gridpoint containing Bridger Bowl:

Then I used Vensim to do a little data processing, converting the daily time series (which are extremely noisy weather) into 10-year moving averages (i.e., climate).
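For anyone replicating this outside Vensim, the smoothing step is a one-liner in pandas. The file name and column name below are hypothetical placeholders for the downscaled daily series at the Bridger Bowl gridpoint:

```python
import pandas as pd

# Hypothetical CSV of downscaled daily output: columns 'date' and 'tavg' (deg C).
daily = pd.read_csv("bridger_bowl_daily.csv", parse_dates=["date"], index_col="date")

# Daily values are noisy weather; a trailing 10-year mean approximates climate.
climate = daily["tavg"].rolling(window="3653D", min_periods=3000).mean()

print(climate.dropna().tail())
```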

The Nordhaus Nobel

Congratulations to William Nordhaus for winning a Nobel in Economics for work on climate. However … I find that this award leaves me conflicted. I’m happy to see the field proclaim that it’s optimal to do something about climate change. But if this is the best economics has to offer, it’s also an indication of just how far divorced the field is from reality. (Or perhaps not; not all economists agree that we have reached a Neoclassical nirvana.)

Nordhaus was probably the first big name in economics to tackle the problem, and has continued to refine the work over more than two decades. At the same time, Nordhaus’ work has never recommended more than a modest effort to solve the climate problem. In the original DICE model, the optimal policy reduced emissions about 10%, with a tiny carbon tax of $10-15/tonC – a lot less than a buck a gallon on gasoline, for example. (Contrast this perspective with Stopping Climate Change Is Hopeless. Let’s Do It.)
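For scale, a rough conversion (assuming gasoline contains about 2.4 kgC per gallon):

$$ \$10\text{ to }\$15/\mathrm{tC} \times 0.0024\ \mathrm{tC/gallon} \approx \$0.02\text{ to }\$0.04/\mathrm{gallon} $$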

Nordhaus’ mild prescription for action emerges naturally from the model’s assumptions. Ask yourself if you agree with the following statements:

If you find yourself agreeing, congratulations – you’d make a successful economist! All of these and more were features of the original DICE and RICE models, and the ones that most influence the low optimal price of carbon survive to this day. That low price waters down real policies, like the US government’s social cost of carbon.

In any case, you’re not off the hook; even with these rosy assumptions Nordhaus finds that we still ought to have a real climate policy. Perhaps that is the greatest irony here – that even the most Neoclassical view of climate that economics has to offer still recommends action. The perspective that climate change doesn’t exist or doesn’t matter requires assumptions even more contorted than those above, in a mythical paradise where fairies and unicorns cavort with the invisible hand.

Limits to Growth Redux

Every couple of years, an article comes out reviewing the performance of the World3 model against data, or constructing an alternative, extended model based on World3. Here’s the latest:

Abstract
This study investigates the notion of limits to socioeconomic growth with a specific focus on the role of climate change and the declining quality of fossil fuel reserves. A new system dynamics model has been created. The World Energy Model (WEM) is based on the World3 model (The Limits to Growth, Meadows et al., 2004) with climate change and energy production replacing generic pollution and resources factors. WEM also tracks global population, food production and industrial output out to the year 2100. This paper presents a series of WEM’s projections; each of which represent broad sweeps of what the future may bring. All scenarios project that global industrial output will continue growing until 2100. Scenarios based on current energy trends lead to a 50% increase in the average cost of energy production and 2.4–2.7 °C of global warming by 2100. WEM projects that limiting global warming to 2 °C will reduce the industrial output growth rate by 0.1–0.2%. However, WEM also plots industrial decline by 2150 for cases of uncontrolled climate change or increased population growth. The general behaviour of WEM is far more stable than World3 but its results still support the call for a managed decline in society’s ecological footprint.

The new paper puts economic collapse about a century later than it occurred in Limits. But that presumes that the simplification highlighted above (climate change and energy production standing in for generic pollution and resources) is a legitimate one: that GHGs are the only pollutant, and energy the only resource, that matters. Are we really past the point of concern over PCBs, heavy metals, etc., with all future chemical and genetic technologies free of risk? Well, maybe … (Note that climate integrated assessment models generally indulge in the same assumption.)

But quibbling over dates is to miss a key point of Limits to Growth: the model, and the book, are not about point prediction of collapse in year 20xx. The central message is about a persistent overshoot behavior mode in a system with long delays and finite boundaries, when driven by exponential growth.

We have deliberately omitted the vertical scales and we have made the horizontal time scale somewhat vague because we want to emphasize the general behavior modes of these computer outputs, not the numerical values, which are only approximately known.
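As a reminder of what that behavior mode looks like, here's a minimal sketch (nothing like World3's detail; all parameters are invented for illustration) of exponential growth responding with a delay to a finite, erodible limit:

```python
import numpy as np

dt, t_end = 0.25, 300.0
n = int(t_end / dt)
pop = np.zeros(n)        # growing stock (population / industrial output proxy)
cap = np.zeros(n)        # finite, erodible carrying capacity
perceived = np.zeros(n)  # delayed perception of loading (pop/cap)
pop[0], cap[0] = 1.0, 100.0

growth_rate = 0.05   # /yr, assumed
delay = 20.0         # years to perceive overshoot, assumed
erosion = 0.02       # /yr capacity loss per unit of overload, assumed

for i in range(1, n):
    ratio = pop[i-1] / cap[i-1]
    perceived[i] = perceived[i-1] + (ratio - perceived[i-1]) * dt / delay
    growth = growth_rate * pop[i-1] * (1.0 - perceived[i])   # responds to the *delayed* signal
    cap[i] = cap[i-1] * (1.0 - erosion * max(ratio - 1.0, 0.0) * dt)
    pop[i] = max(pop[i-1] + growth * dt, 0.0)

# Population overshoots the (eroding) capacity and then declines; the timing
# depends entirely on parameters, but the overshoot mode itself is robust.
```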

No, Climate Change CAN’T Be Stopped by Turning Air Into Gasoline

My award for dumbest headline of the week goes to The Atlantic:

Climate Change Can Be Stopped by Turning Air Into Gasoline

A team of scientists from Harvard University and the company Carbon Engineering announced on Thursday that they have found a method to cheaply and directly pull carbon-dioxide pollution out of the atmosphere.

If their technique is successfully implemented at scale, it could transform how humanity thinks about the problem of climate change. It could give people a decisive new tool in the race against a warming planet, but could also unsettle the issue’s delicate politics, making it all the harder for society to adapt.

Their research seems almost to smuggle technologies out of the realm of science fiction and into the real. It suggests that people will soon be able to produce gasoline and jet fuel from little more than limestone, hydrogen, and air. It hints at the eventual construction of a vast, industrial-scale network of carbon scrubbers, capable of removing greenhouse gases directly from the atmosphere.

The underlying article that triggered the story has nothing to do with turning CO2 into gasoline. It’s purely about lower-cost direct capture of CO2 from the air (DAC). Even if we assume that the article’s right, and DAC is now cheaper, that in no way means “climate change can be stopped.” There are several huge problems with that notion:

First, if you capture CO2 from the air, make a liquid fuel out of it, and burn that in vehicles, you’re putting the CO2 back in the air. This doesn’t reduce CO2 in the atmosphere; it just reduces the growth rate of CO2 in the atmosphere by displacing the fossil carbon that would otherwise be used. With constant radiative forcing from elevated CO2, temperature will continue to rise for a long time. You might get around this by burning the fuel in stationary plants and sequestering the CO2, but there are huge problems with that as well. There are serious sink constraint problems, and lots of additional costs.

Second, just how do you turn all that CO2 into fuel? The additional step is not free, nor is it conventional Fischer-Tropsch technology, which starts with syngas from coal or gas. You need relatively vast amounts of energy and hydrogen to do it on the necessary gigatons/year scale. One estimate puts the cost of such fuels at $3.80-9.20 a gallon (some of the costs overlap, but it’ll be more at the pump, after refining and marketing).

Third, who the heck is going to pay for all of this? If you want to just offset global emissions of ~40 gigatons CO2/year at the most optimistic cost of $100/ton, with free fuel conversion, that’s $4 trillion a year. If you’re going to cough up that kind of money, there are a lot of other things you could do first, but no one has an incentive to do it when the price of emissions is approximately zero.
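Just to make that arithmetic explicit:

$$ 40\ \mathrm{Gt\,CO_2/yr} \times \$100/\mathrm{t\,CO_2} = \$4\ \mathrm{trillion/yr} $$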

Ironically, the Carbon Engineering team seems to be aware of these problems:

Keith said it was important to still stop emitting carbon-dioxide pollution where feasible. “My view is we should stick to trying to cut emissions first. As a voter, my view is it’s cheaper not to emit a ton of [carbon dioxide] than it is to emit it and recapture it.”

I think there are two bottom lines here:

  1. Anyone who claims to have a silver bullet for a problem that pervades all human enterprise is probably selling snake oil.
  2. Without a substantial emissions price as the primary incentive guiding market decisions about carbon intensity, all large scale abatement efforts are a fantasy.

Fancy Stats and Silly Climate Contests

Climate skeptics seem to have a thing for contests and bets. For example, there’s Armstrong’s proposed bet, baiting Al Gore. Amusingly (for data nerds anyway), the bet, which pitted a null forecast against the taker’s chosen climate model, could have been beaten easily by either a low-order climate model or a less-naive null forecast. And, of course, it completely fails to understand that climate science is not about fitting a curve to the global temperature record.

Another instance of such foolishness recently came to my attention. It doesn’t have a name that I know of, but here’s the basic idea:

  • The author generates 1000 time series:

Each series has length 135: the same length as that of the most commonly studied series of global temperatures (which span 1880–2014). The 1000 series were generated as follows. First, 1000 random series were obtained (for more details, see below). Then, some of those series were randomly selected and had a trend added to them. Each added trend was either 1°C/century or −1°C/century. For comparison, a trend of 1°C/century is greater than the trend that is claimed for global temperatures.

  • The challenger pays $10 for the privilege of attempting to detect which of the 1000 series are perturbed by a trend, winning $100,000 for correctly identifying 90% or more.

The best challenger managed to identify 860 series, so the prize went unclaimed. But only two challenges are described, so I have to wonder how many serious attempts were made. Had I known about the contest in advance, I would not have tried it. I know plenty about fitting dynamic models to data, though abstract statistical methods aren’t really my thing. But I still have to ask myself some questions:

  • Is there really money to be made, or will the author simply abscond to the pub with my $10? For the sake of argument, let’s assume that the author really has $100k at stake.
  • Is it even possible to win? The author did not reveal the process used to generate the series in advance. That alone makes this potentially a sucker bet. If you’re in control of the noise and structure of the process, it’s easy to generate series that are impossible to reliably disentangle, as the sketch below illustrates. (Tellingly, the author later revealed the code to generate the series, but it appears there’s no code to successfully identify 90%!)
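A quick simulation shows the flavor of the problem. This is not the contest's actual generating process (which wasn't disclosed in advance); it just uses strongly autocorrelated AR(1) noise, with parameters I picked, to show how easily ±1°C/century trends can hide in 135-point series:

```python
import numpy as np

rng = np.random.default_rng(0)

n_series, length = 1000, 135
phi, sigma = 0.95, 0.11          # assumed persistence and innovation s.d.
trend_per_year = 0.01            # 1 deg C / century

has_trend = rng.random(n_series) < 0.5
series = np.zeros((n_series, length))
for i in range(n_series):
    e = rng.normal(0.0, sigma, length)
    x = np.zeros(length)
    for t in range(1, length):
        x[t] = phi * x[t-1] + e[t]          # AR(1) noise
    if has_trend[i]:
        x += rng.choice((-1.0, 1.0)) * trend_per_year * np.arange(length)
    series[i] = x

# Naive detector: call "trend" if the OLS slope exceeds half the true trend.
t_axis = np.arange(length)
slopes = np.polyfit(t_axis, series.T, 1)[0]
guess = np.abs(slopes) > trend_per_year / 2
accuracy = np.mean(guess == has_trend)
print(f"fraction correctly classified: {accuracy:.2f}")   # typically well below 0.90
```

With persistence this high, the no-trend series wander enough to mimic trends, and the threshold for 90% accuracy is out of reach for any detector unless the noise process is known.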

For me, the statistical properties of the contest make it an obvious non-starter. But does it have any redeeming social value? For example, is it an interesting puzzle that has something to do with actual science? Sadly, no.

The hidden assumption of the contest is that climate science is about estimating the trend of the global temperature time series. Yes, people do that. But it’s a tiny fraction of climate science, and it’s a diagnostic of models and data, not a real model in itself. Science in general is not about such things. It’s about getting a good model, not a good fit. In some places the author talks about real physics, but ultimately seems clueless about this – he’s content with unphysical models:

Moreover, the Contest model was never asserted to be realistic.

Are ARIMA models truly appropriate for climatic time series? I do not have an opinion. There seem to be no persuasive arguments for or against using ARIMA models. Rather, studying such models for climatic series seems to be a worthy area of research.

Liljegren’s argument against ARIMA is that ARIMA models have a certain property that the climate system does not have. Specifically, for ARIMA time series, the variance becomes arbitrarily large, over long enough time, whereas for the climate system, the variance does not become arbitrarily large. It is easy to understand why Liljegren’s argument fails.

It is a common aphorism in statistics that “all models are wrong”. In other words, when we consider any statistical model, we will find something wrong with the model. Thus, when considering a model, the question is not whether the model is wrong—because the model is certain to be wrong. Rather, the question is whether the model is useful, for a particular application. This is a fundamental issue that is commonly taught to undergraduates in statistics. Yet Liljegren ignores it.

As an illustration, consider a straight line (with noise) as a model of global temperatures. Such a line will become arbitrarily high, over long enough time: e.g. higher than the temperature at the center of the sun. Global temperatures, however, will not become arbitrarily high. Hence, the model is wrong. And so—by an argument essentially the same as Liljegren’s—we should not use a straight line as a model of temperatures.

In fact, a straight line is commonly used for temperatures, because everyone understands that it is to be used only over a finite time (e.g. a few centuries). Over a finite time, the line cannot become arbitrarily high; so, the argument against using a straight line fails. Similarly, over a finite time, the variance of an ARIMA time series cannot become arbitrarily large; so, Liljegren’s argument fails.

Actually, no one in climate science uses straight lines to predict future temperatures, because forcing is rising, and therefore warming will accelerate. But that’s a minor quibble, compared to the real problem here. If your model is:

global temperature = f( time )

you’ve just thrown away 99.999% of the information available for studying the climate. (Ironically, the author’s entire point is that annual global temperatures don’t contain a lot of information.)

No matter how fancy your ARIMA model is, it knows nothing about conservation laws, robustness in extreme conditions, dimensional consistency, or real physical processes like heat transfer. In other words, it fails every reality check a dynamic modeler would normally apply, except the weakest – fit to data. Even its fit to data is near-meaningless, because it ignores all other series (forcings, ocean heat, precipitation, etc.) and has nothing to say about replication of spatial and seasonal patterns. That’s why this contest has almost nothing to do with actual climate science.

This is also why data-driven machine learning approaches have a long way to go before they can handle general problems. It’s comparatively easy to learn to recognize the cats in a database of photos, because the data spans everything there is to know about the problem. That’s not true for systemic problems, where you need a web of data and structural information at multiple scales in order to understand the situation.

The CO2 record is no surprise

The 2016 record in CO2 concentration and increment is exactly what you’d expect for a system driven by growing emissions.

Here’s the data. The CO2 concentration at Mauna Loa has increased steadily since records began in 1958. Superimposed on the trend is a seasonal oscillation, which you can remove with a 12-month moving average (red):

In a noiseless system driven by increasing emissions, you’d expect every year to be a concentration record, and that’s nearly true here. Almost 99% of 12-month intervals exceed all previous records.

If you look at the year-on-year difference in monthly concentrations, you can see that not only is the concentration rising, but the rate of increase is increasing as well:

This first difference is noisier, but consistently positive. As a natural consequence, you’d expect a typical recent value to exceed the average of any preceding interval.
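The processing is simple enough to reproduce in a few lines of pandas. The file and column names below are placeholders for NOAA's Mauna Loa monthly-mean series:

```python
import pandas as pd

# Placeholder file/column names for the Mauna Loa monthly-mean CO2 record.
co2 = pd.read_csv("co2_mm_mlo.csv", comment="#")
co2["date"] = pd.to_datetime(co2[["year", "month"]].assign(day=1))
co2 = co2.set_index("date")["average"]

smoothed = co2.rolling(12, center=True).mean()   # removes the seasonal cycle
yoy = co2.diff(12)                               # year-on-year increase, ppm

print(smoothed.tail())
print(yoy.tail())
```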

In other words, a record concentration coinciding with a record increase is not unusual, dynamically or statistically. Until emissions decline significantly, news outlets might as well post a standing item to this effect.

The CO2 concentration trajectory is, incidentally, closer to parabolic than to exponential. That’s because emissions have risen more or less linearly in recent decades:

CO2 emissions, GtC/yr

CO2 concentration (roughly) integrates emissions, so if emissions = c1*time, concentration = c2*time^2 is expected. The cause for concern here is that a peak in the rate of increase has occurred at a time when emissions have been roughly flat for a few years, signalling that saturation of natural sinks may be to blame. I think it’s premature to draw that conclusion, given the level of noise in the system. But sooner or later our luck will run out, so reducing emissions is as important as ever.
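Spelling that out, with f standing in for a roughly constant airborne fraction (and the ppm-per-GtC conversion):

$$ E(t) \approx E_0 + c_1 t \quad\Rightarrow\quad C(t) \approx C_0 + f \int_0^t E(s)\, ds = C_0 + f\left(E_0 t + \tfrac{c_1}{2} t^2\right) $$

That is, a parabola, not an exponential.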

After emissions do peak, you’d expect CO2 difference records to become rare. However, for CO2 concentrations to stop setting records requires that emissions fall below natural uptake, which will take longer to achieve.

Does statistics trump physics?

My dissertation was a critique and reconstruction of William Nordhaus’ DICE model for climate-economy policy (plus a look at a few other models). I discovered a lot of issues, for example that having a carbon cycle that didn’t conserve carbon led to a low bias in CO2 projections, especially in high-emissions scenarios.

There was one sector I didn’t critique: the climate itself. That’s because Nordhaus used an established model, from climatologists Schneider & Thompson (1981). It turns out that I missed something important: Nordhaus reestimated the parameters of the model from time series temperature and forcing data.

Nordhaus’ estimation focused on a parameter representing the thermal inertia of the atmosphere/surface ocean system. The resulting value was about 3x higher than Schneider & Thompson’s physically-based parameter choice. That delays the effects of GHG emissions by about 15 years. Since the interest rate in the model is about 5%, that lag substantially diminishes the social cost of carbon and the incentive for mitigation.
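How much does a 15-year lag matter at a roughly 5% discount rate? A back-of-the-envelope with continuous discounting (not Nordhaus' actual calculation):

$$ e^{-r\,\Delta t} = e^{-0.05 \times 15} \approx 0.47 $$

So damages pushed 15 years further into the future carry roughly half the present value, which is the mechanism behind the diminished social cost of carbon.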

DICE Climate Sector
The climate subsystem of the DICE model, implemented in Vensim

So … should an economist’s measurement of a property of the climate, from statistical methods, overrule a climatologist’s parameter choice, based on physics and direct observations of structure at other scales?

I think the answer could be yes, IF the statistics are strong and reconcilable with physics or the physics is weak and irreconcilable with observations. So, was that the case?
