This Information is Beautiful graphic is pretty, but I don’t find it informative. The y scale is nonlinear, and I don’t know whether the x scale conveys anything. It’s hard to work out the timing of inundation, which is really the key. The focus on the low points of big cities in developed countries is misleading, because those will be defended for a long time. Ho Chi Minh City should be on there, as should the US Gulf Coast. USA Today would love this.

# Tag: sea level

## Sea Level Roundup

RealClimate has Martin Vermeer’s reflections on the making of his recent sea level paper with Stefan Rahmstorf. At some point I hope to post a replication of that study, in a model with the Grinsted and Rahmstorf 2007 structures, but I haven’t managed to replicate it yet. The problem may be that I haven’t yet tackled the reservoir storage issue.

At Nature Reports, Olive Heffernan introduces several sea level articles. Rahmstorf contrasts the recent set of semi-empirical models, predicting sea level of a meter or more this century, with the AR4 finding. Lowe and Gregory wonder if the semi-empirical models are really seeing enough of the dynamic ice signal to have predictive power, and worry about *over*adaptation to high scenarios. Mark Schrope reports on *under*adaptation – vulnerable developments in Florida. Mason Inman reports on ecological engineering, a softer approach to coastal defense.

## Sea Level Rise

Citations: Rahmstorf 2007, “A semi-empirical approach to projecting future sea level rise.” *Science* **315**. Grinsted, Moore & Jevrejeva 2009, “Reconstructing sea level from paleo and projected temperatures 200 to 2100 AD.” *Climate Dynamics*.

Source: Replicated by Tom Fiddaman based on an earlier replication of Rahmstorf provided by John Sterman

Units balance: Yes

Format: Vensim; requires Model Reader or an advanced version

Notes: See discussion at metasd.

Files:

- *Grinsted_v3b* – first model; default calibration replicates Rahmstorf, and optimization is set up to adjust constant terms to fit the Rahmstorf slope to data
- *Grinsted_v3c* – second model; updated data and calibration, as in Part III
- *Grinsted_v3c-k2* – third model; set up for Kalman filtering, as in Part V

## Sea level update – newish work

I linked some newish work on sea level by Aslak Grinsted et al. in my last post. There are some other new developments:

On the data front, Rohling et al. investigate sea level over the last half a million years and in the Pliocene (3+ million years ago). Here’s the relationship between CO2 and Antarctic temperatures:

Two caveats and one interesting observation here:

- The axes are flipped; if you think causally with CO2 on the x-axis, you need to mentally reflect this picture.
- TAA refers to Antarctic temperature, which is subject to polar amplification.
- Notice that the empirical line (red) is much shallower than the relationship in model projections (green). Since the axes are flipped, that means that empirical Antarctic temperatures are much more sensitive to CO2 than projections, if it’s valid to extrapolate, and we wait long enough.

## Sea level update – Grinsted edition

I’m waaayyy overdue for an update on sea level models.

I’ve categorized my 6 previous posts on the Rahmstorf (2007) and Grinsted et al. models under sea level.

I had some interesting correspondence last year with Aslak Grinsted. He wrote:

> I agree with the ellipse idea that you show in the figure in Part IV. However, I conclude that if I use the paleo temperature reconstructions then the long response times are ‘eliminated’. You can sort of see why on this page; Fig. 2 here illustrates one problem with having a long response time:
>
> http://www.glaciology.net/Home/Miscellaneous-Debris/rahmstorf2007lackofrealism
>
> It seems it is very hard to make the turn at the end of the LIA with a large inertia.
>
> I disagree with your statement “this suggests to me that G’s confidence bounds, +/- 67 years on the Moberg variant and +/- 501 years on the Historical variant are most likely slices across the short dimension of a long ridge, and thus understate the true uncertainty of a and tau.”
>
> The inverse Monte Carlo method is designed not to “slice across” the distributions. I think the reason we get such different results is that your payoff function is very different from my likelihood function – as you also point out in Part VI.

Aslak is politely pointing out that I screwed up one aspect of the replication. We agree that the fit payoff surface is an ellipse (I think the technical term I used was “banana-ridge”). However, my hypothesis about the inexplicably narrow confidence bounds in the Grinsted et al. paper was wrong. It turns out that the actual origin of the short time constant and narrow confidence bounds is a constraint that I neglected to implement. The constraint involves the observation that variations in sea level over the last two millennia have been small. That basically chops off most of the long-time-constant portion of the banana, leaving the portion described in the paper. I’ve confirmed this with a quick experiment.

## C-LEARN is live

Climate Interactive has the story.

Try it yourself, or see it in action in an interactive webinar on June 3rd.

## Sea Level Rise – VI – The Bottom Line (Almost)

The pretty pictures look rather compelling, but we’re not quite done. A little QC is needed on the results. It turns out that there’s trouble in paradise:

- the residuals (modeled vs. measured sea level) are noticeably autocorrelated. That means that the model’s assumed error structure (a white disturbance integrated into sea level, plus white measurement error) doesn’t capture what’s really going on. Either disturbances to sea level are correlated, or sea level measurements are subject to correlated errors, or both.
- attempts to estimate the driving noise on sea level (as opposed to specifying it a priori) yield near-zero values.
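The first point is easy to check outside the model. Here’s a minimal sketch of the diagnostic, using synthetic residuals rather than the actual model output (the 0.8 AR(1) coefficient is just an illustrative choice):

```python
import random

def lag1_autocorr(residuals):
    """Lag-1 autocorrelation of a residual series (mean removed)."""
    n = len(residuals)
    mean = sum(residuals) / n
    r = [x - mean for x in residuals]
    denom = sum(x * x for x in r)
    return sum(r[t] * r[t + 1] for t in range(n - 1)) / denom

rng = random.Random(0)
white = [rng.gauss(0, 1) for _ in range(2000)]   # uncorrelated (white) errors
ar1 = [white[0]]                                 # persistent AR(1) errors
for t in range(1, 2000):
    ar1.append(0.8 * ar1[-1] + white[t])

# The white series gives near-zero autocorrelation; the AR(1) series does not.
```

A lag-1 value well away from zero, as with the sea level residuals, says the assumed white error structure is wrong.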

#1 is not really a surprise; G discusses the sea level error structure at length and explicitly addresses it through a correlation matrix. (It’s not clear to me how they handle the flip side of the problem, state estimation with correlated driving noise – I think they ignore that.)

#2 might be a consequence of #1, but I haven’t wrapped my head around the result yet. A little experimentation shows the following:

| driving noise SD | equilibrium sensitivity (a, mm/C) | time constant (tau, years) | sensitivity (a/tau, mm/yr/C) |
| --- | --- | --- | --- |
| ~0 (1e-12) | 94,000 | 30,000 | 3.2 |
| 1 | 14,000 | 4,400 | 3.2 |
| 10 | 1,600 | 420 | 3.8 |

Intermediate noise values yield results consistent with the above. Shorter time constants are consistent with expectations given higher driving noise (in effect, the model is estimated over shorter intervals), but the real point is that they’re all long, and all yield about the same sensitivity.

The obvious solution is to augment the model structure to include states representing persistent errors. At the moment, I’m out of time, so I’ll have to just speculate what that might show. Generally, autocorrelation of the errors is going to reduce the power of these results. That is, because there’s less information in the data than meets the eye (because the measurements aren’t fully independent), one will be less able to discriminate among parameters. In this model, I seriously doubt that the fundamental banana-ridge of the payoff surface is going to change. Its sides will be less steep, reflecting the diminished power, but that’s about it.

Assuming I’m right, where does that leave us? Basically, my hypotheses in Part IV were right. The likelihood surface for this model and data doesn’t permit much discrimination among time constants, other than ruling out short ones. R’s very-long-term paleo constraint for a (about 19,500 mm/C) and corresponding long tau is perfectly plausible. If anything, it’s more plausible than the short time constant for G’s Moberg experiment (in spite of a priori reasons to like G’s argument for dominance of short time constants in the transient response). The large variance among G’s experiments (estimated time constants of 208 to 1193 years) is not really surprising, given that large movements along the a/tau axis are possible without degrading fit to data. The one thing I really can’t replicate is G’s high sensitivities (6.3 and 8.2 mm/yr/C for the Moberg and Jones/Mann experiments, respectively). These seem to me to lie well off the a/tau ridgeline.

The conclusion that IPCC WG1 sea level rise is an underestimate is robust. I converted Part V’s random search experiment (using the optimizer) into sensitivity files, permitting Monte Carlo simulations forward to 2100, using the joint a-tau-T0 distribution as input. (See the setup in *k-grid-sensi.vsc* and *k-grid-sensi-4x.vsc* for details). I tried it two ways: the 21 points with a deviation of less than 2 in the payoff (corresponding with a 95% confidence interval), and the 94 points corresponding with a deviation of less than 8 (i.e., assuming that fixing the error structure would make things 4x less selective). Sea level in 2100 is distributed as follows:

The sample would have to be bigger to reveal the true distribution (particularly for the “overconfident” version in blue), but the qualitative result is unlikely to change. All runs lie above the IPCC range (0.26–0.59 m), which excludes ice dynamics.
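For readers without Vensim, the Monte Carlo step can be sketched in a few lines. Everything below is a hypothetical stand-in: the model form (first-order adjustment toward a*(T - T0)), the three accepted parameter points, and the linear warming path are placeholders, not the actual Part V search output or scenario.

```python
def project(a, tau, t0, temps, s0=0.0, dt=1.0):
    """Euler-integrate dS/dt = (a*(T - T0) - S)/tau over a temperature path (mm, yr, C)."""
    s = s0
    for temp in temps:
        s += dt * (a * (temp - t0) - s) / tau
    return s

# Hypothetical accepted (a, tau, T0) points from a payoff-threshold search.
accepted = [(19500.0, 1200.0, -0.4), (5000.0, 500.0, -0.5), (1600.0, 420.0, -0.6)]
# Hypothetical warming path: 1 C rising to 3 C over a century.
temps = [1.0 + 2.0 * i / 100 for i in range(101)]

sl_2100 = [project(a, tau, t0, temps) for a, tau, t0 in accepted]
```

Each accepted point yields one endpoint; the spread across points approximates the parameter-uncertainty distribution discussed above.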


## Sea Level Rise Models – V

To take a look at the payoff surface, we need to do more than the naive calibrations I’ve used so far. Those were adequate for choosing constant terms that aligned the model trajectory with the data, given a priori values of a and tau. But that approach could give flawed estimates and confidence bounds when used to estimate the full system.

Elaborating on my comment on estimation at the end of Part II, consider a simplified description of our model, in discrete time:

(1) sea_level(t) = f(sea_level(t-1), temperature, parameters) + driving_noise(t)

(2) measured_sea_level(t) = sea_level(t) + measurement_noise(t)

The driving noise reflects disturbances to the system state: in this case, random perturbations to sea level. Measurement noise is simply errors in assessing the true state of global sea level, which could arise from insufficient coverage or accuracy of instruments. In the simple case, where driving and measurement noise are both zero, measured and actual sea level are the same, so we have the following system:

(3) sea_level(t) = f(sea_level(t-1), temperature, parameters)

In this case, which is essentially what we’ve assumed so far, we can simply initialize the model, feed it temperature, and simulate forward in time. We can estimate the parameters by adjusting them to get a good fit. However, if there’s driving noise, as in (1), we could be making a big mistake, because the noise may move the real-world state of sea level far from the model trajectory, in which case we’d be using the wrong value of sea_level(t-1) on the right hand side of (1). In effect, the model would blunder ahead, ignoring most of the data.

In this situation, it’s better to use ordinary least squares (OLS), which we can implement by replacing modeled sea level in (1) with measured sea level:

(4) sea_level(t) = f(measured_sea_level(t-1), temperature, parameters)

In (4), we’re ignoring the model rather than the data. But that could be a bad move too, because if measurement noise is nonzero, the sea level data could be quite different from true sea level at any point in time.
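A toy linear version of equations (1)–(4) shows the tradeoff. With driving noise and no measurement noise, the pure simulation (3) wanders away from the true state, while the one-step-ahead form (4) tracks it. The AR(1) stand-in for f is purely illustrative:

```python
import random

rng = random.Random(1)
phi, n = 0.95, 300

true = [0.0]
for _ in range(n - 1):            # eq. (1): state evolves with driving noise
    true.append(phi * true[-1] + rng.gauss(0, 1))
measured = true                   # zero measurement noise for this example

sim = [0.0]
for _ in range(n - 1):            # eq. (3): simulate forward, ignoring the data
    sim.append(phi * sim[-1])

ols = [0.0]
for t in range(1, n):             # eq. (4): predict from the measured state
    ols.append(phi * measured[t - 1])

def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
# rmse(sim, true) is several times rmse(ols, true)
```

Flip the noise assignments (measurement noise large, driving noise zero) and the ranking reverses, which is exactly the dilemma the Kalman filter resolves.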

The point of the Kalman Filter is to combine the model and data estimates of the true state of the system. To do that, we simulate the model forward in time. Each time we encounter a data point, we update the model state, taking account of the relative magnitude of the noise streams. If we think that measurement error is small and driving noise is large, the best bet is to move the model dramatically towards the data. On the other hand, if measurements are very noisy and driving noise is small, better to stick with the model trajectory, and move only a little bit towards the data. You can test this in the model by varying the driving noise and measurement error parameters in SyntheSim, and watching how the model trajectory varies.
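The update logic can be sketched for a scalar random-walk state. This is the textbook one-dimensional filter, not the multi-state machinery Vensim actually uses:

```python
def kalman_1d(measurements, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state.

    q: driving-noise variance; r: measurement-noise variance.
    Large q/r means trust the data; small q/r means trust the model.
    """
    x, p, estimates = x0, p0, []
    for z in measurements:
        p = p + q                # predict: state unchanged, uncertainty grows
        k = p / (p + r)          # gain: fraction of the model-data gap to close
        x = x + k * (z - x)      # update: move toward the data by k
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Small driving noise, noisy data: move only a little toward each point.
smooth = kalman_1d([1, 2, 1, 2, 1, 2], q=0.01, r=1.0)
# Large driving noise, accurate data: chase the measurements.
chase = kalman_1d([1, 2, 1, 2, 1, 2], q=10.0, r=0.01)
```

The gain k plays exactly the role described above: it is near 1 when measurements are trustworthy relative to the model, and near 0 otherwise.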

The discussion above is adapted from David Peterson’s thesis, which has a more complete mathematical treatment. The approach is laid out in Fred Schweppe’s book, *Uncertain Dynamic Systems*, which is unfortunately out of print and pricey. As a substitute, I like Stengel’s *Optimal Control and Estimation*.

An example of Kalman Filtering in everyday devices is GPS. A GPS unit is designed to estimate the state of a system (its location in space) using noisy measurements (satellite signals). As I understand it, GPS units maintain a simple model of the dynamics of motion: my expected position in the future equals my current perceived position, plus perceived velocity times time elapsed. It then corrects its predictions as measurements allow. With a good view of four satellites, it can move quickly toward the data. In a heavily-treed valley, it’s better to update the predicted state slowly, rather than giving jumpy predictions. I don’t know whether handheld GPS units implement it, but it’s possible to estimate the noise variances from the data and model, and adapt the filter corrections on the fly as conditions change.

## Sea Level Rise Models – IV

So far, I’ve established that the qualitative results of Rahmstorf (R) and Grinsted (G) can be reproduced. Exact replication has been elusive, but the list of loose ends (unresolved differences in data and so forth) is long enough that I’m not concerned that R and G made fatal errors. However, I haven’t made much progress against the other items on my original list of questions:

- Is the Grinsted et al. argument from first principles, that the current sea level response is dominated by short time constants, reasonable?
- Is Rahmstorf right to assert that Grinsted et al.’s determination of the sea level rise time constant is shaky?
- What happens if you impose the long-horizon paleo constraint on equilibrium sea level rise, from Rahmstorf’s RC figure, on the Grinsted et al. model?

At this point I’ll reveal my working hypotheses (untested so far):

- I agree with G that there are good reasons to think that the sea level response occurs over multiple time scales, and therefore that one could make a good argument for a substantial short-time-constant component in the current transient.
- I agree with R that the estimation of long time constants from comparatively short data series is almost certainly shaky.
- I suspect that R’s paleo constraint could be imposed without a significant degradation of the model fit (an apparent contradiction of G’s results).
- In the end, I doubt the data will resolve the argument, and we’ll be left with the conclusion that R and G agree on: that the IPCC WGI sea level rise projection is an underestimate.

## Sea Level Rise Models – III

Starting from the Rahmstorf (R) parameterization (tested, but not exhaustively), let’s turn to Grinsted et al (G).

First, I’ve made a few changes to the model and supporting spreadsheet. The previous version ran with a small time step, because some of the tide data was monthly (or less). That wasted clock cycles and complicated computation of residual autocorrelations and the like. In this version, I binned the data into an annual window and shifted the time axes so that the model will use the appropriate end-of-year points (when Vensim has data with a finer time step than the model, it grabs the data point nearest each time step for comparison with model variables). I also retuned the mean adjustments to the sea level series. I didn’t change the temperature series, but made it easier to use pure-Moberg (as G did). Those changes necessitate a slight change to the R calibration, so I changed the default parameters to reflect that.

Now it should be possible to plug in G parameters, from Table 1 in the paper. First, using Moberg: a = 1290 (note that G uses meters while I’m using mm), tau = 208, b = 770 (corresponding with T0=-0.59), initial sea level = -2. The final time for the simulation is set to 1979, and only Moberg temperature data are used. The setup for this is in change files, *GrinstedMoberg.cin* and *MobergOnly.cin*.
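As a cross-check on units and magnitudes, the G structure can be stepped by hand. The equation form below (first-order adjustment of S toward S_eq = a*T + b) and the flat temperature input are my reading of the setup, not code from the actual Vensim model; note that b = 770 is consistent with T0 = -0.59 via b ≈ -a*T0 (1290 * 0.59 ≈ 761).

```python
def grinsted_step(s, temp, a=1290.0, tau=208.0, b=770.0, dt=1.0):
    """One Euler step of dS/dt = (S_eq - S)/tau, with S_eq = a*T + b (mm, yr, C)."""
    return s + dt * ((a * temp + b) - s) / tau

# Hypothetical flat anomaly of -0.6 C (the real run feeds in Moberg data).
s, temp = -2.0, -0.6
for _ in range(200):
    s = grinsted_step(s, temp)
# s relaxes from -2 toward the equilibrium a*temp + b = -4 mm
```

With tau = 208 years, 200 years of simulation covers barely one time constant, which is why the transient, not the equilibrium, dominates the fit.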