I’m waaayyy overdue for an update on sea level models.
I’ve categorized my 6 previous posts on the Rahmstorf (2007) and Grinsted et al. models under sea level.
I had some interesting correspondence last year with Aslak Grinsted.
I agree with the ellipse idea that you show in the figure on page IV. However, I conclude that if I use the paleo temperature reconstructions then the long response times are ‘eliminated’. You can sort of see why on this page; Fig. 2 there illustrates one problem with having a long response time:
http://www.glaciology.net/Home/Miscellaneous-Debris/rahmstorf2007lackofrealism
It seems it is very hard to make the turn at the end of the LIA with a large inertia.
I disagree with your statement “this suggests to me that G’s confidence bounds, +/- 67 years on the Moberg variant and +/- 501 years on the Historical variant are most likely slices across the short dimension of a long ridge, and thus understate the true uncertainty of a and tau.”
The inverse Monte Carlo method is designed not to “slice across” the distributions. I think the reason we get such different results is that your payoff function is very different from my likelihood function – as you also point out on page VI.
Aslak is politely pointing out that I screwed up one aspect of the replication. We agree that the fit payoff surface is an ellipse (I think the technical term I used was “banana-ridge”). However, my hypothesis about the inexplicably narrow confidence bounds in the Grinsted et al. paper was wrong. It turns out that the actual origin of the short time constant and narrow confidence bounds is a constraint that I neglected to implement: the observation that variations in sea level over the last two millennia have been small. That constraint basically chops off most of the long-time-constant portion of the banana, leaving the portion described in the paper. I’ve confirmed this with a quick experiment.
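To see how that kind of constraint trims the banana, here is a minimal sketch, assuming the semi-empirical form dS/dt = (aT + b − S)/τ discussed in the earlier posts. Everything numeric here is a made-up placeholder, not a value from Grinsted et al. or from my experiment: the stand-in temperature series, the fixed a/τ “modern rate sensitivity”, and the allowed pre-industrial range. The idea is that the instrumental fit roughly pins down a/τ, so walking along the ridge pairs long τ with large a, and the long-τ end then produces big sea level drifts over the last two millennia.

```python
import numpy as np

# Illustration only: the temperature series, the a/tau value and the
# "small pre-industrial variation" threshold are placeholders.
years = np.arange(2001)                        # "last two millennia", annual steps
T = 0.3 * np.sin(2 * np.pi * years / 2000.0)   # stand-in paleo temperature anomaly (K)

def simulate(T, a, b, tau, S0=0.0, dt=1.0):
    """Euler integration of the semi-empirical model dS/dt = (a*T + b - S)/tau."""
    S = np.empty_like(T)
    S[0] = S0
    for i in range(1, len(T)):
        S[i] = S[i - 1] + dt * (a * T[i - 1] + b - S[i - 1]) / tau
    return S

# Walk along the ridge: pair each trial tau with a = k * tau and check
# how much the simulated sea level swings over the paleo period.
k = 0.003          # m yr^-1 K^-1, hypothetical "modern rate sensitivity"
max_range = 0.25   # m, hypothetical cap on sea level variation over 2000 yr

for tau in (100.0, 300.0, 1000.0, 3000.0):
    S = simulate(T, a=k * tau, b=0.0, tau=tau)
    swing = S.max() - S.min()
    print(f"tau = {tau:6.0f} yr  a = {k*tau:5.1f} m/K  "
          f"paleo range = {swing:.2f} m  "
          f"{'kept' if swing < max_range else 'rejected'}")
```

With these made-up numbers only the short-τ case survives the range check, which is the qualitative behaviour described above.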
The multivariate random walk sensitivity analysis method is cool:
The inverse Monte Carlo makes a uniform random walk in the model space, except that some steps are rejected according to rules that depend on the relative likelihood of the previous and current model. The end result is that it samples the model space according to the likelihood function, i.e. a region with twice as high a likelihood will be twice as densely sampled. The actual algorithm is extremely simple.
A suggested step in the random walk is accepted when
rand * L(previous_accepted_model) < L(suggested_model)
where rand is uniform on [0, 1].
It is also called the Metropolis algorithm. It is a very useful method for high-dimensional model spaces, or model spaces with multiple minima. Klaus’ page has the derivations of why this works.
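To make the quoted rule concrete, here is a minimal sketch of that acceptance rule. The two-parameter Gaussian log-likelihood, the step sizes, the starting point and the chain length are arbitrary stand-ins, not anything from the Grinsted et al. setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(m):
    # Stand-in target: an uncorrelated 2-D Gaussian in (a, tau).
    a, tau = m
    return -0.5 * ((a - 1.0) ** 2 + ((tau - 300.0) / 100.0) ** 2)

def metropolis(log_L, m0, step, n_samples):
    """Uniform random walk; accept when rand * L(previous) < L(suggested)."""
    samples = [np.asarray(m0, dtype=float)]
    for _ in range(n_samples):
        current = samples[-1]
        suggested = current + rng.uniform(-step, step, size=current.shape)
        # Same rule as in the quote, written with logs for numerical safety:
        # rand * L(current) < L(suggested)  <=>  log(rand) < logL(sugg) - logL(curr)
        if np.log(rng.uniform()) < log_L(suggested) - log_L(current):
            samples.append(suggested)
        else:
            samples.append(current)   # rejected: stay put (and count the point again)
    return np.array(samples)

chain = metropolis(log_likelihood, m0=[0.0, 100.0],
                   step=np.array([0.2, 30.0]), n_samples=20000)
# Crude burn-in discard; the mean and spread should approach the target's.
print(chain[5000:].mean(axis=0), chain[5000:].std(axis=0))
```

Counting the current point again after a rejection is what makes the sampling density proportional to the likelihood, as the quote says.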
The Metropolis algorithm is more famous for its use in simulated annealing. A few other notes:
I also have some clarifications regarding the sea level record we used. We use the Jevrejeva 2006 one from 1850 onwards. Prior to that we use the Amsterdam record. Quote:
“We also extend the GSL reconstruction prior to 1850 using the record of annual mean sea level from Amsterdam since 1700 (van Veen, 1945) correcting it for the post glacial land submergence rate of 0.16 mm yr-1 (Peltier, 2004)”

You write: “Notice that during the period of disagreement, reported standard errors (in gray) are large. However, the magnitude of the discrepancy between series is larger than the reported standard error. Either the papers are measuring slightly different things, or there’s a significant systematic component that leads to underestimation of the error bounds.”
My comment: It is to be expected that the two curves are sometimes more than a single standard error apart. These deviations may last for a very long time, considering that the uncertainty is highly serially correlated. In the G paper I model the uncertainty with something that is quite close to a Markov chain process. The reason we chose Amsterdam prior to 1850 is that I think the errors in the GIA correction for this location are rather low. Basically, I think that the negative trend 1807–1850 in the Jevrejeva 2006 curve is artificially caused by slightly wrong GIA corrections.

On page VI you say: “It’s not clear to me how they handle the flip side of the problem, state estimation with correlated driving noise – I think they ignore that”. That is sort-of correct, in the sense that we treat the temperature as given (= error-free) in each of the experiments. But we do not ignore it: we address it by using different temperature reconstructions. So the true uncertainty should be gauged from all experiments.
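Aslak’s point about serially correlated errors is easy to check numerically. The sketch below uses an arbitrary AR(1) (first-order Markov) error process; the lag-one correlation values and the series length are made-up numbers, not the ones used in the G paper.

```python
import numpy as np

# With serially correlated (roughly Markov) errors, excursions beyond one
# standard error can persist for a long time; with white noise they don't.
rng = np.random.default_rng(1)
n = 2000                                  # years of simulated error

for phi in (0.0, 0.95):                   # white noise vs. strongly persistent AR(1)
    sigma = np.sqrt(1.0 - phi ** 2)       # keeps the marginal variance at 1
    e = np.zeros(n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + sigma * rng.normal()

    longest = current = 0                 # longest consecutive run with |e| > 1 sigma
    for flag in np.abs(e) > 1.0:
        current = current + 1 if flag else 0
        longest = max(longest, current)
    print(f"phi = {phi:0.2f}: longest excursion beyond 1 sigma = {longest} years")
```

The persistent case routinely stays outside the one-sigma band for decades, which is why two reconstructions can disagree by more than their reported standard errors for extended stretches without the error bars being wrong.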
Aslak has some other interesting comments and papers:
- Relationship between sea level rise and global temperature (a look at the paleo constraints in Rahmstorf’s RC critique, which I cited in my first installment)
- Relative importance of mass and volume changes to global sea level rise
- Recent global sea level acceleration started over 200 years ago?
- Anthropogenic forcing dominates sea level rise since 1850