Sea Level Rise Models – V

To take a look at the payoff surface, we need to do more than the naive calibrations I’ve used so far. Those were adequate for choosing constant terms that aligned the model trajectory with the data, given a priori values of a and tau. But that approach could give flawed estimates and confidence bounds when used to estimate the full system.

Elaborating on my comment on estimation at the end of Part II, consider a simplified description of our model, in discrete time:

(1) sea_level(t) = f(sea_level(t-1), temperature, parameters) + driving_noise(t)

(2) measured_sea_level(t) = sea_level(t) + measurement_noise(t)

The driving noise reflects disturbances to the system state: in this case, random perturbations to sea level. Measurement noise is simply errors in assessing the true state of global sea level, which could arise from insufficient coverage or accuracy of instruments. In the simple case, where driving and measurement noise are both zero, measured and actual sea level are the same, so we have the following system:

(3) sea_level(t) = f(sea_level(t-1), temperature, parameters)

In this case, which is essentially what we’ve assumed so far, we can simply initialize the model, feed it temperature, and simulate forward in time. We can estimate the parameters by adjusting them to get a good fit. However, if there’s driving noise, as in (1), we could be making a big mistake, because the noise may move the real-world state of sea level far from the model trajectory, in which case we’d be using the wrong value of sea_level(t-1) on the right-hand side of (1). In effect, the model would blunder ahead, ignoring most of the data.
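
To make the deterministic case concrete, here’s a minimal Python sketch of pure forward simulation, using a Grinsted-style first-order structure (the function and parameter names are illustrative stand-ins, not the actual Vensim model):

```python
import numpy as np

def f(sea_level, temp, a, b, tau, dt=1.0):
    # One Euler step: sea level adjusts toward an equilibrium
    # level a*temp + b with time constant tau.
    return sea_level + dt * (a * temp + b - sea_level) / tau

def simulate(temps, s0, a, b, tau):
    # Equation (3): initialize, feed in temperature, and run forward,
    # with no feedback from the data along the way.
    s = np.empty(len(temps) + 1)
    s[0] = s0
    for t, temp in enumerate(temps):
        s[t + 1] = f(s[t], temp, a, b, tau)
    return s
```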

In this situation, it’s better to use ordinary least squares (OLS), which we can implement by replacing modeled sea level in (1) with measured sea level:

(4) sea_level(t) = f(measured_sea_level(t-1), temperature, parameters)

In (4), we’re ignoring the model rather than the data. But that could be a bad move too, because if measurement noise is nonzero, the sea level data could be quite different from true sea level at any point in time.
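
In code, the OLS variant of (4) just swaps the measured level into the right-hand side and minimizes one-step-ahead errors. A rough sketch (measured and temps are hypothetical annual arrays, not the actual series):

```python
import numpy as np
from scipy.optimize import minimize

def one_step_sse(params, measured, temps, dt=1.0):
    # Equation (4): each prediction starts from the *measured* level,
    # so the model's own trajectory is ignored.
    a, b, tau = params
    pred = measured[:-1] + dt * (a * temps[:-1] + b - measured[:-1]) / tau
    return np.sum((measured[1:] - pred) ** 2)

# e.g.: minimize(one_step_sse, x0=[1000.0, 700.0, 200.0],
#                args=(measured, temps))
```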

The point of the Kalman Filter is to combine the model and data estimates of the true state of the system. To do that, we simulate the model forward in time. Each time we encounter a data point, we update the model state, taking account of the relative magnitude of the noise streams. If we think that measurement error is small and driving noise is large, the best bet is to move the model dramatically towards the data. On the other hand, if measurements are very noisy and driving noise is small, better to stick with the model trajectory, and move only a little bit towards the data. You can test this in the model by varying the driving noise and measurement error parameters in SyntheSim, and watching how the model trajectory varies.
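
For intuition, here’s a minimal scalar Kalman update in Python. This isn’t Vensim’s implementation, just the blending logic described above, with Q and R standing for the driving-noise and measurement-noise variances:

```python
def kalman_update(x_pred, P_pred, z, R):
    # Blend the model's prediction x_pred (variance P_pred) with the
    # measurement z (variance R). The gain K is near 1 when measurements
    # are precise relative to the model, near 0 when they're noisy.
    K = P_pred / (P_pred + R)
    x = x_pred + K * (z - x_pred)   # move the model toward the data by K
    P = (1.0 - K) * P_pred          # uncertainty shrinks after the update
    return x, P

# Between measurements, the model simulates forward and its variance
# grows by the driving-noise variance: P_pred = P + Q.
```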

The discussion above is adapted from David Peterson’s thesis, which has a more complete mathematical treatment. The approach is laid out in Fred Schweppe’s book, Uncertain Dynamic Systems, which is unfortunately out of print and pricey. As a substitute, I like Stengel’s Optimal Control and Estimation.

An example of Kalman Filtering in everyday devices is GPS. A GPS unit is designed to estimate the state of a system (its location in space) using noisy measurements (satellite signals). As I understand it, GPS units maintain a simple model of the dynamics of motion: my expected position in the future equals my current perceived position, plus perceived velocity times time elapsed. The unit then corrects its predictions as measurements allow. With a good view of four satellites, it can move quickly toward the data. In a heavily treed valley, it’s better to update the predicted state slowly, rather than giving jumpy predictions. It’s also possible to estimate the noise variances from the data and model, and adapt the filter corrections on the fly as conditions change, though I don’t know whether handheld GPS units implement that.
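
A toy version of that dead-reckoning prediction, paired with the scalar update above (purely illustrative; real receivers track position in three dimensions and much else):

```python
def predict(position, velocity, dt):
    # Expected position = current perceived position
    # plus perceived velocity times elapsed time.
    return position + velocity * dt

# With four satellites in view (low measurement variance R), the update
# uses a large gain and tracks the data closely; in a treed valley
# (high R), a small gain keeps the estimate on the predicted track.
```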


Sea Level Rise Models – IV

So far, I’ve established that the qualitative results of Rahmstorf (R) and Grinsted (G) can be reproduced. Exact replication has been elusive, but the list of loose ends (unresolved differences in data and so forth) is long enough that I’m not concerned that R and G made fatal errors. However, I haven’t made much progress against the other items on my original list of questions:

  • Is the Grinsted et al. argument from first principles, that the current sea level response is dominated by short time constants, reasonable?
  • Is Rahmstorf right to assert that Grinsted et al.’s determination of the sea level rise time constant is shaky?
  • What happens if you apply the long-horizon paleo constraint on equilibrium sea level rise, from Rahmstorf’s RC figure, to the Grinsted et al. model?

At this point I’ll reveal my working hypotheses (untested so far):

  • I agree with G that there are good reasons to think that the sea level response occurs over multiple time scales, and therefore that one could make a good argument for a substantial short-time-constant component in the current transient.
  • I agree with R that the estimation of long time constants from comparatively short data series is almost certainly shaky.
  • I suspect that R’s paleo constraint could be imposed without a significant degradation of the model fit (an apparent contradiction of G’s results).
  • In the end, I doubt the data will resolve the argument, and we’ll be left with the conclusion that R and G agree on: that the IPCC WGI sea level rise projection is an underestimate.


Sea Level Rise Models – III

Starting from the Rahmstorf (R) parameterization (tested, but not exhaustively), let’s turn to Grinsted et al. (G).

First, I’ve made a few changes to the model and supporting spreadsheet. The previous version ran with a small time step, because some of the tide data was monthly (or finer). That wasted clock cycles and complicated computation of residual autocorrelations and the like. In this version, I binned the data into an annual window and shifted the time axes so that the model will use the appropriate end-of-year points (when Vensim has data with a finer time step than the model, it grabs the data point nearest each time step for comparison with model variables). I also retuned the mean adjustments to the sea level series. I didn’t change the temperature series, but made it easier to use pure Moberg (as G did). Those changes necessitate a slight change to the R calibration, so I changed the default parameters to reflect that.
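
For reference, binning a monthly series into annual windows is a one-liner in, say, pandas (the series here is a made-up stand-in for the spreadsheet data):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for a monthly tide-gauge series
idx = pd.date_range("1900-01-01", "1910-12-01", freq="MS")
monthly_level = pd.Series(np.random.randn(len(idx)).cumsum(), index=idx)

# Average into annual bins, labeled by year, for comparison with the model
annual_level = monthly_level.groupby(monthly_level.index.year).mean()
```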

Now it should be possible to plug in G parameters, from Table 1 in the paper. First, using Moberg: a = 1290 (note that G uses meters while I’m using mm), tau = 208, b = 770 (corresponding to T0 = -0.59), initial sea level = -2. The final time for the simulation is set to 1979, and only Moberg temperature data are used. The setup for this is in the change files GrinstedMoberg.cin and MobergOnly.cin.
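
Outside Vensim, the equivalent run is just a simple forward simulation with those values plugged in (temps is a placeholder for the Moberg temperature series, which I’m not reproducing here):

```python
import numpy as np

a, tau, b, s0 = 1290.0, 208.0, 770.0, -2.0   # Table 1 values, in mm and years

def simulate(temps, s0, a, b, tau, dt=1.0):
    # dS/dt = (a*T + b - S)/tau, Euler-integrated annually
    s = [s0]
    for T in temps:
        s.append(s[-1] + dt * (a * T + b - s[-1]) / tau)
    return np.array(s)

temps = np.zeros(120)      # placeholder; substitute Moberg data through 1979
sea_level = simulate(temps, s0, a, b, tau)
```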

[Figure: Moberg, Grinsted parameters]


Sea Level Rise Models – II

Picking up where I left off, with model and data assembled, the next step is to calibrate, to see whether the Rahmstorf (R) and Grinsted (G) results can be replicated. I’ll do that the easy way, and the right way.

An easy first step is to try the R approach, assuming that the time constant tau is long and that the rate of sea level rise is proportional to temperature (or the delta against some preindustrial equilibrium).

Rahmstorf estimated the temperature-sea level rise relationship by regressing a smoothed rate of sea level rise against temperature, and found a slope of 3.4 mm/yr/C.
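
A rough sketch of that regression (a crude moving average stands in for Rahmstorf’s actual smoothing, and H and T are hypothetical annual sea level and temperature arrays):

```python
import numpy as np

def rate_vs_temperature(H, T, window=15):
    # Smooth the rate of sea level rise and the temperature,
    # then regress one on the other.
    rate = np.gradient(H)                            # mm/yr
    kernel = np.ones(window) / window
    rate_s = np.convolve(rate, kernel, mode="valid")
    T_s = np.convolve(T, kernel, mode="valid")
    slope, intercept = np.polyfit(T_s, rate_s, 1)
    return slope                                     # Rahmstorf found ~3.4 mm/yr/C
```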

[Figure: Rahmstorf figure 2]


Sea Level Rise Models – I

A recent post by Stefan Rahmstorf at RealClimate discusses a new paper on sea level projections by Grinsted, Moore and Jevrejeva. This paper comes at an interesting time, because we’ve just been discussing sea level projections in the context of our ongoing science review of the C-ROADS model. In C-ROADS, we used Rahmstorf’s earlier semi-empirical model, which yields higher sea level rise than AR4 WG1 (the latter leaves out ice sheet dynamics). To get a better handle on the two papers, I compared a replication of the Rahmstorf model (from John Sterman, implemented in C-ROADS) with an extension to capture Grinsted et al. This post (in a few parts) serves as both an assessment of the models and a bit of a tutorial on data analysis with Vensim.

My primary goal here is to develop an opinion on four questions:

  • Can the conclusions be rejected, given the data?
  • Is the Grinsted et al. argument from first principles, that the current sea level response is dominated by short time constants, reasonable?
  • Is Rahmstorf right to assert that Grinsted et al.’s determination of the sea level rise time constant is shaky?
  • What happens if you apply the long-horizon paleo constraint on equilibrium sea level rise, from Rahmstorf’s RC figure, to the Grinsted et al. model?

[Figure: Paleo constraints on equilibrium sea level]


Are We Slaves to Open Loop Theories?

The ongoing bailout/stimulus debate is decidedly Keynesian. Yet Keynes was a halfhearted Keynesian:

US Keynesianism, however, came to mean something different. It was applied to a fiscal revolution, licensing deficit finance to pull the economy out of depression. From the US budget of 1938, this challenged the idea of always balancing the budget, by stressing the need to boost effective demand by stimulating consumption.

None of this was close to what Keynes had said in his General Theory. His emphasis was on investment as the motor of the economy; but influential US Keynesians airily dismissed this as a peculiarity of Keynes. Likewise, his efforts to separate capital projects from ordinary budgets, balanced if possible, found few echoes in Washington, despite frequent mention of his name.

Should this surprise us? It does not appear to have disconcerted Keynes. ‘Practical men were often the slaves of some defunct economist,’ he wrote. By the end of the second world war, Lord Keynes of Tilton was no mere academic scribbler but a policymaker, in a debate dominated by second-hand versions of ideas he had put into circulation in a previous life. He was enough of a pragmatist, and opportunist, not to quibble. After dining with a group of Keynesian economists in Washington, in 1944, Keynes commented: ‘I was the only non-Keynesian there.’

FT.com, In the long run we are all dependent on Keynes

This got me wondering about the theoretical underpinnings of the stimulus prescription. Economists are talking in the language of the IS/LM model, marginal propensity to consume, multipliers for taxes vs. spending, and so forth. But these are all equilibrium shorthand for dynamic concepts. Surely the talk is founded on dynamic models that close loops between money, expectations and the real economy, and contain an operational representation of money creation and lending?

The trouble is, after a bit of sniffing around, I’m not seeing those models. On the jacket of Dynamic Macroeconomics, James Tobin wrote in 1997:

“Macrodynamics is a venerable and important tradition, which fifty or sixty years ago engaged the best minds of the economics profession: among them Frisch, Tinbergen, Harrod, Hicks, Samuelson, Goodwin. Recently it has been in danger of being swallowed up by rational expectations, moving equilibrium, and dynamic optimization. We can be grateful to the authors of this book for keeping alive the older tradition, while modernizing it in the light of recent developments in techniques of dynamic modeling.”
– James Tobin, Sterling Professor of Economics Emeritus, Yale University

Is dynamic macroeconomics still moribund, supplanted by CGE models (irrelevant to the problem at hand) and black box econometric methods? Someone please point me to the stochastic behavioral disequilibrium nonlinear dynamic macroeconomics literature I’ve missed, so I can sleep tonight knowing that policy is informed by something more than comparative statics.

In the meantime, the most relevant models I’m aware of are in system dynamics, not economics. An interesting option (which you can read and run) is Nathan Forrester’s thesis, A Dynamic Synthesis of Basic Macroeconomic Theory (1982).

Forrester’s model combines Samuelson’s multiplier-accelerator, Metzler’s inventory-adjustment model, Hicks’ IS/LM, and the aggregate-supply/aggregate-demand model into a 10th-order continuous dynamic model. The model generates an endogenous business cycle (4-year period) as well as a longer (24-year) cycle. The business cycle arises from inventory and employment adjustment, while the long cycle involves multiplier-accelerator and capital stock adjustment mechanisms, involving final demand. Forrester used the model to test a variety of countercyclic economic policies, commonly recommended as antidotes for business cycle swings:

Results of the policy tests explain the apparent discrepancy between policy conclusions based on static and dynamic models. The static results are confirmed by the fact that countercyclic demand-management policies do stabilize the demand-driven [long] cycle. The dynamic results are confirmed by the fact that the same countercyclic policies destabilize the business cycle. (pg. 9)

It’s not clear to me what exactly this kind of counterintuitive behavior might imply for our current situation, but it seems like a bad time to inadvertently destabilize the business cycle through misapplication of simpler models.

It’s unclear to what extent the model applies to our current situation, because it doesn’t include budget constraints for agents, and thus doesn’t include explicit money and debt stocks. While there are reasonable justifications for omitting those features for “normal” conditions, I suspect that since the origin of our current troubles is a debt binge, those justifications don’t apply where we are now in the economy’s state space. If so, then the equilibrium conclusions of the IS/LM model and other simple constructs are even more likely to be wrong.

I presume that the feedback structure needed to get your arms around the problem properly is in Jay Forrester’s System Dynamics National Model, but unfortunately it’s not available for experimentation.

John Sterman’s model of The Energy Transition and the Economy (1981) does have money stocks and debt for households and other sectors. It doesn’t have an operational representation of bank reserves, and it monetizes the deficit, but if one were to repurpose the model a bit (by eliminating the depletion issue, among other things) it might provide an interesting compromise between the two Forrester models above.

I still have a hard time believing that macroeconomics hasn’t trodden some of this fertile ground since the 80s, so I hope someone can comment with a more informed perspective. However, until someone disabuses me of the notion, I have the gnawing suspicion that the models are broken and we’re flying blind. Sure hope there aren’t any mountains in this fog.

Four Legs and a Tail

An effective climate policy needs prices, technology, institutional rules, and preferences.

I’m continuously irked by calls for R&D to save us from climate change. Yes, we need it very badly, but it’s no panacea. Without other signals, like a price on carbon, technology isn’t going to do a lot. It’s a one-legged dog. True, we might get lucky with some magic bullet, but I’m not willing to count on that. An effective climate policy needs four legs:

  1. Prices
  2. Technology (the landscape of possibilities on which we make decisions)
  3. Institutional rules and procedures
  4. Preferences, operating within social networks


My Bathtub is Nonlinear

I’m working on raising my kids as systems thinkers. I’ve been meaning to share some of our adventures here for some time, so here’s a first installment, from quite a while back.

I decided to ignore the great online resources for system dynamics education and reinvent the wheel. But where to start? I wanted an exercise that included stocks and flows, accumulation, graph reading, estimation, and data collection, with as much excitement as could be had indoors. (It was 20 below outside, so fire and explosions weren’t an option).

We grabbed a sheet of graph paper, fat pens, a yardstick, and a stopwatch and headed for the bathtub. Step 1 (to sustain interest) was to turn on the tap to fill the tub. While it filled, I drew time and depth axes on the graph paper and explained what we were trying to do. That involved explaining what a graph was for, and what locations on the axes meant (they were perhaps 5 and 6, and probably hadn’t seen a graph of behavior over time before).

When the tub was full, we made a few guesses about how long it might take to empty, then started the clock and opened the drain. Every ten or twenty seconds, we’d stop the timer, take a depth reading, and plot the result on our graph. After a few tries, the kids could place the points. About halfway through, we took a longer pause to discuss the trajectory so far. I proposed a few forecasts of how the second half of the tub might drain – slowing, speeding up, etc. Each of us took a guess about time-to-empty. Naturally my own guess was roughly consistent with exponential decay. Then we reopened the drain and collected data until the tub was dry.

To my astonishment, the resulting plot showed a perfectly linear decline in water depth, all the way to zero (as best we could measure). In hindsight, it’s not all that strange: the tub tapers toward the bottom, so a constant rate of decline in depth is consistent with the declining volumetric outflow you’d expect (from decreasing pressure at the outlet as the water gets shallower). Still, I find it rather amazing that the shape of the tub (and perhaps nonlinearity in the drain’s behavior) conspires to yield such a perfectly linear trajectory.
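
A back-of-envelope check, assuming Torricelli outflow (rate proportional to the square root of depth) and a made-up tapered cross-section: if the area happens to shrink like the square root of depth, the depth decline comes out almost exactly linear.

```python
import numpy as np

def drain(h0=0.4, k=0.002, dt=1.0):
    # dh/dt = -outflow/area; outflow ~ k*sqrt(h) (Torricelli), and the
    # cross-section shrinks toward the bottom (all numbers invented).
    h, depths = h0, [h0]
    while h > 1e-4:
        area = 0.5 * np.sqrt(h / h0) + 0.05   # tapered cross-section
        h = max(h - dt * k * np.sqrt(h) / area, 0.0)
        depths.append(h)
    return np.array(depths)                   # nearly a straight line to zero
```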

We spent a fair amount of time further exploring bathtub dynamics, with much filling and emptying. When the quantity of water on the floor got too alarming, we moved to the sink to explore equilibrium by trying to balance the tap inflow and drain outflow, which is surprisingly difficult.

We lost track of our original results, so we recently repeated the experiment. This time, we measured the filling as well as the draining, shown below on the same axes. The dotted lines are our data; the others are our prior guesses. Again, there’s no sign of exponential draining – it’s a linear rush to the finish line. Filling – which you’d expect to be a perfect ramp if the tub had a constant cross-sectional area – is initially fast, then slows slightly once the tapered bottom section has filled. However, that effect doesn’t seem to be big enough to explain the outflow behavior.

[Figure: Bathtub data]

I’ve just realized that I have a straight-sided horse trough lying about, so I think we may need to head outside for another test …

Update: the follow-on to this is rather important.

Policy Resistance in Emerging Markets

A great example of policy undone by feedback, from Paul Krugman’s column, The Widening Gyre:

The really shocking thing, however, is the way the crisis is spreading to emerging markets – countries like Russia, Korea and Brazil.

These countries were at the core of the last global financial crisis, in the late 1990s (which seemed like a big deal at the time, but was a day at the beach compared with what we’re going through now). They responded to that experience by building up huge war chests of dollars and euros, which were supposed to protect them in the event of any future emergency. And not long ago everyone was talking about ‘decoupling,’ the supposed ability of emerging market economies to keep growing even if the United States fell into recession. ‘Decoupling is no myth,’ The Economist assured its readers back in March. ‘Indeed, it may yet save the world economy.’

That was then. Now the emerging markets are in big trouble. In fact, says Stephen Jen, the chief currency economist at Morgan Stanley, the ‘hard landing’ in emerging markets may become the ‘second epicenter’ of the global crisis. (U.S. financial markets were the first.)

What happened? In the 1990s, emerging market governments were vulnerable because they had made a habit of borrowing abroad; when the inflow of dollars dried up, they were pushed to the brink. Since then they have been careful to borrow mainly in domestic markets, while building up lots of dollar reserves. But all their caution was undone by the private sector’s obliviousness to risk.

In Russia, for example, banks and corporations rushed to borrow abroad, because dollar interest rates were lower than ruble rates. So while the Russian government was accumulating an impressive hoard of foreign exchange, Russian corporations and banks were running up equally impressive foreign debts. Now their credit lines have been cut off, and they’re in desperate straits.

The unstated closure to the loop is that emerging market governments’ borrowing in domestic markets and hoarding of foreign exchange were likely a cause of higher domestic rate spreads over dollar rates, and thus contributed to the undoing of the policy by driving other borrowing abroad.