The Ambiguity of Causal Loop Diagrams and Archetypes

I find causal loop diagramming to be a very useful brainstorming and presentation tool, but it falls short of what a model can do for you.

Here’s why. Consider the following pair of archetypes (Eroding Goals and Escalation, from wikipedia):

Eroding Goals and Escalation archetypes

Archetypes are generic causal loop diagram (CLD) templates, with a particular behavior story. The Escalation and Eroding Goals archetypes have identical feedback loop structures, but very different stories. So, there’s no unique mapping from feedback loops to behavior. In order to predict what a set of loops is going to do, you need more information.

Here’s an implementation of Eroding Goals:
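The original is a Vensim stock-flow diagram; as a stand-in, here’s a minimal Python sketch of the same structure (the adjustment times and the proportional formulations are my assumptions, not the archetype’s):

```python
# Eroding Goals sketch: two stocks, each adjusted by a flow proportional to the gap.
dt = 0.25
goal, conditions = 10.0, 0.0              # goal starts above current conditions
goal_adj_time, cond_adj_time = 5.0, 10.0  # hypothetical adjustment times
for step in range(int(40 / dt)):
    gap = goal - conditions
    conditions += dt * gap / cond_adj_time  # "actions" improve conditions
    goal -= dt * gap / goal_adj_time        # "pressure" erodes the goal
# The stocks converge; because the goal adjusts faster here, the compromise
# lands closer to the initial conditions than to the initial goal.
```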

Notice several things:

  • I had to specify where the stocks and flows are.
  • “Actions to Improve Goals” and “Pressure to Adjust Conditions” aren’t well defined (I made them proportional to “Gap”).
  • Gap is not a very good variable name.
  • The real world may have structure that’s not mentioned in the archetype (indicated in red).

Here’s Escalation:

The loop structure is mathematically identical; only the parameterization is different. Again, the missing information turns out to be crucial. For example, if A and B start with the same results, there is no escalation – A and B results remain constant. To get escalation, you either need (1) A and B to start in different states, or (2) some kind of drift or self-excitation in decision making (green arrow above).

Even then, you may get different results. (2) gives exponential growth, which is the standard story for escalation. (1) gives escalation that saturates:

The Escalation archetype would be better if it distinguished explicit goals for A and B results. Then you could mathematically express the key feature of (2) that gives rise to arms races:

  • A’s goal is x% more bombs than B
  • B’s goal is y% more bombs than A
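A minimal sketch of that goal structure (the percentages and adjustment time below are my choices):

```python
# Escalation sketch: A and B each chase a goal set relative to the other's results.
dt, adj_time = 0.25, 2.0          # hypothetical adjustment time
x_pct, y_pct = 0.10, 0.10         # each side wants 10% more than the other
a, b = 1.0, 1.0                   # equal starting results now escalate anyway
for step in range(int(40 / dt)):
    goal_a = (1 + x_pct) * b      # A's goal: x% more bombs than B
    goal_b = (1 + y_pct) * a      # B's goal: y% more bombs than A
    a += dt * (goal_a - a) / adj_time
    b += dt * (goal_b - b) / adj_time
# a and b grow exponentially (~5%/period here): the goals are always out of reach.
```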

Both of these models are instances of a generic second-order linear model that encompasses all possible things a linear model can do:
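In state-space form (my notation for the generic structure), that is:

$$\begin{aligned} \dot{x}_1 &= a_{11}\,x_1 + a_{12}\,x_2 \\ \dot{x}_2 &= a_{21}\,x_1 + a_{22}\,x_2 \end{aligned}$$

Here the diagonal terms $a_{11}$ and $a_{22}$ are the “inner” first-order loops, and the product of the off-diagonal terms $a_{12} a_{21}$ is the “outer” second-order loop.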

Notice that the first-order and second-order loops are disentangled here, which makes it easy to see the “inner” first-order loops (which often contribute damping) and the “outer” second-order loop, which can give rise to oscillation (as above) or the growth in the escalation archetype. That loop is difficult to discern when it’s presented as a figure-8.

Of course, one could map these archetypes to other figure-8 structures, like:

How could you tell the difference? You probably can’t, unless you consider what the stocks and flows are in an operational implementation of the archetype.

The bottom line is that the causal loop diagram of an archetype or anything else doesn’t tell you enough to simulate the behavior of the system. You have to specify additional assumptions. If the system is nonlinear or stochastic, there might be more assumptions than I’ve shown above, and they might be important in new ways. The process of surfacing and testing those assumptions by building a stock-flow model is very revealing.

If you don’t build a model, you’re in the awkward position of intuiting behavior from structure that doesn’t uniquely specify any particular mode. In doing so, you might be way ahead of non-systems thinkers approaching the same problem with a laundry list. But your ability to find errors, incorporate data, and identify leverage points is far greater if you can simulate.

The model: wikiArchetypes1b.mdl (runs in any version of Vensim)

The dynamics of UFO sightings

The Economist reports on UFO sightings:

This deserves a model:


UFOs.vpm (Vensim published model, requires Pro/DSS or the free Reader)

The model is a mixed discrete/continuous simulation of an individual sleeping, working and drinking. This started out as a multi-agent model, but I realized along the way that sleeping, working and drinking is a fairly ergodic process on long time scales (at least with respect to UFOs), so one individual with a distribution of behaviors over time or simulations is as good as a population of agents.
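A toy rendering of that logic (the schedule and sighting propensities below are my inventions, not the model’s):

```python
import numpy as np

rng = np.random.default_rng(1)
# One individual's day as a discrete schedule, with a sighting propensity per
# state; many simulated days stand in for many agents (the ergodic argument).
schedule = ["sleep"] * 7 + ["awake"] * 2 + ["work"] * 8 + ["awake"] * 4 + ["drink"] * 3
propensity = {"sleep": 0.0, "awake": 1.0, "work": 0.3, "drink": 5.0}
sightings = np.zeros(24)
for day in range(10_000):
    for hour, state in enumerate(schedule):
        if rng.random() < 1e-4 * propensity[state]:
            sightings[hour] += 1
# Hourly totals show an evening peak driven almost entirely by the drink hours.
```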

The model replicates the data somewhat faithfully:

The model shows a morning peak (people awake but out and about) and a workday dip (inside, lurking near the water cooler), but the data do not. This suggests to me that:

  • Alcohol is the dominant factor in sightings.
  • I don’t party nearly enough to see a UFO.

Actually, now that I’ve built this version, I think the interesting model would have a longer time horizon, to address the non-ergodic part: contagion of sightings across individuals.

h/t Andreas Größler.

Early economic dynamics: Samuelson's multiplier-accelerator

Paul Samuelson’s 1939 analysis of the multiplier-accelerator is a neat piece of work. Too bad it’s wrong.

Interestingly, this work dates from a time in which the very idea of a mathematical model was still questioned:

Contrary to the impression commonly held, mathematical methods properly employed, far from making economic theory more abstract, actually serve as a powerful liberating device enabling the entertainment and analysis of ever more realistic and complicated hypotheses.

Samuelson should be hailed as one of the early explorers of a very big jungle.

The basic statement of the model is very simple:

$$Y_t = g_t + C_t + I_t$$
$$C_t = \alpha Y_{t-1}$$
$$I_t = \beta \left( C_t - C_{t-1} \right)$$

where $Y_t$ is national income, $g_t$ government expenditure, $C_t$ consumption, $I_t$ induced private investment, $\alpha$ the marginal propensity to consume, and $\beta$ “the relation” (the accelerator).

In quasi-System Dynamics notation, that looks like:

SamuelsonDiagramB

A caveat:

The limitations inherent in so simplified a picture as that presented here should not be overlooked. In particular, it assumes that the marginal propensity to consume and the relation are constants; actually these will change with the level of income, so that this representation is strictly a marginal analysis to be applied to the study of small oscillations. Nevertheless it is more general than the usual analysis.

Samuelson hand-simulated the model (it’s fun – once – but he runs four scenarios). He then solves the discrete time system to identify four regions with different behavior: goal seeking (exponential decay to a steady state), damped oscillations, unstable (explosive) oscillations, and unstable exponential growth or decline. He nicely maps the parameter space:

parameterSpace
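For the flavor of it, here’s a quick replication of the difference equations (α = 0.5, β = 1, one of the damped-oscillation cases, with income starting at zero and government spending of 1 switching on):

```python
# Multiplier-accelerator difference equations: alpha = MPC, beta = accelerator.
alpha, beta, g = 0.5, 1.0, 1.0   # a damped-oscillation case
Y, C = [0.0], [0.0]              # income and consumption start at zero
for t in range(1, 25):
    C.append(alpha * Y[t - 1])             # consumption lags income
    I = beta * (C[t] - C[t - 1])           # induced investment: "the relation"
    Y.append(g + C[t] + I)                 # income identity
print([round(y, 2) for y in Y[1:9]])
# [1.0, 2.0, 2.5, 2.5, 2.25, 2.0, 1.88, 1.88]
# -> damped oscillation toward the multiplier equilibrium g/(1-alpha) = 2
```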

So where’s the problem?

The first problem is not so much of Samuelson’s making as it is a limitation of the pre-computer era. The essential simplification of the model for analytic solution is:

$$Y_t = g_t + \alpha(1+\beta)\,Y_{t-1} - \alpha\beta\,Y_{t-2}$$

This is fine, but it’s incredibly abstract. Presented with this equation out of context – as they often are – readers find it almost impossible to posit a sensible description of how the economy works that would enable them to critique the model. This kind of notation remains common in econometrics, to the detriment of understanding and progress.

At the first SD conference, Gil Low presented a critique and reconstruction of the MA model that addressed this problem. He reconstructed the model, providing an operational description of the economy that remains consistent with the multiplier-accelerator framework.

The mere act of crafting a stock-flow description reveals problem #1: the basic multiplier-accelerator doesn’t conserve stuff.

Non-conservation of stuff leads to problem #2. When you do implement inventories and capital stocks, the period of multiplier-accelerator oscillations moves to about 2 decades – far from the 3-7 year period of the business cycle that Samuelson originally sought to explain. This occurs in part because the capital stock, with a 15-year lifetime, introduces considerable momentum. You simply can’t discover this problem in the original multiplier-accelerator framework, because too many physical and behavioral time constants are buried in the assumptions associated with its 2 parameters.

Low goes on to introduce labor, finding that variations in capacity utilization do produce oscillations of the required time scale.

I think there’s a third problem with the approach as well: discrete time. Discrete time notation is convenient for matching a model to data sampled at regular intervals. But the economy is not even remotely close to operating in discrete annual steps. Moreover, a one-year step is dangerously close to the 3-7 year period of the business cycle phenomenon of interest, so some of the oscillatory tendency may be an artifact of discrete time sampling. While improper oscillations can be detected analytically, with discrete time notation it’s not easy to apply the simple heuristic of halving the time step to test stability, because halving merely compresses the time axis or causes problems with implicit time constants, depending on how the model is implemented. Halving the time step and switching to RK4 integration illustrates these issues:

RK4
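To make the first issue concrete, here’s a sketch (my construction, not Samuelson’s or Low’s) showing that rerunning the same per-step equations with a halved step just compresses the oscillation in calendar time:

```python
def simulate(dt, horizon=24.0, alpha=0.5, beta=1.0, g=1.0):
    # Same per-step difference equations; only the calendar length of a step changes.
    Y, C = [0.0], [0.0]
    for _ in range(int(horizon / dt)):
        C.append(alpha * Y[-1])
        Y.append(g + C[-1] + beta * (C[-1] - C[-2]))
    return Y

# Peaks of simulate(0.5) arrive at half the calendar time of simulate(1.0):
# the lags live in the step itself, so shrinking the step shrinks the dynamics.
```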

It seems like a no-brainer that economic dynamics models should start with operational descriptions, continuous time, and engineering state-variable or stock-flow notation. Abstraction and discrete time should emerge as simplifications, as needed for analysis or calibration. The fact that this has not become standard operating procedure suggests that the invisible hand is sometimes rather slow as it gropes for understanding.

The model is in my library.

See Richardson’s Feedback Thought in Social Science and Systems Theory for more history.

Samuelson’s Multiplier Accelerator

This is a fairly direct implementation of the multiplier-accelerator model from Paul Samuelson’s classic 1939 paper,

“Interactions between the Multiplier Analysis and the Principle of Acceleration” PA Samuelson – The Review of Economics and Statistics, 1939 (paywalled on JSTOR, but if you register you can read a limited number of publications for free)

SamuelsonDiagramB

This is a nice example of very early economic dynamics analysis, and it also demonstrates the implementation of discrete time notation in Vensim.

Bulbs banned

The incandescent ban is underway.

Conservative think tanks still hate it.

Actually, I think it’s kind of a dumb idea too – but not as bad as you might think, and in the absence of real energy or climate policy, not as dumb as doing nothing. You’d have to be really dumb to believe this:

The ban was pushed by light bulb makers eager to up-sell customers on longer-lasting and much more expensive halogen, compact fluorescent, and LED lighting.

More expensive? Only in a universe where energy and labor costs don’t count (Texas?) and for a few applications (very low usage, or chicken warming).

Over the last couple years I’ve replaced almost all lighting in my house with LEDs. The light is better, the emissions are lower, and I have yet to see a failure (unlike cheap CFLs).

I built a little bulb calculator in Vensim, which shows huge advantages for LEDs in most situations; even with conservative assumptions (low social price of carbon, minimum-wage labor), it’s hard to make incandescents look good. It’s also a nice example of using Vensim for spreadsheet replacement, on a problem that’s not very dynamic but has natural array structure.
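For a back-of-envelope version of the comparison (all numbers below are my placeholder assumptions, not the model’s):

```python
# Total cost of ownership per socket over 10,000 hours of use.
bulbs = {  # name: (watts, purchase price $, rated life in hours)
    "incandescent": (60, 0.50, 1_000),
    "LED":          (9,  5.00, 25_000),
}
hours, elec_price, labor = 10_000, 0.12, 0.50  # h, $/kWh, $ per bulb swap
for name, (watts, price, life) in bulbs.items():
    cost = (hours / life) * (price + labor) + hours * watts / 1000 * elec_price
    print(f"{name}: ${cost:.0f}")
# incandescent: ~$82 vs. LED: ~$13 -- energy use dwarfs the purchase price
```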

Get it: bulb.mdl or bulb.vpm (uses arrays, so you’ll need the free Model Reader)

What's the empirical distribution of parameters?

Vensim‘s answer to exploring ill-behaved problem spaces is either to do hill-climbing with random restarts, or MCMC and simulated annealing. Either way, you need to start with some initial distribution of points to search.

It’s helpful if that distribution is somehow efficient at exploring the interesting parts of the space. I think this is closely related to the problem of selecting uninformative priors in Bayesian statistics. There’s lots of research about appropriate uninformative priors for various kinds of parameters. For example,

  • If a parameter represents a probability, one might choose the Jeffreys or Haldane prior.
  • Indifference to units, scale, and inversion might suggest the use of a log-uniform prior, where nothing else is known about a positive parameter.
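For instance, a log-uniform draw is uniform in log(x), so every decade between the bounds is equally likely (the bounds here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lo, hi = 1e-3, 1e3   # arbitrary positive bounds
samples = np.exp(rng.uniform(np.log(lo), np.log(hi), size=10_000))
# Roughly equal counts per decade: [1e-3, 1e-2), [1e-2, 1e-1), ...
```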

However, when a user specifies a parameter in Vensim, we don’t even know what it represents. So what’s the appropriate prior for a parameter that might be positive or negative, a probability, a time constant, a scale factor, an initial condition for a physical stock, etc.?

On the other hand, we aren’t quite as ignorant as the pure maximum entropy derivation usually assumes. For example,

  • All numbers have to lie between the largest and smallest float or double, i.e. about ±3.4e38 or ±1.8e308.
  • More practically, no one scales their models such that a parameter like 6.5e173 would ever be required. There’s a reason that metric prefixes range from yotta to yocto (10^24 to 10^-24). The only constant I can think of that approaches that range is Avogadro’s number (though there are probably others), and that’s not normally a changeable parameter.
  • For lots of things, one can impose more constraints, given a little more information:
    • A time constant or delay must lie on [TIME STEP,infinity], and the “infinity” of interest is practically limited by the simulation duration.
    • A fractional rate of change similarly must lie on [-1/TIME STEP,1/TIME STEP] for stability
    • Other parameters probably have limits for stability, though it may be hard to discover them except by experiment.
    • A parameter with units of year is probably modern, [1900-2100], unless you’re doing Mayan archaeology or paleoclimate.
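As a sketch, those heuristics might translate into default bounds like this (the function and its categories are hypothetical, not a Vensim feature):

```python
def default_bounds(kind, time_step=0.125, final_time=100.0):
    """Hypothetical default parameter bounds from the heuristics above."""
    if kind == "probability":
        return (0.0, 1.0)
    if kind == "time constant":
        return (time_step, final_time)      # "infinity" capped at the run length
    if kind == "fractional rate":
        return (-1.0 / time_step, 1.0 / time_step)
    if kind == "calendar year":
        return (1900.0, 2100.0)
    return (-1e24, 1e24)                     # the yotta-to-yocto practical range
```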

At some point, the assumptions become too heroic, and we need to rely on users for some help. But it would still be really interesting to see the distribution of all parameters in real models. (See next …)

Environmental Homeostasis

Replicated from

The Emergence of Environmental Homeostasis in Complex Ecosystems

The Earth, with its core-driven magnetic field, convective mantle, mobile lid tectonics, oceans of liquid water, dynamic climate and abundant life is arguably the most complex system in the known universe. This system has exhibited stability in the sense of, bar a number of notable exceptions, surface temperature remaining within the bounds required for liquid water and so a significant biosphere. Explanations for this range from anthropic principles in which the Earth was essentially lucky, to homeostatic Gaia in which the abiotic and biotic components of the Earth system self-organise into homeostatic states that are robust to a wide range of external perturbations. Here we present results from a conceptual model that demonstrates the emergence of homeostasis as a consequence of the feedback loop operating between life and its environment. Formulating the model in terms of Gaussian processes allows the development of novel computational methods in order to provide solutions. We find that the stability of this system will typically increase then remain constant with an increase in biological diversity and that the number of attractors within the phase space exponentially increases with the number of environmental variables while the probability of the system being in an attractor that lies within prescribed boundaries decreases approximately linearly. We argue that the cybernetic concept of rein control provides insights into how this model system, and potentially any system that is comprised of biological to environmental feedback loops, self-organises into homeostatic states.

See my related blog post for details.


Wonderland

Wonderland model by Sanderson et al.; see Alexandra Milik, Alexia Prskawetz, Gustav Feichtinger, and Warren C. Sanderson, “Slow-fast Dynamics in Wonderland,” Environmental Modeling and Assessment 1 (1996) 3-17.

Here’s an excerpt from my 1998 critique of this model: