Forecasting Uncertainty

Here’s an example that illustrates what I think Forrester was talking about.

This is a set of scenarios from a simple SIR epidemic model in Ventity.

There are two sources of uncertainty in this model: the aggressiveness of the disease (transmission rate) and the effectiveness of an NPI policy that reduces transmission.

Early in the epidemic, at time 40, when the decision to intervene is made, it’s hard to tell the difference between a high transmission rate and a lower transmission rate with a slightly larger initial infected population. This is especially true in the real world, because early in an epidemic the information-gathering infrastructure is weak.

However, you can still make decent momentum forecasts by extrapolating from the early returns for a few more days – to time 45 or 50, perhaps. But this is of limited use, because that horizon roughly corresponds to the implementation lag for the policy. So, over the period of skilled momentum forecasts, it’s not possible to have much influence.

Beyond time 50, there’s a LOT of uncertainty in the disease trajectory, both from the uncontrolled baseline (is R0 low or high?) and the policy effectiveness (do masks work?). The yellow curve (high R0, low effectiveness) illustrates a key feature of epidemics: a policy that falls short of lowering the reproduction ratio below 1 results in continued growth of infection. It’s still beneficial, but constituents are likely to perceive this as a failure and abandon the policy (returning to the baseline, which is worse).

Some of these features are easier to see by looking at the cumulative outcome. Notice that the point prediction for times after about 60 has extremely large variance. But not everything is uncertain. In the uncontrolled baseline runs (green and brownish), eventually almost everyone gets the disease (it’s a matter of when, not if), so uncertainty actually decreases after time 90 or so. Also, even though the absolute outcome varies a lot, the policy always improves on the baseline (at least neglecting cost, as we are here). So, while the forecast for time 100 might be bad, the contingent prediction for the effect of the policy is qualitatively insensitive to the uncertainty.
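
For concreteness, here’s a minimal sketch of this kind of experiment in Python rather than Ventity. The parameter ranges, the intervention time and the lag are illustrative assumptions, not the settings of the actual model, but they show the same pattern: the point forecast of cumulative infections varies a lot across the ensemble, while the improvement from the policy is positive in every run.

# A sketch of the scenario ensemble described above, in Python rather than
# Ventity. Parameter ranges, intervention time and lag are illustrative
# assumptions, not the settings of the original model.
import numpy as np

def sir(beta, npi_effect, npi_start=50.0, gamma=0.1, i0=1e-5, dt=0.25, t_end=200.0):
    """Euler-integrated SIR; the NPI scales transmission down after npi_start."""
    s, i, r = 1.0 - i0, i0, 0.0
    for k in range(int(t_end / dt)):
        b = beta * (1.0 - npi_effect) if k * dt >= npi_start else beta
        new_inf = b * s * i * dt
        recov = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - recov, r + recov
    return i + r   # cumulative infections at t_end

rng = np.random.default_rng(1)
for _ in range(10):
    beta = rng.uniform(0.15, 0.3)    # R0 = beta/gamma between 1.5 and 3
    effect = rng.uniform(0.1, 0.6)   # NPI effectiveness: fractional reduction in transmission
    base = sir(beta, 0.0)            # uncontrolled baseline
    policy = sir(beta, effect)
    print(f"R0={beta / 0.1:.1f}  effectiveness={effect:.2f}  "
          f"baseline={base:.3f}  policy={policy:.3f}  improvement={base - policy:.3f}")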

Reading between the lines: Forrester on forecasting

I’d like to revisit Jay Forrester’s Next 50 Years article, with particular attention to a couple things I think about every day: forecasting and prediction. I previously tackled Forrester’s view on data.

Along with unwise simplification, we also see system dynamics being drawn into attempting what the client wants even when that is unwise or impossible. Of particular note are two kinds of effort—using system dynamics for forecasting, and placing emphasis on a model’s ability to exactly fit historical data.

With regard to forecasting specific future conditions, we face the same barrier that has long plagued econometrics.

Aside from what Forrester is about to discuss, I think there’s also a key difference, as of the time this was written. Econometric models typically employed lots of data and fast computation, but suffered from severe restrictions on functional form (linearity or log-linearity, Normality of distributions, etc.). SD models had essentially unrestricted functional form, particularly with respect to integration and arbitrary nonlinearity, but suffered from insufficient computational power to do everything we would have liked. To some extent, the fields are converging due to loosening of these constraints, in part because the computer under my desk today is bigger than the fastest supercomputer in the world was when I finished my dissertation years ago.

Econometrics has seldom done better in forecasting than would be achieved by naïve extrapolation of past trends. The reasons for that failure also afflict system dynamics. The reasons why forecasting future conditions fail are fundamental in the nature of systems. The following diagram may be somewhat exaggerated, but illustrates my point.

A system variable has a past path leading up to the current decision time. In the short term, the system has continuity and momentum that will keep it from deviating far from an extrapolation of the past. However, random events will cause an expanding future uncertainty range. An effective forecast for conditions at a future time can be made only as far as the forecast time horizon, during which past continuity still prevails. Beyond that horizon, uncertainty is increasingly dominant. However, the forecast is of little value in that short forecast time horizon because a responding decision will be defeated by the very continuity that made the forecast possible. The resulting decision will have its effect only out in the action region when it has had time to pressure the system away from its past trajectory. In other words, one can forecast future conditions in the region where action is not effective, and one can have influence in the region where forecasting is not reliable. You will recall a more complete discussion of this in Appendix K of Industrial Dynamics.

I think Forrester is basically right. However, I think there’s a key qualification. Some things – particularly physical systems – can be forecast quite well, not just because momentum permits extrapolation, but because there is a good understanding of the system. There’s a continuum of forecast skill, between “all models are wrong” and “some models are useful,” and you need to know where you are on that.

Fortunately, your model can tell you about the prospects for forecasting. You can characterize the uncertainty in the model parameters and environmental drivers, generate a distribution of outcomes, and use that to understand where forecasts will begin to fall apart. This is extremely valuable knowledge, and it may be key for implementation. Stakeholders want to know what your intervention is going to do to the system, and if you can’t tell them – with confidence bounds of some sort – they may have no reason to believe your subsequent attributions of success or failure.
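
As a sketch of what that can look like in practice: given an ensemble of runs, you can compute percentile bands over time and flag the point at which they become too wide to support a point forecast. The ensemble, the band and the tolerance below are arbitrary illustrations, not a general rule.

# Given an ensemble of runs, flag the time at which the forecast band gets
# too wide to support a point prediction. The ensemble, band and tolerance
# here are arbitrary illustrations.
import numpy as np

def forecast_horizon(ensemble, times, rel_tolerance=0.25):
    """ensemble: (runs, times) array of an outcome variable. Returns the first
    time at which the 5-95% band exceeds rel_tolerance of the median."""
    lo, med, hi = np.percentile(ensemble, [5, 50, 95], axis=0)
    width = (hi - lo) / np.maximum(np.abs(med), 1e-12)
    too_wide = np.nonzero(width > rel_tolerance)[0]
    return times[too_wide[0]] if too_wide.size else None

# toy ensemble: exponential growth with an uncertain growth rate
times = np.linspace(0.0, 100.0, 101)
rates = np.random.default_rng(2).uniform(0.02, 0.06, size=200)
ensemble = np.exp(np.outer(rates, times))
print("point forecasts start to fall apart around t =", forecast_horizon(ensemble, times))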

In the hallways of SD, I sometimes hear people misconstrue Forrester, to say that “SD doesn’t predict.” This is balderdash. SD is all about prediction. We may not make point predictions of the future state of a system, but we absolutely make predictions about the effects of a policy change, contingent on uncertainties about parameters, structure and external disturbances. If we didn’t do that, what would be the point of the exercise? That’s precisely what JWF is getting at here:

The emphasis on forecasting future events diverts attention from the kind of forecast that system dynamics can reliably make, that is, the forecasting of the kind of continuing effect that an enduring policy change might cause in the behavior of the system. We should not be advising people on the decision they should now make, but rather on how to change policies that will guide future decisions. A properly designed system dynamics model is effective in forecasting how different decision-making policies lead to different kinds of system behavior.

Better Documentation

There’s a recent talk by Stefan Rahmstorf that gives a good overview of the tipping point in the AMOC, which has huge implications.

I thought it would be neat to add the Stommel box model to my library, because it’s a nice low-order example of a tipping point. I turned to a recent update of the model by Wei & Zhang in GRL.

It’s an interesting paper, but it turns out that the documentation falls short of the standards we like to see in SD, making it a pain to replicate. The good news is that the equations are provided:

The bad news is that the explanation of these terms is brief to the point of absurdity:

This paragraph requires you to maintain a mental stack of no fewer than 12 items if you want to match the symbols to their explanations. You also have to read carefully to realize that the prime (′) means “anomaly” rather than “derivative”.

The supplemental material does at least include a table of parameters – but it’s incomplete. To find the delay taus, for example, you have to consult the text and figure captions, because they vary. Initial conditions are also not conveniently specified.

I like the terse mathematical description of a system because you can readily take in the entirety of a state variable or even the whole system at a glance. But it’s not enough to check the “we have Greek letters” box. You also need to check the “serious person could reproduce these results in a reasonable amount of time” box.

Code would be a nice complement to the equations, though that comes with its own problems: tower-of-Babel language choices and extraneous cruft in the code. In this case, I’d be happy with just a more complete high-level description – at least:

  • A complete table of parameters and units, with values used in various experiments.
  • Inclusion of initial conditions for each state variable.
  • Separation of terms in the RhoH-RhoL equation.

A lot of these issues are things you wouldn’t even know are there until you attempt replication. Unfortunately, that is something reviewers seldom do. But electrons are cheap, so there’s really no reason not to do a more comprehensive documentation job.
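
To be clear about what I’m asking for, here’s roughly how little it takes. The sketch below is a Stommel-type two-box model in a generic nondimensional textbook form with made-up values (not the Wei & Zhang formulation), but every parameter and initial condition is stated, and it runs.

# A sketch of a self-contained description: a Stommel-type two-box model in
# a common nondimensional textbook form, with every parameter and initial
# condition stated. Values are illustrative; this is NOT the Wei & Zhang
# formulation.
ETA1 = 3.0    # thermal forcing (dimensionless)
ETA2 = 1.0    # freshwater forcing (dimensionless); often the bifurcation parameter
ETA3 = 0.3    # ratio of relaxation time scales (dimensionless)
DT = 0.01     # time step (dimensionless time)
T_END = 100.0

T, S = 1.0, 0.5   # initial temperature and salinity differences (dimensionless)

for _ in range(int(T_END / DT)):      # Euler integration
    q = abs(T - S)                    # density-driven exchange flow
    dT = ETA1 - T * (1.0 + q)
    dS = ETA2 - S * (ETA3 + q)
    T, S = T + dT * DT, S + dS * DT

# sweeping ETA2 (and the initial conditions) is what exposes the multiple
# equilibria and tipping behavior
print(f"equilibrium: T = {T:.3f}, S = {S:.3f}, flow T-S = {T - S:.3f}")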

 

A case for strict unit testing

Over on the Vensim forum, Jean-Jacques Laublé points out an interesting bug in the World3 population sector. His forum post includes the model, with a revealing extreme conditions test and a correction. I think it’s important enough to copy my take here:

This is a very interesting discovery. The equations in question are:

maturation 14 to 15 =
 ( ( Population 0 To 14 ) )
 * ( 1
 - mortality 0 to 14 )
 / 15
 Units: Person/year
 The fractional rate at which people aged 0-14 mature into the
 next age cohort (MAT1#5).

**************************************************************
 mortality 0 to 14=
 IF THEN ELSE(Time = 2020 * one year, 1 / one year, mortality 0 to 14 table
 ( life expectancy/one year ) )
 Units: 1/year
 The fractional mortality rate for people aged 0-14 (M1#4).

**************************************************************

(The second is the one modified for the pulse mortality test.)

In the ‘maturation 14 to 15’ equation, the obvious issue is that ‘15’ is a hidden dimensioned parameter. One might argue that this instance is ‘safe’ because 15 years is definitionally the residence time of people in the 0 to 14 cohort – but I would still avoid this usage, and make the 15 yrs a named parameter, like “child cohort duration”, with a corresponding name change to the stock. If nothing else, this would make the structure easier to reuse.

The sneaky bit here, revealed by JJ’s test, is that the ‘1’ in the term (1 - mortality 0 to 14) is not a benign dimensionless number, as we often assume in constructions like 1/(1+a*x). This 1 actually represents the maximum feasible stock outflow rate, in fraction/year, implying that a mortality rate of 1/yr, as in the test input, would consume the entire outflow capacity, leaving no children to mature into the next cohort. This is incorrect, because the maximum feasible outflow rate is 1/TIME STEP, and with TIME STEP = 0.5 that 1 should really be 2 ~ frac/year. This is why maturation wrongly goes to 0 in JJ’s experiment, even though some children survive the step and should age into the next cohort.

In addition, this construction means that the units in the original equation are inconsistent – the ‘15’ has to be assumed to be dimensionless for the equation to balance. If we assign correct units to the inputs, we have a problem:

maturation 14 to 15 = ~ people/year/year
 ( ( Population 0 To 14 ) ) ~ people
 * ( 1 - mortality 0 to 14 ) ~ fraction/year
 / 15 ~ 1/year

Obviously the left side of this equation, maturation, cannot be people/year/year.

JJ’s correction is:

maturation 14 to 15=
 ( ( Population 0 To 14 ) )
 * ( 1 - (mortality 0 to 14 * TIME STEP))
 / size of the 0 to 14 population

In this case, the ‘1’ represents the maximum fraction of the population that can flow out in a time step, so it really is dimensionless. (mortality 0 to 14 * TIME STEP) represents the fractional outflow from mortality within the time step, so it too is properly dimensionless (1/year * year). You could also write this term as:

( 1/TIME STEP - mortality 0 to 14 ) / (1/TIME STEP)

In this case you can see that the term is reducing maturation by the fraction of cohort residents who don’t make it to the next age group. 1/TIME STEP represents the maximum feasible outflow, i.e. 2/year if TIME STEP = 0.5 year. In this form, it’s easy to see that this term approaches 1 (no effect) in the continuous time limit as TIME STEP approaches 0.
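
To make the difference concrete, here’s a toy recreation of the pulse test in Python (not the actual World3 structure); the names and numbers are illustrative.

# A toy recreation of the pulse mortality test: one child cohort stock,
# TIME STEP = 0.5, and a one-step mortality pulse of 1/year. Names and
# numbers are illustrative, not taken from World3.
DT = 0.5                 # TIME STEP, years
COHORT_DURATION = 15.0   # years spent in the 0-14 cohort

original  = lambda pop, m: pop * (1 - m) / COHORT_DURATION        # people/year
corrected = lambda pop, m: pop * (1 - m * DT) / COHORT_DURATION   # people/year

pop = 1000.0        # people in the 0-14 cohort
mortality = 1.0     # 1/year, the extreme-conditions test input
# deaths during the step = mortality * pop * DT = 500, so half the cohort survives
for name, rule in [("original", original), ("corrected", corrected)]:
    print(f"{name}: maturation during the pulse = {rule(pop, mortality):.1f} people/year")
# original:  0.0  -> nobody matures, even though half the cohort survives the step
# corrected: 33.3 -> the surviving half still ages into the next cohort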

I should add that these issues probably have only a tiny influence on the kind of experiments performed in Limits to Growth, and certainly wouldn’t change the qualitative conclusions. However, I think there’s still a strong argument for careful attention to units: a model that’s right for the wrong reasons is a danger to future users (including yourself), who might use it in unanticipated ways that challenge its robustness in extreme conditions.

Just Say No to Complex Equations

Found in an old version of a project model:

IF THEN ELSE( First Time Work Flow[i,Proj,stage
] * TIME STEP >= ( Perceived First Time Scope UEC Work
[i,Proj,uec] + Unstarted Work[i,Proj,stage] )
:OR: Task Is Active[i,Proj,Workstage] = 0
:OR: avg density of OOS work[i,Proj,stage] > OOS density threshold,
Completed work still out of sequence[i,Proj,stage] / TIME STEP
+ new work out of sequence[i,Proj,stage] ,
MIN( Completed work still out of sequence[i,Proj,stage] / Minimum Time to Retrofit Prerequisites into OOS Work
+ new work out of sequence[i,Proj,stage],
new work in sequence[i,Proj,stage]
* ZIDZ( avg density of OOS work[i,Proj,stage],
1 - avg density of OOS work[i,Proj,stage] ) ) )

An equation like this needs to be broken into at least 3 or 4 human-readable chunks. In reviewing papers for the SD conference, I see similar constructions more often than I’d like.

Defining SD

Open Access Note by Asmeret Naugle, Saeed Langarudi, Timothy Clancy: https://doi.org/10.1002/sdr.1762

Abstract
A clear definition of system dynamics modeling can provide shared understanding and clarify the impact of the field. We introduce a set of characteristics that define quantitative system dynamics, selected to capture core philosophy, describe theoretical and practical principles, and apply to historical work but be flexible enough to remain relevant as the field progresses. The defining characteristics are: (1) models are based on causal feedback structure, (2) accumulations and delays are foundational, (3) models are equation-based, (4) concept of time is continuous, and (5) analysis focuses on feedback dynamics. We discuss the implications of these principles and use them to identify research opportunities in which the system dynamics field can advance. These research opportunities include causality, disaggregation, data science and AI, and contributing to scientific advancement. Progress in these areas has the potential to improve both the science and practice of system dynamics.

I shared some earlier thoughts here, but my refined view is in the SDR now:


Invited Commentaries by Tom Fiddaman, Josephine Kaviti Musango, Markus Schwaninger, Miriam Spano: https://doi.org/10.1002/sdr.1763

Model quality: draining the swamp for large models

In my high road post a while ago, I advocated “voluntary simplicity” as a way of avoiding a large model with an insurmountable burden of undiscovered rework.

Sometimes this is not a choice, because you’re asked to repurpose a large model for some related question. Maybe you didn’t build it, or maybe you did, but it’s time to pause and reflect before proceeding. It’s critical to determine where on the spectrum above the model lies – between the Vortex of Confusion and Nirvana.

I will assert from experience that a large model was almost always built with some narrow questions in mind, and that it was exercised mainly over parts of its state space relevant to those questions. It’s quite likely that a lot of bad behaviors lurk in other parts of the state space. When you repurpose the model those things are going to bite you.

If you start down the red road, “we’ll just add a little stuff for the new question…” you will find yourself in a world of hurt later. It’s absolutely essential that you first do some rigorous testing to establish what the limitations of the model might be in the new context.

The reason scope is so dangerous is that its effect on your ability to make quality progress is nonlinear. The number of interactions you have to manage, and therefore the opportunities for errors and the complexity of corrections, grows with the square of scope. The speed of your model falls roughly in proportion to 1/scope, as does the time you have to pay attention to each variable.

My tentative recipe for success, or at least survival:

1. Start early. Errors beget more errors, so the sooner you discover them, the sooner you can arrest that vicious cycle.

2. Be ruthless. Don’t test to see if the model can answer the new question; test to see if you can break it and get nonsense answers.

3. Use your tools. Pay attention to unit errors and runtime warnings. Write Reality Checks to automate tests. Set ranges on key variables to ensure that they’re within reason.

4. Isolate. Because of the nonlinear interaction problem, it’s hard to interpret tests on a full model. Instead, extract components and test them in isolation. You can do this by copy-pasting, or even easier in Vensim, by using Synthesim Overrides to modify inputs to steps, ramps, etc.

5. Don’t let go. When you find a problem, track it back to its root cause.

6. Document. Keep a lab notebook, or an email stream, or a todo list, so you don’t lose the thought when you have multiple issues to chase.

7. Be extreme. Pick a stock and kick it with a pulse or an override (see the sketch after this list). Take all the people out of the factory, or all the ships out of the fleet. What happens? Does anything go negative? Do decisions remain consistent with goals?

8. Calibrate. Calibration against data can be a useful way to find issues, but it’s much slower than other options, so this is something to pursue late in the process. Also, remember that model-data gaps are equally likely to reveal a problem with the data.

9. Rebuild. If you’re finding a lot of problems, you may be better off starting clean, using the existing model as a conceptual guide, but reconsidering the detailed design of the implementation.

10. Submodel. It’s often hard to fix something inside the full plate of spaghetti. But you may be able to identify a solution in an external submodel, free of distractions, and then transplant it back into the host.

11. Reduce. If you can’t rebuild the full scope within available resources, cut things out. This may not be appetizing to the client, but it’s certainly better than delivering a fragile model that only works if you don’t touch it.

12. If you find you’re in a hole, stop digging. Don’t add any features until you have things under control, because they’ll only exacerbate the problems.

13. Communicate. Let the client, and your team, know what you’re up to, and why quality is more important than their cherished kitchen sink.
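
To make items 3 and 7 a little more concrete, here’s a sketch of an automated extreme-conditions test in Python, using a purely illustrative backlog/workforce toy; in Vensim you’d express the same idea as a Reality Check driven by a test input.

# A sketch of an automated extreme-conditions test (items 3 and 7). The
# backlog/workforce toy model here is purely illustrative.
import numpy as np

def run(kick_workers_at=None, dt=0.25, t_end=50.0):
    backlog, workers = 100.0, 10.0
    inflow, productivity, hiring = 8.0, 1.0, 0.5   # tasks/time, tasks/person/time, people/time
    hist = []
    for k in range(int(t_end / dt)):
        t = k * dt
        if kick_workers_at is not None and abs(t - kick_workers_at) < dt / 2:
            workers = 0.0                          # the extreme test: empty the stock
        completion = min(productivity * workers, backlog / dt)   # can't overdrain the backlog
        backlog += dt * (inflow - completion)
        workers += dt * hiring
        hist.append((t, backlog, workers, completion))
    return np.array(hist)

result = run(kick_workers_at=25.0)
assert (result[:, 1] >= 0).all(), "backlog went negative"
assert (result[:, 2] >= 0).all(), "workforce went negative"
k = int(np.argmin(np.abs(result[:, 0] - 25.0)))
assert result[k, 3] == 0.0, "work got done with nobody in the factory"
print("extreme-conditions test passed")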

Integration & Correlation

Claims by AI chatbots, engineers and Nobel prize winners notwithstanding, absence of correlation does not prove absence of causation, any more than presence of correlation proves presence of causation. Bard outlines several reasons for this, from noise to nonlinearity, but misses a key one: bathtub statistics.

Here’s a really simple example of how this reasoning can go wrong. Consider a system with a stock Y(t) that integrates a flow X(t):

X(t) = -t

Y(t) = ∫X(t)dt

We don’t need to simulate to find that Y(t) = -t^2/2 + C.

Over the interval t=[-1,1] the X and Y time series look like this:

The X-Y relationship is parabolic, with correlation zero:

Zero correlation can’t mean “not causal” because we constructed the system to be causal. Even worse, the sign of the relationship depends on the subset of the interval you examine:
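
This is easy to verify numerically; a quick sketch in Python:

# Checking the example numerically: X(t) = -t, Y(t) = -t^2/2 on t in [-1, 1].
import numpy as np

t = np.linspace(-1, 1, 2001)
X = -t
Y = -t**2 / 2

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print("corr over [-1, 1]:", round(corr(X, Y), 3))                   # ~ 0
print("corr over [-1, 0]:", round(corr(X[t <= 0], Y[t <= 0]), 3))   # ~ -0.97
print("corr over [ 0, 1]:", round(corr(X[t >= 0], Y[t >= 0]), 3))   # ~ +0.97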


This is not the only puzzling case. Consider instead:

X(t) = 1

Y(t) = ∫X(t)dt = t + C

In this case, X(t) has zero variance. But Corr(X,Y) = Cov(X,Y)/(σ(X)·σ(Y)), which is 0/0. What are we to make of that?
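
Numerically, the answer is nothing useful:

# The degenerate case: X has zero variance, so the correlation is 0/0 and
# numpy returns nan (with a runtime warning).
import numpy as np

t = np.linspace(0, 10, 101)
X = np.ones_like(t)
Y = t                             # the integral of X, taking C = 0
print(np.corrcoef(X, Y)[0, 1])    # nan: the correlation is undefined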

This pathology can also arise from feedback. Consider a thermostat that controls a heater that operates in two states (on or off). If the heater is fast, and the thermostat is sensitive with a narrow temperature band, then σ(temperature) will be near 0, even though the heater is cycling with σ(heater state)>0.
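
A quick simulation of that thermostat (all values illustrative) shows how badly the observed relationship understates the heater’s actual effect:

# A numerical sketch of the thermostat case: tight feedback control crushes
# the variance of temperature, so the observed heater-temperature relationship
# reveals almost none of the heater's true effect. All values are illustrative.
import numpy as np

dt, t_end = 0.01, 50.0
setpoint, band = 20.0, 0.1             # deadband thermostat, degrees C
heat_gain, loss, t_amb = 8.0, 0.5, 10.0

temp, heater = 20.0, 0.0
temps, states = [], []
for _ in range(int(t_end / dt)):
    if temp < setpoint - band / 2:
        heater = 1.0
    elif temp > setpoint + band / 2:
        heater = 0.0
    temp += dt * (heat_gain * heater - loss * (temp - t_amb))
    temps.append(temp)
    states.append(heater)
temps, states = np.array(temps), np.array(states)

print("std(temperature)  =", round(temps.std(), 3))    # small: feedback suppresses variance
print("std(heater state) =", round(states.std(), 3))   # the heater keeps cycling
print("observed on-off temperature gap =",
      round(temps[states == 1].mean() - temps[states == 0].mean(), 3))
print("true steady-state effect of the heater =", heat_gain / loss, "degrees")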

AI Chatbots on Causality

Having recently encountered some major causality train wrecks, I got curious about LLM “understanding” of causality. If AI chatbots are trained on the web corpus, and the web doesn’t “get” causality, there’s no reason to think that AI will make sense either.

TL;DR: ChatGPT and Bing utterly fail this test, for reasons that are evident in Google Bard’s surprisingly smart answer.

ChatGPT: FAIL

Bing: FAIL

Google Bard: PASS

Google gets strong marks for mentioning a bunch of reasons to expect that we might not find a correlation, even though x is known to cause y. I’d probably only give it a B+, because it neglected integration and feedback, but it’s a good answer that properly raises lots of doubts about simplistic views of causality.