Coupled Catastrophes

I ran across this cool article on network dynamics, and thought the model would be an interesting application for Ventity:

Coupled catastrophes: sudden shifts cascade and hop among interdependent systems

Charles D. Brummitt, George Barnett and Raissa M. D’Souza


An important challenge in several disciplines is to understand how sudden changes can propagate among coupled systems. Examples include the synchronization of business cycles, population collapse in patchy ecosystems, markets shifting to a new technology platform, collapses in prices and in confidence in financial markets, and protests erupting in multiple countries. A number of mathematical models of these phenomena have multiple equilibria separated by saddle-node bifurcations. We study this behaviour in its normal form as fast–slow ordinary differential equations. In our model, a system consists of multiple subsystems, such as countries in the global economy or patches of an ecosystem. Each subsystem is described by a scalar quantity, such as economic output or population, that undergoes sudden changes via saddle-node bifurcations. The subsystems are coupled via their scalar quantity (e.g. trade couples economic output; diffusion couples populations); that coupling moves the locations of their bifurcations. The model demonstrates two ways in which sudden changes can propagate: they can cascade (one causing the next), or they can hop over subsystems. The latter is absent from classic models of cascades. For an application, we study the Arab Spring protests. After connecting the model to sociological theories that have bistability, we use socioeconomic data to estimate relative proximities to tipping points and Facebook data to estimate couplings among countries. We find that although protests tend to spread locally, they also seem to ‘hop’ over countries, like in the stylized model; this result highlights a new class of temporal motifs in longitudinal network datasets.

Ventity makes sense here because the system consists of a network of coupled states, and Ventity makes it easy to represent a wide variety of network architectures. Accordingly, there are two types of entities in the system: “Nodes” and “Couplings.”

The Node entity type contains a single state (X), with local feedback, as well as a remote influence from Coupling and a few global parameters referenced from the Model entity:

A Coupling is simply a reference from one Node to another, with a strength parameter:
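
To make the structure concrete, here is roughly what the Node’s equations look like, written as Vensim-style pseudocode rather than actual Ventity source. The cubic “local feedback” term is my reading of the paper’s saddle-node normal form, and the coupling is assumed to enter additively, shifting the Node’s bifurcation point; check the paper and the model for the exact form:

X = INTEG( change in X , initial X )
change in X = const a + const b*X - X^3 + coupling influence
coupling influence = SUM of strength * source Node X , over all Couplings pointing at this Node   {pseudocode for the Ventity collection sum}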

If you don’t create any Couplings, the Nodes run standalone, as in Section 2.1 of the paper. You can use that to see how the bistable dynamics of X create a tipping point, by running a set of nodes with different initial conditions:

By increasing the global Model.const a, you can induce a bifurcation that destabilizes the lower branch of the system, so that all trajectories tend to increase:

Section 2.2 of the paper illustrates a master-slave system, with two Nodes and a single Coupling by which the master Node influences the Slave. I actually set this up with a single master driving multiple slaves, where each slave has a different initial X. Then increasing the master’s initial X spills over to shift the stability of Slave 4’s initial state:

In Section 2.3, things get really interesting, with cascade hopping. In this scenario, there are three coupled Nodes, X -> Y -> Z. X (blue) is disturbed exogenously by changing its local const a parameter at time 8, causing it to transition from a stable value near 1 to about -1.2. This in turn induces a slight shift in Y’s state (red), but due to weak coupling that’s not enough to destabilize Y. However, the small shift in Y is enough to nudge Z out of its stable state, causing a sudden transition to -1.2 around time 18.

Consider what this would do to any simple correlation-based thinking, or a regression model. X has clearly caused a catastrophic change in Z, but without much of an obvious change in Y. In the presence of noise, it would be easy to conclude that this was all a coincidence. (If you harbor any doubt about the causality, just set Node X’s const a chg to zero and see what happens.)

I encourage you to take a look at the original paper – it has some nice phase diagrams and goes on to consider some interesting applications. I think the same structure could be used to implement another interesting network dynamics paper: State-dependent effective interactions in oscillator networks through coupling functions with dead zones. And if you like the topic, Network Catastrophe: Self-Organized Patterns Reveal both the Instability and the Structure of Complex Networks has more interesting data-centric applications.

An interesting extension of this model would be to generalize to larger networks, by modifying the input data or using actions to generate random networks.

The model:

Opioid Epidemic Dynamics

I ran across an interesting dynamic model of the opioid epidemic that makes a good target for replication and critique:

Prevention of Prescription Opioid Misuse and Projected Overdose Deaths in the United States

Qiushi Chen; Marc R. Larochelle; Davis T. Weaver; et al.

Importance  Deaths due to opioid overdose have tripled in the last decade. Efforts to curb this trend have focused on restricting the prescription opioid supply; however, the near-term effects of such efforts are unknown.

Objective  To project effects of interventions to lower prescription opioid misuse on opioid overdose deaths from 2016 to 2025.

Design, Setting, and Participants  This system dynamics (mathematical) model of the US opioid epidemic projected outcomes of simulated individuals who engage in nonmedical prescription or illicit opioid use from 2016 to 2025. The analysis was performed in 2018 by retrospectively calibrating the model from 2002 to 2015 data from the National Survey on Drug Use and Health and the Centers for Disease Control and Prevention.

Conclusions and Relevance  This study’s findings suggest that interventions targeting prescription opioid misuse such as prescription monitoring programs may have a modest effect, at best, on the number of opioid overdose deaths in the near future. Additional policy interventions are urgently needed to change the course of the epidemic.

The model is fully described in supplementary content, but unfortunately it’s implemented in R and described in Greek letters, so it can’t be run directly:

That’s actually OK with me, because I think I learn more from implementing the equations myself than I do if someone hands me a working model.

While R gives you access to tremendous tools, I think it’s not a good environment for designing and testing dynamic models of significant size. You can’t easily inspect everything that’s going on, and there’s no easy facility for interactive testing. So, I was curious whether that would prove problematic in this case, because the model is small.

Here’s what it looks like, replicated in Vensim:

It looks complicated, but it’s not complex. It’s basically a cascade of first-order delay processes: the outflow from each stock is simply a constant fraction of the stock per unit time. There are no large-scale feedback loops.
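
To give a flavor of what a cascade of first-order delays looks like, here is a stylized two-stage fragment in Vensim notation. The stock names and fractions are purely illustrative and are not the actual variables from the Chen et al. model:

Misusing Population = INTEG( initiation - progression to disorder - cessation , initial misusing population ) ~ people
progression to disorder = Misusing Population * progression fraction ~ people/Year
cessation = Misusing Population * cessation fraction ~ people/Year
Disordered Population = INTEG( progression to disorder - overdose deaths - remission , initial disordered population ) ~ people
overdose deaths = Disordered Population * overdose death fraction ~ people/Year
remission = Disordered Population * remission fraction ~ people/Year

Each outflow is just a stock times a constant fraction, so each stage behaves as a first-order delay, and nothing feeds back from downstream stocks to upstream flows.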

Towards Principles for Subscripting in Models

For many aspects of models, we have well-accepted rules that define good practice. All physical stocks must have first-order negative feedback on the outflow. Normalize your lookup tables. Thou shalt balance units.

In some areas, the rules haven’t been written down. Subscripts (arrays) are the poor stepchild of dynamic models. They simply didn’t exist when simulation languages emerged, and no one really thinks about them much. They’re treated as a utility, like memory allocation in C, rather than as a key part of the model architecture. I think that needs to change, so this post is an attempt to write down some guidance. Consider it a work in progress; I’d be interested in your thoughts.
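
For concreteness, here is the kind of structure I mean, in Vensim notation; the region subscript and the variable names are just an example:

region : north, south, east, west
Population[region] = INTEG( births[region] - deaths[region] , initial population[region] ) ~ people
births[region] = Population[region] * birth rate[region] ~ people/Year
deaths[region] = Population[region] / average lifetime[region] ~ people/Year

The question is not whether you can write this, but whether “region” is the right dimension for the problem, and whether every element of the array really obeys the same operational logic.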

What’s the Question?

There are really two kinds of questions:

  • How much detail do you want in your model? This is just the age-old problem of aggregation, which I won’t rehash in this post.
  • How do the subscripts you’re using contribute to a transparent, operational description of the system?

It’s the latter I’m concerned with. In essence: how do you implement a given level of detail so that the array structure makes sense?

Modeling Investigations

538 had this cool visualization of the Russia investigation in the context of Watergate, Whitewater, and other historic investigations.

The original is fun to watch, but I found it hard to understand the time dynamics from the animation. For its maturity (660 days and counting), has the Russia investigation yielded more or fewer indictments than Watergate (1492 days total)? Are the indictments petering out, or accelerating?

A simplified version of the problem looks a lot like an infection model (a.k.a. logistic growth or Bass diffusion):
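
A minimal sketch of that structure in Vensim notation (the names and the Bass-style seeding term are mine, not 538’s):

Potential Crooks = INTEG( - indictment rate , total crooks ) ~ people
Indicted = INTEG( indictment rate , 0 ) ~ people
indictment rate = ( seed rate + contagion rate * Indicted ) * Potential Crooks / total crooks ~ people/Month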

So, the interesting question is whether we can – from partway through the history of the system – estimate the ultimate number of indictments and convictions it will yield. This is fraught with danger, especially when you have no independent information about the “physics” of the system, in particular the population of potential crooks to be caught.

Biological Dynamics of Stress Response

At ISDC 2018, we gave the Dana Meadows Award for best student paper to Gizem Aktas, for Modeling the Biological Mechanisms that Determine the Dynamics of Stress Response of the Human Body (with Yaman Barlas). This is a very interesting paper that elegantly synthesizes literature on stress, mood, and hormone interactions. I plan to write more about it later, but for the moment, here’s the model for your exploration.

The dynamic stress response of the human body to stressors is produced by nonlinear interactions among its physiological sub-systems. The evolutionary function of the response is to enable the body to cope with stress. However, depending on the intensity and frequency of the stressors, the mechanism may lose its function and the body can go into a pathological state. Three subsystems of the body play the most essential role in the stress response: endocrine, immune and neural systems. We constructed a simulation model of these three systems to imitate the stress response under different types of stress stimuli. Cortisol, glucocorticoid receptors, proinflammatory cytokines, serotonin, and serotonin receptors are the main variables of the model. Using both qualitative and quantitative physiological data, the model is structurally and behaviorally well-validated. In subsequent scenario runs, we have successfully replicated the development of major depression in the body. More interestingly, the model can present quantitative representation of some very well acknowledged qualitative hypotheses about the stress response of the body. This is a novel quantitative step towards the comprehension of stress response in relation with other disorders, and it provides us with a tool to design and test treatment methods.

The original is a STELLA model; here I’ve translated it to Vensim and made some convenience upgrades. I used the forthcoming XMILE translation in Vensim to open the model. You get an ugly diagram (due to platform differences and XMILE’s lack of support for flow-clouds), but it’s functional enough to browse. I cleaned up the diagrams and moved them into multiple views to take better advantage of Vensim’s visual approach.

The model ran right away, though I had to add one MAX statement to handle a uniflow (not supported in Vensim, and something I remain allergic to). There’s actually an important lesson on model replication and calibration in this.

When I first translated the model, I ran a few scenarios, using the comprehensive replication instructions in the supplemental material for the paper. I built up a Vensim command script to make it easy to replicate all the scenarios in the paper. To do that, I had to modify the equations a bit, so that manual equation editing (in STELLA) could be replaced by automatic parameter changes.

Then I ran my script and eyeballed a few graphs. Things looked pretty good:

The same, right? Not so fast! If you look closely, you’ll find that the Vensim version (bottom) has 9 peaks instead of 10, due to my replacement of a cascade of IF … ELSE test inputs with a simpler PULSE TRAIN. When you fix the count, there are still issues, because the duration parameter for each pulse (0.2) is not an integral multiple of the TIME STEP. (Incidentally, differences arising from PULSE implementations are tricky – see Yutaka Takahashi’s poster from ISDC 2018).
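
For reference, the replacement structure is something like the following (names and numbers are illustrative, not the model’s actual variables). The point is that the pulse duration and spacing should be integer multiples of TIME STEP, so every pulse delivers the same area:

stress input = episode height * PULSE TRAIN( first episode time , episode duration , episode interval , final episode time )
episode duration = 0.25   {illustrative; choose an integer multiple of TIME STEP, which the original 0.2 generally is not}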

It took me several iterations to work out what was going wrong. I found that, to really verify that the translation (plus my initially erroneous upgrades) was OK, I had to export a run from STELLA, import it as a dataset in Vensim, and compare behavior hour by hour. That’s how I discovered the subtle but important uniflow difference.

The fact that tiny differences in test input implementations matter highlights the extreme numerical sensitivity of the model. This is a feature, not a bug. It arises from positive feedback that creates sensitive thresholds in stress response: 5% more episodic stress can be the difference between routine recovery and total collapse.

For example, here’s a sensitivity experiment with external stress at 10, 20, 30, 40, 50 & 60 units:

Notice that for external stress <= 40, recovery is quick – hours to days. But somewhere above 40 is a nonlinear threshold, beyond which recovery takes weeks.

This .zip archive contains:

  • An updated source model (.stmx) from the author, used for the translation.
  • The translated model (.mdl and .vpm). This version won’t work in PLE because it uses macros, but you can use the free Model Reader to run it.
  • Command scripts for replicating the paper’s scenarios, plus the vector of stress levels above.


Update: StressResponseModel_converted fixes a unit error in a test input (my mistake) – this version is closest to the original in the paper.

Update 2: StressResponseModel_converted has an improved control panel and runs 4x faster. It departs from the original to improve sensitivity analysis capability and pulse test stability, but remains dynamically identical (as far as I can determine).

The original paper and supplementary material should be in the conference submission system.

Stay tuned for more on this topic! Here’s a detailed critique & analysis.

Dynamic Cohorts

This is the model library entry for my ISDC 2017 plenary paper with Larry Yeager on dynamic cohorts in Ventity:

Dynamic cohorts: a new approach to managing detail

While it is desirable to minimize the complexity of a model, some problems require the detailed representation of heterogeneous subgroups, where nonlinearities prevent aggregation or explicit chronological aging is needed. It is desirable to have a representation that avoids burdening the modeler or user computationally or cognitively. Eberlein & Thompson (2013) propose continuous cohorting, a novel solution to the cohort blending problem in population modeling, and test it against existing aging chain and cohort-shifting approaches. Continuous cohorting prevents blending of ages and other properties, at some cost in complexity.

We propose another new solution, dynamic cohorts, that prevents blending with a comparatively low computational burden. More importantly, the approach simplifies the representation of distinct age, period and cohort effects and representation of dynamics other than the aging process, like migration and attribute coflows. By encapsulating the lifecycle of a representative cohort in a single entity, rather than dispersing it across many states over time, it makes it easier to develop and explain the model structure.

Paper: Dynamic Cohorts P1363.pdf

Models: Dynamic Cohorts

Presentation slides: Dynamic Cohorts Fid Ventana v2b.pdf

I’ve previously written about this here.

Thyroid Dynamics

Quite a while back, I posted about the dynamics of the thyroid and its interactions with other systems.

That was a conceptual model; this is a mathematical model, a Vensim replication of:

Marisa Eisenberg, Mary Samuels, and Joseph J. DiStefano III

Extensions, Validation, and Clinical Applications of a Feedback Control System Simulator of the Hypothalamo-Pituitary-Thyroid Axis

Background: We upgraded our recent feedback control system (FBCS) simulation model of human thyroid hormone (TH) regulation to include explicit representation of hypothalamic and pituitary dynamics, and updated TH distribution and elimination (D&E) parameters. This new model greatly expands the range of clinical and basic science scenarios explorable by computer simulation.

Methods: We quantified the model from pharmacokinetic (PK) and physiological human data and validated it comparatively against several independent clinical data sets. We then explored three contemporary clinical issues with the new model: …

… These results highlight how highly nonlinear feedback in the hypothalamic-pituitary-thyroid axis acts to maintain normal hormone levels, even with severely reduced TSH secretion.

Volume 18, Number 10, 2008
DOI: 10.1089/thy.2007.0388

This version is a superset of the authors’ earlier 2006 model, and closely reproduces it with a few parameter changes.

L-T4 Bioequivalence and Hormone Replacement Studies via Feedback Control Simulations

Volume 16, Number 12, 2006

The model is used in:

TSH-Based Protocol, Tablet Instability, and Absorption Effects on L-T4 Bioequivalence

Volume 19, Number 2, 2009
DOI: 10.1089/thy.2008.0148

This works with any Vensim version:

thyroid 2008 d.mdl

thyroid 2008 d.vpm

Discrete Time Stinks

Discrete time modeling is often convenient, occasionally right and frequently treacherous.

You often see models expressed in discrete time, like Samuelson’s multiplier-accelerator model. The same notation is ubiquitous in statistics, economics, ABM and many other areas.
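
In its standard textbook form, Samuelson’s model is:

Y[t] = C[t] + I[t] + G[t]
C[t] = alpha*Y[t-1]
I[t] = beta*( C[t] - C[t-1] )

Eliminating C and I gives Y[t] = G[t] + alpha*(1+beta)*Y[t-1] - alpha*beta*Y[t-2], which is why output depends on both Y(t-1) and Y(t-2), a point that comes up again near the end of this post.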

Samuelson multiplier-accelerator schematic

So, what’s the problem?

  1. Most of the real world does not happen in discrete time. A few decisions, like electric power auctions, happen at regular intervals, but those are the exception. Most of the time we’re modeling on long time scales relative to underlying phenomena, and we have lots of heterogeneous agents or particles or whatever, with diverse delays and decision intervals.
  2. Discrete time can be artificially unstable. A stable continuous system can be made unstable by simulating at too large a discrete interval. A discrete system may oscillate, where its continuous equivalent would not. (A tiny example follows this list.)
  3. You can’t easily test for the effect of the time step on stability. Q: If your discrete time model is running with one Excel row per interval, how will you test an interval that’s 1/2 or 1/12 as big for comparison? A: You won’t. Even if it occurs to you to try, it would be too much of a pain.
  4. The measurement interval isn’t necessarily the relevant dynamic time scale. Often the time step of a discrete model derives from the measurement interval in the data. There’s nothing magic about that interval, with respect to how the system actually works.
  5. The notions of stocks and flows and system state are obscured. (See the diagram from the Samuelson model above.) Lack of stock flow consistency can lead to other problems, like failure to conserve physical quantities.
  6. Units are ambiguous. This is a consequence of #5. When states and their rates of change appear on an equal footing in an equation, it’s hard to work out what’s what. Discrete models tend to be littered with implicit time constants and other hidden parameters.
  7. Most delays aren’t discrete. In the Samuelson model, output depends on last year’s output. But why not last week’s, or last century’s? And why should a delay consist of precisely 3 periods, rather than be distributed over time? (This critique applies to some Delay Differential Equations, too.)
  8. Most logic isn’t discrete. When time is marching along merrily in discrete lockstep, it’s easy to get suckered into discrete thinking: “if the price of corn is lower than last year’s price of corn, buy hogs.” That might be a good model of one farmer, but it lacks nuance, and surely doesn’t represent the aggregate of diverse farmers. This is not a fault of discrete time per se, but the two often go hand in hand. (This is one of many flaws in the famous Levinthal & March model.)
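
On point 2, the simplest demonstration is a first-order decay process, dx/dt = -x/tau, stepped forward at a discrete interval dt:

x[t] = x[t-1] + dt*( -x[t-1]/tau ) = x[t-1]*( 1 - dt/tau )

For dt < tau this decays smoothly toward zero; for tau < dt < 2*tau it overshoots and oscillates; and for dt > 2*tau the oscillation grows without bound, even though the continuous system is unconditionally stable.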

Certainly, there are cases that require a discrete time simulation (here’s a nice chapter on analysis of such systems). But most of the time, a continuous approach is a better starting point, as Jay Forrester wrote 50 years ago. The best approach is sometimes a hybrid, with an undercurrent of continuous time for the “physics” of the model, but with measurement processes represented by explicit sampling at discrete intervals.

So, what if you find a skanky discrete time model in your analytic sock drawer? Fear not, you can convert it.

Consider the adstock model, representing the cumulative effects of advertising:

Ad Effect = f(Adstock)
Adstock(t) = Advertising(t) + k*Adstock(t-1)

Notice that k is related to the lifetime of advertising, but because it’s relative to the discrete interval, it’s misleadingly dimensionless. Also, the interval is fixed at 1 time unit, and can’t be changed without scaling k.
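
If you want to back out the implied lifetime, the mapping is simple (writing “data interval” for the discrete step, one week in this example):

k = EXP( - data interval / Ad Life )   {equivalently, Ad Life = - data interval / LN( k ) , or roughly data interval/(1 - k) for k near 1}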

Also notice that the ad effect has an instantaneous component. Usually there’s some delay between ad exposure and action. That delay might be negligible in some cases, like in-app purchases, but it’s typically not negligible for in-store behavior.

You can translate this into Vensim lingo literally by using a discrete delay:

Adstock = Advertising + k*Previous Adstock ~ GRPs
Previous Adstock = DELAY FIXED( Adstock, Ad Life, 0 ) ~ GRPs
Ad Life = ... ~ weeks

That’s functional, but it’s not much of an improvement. Much better is to recognize that Adstock is (surprise!) a stock that changes over time:

Ad Effect = f(Adstock) ~ dimensionless
Adstock = INTEG( Advertising - Forgetting, 0 ) ~ GRPs
Advertising = ... ~ GRPs/week
Forgetting = Adstock / Ad Life ~ GRPs/week
Ad Life = ... ~ weeks

Now the ad life has a dimensioned real-world interpretation and you can simulate with whatever time step you need, independent of the parameters (as long as it’s small enough).

There’s one fly in the ointment: the instantaneous ad effect I mentioned above. That happens when, for example, the data interval is weekly, and ads released have some effect within their week of release – the Monday sales flyer drives weekend sales, for example.

There are two solutions for this:

  • The “cheat” is to include a bit of the current flow of advertising in the effective adstock, via a “current week effect” parameter (sketched after this list). This is a little tricky, because it locks you into the weekly time step. You can generalize that away at the cost of more complexity in the equations.
  • A more fundamental solution is to run the model at a finer time step than the data interval. This gives you a cleaner model, and you lose nothing with respect to calibration (in Vensim/Ventity at least).
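
A sketch of the cheat (the “current week effect” parameter is named in the bullet above; the unit-conversion constant is mine):

Effective Adstock = Adstock + current week effect * Advertising * one week ~ GRPs
Ad Effect = f( Effective Adstock ) ~ dimensionless
one week = 1 ~ weeks

The “one week” constant is really the data interval in disguise, which is where the lock-in to the weekly time step comes from.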

Occasionally, you’ll run into more than one delayed state on the right side of the equation, as with the inclusion of Y(t-1) and Y(t-2) in the Samuelson model (top). That generally signals either a delay with a complex structure (e.g., 2nd or higher order), or some other higher-order effect. Generally, you should be able to give a name and interpretation to these states (as with the construction of Y and C in the Samuelson model). If you can’t, don’t pull your hair out. It could be that the original is ill-formulated. Instead, think things through from scratch with stocks and flows in mind.

Nelson Rules

I ran across the Nelson Rules in a machine learning package. These are a set of heuristics used in statistical process control to detect out-of-control conditions in a measured process. Their inclusion felt a bit like navigating a 787 with a mechanical flight computer (which is a very cool device, by the way).

The idea is pretty simple. You have a time series of measurements, normalized to Z-scores, and therefore falling (most of the time) within plus or minus 3 standard deviations. The Nelson Rules provide a way to detect anomalies: drift, oscillation, high or low variance, etc. Rule 1, for example, is just a threshold for outlier detection: it fires whenever a measurement is more than 3 SD from the mean.

In the machine learning context, it seems strange to me to use these heuristics when more powerful tests are available. This is not unlike the problem of deciding whether a random number generator is really random. It’s fairly easy to determine whether it’s producing a uniform distribution of values, but what about cycles or other long-term patterns? I spent a lot of time working on this when we replaced the RNG in Vensim. Many standard tests are available. They’re not all directly applicable, but the thinking is.

In any case, I got curious how the Nelson rules performed in the real world, so I developed a test model.

This feeds a test input (Normally distributed random values, with an optional signal superimposed) into a set of accounting variables that track metrics and compare with the rule thresholds. Some of these are complex.
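
The test input is roughly the following, in Vensim; the shift parameters are placeholders for whatever anomaly you want to inject:

Measurement = RANDOM NORMAL( -6 , 6 , 0 , 1 , noise seed ) + Signal ~ dimensionless
Signal = shift size * STEP( 1 , shift time ) ~ dimensionless   {shift size = 1 reproduces the one-standard-deviation shift shown below}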

Rule 4, for example, looks for 14 points with alternating differences. That’s a little tricky to track in Vensim, where we’re normally more interested in continuous time. I tackle that with the following structure:

Difference = Measurement - SMOOTH( Measurement , TIME STEP )
Is Positive = IF THEN ELSE( Difference > 0 , 1 , -1 )
N Switched = INTEG(
    IF THEN ELSE( Is Positive > 0 :AND: N Switched < 0 ,
        ( 1 - 2*N Switched )/TIME STEP ,
    IF THEN ELSE( Is Positive < 0 :AND: N Switched > 0 ,
        ( -1 - 2*N Switched )/TIME STEP ,
        ( Is Positive - N Switched )/TIME STEP ) ) ,
    0 )
Rule 4 = IF THEN ELSE( ABS( N Switched ) > 14 , 1 , 0 )

There’s a trick here. To count alternating differences, we need to know (a) the previous count, and (b) whether the previous difference encountered was positive or negative. Above, N Switched stores both pieces of information in a single stock (INTEG). That’s possible because the count is discrete and positive, so we can overload the storage by giving it the sign of the previous difference encountered.

Thus, if the current difference is negative (Is Positive < 0) and the previous difference was positive (N Switched > 0), we (a) invert the sign of the count by subtracting 2*N Switched, and (b) augment the count, here by subtracting 1 to make it more negative.

Similar tricks are used elsewhere in the structure.

How does it perform? Surprisingly well. Here’s what happens when the measurement distribution shifts by one standard deviation halfway through the simulation:

There are a few false positives in the first 1000 days, but after the shift, there are many more detections from multiple rules.

The rules are pretty good at detecting a variety of pathologies: increases or decreases in variance, shifts in the mean, trends, and oscillations. The rules also have different false positive rates, which might be OK, as long as they catch nonoverlapping problems, and don’t have big differences in sensitivity as well. (The original article may have more to say about this – I haven’t checked.)

However, I’m pretty sure that I could develop some pathological inputs that would sneak past these rules. By contrast, I’m pretty sure I’d have a hard time sneaking anything past the NIST or Diehard RNG test suites.

If I were designing this from scratch, I’d use machine learning tools more directly – there are lots of tests for distributions, changes, trend breaks, oscillation, etc. that can be used online with a consistent likelihood interpretation and optimal false positive/negative tradeoffs.

Here’s the model:



The Ambiguity of Causal Loop Diagrams and Archetypes

I find causal loop diagramming to be a very useful brainstorming and presentation tool, but it falls short of what a model can do for you.

Here’s why. Consider the following pair of archetypes (Eroding Goals and Escalation, from wikipedia):

Eroding Goals and Escalation archetypes

Archetypes are generic causal loop diagram (CLD) templates, with a particular behavior story. The Escalation and Eroding Goals archetypes have identical feedback loop structures, but very different stories. So, there’s no unique mapping from feedback loops to behavior. In order to predict what a set of loops is going to do, you need more information.

Here’s an implementation of Eroding Goals:

Notice several things:

  • I had to specify where the stocks and flows are.
  • “Actions to Improve Goals” and “Pressure to Adjust Conditions” aren’t well defined (I made them proportional to “Gap”).
  • Gap is not a very good variable name.
  • The real world may have structure that’s not mentioned in the archetype (indicated in red).

Here’s Escalation:

The loop structure is mathematically identical; only the parameterization is different. Again, the missing information turns out to be crucial. For example, if A and B start with the same results, there is no escalation – A and B results remain constant. To get escalation, you either need (1) A and B to start in different states, or (2) some kind of drift or self-excitation in decision making (green arrow above).

Even then, you may get different results. (2) gives exponential growth, which is the standard story for escalation. (1) gives escalation that saturates:

The Escalation archetype would be better if it distinguished explicit goals for A and B results. Then you could mathematically express the key feature of (2) that gives rise to arms races (see the sketch after these bullets):

  • A’s goal is x% more bombs than B
  • B’s goal is y% more bombs than A
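
For example (using the bomb metaphor from the bullets; the names, the x and y parameters, and the adjustment time are illustrative):

A Results = INTEG( ( A Goal - A Results ) / adjustment time , A initial results ) ~ bombs
A Goal = ( 1 + x ) * B Results ~ bombs
B Results = INTEG( ( B Goal - B Results ) / adjustment time , B initial results ) ~ bombs
B Goal = ( 1 + y ) * A Results ~ bombs

With x and y greater than zero, the goals chase each other upward and results grow without bound even if A and B start out equal; that is the exponential arms-race mode of (2).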

Both of these models are instances of a generic second-order linear model that encompasses all possible things a linear model can do:
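
In equation form, with my own parameter names, that generic model is just a pair of coupled first-order stocks:

X1 = INTEG( a11*X1 + a12*X2 , X1 initial )
X2 = INTEG( a21*X1 + a22*X2 , X2 initial )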

Notice that the first-order and second-order loops are disentangled here, which makes it easy to see the “inner” first order loops (which often contribute damping) and the “outer” second order loop, which can give rise to oscillation (as above) or the growth in the escalation archetype. That loop is difficult to discern when it’s presented as a figure-8.

Of course, one could map these archetypes to other figure-8 structures, like:

How could you tell the difference? You probably can’t, unless you consider what the stocks and flows are in an operational implementation of the archetype.

The bottom line is that the causal loop diagram of an archetype or anything else doesn’t tell you enough to simulate the behavior of the system. You have to specify additional assumptions. If the system is nonlinear or stochastic, there might be more assumptions than I’ve shown above, and they might be important in new ways. The process of surfacing and testing those assumptions by building a stock-flow model is very revealing.

If you don’t build a model, you’re in the awkward position of intuiting behavior from structure that doesn’t uniquely specify any particular mode. In doing so, you might be way ahead of non-systems thinkers approaching the same problem with a laundry list. But your ability to discover errors, incorporate data and find leverage is far greater if you can simulate.

The model: wikiArchetypes1b.mdl (runs in any version of Vensim)