Mental Models vs. Models in the Loop

Timothy Clancy, Saeed P. Langarudi and Raafat Zaini have an interesting new commentary in the SDR.

Never the strongest: reconciling the four schools of thought in system dynamics in the debate on quality

With the passing of Jay Forrester, the field of system dynamics exists at a similar crossroads. Debates of implicit, if not explicit, inheritance and future direction are already breaking out among competing generals. Who owns Forrester’s legacy? Will we proceed down the reference mode of the Macedonian and Mughal Empires—or will we instead seek an alternative reference mode of Alexandria: integration, reconciliation, and mutually recognized coexistence of different schools within the broader field of system dynamics?

We suggest the latter path—and that begins by recognizing at least four, if not more, distinct schools of thought on how to approach system dynamics and the study of complex systems. We believe these schools arise from differing mental models in the field and the consequences that arise in practice from these differences.

I haven’t really absorbed it yet, so I’ll refrain from direct comment, but it did spur me to finish off a draft of some similar thoughts on these questions.

I personally lean very much toward the hard science, data-driven side of the field: what the authors call the Empirical school of thought. But as a matter of policy, I lean toward a big tent view of the field that includes work with low model content (which I don’t equate with low quality).

I think the central tension in the debate was posed by JWF and others long ago – all the way back to Industrial Dynamics, really. In Some Basic Concepts in System Dynamics (2009), Forrester summarized:

The basic feedback loop in Figure 4 is too simple to represent real-world situations. But simple loops have more serious shortcomings—they are misleading and teach the wrong lessons. Most of our intuitive learning comes from very simple systems. The truths learned from simple systems are often completely opposite from the behavior of more complex systems. A person understands filling a water glass, as in Figure 3. But, if we go to a system that is only five times as complicated, as in Figure 5, intuition fails. A person cannot look at Figure 5 and anticipate the behavior of the pictured system.

Figure 5 from World Dynamics is five times more complicated than Figure 4 in the sense that it has five stocks—the rectangles in the figure. The figure shows how rapidly apparent complexity increases as more system stocks are added.

Mathematicians would describe Figure 5 as a fifth-order, nonlinear, dynamic system. No one can predict the behavior by studying the diagram or its underlying equations. Only by using computer simulation can the implied behavior be revealed.

I think the message is pretty clear here. To solve complex problems, you must formally simulate the system because mental simulations are treacherous. I’d go even one step further, and argue that it’s not sufficient to simulate the system once, figure out where the leverage point is, implement the solution, and toss out the model when you’re done. The simulation needs to become an ongoing part of the loop for model predictive control.
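To make that concrete, here’s a minimal sketch of model predictive control with a one-stock system. Everything in it – the model structure, parameters, and candidate policies – is a hypothetical illustration, not a recipe from Forrester; the point is just that the model gets re-consulted at every decision rather than discarded after a single analysis.

```python
# Minimal sketch: a simulation model used *in the loop* for control, rather
# than discarded after a one-off analysis. All names and parameters are
# hypothetical illustrations.

def project_cost(stock, inflow, horizon=20, dt=1.0, target=100.0):
    """Euler-integrate a one-stock system under a constant inflow policy
    and return the cumulative squared deviation from target."""
    cost = 0.0
    for _ in range(int(horizon / dt)):
        outflow = stock / 8.0                 # first-order drain, time constant 8
        stock += (inflow - outflow) * dt
        cost += (stock - target) ** 2 * dt
    return cost

def decide(stock, candidates=(0.0, 5.0, 10.0, 12.5, 15.0, 20.0)):
    """Model predictive control: simulate each candidate policy over the
    horizon and pick the one with the lowest projected cost."""
    return min(candidates, key=lambda u: project_cost(stock, u))

stock = 20.0
for t in range(40):
    u = decide(stock)                         # the model, consulted every step
    stock += (u - stock / 8.0) * 1.0          # the "real" system steps forward
    if t % 10 == 0:
        print(f"t={t:2d}  inflow={u:5.1f}  stock={stock:6.1f}")
```

In practice, of course, the “real” system is the world rather than the same equations, and the controller’s model would be recalibrated as new data arrives.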

If that’s the ideal, why settle for anything less? I think there are a number of possible answers.

  1. Even in a perfect world where it’s easy to construct the model-in-the-loop, you need buy-in from the participants in the system to implement the model, and that requires a skillset that’s quite distinct.
  2. While it’s true that no one can intuit the behavior of a 10th order system, there might be a lot of value in managing low-order components of the system that are amenable to mental simulation or simple decision rules. There might be two reasons for this:
    • The complex system is dominated by a few key parameters (as in sloppy systems; see the sensitivity sketch after this list).
    • Risking global suboptimization by improving locally is better than optimizing nothing (though this might be a matter of luck).
  3. Often no single stakeholder in the system has the resources or authority to implement needed changes. But exposing the connectivity of the system, even if you can’t predict exactly how it works, is sometimes enough to catalyze creation of higher-level structures that enable change in the future.
  4. Not everyone is, or wants to be, a modeler. Moreover some participants in the system may reject models, data, and pretty much everything else since the Enlightenment, but you still have to include them.
  5. The non-modelers, as participants in the system, hold key knowledge that the modelers need.
  6. A qualitative map of a system is a good start towards an eventual quantitative model.
  7. Not every problem is big enough to model.
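
To illustrate the “sloppy systems” point flagged in #2, here’s a minimal sensitivity sketch. The model and parameter values are invented placeholders; the method is the point: perturb each parameter a little, and see whether a few elasticities dominate the rest by orders of magnitude.

```python
# Minimal sketch of the "sloppy systems" idea in reason #2: rank model
# parameters by one-at-a-time sensitivity. The model and parameter values
# are hypothetical placeholders, not a real SD model.
import math

def output(p):
    """Toy model output, deliberately dominated by p['a'] and p['b']."""
    return (10 * p["a"] + 3 * p["b"]
            + 0.1 * math.sin(p["c"]) + 0.01 * p["d"] * p["e"])

base = {"a": 1.0, "b": 2.0, "c": 0.5, "d": 1.5, "e": 0.8}
y0 = output(base)

sensitivities = {}
for name, value in base.items():
    perturbed = dict(base)
    perturbed[name] = value * 1.01             # 1% perturbation
    # Normalized (elasticity-like) sensitivity: %change in output per %change in parameter
    sensitivities[name] = abs((output(perturbed) - y0) / y0) / 0.01

for name, s in sorted(sensitivities.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.4f}")
```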

I’m sure you can think of more. I think these are good reasons to embrace non-model-based work on systems, as long as one refrains from making strong predictions about behavior from incomplete descriptions of structure. Fortunately that leaves a lot of interesting things to think about.

I think the opposite perspective – that nothing is worth doing without a model and data – requires some counterexamples. Are there instances in which a group mapping exercise, playing a dynamic game, or engaging in cross-functional dialog led to reduced performance? I’m not aware of good examples of this, and certainly not of good diagnoses of the outcomes. Attribution in complex systems is notoriously difficult. I think this suggests that we need stronger links to the evaluation research community, because we don’t really know what works and what doesn’t. We already have some strength in this area from the dynamic decision making experiment thread of SD, but … physician, heal thyself.

There is one thing that troubles me though, just beyond the boundaries of our field. It’s climate policy (and related global issues). Most climate policy advocates are in some sense systems thinkers. Many build nice diagrams or use other systemic tools. If you don’t care about systems, it’s hard to see why you’d care about climate to begin with.

Yet … it seems that a substantial fraction of people who are pro-climate policy favor policies that are counterproductive or insufficient. They like low-carbon fuel standards that are unstable, inefficient, and can even increase emissions. They like standards that allocate more property rights to bigger polluters, or simply make it harder to change. They like to impose constraints on new fossil supply that work exactly like OPEC to increase prices and profits for incumbent producers. They subsidize EVs and solar, increasing the incentive to consume energy and congest roads, with benefits accruing to the rich who can afford the capital outlay.

What this means is that my reason #2, “Risking global suboptimization by improving locally is better than optimizing nothing,” isn’t working out too well. I think this is exactly the kind of counterintuitive behavior of social systems that JWF was referring to. I don’t believe you can sort these things out with CLDs or other qualitative methods, except perhaps when they are used as explanatory tools for underlying formal models.

I think the bottom line is that, inside the big tent, the tall pole must remain construction and validation of robust behavioral dynamic models.

Feedback is Interdisciplinary

Quite a while ago, I wrote about modeling the STEM workforce:

An integrated model needs three things: what, how, and why. The “what” is the state of the system – stocks of students, workers, teachers, etc. in each part of the system. Typically this is readily available – Census, NSF and AAAS do a good job of curating such data. The “how” is the flows that change the state. There’s not as much data on this, but at least there’s good tracking of graduation rates in various fields, and the flows actually integrate to the stocks. Outside the educational system, it’s tough to understand the matrix of flows among fields and economic sectors, and surprisingly difficult even to get decent measurements of attrition from a single organization’s personnel records. The glaring omission is the “why” – the decision points that govern the aggregate flows. Why do kids drop out of science? What attracts engineers to government service, or the finance sector, or leads them to retire at a given age? I’m sure there are lots of researchers who know a lot about these questions in small spheres, but there’s almost nothing about the “why” questions that’s usable in an integrated model.
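As a sketch of that what/how/why skeleton, the fragment below wires invented stocks (“what”), flows (“how”), and a placeholder behavioral decision rule (“why”) into a toy pipeline. None of the numbers are real; the structure is the point.

```python
# Minimal sketch of the what/how/why structure described above, with
# invented stocks, rates, and a placeholder "why" decision rule.

def attractiveness_of_science(wage_ratio):
    """The 'why': a hypothetical behavioral rule mapping relative wages
    to the fraction of graduates who stay in STEM."""
    return min(1.0, max(0.0, 0.5 * wage_ratio))

# The "what": stocks describing the state of the system
students, workers = 1000.0, 5000.0

dt = 1.0
for year in range(10):
    # The "how": flows that change the state
    graduation = students / 4.0                  # 4-year pipeline
    stay_rate = attractiveness_of_science(1.2)   # the "why", evaluated here
    entering = graduation * stay_rate
    attrition = workers / 20.0                   # 20-year average career
    enrollment = 260.0                           # exogenous in this toy

    students += (enrollment - graduation) * dt
    workers += (entering - attrition) * dt

print(f"students={students:.0f}, workers={workers:.0f}")
```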

I think the current situation is a result of practicality rather than a fundamental philosophical preference for analysis over synthesis. It’s just easier to create, fund and execute standalone micro research than it is to build integrated models.

According to Jay Forrester, Gordon Brown said it much more succinctly:

The message is in the feedback, and the feedback is inherently interdisciplinary.

Sources of Information for Modeling

The traditional picture of information sources for modeling is a funnel. For example, in Some Basic Concepts in System Dynamics (2009), Forrester showed:

I think the diagram, or at least the concept, is much older than that.

However, I think the landscape has changed a lot, with more to come. Generally, the mental database hasn’t changed too much, but the numerical database has grown a lot. The funnel isn’t 1-dimensional, so the relationships have changed on some axes, but not so much on others.

Notionally, I’d propose that the situation is something like this:

The mental database is still king for variety of concepts and immediacy or salience of information (especially to the owner of the brain involved). And, it still has some weaknesses, like the inability to easily observe, agree on and quantify the constructs included in it. In the last few decades, the numerical database has extended its reach tremendously.

The proper shape of the plot is probably very domain specific. When I drew this, I had in mind the typical corporate or policy setting, where information systems contain only a fraction of the information necessary to understand the organizations involved. But in some areas, the reverse may be true. For example, in earth systems, datasets are vast and include measurements that human senses can’t even make, whereas personal experience – and therefore mental models – is limited and treacherous.

I think I’ve understated the importance of the written database in the diagram above – perhaps I’m missing a dimension characterizing its cumulative nature (compared to the transience of mental databases). There’s also an interesting evolution underway, as tools for text analysis and large language models (e.g., ChatGPT) are making the written database more numerical in nature.
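As a small illustration of that shift, standard text vectorization already turns a written corpus into a numeric matrix. This is generic scikit-learn usage, not anything SD-specific, and the documents are invented.

```python
# Tiny illustration of the written database turning numerical: TF-IDF
# vectorization converts documents into a term-weight matrix that
# numerical methods can consume.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "feedback loops dominate system behavior",
    "stocks accumulate flows over time",
    "feedback and stocks interact in complex systems",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)          # documents become a numeric matrix

print(X.shape)                              # (3 documents, N terms)
print(vectorizer.get_feature_names_out()[:5])
```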

Finally, I think there’s a missing database in the traditional framework, one of growing importance: the database of models themselves. Models have been around for a long time – especially in the physical sciences, but also corporate spreadsheets and the like. But increasingly, reasonably sophisticated models of organizational components are available as inputs to higher-level strategic modeling efforts.

How many “thinkings” are there?

In my recent Data & Uncertainty talk, one slide augmented Barry Richmond’s list of 7 critical modes of thinking:

The four new items are Statistical, Solution, Behavioral, and Complexity thinking. The focus on solutions and behavioral decision making has been around for a long time in SD (and BDM is really part of Barry’s Operational Thinking).

On the other hand, statistical and complexity elements are not particularly widespread in SD. Certainly elements of both have been around from the beginning, but others – like explicit treatment of measurement errors, process noise, and Bayesian SD (statistical), or spatial, agent, and network dynamics (complexity) – are new. Both perhaps deserve expansion into multiple concepts, but it would be neat to have a compact list of the most essential thinking modes across disciplines. What’s your list?
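For a concrete flavor of the statistical items – explicit measurement error and process noise – here’s a minimal scalar Kalman filter tracking a single stock. The dynamics and noise parameters are invented for illustration.

```python
# Minimal sketch of treating measurement error and process noise
# explicitly: a scalar Kalman filter tracking one stock. All dynamics
# and noise parameters are invented.
import random

a, q, r = 0.95, 0.5, 4.0        # state transition, process var, measurement var
true_stock, x_est, p_est = 50.0, 40.0, 10.0

random.seed(1)
for t in range(20):
    # "Real" system: decay plus process noise
    true_stock = a * true_stock + random.gauss(0, q ** 0.5)
    z = true_stock + random.gauss(0, r ** 0.5)   # noisy measurement

    # Predict
    x_pred = a * x_est
    p_pred = a * a * p_est + q
    # Update
    k = p_pred / (p_pred + r)                    # Kalman gain
    x_est = x_pred + k * (z - x_pred)
    p_est = (1 - k) * p_pred

print(f"true={true_stock:.1f}  estimate={x_est:.1f}  var={p_est:.2f}")
```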

Postdoc @ UofM in SD for Wildlife Management

This is an interesting opportunity. The topic is important, it’s a hard problem, and it’s interesting both on the techy side and the people/process side. You can get a little flavor of recent CWD work here. The team is smart & nice, and supports competent and ethical resource managers on the ground. Best of all, it’s in Montana, though you do have to be a Griz instead of a Cat.

That QR code (and this link) points to the full job listing.

Grand Challenges for Socioeconomic Systems Modeling

Following my big tent query, I was reexamining Axtell’s critique of SD aggregation and my response. My opinion hasn’t changed much: I still think Axtell’s critique of aggregation is very useful, albeit directed at a straw dog vision of SD that doesn’t exist, and that building bridges remains important.

As I was attempting to relocate the critique document, I ran across this nice article on Eight grand challenges in socio-environmental systems modeling.

Modeling is essential to characterize and explore complex societal and environmental issues in systematic and collaborative ways. Socio-environmental systems (SES) modeling integrates knowledge and perspectives into conceptual and computational tools that explicitly recognize how human decisions affect the environment. Depending on the modeling purpose, many SES modelers also realize that involvement of stakeholders and experts is fundamental to support social learning and decision-making processes for achieving improved environmental and social outcomes. The contribution of this paper lies in identifying and formulating grand challenges that need to be overcome to accelerate the development and adaptation of SES modeling. Eight challenges are delineated: bridging epistemologies across disciplines; multi-dimensional uncertainty assessment and management; scales and scaling issues; combining qualitative and quantitative methods and data; furthering the adoption and impacts of SES modeling on policy; capturing structural changes; representing human dimensions in SES; and leveraging new data types and sources. These challenges limit our ability to effectively use SES modeling to provide the knowledge and information essential for supporting decision making. Whereas some of these challenges are not unique to SES modeling and may be pervasive in other scientific fields, they still act as barriers as well as research opportunities for the SES modeling community. For each challenge, we outline basic steps that can be taken to surmount the underpinning barriers. Thus, the paper identifies priority research areas in SES modeling, chiefly related to progressing modeling products, processes and practices.

Elsawah et al., 2020

The findings are nicely summarized in Figure 1:



Not surprisingly, item #1 is … building bridges. This is why I’m more of a “big tent” guy. Is systems thinking a subset of system dynamics, or is system dynamics a subset of systems thinking? I think the appropriate answer is, “who cares?” Such disciplinary fence-building is occasionally informative, but more often needlessly divisive and useless for solving real-world problems.

It’s interesting to contrast this with George Richardson’s list for SD:

The potential pitfalls of our current successes suggest the time is right to sketch a view of outstanding problems in the field of system dynamics, to focus the attention of people in the field on especially promising or especially problematic issues. …

Understanding model behavior
Accumulating wise practice
Advancing practice
Accumulating results
Making models accessible
Qualitative mapping and formal modeling
Widening the base
Confidence and validation

Problems for the Future of System Dynamics
George P. Richardson

The contrasts here are interesting. Elsawah et al. are more interested in multiscale phenomena, data, uncertainty and systemic change (#5, which I think means autopoiesis, not merely change over time). I think these are all important and perhaps underappreciated priorities for the future of SD as well. Richardson, on the other hand, is more interested in validation and understanding of models, making progress cumulative, and widening participation in several ways.

More importantly, I think there’s really a lot of overlap – in fact I don’t think either party would disagree with anything on the other’s list. In particular, both support mixed qualitative and computational methods and increasing the influence of models.

I think Forrester’s view on influence is illuminating:

One hears repeatedly the question of how we in system dynamics might reach “decision makers.” With respect to the important questions, there are no decision makers. Those at the top of a hierarchy only appear to have influence. They can act on small questions and small deviations from current practice, but they are subservient to the constituencies that support them. This is true in both government and in corporations. The big issues cannot be dealt with in the realm of small decisions. If you want to nudge a small change in government, you can apply systems thinking logic, or draw a few causal loop diagrams, or hire a lobbyist, or bribe the right people. However, solutions to the most important sources of social discontent require reversing cherished policies that are causing the trouble. There are no decision makers with the power and courage to reverse ingrained policies that would be directly contrary to public expectations. Before one can hope to influence government, one must build the public constituency to support policy reversals.

System Dynamics—the Next Fifty Years
Jay W. Forrester

This neatly explains Forrester’s emphasis on education as a prerequisite for change. Richardson may agree, because this is essentially “widening the base” and “making models accessible”. My first impression was that Elsawah et al. were taking more of a “modeling priesthood” view of things, but in the end they write:

New kinds of interactive interfaces are also needed to help stakeholders access models, be it to make sense of simulation results (e.g. through monetization of values or other forms of impact representation), to shape assumptions and inputs in model development and scenario building, and to actively negotiate around inevitable conflicts and tradeoffs. The role of stakeholders should be much more expansive than a passive recipient from experts, and rather is a co-creator of models, knowledge and solutions.

Where I sit in post-covid America, with atavistic desires for simpler times that never existed looming large in politics, broadening the base for model participation seems more important than ever. It’s just a bit daunting to compare the long time constant on learning with the short fuse on some of the big problems we hope these grand challenges will solve.

Should System Dynamics Have a Big Tent or Narrow Focus?

In a breakout in the student colloquium at ISDC 2022, we discussed the difficulty of getting a paper accepted to the conference when its content is substantially a discrete event or agent simulation. Readers may know that I’m not automatically a fan of discrete models. Discrete time stinks. However, I think “discreteness” itself is not the enemy – it’s just that the way people approach some discrete models is bad, and continuous is often a good way to start.

On the flip side, there are certainly cases in which it’s sensible to start with a more granular, detailed model. In fact there are cases in which nonlinearity makes correct aggregation impossible in principle. This may not require going all the way to a discrete, agent model, but I think there’s a compelling case for the existence of systems in which the only good model is not a classic continuous time, aggregate, continuous value model. In between, there are also cases in which it may be practical to aggregate, but you don’t know how to do it a priori. In such cases, it’s useful to compare aggregate models with underlying detailed models to see what the aggregation rules should be, and to know where they break down.
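Here’s a toy illustration of that aggregation problem (a hypothetical setup, not anyone’s published example): a population of units with heterogeneous parameters and a nonlinear rate, compared against a single aggregate stock run at the mean parameter. By Jensen’s inequality the two diverge, and comparing them is exactly how you’d discover where the aggregation breaks down.

```python
# Toy illustration of why nonlinearity can break naive aggregation: a
# population of units with heterogeneous parameters and a nonlinear rate,
# versus one aggregate unit run at the mean parameter. (Jensen's
# inequality: the mean of a nonlinear function is not the function of the
# mean.) Setup is hypothetical.
import random

random.seed(0)
N, dt, steps = 1000, 0.1, 100
k = [random.uniform(0.1, 2.0) for _ in range(N)]   # heterogeneous rates

agents = [1.0] * N          # detailed model: N stocks
aggregate = float(N)        # aggregate model: one stock at the mean rate
k_mean = sum(k) / N

for _ in range(steps):
    # Nonlinear outflow: rate proportional to stock squared
    agents = [x - k[i] * x * x * dt for i, x in enumerate(agents)]
    aggregate -= k_mean * (aggregate / N) ** 2 * N * dt

print(f"detailed total:  {sum(agents):8.2f}")
print(f"aggregate total: {aggregate:8.2f}")
```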

I guess this is a long way of saying that I favor a “big tent” interpretation of System Dynamics. We should be considering models broadly, with the goal of understanding complex systems irrespective of methodological limits. We should go where operational thinking takes us, even if it’s not continuous.

This doesn’t mean that everything is System Dynamics. I think there are lots of things that should generally be excluded. In particular, anything that lacks dynamics – at a minimum pure stock accumulation, but usually also feedback – doesn’t make the cut. While I think that good SD is almost always at the intersection of behavior and physics, we sometimes have nonbehavioral models at the conference, i.e. models that lack humans, and that’s OK because there are some interesting opportunities for cross-fertilization. But I would exclude models that address human phenomena, but with the kind of delusional behavioral model that you get when you assume perfect information, as in much of economics.

I think a more difficult question is, where should we draw the line between System Dynamics and model-free Systems Thinking? I think we do want some model-free work, because it’s the gateway drug, and often influential. But it’s also high risk, in the sense that it may involve drawing conclusions about behavior from complex maps, where we’ve known from the beginning that no one can intuitively solve a 10th order system. I think preserving the core of the SD genome, that conclusions should emerge from replicable, transparent, realistic simulations, is absolutely essential.

Related:

Discrete Time Stinks

Dynamics of the last Twinkie

Bernoulli and Poisson are in a bar …

Modeling Discrete & Stochastic Events in Vensim

Finding SD conference papers

How to search the System Dynamics conference proceedings, and other places to find SD papers.

There’s been a lot of turbulence in the SD Society’s web organization, which is now greatly improved. One side effect is that the conference proceedings have moved: the proceedings page now points to a dedicated subdomain.

If you want to do a directed search of the proceedings for papers on a particular topic, the Google search syntax is now:

site:proceedings.systemdynamics.org topic

where ‘topic’ should be replaced by your terms of interest, as in

site:proceedings.systemdynamics.org stock flow

(This post was originally published in Oct. 2012; obsolete approaches have been removed for simplicity.)

Other places to look for papers include the System Dynamics Review and Google Scholar.