Believing Exponential Growth

Verghese: You were prescient about the shape of the BA.5 variant and how that might look a couple of months before we saw it. What does your crystal ball show of what we can expect in the United Kingdom and the United States in terms of variants that have not yet emerged?

Pagel: The other thing that strikes me is that people still haven’t understood exponential growth 2.5 years in. With the BA.5 or BA.3 before it, or the first Omicron before that, people say, oh, how did you know? Well, it was doubling every week, and I projected forward. Then in 8 weeks, it’s dominant.

It’s not that hard. It’s just that people don’t believe it. Somehow people think, oh, well, it can’t happen. But what exactly is going to stop it? You have to have a mechanism to stop exponential growth at the moment when enough people have immunity. The moment doesn’t last very long, and then you get these repeated waves.

You have to have a mechanism that will stop it evolving, and I don’t see that. We’re not doing anything different to what we were doing a year ago or 6 months ago. So yes, it’s still evolving. There are still new variants shooting up all the time.

At the moment, none of these look devastating; we probably have at least 6 weeks’ breathing space. But another variant will come because I can’t see that we’re doing anything to stop it.

Medscape, We Are Failing to Use What We’ve Learned About COVID, Eric J. Topol, MD; Abraham Verghese, MD; Christina Pagel, PhD
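Pagel's projection really is just arithmetic. As a sketch (the 0.5% initial share is an assumed illustrative value, not from the interview), a variant whose odds against the incumbent double every week follows a logistic share curve and passes 50% in about eight doublings:

```python
# Illustrative projection of a new variant doubling weekly (numbers assumed).
# If the variant's odds vs. the incumbent double each week, its share of
# cases follows a logistic curve: odds(t) = odds0 * 2**t.

def variant_share(weeks, initial_share=0.005, weekly_ratio=2.0):
    """Share of cases for a variant whose odds double each week."""
    odds0 = initial_share / (1 - initial_share)
    odds = odds0 * weekly_ratio ** weeks
    return odds / (1 + odds)

for w in range(10):
    print(f"week {w}: {variant_share(w):.1%}")
```

Starting from half a percent of cases, the variant is a small curiosity for the first month and then dominant a month later, which is why eyeballing current prevalence is so misleading.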

Reading Between the Lines on Forrester’s Perspective on Data

I like Jay Forrester’s “Next 50 Years” reflection, except for his perspective on data:

I believe that fitting curves to past system data can be misleading.

OK, I’ll grant that fitting “curves” – as in simple regressions – may be a waste of time, but that’s a bit of a strawdog. The interesting questions are about fitting good dynamic models that pass all the usual structural tests as well as fitting data.

Also, the mere act of fitting a simple model doesn’t mislead; the mistake is believing the model. Simple fits can be extremely useful for exploratory analysis, even if you later discard the theories they imply.

Having a model give results that fit past data curves may impress a client.

True, though perhaps this is not the client you’d hope to have.

However, given a model with enough parameters to manipulate, one can cause any model to trace a set of past data curves.

This is Von Neumann’s elephant. He’s right, but I roll my eyes every time I hear this repeated – it’s a true but useless statement, like all models are wrong. Nonlinear dynamic models that pass SD quality checks usually don’t have anywhere near the degrees of freedom needed to reproduce arbitrary behaviors.

Doing so does not give greater assurance that the model contains the structure that is causing behavior in the real system.

On the other hand, if the model can’t fit the data, why would you think it does contain the structure that is causing the behavior in the real system?

Furthermore, the particular curves of past history are only a special case. The historical curves show how the system responded to one particular combination of random events impinging on the system. If the real system could be rerun, but with a different random environment, the data curves would be different even though the system under study and its essential dynamic character are the same.

This is certainly true. However, the problem is that the particular curve of history is the only one we have access to. Every other description of behavior we might use to test the model is intuitively stylized – and we all know how reliable intuition in complex systems can be, right?

Exactly matching a historical time series is a weak indicator of model usefulness.

One must be alert to the possibility that adjusting model parameters to force a fit to history may push those parameters outside of plausible values as judged by other available information.

This problem is easily managed by assigning strong priors to known parameters in the model calibration process.
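One minimal way to implement this, sketched here with synthetic data (the first-order model, the prior mean, and the weights are all assumptions for illustration): add a quadratic penalty that anchors each parameter near its prior value, alongside the data-fit error.

```python
# Minimal sketch of calibration with a prior penalty (all data synthetic).
# The "model" is a first-order goal-seeking stock; we fit its time constant
# tau to data, with a quadratic penalty anchoring tau near a prior estimate
# so the fit can't push it to an implausible value.

def simulate(tau, goal=100.0, y0=0.0, dt=1.0, steps=20):
    y, out = y0, []
    for _ in range(steps):
        y += dt * (goal - y) / tau  # Euler step toward the goal
        out.append(y)
    return out

def loss(tau, data, prior_tau=5.0, prior_sd=1.0):
    sim = simulate(tau)
    fit_err = sum((s - d) ** 2 for s, d in zip(sim, data))
    prior_err = ((tau - prior_tau) / prior_sd) ** 2  # strong prior on tau
    return fit_err + prior_err

data = simulate(4.0)  # pretend these are observations
best = min((loss(t / 10, data), t / 10) for t in range(20, 100))
print("estimated tau:", best[1])
```

With informative data the fit term dominates and the estimate lands on the data-implied value; with weak data the prior keeps the parameter plausible, which is exactly the discipline Forrester is asking for.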

Historical data is valuable in showing the characteristic behavior of the real system and a modeler should aspire to have a model that shows the same kind of behavior. For example, business cycle studies reveal a large amount of information about the average lead and lag relationships among variables. A business-cycle model should show similar average relative timing. We should not want the model to exactly recreate a sample of history but rather that it exhibit the kinds of behavior being experienced in the real system.

As above, how do we know what kinds of behavior are being experienced, if we only have access to one particular history? I think this comment implies the existence of intuitive data from other exemplars of the same system. If that’s true, perhaps we should codify those as reference modes and treat them like data.

Again, yielding to what the client wants may be the easy road, but it will undermine the powerful contributions that system dynamics can make.

This is true in so many ways. The client often wants too much detail, or too many scenarios, or too many exogenous influences. Any of these can obstruct learning, or break the budget.

These pages are full of data-free conceptual models that I think are valuable. But I also love data, so I have a different bottom line:

  • Data and calibration by themselves can’t make the model worse – you’re adding additional information to the testing process, which is good.
  • However, time devoted to data and calibration has an opportunity cost, which can be very high. So, you have to weigh time spent on the data against time spent on communication, theory development, robustness testing, scenario exploration, sensitivity analysis, etc.
  • That time spent on data is not all wasted, because it’s a good excuse to talk to people about the system, may reveal features that no one suspected, and can contribute to storytelling about the solution later.
  • Data is also a useful complement to talking to people about the system. Managers say they’re doing X, but are they actually doing Y? Such cases may be revealed by structural problems, but calibration gives you a sharper lens for detecting them.
  • If the model doesn’t fit the data, it might be the data that is wrong or misinterpreted, and this may be an important insight about a measurement system that’s driving the system in the wrong direction.
  • If you can’t reproduce history, you have some explaining to do. You may be able to convince yourself that the model behavior replicates the essence of the problem, superimposed on some useless noise that you’d rather not reproduce. Can you convince others of this?

There are no decision makers…

A little gem from Jay Forrester:

One hears repeatedly the question of how we in system dynamics might reach “decision makers.” With respect to the important questions, there are no decision makers. Those at the top of a hierarchy only appear to have influence. They can act on small questions and small deviations from current practice, but they are subservient to the constituencies that support them. This is true in both government and in corporations. The big issues cannot be dealt with in the realm of small decisions. If you want to nudge a small change in government, you can apply systems thinking logic, or draw a few causal loop diagrams, or hire a lobbyist, or bribe the right people. However, solutions to the most important sources of social discontent require reversing cherished policies that are causing the trouble. There are no decision makers with the power and courage to reverse ingrained policies that would be directly contrary to public expectations. Before one can hope to influence government, one must build the public constituency to support policy reversals.

Climate Catastrophe Loops

PNAS has a new article on climate catastrophe mechanisms, focused on the social side, not natural tipping points. The article includes a causal loop diagram capturing some of the key feedbacks:

The diagram makes an unconventional choice: link polarity is denoted by dashed lines, rather than the usual + and – designations at arrowheads. Per the caption,

This is a causal loop diagram, in which a complete line represents a positive polarity (e.g., amplifying feedback; not necessarily positive in a normative sense) and a dotted line denotes a negative polarity (meaning a dampening feedback).

Does this new convention work? I don’t think so. It’s not less visually cluttered, and it makes negative links look tentative, though in fact there’s no reason for a negative link to have any less influence than a positive one. I think it makes it harder to assess loop polarity by following reversals from – links. There’s at least one goof: increasing ecosystem services should decrease food and water shortages, so that link should have negative polarity.

The caption also confuses link and loop polarity: “a complete line represents a positive polarity (e.g., amplifying feedback”. A single line is a causal link, not a loop, and therefore doesn’t represent feedback at all. (The rare exception might be a variable with a link back to itself, sometimes used to indicate self-reinforcement without elaborating on the mechanism.)
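The polarity-tracing that the dashed-line convention makes harder follows a simple rule: a loop's polarity is the product of its link polarities, so a loop with an even number of negative links is reinforcing and one with an odd number is balancing. A trivial sketch (the example links are hypothetical, not taken from the article's diagram):

```python
# Loop polarity as the product of link polarities (illustrative links only).
# Even number of negative links -> reinforcing; odd number -> balancing.
from math import prod

def loop_polarity(link_signs):
    """link_signs: +1 or -1 for each causal link around the loop."""
    return "reinforcing" if prod(link_signs) > 0 else "balancing"

# A hypothetical four-link loop with four negative links:
loop = [-1, -1, -1, -1]
print(loop_polarity(loop))
```

This is also why a single mislabeled link, like the ecosystem-services goof above, silently flips the apparent polarity of every loop passing through it.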

Nevertheless, I think this is a useful start toward a map of the territory. For me, it was generative, i.e. it immediately suggested a lot of related effects. I’ve elaborated on the original here:

  1. Food, fuel and water shortages increase pressure to consume more natural resources (biofuels, ag land, fishing for example) and therefore degrade biodiversity and ecosystem services. (These are negative links, but I’m not following the dash convention – I’m leaving polarity unlabeled for simplicity.) This is perverse, because it creates reinforcing loops worsening the resource situation.
  2. State fragility weakens protections that would otherwise protect natural resources against degradation.
  3. Fear of scarcity induces the wealthy to protect their remaining resources through rent seeking, corruption and monopoly.
  4. Corruption increases state fragility, and fragile states are less able to defend against further corruption.
  5. More rent seeking, corruption and monopoly increases economic inequality.
  6. Inequality, rent seeking, corruption, and scarcity all make emissions mitigation harder, eventually worsening warming.
  7. Displacement breeds conflict, and conflict displaces people.
  8. State fragility breeds conflict, as demagogues blame “the other” for problems and nonviolent conflict resolution methods are less available.
  9. Economic inequality increases mortality, because mortality is an extreme outcome, and inequality puts more people in the vulnerable tail of the distribution.

#6 is key, because it makes it clear that warming is endogenous. Without it, the other variables represent a climate-induced cascade of effects. In reality, I think we’re already seeing many of the tipping effects (resource and corruption effects on state fragility, for example) and the resulting governance problems are a primary cause of the failure to reduce emissions.

I’m sure I’ve missed a bunch of links, but this is already a case of John Muir’s idea, “When we try to pick out anything by itself, we find it hitched to everything else in the Universe.”

Unfortunately, most of the hitches here create reinforcing loops, which can amplify our predicament and cause catastrophic tipping events. I prefer to see this as an opportunity: we can run these vicious cycles in reverse, making them virtuous. Fighting corruption makes states less fragile, making mitigation more successful, reducing future warming and the cascade of side effects that would otherwise reinforce state fragility in the future. Corruption is just one of many places to start, and any progress is amplified. It’s just up to us to cross enough virtuous tipping points to get the whole system moving in a good direction.

Grand Challenges for Socioeconomic Systems Modeling

Following my big tent query, I was reexamining Axtell’s critique of SD aggregation and my response. My opinion hasn’t changed much: I still think Axtell’s critique of aggregation is very useful, albeit directed at a straw dog vision of SD that doesn’t exist, and that building bridges remains important.

As I was attempting to relocate the critique document, I ran across this nice article on Eight grand challenges in socio-environmental systems modeling.

Modeling is essential to characterize and explore complex societal and environmental issues in systematic and collaborative ways. Socio-environmental systems (SES) modeling integrates knowledge and perspectives into conceptual and computational tools that explicitly recognize how human decisions affect the environment. Depending on the modeling purpose, many SES modelers also realize that involvement of stakeholders and experts is fundamental to support social learning and decision-making processes for achieving improved environmental and social outcomes. The contribution of this paper lies in identifying and formulating grand challenges that need to be overcome to accelerate the development and adaptation of SES modeling. Eight challenges are delineated: bridging epistemologies across disciplines; multi-dimensional uncertainty assessment and management; scales and scaling issues; combining qualitative and quantitative methods and data; furthering the adoption and impacts of SES modeling on policy; capturing structural changes; representing human dimensions in SES; and leveraging new data types and sources. These challenges limit our ability to effectively use SES modeling to provide the knowledge and information essential for supporting decision making. Whereas some of these challenges are not unique to SES modeling and may be pervasive in other scientific fields, they still act as barriers as well as research opportunities for the SES modeling community. For each challenge, we outline basic steps that can be taken to surmount the underpinning barriers. Thus, the paper identifies priority research areas in SES modeling, chiefly related to progressing modeling products, processes and practices.

Elsawah et al., 2020

The findings are nicely summarized in Figure 1:


Not surprisingly, item #1 is … building bridges. This is why I’m more of a “big tent” guy. Is systems thinking a subset of system dynamics, or is system dynamics a subset of systems thinking? I think the appropriate answer is, “who cares?” Such disciplinary fence-building is occasionally informative, but more often needlessly divisive and useless for solving real-world problems.

It’s interesting to contrast this with George Richardson’s list for SD:

The potential pitfalls of our current successes suggest the time is right to sketch a view of outstanding problems in the field of system dynamics, to focus the attention of people in the field on especially promising or especially problematic issues. …

Understanding model behavior
Accumulating wise practice
Advancing practice
Accumulating results
Making models accessible
Qualitative mapping and formal modeling
Widening the base
Confidence and validation

Problems for the Future of System Dynamics
George P. Richardson

The contrasts here are interesting. Elsawah et al. are more interested in multiscale phenomena, data, uncertainty and systemic change (#5, which I think means autopoiesis, not merely change over time). I think these are all important and perhaps underappreciated priorities for the future of SD as well. Richardson on the other hand is more interested in validation and understanding of models, making progress cumulative, and widening participation in several ways.

More importantly, I think there’s really a lot of overlap – in fact I don’t think either party would disagree with anything on the other’s list. In particular, both support mixed qualitative and computational methods and increasing the influence of models.

I think Forrester’s view on influence is illuminating:

One hears repeatedly the question of how we in system dynamics might reach “decision makers.” With respect to the important questions, there are no decision makers. Those at the top of a hierarchy only appear to have influence. They can act on small questions and small deviations from current practice, but they are subservient to the constituencies that support them. This is true in both government and in corporations. The big issues cannot be dealt with in the realm of small decisions. If you want to nudge a small change in government, you can apply systems thinking logic, or draw a few causal loop diagrams, or hire a lobbyist, or bribe the right people. However, solutions to the most important sources of social discontent require reversing cherished policies that are causing the trouble. There are no decision makers with the power and courage to reverse ingrained policies that would be directly contrary to public expectations. Before one can hope to influence government, one must build the public constituency to support policy reversals.

System Dynamics—the Next Fifty Years
Jay W. Forrester

This neatly explains Forrester’s emphasis on education as a prerequisite for change. Richardson may agree, because this is essentially “widening the base” and “making models accessible”. My first impression was that Elsawah et al. were taking more of a “modeling priesthood” view of things, but in the end they write:

New kinds of interactive interfaces are also needed to help stakeholders access models, be it to make sense of simulation results (e.g. through monetization of values or other forms of impact representation), to shape assumptions and inputs in model development and scenario building, and to actively negotiate around inevitable conflicts and tradeoffs. The role of stakeholders should be much more expansive than that of a passive recipient of expert input; rather, they are co-creators of models, knowledge and solutions.

Where I sit in post-covid America, with atavistic desires for simpler times that never existed looming large in politics, broadening the base for model participation seems more important than ever. It’s just a bit daunting to compare the long time constant on learning with the short fuse on some of the big problems we hope these grand challenges will solve.

Should System Dynamics Have a Big Tent or Narrow Focus?

In a breakout in the student colloquium at ISDC 2022, we discussed the difficulty of getting a paper accepted into the conference, where the content was substantially a discrete event or agent simulation. Readers may know that I’m not automatically a fan of discrete models. Discrete time stinks. However, I think “discreteness” itself is not the enemy – it’s just that the way people approach some discrete models is bad, and continuous is often a good way to start.

On the flip side, there are certainly cases in which it’s sensible to start with a more granular, detailed model. In fact there are cases in which nonlinearity makes correct aggregation impossible in principle. This may not require going all the way to a discrete, agent model, but I think there’s a compelling case for the existence of systems in which the only good model is not a classic continuous time, aggregate, continuous value model. In between, there are also cases in which it may be practical to aggregate, but you don’t know how to do it a priori. In such cases, it’s useful to compare aggregate models with underlying detailed models to see what the aggregation rules should be, and to know where they break down.

I guess this is a long way of saying that I favor a “big tent” interpretation of System Dynamics. We should be considering models broadly, with the goal of understanding complex systems irrespective of methodological limits. We should go where operational thinking takes us, even if it’s not continuous.

This doesn’t mean that everything is System Dynamics. I think there are lots of things that should generally be excluded. In particular, anything that lacks dynamics – at a minimum pure stock accumulation, but usually also feedback – doesn’t make the cut. While I think that good SD is almost always at the intersection of behavior and physics, we sometimes have nonbehavioral models at the conference, i.e. models that lack humans, and that’s OK because there are some interesting opportunities for cross-fertilization. But I would exclude models that address human phenomena, but with the kind of delusional behavioral model that you get when you assume perfect information, as in much of economics.

I think a more difficult question is, where should we draw the line between System Dynamics and model-free Systems Thinking? I think we do want some model-free work, because it’s the gateway drug, and often influential. But it’s also high risk, in the sense that it may involve drawing conclusions about behavior from complex maps, where we’ve known from the beginning that no one can intuitively solve a 10th order system. I think preserving the core of the SD genome, that conclusions should emerge from replicable, transparent, realistic simulations, is absolutely essential.


  • Discrete Time Stinks
  • Dynamics of the last Twinkie
  • Bernoulli and Poisson are in a bar …
  • Modeling Discrete & Stochastic Events in Vensim

Finding SD conference papers

How to search the System Dynamics conference proceedings, and other places to find SD papers.

There’s been a lot of turbulence in the SD Society’s web organization, which is now greatly improved. One side effect is that the conference proceedings have moved: the conference proceedings page now points to a dedicated subdomain.

If you want to do a directed search of the proceedings for papers on a particular topic, the Google search syntax is now: topic

where ‘topic’ should be replaced by your terms of interest, as in stock flow

(This post was originally published in Oct. 2012; obsolete approaches have been removed for simplicity.)

Other places to look for papers include the System Dynamics Review and Google Scholar.

Nature Reverses on Limits

Last week Nature editorialized,

Are there limits to economic growth? It’s time to call time on a 50-year argument

Fifty years ago this month, the System Dynamics group at the Massachusetts Institute of Technology in Cambridge had a stark message for the world: continued economic and population growth would deplete Earth’s resources and lead to global economic collapse by 2070. This finding was from their 200-page book The Limits to Growth, one of the first modelling studies to forecast the environmental and social impacts of industrialization.

For its time, this was a shocking forecast, and it did not go down well. Nature called the study “another whiff of doomsday” (see Nature 236, 47–49; 1972). It was near-heresy, even in research circles, to suggest that some of the foundations of industrial civilization — mining coal, making steel, drilling for oil and spraying crops with fertilizers — might cause lasting damage. Research leaders accepted that industry pollutes air and water, but considered such damage reversible. Those trained in a pre-computing age were also sceptical of modelling, and advocated that technology would come to the planet’s rescue. Zoologist Solly Zuckerman, a former chief scientific adviser to the UK government, said: “Whatever computers may say about the future, there is nothing in the past which gives any credence whatever to the view that human ingenuity cannot in time circumvent material human difficulties.”

“Another Whiff of Doomsday” (unpaywalled: Nature whiff of doomsday 236047a0.pdf) was likely penned by Nature editor John Maddox, who wrote in his 1972 book, the Doomsday Syndrome,

“Tiny though the earth may appear from the moon, it is in reality an enormous object. The atmosphere of the earth alone weighs more than 5,000 million million tons, more than a million tons of air for each human being now alive. The water on the surface of the earth weighs more than 300 times as much – in other words, each living person’s share of the water would just about fill a cube half a mile in each direction… It is not entirely out of the question that human intervention could at some stage bring changes, but for the time being the vast scale on which the earth is built should be a great comfort. In other words, the analogy of space-ship earth is probably not yet applicable to the real world. Human activity, spectacular though it may be, is still dwarfed by the human environment.”

Reciting the scale of earth’s resources hasn’t held up well as a counterargument to Limits, for the reason given by Forrester and Meadows et al. at the time: exponential growth approaches any finite limit in a relatively small number of doublings. The Nature editors were clearly aware of this back in ’72, but ignored its implications.

Instead, they subscribed to a “smooth approach” view, in which “a kind of restraint” limits population all by itself.

There are a lot of problems with this reasoning, not least of which is that economic activity is growing faster than population, yet there is no historic analog of the demographic transition for economies. However, I think the most fundamental problem with the editors’ mental model is that it’s effectively first order. Population is the only stock of interest; to the extent that they mention resources and pollution, it is only to propose that prices and preferences will take care of them. There’s no consideration of the possibility of a laissez-faire demographic transition resulting in absolute levels of population and economic activity requiring resource withdrawals that deplete resources and saturate sinks, leading to eventual overshoot and collapse. I’m reminded of Jay Forrester’s frequent comment, to the effect of, “if you have a model, you’ll be the only person in the room who can speak for 20 minutes without self-contradiction.” The ’72 Nature editorial clearly suffers for lack of a model.
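The doubling arithmetic behind Forrester and Meadows' point is easy to check: however vast the endowment relative to current use, the number of doublings needed to exhaust it grows only logarithmically (the ratios below are illustrative, not estimates of any particular resource).

```python
import math

# How many doublings until exponential growth reaches a finite limit?
def doublings_to_limit(limit_ratio):
    """Doublings needed for usage to grow to limit_ratio times today's level."""
    return math.ceil(math.log2(limit_ratio))

for ratio in (10, 1000, 1_000_000):
    print(f"a {ratio}x limit is reached in {doublings_to_limit(ratio)} doublings")
```

A millionfold margin, which sounds like Maddox's "enormous object," buys only twenty doublings; at a few percent annual growth that is a matter of centuries, not geologic time.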

While the ’22 editorial at last acknowledges the existence of the problem, its prescription is “more research.”

Researchers must try to resolve a dispute on the best way to use and care for Earth’s resources.

But the debates haven’t stopped. Although there’s now a consensus that human activities have irreversible environmental effects, researchers disagree on the solutions — especially if that involves curbing economic growth. That disagreement is impeding action. It’s time for researchers to end their debate. The world needs them to focus on the greater goals of stopping catastrophic environmental destruction and improving well-being.

… green-growth and post-growth scientists need to see the bigger picture. Right now, both are articulating different visions to policymakers, and there is a risk this will delay action. In 1972, there was still time to debate, and less urgency to act. Now, the world is running out of time.

If there’s disagreement about the solution, then the solution should be distributed, so that we can learn from different approaches. It’s easy to verify success, by checking the equilibrium conditions for sources and sinks: as long as they’re in decline, policies need to adjust. However, I don’t think lack of agreement about the solution is the real problem.

The real problem is that the research “consensus that human activities have irreversible environmental effects” has no counterpart in the political and economic spheres. Neither green-growth nor degrowth has de facto support. This is not a problem that will be solved by more environmental or economic research.

Escalator Solutions

As promised, here’s my solution to the escalator problem … several, actually.

Before getting into the models, a point about simulation vs. analytic solutions. You can solve this problem on pencil and paper with simple algebra. This has some advantages. First, you can be completely data free, by using symbols exclusively. You don’t need to know the height of the stair or a person’s climbing speed, because you can call these Hs and Vc and solve the problem for all possible values. A simulation, by contrast, needs at least notional values for these things. Second, you may be able to draw general conclusions about the solution from its structure. For example, if it takes the form t = H/V, you know there’s some kind of singularity at V=0. With a simulation, if you don’t think to test V=0, you might miss an important special case. It’s easy to miss these special cases in a parameter space with many dimensions.

On the other hand, if there are many dimensions, this may imply that the problem will be difficult or impossible to solve analytically, so simulation may be the only fallback. A simulation also makes it easier to play with the model interactively (e.g., Vensim’s Synthesim mode) and to incorporate features like model-data comparisons and optimization. The ability to play invites experimentation with parameter values you might not otherwise think of. Also, drawing a stock-flow diagram may allow you to access other forms of visual thinking, or analogies with structurally similar systems in different domains.

With that prelude, here’s how I conceived of the problem:

  • You’re in a building, at height=0 (feet in my model, but the particular unit doesn’t matter as long as you have and check units).
  • Stairs rise to height=100.
  • There’s an escalator from 100 to 200 ft.
  • Then stairs resume, to infinite height.
  • The escalator ascends at 1 ft/sec, and the climber climbs at 1 ft/sec whether on the stairs or the escalator.
  • At some point, the climber rests for 60 sec, at which point their rate of climb is 0, but they continue to ascend if on the escalator.

Of course all the numbers can be changed on the fly, but these concepts at least have to exist.
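The setup above can be sketched as a simple Euler integration (parameter values from the bullets; the rest timing, time step, and the 300 ft finish line are assumptions for illustration):

```python
# Minimal sketch of the escalator climb (heights and speeds from the post;
# rest timing, dt, and the 300 ft target are illustrative assumptions).
def climb(rest_start, rest_duration=60.0, dt=0.25, climb_speed=1.0,
          escalator_speed=1.0, esc_bottom=100.0, esc_top=200.0, target=300.0):
    """Return the time to reach target height, resting at t = rest_start."""
    t, h = 0.0, 0.0
    while h < target:
        resting = rest_start <= t < rest_start + rest_duration
        rate = 0.0 if resting else climb_speed
        if esc_bottom <= h < esc_top:  # on the escalator: it moves you anyway
            rate += escalator_speed
        h += rate * dt
        t += dt
        if t > 10000:
            raise RuntimeError("climber is making no progress")
    return t

print("rest on stairs:   ", climb(rest_start=10.0))
print("rest on escalator:", climb(rest_start=120.0))
```

With these numbers, resting at t=10 (on the stairs) finishes later than resting at t=120 (on the escalator), since the escalator keeps lifting the resting climber. Setting escalator_speed negative sketches the down-escalator case, though the climber then needs climb_speed greater than the escalator speed to make any headway on it.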

I think of this as a problem of pure accumulation, with height as a stock. But it turned out that I still needed some feedback to determine where the climber was – on the stairs, or on the escalator:

At first it struck me that this was “fake” feedback – an accounting artifact – and that it might go away with an alternate conception. Here’s my implementation of Pradeesh Kumar’s idea, from the SDS Discussion Group on Facebook, with the height to be climbed on the stairs and escalator as a stock, and an outflow as climbing is accomplished:

The logical loop is still there, and the rest of the accounting is more complex, so I think it’s inevitable.

Finally, I built the same model in Ventity, so I could use multiple entities to quickly store and replicate several scenarios:

Looking at the Ventity output, resting on the escalator is preferable:

While resting on the stairs, nothing happens. While resting on the escalator, you continue to make gains.

There’s an unstated assumption present in all the twitter answers I’ve seen: the escalator is the up escalator. I actually prefer to go up the down escalator, though it attracts weird looks. If you do that, resting on the escalator is catastrophic, because you lose ground that you previously gained:

I suspect there are other interesting edge cases to explore.

The models:

Vensim (any version): Escalator 1.mdl

Vensim, alternate conception: Escalator 1 alt.mdl

Vensim Pro/DSS/Model Reader – subscripted for multiple experiments: escalator 2.mdl

Ventity: Escalator

JJ Lauble has also created a version, posted at the Vensim forum. I haven’t had a chance to explore it yet, but it looks like he may have used Vensim to explore the algebraic solution, with the time axis as a way to scan the solution space with Synthesim overrides.