Vensim->Forio Simulate webinar tomorrow

Tomorrow I’ll be co-hosting a free webinar on development of web simulations using Vensim and Forio. Here’s the invite:

VENSIM/FORIO WEBINAR: How to create web simulations with Vensim using Forio Simulate

Vensim is ideally suited for creating sophisticated system dynamics simulation models, and Ventana UK’s Sable tool provides desktop deployment, but how can modelers make the insights from models accessible via the web?

Forio Simulate is a web hosting application that makes it easy for modelers to integrate Vensim models into end-user web applications. It allows modelers working in Vensim to publish VMF files to a server-based installation of Vensim hosted by Forio. Modelers can then use the interface design tool to create a web interface using a drag-and-drop application. No programming is necessary.

Date:
Wednesday, March 23rd @ 1 PM Eastern / 10 AM Pacific

Presenters:
Tom Fiddaman from Ventana Systems, Inc.
Billy Schoenberg from Forio Online Simulations

Cost:
Free

In this free webinar, Tom Fiddaman and Billy Schoenberg will show how Vensim modelers can combine interactive web applications with Vensim.

The webinar will cover:

1. Importing your Vensim model into Forio Simulate for use on the web
2. Exploring your model with the Forio Simulate Model Explorer
3. Creating a web-based user interface without writing code
4. Expanding past the drag-and-drop UI designer using Forio Simulate’s RESTful APIs

This webinar is suitable for all system dynamics modelers who would like to integrate their simulation into a web application.

There is no charge to attend the webinar. Reserve your spot now at https://www2.gotomeeting.com/register/474057034

Nuclear systems thinking roundup

Mengers & Sirelli call for systems thinking in the nuclear industry in IEEE Xplore:

Need for Change Towards Systems Thinking in the U.S. Nuclear Industry

Until recently, nuclear has been largely considered as an established power source with no need for new developments in its generation and the management of its power plants. However, this idea is rapidly changing due to reasons discussed in this study. Many U.S. nuclear power plants are receiving life extensions decades beyond their originally planned lives, which requires the consideration of new risks and uncertainties. This research first investigates those potential risks and sheds light on how nuclear utilities perceive and plan for these risks. After that, it examines the need for systems thinking for extended operation of nuclear reactors in the U.S. Finally, it concludes that U.S. nuclear power plants are good examples of systems in need of change from a traditional managerial view to a systems approach.

In this talk from the MIT SDM conference, NRC commissioner George Apostolakis is already there:

Systems Issues in Nuclear Reactor Safety

This presentation will address the important role system modeling has played in meeting the Nuclear Regulatory Commission’s expectation that the risks from nuclear power plants should not be a significant addition to other societal risks. Nuclear power plants are designed to be fundamentally safe due to diverse and redundant barriers to prevent radiation exposure to the public and the environment. A summary of the evolution of probabilistic risk assessment of commercial nuclear power systems will be presented. The summary will begin with the landmark Reactor Safety Study performed in 1975 and continue up to the risk-informed Reactor Oversight Process. Topics will include risk-informed decision making, risk assessment limitations, the philosophy of defense-in-depth, importance measures, regulatory approaches to handling procedural and human errors, and the influence of safety culture as the next level of nuclear power safety performance improvement.

The presentation is interesting, in that it’s about 20% engineering and 80% human factors. Figuring out how people interact with a really complicated control system is a big challenge.

This thesis looks like an example of what Apostolakis is talking about:

Perfect plant operation with high safety and economic performance is based on both good physical design and successful organization. However, in comparison with the attention that has been paid to technology research, the effort that has been exerted to enhance NPP management and organization, namely human performance, seems pale and insufficient. There is a need to identify and assess aspects of human performance that are predictive of plant safety and performance and to develop models and measures of these performance aspects that can be used for operation policy evaluation, problem diagnosis, and risk-informed regulation. The challenge of this research is that: an NPP is a system that is comprised of human and physics subsystems. Every human department includes different functional workers, supervisors, and managers; while every physical component can be in normal status, failure status, or a being-repaired status. Thus, an NPP’s situation can be expressed as a time-dependent function of the interactions among a large number of system elements. The interactions between these components are often non-linear and coupled, sometimes there are direct or indirect, negative or positive feedbacks, and hence a small interference input either can be suppressed or can be amplified and may result in a severe accident finally. This research expanded ORSIM (Nuclear Power Plant Operations and Risk Simulator) model, which is a quantitative computer model built by system dynamics methodology, on human reliability aspect and used it to predict the dynamic behavior of NPP human performance, analyze the contribution of a single operation activity to the plant performance under different circumstances, diagnose and prevent fault triggers from the operational point of view, and identify good experience and policies in the operation of NPPs.

The cool thing about this, from my perspective, is that it’s a blend of plant control with classic SD maintenance project management. It looks at the plant as a bunch of backlogs to be managed, and defines instability as a circumstance in which the rate of creation of new work exceeds the capacity to perform tasks. This is made operational through explicit work and personnel stocks, right down to the matter of who’s in charge of the control room. Advisor Michael Golay has written previously about SD in the nuclear industry.

Others in the SD community have looked at some of the “outer loops” operating around the plant, using group model building. Not surprisingly, this yields multiple perspectives and some counterintuitive insights – for example:

Regulatory oversight was initially and logically believed by the group to be independent of the organization and its activities. It was therefore identified as a policy variable.

However in constructing the very first model at the workshop it became apparent that for the event and system under investigation the degree of oversight was influenced by the number of event reports (notifications to the regulator of abnormal occurrences or substandard conditions) the organization was producing. …

The top loop demonstrates the reinforcing effect of a good safety culture, as it encourages compliance, decreases the normalisation of unauthorised changes, therefore increasing vigilance for any outlying unauthorised deviations from approved actions and behaviours, strengthening the safety culture. Or if the opposite is the case an erosion of the safety culture results in unauthorised changes becoming accepted as the norm, this normalisation disguises the inherent danger in deviating from the approved process. Vigilance to these unauthorised deviations and the associated potential risks decreases, reinforcing the decline of the safety culture by reducing the means by which it is thought to increase. This is however balanced by the paradoxical notion set up by the feedback loop involving oversight. As safety improves, the number of reportable events, and therefore reported events can decrease. The paradoxical behaviour is induced if the regulator perceives this lack of event reports as an indication that the system is safe, and reduces the degree of oversight it provides.

Tsuchiya et al. reinforce the idea that change management can be part of the problem as well as part of the solution.

Markus Salge provides a nice retrospective on the Chernobyl accident, best summarized in pictures:

Salge Chernobyl

Key feedback structure of a graphite-moderated reactor like Chernobyl

Salge Flirting With Disaster

“Flirting with Disaster” dynamics

Others are looking at the nuclear fuel cycle and the role of nuclear power in energy systems.

Bad data, bad models

Baseline Scenario has a nice post on bad data:

To make a vast generalization, we live in a society where quantitative data are becoming more and more important. Some of this is because of the vast increase in the availability of data, which is itself largely due to computers. Some is because of the vast increase in the capacity to process data, which is also largely due to computers. …

But this comes with a problem. The problem is that we do not currently collect and scrub good enough data to support this recent fascination with numbers, and on top of that our brains are not wired to understand data. And if you have a lot riding on bad data that is poorly understood, then people will distort the data or find other ways to game the system to their advantage.

In spite of ubiquitous enterprise computing, bad data is the norm in my experience with corporate consulting. At one company, I had access to very extensive data on product pricing, promotion, advertising, placement, etc., but the information system archived everything inaccessibly on a rolling 3-year horizon. That made it impossible to see long term dynamics of brand equity, which was really the most fundamental driver of the firm’s success. Our experience with large projects includes instances where managers don’t want to know the true state of the system, and therefore refuse to collect or provide needed data – even when billions are at stake. And some firms jealously guard data within stovepipes – it’s hard to optimize the system when the finance group keeps the true product revenue stream secret in order to retain leverage over the marketing group.

People worry about garbage-in-garbage out, but modeling can actually be the antidote to bad data. If you pay attention to quality, the process of building a model will reveal all kinds of gaps in data. We recently discovered that various sources of vehicle fleet data are in serious disagreement, because of double-counting of transactions and interstate sales, and undercounting of inspections. Once data issues are known, a model can be used to remove biases and filter noise (your GPS probably runs a Kalman Filter to combine a simple physical model of your trajectory with noisy satellite measurements).
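The GPS aside can be made concrete with a toy scalar filter – a minimal sketch of the idea, not any particular receiver’s algorithm (the constant-position model and the noise variances q and r are illustrative assumptions):

```python
def kalman_1d(measurements, q=0.01, r=1.0):
    """Toy scalar Kalman filter with a constant-position model.

    q is the process noise variance (how much the model is trusted),
    r is the measurement noise variance (how much the data are trusted);
    both values here are illustrative, not calibrated.
    """
    x, p = measurements[0], 1.0       # initial estimate and its variance
    estimates = []
    for z in measurements:
        p += q                        # predict: uncertainty grows over time
        k = p / (p + r)               # Kalman gain: weight on the new reading
        x += k * (z - x)              # update: blend prediction and reading
        p *= 1.0 - k                  # uncertainty shrinks after the update
        estimates.append(x)
    return estimates
```

With r much larger than q, the filter trusts the model and smooths the noisy readings heavily – the same blending of a causal model with imperfect data described above.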

Not just any model will do; causal models are important. It’s hard to discover that your data fails to observe physical laws or other reality checks with a model that permits negative cows and buries the acceleration of gravity in a regression coefficient.

The problem is, a lot of people have developed an immune response against models, because there are so many that don’t pay attention to quality and serve primarily propagandistic purposes. The only antidote for that, I think, is to teach modeling skills, or at least model consumption skills, so that they know the right questions to ask in order to separate the babies from the bathwater.

What do SD bibliography entries say about the health of the field?

Here’s a time series of the number of entries in the system dynamics bibliography:

SD bibliography entries

The peak was in 2000 with 420 entries. If you break out the types, it looks like the conference has saturated at about 250-300 papers, while journal, report and book publications have fallen off.

SD biblio detail

I suspect that some of the decline is explained by a long reporting lag, and some is “defection” of SD work into journals that aren’t captured in the bibliography (probably a good thing). It would be interesting to see a corrected series, to see what it says about the health of the field. The ideal way to do the correction would be to build a simple dynamic model of actual and measured publication rates, estimating the parameters from data (student project, anyone?).
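For the curious, here’s a speculative sketch of the simplest version of that correction – a first-order exponential reporting delay. The flat 400/year publication level and the 2-year mean lag are made-up parameters for illustration, not estimates:

```python
import math

def measured_entries(actual, report_time, years_since):
    """Entries visible `years_since` years after publication, assuming a
    first-order (exponential) reporting delay with mean `report_time` years.
    Parameters are illustrative, not fitted to the bibliography data."""
    return actual * (1.0 - math.exp(-years_since / report_time))

# With actual entries flat at 400/year and a 2-year mean reporting lag,
# the most recent years look artificially depressed even with no real decline:
recent = [round(measured_entries(400, 2.0, y)) for y in (0.5, 1, 2, 5, 10)]
```

Even this crude version shows how a reporting lag alone produces an apparent falloff in the last few years of the series.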

How Many Pairs of Rabbits Are Created by One Pair in One Year?

The Fibonacci numbers are often illustrated geometrically, with spirals or square tilings, but the nautilus is not their origin. I recently learned that the sequence was first reported as the solution to a dynamic modeling thought experiment, posed by Leonardo Pisano (Fibonacci) in his 1202 masterpiece, Liber Abaci.

How Many Pairs of Rabbits Are Created by One Pair in One Year?

A certain man had one pair of rabbits together in a certain enclosed place, and one wishes to know how many are created from the pair in one year when it is the nature of them in a single month to bear another pair, and in the second month those born to bear also. Because the abovewritten pair in the first month bore, you will double it; there will be two pairs in one month. One of these, namely the first, bears in the second month, and thus there are in the second month 3 pairs; of these in one month two are pregnant, and in the third month 2 pairs of rabbits are born, and thus there are 5 pairs in the month; in this month 3 pairs are pregnant, and in the fourth month there are 8 pairs, of which 5 pairs bear another 5 pairs; these are added to the 8 pairs making 13 pairs in the fifth month; these 5 pairs that are born in this month do not mate in this month, but another 8 pairs are pregnant, and thus there are in the sixth month 21 pairs; [p284] to these are added the 13 pairs that are born in the seventh month; there will be 34 pairs in this month; to this are added the 21 pairs that are born in the eighth month; there will be 55 pairs in this month; to these are added the 34 pairs that are born in the ninth month; there will be 89 pairs in this month; to these are added again the 55 pairs that are born in the tenth month; there will be 144 pairs in this month; to these are added again the 89 pairs that are born in the eleventh month; there will be 233 pairs in this month.

Source: http://www.math.utah.edu/~beebe/software/java/fibonacci/liber-abaci.html

The solution is the famous Fibonacci sequence, which can be written as a recurrent series,

F(n) = F(n-1)+F(n-2), F(0)=F(1)=1

This can be directly implemented as a discrete time Vensim model:

Fibonacci Series

However, that representation is a little too abstract to immediately reveal the connection to rabbits. Instead, I prefer to revert to Fibonacci’s problem description to construct an operational representation:

Fibonacci Rabbits

Mature rabbit pairs are held in a stock (Fibonacci’s “certain enclosed space”), and they breed a new pair each month (i.e. the Reproduction Rate = 1/month). Modeling male-female pairs rather than individual rabbits neatly sidesteps concern over the gender mix. Importantly, there’s a one-month delay between birth and breeding (“in the second month those born to bear also”). That delay is captured by the Immature Pairs stock. Rabbits live forever in this thought experiment, so there’s no outflow from mature pairs.

You can see the relationship between the series and the stock-flow structure if you write down the discrete time representation of the model, ignoring units and assuming that the TIME STEP = Reproduction Rate = Maturation Time = 1:

Mature Pairs(t) = Mature Pairs(t-1) + Maturing
Immature Pairs(t) = Immature Pairs(t-1) + Reproducing - Maturing

Substituting Maturing = Immature Pairs and Reproducing = Mature Pairs,

Mature Pairs(t) = Mature Pairs(t-1) + Immature Pairs(t-1)
Immature Pairs(t) = Immature Pairs(t-1) + Mature Pairs(t-1) - Immature Pairs(t-1) = Mature Pairs(t-1)

So:

Mature Pairs(t) = Mature Pairs(t-1) + Mature Pairs(t-2)
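The substitution can be verified numerically with a minimal sketch of the two-stock model – Euler integration with TIME STEP = Reproduction Rate = Maturation Time = 1, variable names following the diagram:

```python
def simulate(months):
    """Euler-step the two-stock rabbit model with dt = 1 month.
    Returns the total pairs (Mature + Immature) each month."""
    mature, immature = 1.0, 0.0   # start with one breeding pair, no young
    totals = [mature + immature]
    for _ in range(months):
        reproducing = mature      # Reproduction Rate = 1/month
        maturing = immature       # Maturation Time = 1 month
        mature += maturing
        immature += reproducing - maturing
        totals.append(mature + immature)
    return totals
```

The totals trace out Fibonacci’s own tally (2 pairs in the first month, 3 in the second, 5 in the third, …), and the month-over-month growth rate settles toward 61.8%/month.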

The resulting model has two feedback loops: a minor negative loop governing the Maturing of Immature Pairs, and a positive loop of rabbits Reproducing. The rabbit population tends to explode, due to the positive loop:

Fibonacci Growth

In four years, there are about as many rabbits as there are humans on earth, so that “certain enclosed space” better be big. After an initial transient, the growth rate quickly settles down:

Fibonacci Growth Rate

Its steady-state value is .61803… (61.8%/month), which is the Golden Ratio conjugate. If you change the variable names, you can see the relationship to the tiling interpretation and the Golden Ratio:

Fibonacci Part Whole

Like anything that grows exponentially, the Fibonacci numbers get big fast. The hundredth is 354,224,848,179,261,915,075.
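That value is easy to check with a few lines of Python, using the F(0)=F(1)=1 convention above (the hundredth element of 1, 1, 2, 3, 5, … is F(99) in this indexing):

```python
def fib(n):
    """F(n) with F(0) = F(1) = 1, the convention used above."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a if n == 0 else b
```

Python’s arbitrary-precision integers make the 21-digit hundredth element painless.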

As before, we can play the eigenvector trick to suppress the growth mode. The system is described by the matrix:

-1 1
 1 0

which has eigenvalues {-1.618033988749895, 0.6180339887498949} – notice the appearance of the Golden Ratio. If we initialize the model with the eigenvector of the negative eigenvalue, {-0.8506508083520399, 0.5257311121191336}, we can get the bunny population under control, at least until numerical noise excites the growth mode, near time 25:

Fibonacci Stable

The problem is that we need negarabbits to do it, -.85065 immature rabbits initially, so this is not a physically realizable solution (which probably guarantees that it will soon be introduced in legislation).
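For anyone who wants to check the arithmetic, here’s a sketch of the eigenvalue computation in plain Python, assuming the states are ordered (Immature, Mature) – an ordering consistent with the matrix above:

```python
import math

# Flow-rate matrix of the model, state ordered (Immature, Mature):
#   d(Immature)/dt = Reproducing - Maturing = Mature - Immature
#   d(Mature)/dt   = Maturing               = Immature
A = [[-1.0, 1.0],
     [ 1.0, 0.0]]

# For a 2x2 matrix the eigenvalues solve l^2 - tr(A)*l + det(A) = 0
tr = A[0][0] + A[1][1]                       # -1
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # -1
disc = math.sqrt(tr * tr - 4.0 * det)        # sqrt(5)
eigs = sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])

# Eigenvector of the negative eigenvalue; row 2 of (A - l*I)v = 0 gives
# Immature = l * Mature, then normalize to unit length
lam = eigs[0]
v = [lam, 1.0]
norm = math.hypot(v[0], v[1])
v = [x / norm for x in v]
```

The eigenvalues come out to {-1.618…, 0.618…} and the negative mode’s eigenvector to {-0.85065…, 0.52573…}, matching the values quoted above.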

I brought this up with my kids, and they immediately went to the physics of the problem: “Rabbits don’t live forever. How big is your cage? Do you have rabbit food? TONS of rabbit food? What if you have all males, or varying mixtures of males and females?”

It’s easy to generalize the structure to generate other sequences. For example, assuming that mature rabbits live for only two months yields the Padovan sequence. Its equivalent of the Golden Ratio is 1.3247…, i.e. the rabbit population grows more slowly at ~32%/month, as you’d expect since rabbit lives are shorter.
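The Padovan growth rate is also easy to check – a sketch assuming the common convention P(0)=P(1)=P(2)=1 with P(n)=P(n-2)+P(n-3):

```python
def padovan_ratio(n):
    """Ratio P(n)/P(n-1) of successive Padovan numbers,
    with P(0) = P(1) = P(2) = 1 (one common convention)."""
    p = [1, 1, 1]
    for i in range(3, n + 1):
        p.append(p[i - 2] + p[i - 3])
    return p[n] / p[n - 1]
```

The ratio converges to the plastic number, ≈1.3247, i.e. growth of roughly 32%/month.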

The model’s in my library.

The simple dynamics of violence

There’s simple, as in Occam’s Razor, and there’s simple, as in village idiot.

There’s a noble tradition in economics of using simple thought experiments to illuminate important dynamics. Sometimes things go wrong, though, like this (from a blog I usually like):

… suppose that you have the choice of providing gruesome rhetoric that will increase the probability of a killing spree but will also increase the probability of the passage of Universal Health Insurance. Suppose using the Arizona case as a baseline we say that the average killing spree causes the death of 6 people. Then if your rhetoric is at least 6/22,000 = 1/3667 times as likely to produce the passage of universal health insurance as it is to induce a killing spree then you saved lives by engaging in fiery rhetoric.

http://modeledbehavior.com/2011/01/11/the-optimal-quantity-of-violent-rhetoric/
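To make the quoted expected-value arithmetic explicit – this just restates the linear logic, taking the quote’s implied figures (6 deaths per spree, 22,000 lives saved by the policy) at face value, which is precisely what the rest of this post questions:

```python
def net_lives_saved(p_policy, p_spree, lives_saved=22000, deaths_per_spree=6):
    """Expected lives saved by rhetoric under the quote's linear logic.
    The 22,000 and 6 figures are the quote's implied numbers, used here
    for illustration only."""
    return p_policy * lives_saved - p_spree * deaths_per_spree

# Break-even odds ratio: p_policy / p_spree = 6 / 22000, about 1 / 3667
```

The break-even ratio is where the linear model comes from; everything that follows is about why that model is too simple.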

Here’s the apparent mental model behind this reasoning:

Linear Violence

It’s linear: use violent rhetoric, get the job done. There are two problems with this simple model. First, the sign of the relationships is ambiguous. I tend to suspect that anyone who needs to use violent rhetoric is probably a fanatic, who shouldn’t be making policy in the first place. Setting that aside, the bigger problem is that violence isn’t linear. Like potato chips, you can never have just one excessive outburst. Violent rhetoric escalates, and sometimes crosses into real violence. This is the classic escalation archetype:

Violence Escalation

In the escalation archetype, two sides struggle to maintain an advantage over each other. This creates two inner negative feedback loops, which together create a positive feedback loop (a figure-8 around the two negative loops). It’s interesting to note that, so far, the use of violent rhetoric is fairly one-sided – the escalation is happening within the political right (candidates vying for attention?) more than between left and right.

There are many other positive feedbacks involved in the process, which exacerbate the direct escalation of language. Here are some speculative examples:

Violence Other Loops

The positive feedbacks around violent rhetoric create a societal trap, from which it may be difficult to extricate ourselves. If there’s a general systems insight about vicious cycles, it’s that the best policy is prevention – just don’t start down that road (if you doubt this, play the dollar auction or smoke some crack). Politicians who engage in violent rhetoric, or other races to the bottom of the intellectual barrel, risk starting a very destructive spiral:

violence Social

The bad news is that there’s no easy remedy for this behavior. Purveyors of violent rhetoric and their supporters need to self-reflect on the harm they do to society. The good news is that if public support for violent words and images reverses, the positive loops will help to repair the damage, and take us closer to a model of rational discourse for problem solving.

About that, there is at least a bit of wisdom in the article:

… if you genuinely care about the shooting death of six people then you ought to really, really care about endorsing wrong public policies which will result in the premature death of vastly more people. Hence you should devote yourself to actually discovering the right answers to these questions, rather than coming up with ad hoc rhetoric – violent or polite – in support of the policy you happened to have been attracted to first.

Deeper Lessons

From the mailbag, regarding my last post on storytelling and playing with systems,

I read your blog post from the 19th and wondered how you would compare what was presented in the blog in contrast with what Forrester said on pg. 17, “Deeper Lessons” in the paper at

http://sysdyn.clexchange.org/sdep/papers/D-4434-3.pdf

That paper is Jay Forrester’s 1994 Learning Through System Dynamics as Preparation for the 21st Century. There’s a lot of good thinking in it. Unfortunately, the pdf is protected, so I have to give you a screenshot:

Forrester 4434 excerpt

The “important implications” that might be missed are things like, “we cause our own problems,” the notion that cause and effect are separated in time and space, and the differences between high- and low-leverage policies. (Go read the original for more.)

I see the blog and paper as complementary. Forrester’s deeper learnings are things that emerge from understanding the way things work, and that understanding – he argues – is developed through experimentation. This is also the rationale for management flight simulators and other games that teach systems principles. I think the guidance toward important implications that Forrester advocates is not much different than the kind of reporting the blog seeks – coverage that illuminates system structure and its consequences.

I don’t think stories per se are the problem. Sometimes they do degenerate into the equivalent of a bad history textbook – a litany of he-said-she-said opinions and events without any organizing structure. However, a story can be crafted to reveal the way things work, and systems thinkers often advocate the use of stories to present system insights. Perhaps we should be more cautious about that.

I think it’s very natural to drop from an operational description of a system to stories that are so much about people and events that they lose track of structure. For example, the article on the steam engine at howstuffworks, which ought to be structural if anything is, starts off with, “They were first invented by Thomas Newcomen in 1705, and James Watt (who we remember each time we talk about “60-watt light bulbs” and the such) made big improvements to steam engines in 1769.” If it’s hard for steam engines, which are well-understood, imagine how hard it is for a reporter to get beyond the words of a controversial topic like health care, where even experts are likely to have ambiguous and conflicting mental models.

The cautionary aspect of stories reminded me of a section in The Fifth Discipline, about what happens when you don’t convey systemic understanding:

Unfortunately, much more common are leaders who have a sense of purpose and genuine vision, but little ability to foster systemic understanding. Many great “charismatic” leaders, despite having a deep sense of purpose and vision, manage almost exclusively at the level of events. Such leaders deal in visions and crises, and little in between. They foster a lofty sense of purpose and mission. They create tremendous energy and enthusiasm. But, under their leadership, an organization caroms from crisis to crisis. Eventually, the worldview of people in the organization becomes dominated by events and reactiveness. People experience being jerked continually from one crisis to another; they have no control over their time, let alone their destiny. Eventually, this will breed deep cynicism about the vision, and about visions in general. The soil within which a vision must take root – the belief that we can influence our future – becomes poisoned.

Such “visionary crisis managers” often become tragic figures. Their tragedy stems from the depth and genuineness of their vision. They often are truly committed to noble aspirations. But noble aspirations are not enough to overcome systemic forces contrary to the vision. As the ecologists say, “Nature bats last.” Systemic forces will win out over the most noble vision if we do not learn how to recognize, work with, and gently mold those forces.