Subversive modeling

This is a technical post, so Julian Assange fans can tune out. I’m actually writing about source code management for Vensim models.

Occasionally procurements try to treat model development like software development. That can end in tragedy, because modeling isn’t the same as coding. However, there are many common attributes, and therefore software tools can be useful for modeling.

One typical challenge in model development is version control. Model development is iterative, and I typically go through fifty or more named model versions in the course of a project. C-ROADS is at v142 of its second life. It takes discipline to keep track of all those model iterations, especially if you’d like to be able to document changes along the way and recover old versions. Having a distributed team adds to the challenge.

The old school way


Storytelling and playing with systems

This journalist gets it:

Maybe journalists shouldn’t tell stories so much. Stories can be a great way of transmitting understanding about things that have happened. The trouble is that they are actually a very bad way of transmitting understanding about how things work. Many of the most important things people need to know about aren’t stories at all.

Our work as journalists involves crafting rewarding media experiences that people want to engage with. That’s what we do. For a story, that means settings, characters, a beginning, a muddle and an end. That’s what makes a good story.

But many things, like global climate change, aren’t stories. They’re issues that can manifest as stories in specific cases.

… the way that stories transmit understanding is only one way of doing so. When it comes to something else – a really big, national or world-spanning issue, often it’s not what happened that matters, so much as how things work.

…When it comes to understanding a system, though, the best way is to interact with it.

Play is a powerful way of learning. Of course the systems I’ve listed above are so big that people can’t play with them in reality. But as journalists we can create models that are accurate and instructive as ways of interactively transmitting understanding.

I use the word ‘play’ in its loosest sense here; one can ‘play’ with a model of a system the same way a mechanic ‘plays’ around with an engine when she’s not quite sure what might be wrong with it.

The act of interacting with a system – poking and prodding, and finding out how the system reacts to your changes – exposes system dynamics in a way nothing else can.

If this grabs you at all, take a look at the original – it includes some nice graphics and an interesting application to class in the UK. The endpoint of the forthcoming class experiment is something like a data visualization tool. It would be cool if they didn’t stop there, but actually created a way for people to explore the implications of different models accounting for the dynamics of class, as Climate Colab and Climate Interactive do with climate models.

Changes afoot

I’m looking forward to some changes in my role at Ventana Systems. From the recent Vensim announcement:

Dear Vensim Community,

We would like to alert you to some exciting changes to the Vensim team.

Bob Eberlein, who has been head of development since almost the beginning of Vensim, has decided to embark on a new chapter of his life, starting in January. While we are sad to see him go, we greatly appreciate all of his efforts and accomplishments over the past 22 years, and wish him the very best in his new adventures.

Vensim is extremely important to our efforts here at Ventana Systems and we know that it is also important to many of you in the System Dynamics community. We are fully committed to maintaining Vensim as the leading System Dynamics software platform and to extending its features and capabilities. We have increased our investment in Vensim with the following team:

• Tom Fiddaman has taken on an additional role as Vensim Product Manager. He will make sure that new releases of Vensim address market requirements and opportunities. He will facilitate information flow between the community, our user base, and the Vensim design team.

• Tony Kennedy from Ventana Systems UK will lead the Customer Services functions, including order fulfillment, bug resolution, and the master training schedule. He will also support the Distributor network. Tony has been working with UK Vensim customers for over 10 years.

• Larry Yeager has recently joined Ventana Systems to head the future development of Vensim. Larry has led the development of many software products and applications, including the Jitia System Dynamics software for PA Consulting.

• We have formed a steering team that will provide guidance and expertise to our future product development. This team includes Alan Graham, Tony Kennedy, Tom Fiddaman, Marios Kagarlis, and David Peterson.

We are very excited about the future and look forward to continuing our great relationships with you, our clients and friends.

Most sincerely,

Laura Peterson

President & CEO

Ventana Systems, Inc.

Statistics >> Calculus ?

Another TED talk argues for replacing calculus with statistics at the pinnacle of mathematics education.

There’s an interesting discussion at Wild About Math!.

I’m a bit wary of the idea. First, I don’t think there needs to be a pinnacle – math can be a Bactrian camel. Second, some of the concepts are commingled anyway (limits and convergence, for example), so it hardly makes sense to treat them as competitors. Third, both are hugely important to good decision making (which is ultimately what we want out of education). Fourth, the world is a dynamic, stochastic system, so you need to understand a little of each.

Where the real opportunity lies, I think, is in motivating the teaching of both experientially. Start calculus with stocks and flows and physical systems, and start statistics with games of chance and estimation. Use both to help people learn how to make better inferences about a complex world. Then do the math as it gets interesting and necessary. Whether you come at the problem from the angle of dynamics or uncertainty first hardly matters.
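
To make the experiential angle concrete, here’s a minimal sketch in Python of the two starting points: a bathtub stock integrated from its flows, and a game of chance estimated by playing it many times. The parameters are made up for illustration; this isn’t drawn from any particular curriculum.

```python
import random

# Calculus entry point: a bathtub (stock) filled and drained by flows.
# Euler integration: the stock accumulates (inflow - outflow) over time.
def bathtub(inflow=2.0, drain_fraction=0.1, dt=0.25, steps=100):
    water = 0.0  # the stock
    for _ in range(steps):
        outflow = drain_fraction * water  # the outflow depends on the stock
        water += (inflow - outflow) * dt  # d(stock)/dt = inflow - outflow
    return water  # approaches inflow / drain_fraction = 20 in equilibrium

# Statistics entry point: estimate a probability by playing the game.
# Chance of at least one six in four rolls of a die, by simulation.
def chance_of_six(trials=100_000):
    hits = sum(any(random.randint(1, 6) == 6 for _ in range(4))
               for _ in range(trials))
    return hits / trials  # ~0.518, vs. the exact 1 - (5/6)**4

print(bathtub(), chance_of_six())
```

Either exercise gets at the formal math (derivatives and limits on one side, distributions and convergence on the other) only after the behavior of the system has become tangible.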

Economists in the bathtub

Env-Econ is one of several econ sites to pick up on standupeconomist Yoram Bauman’s assessment, Grading Economics Textbooks on Climate Change.

Most of the commentary points out the bad, but there’s also a lot of good. On Bauman’s curve, there are 4 As, 3 Bs, 5 Cs, 3 Ds, and one F. Still, the bad tends to be really bad. Bauman writes about one,

Overall, the book is not too bad if you ignore that it’s based on climate science that is almost 15 years out of date and that it has multiple errors that would make Wikipedia blush. The fact that this textbook has over 20 percent of the market shakes my faith in capitalism.

The interesting thing is that the worst textbooks go astray more on the science than on the economics. The worst cherry-pick outdated studies, distort the opinions of scientists, and toss in red herrings like “For Greenland, a warming climate is good economic news.”

I find the most egregious misrepresentation in Schiller’s The Economy Today (D+):

The earth’s climate is driven by solar radiation. The energy the earth absorbs from the sun must be balanced by outgoing radiation from the earth and the atmosphere. Scientists fear that a flow imbalance is developing. Of particular concern is a buildup of carbon dioxide (CO2) that might trap heat in the earth’s atmosphere, warming the planet. The natural release of CO2 dwarfs the emissions from human activities. But there’s a concern that the steady increase in man-made CO2 emissions—principally from burning fossil fuels like gasoline and coal—is tipping the balance….

First, there’s no “might” about the fact that CO2 traps heat (infrared radiation); the only question is how much, when feedback effects come into play.  But the bigger issue is Schiller’s implication about the cause of atmospheric CO2 buildup. Here’s a picture of Schiller’s words, with arrow width scaled roughly to actual fluxes:

[Figure: CO2 flows as Schiller describes them, with a huge one-way natural flux into the atmosphere dwarfing human emissions]

Apparently, nature is at fault for increasing atmospheric CO2. This is like worrying that the world will run out of air, because people are inhaling it all (Schiller may be inhaling something else). The reality is that the natural flux, while large, is a two way flow:

[Figure: CO2 flows with the natural flux drawn as a two-way exchange]

What goes into the ocean and biosphere generally comes out again. For the last hundred centuries, those flows were nearly equal (i.e. zero net flow). But now that humans are emitting a lot of carbon, the net flow is actually from the atmosphere into natural systems, like this:

[Figure: CO2 flows with human emissions driving a net flow from the atmosphere into natural sinks]

That’s quite a different situation. If an author can’t paint an accurate verbal picture of a simple stock-flow system like this, how can a text help students learn to manage resources, money or other stocks?
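
For readers who want to poke at this bathtub themselves, here’s a minimal stock-flow sketch in Python. The numbers are illustrative round figures (gross natural exchange on the order of 200 GtC/yr each way, human emissions around 10 GtC/yr, sinks absorbing roughly half of the human perturbation), chosen to show the structure rather than to match any published carbon budget.

```python
# Minimal atmospheric CO2 bathtub, annual time steps.
# Illustrative round numbers (GtC and GtC/yr), not a calibrated carbon budget.
atmosphere = 600.0       # atmospheric carbon stock
natural_out = 200.0      # gross flux: atmosphere -> ocean & biosphere
natural_in = 200.0       # gross flux: ocean & biosphere -> atmosphere
human_emissions = 10.0   # fossil fuel and land use emissions
uptake_fraction = 0.5    # share of the human perturbation absorbed by sinks

for year in range(100):
    # The natural flows are enormous but nearly balanced; the sinks respond
    # to the human perturbation by absorbing roughly half of it.
    net_natural_uptake = (natural_out - natural_in) + uptake_fraction * human_emissions
    atmosphere += human_emissions - net_natural_uptake

print(f"Atmospheric carbon after 100 years: {atmosphere:.0f} GtC")
# The stock rises steadily despite the huge gross natural fluxes: it's the
# small net imbalance, driven by human emissions, that fills the tub.
```

Double or halve the gross natural fluxes and nothing changes; zero out human_emissions and the accumulation stops. That’s the distinction Schiller’s prose obscures.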

Firefighting and other project dynamics

The tipping loop, a positive feedback that drives sequential or concurrent projects into permanent firefighting mode, is actually just one of a number of positive feedbacks that create project management traps. Here are some others:

  • Rework – the rework cycle is central to project dynamics. Rework arises when things aren’t done right the first time. When errors are discovered, tasks have to be reworked, and there’s no guarantee that they’ll be done right the second time either. This creates a reinforcing loop that bloats project tasks beyond what’s expected with perfect execution.
  • Brooks’ Law – adding resources to a late project makes it later. There are actually several feedback loops involved:
    • Rookie effects: new resources take time to get up to speed. Until they do, they eat up the time of senior staff, decreasing output. Also, they’re likely to be more error prone, creating more rework to be dealt with downstream.
    • Diseconomies of scale from communication overhead.
  • Burnout – under schedule pressure, it’s tempting to work harder and longer. That works briefly, but sustained overtime is likely to be counterproductive, due to decreased productivity, increased turnover, and higher error rates.
  • Congestion – in construction or assembly, a delay in early phases may not delay the arrival of materials from suppliers. Unused materials stack up, congesting the work site and slowing progress further.
  • Dilution – trying to overcome stalled phases by tackling too many tasks in parallel thins resources to the point that overhead consumes all available time, and progress grinds to a halt.
  • Hopelessness – death marches are no fun, and the mere fact that a project is going astray hurts morale, leading to decreased productivity and loss of resources as rats leave the sinking ship.

Any number of things can contribute to schedule pressure that triggers these traps. Often the trigger is external, such as late-breaking change orders or regulatory mandates. However, it can also arise internally through scope creep. As long as it appears that a project is on schedule (a supposition that’s likely to prove false in hindsight), it’s hard to resist additional feature requests or to rein in developers’ gold-plating urges.

Taylor & Ford integrate a number of these dynamics into a simple model of single-project tipping points. They generically characterize the “ripple effect” via a few parameters: one characterizes “the amount of impact that reworked portions of the project have on the total work required to complete the project” and another captures the effect of schedule pressure on generation of rework. They suggest a robust design approach that keeps projects out of trouble, by ensuring that the vicious cycles created by these loops do not become dominant.
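
To give a flavor of how these loops get formalized, here’s a minimal rework-cycle sketch in Python. It is not Taylor & Ford’s model; the parameters and functional forms are invented for illustration, but the structure is the generic one described above: work done imperfectly, errors discovered later and recycled as rework, and schedule pressure pushing the error rate up.

```python
# Minimal rework cycle: tasks flow from "to do" to "done", but a fraction are
# done wrong and later recycled as rework. Schedule pressure raises the error
# rate, closing the vicious loop. Parameters are illustrative, not calibrated.
def project(total_tasks=1000, capacity=20, base_error=0.1,
            pressure_effect=0.3, deadline=60):
    to_do, done, undiscovered_rework = float(total_tasks), 0.0, 0.0
    t = 0
    while to_do + undiscovered_rework > 1 and t < 5 * deadline:
        t += 1
        remaining = to_do + undiscovered_rework
        pressure = max(0.0, remaining / capacity - (deadline - t)) / deadline
        error_rate = min(0.9, base_error + pressure_effect * pressure)
        work = min(capacity, to_do)
        to_do -= work
        done += work * (1 - error_rate)
        undiscovered_rework += work * error_rate
        discovered = 0.2 * undiscovered_rework  # errors surface slowly...
        undiscovered_rework -= discovered
        to_do += discovered                     # ...and flow back into the pile
    return t, done

print(project())              # finishes well after the ideal 1000/20 = 50 periods
print(project(base_error=0))  # with perfect quality it finishes right at 50
```

Even this toy version reproduces the basic pattern: the late finish comes not from slow work but from the invisible stock of undiscovered rework cycling back through the pipeline.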

Because projects are complicated nests of feedback, it’s not surprising that we manage them poorly. Cognitive biases and learned heuristics can exacerbate the effect of vicious cycles arising from the structure of the work itself. For example,

… many organizations reward and promote engineers based on their ability to save troubled projects. Consider, for example, one senior manager’s reflection on how developers in his organizations were rewarded:

Occasionally there is a superstar of an engineer or a manager that can take one of these late changes and run through the gauntlet of all the possible ways that it could screw up and make it a success. And then we make a hero out of that person. And everybody else who wants to be a hero says “Oh, that is what is valued around here.” It is not valued to do the routine work months in advance and do the testing and eliminate all the problems before they become problems. …

… allowing managers to “save” troubled projects, and therefore receive accolades and benefits, creates a situation in which, for those interested in advancement, there is little incentive to execute a project properly from start to finish. While allowing such heroics may help in the short run, the long run health of the development system is better served by not rewarding them.

Repenning, Gonçalves & Black (2001) CMR

… much of the complexity of concurrent development—and the implementation failures that plague many organizations—arises from interactions between the technical and behavioral dimensions. We use a dynamic project model that explicitly represents these interactions to investigate how a ‘‘Liar’s Club’’—concealing known rework requirements from managers and colleagues—can aggravate the ‘‘90% syndrome,’’ a common form of schedule failure, and disproportionately degrade schedule performance and project quality.

Sterman & Ford (2003) Concurrent Engineering

Once caught in a downward spiral, managers must make some attribution of cause. The psychology literature also contains ample evidence suggesting that managers are more likely to attribute the cause of low performance to the attitudes and dispositions of people working within the process rather than to the structure of the process itself …. Thus, as performance begins to decline due to the downward spiral of fire fighting, managers are not only unlikely to learn to manage the system better, they are also likely to blame participants in the process. To make matters even worse, the system provides little evidence to discredit this hypothesis. Once fire fighting starts, system performance continues to decline even if the workload returns to its initial level. Further, managers will observe engineers spending a decreasing fraction of their time on up-front activities like concept development, providing powerful evidence confirming the managers’ mistaken belief that engineers are to blame for the declining performance.

Finally, having blamed the cause of low performance on those who work within the process, what actions do managers then take? Two are likely. First, managers may be tempted to increase their control over the process via additional surveillance, more detailed reporting requirements, and increasingly bureaucratic procedures. Second, managers may increase the demands on the development process in the hope of forcing the staff to be more efficient. The insidious feature of these actions is that each amounts to increasing resource utilization and makes the system more prone to the downward spiral. Thus, if managers incorrectly attribute the cause of low performance, the actions they take both confirm their faulty attribution and make the situation worse rather than better. The end result of this dynamic is a management team that becomes increasingly frustrated with an engineering staff that they perceive as lazy, undisciplined, and unwilling to follow a pre-specified development process, and an engineering staff that becomes increasingly frustrated with managers that they feel do not understand the realities of the system and, consequently, set unachievable objectives.

Repenning (2001) JPIM

There’s a long history of the use of SD models to solve these problems, or to resolve conflicts over attribution after the fact.

The secret to successful system dynamics modeling

Whenever a student wandered in late to a lecture at MIT, Jim Hines would announce, “… and that’s the secret to system dynamics.” If you were one of those perpetually late students who’s struggled without the secret ever since, I will now reveal it.

“The key to successful modeling is to keep one’s understanding of the model and what it says about the problem ahead of its size.” – Geoff Coyle, via JJ Lauble at the Vensim forum.

Maintaining understanding is really a matter of balancing model testing and critique against creation of new structure, which requires a little discipline in your modeling process. A few suggestions:

Dynamics of firefighting

SDM has a new post about failure modes in DoD procurement. One of the key dynamics is firefighting:

For example, McNew was working on a radar system attached to the belly of airplanes so they could track enemy ground movements for targeting by both ground and air fighters. “The contractor took used 707s,” McNew explains, “tore them down to the skin and stringers, determined their structural soundness, fixed what needed fixing, and then replaced the old systems and attached the new radar system.” But when the plane got to the last test station, some structural problems still had not been fixed, meaning the systems that had been installed had to be ripped out to fix the problems, and then the systems had to be reinstalled. In order to get that last airplane out the door on time, firefighting became the order of the day. “We had most of the people in the plant working on that one plane while other planes up the line were falling farther and farther behind schedule.”

Says McNew, putting on his systems thinking hat, “You think you’re going to get a one-to-one ratio of effort-to-result but you don’t. There’s no linear correlation. The project you’re firefighting isn’t helped as much as you think it will be, and the other project falls farther behind as it’s operating with fewer resources. In other words, you’ve doubled the dysfunction.”

This has been well-characterized by a bunch of modeling work at MIT’s SD group. It’s hard to find at the moment, because Sloan seems to have vandalized its own web site. Here’s a sampling of what I could lay my hands on:

From Laura Black & Nelson Repenning in the SDR, Why Firefighting Is Never Enough: Preserving High-Quality Product Development:

… we add to insights already developed in single-project models about insufficient resource allocation and the “firefighting” and last-minute rework that often result by asking why dysfunctional resource allocation persists from project to project. …. The main insight of the analysis is that under-allocating resources to the early phases of a given project in a multi-project environment can create a vicious cycle of increasing error rates, overworked engineers, and declining performance in all future projects. Policy analysis begins with those that were under consideration by the organization described in our data set. Those policies turn out to offer relatively low leverage in offsetting the problem. We then test a sequence of new policies, each designed to reveal a different feature of the system’s structure and conclude with a strategy that we believe can significantly offset the dysfunctional dynamics we discuss. ….

The key dynamic is what they term tilting – a positive feedback that arises from the interactions among early and late phase projects. When a late phase project is in trouble, allocating more resources to it is the natural response (put out the fire; part of the balancing late phase work completion loop). The perverse side effect is that, with finite resources, firefighting steals from early phase projects that are tomorrow’s late phase projects. That means that, down the road, those projects – starved for resources earlier in their life – will be in even more trouble, and steal more resources from the next generation of early phase projects. Thus the descent into permanent firefighting begins …

[Figure: the tilting feedback structure, from Black & Repenning]

The positive feedback of tilting creates a trap that can snare incautious organizations. In the presence of such traps, well-intentioned policies can turn vicious.

… testing plays a paradoxical role in multi-project development environments. On the one hand, it is absolutely necessary to preserve the integrity of the final product. On the other hand, in an environment where resources are scarce, allocating additional resources to testing or to addressing the problems that testing identifies leaves fewer resources available to do the up-front work that prevents problems in the first place in subsequent projects. Thus, while a decision to increase the amount of testing can yield higher quality in the short run, it can also ignite a cycle of greater-than-expected resource requirements for projects in downstream phases, fewer resources allocated to early upstream phases, and increasingly delayed discovery of major problems.

In related work with a similar model, Repenning characterizes the tilting dynamic with a phase plot that nicely illustrates the point:

[Figure: phase plot of concept-development execution modes, from Repenning]

To read the phase plot, start at any point on the horizontal axis, read up to the solid black line and then over to the vertical axis. So, for example, suppose that, in a given model year, the organization manages to accomplish about 60 percent of its planned concept development work, what happens next year? Reading up and over suggests that, if it accomplishes 60 percent of the up-front work this year, the dynamics of the system are such that about 70 percent of the up-front work will get done next year. Determining what happens in a subsequent model year requires simply returning to the horizontal axis and repeating; accomplishing 70 percent this year leads to almost 95 percent being accomplished in the year that follows. Continuing this mode of analysis shows that, if the system starts at any point to the right of the solid black circle in the center of the diagram, over time the concept development completion fraction will continue to increase until it reaches 100%. Here, the positive loop works as a virtuous cycle: Each year a little more up front work is done, decreasing errors and, thereby, reducing the need for resources in the downstream phase. …

In contrast, however, consider another example. Imagine this time that the organization starts to the left of the solid black dot and accomplishes only 40 percent of its planned concept development activities. Now, reading up and over, shows that instead of completing more early phase work in the next year, the organization completes less—in this case only about 25 percent. In subsequent years, the completion fraction declines further, creating a vicious cycle of declining attention to upfront activities and increasing error rates in design work. In this case, the system converges to a mode in which concept development work is ignored in favor of fixing problems in the downstream project.

The phase plot thus reveals two important features of the system. First, note from the discussion above that anytime the plot crosses the forty-five degree line … the execution mode in question will repeat itself. Formally, at these points the system is said to be in equilibrium. Practically, equilibria represent the possible “steady states” in the system, the execution modes that, once reached, are self-sustaining. As the plot highlights, this system has three equilibria (highlighted by the solid black circles), two at the corners and one in the center of the diagram.

Second, also note that the equilibria do not have identical characteristics. The equilibria at the two corners are stable, meaning that small excursions will be counteracted. If, for example, the system starts in the desired execution mode … and is slightly perturbed, perhaps pushing the completion fraction down to 60%, then, as the example above highlights, over time the system will return to the point from which it started  …. Similarly, if the system starts at f(s)=0 and receives an external shock, perhaps moving it to a completion fraction of 40%, then it will also eventually return to its starting point. The arrows on the plot line highlight the “direction” or trajectory of the system in disequilibrium situations. In contrast to those at the corners, the equilibrium at the center of the diagram is unstable (the arrows head “away” from it), meaning small excursions are not counteracted. Instead, once the system leaves this equilibrium, it does not return and instead heads toward one of the two corners. …

Formally, the unstable equilibrium represents the boundary between two basins of attraction. …. This boundary, or tipping point, plays a critical role in determining the system’s behavior because it is the point at which the positive loop changes direction. If the system starts in the desirable execution mode and then is perturbed, if the shock is large enough to push the system over the tipping point, it does not return to its initial equilibrium and desired execution mode. Instead, the system follows a new downward trajectory and eventually becomes trapped in the fire fighting equilibrium.
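
The cobweb logic in that passage is easy to reproduce. Here’s a minimal sketch in Python using a made-up S-shaped map from this year’s completion fraction to next year’s. The particular function is invented for illustration and is not Repenning’s model, but it has the same qualitative features: stable equilibria near the corners and an unstable tipping point in between.

```python
import math

# Iterate an S-shaped map f: this year's fraction of planned concept-development
# work completed -> next year's fraction. Illustrative only, not Repenning's model.
def f(x, steepness=8.0, tipping_point=0.5):
    # Logistic curve that crosses the 45-degree line near 0, at the tipping
    # point, and near 1: two stable corners and one unstable middle equilibrium.
    return 1.0 / (1.0 + math.exp(-steepness * (x - tipping_point)))

def trajectory(x0, years=10):
    xs = [x0]
    for _ in range(years):
        xs.append(f(xs[-1]))
    return xs

# Start just right of the tipping point and completion climbs toward 100%...
print([round(x, 2) for x in trajectory(0.6)])
# ...start just left of it and the system slides into the firefighting trap.
print([round(x, 2) for x in trajectory(0.4)])
```

Changing the steepness or the tipping point moves the basins around, but the qualitative story of two self-sustaining execution modes separated by an unstable boundary is exactly the one the phase plot describes.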

You’ll have to read the papers to get the interesting prescriptions for improvement, plus some additional dynamics of manager perceptions that accentuate the trap.

Stay tuned for a part II on this topic.