Bifurcations from Strogatz’ Nonlinear Dynamics and Chaos

The following models are replicated from Steven Strogatz’ excellent text, Nonlinear Dynamics and Chaos.

These are just a few of the many models in the text. They illustrate bifurcations in one-dimensional systems (saddle node, transcritical, pitchfork) and one two-dimensional system (Hopf). The pitchfork bifurcation is closely related to the cusp catastrophe in the climate model recently posted.

[Figure: spiral from a point near the unstable fixed point at the origin to a stable limit cycle after a Hopf bifurcation (mu = 0.075, r0 = 0.025)]

These are in support of an upcoming post on bifurcations and tipping points, so I won’t say more at the moment. I encourage you to read the book. If you replicate more of the models in it, I’d love to have copies here.

These are systems in normal form and therefore dimensionless and lacking in physical interpretation, though they certainly crop up in many real-world systems.

3-1 saddle node bifurcation.mdl

3-2 transcritical bifurcation.mdl

3-4 pitchfork bifurcation.mdl

8.2 Hopf bifurcation.mdl
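If you don’t have Vensim handy, here’s a rough Python sketch of the same normal forms (my quick translation, not the models above). The Hopf case reproduces the spiral in the figure: starting near the unstable origin with mu = 0.075 and r0 = 0.025, the trajectory winds out to the stable limit cycle at r = sqrt(mu).

```python
# Rough Python sketch of the Strogatz normal forms (not the Vensim models above).
import numpy as np

def saddle_node(x, r):    # dx/dt = r + x^2
    return r + x**2

def transcritical(x, r):  # dx/dt = r*x - x^2
    return r * x - x**2

def pitchfork(x, r):      # dx/dt = r*x - x^3 (supercritical)
    return r * x - x**3

def hopf(state, mu, omega=1.0):
    # Supercritical Hopf normal form in polar coordinates, with the angular
    # dynamics simplified to a constant rate: dr/dt = mu*r - r^3, dtheta/dt = omega
    r, theta = state
    return np.array([mu * r - r**3, omega])

def euler(f, x0, dt=0.01, t_end=200.0, **params):
    """Fixed-step Euler integration of dx/dt = f(x, **params)."""
    x = np.atleast_1d(np.array(x0, dtype=float))
    traj = [x.copy()]
    for _ in range(int(t_end / dt)):
        x = x + dt * np.atleast_1d(f(x, **params))
        traj.append(x.copy())
    return np.array(traj)

if __name__ == "__main__":
    # Spiral from near the unstable fixed point at the origin out to the
    # stable limit cycle at r = sqrt(mu), after the Hopf bifurcation.
    mu, r0 = 0.075, 0.025
    traj = euler(hopf, [r0, 0.0], mu=mu)
    print("final radius:", traj[-1, 0], "vs sqrt(mu):", np.sqrt(mu))
```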

Update: A related generic model illustrating critical slowing down:

critical slowing.mdl
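A minimal Python illustration of the idea (my own toy saddle-node example, not the model above): as the parameter approaches the bifurcation, the rate of recovery from a small perturbation goes to zero, so recovery times stretch out.

```python
# Toy illustration of critical slowing down near a saddle-node bifurcation
# (my own sketch, not the critical slowing .mdl above).
# For dx/dt = r + x^2 with r < 0, the stable equilibrium is x* = -sqrt(-r) and the
# linearized recovery rate is |f'(x*)| = 2*sqrt(-r), which goes to zero as r -> 0.
import math

def recovery_time(r, perturbation=0.01, dt=0.001, tol=1e-3):
    """Time for a small perturbation to decay back to within tol of equilibrium."""
    x_star = -math.sqrt(-r)
    x, t = x_star + perturbation, 0.0
    while abs(x - x_star) > tol * perturbation:
        x += dt * (r + x**2)   # Euler step of the saddle-node normal form
        t += dt
    return t

if __name__ == "__main__":
    for r in (-0.5, -0.1, -0.01, -0.001):
        linear = math.log(1.0 / 1e-3) / (2.0 * math.sqrt(-r))
        print(f"r = {r:8.3f}: recovery time ≈ {recovery_time(r):7.1f} "
              f"(linearized estimate {linear:7.1f})")
```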

Tipping points

The concept of tipping points is powerful, but sometimes a bit muddled. Things that get described as tipping points often sound to me like mere dramatic events or nonlinear effects, simple thermodynamic irreversibilities, or exponential signals emerging unexpectedly from noise. These may play a role in tipping points, and lead to surprises, but I don’t think they capture the essence of the idea. You can see examples (good and bad) if you sift through the images describing tipping points on Google.

I think of tipping points as a feedback phenomenon: positive feedback that amplifies a disturbance, such that change takes off, even if the disturbance is removed. The key outcome is a system that is stable or resistant to disturbances up to a point, beyond which surprising things may happen.

A simple example is sitting in a chair. The system has two stable equilibria: sitting upright, and lying flat on your back (tipped over). There’s also an unstable equilibrium – the precarious moment when you’re balanced on the back legs of the chair and gravity exerts no net torque. As long as you lean just a little bit, gravity is a restoring force – it will pull you back to the desirable upright equilibrium if you pick up your feet. Lean a bit further, past the unstable tipping point, and gravity begins to pull you over backwards. Gravity gains leverage the further you lean – a positive feedback. Waving your arms and legs won’t help much; you’re going to be flat on your back.

A more general explanation is given in catastrophe theory. The interesting twist is that a seemingly stable system may acquire tipping points unexpectedly as its parameters drift into regimes that create new stable and unstable points, leading to surprises. Even without structural change to the system, its behavior mode can change unexpectedly as the state moves from locally stable to locally unstable territory, due to shifting loop dominance arising from nonlinearities. (Think of the financial crisis and some kinds of aircraft accidents, for example.)
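To make the parameter-drift idea concrete, here’s a minimal sketch using the generic cusp form dx/dt = h + r*x - x^3 (an illustration with made-up numbers, not any particular model): as the forcing h drifts slowly, the equilibrium the state has been tracking is annihilated in a fold, and the state jumps to the other branch.

```python
# Toy cusp-type system dx/dt = h + r*x - x^3 (a generic illustration, not any
# specific model). Drifting the forcing h slowly makes the lower stable
# equilibrium vanish in a fold, and the state jumps to the upper branch.
import numpy as np

def simulate_drift(r=1.0, h_start=-1.0, h_end=1.0, drift_rate=1e-3, dt=0.01):
    h, x = h_start, -1.2                  # start near the lower stable branch
    hs, xs = [], []
    while h < h_end:
        x += dt * (h + r * x - x**3)      # fast state dynamics
        h += dt * drift_rate              # slow parameter drift
        hs.append(h)
        xs.append(x)
    return np.array(hs), np.array(xs)

if __name__ == "__main__":
    h, x = simulate_drift()
    jump = np.argmax(np.abs(np.diff(x)))  # largest single-step change in the state
    # For r = 1 the fold sits at h = 2/(3*sqrt(3)) ≈ 0.385.
    print(f"state jumps branches near h ≈ {h[jump]:.3f} (fold predicted at ≈ 0.385)")
```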

Anyone know some nice, simple tipping point models? I think I’ll have to mine my archives for some concrete examples…

Firefighting and other project dynamics

The tipping loop, a positive feedback that drives sequential or concurrent projects into permanent firefighting mode, is actually just one of a number of positive feedbacks that create project management traps. Here are some others:

  • Rework – the rework cycle is central to project dynamics. Rework arises when things aren’t done right the first time. When errors are discovered, tasks have to be reworked, and there’s no guarantee that they’ll be done right the second time either. This creates a reinforcing loop that bloats project tasks beyond what’s expected with perfect execution. (A minimal sketch of this loop follows the list.)
  • Brooks’ Law – adding resources to a late project makes it later. There are actually several feedback loops involved:
    • Rookie effects: new resources take time to get up to speed. Until they do, they eat up the time of senior staff, decreasing output. Also, they’re likely to be more error prone, creating more rework to be dealt with downstream.
    • Diseconomies of scale from communication overhead.
  • Burnout – under schedule pressure, it’s tempting to work harder and longer. That works briefly, but sustained overtime is likely to be counterproductive, due to decreased productivity, increased turnover, and higher error rates.
  • Congestion – in construction or assembly, a delay in early phases may not delay the arrival of materials from suppliers. Unused materials stack up, congesting the work site and slowing progress further.
  • Dilution – trying to overcome stalled phases by tackling too many tasks in parallel thins resources to the point that overhead consumes all available time, and progress grinds to a halt.
  • Hopelessness – death marches are no fun, and the mere fact that a project is going astray hurts morale, leading to decreased productivity and loss of resources as rats leave the sinking ship.
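Here’s the minimal rework-cycle sketch promised above (a generic illustration with made-up parameters, not any published project model): tasks are either done correctly or flow into an undiscovered-rework pool that later returns to the to-do stock, inflating total effort and stretching the schedule.

```python
# Minimal rework-cycle sketch (a generic illustration, not any published project model).
# Tasks are either done correctly or flow into an undiscovered-rework pool that
# later returns to the to-do stock, inflating total effort.
def simulate_project(total_tasks=1000.0, workrate=20.0, error_fraction=0.2,
                     discovery_time=10.0, dt=0.25, t_max=400.0):
    to_do, undiscovered, done, t = total_tasks, 0.0, 0.0, 0.0
    while t < t_max and (to_do + undiscovered) > 0.5:
        completion = min(workrate, to_do / dt)       # tasks worked this period
        flawed = completion * error_fraction         # done wrong, not yet discovered
        discovery = undiscovered / discovery_time    # errors surface -> rework
        to_do += dt * (discovery - completion)
        undiscovered += dt * (flawed - discovery)
        done += dt * (completion - flawed)           # done right, this pass
        t += dt
    return t, done, to_do + undiscovered

if __name__ == "__main__":
    t, done, open_work = simulate_project()
    # With a 20% error fraction, total effort is roughly 1000/(1 - 0.2) = 1250 gross
    # task completions, so the project runs well past the naive 1000/20 = 50 periods.
    print(f"~{done:.0f} tasks done after {t:.0f} periods ({open_work:.1f} still open)")
```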

Any number of things can contribute to schedule pressure that triggers these traps. Often the trigger is external, such as late-breaking change orders or regulatory mandates. However, it can also arise internally through scope creep. As long as it appears that a project is on schedule (a supposition that’s likely to prove false in hindsight), it’s hard to resist additional feature requests and to suppress developers’ gold-plating urges.

Taylor & Ford integrate a number of these dynamics into a simple model of single-project tipping points. They generically characterize the “ripple effect” via a few parameters: one characterizes “the amount of impact that reworked portions of the project have on the total work required to complete the project” and another captures the effect of schedule pressure on generation of rework. They suggest a robust design approach that keeps projects out of trouble, by ensuring that the vicious cycles created by these loops do not become dominant.
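For a feel of how those two parameters interact, here’s a toy extension of the sketch above (my own illustration, not the Taylor & Ford model): schedule pressure raises the error fraction, and a hypothetical “ripple” factor inflates the work generated by each discovered error. Up to a point, a tight deadline just means finishing late; beyond it, rework outpaces completion and the project tips into permanent firefighting.

```python
# Toy tipping extension of the rework sketch above (my illustration, not the
# Taylor & Ford model): schedule pressure raises the error fraction, and a
# "ripple" factor inflates the work generated by each discovered error.
def project_duration(deadline, base_error=0.15, pressure_sensitivity=0.5,
                     ripple=1.3, workrate=20.0, total_tasks=1000.0,
                     discovery_time=10.0, dt=0.25, t_max=2000.0):
    to_do, undiscovered, t = total_tasks, 0.0, 0.0
    while t < t_max and (to_do + undiscovered) > 0.5:
        remaining_time = max(deadline - t, dt)
        required_rate = (to_do + undiscovered) / remaining_time
        pressure = max(required_rate / workrate - 1.0, 0.0)  # > 0 when behind schedule
        error_fraction = min(base_error * (1.0 + pressure_sensitivity * pressure), 0.9)
        completion = min(workrate, to_do / dt)
        discovery = undiscovered / discovery_time
        to_do += dt * (ripple * discovery - completion)      # errors ripple into extra work
        undiscovered += dt * (completion * error_fraction - discovery)
        t += dt
    return t, to_do + undiscovered

if __name__ == "__main__":
    for deadline in (90, 70, 55):
        t, open_work = project_duration(deadline)
        if open_work <= 0.5:
            print(f"deadline {deadline}: finished (late) at t ≈ {t:.0f}")
        else:
            print(f"deadline {deadline}: tipped into firefighting, "
                  f"{open_work:.0f} tasks still open at t = {t:.0f}")
```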

Because projects are complicated nests of feedback, it’s not surprising that we manage them poorly. Cognitive biases and learned heuristics can exacerbate the effect of vicious cycles arising from the structure of the work itself. For example,

… many organizations reward and promote engineers based on their ability to save troubled projects. Consider, for example, one senior manager’s reflection on how developers in his organizations were rewarded:

Occasionally there is a superstar of an engineer or a manager that can take one of these late changes and run through the gauntlet of all the possible ways that it could screw up and make it a success. And then we make a hero out of that person. And everybody else who wants to be a hero says “Oh, that is what is valued around here.” It is not valued to do the routine work months in advance and do the testing and eliminate all the problems before they become problems. …

… allowing managers to “save” troubled projects, and therefore receive accolades and benefits, creates a situation in which, for those interested in advancement, there is little incentive to execute a project properly from start to finish. While allowing such heroics may help in the short run, the long run health of the development system is better served by not rewarding them.

Repenning, Gonçalves & Black (2001) CMR

… much of the complexity of concurrent development—and the implementation failures that plague many organizations—arises from interactions between the technical and behavioral dimensions. We use a dynamic project model that explicitly represents these interactions to investigate how a “Liar’s Club”—concealing known rework requirements from managers and colleagues—can aggravate the “90% syndrome,” a common form of schedule failure, and disproportionately degrade schedule performance and project quality.

Ford & Sterman (2003) Concurrent Engineering

Once caught in a downward spiral, managers must make some attribution of cause. The psychology literature also contains ample evidence suggesting that managers are more likely to attribute the cause of low performance to the attitudes and dispositions of people working within the process rather than to the structure of the process itself …. Thus, as performance begins to decline due to the downward spiral of fire fighting, managers are not only unlikely to learn to manage the system better, they are also likely to blame participants in the process. To make matters even worse, the system provides little evidence to discredit this hypothesis. Once fire fighting starts, system performance continues to decline even if the workload returns to its initial level. Further, managers will observe engineers spending a decreasing fraction of their time on up-front activities like concept development, providing powerful evidence confirming the managers’ mistaken belief that engineers are to blame for the declining performance.

Finally, having blamed the cause of low performance on those who work within the process, what actions do managers then take? Two are likely. First, managers may be tempted to increase their control over the process via additional surveillance, more detailed reporting requirements, and increasingly bureaucratic procedures. Second, managers may increase the demands on the development process in the hope of forcing the staff to be more efficient. The insidious feature of these actions is that each amounts to increasing resource utilization and makes the system more prone to the downward spiral. Thus, if managers incorrectly attribute the cause of low performance, the actions they take both confirm their faulty attribution and make the situation worse rather than better. The end result of this dynamic is a management team that becomes increasingly frustrated with an engineering staff that they perceive as lazy, undisciplined, and unwilling to follow a pre-specified development process, and an engineering staff that becomes increasingly frustrated with managers that they feel do not understand the realities of the system and, consequently, set unachievable objectives.

Repenning (2001) JPIM

There’s a long history of the use of SD models to solve these problems, or to resolve conflicts over attribution after the fact.

Dynamics of firefighting

SDM has a new post about failure modes in DoD procurement. One of the key dynamics is firefighting:

For example, McNew was working on a radar system attached to the belly of airplanes so they could track enemy ground movements for targeting by both ground and air fighters. “The contractor took used 707s,” McNew explains, “tore them down to the skin and stringers, determined their structural soundness, fixed what needed fixing, and then replaced the old systems and attached the new radar system.” But when the plane got to the last test station, some structural problems still had not been fixed, meaning the systems that had been installed had to be ripped out to fix the problems, and then the systems had to be reinstalled. In order to get that last airplane out the door on time, firefighting became the order of the day. “We had most of the people in the plant working on that one plane while other planes up the line were falling farther and farther behind schedule.”

Says McNew, putting on his systems thinking hat, “You think you’re going to get a one-to-one ratio of effort-to-result but you don’t. There’s no linear correlation. The project you’re firefighting isn’t helped as much as you think it will be, and the other project falls farther behind as it’s operating with fewer resources. In other words, you’ve doubled the dysfunction.”

This has been well-characterized by a bunch of modeling work at MIT’s SD group. It’s hard to find at the moment, because Sloan seems to have vandalized its own web site. Here’s a sampling of what I could lay my hands on:

From Laura Black & Nelson Repenning in the SDR, Why Firefighting Is Never Enough: Preserving High-Quality Product Development:

… we add to insights already developed in single-project models about insufficient resource allocation and the “firefighting” and last-minute rework that often result by asking why dysfunctional resource allocation persists from project to project. …. The main insight of the analysis is that under-allocating resources to the early phases of a given project in a multi-project environment can create a vicious cycle of increasing error rates, overworked engineers, and declining performance in all future projects. Policy analysis begins with those that were under consideration by the organization described in our data set. Those policies turn out to offer relatively low leverage in offsetting the problem. We then test a sequence of new policies, each designed to reveal a different feature of the system’s structure and conclude with a strategy that we believe can significantly offset the dysfunctional dynamics we discuss. ….

The key dynamic is what they term tilting – a positive feedback that arises from the interactions among early and late phase projects. When a late phase project is in trouble, allocating more resources to it is the natural response (put out the fire; part of the balancing late phase work completion loop). The perverse side effect is that, with finite resources, firefighting steals from early phase projects that are tomorrow’s late phase projects. That means that, down the road, those projects – starved for resources earlier in their life – will be in even more trouble, and steal more resources from the next generation of early phase projects. Thus the descent into permanent firefighting begins …
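Here’s a toy rendering of the tilting loop with made-up parameters (my sketch, not the Black & Repenning model): late-phase firefighting has first claim on a fixed resource pool, the early phase gets what’s left, and skipped early-phase work inflates next year’s late-phase needs.

```python
# Toy tilting loop (made-up parameters, not the Black & Repenning model).
# Late-phase firefighting has first claim on a fixed resource pool; the early
# phase gets the remainder, and skipped early-phase work inflates next year's
# late-phase resource needs.
def next_early_fraction(f, resources=100.0, early_need=50.0,
                        base_late_need=35.0, problem_sensitivity=80.0):
    """Map this year's early-phase completion fraction f to next year's."""
    late_need = base_late_need + problem_sensitivity * (1.0 - f)
    late_allocated = min(late_need, resources)     # firefighting comes first
    early_allocated = resources - late_allocated   # early phase gets what's left
    return min(early_allocated / early_need, 1.0)

if __name__ == "__main__":
    for start in (0.6, 0.4):
        f, path = start, [start]
        for _ in range(6):
            f = next_early_fraction(f)
            path.append(f)
        print(" -> ".join(f"{x:.2f}" for x in path))
```

With these (arbitrary) parameters the tipping point sits at an early-phase completion fraction of 0.5: start above it and the system climbs back to doing all of its up-front work; start below it and it collapses into permanent firefighting.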

[Figure: the tilting loop structure, from Black & Repenning]

The positive feedback of tilting creates a trap that can snare incautious organizations. In the presence of such traps, well-intentioned policies can turn vicious.

… testing plays a paradoxical role in multi-project development environments. On the one hand, it is absolutely necessary to preserve the integrity of the final product. On the other hand, in an environment where resources are scarce, allocating additional resources to testing or to addressing the problems that testing identifies leaves fewer resources available to do the up-front work that prevents problems in the first place in subsequent projects. Thus, while a decision to increase the amount of testing can yield higher quality in the short run, it can also ignite a cycle of greater-than-expected resource requirements for projects in downstream phases, fewer resources allocated to early upstream phases, and increasingly delayed discovery of major problems.

In related work with a similar model, Repenning characterizes the tilting dynamic with a phase plot that nicely illustrates the point:

[Figure: execution modes phase plot, from Repenning]

To read the phase plot, start at any point on the horizontal axis, read up to the solid black line and then over to the vertical axis. So, for example, suppose that, in a given model year, the organization manages to accomplish about 60 percent of its planned concept development work, what happens next year? Reading up and over suggests that, if it accomplishes 60 percent of the up-front work this year, the dynamics of the system are such that about 70 percent of the up-front work will get done next year. Determining what happens in a subsequent model year requires simply returning to the horizontal axis and repeating; accomplishing 70 percent this year leads to almost 95 percent being accomplished in the year that follows. Continuing this mode of analysis shows that, if the system starts at any point to the right of the solid black circle in the center of the diagram, over time the concept development completion fraction will continue to increase until it reaches 100%. Here, the positive loop works as a virtuous cycle: Each year a little more up front work is done, decreasing errors and, thereby, reducing the need for resources in the downstream phase. …

In contrast, however, consider another example. Imagine this time that the organization starts to the left of the solid black dot and accomplishes only 40 percent of its planned concept development activities. Now, reading up and over, shows that instead of completing more early phase work in the next year, the organization completes less—in this case only about 25 percent. In subsequent years, the completion fraction declines further, creating a vicious cycle of declining attention to upfront activities and increasing error rates in design work. In this case, the system converges to a mode in which concept development work is ignored in favor of fixing problems in the downstream project.

The phase plot thus reveals two important features of the system. First, note from the discussion above that anytime the plot crosses the forty-five degree line … the execution mode in question will repeat itself. Formally, at these points the system is said to be in equilibrium. Practically, equilibria represent the possible “steady states” in the system, the execution modes that, once reached, are self-sustaining. As the plot highlights, this system has three equilibria (highlighted by the solid black circles), two at the corners and one in the center of the diagram.

Second, also note that the equilibria do not have identical characteristics. The equilibria at the two corners are stable, meaning that small excursions will be counteracted. If, for example, the system starts in the desired execution mode … and is slightly perturbed, perhaps pushing the completion fraction down to 60%, then, as the example above highlights, over time the system will return to the point from which it started  …. Similarly, if the system starts at f(s)=0 and receives an external shock, perhaps moving it to a completion fraction of 40%, then it will also eventually return to its starting point. The arrows on the plot line highlight the “direction” or trajectory of the system in disequilibrium situations. In contrast to those at the corners, the equilibrium at the center of the diagram is unstable (the arrows head “away” from it), meaning small excursions are not counteracted. Instead, once the system leaves this equilibrium, it does not return and instead heads toward one of the two corners. …

Formally, the unstable equilibrium represents the boundary between two basins of attraction. …. This boundary, or tipping point, plays a critical role in determining the system’s behavior because it is the point at which the positive loop changes direction. If the system starts in the desirable execution mode and then is perturbed, if the shock is large enough to push the system over the tipping point, it does not return to its initial equilibrium and desired execution mode. Instead, the system follows a new downward trajectory and eventually becomes trapped in the fire fighting equilibrium.
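The same reading can be done numerically. Here’s a minimal version applied to the toy allocation map from the tilting sketch above (a stand-in for, not a reproduction of, Repenning’s f(s)): scan for crossings of the 45-degree line and classify each equilibrium by its local slope.

```python
# Locate and classify the equilibria of the toy tilting map (a stand-in for the
# phase plot's f(s), not Repenning's actual model).
def f(s, resources=100.0, early_need=50.0, base_late_need=35.0, sensitivity=80.0):
    late = min(base_late_need + sensitivity * (1.0 - s), resources)
    return min((resources - late) / early_need, 1.0)

def equilibria(step=0.001, eps=1e-4):
    """Scan for crossings of the 45-degree line; stable where |f'| < 1."""
    found = []
    for i in range(int(round(1.0 / step)) + 1):
        s = i * step
        if abs(f(s) - s) < 0.75 * step:    # (approximately) on the diagonal
            slope = (f(min(s + eps, 1.0)) - f(max(s - eps, 0.0))) / (2.0 * eps)
            if not found or s - found[-1][0] > 5 * step:   # skip duplicate hits
                found.append((s, abs(slope) < 1.0))
    return found

if __name__ == "__main__":
    for s, stable in equilibria():
        kind = "stable" if stable else "unstable (the tipping point)"
        print(f"equilibrium at s ≈ {s:.2f}: {kind}")
```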

You’ll have to read the papers to get the interesting prescriptions for improvement, plus some additional dynamics of manager perceptions that accentuate the trap.

Stay tuned for a part II on this topic.

Hansen on The Deal

Jim Hansen kicked off the Tällberg panel with a succinct summary of the argument for a 350ppm target in Hansen et al. (a short version is here). As I heard it,

  • The dangerous level of GHGs in the atmosphere is lower than we thought.
  • 3C climate sensitivity from fast feedbacks is confirmed; the risk is slow feedbacks, which are not as slow as we thought.
  • There is enough warming in the pipeline to lose Arctic ice, glaciers, and reefs.
  • Good news: we need to go back to the stable Holocene climate.
  • The problem is solvable because conventional oil and gas are limited; we just need the will to not burn coal, oil shale, etc., except with CCS.
  • Among other things, that requires a price on carbon, for which a tax is the preferred mechanism.
  • The only loser is the fossil fuel industry; we simply need to bring them to heel.

Hansen was a little impatient with our bit of the forum, and argued that our focus on regions (and the challenges in reaching a regional accord) was too pessimistic. Instead, a focus on fuels (e.g., phasing out coal) provides clarity of purpose.

My counterargument, which I only partially articulated during the session, for fear of driving the conversation off on a tangent, is as follows:

As a technical solution, phasing out coal and letting peak oil run its course probably works. However, phasing out coal by 2030 implies a time constant of seven years or a rate of decline in coal utilization of about 10%/year (by the 3-tau rule of thumb). Coal-fired power plants have a long lifetime, so the natural rate of decline, assuming no new coal investment, is more like 2.5% or 3%/year. Phasing out coal at 10% per year implies not only halting construction, but also abandoning many plants before their natural economic lifetime is up. Age structure complicates things a bit, perhaps making it easier in the US (where plants are disproportionately old) and harder in China (where they’re new). Closing plants ahead of schedule is going to make the fossil fuel interests that Hansen proposes to control rather vocally upset. Also, eliminating coal emissions that fast requires some combination of efficiency, noncarbon energy sources, and CCS deployed above natural rates of capital turnover, plus lifestyle change to pick up the slack. That in itself is a significant challenge.
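For a rough sense of the gap, here’s a back-of-the-envelope comparison (my illustrative arithmetic, assuming simple exponential decline and about 21 years to 2030):

```python
# Back-of-the-envelope turnover comparison (illustrative, not a projection):
# exponential decline with the tau = 7 yr time constant a 2030 phase-out implies,
# versus the ~2.5-3%/yr natural retirement rate from capital turnover alone.
import math

horizon = 21.0   # years to 2030, consistent with the three-tau arithmetic above
cases = [("phase-out, tau = 7 yr", 1.0 / 7.0),
         ("natural turnover, 2.5%/yr", 0.025),
         ("natural turnover, 3%/yr", 0.03)]
for label, rate in cases:
    remaining = math.exp(-rate * horizon)
    print(f"{label:26s}: {remaining:5.1%} of today's coal capacity left "
          f"after {horizon:.0f} years")
```

The difference between the roughly 5% of capacity left under a tau = 7 year phase-out and the 55-60% left under natural turnover alone is capacity that would have to be retired early, retrofitted with CCS, or displaced by something else.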

That would be doable for a coalition with enough political power to either overpower or buy off the owners of stranded assets. But that coalition doesn’t now exist, and therein lies the reason that this is a political problem more than a technical one.