Coupled Catastrophes

I ran across this cool article on network dynamics, and thought the model would be an interesting application for Ventity:

Coupled catastrophes: sudden shifts cascade and hop among interdependent systems

Charles D. Brummitt, George Barnett and Raissa M. D’Souza

Abstract

An important challenge in several disciplines is to understand how sudden changes can propagate among coupled systems. Examples include the synchronization of business cycles, population collapse in patchy ecosystems, markets shifting to a new technology platform, collapses in prices and in confidence in financial markets, and protests erupting in multiple countries. A number of mathematical models of these phenomena have multiple equilibria separated by saddle-node bifurcations. We study this behaviour in its normal form as fast–slow ordinary differential equations. In our model, a system consists of multiple subsystems, such as countries in the global economy or patches of an ecosystem. Each subsystem is described by a scalar quantity, such as economic output or population, that undergoes sudden changes via saddle-node bifurcations. The subsystems are coupled via their scalar quantity (e.g. trade couples economic output; diffusion couples populations); that coupling moves the locations of their bifurcations. The model demonstrates two ways in which sudden changes can propagate: they can cascade (one causing the next), or they can hop over subsystems. The latter is absent from classic models of cascades. For an application, we study the Arab Spring protests. After connecting the model to sociological theories that have bistability, we use socioeconomic data to estimate relative proximities to tipping points and Facebook data to estimate couplings among countries. We find that although protests tend to spread locally, they also seem to ‘hop’ over countries, like in the stylized model; this result highlights a new class of temporal motifs in longitudinal network datasets.

Ventity is a natural fit here because the system is a network of coupled states, and Ventity makes it easy to represent a wide variety of network architectures. The model needs just two entity types: “Nodes” and “Couplings.”

The Node entity type contains a single state (X) with local feedback, a remote influence from the Coupling entity, and a few global parameters referenced from the Model entity:

A Coupling is simply a reference from one Node to another, with a strength parameter:

If you don’t create any Couplings, the Nodes run standalone, as in Section 2.1 of the paper. You can use that to see how the bistable dynamics of X create a tipping point, by running a set of nodes with different initial conditions:

By increasing the global Model.const a, you can induce a bifurcation that destabilizes the lower branch of the system, so that all trajectories tend to increase:
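If you want to play with the tipping point outside of Ventity, here’s a minimal Python sketch. I’m assuming the cubic saddle-node normal form dx/dt = a + x – x³, which matches the behavior described above (stable branches near ±1, lower branch vanishing as a rises); check the paper or the model download below for the exact formulation used there.

```python
# Minimal sketch of a single standalone Node (not the Ventity model).
# Assumed dynamics: dx/dt = a + x - x^3, a cubic saddle-node normal form.

def simulate(x0, a=0.0, dt=0.01, t_end=20.0):
    """Euler-integrate dx/dt = a + x - x^3 from initial state x0."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (a + x - x**3)
    return x

# Standalone nodes with different initial conditions: those starting below
# the unstable middle equilibrium settle near -1, the rest near +1.
for x0 in (-1.5, -0.5, -0.1, 0.1, 0.5, 1.5):
    print(f"a=0.0  x0={x0:+.1f} -> {simulate(x0, a=0.0):+.2f}")

# Raising a past ~2/(3*sqrt(3)) ~= 0.385 eliminates the lower branch,
# so all trajectories end up on the upper branch.
for x0 in (-1.5, -0.5, 0.5, 1.5):
    print(f"a=0.5  x0={x0:+.1f} -> {simulate(x0, a=0.5):+.2f}")
```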

Section 2.2 of the paper illustrates a master-slave system, with two Nodes and a single Coupling by which the master Node influences the slave. I actually set this up with a single master driving multiple slaves, each with a different initial X. Increasing the master’s initial X then spills over to shift the stability of Slave 4’s initial state:

In Section 2.3, things get really interesting, with cascade hopping. In this scenario, there are three coupled Nodes, X -> Y -> Z. X (blue) is disturbed exogenously by changing its local const a parameter at time 8, causing it to transition from a stable value near 1 to about -1.2. This in turn induces a slight shift in Y’s state (red), but due to weak coupling that’s not enough to destabilize Y. However, the small shift in Y is enough to nudge Z out of its state, causing a sudden transition to about -1.2 around time 18.
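Here’s a rough Python sketch of that three-node experiment, using the same assumed normal form with the upstream state added to the downstream node’s a. The parameters are hand-tuned to reproduce the hop qualitatively; they are not the values from the paper or the Ventity model, so levels and timing differ somewhat.

```python
# Sketch of the Section 2.3 "hopping" experiment (again, not the Ventity
# model). Assumed dynamics: dx/dt = a + x - x^3, with one-way coupling that
# adds c * (upstream state) to the downstream node's a.
dt, t_end = 0.01, 40.0
steps = int(t_end / dt)

x, y, z = 1.0, 1.05, 0.64       # each node starts near its upper branch
a_y, a_z = 0.0, -0.9            # Z sits much closer to its fold than Y
c_xy, c_yz = 0.1, 0.5           # X->Y coupling weak, Y->Z moderate

for k in range(steps):
    t = k * dt
    a_x = 0.0 if t < 8.0 else -0.6          # exogenous disturbance to X at t=8
    dx = a_x + x - x**3
    dy = a_y + c_xy * x + y - y**3
    dz = a_z + c_yz * y + z - z**3
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    if k % int(5.0 / dt) == 0:
        print(f"t={t:4.1f}  X={x:+.2f}  Y={y:+.2f}  Z={z:+.2f}")

# X collapses to about -1.2; Y only dips slightly (weak coupling); Z, parked
# close to its own tipping point, follows with a delayed collapse -- the
# cascade "hops" over Y.
```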

Consider what this would do to any simple correlation-based thinking, or a regression model. X has clearly caused a catastrophic change in Z, but without much of an obvious change in Y. In the presence of noise, it would be easy to conclude that this was all a coincidence. (If you harbor any doubt about the causality, just set Node X’s const a chg to zero and see what happens.)

I encourage you to take a look at the original paper – it has some nice phase diagrams and goes on to consider some interesting applications. I think the same structure could be used to implement another interesting network dynamics paper: State-dependent effective interactions in oscillator networks through coupling functions with dead zones. And if you like the topic, Network Catastrophe: Self-Organized Patterns Reveal both the Instability and the Structure of Complex Networks has more interesting data-centric applications.

An interesting extension of this model would be to generalize to larger networks, by modifying the input data or using actions to generate random networks.
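As a sketch of what that extension might look like (outside Ventity, with the same assumed node equation), here’s a random-network version in Python. The node count, sparsity, and coupling strengths are arbitrary choices for experimentation, not anything from the paper.

```python
# Random-network extension sketch: N nodes with the assumed dynamics
# dx_i/dt = a_i + sum_j C[i,j]*x[j] + x_i - x_i^3 on a sparse, directed,
# randomly generated coupling matrix. Whether the disturbance cascades,
# hops, or fizzles depends on the particular draw.
import numpy as np

rng = np.random.default_rng(0)
N = 20
C = (rng.random((N, N)) < 0.1) * rng.uniform(0.05, 0.2, size=(N, N))
np.fill_diagonal(C, 0.0)                      # no self-coupling

a = rng.uniform(-0.3, 0.0, size=N)            # heterogeneous distance to the fold
x = rng.uniform(0.8, 1.2, size=N)             # start everyone on the upper branch

dt, t_end = 0.01, 40.0
for k in range(int(t_end / dt)):
    if k == int(8.0 / dt):                    # disturb node 0 at t = 8
        a[0] -= 1.0
    x = x + dt * (a + C @ x + x - x**3)

print((x < 0).sum(), "of", N, "nodes ended on the lower branch")
```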

The model: SaddleNodeNetwork4.zip

Stress, Burnout & Biology

In my last post, stress takes center stage as both a driver and an outcome of the cortisol-cytokine-serotonin system. But stress can arise endogenously in another way as well, from the interplay of personal goals and work performance. Jack Homer’s burnout model is a system dynamics classic that everyone should explore:

Worker burnout: A dynamic model with implications for prevention and control

Jack B. Homer

This paper explores the dynamics of worker burnout, a process in which a hard‐working individual becomes increasingly exhausted, frustrated, and unproductive. The author’s own two‐year experience with repeated cycles of burnout is qualitatively reproduced by a small system dynamics model that portrays the underlying psychology of workaholism. Model tests demonstrate that the limit cycle seen in the base run can be stabilized through techniques that diminish work‐related stress or enhance relaxation. These stabilizing techniques also serve to raise overall productivity, since they support a higher level of energy and more working hours on the average. One important policy lever is the maximum workweek or work limit; an optimal work limit at which overall productivity is at its peak is shown to exist within a region of stability where burnout is avoided. The paper concludes with a strategy for preventing burnout, which emphasizes the individual’s responsibility for understanding the self‐inflicted nature of this problem and pursuing an effective course of stability.

You can find a copy of the model in the help system that comes with Vensim.

Biological Dynamics of Stress: the Outer Loops

A while back I reviewed an interesting model of hormone interactions triggered by stress. The bottom line:

I think there might be a lot of interesting policy implications lurking in this model, waiting for an intrepid explorer with more subject matter expertise than I have. I think the crucial point here is that the structure identifies a mechanism by which patient outcomes can be strongly path dependent, where positive feedback preserves a bad state long after harmful stimuli are removed. Among other things, this might explain why it’s so hard to treat such patients. That in turn could be a basis for something I’ve observed in the health system – that a lot of doctors find autoimmune diseases mysterious and frustrating, and respond with a variation on the fundamental attribution error – attributing bad outcomes to patient motivation when delayed, nonlinear feedback is responsible.

Since then, I’ve been reflecting on the fact that the internal positive feedbacks that give the hormonal system a tipping point, allowing people to get stuck in a bad state, are complemented and amplified by a set of external loops that do the same thing. I’ve reorganized my version of the model to show how this works:

Stress-Hormone Interactions (See also Fig. 1 in the original paper.)

The trigger for the system is External Stress (highlighted in yellow). A high average rate of stress perception lengthens the relieving time for stress. This creates a reinforcing loop, R1. This is analogous to the persistent pollution loop in World3, where a high level of pollution poisons the mechanisms that alleviate pollution.

I’ve constructed R1 with dashed arrows, because the effects are transient – when stress perception stops, eventually the stress effect on the relieving time returns to normal. (If I were reworking the model, I think I would simplify this effect, so that the stock of Perceived Stress affected the relieving time directly, rather than including a separate smooth, but this would not change the transience of the effect.)
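To make that simplified version concrete, here’s a stylized sketch (my construction, not the published model): a Perceived Stress stock whose relieving time is stretched directly by the stock itself. The functional form and numbers are illustrative assumptions.

```python
# Stylized R1 sketch: perceived stress drains at Stress / relieving time,
# and the relieving time itself rises with stress (illustrative saturating
# effect; parameters and units are made up).
import numpy as np

def relieving_time(stress, base=1.0, gain=2.0):
    return base * (1.0 + gain * stress / (1.0 + stress))

dt = 0.01
times = np.arange(0.0, 30.0, dt)
stress, trace = 0.0, []
for t in times:
    external = 2.0 if 2.0 <= t < 6.0 else 0.0     # a temporary stress episode
    stress += dt * (external - stress / relieving_time(stress))
    trace.append(stress)

# Relief slows under load, so the decay after the episode is drawn out --
# but with no internal stress source the stock still returns toward zero,
# i.e. this loop alone is transient, as noted above.
for t_mark in (2, 6, 10, 20, 29):
    print(t_mark, round(trace[int(round(t_mark / dt))], 3))
```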

A second effect of stress, mediated by Pro-inflammatory Cytokines, produces another reinforcing loop, R2. That loop does create Internal Stress, which makes it potentially self-sustaining. (Presumably that would be something like stress -> cytokines -> inflammation -> pain -> stress.) However, in simulations I’ve explored, a self-sustaining effect does not occur – evidently the Pro-inflammatory Cytokines sector does not contain a state that is permanently affected, absent an external stress trigger.

Still more effects of stress are mediated by the Cortisol and Serotonin sectors, and by Cortisol-Pro-inflammatory Cytokines interactions. These create still more reinforcing loops, R3, R4, R5 and R6. Cortisol-serotonin effects appear to have multiple signs, making the net effect (RB67) ambiguous in polarity (at least without further digging into the details). Like R1 (the stress self-effect), R3, R4 and R6 operate by extending the time over which stress is relieved, which tends to increase the stock of stress. Even with long relief times, stress still drains away eventually, so these do not create permanent effects on stress.

However, within the Cortisol sector, there are persistent states that are affected by stress and inflammation. These are related to glucocorticoid receptor function, and they can be durably altered, making the effects of stress long term.

These dynamics alone make the system hard to understand and manage. However, I think the real situation is still more complex. Consider the following red links, which produce stress endogenously:

One possibility, discussed in the original paper but out of scope for the model, is that cognitive processing of stress has its own effects. For example, if stress produces more stress – say, through worrying about being stressed – it could become self-sustaining. There are plenty of other possible mechanisms. The cortisol system affects cardiovascular health and thyroid function, which could lead to additional symptoms that provoke stress. Similarly, mood affects family relationships and job productivity, which may contribute to stress.

These effects can be direct, for example if elevated cortisol causes stressful cardiovascular symptoms. But they could also be indirect, via other subsystems in one’s life. If you incur large health expenses or miss a lot of work, you’re likely to suffer financial stress. Presumably diet and exercise are also tightly coupled to this system.

All of these loops create abundant opportunity for tipping points that can lock you into good health or bad. I think they’re a key mechanism in poverty traps. Certainly they provide a clear mechanism that explains why mental health is not all in your head. Lack of appreciation for the complexity of this system may also explain why traditional medicine is not very successful in treating its symptoms.

If you’re on the bad side of a tipping point, all is not lost, however. Positive loops that preserve a stressful state, acting as vicious cycles, can also operate in reverse as virtuous cycles, if a small improvement can be initiated. Even if it’s hard to influence physiology, there are other leverage points here that might be useful: changing your own approach to stress, engaging your relationships in the solution, and looking for ways to lower the external stresses that keep the internal causes activated may all help.

I think I’m just scratching the surface here, so I’m interested in your thoughts.

Noon Networks

My browser tabs are filling up with lots of cool articles on networks, which I’ve only had time to read superficially. So, dear reader, I’m passing the problem on to you:

Multiscale analysis of Medical Errors

Insights into Population Health Management Through Disease Diagnoses Networks

Community Structure in Time-Dependent, Multiscale, and Multiplex Networks

Simpler Math Tames the Complexity of Microbe Networks

Informational structures: A dynamical system approach for integrated information

In this paper we introduce a space-time continuous version for the level of integrated information of a network on which a dynamics is defined.

Understanding the dynamics of biological and neural oscillator networks through mean-field reductions: a review

A new framework to predict spatiotemporal signal propagation in complex networks

Scientists Discover Exotic New Patterns of Synchronization


Are Project Overruns a Statistical Artifact?

Erik Bernhardsson explores this possibility:

Anyone who built software for a while knows that estimating how long something is going to take is hard. It’s hard to come up with an unbiased estimate of how long something will take, when fundamentally the work in itself is about solving something. One pet theory I’ve had for a really long time, is that some of this is really just a statistical artifact.

Let’s say you estimate a project to take 1 week. Let’s say there are three equally likely outcomes: either it takes 1/2 week, or 1 week, or 2 weeks. The median outcome is actually the same as the estimate: 1 week, but the mean (aka average, aka expected value) is 7/6 = 1.17 weeks. The estimate is actually calibrated (unbiased) for the median (which is 1), but not for the mean.

The full article is worth a read, both for its content and the elegant presentation. There are some useful insights, particularly that tasks with the greatest uncertainty rather than the greatest size are likely to dominate a project’s outcome. Interestingly, most lists of reasons for project failure neglect uncertainty just as they neglect dynamics.
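A quick Monte Carlo makes the point, and shows why the most uncertain task dominates. Here I assume the ratio of actual to estimated time is lognormal with median 1 for every task, so each estimate is median-unbiased; the estimates and sigmas themselves are invented for illustration.

```python
# Monte Carlo restatement of the argument: median-unbiased task estimates,
# lognormal actual/estimate ratios, one task with much fatter uncertainty.
import numpy as np

rng = np.random.default_rng(42)
estimates = np.array([1.0, 2.0, 0.5, 1.0, 3.0])   # weeks (invented)
sigmas    = np.array([0.3, 0.3, 2.0, 0.5, 0.3])   # log-std dev; one wild task

n = 100_000
ratios = rng.lognormal(mean=0.0, sigma=sigmas, size=(n, len(estimates)))
actuals = (estimates * ratios).sum(axis=1)

print("naive estimate (sum of estimates):", estimates.sum())
print("median project duration          :", round(np.median(actuals), 2))
print("mean project duration            :", round(actuals.mean(), 2))

# E[lognormal(0, sigma)] = exp(sigma^2 / 2), so the sigma=2 task alone
# inflates its expected duration by a factor of exp(2) ~= 7.4 -- the
# blowup in the mean comes almost entirely from the most uncertain task.
```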

However, I think the statistical explanation is only part of the story. There’s an important connection to project structure and dynamics.

First, if you accept that the distribution of task overruns is lognormal, you have to wonder where that heavy-tailed distribution is coming from in the first place. I think the answer is, positive feedbacks. Projects are chock full of reinforcing feedback, from rework cycles, Brooks’ Law, schedule pressure driving overtime leading to errors and burnout, site congestion and other effects. These amplify the right tail response to any disturbance.

Second, I think there’s some reason to think that the positive feedbacks operate primarily at a high level in projects. Schedule pressure, for example, doesn’t kick in when one little subtask goes wrong; it only becomes important when the whole project is off track. But if that’s the case, Bernhardsson’s heavy-tailed estimation errors will provide a continuous source of disturbances that stress the project, triggering the multitude of vicious cycles that lie in wait. In that case, a series of potentially modest misperceptions of uncertainty can be amplified by project structure into a catastrophic failure.
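Here’s a deliberately stylized illustration of that amplification (my construction, nothing from Bernhardsson’s article): modest lognormal disturbances to the workload feed a project-level loop in which schedule pressure raises the error rate, and errors come back as rework.

```python
# Stylized project: schedule pressure -> more errors -> more rework ->
# more pressure. All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)

def duration(shock, work=80.0, deadline=100.0, capacity=1.0,
             base_error=0.1, pressure_gain=0.3, dt=0.25):
    """Weeks to finish the project when the workload is scaled by `shock`."""
    remaining, t = work * shock, 0.0
    while remaining > 0.1 and t < 1000.0:            # safety cap on runtime
        pressure = max(0.0, remaining / max(deadline - t, 1.0) / capacity - 1.0)
        error = min(0.7, base_error + pressure_gain * pressure)
        remaining -= capacity * dt * (1.0 - error)   # errors return as rework
        t += dt
    return t

shocks = rng.lognormal(mean=0.0, sigma=0.2, size=2000)   # modest disturbances
durations = np.array([duration(s) for s in shocks])
print("shock  5th/50th/95th pct:", np.percentile(shocks, [5, 50, 95]).round(2))
print("weeks  5th/50th/95th pct:", np.percentile(durations, [5, 50, 95]).round(1))
# A modest spread in the workload becomes a much wider, right-skewed spread
# in completion time once the pressure loop engages.
```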

An interesting question is why people and organizations don’t simply adapt, adding a systematic fudge factor to estimates to account for overruns. Are large overruns too rare to perceive easily? Or do organizational pressures to set stretch goals and outcompete other projects favor naive optimism?


Emissions Pricing vs. Standards

You need an emissions price in your portfolio to balance effort across all tradeoffs in the economy.

The energy economy consists of many tradeoffs. Some of these are captured in the IPAT framework:

Emissions = Population x GDP per Capita x Energy per GDP x Emissions per Energy

IPAT shows that, to reduce emissions, there are multiple points of intervention. One could, for example, promote lower energy intensity, or reduce the carbon intensity of energy, or both.
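For a sense of scale, here’s the identity with illustrative round numbers (roughly world-scale, but not carefully sourced), plus the multiplicative logic of intervening at more than one point:

```python
# IPAT/Kaya-style arithmetic with illustrative round numbers (order of
# magnitude only; not carefully sourced data).
population = 8e9                 # people
gdp_per_capita = 12_000          # $/person/yr
energy_per_gdp = 5.0             # MJ/$
emissions_per_energy = 0.07      # kg CO2/MJ

emissions_kg = (population * gdp_per_capita
                * energy_per_gdp * emissions_per_energy)
print(round(emissions_kg / 1e12, 1), "Gt CO2/yr")    # ballpark tens of Gt

# Interventions multiply: a 10% cut in energy intensity plus a 10% cut in
# carbon intensity yields 0.9 * 0.9 = 0.81, a 19% cut in emissions.
print(round(1 - 0.9 * 0.9, 2))
```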

An ideal policy, or portfolio of policies, would:

  • Cover all the bases – ensure that no major opportunity is left unaddressed.
  • Balance the effort – an economist might express this as leveling the shadow prices across areas.

We have a lot of different ways to address each tradeoff: tradeable permits, taxes, subsidies, quantity standards, performance standards, command-and-control, voluntary limits, education, etc. So far, in the US, we have basically decided that taxes are a non-starter, and instead pursued subsidies and tax incentives, portfolio and performance standards, with limited use of tradeable permits.

Here’s the problem with that approach. You can decompose the economy a lot more than IPAT does, into thousands of decisions that have energy consequences. I’ve sampled a tiny fraction below.

Is there an incentive?

Decision | Standards | Emissions Price
Should I move to the city or the suburbs? | No | Yes
Should I telecommute? | No | Yes
Drive, bike, bus or metro today? | No | Yes
Car, truck or SUV? | No (CAFE gets this wrong) | Yes
Big SUV or small SUV? | CAFE | Yes
Gasoline, diesel, hybrid or electric? | ZEV, tax credits | Yes
Regular or biofuel? | LCFS, CAFE credits | Yes
Detached house or condo? | No | Yes
Big house or small? | No | Yes
Gas or heat pump? | No | Yes
How efficient? | Energy Star | Yes
High performance building envelope or granite countertops? | Building codes | Yes
Incandescent or LED lighting? | Bulb Ban | Yes
LEDs are cheap – use more? | No | Yes
Get up to turn out an unused light? | No | Yes
Fridge: top freezer, bottom freezer or side by side? | No | Yes
How efficient? | Energy Star (badly) | Yes
Solar panels? | Building codes, net metering, tax credits, cap & trade | Yes
Green electricity? | Portfolio standards | Yes
2 kids or 8? | No | Yes

The beauty of an emissions price – preferably charged at the minemouth and wellhead – is that it permeates every economic aspect of life. The extent to which it does so depends on the emissions intensity of the subject activity – when it’s high, there’s a strong price signal, and when it’s low, there’s a weak signal, leaving users free to decide on other criteria. But the signal is always there. Importantly, the signal can’t be cheated: you can fake your EPA mileage rating – for a while – but it’s hard to evade costs that arrive packaged with your inputs, be they fuel, capital, services or food.

The rules and standards we have, on the other hand, form a rather moth-eaten patchwork. They cover a few of the biggest energy decisions with policies like renewable portfolio standards for electricity. Some of those have been pretty successful at lowering emissions. But others, like CAFE and Energy Star, are deficient or perverse in a variety of ways. As a group, they leave out a number of decisions that are extremely consequential. Effort is by no means uniform – what is the marginal cost of a ton of carbon avoided by CAFE, relative to a state’s renewable energy portfolio? No one knows.

So, how is the patchwork working? Not too well, I’d say. Some, like the CAFE standard, have been diluted by loopholes and stalled due to lack of political will:

Source: BTS

Others are making some local progress. The California LCFS, for example, has reduced the carbon intensity of fuels by 3.5% since its authorization by AB32 in 2006:

Source: ARB

But the LCFS’ progress has been substantially undone by rising vehicle miles traveled (VMT). The only thing that put a real dent in driving was the financial crisis:

Sources: AFDC, Caltrans


In spite of this, the California patchwork has worked – it has reached its GHG reduction target:
Source: SF Chronicle

This is almost entirely due to success in the electric power sector. Hopefully, there’s more to come, as renewables continue to ride down their learning curves. But how long can the power sector carry the full burden? Not long, I think.

The problem is that the electricity supply side is the “easy” part of the problem. There are relatively few technologies and actors to worry about. There’s a confluence of federal and state incentives. The technology landscape is favorable, with cost-effective emerging technologies.

The technology landscape for clean fuels is not easy. That’s why LCFS credits are trading at $195/ton while electricity cap & trade allowances are at $16/ton. The demand side has more flexibility, but it is technically diverse and organizationally fragmented (like the questions in my table above), making it harder to regulate. Problems are coupled: getting people out of their cars isn’t just a car problem; it’s a land use problem. Rebound effects abound: every LED light bulb is just begging to be left on all the time, because it’s so cheap to do so, and electricity subsidies make it even cheaper.

Command-and-control regulators face an unpleasant choice. They can push harder and harder in a few major areas, widening the performance gap – and the shadow price gap – between regulated and unregulated decisions. Or, they can proliferate regulations to cover more and more things, increasing administrative costs and making innovation harder.

As long as economic incentives scream that the price of carbon is zero, every performance standard, subsidy, or limit is fighting an uphill battle. People want to comply, but evolution selects for those who can figure out how to comply the least. Every idea that’s not covered by a standard faces a deep “valley of death” when it attempts to enter the market.

At present, we can’t let go of this patchwork of standards (wingwalker’s rule – don’t let go of one thing until you have hold of another). But in the long run, we need to start activating every possible tradeoff that improves emissions. That requires a uniform price signal that pervades the economy. Then rules and standards can backfill the remaining market failures, resulting in a system of regulation that’s more effective and less intrusive.

The end of the world is free!

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

In the New York Times, David Leonhardt ponders,

The Problem With Putting a Price on the End of the World

Economists have workable policy ideas for addressing climate change. But what if they’re politically impossible?

I wrote about this exact situation nearly ten years ago, when the Breakthrough Institute (and others) proposed energy R&D as an alternative to politically-infeasible carbon taxes. What has R&D accomplished since then? All kinds of wonderful things, but the implications for climate are … diddly squat.

The emerging climate technology delusion

Leonhardt observes that emissions pricing programs have already failed to win approval several times, which is true. However, I think the diagnosis is partly incorrect. Cap and trade programs like Waxman-Markey failed not because they imposed prices, but because they were incredibly complex and involved big property rights giveaways. Anyone who even understands the details of such a program is right to wonder whether anyone other than traders will profit from it.

In other cases, like the Washington carbon tax initiatives, I think the problem may be that potential backers required that it solve not only climate, but also environmental justice and income inequality more broadly. That’s an impossible task for a single policy.

Leonhardt proposes performance standards and a variety of other economically “second best” measures as alternatives.

The better bet seems to be an “all of the above” approach: Organize a climate movement around meaningful policies with a reasonable chance of near-term success, but don’t abandon the hope of carbon pricing.

At first blush, this seems reasonable to me. Performance standards and information policies have accomplished a lot over the years. Energy R&D is a good investment.

On second thought, these alternatives have already failed. The sum total of all such policies over the last few decades has been to reduce the CO2 emissions intensity of GDP by 2% per year.

That’s slower than GDP growth, so emissions have actually risen. That’s far short of what we need to accomplish, and it’s not all attributable to policy. Even with twice the political will, and twice the progress, it wouldn’t be nearly enough.
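The growth arithmetic is simple. With emissions intensity of GDP falling 2% per year and world GDP growing at something like 3% per year (my assumed figure), emissions still grow:

```python
# Rough growth-rate arithmetic; GDP growth of ~3%/yr is an assumed
# illustrative figure, the 2%/yr intensity decline is from the post.
gdp_growth = 0.03
intensity_change = -0.02
emissions_growth = (1 + gdp_growth) * (1 + intensity_change) - 1
print(f"{emissions_growth:+.1%} per year")   # about +1%/yr: emissions still rise
```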

All of the above have some role to play, but without prices as a keystone economic signal, they’re fighting the tide. Moreover, together they have a large cost in administrative complexity, which gives opponents a legitimate reason to whine about bureaucracy and promotes regulatory capture. This makes it hard to innovate and favors large incumbents, contributing to worsening inequality.

Adapted from Tax Time

So, I think we need to do a lot more than not “abandon the hope” of carbon pricing. Every time we push a stopgap, second-best policy, we must also be building the basis for implementation of emissions prices. This means we have to get smarter about carbon pricing, and address the cognitive and educational gaps that explain failure so far. Leonhardt identifies one key point:

‘If we’re going to succeed on climate policy, it will be by giving people a vision of what’s in it for them.’

I think that vision has several parts.

  • One is multisolving – recognizing that clever climate policy can improve welfare now as well as in the future through health and equity cobenefits. This is tricky, because a practical policy can’t do everything directly; it just has to be compatible with doing everything.
  • Another is decentralization. The climate-economy system is too big to permit monolithic solution designs. We have to preserve diversity and put signals in place that allow it to evolve in beneficial directions.

Finally, emissions pricing has to be more than a vision – it has to be designed so that it’s actually good for the median voter:

As Nordhaus acknowledged in his speech, curbing dirty energy by raising its price “may be good for nature, but it’s not actually all that attractive to voters to reduce their income.”

Emissions pricing doesn’t have to be harmful to most voters, even neglecting cobenefits, as long as green taxes include equitable rebates, revenue finances good projects, and green sectors have high labor intensity. (The median voter has to understand this as well.)

Personally, I’m frustrated by decades of excuses for ineffective, complicated, inequitable policies. I don’t know how to put it in terms that don’t trigger cognitive dissonance, but I think there’s a question that needs to be asked over and over, until it sinks in:

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Breakthrough Optimism

From Models of Doom, the Sussex critique of the Limits to Growth:

Real challenges will no doubt arise if world energy consumption continues to grow in the long-term at the current rate, but limited reserves of non-renewable energy resources are unlikely to represent a serious threat on reasonable assumptions about the ultimate size of the reserves and technical progress. …

It is not unreasonable to expect that within 30 years a breakthrough with fusion power will provide virtually inexhaustible cheap energy supplies, but should this breakthrough take considerably longer, pessimism would still be unjustified. There are untapped reserves of non-conventional hydrocarbons which will become economic after further technical development and if prices of conventional fossil fuels continue to rise.

At AAAS in 2005, a fusion researcher pointed out that 1950s predictions of working fusion 50 years out had expired … with fusion prospects still 50 years out.

This MIT Project Says Nuclear Fusion Is 15 Years Away (No, Really, This Time)

Expert: “I’m 100 Percent Confident” Fusion Power Will Be Practical
Companies chasing after the elusive technology hope to build reactors by 2030.

Is fusion finally just around the corner? I wouldn’t count on it. Even if we do get a breakthrough in 10 to 15 years, or tomorrow, it’s still a long way from proof of concept to deployment on a scale that’s helpful for mitigating CO2 emissions and avoiding use of destructive resources like tar sands.