Snow is Normal in Montana

In this case, I think it’s quite literally Normal a.k.a. Gaussian:

[Image: Normally distributed snow]

Here’s what I think is happening. On windless days with powder, the snow dribbles off the edge of the roof (just above the center of the hump). Flakes drift down in a random walk. The railing terminates the walk after about four feet, by which time the distribution of flake positions has already reached the Normal you’d expect from the Central Limit Theorem.
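
For the curious, here’s a minimal sketch of that argument (my own toy simulation; the step count and step size are arbitrary assumptions): sum enough small independent lateral gusts and the landing positions form a Normal hump, even though the individual steps are anything but Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n_flakes, n_steps = 100_000, 400                        # assumed: ~400 small gusts per 4 ft fall
steps = rng.uniform(-1, 1, size=(n_flakes, n_steps))    # decidedly non-Gaussian increments
landing = steps.sum(axis=1)                             # CLT: the sum is approximately Normal

# compare the landing histogram to a Normal with the same variance
hist, edges = np.histogram(landing, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-centers**2 / (2 * landing.var())) / np.sqrt(2 * np.pi * landing.var())
print(np.abs(hist - pdf).max())                         # small -> the hump looks Normal
```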

Enough of the geek stuff; I think I’ll go ski the field.

Forrester on Continuous Flows

I just published three short videos with sample models, illustrating the representation of discrete and random events in Vensim.

Jay Forrester’s advice from Industrial Dynamics is still highly relevant. Here’s an excerpt:

Chapter 5, Principles for Formulating Models

5.5 Continuous Flows

In formulating a model of an industrial operation, we suggest that the system be treated, at least initially, on the basis of continuous flows and interactions of the variables. Discreteness of events is entirely compatible with the concept of information-feedback systems, but we must be on guard against unnecessarily cluttering our formulation with the detail of discrete events that only obscure the momentum and continuity exhibited by our industrial systems.

In beginning, decisions should be formulated in the model as if they were continuously (but not implying instantaneously) responsive to the factors on which they are based. This means that decisions will not be formulated for intermittent reconsideration each week, month or year. For example, factory production capacity would vary continuously, not by discrete additions. Ordering would go on continuously, not monthly when the stock records are reviewed.

There are several reasons for recommending the initial formulation of a continuous model:

  • Real systems are more nearly continuous than is commonly supposed …
  • There will usually be considerable “aggregation” …
  • A continuous-flow system is usually an effective first approximation …
  • There is a natural tendency of model builders and executives to overstress the discontinuities of real situations. …
  • A continuous-flow model helps to concentrate attention on the central framework of the system. …
  • As a starting point, the dynamics of the continuous-flow model are usually easier to understand …
  • A discontinuous model, which is evaluated at infrequent intervals, such as an economic model solved for a new set of values annually, should never be justified by the fact that data in the real system have been collected at such infrequent intervals. …

These comments should never be construed as suggesting that the model builder should lack interest in the microscopic separate events that occur in a continuous-flow channel. The course of the continuous flow is the course of the separate events in it. By studying individual events we get a picture of how decisions are made and how the flows are delayed. The study of individual events is one of our richest sources of information about the way the flow channels of the model should be constructed. When a decision is actually being made regularly on a periodic basis, like once a month, the continuous-flow equivalent channel should contain a delay of half the interval; this represents the average delay encountered by information in the channel.

The preceding comments do not imply that discreteness is difficult to represent, nor that it should forever be excluded from a model. At times it will become significant. For example, it may create a disturbance that will cause system fluctuations that can be mistakenly interpreted as externally generated cycles (…). When a model has progressed to the point where such refinements are justified, and there is reason to believe that discreteness has a significant influence on system behavior, discontinuous variables should then be explored to determine their effect on the model.

[Ellipses added – see the original for elaboration.]
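
To make the point about periodic decisions concrete, here’s a toy sketch (my own illustration, not from the book): a variable that is reviewed only once a month and held constant between reviews tracks the underlying signal about as well as the continuous-flow equivalent Forrester describes, a continuous response passed through a first-order delay of half the review interval.

```python
import numpy as np

dt, horizon, review = 0.0625, 24.0, 1.0          # months; dt divides the review interval evenly
t = np.arange(0.0, horizon, dt)
signal = 100 + 20 * np.sin(2 * np.pi * t / 12)   # assumed: the information being acted on

# intermittent reconsideration: hold the last monthly reading until the next review
sample_idx = np.rint(np.floor(t / review) * review / dt).astype(int)
sampled = signal[sample_idx]

# continuous-flow equivalent: a first-order information delay of review/2
delayed = np.empty_like(signal)
delayed[0] = signal[0]
for i in range(1, len(t)):
    delayed[i] = delayed[i - 1] + dt * (signal[i - 1] - delayed[i - 1]) / (review / 2)

# the two track the true signal about equally well on average
print(np.abs(signal - sampled).mean(), np.abs(signal - delayed).mean())
```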

A conversation about infrastructure

A conversation about infrastructure, with Carter Williams of iSelect and me:

The $3 Trillion Problem: Solving America’s Infrastructure Crisis

I can’t believe I forgot to mention one of the most obvious System Dynamics insights about infrastructure:

There are two ways to fill a leaky bucket – increase the inflow, or plug the outflows. There’s always lots of enthusiasm for increasing the inflow by building new stuff. But there’s little sense in adding to the infrastructure stock if you can’t maintain what you have. So, plug the leaks first, and get into a proactive maintenance mode. Then you can have fun building new things – if you can afford it.
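
A toy stock-and-flow version of the point, with invented numbers:

```python
# Infrastructure stock with a construction inflow and a decay outflow (the leak).
def simulate(build_rate, decay_fraction, years=50, stock=100.0):
    for _ in range(years):
        stock += build_rate - decay_fraction * stock   # inflow minus outflow
    return stock

print(simulate(build_rate=5.0, decay_fraction=0.08))   # status quo: stock erodes toward ~63
print(simulate(build_rate=8.0, decay_fraction=0.08))   # build more: back to ~100, with 60% more construction
print(simulate(build_rate=5.0, decay_fraction=0.05))   # plug the leaks: also ~100, with no extra construction
```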

Rats leaving a sinking Sears

Sears Roebuck & Co. was a big part of my extended family at one time. My wife’s grandfather started in the mail room and worked his way up to executive, through the introduction of computers and the firebombing in Caracas. Sadly, its demise appears imminent.

Business Insider has an interesting article on the dynamics of Sears’ decline. Here’s a quick causal loop diagram summarizing some of the many positive feedbacks that once drove growth, but now are vicious cycles:

[Image: sears_rats_sinking_ships_corr]

h/t @johnrodat

CLD corrected, 1/9/17.

Privatizing Public Lands – Claim your 0.3 acres now!

BLM Public Lands Statistics show that the federal government holds about 643 million acres – about 2 acres for each person.

But what would you really get if these lands were transferred to the states and privatized by sale? Asset sales would distribute land roughly according to the existing distribution of wealth. Here’s how that would look:

The Forbes 400 has a net worth of $2.4 trillion, not quite 3% of US household net worth. If you’re one of those lucky few, your cut would be about 44,000 acres, or 69 square miles.

Bill Gates, Jeff Bezos, Warren Buffett, Mark Zuckerberg and Larry Ellison alone could split Yellowstone National Park (over 2 million acres).

The top 1% wealthiest Americans (35% of net worth) would average 70 acres each, and the next 19% (51% of net worth) would get a little over 5 acres.

The other 80% of America would split the remaining 14% of the land. That’s about a third of an acre each, which would be a good-sized suburban lot, if it weren’t in the middle of Nevada or Alaska.
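
Here’s the back-of-envelope arithmetic behind those figures (the wealth shares are the ones quoted above; the population and the Forbes 400 share are my round approximations, and the Forbes 400 are of course a subset of the top 1%):

```python
total_acres = 643e6          # BLM figure cited above
population = 320e6           # rough US population

groups = {                   # (share of wealth, number of people)
    "Forbes 400": (0.0275, 400),
    "Top 1%":     (0.35,   0.01 * population),
    "Next 19%":   (0.51,   0.19 * population),
    "Bottom 80%": (0.14,   0.80 * population),
}

for name, (wealth_share, people) in groups.items():
    print(f"{name:11s} {total_acres * wealth_share / people:12,.2f} acres each")
```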

You can’t even see the average person’s share on a graph, unless you use a logarithmic scale:

[Image: landpercaplog]

Otherwise, the result just looks ridiculous, even if you ignore the outliers:

[Image: landpercap]

Remembering Jay Forrester

I’m sad to report that Jay Forrester, pioneer in servo control, digital computing, System Dynamics, global modeling, and education, has passed away at the age of 98.

[Image: forresterred]

I’ve only begun to think about the ways Jay influenced my life, but digging through the archives here I ran across a nice short video clip on Jay’s hope for the future. Jay sounds as prescient as ever, given recent events:

“The coming century, I think, will be dominated by major social, political turmoil. And it will result primarily because people are doing what they think they should do, but do not realize that what they’re doing are causing these problems. So, I think the hope for this coming century is to develop a sufficiently large percentage of the population that have true insight into the nature of the complex systems within which they live.”

I delve into the roots of this thought in Election Reflection (2010).

Here’s a sampling of other Forrester ideas from these pages:

The Law of Attraction

Forrester on the Financial Crisis

Self-generated seasonal cycles

Deeper Lessons

Servo-chicken

Models

Market Growth

Urban Dynamics

Industrial Dynamics

World Dynamics

Dynamics of Term Limits

I am a little encouraged to see that the very top item on Trump’s first 100-day to-do list is term limits:

* FIRST, propose a Constitutional Amendment to impose term limits on all members of Congress;

Certainly the defects in our electoral and campaign finance system are among the most urgent issues we face.

Assuming other Republicans could be brought on board (which sounds unlikely), would term limits help? I didn’t have a good feel for the implications, so I built a model to clarify my thinking.

I used our new tool, Ventity, because I thought I might want to extend this to multiple voting districts, and because it makes it easy to run several scenarios with one click.

Here’s the setup:

[Image: structure]

The model runs for a long series of 4000 election cycles. I could just as easily run 40 experiments of 100 cycles each, or some other combination yielding a similar sample size, because the behavior is ergodic on any time scale substantially longer than the maximum number of terms typically served.

Each election pits two politicians against one another. Normally, an incumbent faces a challenger. But if the incumbent is term-limited, two challengers face each other.

The electorate assesses the opponents and picks a winner. For challengers, there are two components to voters’ assessment of attractiveness:

  • Intrinsic performance: how well the politician will actually represent voter interests. (This is a tricky concept, because voters may want things that aren’t really in their own best interest.) The model generates challengers with random intrinsic attractiveness, with a standard deviation of 10%.
  • Noise: random disturbances that confuse voter perceptions of true performance, also with a standard deviation of 10% (i.e. it’s hard to tell who’s really good).

Once elected, incumbents have some additional features:

  • The assessment of attractiveness is influenced by an additional term, representing incumbents’ advantages in electability that arise from things that have no intrinsic benefit to voters. For example, incumbents can more easily attract funding and press.
  • Incumbent intrinsic attractiveness can drift. The drift has a random component (i.e. a random walk), with a standard deviation of 5% per term, reflecting changing demographics, technology, etc. There’s also a deterministic drift, which can either be positive (politicians learn to perform better with experience) or negative (power corrupts, or politicians lose touch with voters), defaulting to zero.
  • The random variation influencing voter perceptions is smaller (5%) because it’s easier to observe what incumbents actually do.

There’s always a term limit of some duration in effect, reflecting life expectancy, but it can be made much shorter.
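
If you’d rather not install anything, here’s a rough Python sketch of the same setup (not the Ventity model itself; the 5% incumbency advantage and other details are my guesses from the description above), good enough to experiment with the term-limit setting:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_performance(term_limit, cycles=4000, challenger_sd=0.10,
                     challenger_noise=0.10, incumbent_noise=0.05,
                     drift_sd=0.05, learning=0.0, advantage=0.05):
    incumbent, terms, total = None, 0, 0.0
    for _ in range(cycles):
        challenger = rng.normal(0, challenger_sd)            # intrinsic attractiveness
        if incumbent is None or terms >= term_limit:
            # open seat: two challengers, voters pick the one perceived as better
            other = rng.normal(0, challenger_sd)
            perceived_a = challenger + rng.normal(0, challenger_noise)
            perceived_b = other + rng.normal(0, challenger_noise)
            incumbent, terms = (challenger, 1) if perceived_a >= perceived_b else (other, 1)
        else:
            perceived_inc = incumbent + advantage + rng.normal(0, incumbent_noise)
            perceived_ch = challenger + rng.normal(0, challenger_noise)
            if perceived_inc >= perceived_ch:
                terms += 1                                    # incumbent reelected
            else:
                incumbent, terms = challenger, 1              # challenger wins
        total += incumbent                                    # performance delivered this term
        incumbent += rng.normal(0, drift_sd) + learning       # intrinsic drift between elections
    return total / cycles      # mean intrinsic performance; random selection would average 0

for limit in (1, 2, 5, 10):
    print(limit, round(mean_performance(limit), 3))
```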

Here’s how it behaves with a 5-term limit:

[Image: terms]

Politicians frequently serve out their 5-term limit, but occasionally are ousted early. Over that period, their intrinsic performance varies a lot:

[Image: attractiveness]

Since the mean challenger has 0 intrinsic attractiveness, politicians outperform the average frequently, but far from universally. Underperforming politicians are often reelected.

Over a long time horizon (or similarly, many districts), you can see how average performance varies with term limits:

[Image: long]

With no learning, as above, term limits degrade performance a lot (top panel). With a 2-term limit, the margin above random selection is about 6%, whereas it’s twice as great (>12%) with a 10-term limit. This is interesting, because it means that the retention of high-performing politicians improves performance a lot, even if politicians learn nothing from experience.

This advantage holds (but shrinks) even if you double the perception noise in the selection process. So, what does it take to justify term limits? In my experiments so far, politician performance has to degrade with experience (negative learning, corruption or losing touch). Breakeven (2-term limits perform the same as 10-term limits) occurs at -3% to -4% performance change per term.

But in such cases, it’s not really the term limits that are doing the work. When politician performance degrades rapidly with time, voters throw them out. Noise may delay the inevitable, but in my scenario, the average politician serves only 3 terms out of a limit of 10. Reducing the term limit to 1 or 2 does relatively little to change performance.

Upon reflection, I think the model is missing a key feature: winner-takes-all, redistricting and party rules that create safe havens for incompetent incumbents. In a district that’s split 50-50 between brown and yellow, an incompetent brown is easily displaced by a yellow challenger (or vice versa). But if the split is lopsided, it would be rare for a competent yellow challenger to emerge to replace the incompetent yellow incumbent. In such cases, term limits would help somewhat.

I can simulate this by making the advantage of incumbency bigger (raising the saturation advantage parameter):

[Image: attractiveness2]

However, long terms are a symptom of the problem, not the root cause. Therefore it’s probably necessary to also address redistricting, campaign finance, voter participation and education, and other aspects of the electoral process that give rise to the problem in the first place. I’d argue that this is the single greatest contribution Trump could make.

You can play with the model yourself using the Ventity beta/trial and this model archive:

termlimits4.zip

Climate and Competitiveness

Trump gets well-deserved criticism for denying having claimed that the Chinese invented climate change to make US manufacturing non-competitive.

[Image: climatechinesehoax]

The idea is absurd on its face. Climate change was proposed long before (or long after) China figured on the global economic landscape. There was only one lead author from China out of the 34 in the first IPCC Scientific Assessment. The entire climate literature is heavily dominated by the US and Europe.

But another big reason to doubt its veracity is that climate policy, like emissions pricing, would make Chinese manufacturing less competitive. In fact, at the time of the first assessment, China was the most carbon-intensive economy in the world, according to the World Bank:

[Image: chinaintensity]

Today, China’s carbon intensity remains more than twice that of the US. That makes a carbon tax with a border adjustment an attractive policy for US competitiveness. What conspiracy theory makes it rational for China to promote that?
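
Rough arithmetic on why (the carbon price and intensities below are purely illustrative):

```python
carbon_price = 50.0      # $/tCO2, assumed
intensity_us = 0.3       # kg CO2 per $ of output, assumed
intensity_cn = 0.7       # kg CO2 per $ of output, assumed (>2x the US figure)

# carbon cost as a share of output value, which a border adjustment would apply to imports
cost_us = carbon_price * intensity_us / 1000
cost_cn = carbon_price * intensity_cn / 1000
print(f"US {cost_us:.1%}  China {cost_cn:.1%}  ratio {cost_cn / cost_us:.1f}x")
```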

Feedback and project schedule performance

Yasaman Jalili and David Ford take a deeper look at project model dynamics in the January System Dynamics Review. An excerpt:
[Image: projectloops]

Quantifying the impacts of rework, schedule pressure, and ripple effect loops on project schedule performance

Schedule performance is often critical to construction project success. But many times projects experience large unforeseen delays and fail to meet their schedule targets. The failure of large construction projects has enormous economic consequences. …

… the persistence of large project delays implies that their importance has not been fully recognized and incorporated into practice. Traditional project management methods do not explicitly consider the effects of feedback (Pena-Mora and Park, 2001). Project managers may intuitively include some impacts of feedback loops when managing projects (e.g. including buffers when estimating activity durations), but the accuracy of the estimates is very dependent upon the experience and judgment of the scheduler (Sterman, 1992). Owing to the lack of a widely used systematic approach to incorporating the impacts of feedback loops in project management, the interdependencies and dynamics of projects are often ignored. This may be due to a failure of practicing project managers to understand the role and significance of commonly experienced feedback structures in determining project schedule performance. Practitioners may not be aware of the sizes of delays caused by feedback loops in projects, or even the scale of impacts. …

In the current work, a simple validated project model has been used to quantify the schedule impacts of three common reinforcing feedback loops (rework cycle, “haste makes waste”, and ripple effects) in a single phase of a project. Quantifying the sizes of different reinforcing loop impacts on project durations in a simple but realistic project model can be used to clearly show and explain the magnitude of these impacts to project management practitioners and students, and thereby the importance of using system dynamics in project management.

This is a more formal and thorough look at some issues that I raised a while ago, here and here.

I think one important aspect of the model outcome goes unstated in the paper. The results show dominance of the rework parameter:

The graph shows that, regardless of the value of the variables, the rework cycle has the most impact on project duration, ranging from 1.2 to 26.5 times more than the next most influential loop. As the high level of the variables increases, the impact of “haste makes waste” and “ripple effects” loops increases.

[Image: projectcauses]

Yes, but why? I think the answer is in the nonlinear relationships among the loops. Here’s a simplified view (omitting some redundant loops for simplicity):

[Image: projectrework]

Project failure occurs when the project crosses the tipping point at which completing one task creates more than one new task of rework (red flows). Some rework is inevitable due to the error rate (“rework fraction” – orange), i.e. the inverse of quality. A high rework fraction, all by itself, can torpedo the project.

The ripple effect is a little different – it creates new tasks in proportion to the discovery of rework (blue). This is a multiplicative relationship,

ripple work ≅ rework fraction * ripple strength

which means that the ripple effect can only cause problems if quality is poor to begin with.

Similarly, schedule pressure (green) only contributes to rework when backlogs are large and work accomplished is small relative to scheduled ambitions. For that to happen, one of two things must occur: rework and ripple effects delay completion, or the schedule is too ambitious at the outset.

With this structure, you can see why rework (quality) is a problem in itself, but ripple and schedule effects are contingent on the rework trigger. I haven’t run the simulations to prove it, but I think that explains the dominance of the rework parameter in the results. (There’s a followup article here!)
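
A minimal backlog model of that tipping point (my sketch, not the paper’s model) makes the interaction visible: the ripple effect only adds work in proportion to the rework it multiplies.

```python
def project_duration(rework_fraction, ripple_strength,
                     initial_tasks=1000, work_rate=50, max_periods=1000):
    """Periods needed to empty the backlog, or None if the project never converges."""
    backlog = float(initial_tasks)
    for period in range(1, max_periods + 1):
        done = min(work_rate, backlog)
        rework = done * rework_fraction          # errors that must be redone
        ripple = rework * ripple_strength        # extra tasks created by the rework
        backlog += rework + ripple - done
        if backlog <= 0:
            return period
    return None

# the project tips over when rework_fraction * (1 + ripple_strength) reaches 1
print(project_duration(0.2, 0.5))   # gain 0.3  -> ~29 periods
print(project_duration(0.4, 0.5))   # gain 0.6  -> ~50 periods
print(project_duration(0.7, 0.5))   # gain 1.05 -> None (never finishes)
```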

Update, H/T Michael Bean:

Update II

There’s a nice description of the tipping point dynamics here.