Dynamics of Term Limits

I am a little encouraged to see that the very top item on Trump’s first-100-days to-do list is term limits:

* FIRST, propose a Constitutional Amendment to impose term limits on all members of Congress;

Certainly the defects in our electoral and campaign finance system are among the most urgent issues we face.

Assuming other Republicans could be brought on board (which sounds unlikely), would term limits help? I didn’t have a good feel for the implications, so I built a model to clarify my thinking.

I used our new tool, Ventity, because I thought I might want to extend this to multiple voting districts, and because it makes it easy to run several scenarios with one click.

Here’s the setup:

structure

The model runs over 4,000 election cycles. I could just as easily run 40 experiments of 100 cycles each, or some other combination yielding a similar sample size, because the behavior is ergodic on any time scale substantially longer than the maximum number of terms typically served.

Each election pits two politicians against one another. Normally, an incumbent faces a challenger. But if the incumbent is term-limited, two challengers face each other.

The electorate assesses the opponents and picks a winner. For challengers, there are two components to voters’ assessment of attractiveness:

  • Intrinsic performance: how well the politician will actually represent voter interests. (This is a tricky concept, because voters may want things that aren’t really in their own best interest.) The model generates challengers with random intrinsic attractiveness, with a standard deviation of 10%.
  • Noise: random disturbances that confuse voter perceptions of true performance, also with a standard deviation of 10% (i.e. it’s hard to tell who’s really good).

Once elected, incumbents have some additional features:

  • The assessment of attractiveness is influenced by an additional term, representing incumbents’ advantages in electability that arise from things that have no intrinsic benefit to voters. For example, incumbents can more easily attract funding and press.
  • Incumbent intrinsic attractiveness can drift. The drift has a random component (i.e. a random walk), with a standard deviation of 5% per term, reflecting changing demographics, technology, etc. There’s also a deterministic drift, which can either be positive (politicians learn to perform better with experience) or negative (power corrupts, or politicians lose touch with voters), defaulting to zero.
  • The random variation influencing voter perceptions is smaller (5%) because it’s easier to observe what incumbents actually do.

There’s always some term limit in effect, reflecting life expectancy, but it can be made much shorter.
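For readers who prefer code to diagrams, here’s a minimal Python sketch of the election logic described above. It is not the Ventity model itself: the function and parameter names are mine, and the size of the incumbency advantage is a guess.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters, following the text where values are given
CHALLENGER_SD = 0.10         # s.d. of challenger intrinsic attractiveness
CHALLENGER_NOISE = 0.10      # s.d. of perception noise for challengers
INCUMBENT_NOISE = 0.05       # s.d. of perception noise for incumbents
DRIFT_SD = 0.05              # s.d. of random drift in incumbent performance, per term
INCUMBENCY_ADVANTAGE = 0.05  # electability edge with no intrinsic benefit (my guess)

def simulate(term_limit, learning=0.0, cycles=4000):
    """Mean intrinsic performance of officeholders over many election cycles.

    learning: deterministic drift per term (+ = experience helps, - = corruption/losing touch)
    """
    incumbent = None          # intrinsic attractiveness of the current officeholder
    terms_served = 0
    total = 0.0
    for _ in range(cycles):
        challenger = rng.normal(0.0, CHALLENGER_SD)
        if incumbent is None or terms_served >= term_limit:
            # open seat: two fresh challengers, judged through noisy perceptions
            other = rng.normal(0.0, CHALLENGER_SD)
            perceived_a = challenger + rng.normal(0.0, CHALLENGER_NOISE)
            perceived_b = other + rng.normal(0.0, CHALLENGER_NOISE)
            incumbent = challenger if perceived_a > perceived_b else other
            terms_served = 1
        else:
            perceived_inc = incumbent + INCUMBENCY_ADVANTAGE + rng.normal(0.0, INCUMBENT_NOISE)
            perceived_chal = challenger + rng.normal(0.0, CHALLENGER_NOISE)
            if perceived_inc > perceived_chal:
                # reelected: performance drifts with another term of experience
                incumbent += learning + rng.normal(0.0, DRIFT_SD)
                terms_served += 1
            else:
                incumbent, terms_served = challenger, 1
        total += incumbent
    return total / cycles
```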

Here’s how it behaves with a 5-term limit:

[Figure: terms served]

Politicians frequently serve out their 5-term limit, but occasionally are ousted early. Over that period, their intrinsic performance varies a lot:

[Figure: incumbent intrinsic attractiveness over time]

Since the mean challenger has 0 intrinsic attractiveness, politicians outperform the average frequently, but far from universally. Underperforming politicians are often reelected.

Over a long time horizon (or similarly, many districts), you can see how average performance varies with term limits:

[Figure: long-run average performance under different term limits]

With no learning, as above, term limits degrade performance a lot (top panel). With a 2-term limit, the margin above random selection is about 6%, whereas it’s more than twice as great (>12%) with a 10-term limit. This is interesting, because it means that the retention of high-performing politicians improves performance a lot, even if politicians learn nothing from experience.

This advantage holds (but shrinks) even if you double the perception noise in the selection process. So, what does it take to justify term limits? In my experiments so far, politician performance has to degrade with experience (negative learning, corruption or losing touch). Breakeven (2-term limits perform the same as 10-term limits) occurs at -3% to -4% performance change per term.

But in such cases, it’s not really the term limits that are doing the work. When politician performance degrades rapidly with time, voters throw them out. Noise may delay the inevitable, but in my scenario, the average politician serves only 3 terms out of a limit of 10. Reducing the term limit to 1 or 2 does relatively little to change performance.
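Using the sketch above, the breakeven can be located by sweeping the term limit against the per-term drift (again with my assumed parameters, not the Ventity experiment):

```python
# Sweep the term limit against a per-term performance drift (negative = losing touch).
for learning in (0.0, -0.02, -0.04):
    two = simulate(term_limit=2, learning=learning)
    ten = simulate(term_limit=10, learning=learning)
    print(f"drift {learning:+.2f}/term: 2-term limit {two:.3f}  10-term limit {ten:.3f}")
```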

Upon reflection, I think the model is missing a key feature: winner-takes-all, redistricting and party rules that create safe havens for incompetent incumbents. In a district that’s split 50-50 between brown and yellow, an incompetent brown is easily displaced by a yellow challenger (or vice versa). But if the split is lopsided, it would be rare for a competent yellow challenger to emerge to replace the incompetent yellow incumbent. In such cases, term limits would help somewhat.

I can simulate this by making the advantage of incumbency bigger (raising the saturation advantage parameter):

[Figure: attractiveness with a larger incumbency advantage]

However, long terms are a symptom of the problem, not the root cause. It’s therefore probably also necessary to address redistricting, campaign finance, voter participation and education, and other aspects of the electoral process that give rise to the problem in the first place. I’d argue that this is the single greatest contribution Trump could make.

You can play with the model yourself using the Ventity beta/trial and this model archive:

termlimits4.zip

Arab Spring

Hard on the heels of the commitment model comes another interesting small social-dynamics model on arXiv. This one’s about the dynamics of the Arab Spring.

The self-immolation of Mohamed Bouazizi on December 17, 2010, in the small Tunisian city of Sidi Bouzid set off a sequence of events culminating in the revolutions of the Arab Spring. It is widely believed that the Internet and social media played a critical role in the growth and success of protests that led to the downfall of the regimes in Egypt and Tunisia. However, the precise mechanisms by which these new media affected the course of events remain unclear. We introduce a simple compartmental model for the dynamics of a revolution in a dictatorial regime such as Tunisia or Egypt which takes into account the role of the Internet and social media. An elementary mathematical analysis of the model identifies four main parameter regions: stable police state, meta-stable police state, unstable police state, and failed state. We illustrate how these regions capture, at least qualitatively, a wide range of scenarios observed in the context of revolutionary movements by considering the revolutions in Tunisia and Egypt, as well as the situation in Iran, China, and Somalia, as case studies. We pose four questions about the dynamics of the Arab Spring revolutions and formulate answers informed by the model. We conclude with some possible directions for future work.

http://arxiv.org/abs/1210.1841

The model has two levels, but since non-revolutionaries = 1 – revolutionaries, they’re not independent, so it’s effectively first order. This permits thorough analytical exploration of the dynamics.

This model differs from typical SD practice in that the formulations for visibility and policing use simple discrete logic – policing either works or it doesn’t, for example. There are also no explicit perception processes or delays. This keeps things simple for analysis, but also makes the behavior somewhat bang-bang. An interesting extension of this model would be to explore more operational, behavioral decision rules.
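As a rough illustration of the kind of structure involved, here’s a stripped-down Python reading of that one-state logic with bang-bang visibility and policing. It’s a sketch in the spirit of the paper, not a transcription of its equations; the functional forms and parameter names are my assumptions.

```python
import numpy as np

def simulate_revolution(r0=0.001, c1=1.0, c2=1.2,
                        visibility_floor=0.1,   # assumed: social media raises this floor
                        police_capacity=0.2,    # assumed: fraction beyond which policing fails
                        dt=0.01, t_end=50.0):
    """Euler-integrate a one-state revolution model with bang-bang visibility and policing.

    A sketch in the spirit of the paper's compartmental model, not its exact equations.
    """
    steps = int(t_end / dt)
    r = np.empty(steps + 1)
    r[0] = r0
    for i in range(steps):
        ri = r[i]
        visibility = max(visibility_floor, ri)            # protests become self-advertising
        policing = 1.0 if ri < police_capacity else 0.0   # policing overwhelmed above capacity
        growth = c1 * visibility * ri * (1.0 - ri)        # recruitment of non-revolutionaries
        removal = c2 * policing * ri                      # arrests/deterrence while the state holds
        r[i + 1] = min(max(ri + dt * (growth - removal), 0.0), 1.0)
    return r
```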

The model can be used as is to replicate the experiments in Figs. 8 & 9. Further experiments in the paper – including parameter changes that reflect social media – should also be replicable, but would take a little extra structure or SyntheSim overrides.

This model runs with any recent Vensim version.

ArabSpring1.mdl

ArabSpring1.vpm

I’d especially welcome comments on the model and analysis from people who know the history of events better than I do.

Encouraging Moderation

An interesting paper on arXiv caught my eye the other day. It uses a simple model of a bipolar debate to explore policies that encourage moderation.

Some of the most pivotal moments in intellectual history occur when a new ideology sweeps through a society, supplanting an established system of beliefs in a rapid revolution of thought. Yet in many cases the new ideology is as extreme as the old. Why is it then that moderate positions so rarely prevail? Here, in the context of a simple model of opinion spreading, we test seven plausible strategies for deradicalizing a society and find that only one of them significantly expands the moderate subpopulation without risking its extinction in the process.

This is a very simple and stylized model, but in the best tradition of model-based theorizing, it yields provocative counter-intuitive results and raises lots of interesting questions. Technology Review’s Arxiv Blog has a nice qualitative take on the work.

See also: Dynamics of Scientific Revolutions, Bifurcations & Filter Bubbles

The model runs in discrete time, but I’ve added implicit rate constants for dimensional consistency in continuous time.
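For intuition about the family of models involved, here’s a minimal agent-based sketch of a two-opinion speaker-listener process with a moderate mixed state (A, B, AB). The update rules are the generic binary-agreement ones; the paper’s specific variants and deradicalization strategies are not represented, so consult the paper for its exact formulation.

```python
import random

def step(population):
    """One speaker-listener interaction in a two-opinion ('A'/'B') model
    with a moderate mixed state 'AB'. Generic binary-agreement rules;
    the paper's interventions are not represented here."""
    speaker, listener = random.sample(range(len(population)), 2)
    s, l = population[speaker], population[listener]
    word = random.choice(s)              # speaker voices one of its opinions
    if word in l:
        population[listener] = word      # listener already holds it: collapses to that opinion
    else:
        population[listener] = 'AB'      # listener adds it, becoming moderate

def fractions(population):
    n = len(population)
    return {state: population.count(state) / n for state in ('A', 'B', 'AB')}

# Start polarized with a small moderate seed and let it run.
pop = ['A'] * 450 + ['B'] * 450 + ['AB'] * 100
for _ in range(200_000):
    step(pop)
print(fractions(pop))
```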

commitment2.mdl & commitment2.vpm

These should be runnable with any Vensim version.

If you add the asymmetric generalizations in the paper’s Supplemental Material, add your name to the model diagram and forward a copy back to me, and I’ll post the update.

Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution

This is a very interesting model, both because it tackles ‘soft’ dynamics of paradigm formation in ‘hard’ science, and because it is an aggregate approach to an agent problem. Unfortunately, until now, the model was only available in DYNAMO, which limited access severely. It turns out to be fairly easy to translate to Vensim using the dyn2ven utility, once you know how to map the DYNAMO array FOR loops to Vensim subscripts.

Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution

J. Wittenberg and J. D. Sterman, 1999

Abstract

What is the relative importance of structural versus contextual forces in the birth and death of scientific theories? We describe a dynamic model of the birth, evolution, and death of scientific paradigms based on Kuhn’s Structure of Scientific Revolutions. The model creates a simulated ecology of interacting paradigms in which the creation of new theories is stochastic and endogenous. The model captures the sociological dynamics of paradigms as they compete against one another for members. Puzzle solving and anomaly recognition are also endogenous. We specify various regression models to examine the role of intrinsic versus contextual factors in determining paradigm success. We find that situational factors attending the birth of a paradigm largely determine its probability of rising to dominance, while the intrinsic explanatory power of a paradigm is only weakly related to the likelihood of success. For those paradigms that do survive the emergence phase, greater explanatory power is significantly related to longevity. However, the relationship between a paradigm’s ‘strength’ and the duration of normal science is also contingent on the competitive environment during the emergence phase. Analysis of the model shows the dynamics of competition and succession among paradigms to be conditioned by many positive feedback loops. These self-reinforcing processes amplify intrinsically unobservable micro-level perturbations in the environment – the local conditions of science, society, and self faced by the creators of a new theory – until they reach macroscopic significance. Such dynamics are the hallmark of self-organizing evolutionary systems.

We consider the implications of these results for the rise and fall of new ideas in contexts outside the natural sciences such as management fads.

Cite as: J. Wittenberg and J. D. Sterman (1999) Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution. Organization Science, 10.

I believe that this version is faithful to the original, but it’s difficult to be sure because the model is stochastic, so the results differ due to differences in the random number streams. For the moment, this model should be regarded as a beta release.

Continue reading “Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution”

Polya urn with increasing returns

This set of models performs a variant of a Polya urn experiment, along the lines of that described in Bryan Arthur’s Increasing Returns and Path Dependence in the Economy, Chapter 10. There’s a small difference, which is that samples are drawn with replacement (Bernoulli distribution) rather than without (hypergeometric distribution).

The interesting dynamics arise from competing positive feedback loops through the stocks of red and white balls. There’s useful related reading at http://tuvalu.santafe.edu/~wbarthur/Papers/Papers.html

I did the physical version of this experiment with Legos with my kids:

I tried the Polya urns experiment over lunch. We put 5 red and 5 white legos in a bowl, then took turns drawing a sample of 5. We returned the sample to the bowl, plus one lego of whichever color dominated the sample. Iterate. At the start, and after 2 or 3 rounds, I solicited guesses about what would happen. Gratifyingly, the consensus was that the bowl would remain roughly evenly divided between red and white. After a few more rounds, the reality began to diverge, and we stopped when white had a solid 2:1 advantage. I wondered aloud whether using a larger or smaller sample would lead to faster convergence. With no consensus about the answer, we tried it – drawing samples of just 1 lego. I think the experimental outcome was somewhat inconclusive – we quickly reached dominance of red, but the sampling process was much faster, so it may have actually taken more rounds to achieve that. There’s a lot of variation possible in the outcome, which means that superstitious learning is a possible trap.

This model automates the experiment, which makes it easier and more reliable to explore questions like the sensitivity of the rate of divergence to the sample size.
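Here’s a minimal Python sketch of the automated experiment as described: sample with replacement, then add one ball of whichever color dominated the sample. The parameter names and the tie-handling rule are mine.

```python
import random

def polya_run(sample_size=5, rounds=200, red=5, white=5):
    """Repeatedly sample the urn with replacement and reinforce the majority color.

    Returns the red fraction after each round.
    """
    history = []
    for _ in range(rounds):
        p_red = red / (red + white)
        reds_drawn = sum(random.random() < p_red for _ in range(sample_size))
        if reds_drawn * 2 > sample_size:
            red += 1                 # red dominated the sample
        elif reds_drawn * 2 < sample_size:
            white += 1               # white dominated
        else:
            # even split (possible for even sample sizes): add one at random
            red, white = (red + 1, white) if random.random() < 0.5 else (red, white + 1)
        history.append(red / (red + white))
    return history

# Compare divergence speed for different sample sizes over many runs.
for k in (1, 5, 25):
    finals = [polya_run(sample_size=k)[-1] for _ in range(100)]
    extreme = sum(f < 0.25 or f > 0.75 for f in finals)
    print(f"sample size {k}: {extreme}/100 runs ended with a 3:1 or stronger majority")
```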

PolyaUrn.vpm

This version works with Vensim PLE (though it’s not supposed to, because it uses the RANDOM BERNOULLI function). It performs a single experiment per run, but includes sensitivity control files for performing hundreds of runs at a time (requires PLE Plus). That makes for a nice map of outcomes:

Continue reading “Polya urn with increasing returns”

A System Zoo

I just picked up a copy of Hartmut Bossel’s excellent System Zoo 1, which I’d seen years ago in German, but only recently discovered in English. This is the first of a series of books on modeling – it covers simple systems (integration, exponential growth and decay), logistic growth and variants, oscillations and chaos, and some interesting engineering systems (heat flow, gliders searching for thermals). These are high-quality models with units that balance, well documented in the book. Every one I’ve tried runs in Vensim PLE, so they’re great for teaching.

I haven’t had a chance to work my way through System Zoo 2 (natural systems – climate, ecosystems, resources) and System Zoo 3 (economy, society, development), but I’m pretty confident that they’re equally interesting.

You can get the models for all three books, in English, from the Uni Kassel Center for Environmental Systems Research, http://www.usf.uni-kassel.de/cesr/. Follow the Archiv(e) link on the home page and enter the downloads Archiv(e). This will put you in a file browser. Choose the Software folder, then the Zoo folder to obtain a .zip archive of the zoo models for the whole series, in Vensim .mdl format.

To tantalize you, here are some images of model output from Zoo 1. First, a phase map of a bistable oscillator, which was so interesting that I built one with my kids, using legos and neodymium magnets:

Continue reading “A System Zoo”

Urban Dynamics

This is an updated version of Urban Dynamics, the classic by Forrester et al.

John Richardson upgraded the diagrams and cleaned up a few variable names that had typos.

I added some units equivalents and fixed a few variables in order to resolve existing errors. The model is now free of units errors, except for 7 warnings about use of dimensioned inputs to lookups (not uncommon practice, but it would be good to normalize these to suppress the warnings and make the model parameterization more flexible). There are also some runtime warnings about lookup bounds that I have not investigated (take a look – there could be a good paper lurking here).
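To illustrate the normalization idea with a generic, made-up example (not the actual Urban Dynamics equations): divide the lookup’s input by a reference quantity with the same units, so the table’s x-axis becomes dimensionless and the reference becomes an adjustable parameter.

```python
import numpy as np

# Hypothetical example: instead of lookup(population) with population in people,
# use lookup(population / population_reference) so the table input is dimensionless
# and the same table can be reused at any scale.

population_reference = 100_000.0                        # people (assumed reference value)
x_dimensionless = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # population / reference
y_effect = np.array([1.4, 1.2, 1.0, 0.7, 0.4])          # dimensionless multiplier

def crowding_effect(population):
    """Effect of relative population on attractiveness, via a normalized lookup."""
    return np.interp(population / population_reference, x_dimensionless, y_effect)

print(crowding_effect(125_000.0))  # 0.85
```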

Behavior is identical to that of the original from the standard Vensim distribution.

Urban Dynamics 2010-06-14.vpm

Urban Dynamics 2010-06-14.mdl

Urban Dynamics 2010-06-14.vmf

Terrorism Dynamics

Contributed by Bruce Skarin

Introduction

This model is the product of my Major Qualifying Project (MQP) for my Bachelor’s degree in system dynamics at Worcester Polytechnic Institute. There were two goals to this project:

1) To develop a model that reasonably simulates the historical attacks by the al-Qaida terrorist network against the United States.

2) To evaluate the usefulness of the model for developing public understanding of the terrorism problem.

The full model and report are available on my website.

Reference Mode

The reference mode for this model was the escalation of attacks linked to al-Qaida against the U.S., as shown below. The data for this chart is available through this Google Document.
[Image: Terrorism_Reference_Mode.jpg]

Causal View of the Model

Below is the causal diagram of the primary feedback loops in the model.

[Image: Terrorism_Causal_Loop.png]

Online Story Model

An online story version that explains the primary model structure, along with complete iThink and Vensim models, is available on my MQP page.