Wikipedia has a nice article, which I used as the basis for this simple model.
This set of models performs a variant of a Polya urn experiment, along the lines of that described in Brian Arthur’s Increasing Returns and Path Dependence in the Economy, Chapter 10. There’s a small difference, which is that samples are drawn with replacement (Bernoulli distribution) rather than without (hypergeometric distribution).
The interesting dynamics arise from competing positive feedback loops through the stocks of red and white balls. There’s useful related reading at http://tuvalu.santafe.edu/~wbarthur/Papers/Papers.html
I did the physical version of this experiment with Legos with my kids:
I tried the Polya urns experiment over lunch. We put 5 red and 5 white legos in a bowl, then took turns drawing a sample of 5. We returned the sample to the bowl, plus one lego of whichever color dominated the sample. Iterate. At the start, and after 2 or 3 rounds, I solicited guesses about what would happen. Gratifyingly, the consensus was that the bowl would remain roughly evenly divided between red and white. After a few more rounds, the reality began to diverge, and we stopped when white had a solid 2:1 advantage. I wondered aloud whether using a larger or smaller sample would lead to faster convergence. With no consensus about the answer, we tried it – drawing samples of just 1 lego. I think the experimental outcome was somewhat inconclusive – we quickly reached dominance of red, but the sampling process was much faster, so it may have actually taken more rounds to achieve that. There’s a lot of variation possible in the outcome, which means that superstitious learning is a possible trap.
This model automates the experiment, which makes it easier and more reliable to explore questions like the sensitivity of the rate of divergence to the sample size.
This version works with Vensim PLE (though it’s not supposed to, because it uses the RANDOM BERNOULLI function). It performs a single experiment per run, but includes sensitivity control files for performing hundreds of runs at a time (requires PLE Plus). That makes for a nice map of outcomes:
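The logic of the experiment is simple enough to sketch in a few lines of Python. This is a rough stand-in for the Vensim model, not a translation of it; the function name, defaults, and tie-handling rule here are my own:

```python
import random

def polya_experiment(sample_size=5, rounds=200, seed=None):
    """Sketch of the urn experiment: start with 5 red and 5 white balls,
    repeatedly draw a sample with replacement (each draw is a Bernoulli
    trial, as in the model's use of RANDOM BERNOULLI), then add one ball
    of the sample's majority color. Ties (possible only for even sample
    sizes) add nothing. Returns the red fraction after each round."""
    rng = random.Random(seed)
    red, white = 5, 5
    history = []
    for _ in range(rounds):
        p_red = red / (red + white)
        # draw the sample with replacement: each ball is red with probability p_red
        reds_in_sample = sum(rng.random() < p_red for _ in range(sample_size))
        if reds_in_sample * 2 > sample_size:
            red += 1
        elif reds_in_sample * 2 < sample_size:
            white += 1
        history.append(red / (red + white))
    return history
```

Running `polya_experiment(sample_size=1, rounds=500)` versus `sample_size=5` across many seeds is a quick way to explore the lunch-table question about whether smaller samples lock in faster.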
I just picked up a copy of Hartmut Bossel’s excellent System Zoo 1, which I’d seen years ago in German, but only recently discovered in English. This is the first of a series of books on modeling – it covers simple systems (integration, exponential growth and decay), logistic growth and variants, oscillations and chaos, and some interesting engineering systems (heat flow, gliders searching for thermals). These are high quality models, with units that balance, well-documented by the book. Every one I’ve tried runs in Vensim PLE so they’re great for teaching.
I haven’t had a chance to work my way through System Zoo 2 (natural systems – climate, ecosystems, resources) or System Zoo 3 (economy, society, development), but I’m pretty confident that they’re equally interesting.
You can get the models for all three books, in English, from the Uni Kassel Center for Environmental Systems Research – it’s now easy to find a .zip archive of the zoo models for the whole series, in Vensim .mdl format, on CESR’s home page: www2.cesr.de/downloads.
To tantalize you, here are some images of model output from Zoo 1. First, a phase map of a bistable oscillator, which was so interesting that I built one with my kids, using legos and neodymium magnets:
This is an updated version of Urban Dynamics, the classic by Forrester et al.
John Richardson upgraded the diagrams and cleaned up a few variable names that had typos.
I added some units equivalents and fixed a few variables in order to resolve existing errors. The model is now free of units errors, except for 7 warnings about use of dimensioned inputs to lookups (not uncommon practice, but it would be good to normalize these to suppress the warnings and make the model parameterization more flexible). There are also some runtime warnings about lookup bounds that I have not investigated (take a look – there could be a good paper lurking here).
Behavior is identical to that of the original from the standard Vensim distribution.
There are many good things about this model, but also some bad ones. If you are thinking of using it as a platform for expansion, read my dissertation first.
I provide several versions:
1. Model with simple heuristics replacing the time-vector decisions in the original; runs in Vensim PLE
2. Full model, with decisions implemented as vectors of points over time; requires Vensim Pro or DSS
3. Same as #2, but with VECTOR LOOKUP replaced with VECTOR ELM MAP; supports earlier versions of Pro or DSS
4. DICE-vec-6-elm.mdl (you’ll also want a copy of DICE-vec-6.vpm above, so that you can extract the supporting optimization control files)
Note that there are minor differences from the published versions: e.g., transversality coefficients for the state variables (i.e. terminal values of the states for optimization) are not included, and the optimizations use fewer decision points over time than the original GAMS equivalents. These differences do not have any significant effect on the outcome.
This is the latest instance of the WORLD3 model, as in Limits to Growth: The 30-Year Update, from the standard Vensim distribution. It’s not much changed from the 1972 original used in Limits to Growth, which is documented in great detail in Dynamics of Growth in a Finite World (half off at Pegasus as of this moment).
There have been many critiques of this model, including the fairly famous Models of Doom. Many are ideological screeds that miss the point, and many modern critics do not appear to have read the book. The only good, comprehensive technical critique of World3 that I’m aware of is Wil Thissen’s thesis, Investigations into the Club of Rome’s WORLD3 model: lessons for understanding complicated models (Eindhoven, 1978). Portions appeared in IEEE Transactions.
My take on the more sensible critiques is that they show two things:
- WORLD3 is an imperfect expression of the underlying ideas in Limits to Growth.
- WORLD3 doesn’t have the policy space to capture competing viewpoints about the global situation; in particular it does not represent markets and technology as many see them.
It doesn’t necessarily follow from those facts that the underlying ideas of Limits are wrong. We still have to grapple with the consequences of exponential growth confronting finite planetary boundaries with long perception and action delays.
I’ve written some other material on limits here.
Files: WORLD3-03 (zipped archive of Vensim models and constant changes)
Model Name: payments, penalties, and environmental ethic
Citation: Dudley, R. 2007. Payments, penalties, payouts, and environmental ethics: a system dynamics examination. Sustainability: Science, Practice, & Policy 3(2):24-35. http://ejournal.nbii.org/archives/vol3iss2/0706-013.dudley.html.
Source: Richard G. Dudley
Copyright: Richard G. Dudley (2007)
License: Gnu GPL
Peer reviewed: Yes (probably when submitted for publication?)
Units balance: Yes
Target audience: People interested in the concept of payments for environmental services as a means of improving land use and conservation of natural resources.
Questions answered: How might land users’ environmental ethic be influenced by, and in turn influence, payments for environmental services?
Replicated by: Tom Fiddaman
Citation: Hatlebakk, Magnus, & Moxnes, Erling (1992). Misperceptions and Mismanagement of the Greenhouse Effect? The Simulation Model (Report # CMR-92-A30009, December). Christian Michelsen Research.
This is a climate-economy model, of about the same scale and vintage as Nordhaus’ original DICE model. It’s more interesting in some respects, because it includes path-dependent reversible and irreversible emissions reductions. As I recall, the original also had some stochastic elements, not active here. This version has no units; hopefully I can get an improved version online at some point.
Model Name: A Behavioral Analysis of Learning Curve Strategy
Citation: John D. Sterman and Rebecca Henderson (Sloan School of Management, MIT), and Eric D. Beinhocker and Lee I. Newman (McKinsey and Company). A Behavioral Analysis of Learning Curve Strategy.
Neoclassical models of strategic behavior have yielded many insights into competitive behavior, despite the fact that they often rely on a number of assumptions, including instantaneous market clearing and perfect foresight, that have been called into question by a broad range of research. Researchers generally argue that these assumptions are “good enough” to predict an industry’s probable equilibria, and that disequilibrium adjustments and bounded rationality have limited competitive implications. Here we focus on the case of strategy in the presence of increasing returns to highlight how relaxing these two assumptions can lead to outcomes quite different from those predicted by standard neoclassical models. Prior research suggests that in the presence of increasing returns, tight appropriability and accommodating rivals, in some circumstances early entrants can achieve sustained competitive advantage by pursuing Get Big Fast (GBF) strategies: rapidly expanding capacity and cutting prices to gain market share advantage and exploit positive feedbacks faster than their rivals. Using a simulation of the duopoly case we show that when the industry moves slowly compared to capacity adjustment delays, boundedly rational firms find their way to the equilibria predicted by conventional models. However, when market dynamics are rapid relative to capacity adjustment, forecasting errors lead to excess capacity, overwhelming the advantage conferred by increasing returns. Our results highlight the risks of ignoring the role of disequilibrium dynamics and bounded rationality in shaping competitive outcomes, and demonstrate how both can be incorporated into strategic analysis to form a dynamic, behavioral game theory amenable to rigorous analysis.
Source: Replicated by Tom Fiddaman
Units balance: Yes
Format: Vensim (the model uses subscripts, so it requires Pro, DSS, or Model Reader)