The ban was pushed by light bulb makers eager to up-sell customers on longer-lasting and much more expensive halogen, compact fluorescent, and LED lighting.
More expensive? Only in a universe where energy and labor costs don’t count (Texas?) and for a few applications (very low usage, or chicken warming).
Over the last couple years I’ve replaced almost all lighting in my house with LEDs. The light is better, the emissions are lower, and I have yet to see a failure (unlike cheap CFLs).
I built a little bulb calculator in Vensim, which shows huge advantages for LEDs in most situations. Even with conservative assumptions (a low social price of carbon, minimum-wage labor), it’s hard to make incandescents look good. It’s also a nice example of using Vensim as a spreadsheet replacement, on a problem that’s not very dynamic but has natural array structure.
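Purely for illustration, here’s a rough back-of-the-envelope version of the same comparison in Python. It’s a sketch, not the Vensim calculator, and every price, wattage, lifetime, and rate in it is an assumption of mine rather than a value from the model:

```python
# Rough lifecycle cost comparison: incandescent vs. LED.
# All numbers are illustrative assumptions, not values from the Vensim model.

def lifecycle_cost(bulb_price, watts, life_hours, hours_per_year=1000,
                   years=10, elec_price=0.12, labor_per_change=0.25 * 7.25,
                   carbon_kg_per_kwh=0.4, carbon_price_per_ton=20.0):
    """Total cost of ownership over `years`, in dollars."""
    total_hours = hours_per_year * years
    replacements = total_hours / life_hours   # fractional replacements are fine for a rough estimate
    kwh = watts * total_hours / 1000.0
    energy = kwh * elec_price
    carbon = kwh * carbon_kg_per_kwh / 1000.0 * carbon_price_per_ton
    return replacements * (bulb_price + labor_per_change) + energy + carbon

incandescent = lifecycle_cost(bulb_price=0.50, watts=60, life_hours=1000)
led = lifecycle_cost(bulb_price=5.00, watts=9, life_hours=25000)
print(f"incandescent: ${incandescent:.2f}, LED: ${led:.2f}")
```

With these assumptions the LED comes out far cheaper over a decade, which is the same qualitative result the Vensim version gives.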
Vensim’s answer to exploring ill-behaved problem spaces is either to do hill-climbing with random restarts, or MCMC and simulated annealing. Either way, you need to start with some initial distribution of points to search.
It’s helpful if that distribution is somehow efficient at exploring the interesting parts of the space. I think this is closely related to the problem of selecting uninformative priors in Bayesian statistics. There’s lots of research about appropriate uninformative priors for various kinds of parameters. For example,
If a parameter represents a probability, one might choose the Jeffreys or Haldane prior.
Indifference to units, scale, and inversion might suggest the use of a log-uniform prior, where nothing else is known about a positive parameter.
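To make that concrete, here’s a minimal sketch of drawing initial search points from those two priors; the bounds and sample count are arbitrary assumptions for the example, not anything Vensim does internally:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_jeffreys_probability(n):
    """Jeffreys prior for a probability parameter: Beta(1/2, 1/2)."""
    return rng.beta(0.5, 0.5, size=n)

def sample_log_uniform(n, lo=1e-3, hi=1e3):
    """Log-uniform (scale-invariant) prior on a positive parameter in [lo, hi]."""
    return np.exp(rng.uniform(np.log(lo), np.log(hi), size=n))

# e.g. 100 starting points for a hill-climb with random restarts
starts = np.column_stack([sample_jeffreys_probability(100),
                          sample_log_uniform(100)])
```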
However, when a user specifies a parameter in Vensim, we don’t even know what it represents. So what’s the appropriate prior for a parameter that might be positive or negative, a probability, a time constant, a scale factor, an initial condition for a physical stock, etc.?
On the other hand, we aren’t quite as ignorant as the pure maximum entropy derivation usually assumes. For example,
All numbers have to lie between the largest and smallest float or double, i.e. roughly ±3.4e38 or ±1.8e308.
More practically, no one scales their models such that a parameter like 6.5e173 would ever be required. There’s a reason that metric prefixes range from yotta to yocto (10^24 to 10^-24). The only constant I can think of that approaches that range is Avogadro’s number (though there are probably others), and that’s not normally a changeable parameter.
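Those machine limits are easy to check directly, for what it’s worth; a quick Python look:

```python
import sys
import numpy as np

print(sys.float_info.max)        # ~1.8e308, double precision
print(np.finfo(np.float32).max)  # ~3.4e38, single precision
```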
For lots of things, one can impose more constraints, given a little more information (a rough sketch follows these examples):
A time constant or delay must lie on [TIME STEP,infinity], and the “infinity” of interest is practically limited by the simulation duration.
A fractional rate of change similarly must lie on [-1/TIME STEP, 1/TIME STEP] for stability.
Other parameters probably have limits for stability, though it may be hard to discover them except by experiment.
A parameter with units of year is probably modern, [1900-2100], unless you’re doing Mayan archaeology or paleoclimate.
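Pulling these heuristics together, here’s a rough sketch of how one might assign default search bounds from whatever is known about a parameter; the categories, defaults, and fallback range are illustrative assumptions of mine, not Vensim’s actual behavior:

```python
def default_bounds(kind, time_step=0.0625, final_time=100.0, initial_time=0.0):
    """Crude default search ranges by parameter type (illustrative only)."""
    duration = final_time - initial_time
    if kind == "probability":
        return (0.0, 1.0)
    if kind == "time constant":
        # can't resolve dynamics faster than TIME STEP or slower than the run
        return (time_step, duration)
    if kind == "fractional rate":
        return (-1.0 / time_step, 1.0 / time_step)
    if kind == "calendar year":
        return (1900.0, 2100.0)
    # fall back to something far narrower than the float limits
    return (-1e24, 1e24)   # roughly the yocto-to-yotta range
```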
At some point, the assumptions become too heroic, and we need to rely on users for some help. But it would still be really interesting to see the distribution of all parameters in real models. (See next …)
The Earth, with its core-driven magnetic field, convective mantle, mobile lid tectonics, oceans of liquid water, dynamic climate and abundant life is arguably the most complex system in the known universe. This system has exhibited stability in the sense of, bar a number of notable exceptions, surface temperature remaining within the bounds required for liquid water and so a significant biosphere. Explanations for this range from anthropic principles in which the Earth was essentially lucky, to homeostatic Gaia in which the abiotic and biotic components of the Earth system self-organise into homeostatic states that are robust to a wide range of external perturbations. Here we present results from a conceptual model that demonstrates the emergence of homeostasis as a consequence of the feedback loop operating between life and its environment. Formulating the model in terms of Gaussian processes allows the development of novel computational methods in order to provide solutions. We find that the stability of this system will typically increase then remain constant with an increase in biological diversity and that the number of attractors within the phase space exponentially increases with the number of environmental variables while the probability of the system being in an attractor that lies within prescribed boundaries decreases approximately linearly. We argue that the cybernetic concept of rein control provides insights into how this model system, and potentially any system that is comprised of biological to environmental feedback loops, self-organises into homeostatic states.
Wonderland model by Sanderson et al.; see Alexandra Milik, Alexia Prskawetz, Gustav Feichtinger, and Warren C. Sanderson, “Slow-fast Dynamics in Wonderland,” Environmental Modeling and Assessment 1 (1996) 3-17.
Need to time model runs? One way to do it is with Vensim’s log commands, in a cmd script or Venapp:
LOG>MESSAGE|timing.txt|"About to run."
These commands were designed for logging user interaction, so they don’t offer the millisecond resolution needed for timing small models. For that, another option is to use the .dll.
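For instance, here’s a rough sketch of timing a run through the DLL from Python via ctypes; the DLL and model file names are placeholders for whatever your installation uses, and I’m assuming the standard vensim_command entry point:

```python
import ctypes
import time

# Names/paths are assumptions for the example; adjust for your installation (Windows only).
vensim = ctypes.windll.LoadLibrary("vendll32.dll")

def cmd(s):
    # vensim_command takes a command string and returns nonzero on success
    if not vensim.vensim_command(s.encode("ascii")):
        raise RuntimeError(f"command failed: {s}")

cmd("SPECIAL>LOADMODEL|mymodel.mdl")
cmd("SIMULATE>RUNNAME|timing_test")

t0 = time.perf_counter()
cmd("MENU>RUN|o")   # 'o' = overwrite an existing run without asking
elapsed = time.perf_counter() - t0
print(f"run took {elapsed * 1000:.1f} ms")
```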
Generally, model execution time is roughly proportional to equation count × time step count, with exceptions for iterative functions (FIND ZERO) and RK auto integration. You can use the .dll’s vensim_get_varattrib to count equations (expanding subscripts) if that helps with planning for maximum simulation speed.
Catastrophic and sudden collapses of ecosystems are sometimes preceded by early warning signals that potentially could be used to predict and prevent a forthcoming catastrophe. Universality of these early warning signals has been proposed, but no formal proof has been provided. Here, we show that in relatively simple ecological models the most commonly used early warning signals for a catastrophic collapse can be silent. We underpin the mathematical reason for this phenomenon, which involves the direction of the eigenvectors of the system. Our results demonstrate that claims on the universality of early warning signals are not correct, and that catastrophic collapses can occur without prior warning. In order to correctly predict a collapse and determine whether early warning signals precede the collapse, detailed knowledge of the mathematical structure of the approaching bifurcation is necessary. Unfortunately, such knowledge is often only obtained after the collapse has already occurred.
This is a third-order ecological model with juvenile and adult prey and a predator:
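For anyone who wants to experiment outside Vensim, here’s a generic stage-structured sketch of the same flavor of system in Python; the functional forms and parameters are placeholders of mine, not the equations from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=1.0, m=0.5, a=0.3, e=0.5, dj=0.05, da=0.05, dp=0.2):
    """Generic juvenile prey (J), adult prey (A), predator (P) system.
    Placeholder functional forms, not the model from the paper."""
    J, A, P = y
    dJ = r * A - m * J - dj * J - a * J * P   # births, maturation out, mortality, predation on juveniles
    dA = m * J - da * A                       # maturation in, adult mortality
    dP = e * a * J * P - dp * P               # predator growth from eating juveniles, predator mortality
    return [dJ, dA, dP]

sol = solve_ivp(rhs, (0, 200), [1.0, 1.0, 0.5], max_step=0.1)
print(sol.y[:, -1])   # final state
```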
Hard on the heels of the commitment model comes another interesting small social dynamics model on Arxiv. This one’s about the dynamics of the Arab Spring.
The self-immolation of Mohamed Bouazizi on December 17, 2010 in the small Tunisian city of Sidi Bouzid, set off a sequence of events culminating in the revolutions of the Arab Spring. It is widely believed that the Internet and social media played a critical role in the growth and success of protests that led to the downfall of the regimes in Egypt and Tunisia. However, the precise mechanisms by which these new media affected the course of events remain unclear. We introduce a simple compartmental model for the dynamics of a revolution in a dictatorial regime such as Tunisia or Egypt which takes into account the role of the Internet and social media. An elementary mathematical analysis of the model identifies four main parameter regions: stable police state, meta-stable police state, unstable police state, and failed state. We illustrate how these regions capture, at least qualitatively, a wide range of scenarios observed in the context of revolutionary movements by considering the revolutions in Tunisia and Egypt, as well as the situation in Iran, China, and Somalia, as case studies. We pose four questions about the dynamics of the Arab Spring revolutions and formulate answers informed by the model. We conclude with some possible directions for future work.
The model has two levels, but since non-revolutionaries = 1 – revolutionaries, they’re not independent, so it’s effectively first order. This permits thorough analytical exploration of the dynamics.
This model differs from typical SD practice in that the formulations for visibility and policing use simple discrete logic – policing either works or it doesn’t, for example. There are also no explicit perception processes or delays. This keeps things simple for analysis, but also makes the behavior somewhat bang-bang. An interesting extension of this model would be to explore more operational, behavioral decision rules.
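Purely to illustrate what a bang-bang formulation of that sort looks like, here’s a minimal sketch; the threshold logic and numbers are placeholders of mine, not the paper’s equations (or my replication’s):

```python
def step(r, dt=0.01, c=1.0, policing_capacity=0.3, visibility_threshold=0.05):
    """One Euler step for the revolutionary fraction r (illustrative only)."""
    visible = r > visibility_threshold        # protest is visible, or it isn't
    policing_works = r < policing_capacity    # police can suppress a small protest, or they can't
    recruitment = c * r * (1.0 - r) if visible else 0.0
    suppression = c * r if policing_works else 0.0
    return min(1.0, max(0.0, r + dt * (recruitment - suppression)))

r = 0.06
for _ in range(2000):
    r = step(r)
print(r)   # with these numbers the protest is policed back down: a "stable police state"
```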
The model can be used as is to replicate the experiments in Figs. 8 & 9. Further experiments in the paper – including parameter changes that reflect social media – should also be replicable, but would take a little extra structure or Synthesim overrides.
An interesting paper on Arxiv caught my eye the other day. It uses a simple model of a bipolar debate to explore policies that encourage moderation.
Some of the most pivotal moments in intellectual history occur when a new ideology sweeps through a society, supplanting an established system of beliefs in a rapid revolution of thought. Yet in many cases the new ideology is as extreme as the old. Why is it then that moderate positions so rarely prevail? Here, in the context of a simple model of opinion spreading, we test seven plausible strategies for deradicalizing a society and find that only one of them significantly expands the moderate subpopulation without risking its extinction in the process.
I haven’t had much time to write lately – too busy writing Vensim code, working on En-ROADS, and modeling the STEM workforce.
So, in the meantime, here’s a nice tutorial on the use of ODBC database links with Vensim DSS, from Mohammad Jalali:
This can be a powerful way to ingest a lot of data from diverse sources, and to share and archive simulations.
Big data is always a double-edged sword in consulting projects. Without it, you don’t know much. But with it, your time is consumed with discovering all the flaws of the data, which remain because, most likely, no one has ever looked at it seriously from a strategic/dynamic perspective before. It’s typically transactionally correct, because people verify that they get their orders and paychecks. But at an aggregate level it’s often rife with categorization mismatches across organizational boundaries and other pathologies.