What's the empirical distribution of parameters?

Vensim’s answer to exploring ill-behaved problem spaces is either to do hill-climbing with random restarts, or MCMC and simulated annealing. Either way, you need to start with some initial distribution of points to search.

It’s helpful if that distribution is somehow efficient at exploring the interesting parts of the space. I think this is closely related to the problem of selecting uninformative priors in Bayesian statistics. There’s lots of research about appropriate uninformative priors for various kinds of parameters. For example,

  • If a parameter represents a probability, one might choose the Jeffreys or Haldane prior.
  • If nothing else is known about a positive parameter, indifference to units, scale, and inversion suggests a log-uniform prior.

However, when a user specifies a parameter in Vensim, we don’t even know what it represents. So what’s the appropriate prior for a parameter that might be positive or negative, a probability, a time constant, a scale factor, an initial condition for a physical stock, etc.?
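The scale-invariance that motivates the log-uniform choice is easy to see directly. Here’s a minimal sketch (plain Python, with hypothetical bounds): sampling uniformly in log space makes every decade of magnitude equally likely, so the prior doesn’t privilege any particular choice of units.

```python
import math
import random

def sample_log_uniform(lo, hi, rng):
    """Draw from a log-uniform prior on [lo, hi], lo > 0."""
    u = rng.uniform(math.log(lo), math.log(hi))
    return math.exp(u)

rng = random.Random(1)
# hypothetical positive parameter, known only to lie in [1e-3, 1e3]
draws = [sample_log_uniform(1e-3, 1e3, rng) for _ in range(100_000)]

# each of the six decades captures ~1/6 of the draws
for lo in (1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0):
    frac = sum(lo <= x < lo * 10 for x in draws) / len(draws)
    print(f"[{lo:g}, {lo*10:g}): {frac:.3f}")
```

A sign-indefinite parameter would need something more elaborate, e.g. a symmetrized version with a mass at or near zero.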

On the other hand, we aren’t quite as ignorant as the pure maximum entropy derivation usually assumes. For example,

  • All numbers have to lie between the largest and smallest representable float or double, i.e. about ±3.4e38 or ±1.8e308.
  • More practically, no one scales their models such that a parameter like 6.5e173 would ever be required. There’s a reason that metric prefixes range from yotta to yocto (10^24 to 10^-24). The only constant I can think of that approaches that range is Avogadro’s number (though there are probably others), and that’s not normally a changeable parameter.
  • For lots of things, one can impose more constraints, given a little more information,
    • A time constant or delay must lie in [TIME STEP, infinity), and the “infinity” of interest is practically limited by the simulation duration.
    • A fractional rate of change similarly must lie in [-1/TIME STEP, 1/TIME STEP] for stability.
    • Other parameters probably have limits for stability, though it may be hard to discover them except by experiment.
    • A parameter with units of year is probably modern, [1900-2100], unless you’re doing Mayan archaeology or paleoclimate.
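The rules of thumb above translate directly into search bounds. A minimal sketch (the helper and the particular TIME STEP and duration values are hypothetical, for illustration only):

```python
# Hypothetical settings for an illustrative model
TIME_STEP = 0.125    # assumed simulation time step
INITIAL_TIME = 0.0
FINAL_TIME = 100.0   # assumed simulation end time

def bounds(kind):
    """Plausible search bounds for a parameter, by rough type."""
    duration = FINAL_TIME - INITIAL_TIME
    if kind == "time constant":
        # must be resolvable by the solver; the practical "infinity"
        # is limited by the simulation duration
        return (TIME_STEP, duration)
    if kind == "fractional rate":
        # Euler stability limit
        return (-1.0 / TIME_STEP, 1.0 / TIME_STEP)
    if kind == "calendar year":
        # probably modern, absent archaeology or paleoclimate
        return (1900.0, 2100.0)
    raise ValueError(f"unknown kind: {kind}")

print(bounds("time constant"), bounds("fractional rate"))
```

The point is that even this much typing information, which only the user can supply, shrinks the search space enormously relative to the raw float range.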

At some point, the assumptions become too heroic, and we need to rely on users for some help. But it would still be really interesting to see the distribution of all parameters in real models. (See next …)

In {R^n, n large}, no one can hear you scream.

I haven’t had time to write much lately. I spent several weeks in arcane code purgatory, discovering the fun of macros containing uninitialized thread pointers that only fail in 64 bit environments, and for different reasons on Windows, Mac and Linux. That’s a dark place that I hope never again to visit.

Now I’m working on fun things again, but they’re secret, so I can’t discuss details. Instead, I’ll just share a little observation that came up in the process.

Frequently, we do calibration or policy optimization on models with a lot of parameters. “A lot” is actually a pretty small number – like 10 – when you have to do things by brute force. This works more often than we have a right to expect, given the potential combinatorial explosion this entails.

However, I suspect that we (at least I) don’t fully appreciate what’s going on. Here are two provable facts that make sense upon reflection, but weren’t part of my intuition about such problems:

  • The volume of a hypercube grows exponentially with dimension, so the number of points needed to sample it at any given density explodes.
  • The fraction of a hypercube’s volume that lies in its inscribed hypersphere goes to zero as dimension rises; almost all of the volume is out in the corners.

In other words, R^n gets big really fast, and it’s all corners. The saving grace is probably that sensible parameters are frequently distributed on low-dimensional manifolds embedded in high dimensional spaces. But we should probably be more afraid than we typically are.
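The “all corners” point can be checked exactly: the ratio of a unit n-ball’s volume to that of its enclosing cube collapses toward zero as n grows. A short sketch in plain Python:

```python
import math

def ball_cube_ratio(n):
    """Volume of the unit n-ball divided by the volume of its
    enclosing cube [-1, 1]^n: pi^(n/2) / (Gamma(n/2 + 1) * 2^n)."""
    return math.pi ** (n / 2) / (math.gamma(n / 2 + 1) * 2 ** n)

for n in (2, 5, 10, 20):
    print(n, ball_cube_ratio(n))
# the ratio falls from ~0.785 at n=2 to ~2.5e-8 at n=20,
# so a random point in the cube is almost surely in a "corner"
```

At n=20, a uniform random restart essentially never lands near the center of the box, which is often where the sensible parameter combinations live.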

We already tried monarchy

Tom Perkins thinks votes should be proportional to taxes paid. (As if they weren’t already, to some degree!)

Images: CNN & The London Dungeon

You don’t have to look very far in history to find a system in which political power and ownership of assets were embodied in the same few people. We called its advocates “monarchists,” and there were remedies for that.

The founding fathers were rightfully aware of the need to prevent runaway positive feedback of wealth and power. Perkins evidently fears runaway negative feedback:

“The fear is wealth tax, higher taxes, higher death taxes — just more taxes until there is no more 1%. And that that will creep down to the 5% and then the 10%,” he said.

This ignores conservation laws. If punitive taxation could really bring the wealth of the 1% down, where would all that money, and its underlying assets, actually go? And how can this be a real concern, when in fact incomes at the top are dramatically increasing by any measure?

So, Perkins is,

  • not a student of history
  • not a fan of democracy
  • not a keen observer of current trends
  • bad at economics

— or —

  • willing to fib about it all for personal gain

and we should give him a million votes?

Update: I’ve played this game before.

Species Restoration & Policy Resistance

I’ve seen a lot of attention lately to restoration of extinct species. It strikes me as a band-aid, not a solution.

Here’s the core of the system:

Critters don’t go extinct for lack of human intervention. They go extinct because the balance of birth and death rates is unfavorable, so that population declines, and (stochastically) winks out.

That happens naturally of course, but anthropogenic extinctions are happening much faster than usual. The drivers (red) are direct harvest and loss of the resource base on which species rely. The resource base is largely habitat, but also other species and ecosystem services that are themselves harvested, poisoned by pollutants, etc.

Reintroducing lost species may be helpful in itself (who wouldn’t want to see millions of passenger pigeons?), but unless the basic drivers of overharvest and resource loss are addressed, species are reintroduced into an environment in which the net gain of births and deaths favors re-extinction. What’s the point of that?
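A minimal stock-flow sketch makes the point (illustrative numbers, not a calibrated ecological model): as long as net births minus deaths stays negative, a reintroduction pulse just decays away again.

```python
def simulate(pop0, net_rate, reintro_at, reintro_size, years=100, dt=0.25):
    """Euler integration of population = INTEG(births - deaths), with
    births - deaths = net_rate * population, plus a one-time
    reintroduction pulse. Parameters are illustrative only."""
    pop, t = pop0, 0.0
    while t < years:
        pop += dt * net_rate * pop          # unfavorable net rate erodes the stock
        if abs(t - reintro_at) < dt / 2:    # one-time reintroduction pulse
            pop += reintro_size
        t += dt
    return pop

# hypothetical unfavorable environment: 5%/yr net decline
remnant = simulate(pop0=1000, net_rate=-0.05, reintro_at=50, reintro_size=1000)
print(remnant)  # a small fraction of the 1000 reintroduced: re-extinction resumes
```

Changing the reintroduction size or timing doesn’t help; only changing the sign of the net rate, i.e. the drivers, does.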

If the drivers of extinction – ultimately population and capital growth plus bad management – were under control, we wouldn’t need much restoration. If they’re out of control, genetic restoration seems likely to be overwhelmed, or perhaps even to contribute to problems through parachuting cats side effects.

This is not where I’d be looking for leverage.

Geoengineering justice & governance

From Clive Hamilton via Technology Review,

If humans were sufficiently omniscient and omnipotent, would we, like God, use climate engineering methods benevolently? Earth system science cannot answer this question, but it hardly needs to, for we know the answer already. Given that humans are proposing to engineer the climate because of a cascade of institutional failings and self-interested behaviours, any suggestions that deployment of a solar shield would be done in a way that fulfilled the strongest principles of justice and compassion would lack credibility, to say the least.

Geoengineering seems sure to make a mess, even if the tech works.

Self-generated seasonal cycles

This time of year, systems thinkers should eschew sugar plum fairies and instead dream of Industrial Dynamics, Appendix N:

Self-generated Seasonal Cycles

Industrial policies adopted in recognition of seasonal sales patterns may often accentuate the very seasonality from which they arise. A seasonal forecast can lead to action that may cause fulfillment of the forecast. In closed-loop systems this is a likely possibility. The analysis of sales data in search of seasonality is fraught with many dangers. As discussed in Appendix F, random-noise disturbances contain a broad band of component frequencies. This means that any effort toward statistical isolation of a seasonal sales component will find some seasonality in the random disturbances. Should the seasonality so located lead to decisions that create actual seasonality, the process can become self-regenerative.

Self-induced seasonality appears to occur many places in American industry. Sometimes it is obvious and clearly recognized, and perhaps little can be done about it. An example of the obvious is the strong seasonality in items such as cameras sold in the Christmas trade. By bringing out new models and by advertising and other sales promotion in anticipation of Christmas purchases, the industry tends to concentrate its sales at this particular time of year.

Other kinds of seasonality are much less clear. Almost always when seasonality is expected, explanations can be found to justify whatever one believes to be true. A tradition can be established that a particular item sells better at a certain time of year. As this “fact” becomes more and more widely believed, it may tend to concentrate sales effort at the time when the customers are believed to wish to buy. This in turn still further heightens the sales at that particular time.
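Forrester’s mechanism is easy to reproduce in a toy loop (a sketch with made-up parameters): a small initial blip in monthly sales shares, fed back through next year’s forecast-driven promotion, grows year over year.

```python
def run_years(deviation0, gain, years):
    """Toy self-generated seasonality: each year's promotion effort is
    proportional to last year's estimated seasonal deviation, which in
    turn amplifies that deviation. Parameters are made up."""
    base = 100.0
    dev = list(deviation0)  # monthly fractional deviations from mean sales
    for _ in range(years):
        effort = [gain * d for d in dev]          # act on the "forecast"
        sales = [base * (1 + e) for e in effort]  # promotion moves sales
        mean = sum(sales) / len(sales)
        dev = [s / mean - 1 for s in sales]       # re-estimate seasonality
    return dev

start = [0.0] * 11 + [0.02]          # a 2% December blip, otherwise flat
end = run_years(start, gain=1.3, years=10)
print(max(end))  # the blip has grown roughly 13-fold
```

With gain above 1, any noise the statistical isolation picks up is self-regenerative, just as the excerpt describes; with gain below 1 it dies out.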

Retailer sales & e-commerce sales, from FRED

 

How I learned to stop worrying and love methane

RealClimate has a nice summary of recent atmospheric methane findings. Here’s the structure:

The bad news (red) has been that methane release from permafrost and clathrates on the continental shelf appears to be significant. At the same time, methane release from natural gas seems to be larger than previously thought, and (partly for the same reason – fracking) gas resources appear to be larger. Both put upward pressure on atmospheric methane.

However, there are some constraints as well. The methane budget must be consistent with observations of atmospheric concentrations and gradients (green). Therefore, if one source is thought to be bigger, it must be the case historically that other natural or anthropogenic sources are smaller (or perhaps uptake is faster) by an offsetting amount (blue).
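The budget constraint is just conservation of mass. A toy version (all numbers made up, in Tg CH4/yr) shows how revising one source upward forces an offsetting revision somewhere else:

```python
# Hypothetical methane budget: observed_accumulation = sum(sources) - sink
sink = 550.0
observed_accumulation = 10.0
known_sources = {"wetlands": 180.0, "agriculture": 200.0, "fossil": 100.0}

def residual_source(known, sink, accumulation):
    """Whatever source budget remains once the mass-balance
    constraint is applied to the known sources."""
    return accumulation + sink - sum(known.values())

before = residual_source(known_sources, sink, observed_accumulation)

# hypothetical new finding: fossil emissions are 30 Tg/yr bigger than thought
known_sources["fossil"] += 30.0
after = residual_source(known_sources, sink, observed_accumulation)

print(before, after)  # the residual shrinks by exactly the 30 Tg revision
```

Atmospheric concentrations and gradients pin down the left-hand side, so source revisions can only reshuffle the right.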

This bad-news-good-news story does not rule out positive feedbacks from temperature or atmospheric chemistry, but at least we’re not cooked yet.

Announcing Leverage Networks

Leverage Networks is filling the gap left by the shutdown of Pegasus Communications:

We are excited to announce our new company, Leverage Networks, Inc. We have acquired most of the assets of Pegasus Communications and are looking forward to driving its reinvention. Below is our official press release which provides more details. We invite you to visit our interim website at leveragenetworks.com to see what we have planned for the upcoming months. You will soon be able to access most of the existing Pegasus products through a newly revamped online store that offers customer reviews, improved categorization, and helpful suggestions for additional products that you might find interesting. Features and applications will include a calendar of events, a service marketplace, and community forums.

As we continue the reinvention, we encourage suggestions, thoughts, inquiries and any notes on current and future products, services or resources that you feel support our mission of bringing the tools of Systems Thinking, System Dynamics, and Organizational Learning to the world.

Please share or forward this email to friends and colleagues and watch for future emails as we roll out new initiatives.

Thank you,

Kris Wile, Co-President

Rebecca Niles, Co-President

Kate Skaare, Director


As we create the Leverage Networks platform, it is important that the entire community surrounding Organizational Learning, Systems Thinking and System Dynamics be part of the evolution. We envision a virtual space that is composed of both archival and newly generated (by partners, community members) resources in our Knowledge Base, a peer-supported Service Marketplace where service providers (coaches, graphic facilitators, modelers, and consultants) can hang a virtual “shingle” to connect with new projects, and finally a fully interactive Calendar of events for webinars, seminars, live conferences and trainings.

If you are interested in working with us as a partner or vendor, please email partners@leveragenetworks.com

Why ask why?

Why ask why?

Forward causal inference and reverse causal questions

Andrew Gelman & Guido Imbens

The statistical and econometrics literature on causality is more focused on “effects of causes” than on “causes of effects.” That is, in the standard approach it is natural to study the effect of a treatment, but it is not in general possible to determine the causes of any particular outcome. This has led some researchers to dismiss the search for causes as “cocktail party chatter” that is outside the realm of science. We argue here that the search for causes can be understood within traditional statistical frameworks as a part of model checking and hypothesis generation. We argue that it can make sense to ask questions about the causes of effects, but the answers to these questions will be in terms of effects of causes.

I haven’t had a chance to digest this yet, but it’s an interesting topic. It’s particularly relevant to system dynamics modeling, where we are seldom seeking only y = f(x), but rather an endogenous theory where x = g(y) also.

See also: Causality in Nonlinear Systems

h/t Peter Christiansen.

Are all models wrong?

Artem Kaznatcheev considers whether Box’s slogan, “all models are wrong,” should be framed as an empirical question.

Building on the theme of no unnecessary assumptions about the world, @BlackBrane suggested … a position I had not considered before … for entertaining the possibility of a mathematical universe:

[Box’s slogan is] an affirmative statement about Nature that might in fact not be true. Who’s to say that at the end of the day, Nature might not correspond exactly to some mathematical structure? I think the claim is sometimes guilty of exactly what it tries to oppose, namely unjustifiable claims to absolute truth.

I suspect that we won’t learn the answer, at least in my lifetime.

In a sense, the appropriate answer is “who cares?” Whether or not there can in principle be perfect models, the real problem is finding ones that are useful in practice. The slogan isn’t helpful for this. (NIPCC authors seem utterly clueless as well.)

In a related post, AK identifies a 3-part typology of models that suggests a basis for guidance:

  • “Insilications – In physics, we are used to mathematical models that correspond closely to reality. All of the unknown or system dependent parameters are related to things we can measure, and the model is then used to compute dynamics, and predict the future value of these parameters. …
  • Heuristics – … When George Box wrote that “all models are wrong, but some are useful”, I think this is the type of models he was talking about. It is standard to lie, cheat, and steal when you build these sort of models. The assumptions need not be empirically testable (or even remotely true, at times), and statistics and calculations can be used to varying degree of accuracy or rigor. … A theorist builds up a collection of such models (or fables) that they can use as theoretical case studies, and a way to express their ideas. It also allows for a way to turn verbal theories into more formal ones that can be tested for basic consistency. …
  • Abstractions – … These are the models that are most common in mathematics and theoretical computer science. They have some overlap with analytic heuristics, except are done more rigorously and not with the goal of collecting a bouquet of useful analogies or case studies, but of general statements. An abstraction is a model that is set up so that given any valid instantiation of its premises, the conclusions necessarily follow. …”

The social sciences are solidly in the heuristics realm, while a lot of science is in the insilication category. The difficulty is knowing where the boundary lies. Actually, I think it’s a continuum, not a set of categories. One can get some hint by looking at the problem context for models. For example:

          | Known state variables? | Reality Checks (conservation laws, etc.)? | Data per concept? | Structural information from more granular observations or models? | Experiments? | Computation?
Physics   | yes | lots | lots | yes | yes | often easy
Climate   | yes | some | some | for many things | not at scale | limited
Economics | no | some | some – flaky | microfoundations often lacking or unused | not at scale | limited

(Ironically, I’m implying a model here, which is probably wrong, but hopefully useful.)

A lot of our most interesting problems are currently at the heuristics end of the spectrum. Some may migrate toward better model performance, and others probably won’t – particularly models of decision processes that willfully ignore models.