Meta MetaSD

I was looking at my Google stats the other day, curious what posts interest people most. The answer was surprising. Guess what’s #1?

It’s not “Are Causal Loop Diagrams Useful?” (That’s #2.)

It’s not what I’d consider my best technical work, like Bathtub Statistics or Fun with 1D Vector Fields.

It’s not about something controversial, like On Limits to Growth or The alien hail Mary, and other climate policy plays.

Nor is it a hot topic, like Data science meets the bottom line.

It’s not something practical, like Writing an SD Conference Paper.

#1 is the Fibonacci sequence, How Many Pairs of Rabbits Are Created by One Pair in One Year?

Go figure.

Problem Formulation

Nelson Repenning & colleagues have a nice new paper on problem formulation. It’s set in a manufacturing context, but the advice is as relevant for building models as for building motorcycles:

Anatomy of a Good Problem Statement
A good problem statement has five basic elements:
• it references something that the organization cares about and connects that element to a clear and specific goal or target;
• it contains a clear articulation of the gap between the current state and the goal;
• the key variables—the target, the current state and the gap—are quantifiable, if not immediately measurable;
• it is as neutral as possible concerning possible diagnoses or solutions;
• it is sufficiently small in scope that you can tackle it quickly.

In {R^n, n large}, no one can hear you scream.

I haven’t had time to write much lately. I spent several weeks in arcane code purgatory, discovering the fun of macros containing uninitialized thread pointers that only fail in 64-bit environments, and for different reasons on Windows, Mac and Linux. That’s a dark place that I hope never again to visit.

Now I’m working on fun things again, but they’re secret, so I can’t discuss details. Instead, I’ll just share a little observation that came up in the process.

Frequently, we do calibration or policy optimization on models with a lot of parameters. “A lot” is actually a pretty small number – like 10 – when you have to do things by brute force. This works more often than we have a right to expect, given the combinatorial explosion it entails.
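
To see how quickly “a lot” arrives, here’s a rough back-of-the-envelope sketch (the grid resolution and run time are my illustrative assumptions, not taken from any particular model):

```python
# Hypothetical brute-force grid search: even a coarse grid over a modest
# number of parameters explodes into an impractical number of model runs.
n_params = 10          # "a lot" of parameters, per the text above
grid_points = 5        # assumed: only 5 trial values per parameter
seconds_per_run = 1.0  # assumed: one second per simulation

runs = grid_points ** n_params
days = runs * seconds_per_run / 86400
print(f"model runs needed: {runs:,}")          # 9,765,625
print(f"wall-clock time:   ~{days:.0f} days")  # ~113 days of nonstop simulation
```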

However, I suspect that we (at least I) don’t fully appreciate what’s going on. Here are two provable facts that make sense upon reflection, but weren’t part of my intuition about such problems:
• the volume of a parameter hypercube grows exponentially with the number of dimensions, so even a modestly bounded search space becomes astronomically large;
• the fraction of a hypercube’s volume that lies inside the inscribed hypersphere goes to zero as the dimension grows, so essentially all of the volume ends up in the corners.

In other words, R^n gets big really fast, and it’s all corners. The saving grace is probably that sensible parameters are frequently distributed on low-dimensional manifolds embedded in high-dimensional spaces. But we should probably be more afraid than we typically are.
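
To make the “all corners” point concrete, here’s a minimal Monte Carlo sketch (my own illustration, not part of the original argument): it estimates how much of a unit hypercube’s volume lies inside the inscribed hypersphere as the number of dimensions grows.

```python
import random

def fraction_inside_sphere(n, samples=100_000):
    """Estimate the share of the unit hypercube [0,1]^n that falls within
    the hypersphere of radius 0.5 centered at the cube's midpoint."""
    inside = 0
    for _ in range(samples):
        # squared distance of a random point from the center (0.5, ..., 0.5)
        d2 = sum((random.random() - 0.5) ** 2 for _ in range(n))
        if d2 <= 0.25:  # radius 0.5, squared
            inside += 1
    return inside / samples

for n in (1, 2, 5, 10, 20):
    print(f"n = {n:2d}: ~{fraction_inside_sphere(n):.4f} of the cube is in the ball")
# The share collapses from 1.0 at n = 1 to about 0.0025 at n = 10 and is
# indistinguishable from zero at n = 20: almost everything is "corner".
```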

Announcing Leverage Networks

Leverage Networks is filling the gap left by the shutdown of Pegasus Communications:

We are excited to announce our new company, Leverage Networks, Inc. We have acquired most of the assets of Pegasus Communications and are looking forward to driving its reinvention. Below is our official press release which provides more details. We invite you to visit our interim website at leveragenetworks.com to see what we have planned for the upcoming months. You will soon be able to access most of the existing Pegasus products through a newly revamped online store that offers customer reviews, improved categorization, and helpful suggestions for additional products that you might find interesting. Features and applications will include a calendar of events, a service marketplace, and community forums.

As we continue the reinvention, we encourage suggestions, thoughts, inquiries and any notes on current and future products, services or resources that you feel support our mission of bringing the tools of Systems Thinking, System Dynamics, and Organizational Learning to the world.

Please share or forward this email to friends and colleagues and watch for future emails as we roll out new initiatives.

Thank you,

Kris Wile, Co-President

Rebecca Niles, Co-President

Kate Skaare, Director

Leverage Networks

As we create the Leverage Networks platform, it is important that the entire community surrounding Organizational Learning, Systems Thinking and System Dynamics be part of the evolution. We envision a virtual space that is composed of both archival and newly generated (by partners, community members) resources in our Knowledge Base, a peer-supported Service Marketplace where service providers (coaches, graphic facilitators, modelers, and consultants) can hang a virtual “shingle” to connect with new projects, and finally a fully interactive Calendar of events for webinars, seminars, live conferences and trainings.

If you are interested in working with us as a partner or vendor, please email partners@leveragenetworks.com.

Pindyck on Integrated Assessment Models

Economist Robert Pindyck takes a dim view of the state of integrated assessment modeling:

Climate Change Policy: What Do the Models Tell Us?

Robert S. Pindyck

NBER Working Paper No. 19244

Issued in July 2013

Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.

Freepers seem to think that this means the whole SCC enterprise is GIGO. But this is not a case where uncertainty is your friend. Bear in mind that the deficiencies Pindyck discusses, discounting welfare and ignoring extreme outcomes, create a one-sided bias toward an SCC that is too low. Zero (the de facto internalized SCC in most places) is one number that’s virtually certain to be wrong.
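
To see how much work the “arbitrary” discount rate does, here’s a toy present-value calculation (the $1 trillion damage figure, the 2100 date, and the rates are hypothetical, chosen only to show the spread, not taken from Pindyck):

```python
# Present value today of a fixed climate damage incurred in 2100, under
# several discount rates. The rate choice alone moves the answer by more
# than two orders of magnitude.
damage = 1.0e12          # assumed: $1 trillion of damages in the year 2100
years = 2100 - 2013      # horizon from the time of writing

for rate in (0.01, 0.03, 0.05, 0.07):
    pv = damage / (1 + rate) ** years
    print(f"discount rate {rate:.0%}: PV ≈ ${pv / 1e9:,.1f} billion")
# ~$421 billion at 1%, but only ~$2.8 billion at 7%: the same physical
# damage yields a ~150x range in its social cost from one free parameter.
```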

Tasty Menu

From the WPI online graduate program and courses in system dynamics:

Truly a fine lineup!

Population Growth Up

According to Worldwatch, there’s been an upward revision in UN population projections. As things now stand, the end-of-century tally settles out just short of 11 billion (medium forecast of 10.9 billion, with a range of 6.8 to 16.6 billion).

The change is due to higher than expected fertility:

Compared to the UN’s previous assessment of world population trends, the new projected total population is higher, particularly after 2075. Part of the reason is that current fertility levels have been adjusted upward in a number of countries as new information has become available. In 15 high-fertility countries of sub-Saharan Africa, the estimated average number of children per woman has been adjusted upwards by more than 5 per cent.

The projections are essentially open loop with respect to major environmental or other driving forces, so the scenario range doesn’t reflect full uncertainty. Interestingly, the UN varies fertility but not mortality in projections. Small differences in fertility make big differences in population:

The “high-variant” projection, for example, which assumes an extra half of a child per woman (on average) than the medium variant, implies a world population of 10.9 billion in 2050. The “low-variant” projection, where women, on average, have half a child less than under the medium variant, would produce a population of 8.3 billion in 2050. Thus, a constant difference of only half a child above or below the medium variant would result in a global population of around 1.3 billion more or less in 2050 compared to the medium-variant forecast.
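
A crude back-of-the-envelope check (all of the round numbers below are my own assumptions, not the UN’s) suggests why half a child per woman moves the 2050 total by something like a billion people:

```python
# Stylized calculation: extra births from a +0.5 TFR difference, accumulated
# to 2050, ignoring mortality among the extra births, growth in the number
# of mothers, and the children those extra births will themselves have.
women_15_49 = 1.9e9        # assumed women of reproductive age worldwide, held constant
reproductive_span = 35.0   # assumed years of childbearing per woman
tfr_gap = 0.5              # high variant vs. medium variant: half a child per woman
horizon = 2050 - 2013      # years over which the gap accumulates

extra_births_per_year = women_15_49 * tfr_gap / reproductive_span
extra_people_by_2050 = extra_births_per_year * horizon
print(f"extra births per year: ~{extra_births_per_year / 1e6:.0f} million")
print(f"extra people by 2050:  ~{extra_people_by_2050 / 1e9:.1f} billion")
# ~27 million extra births a year add up to ~1 billion people by 2050;
# momentum and second-generation births push the UN's figure to ~1.3 billion.
```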

There’s a nice backgrounder on population projections, by Brian O’Neill et al., in Demographic Research. See Fig. 6 for a comparison of projections.