## Living Litigious

From a box of tea:

Sit cross-legged or in a chair with spine straight and feet flat. Curl your tongue down its length and extend slightly past lips, inhale deeply through tongue and exhale through nose. Continue for 1 to 3 minutes. Finally, inhale, pull tongue in and hold breath briefly, exhale and relax. Feel the cool burst of refreshment.

Before doing this exercise or participating in any exercise program, consult your physician.

Hello, doctor? Is it OK if I breathe?

## SD & ABM: Don't throw stones; build bridges

There’s an old joke:

Q: Why are the debates so bitter in academia?

A: Because the stakes are so low.

The stakes are actually very high when models intersect with policy, I think, but sometimes academic debates come across as needlessly petty. That joke came to mind when a colleague shared this presentation abstract:

Pathologies of System Dynamics Models or “Why I am Not a System Dynamicist”

by Dr. Robert Axtell

So-called system dynamics (SD) models are typically interpreted as a summary or aggregate representation of a dynamical system composed of a large number of interacting entities. The high dimensional microscopic system is abstracted (notionally if not mathematically) into a ‘compressed’ form, yielding the SD model. In order to be useful, the reduced form representation must have some fidelity to the original dynamical system that describes the phenomena under study. In this talk I demonstrate formally that even so-called perfectly aggregated SD models will in general display a host of pathologies that are a direct consequence of the aggregation process. Specifically, an SD model can exhibit spurious equilibria, false stability properties, modified sensitivity structure, corrupted bifurcation behavior, and anomalous statistical features, all with respect to the underlying microscopic system. Furthermore, perfect aggregation of a microscopic system into an SD representation will generally be either not possible or not unique.

Finally, imperfectly aggregated SD models (surely the norm) can possess still other troublesome features. From these purely mathematical results I conclude that there is a definite sense in which even the best SD models are at least potentially problematical, if not outright mischaracterizations of the systems they purport to describe. Such models may have little practical value in decision support environments, and their use in formulating policy may even be harmful if their inadequacies are insufficiently understood.
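Axtell’s aggregation point is easy to demonstrate with a toy model. The sketch below is my own illustration, not Axtell’s math, and all parameter values are invented: two micro stocks draining at different fractional rates cannot be perfectly summarized by one aggregate stock with a constant rate, because the effective decay rate of the total drifts as the fast stock empties.

```python
# Toy demonstration of imperfect aggregation (illustrative only, not Axtell's).
# Two micro stocks decay at rates k1 and k2; an aggregate model with a single
# constant rate cannot reproduce their total, no matter how it is calibrated.

def micro(x1, x2, k1, k2, dt, steps):
    """Euler-integrate two independent first-order decays; return the total."""
    total = []
    for _ in range(steps):
        x1 += -k1 * x1 * dt
        x2 += -k2 * x2 * dt
        total.append(x1 + x2)
    return total

def aggregate(x, k, dt, steps):
    """Euler-integrate one first-order decay (the 'compressed' SD model)."""
    out = []
    for _ in range(steps):
        x += -k * x * dt
        out.append(x)
    return out

k1, k2, dt, steps = 0.1, 1.0, 0.01, 1000
true_total = micro(50.0, 50.0, k1, k2, dt, steps)
# Calibrate the aggregate rate to match the initial total outflow: (k1+k2)/2.
agg = aggregate(100.0, (k1 + k2) / 2, dt, steps)
# The true total's effective decay rate drifts from 0.55 toward 0.1 as the
# fast stock empties, so the aggregate model diverges substantially:
err = max(abs(a - t) for a, t in zip(agg, true_total))
print(f"max divergence between aggregate and true total: {err:.1f}")
```

Of course, the practical question is whether such divergence matters for the purpose at hand, which is exactly the scope/detail tradeoff discussed below.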

In a technical sense, I agree with everything Axtell says.

However, I could equally well give a talk titled “Pathologies of Agent Models.” The pathologies might include ungrounded representation of agents, overuse of discrete logic and discrete time, failure to nail down alternative hypotheses about agent behavior, and insufficient exploration of sensitivity and robustness. Notice that these are common problems in practice, rather than problems in principle, because in principle one would always prefer a disaggregate representation. The problem is that we don’t build models in principle; we build them in practice. In reality resources – including data, time, computing, statistical methods, and decision maker attention – are limited. If you want more disaggregation, you’ve got to have less of something else.

Clearly there are times when an aggregate approach could be misleading. To leap from the fact that one can demonstrate pathological special cases to the idea that aggregate models are dangerous strikes me as a gross overstatement. Is the danger of aggregating agents really any greater than the danger of omitting feedback by reducing scope in order to enable modeling disaggregate agents? Hopefully this talk will illuminate some of the ways that one might think about whether a situation is dangerous or not, and therefore make informed choices of method and tradeoffs between scope and detail.

Also, models seldom inform policy directly; their influence occurs through improvement of mental models. Agent models could have a lot to offer there, but I haven’t seen many instances where authors developed the lingo to communicate insights to decision makers at their level. (Examples appreciated – any links?) That relegates many agent models to the same role as other black-box models: propaganda.

It’s strange that Axtell is picking on SD. Why not tackle economics? Most economic models have the same aggregation issues, plus they assume equilibrium and rationality from the start, so any representational problems with SD are greatly amplified. Plus the economic models are far more numerous and influential on policy. It’s like Axtell is bullying the wimpy kid in the class, because he’s scared to take on the big one who smokes at recess and shaves in 5th grade.

The sad thing about this confrontational framing is that SD and agent based modeling are a match made in heaven. At some level disaggregate models still need aggregate representations of agents; modelers could learn a lot from SD about good representation of behavior and dynamics, not to mention good habits like units checking that are seldom followed. At the same time, SD modelers could learn a lot about emergent phenomena and the limitations of aggregate representations. A good example of a non-confrontational approach, recognizing shades of gray:

Heterogeneity and Network Structure in the Dynamics of Diffusion: Comparing Agent-Based and Differential Equation Models

When is it better to use agent-based (AB) models, and when should differential equation (DE) models be used? Whereas DE models assume homogeneity and perfect mixing within compartments, AB models can capture heterogeneity across individuals and in the network of interactions among them. AB models relax aggregation assumptions, but entail computational and cognitive costs that may limit sensitivity analysis and model scope. Because resources are limited, the costs and benefits of such disaggregation should guide the choice of models for policy analysis. Using contagious disease as an example, we contrast the dynamics of a stochastic AB model with those of the analogous deterministic compartment DE model. We examine the impact of individual heterogeneity and different network topologies, including fully connected, random, Watts-Strogatz small world, scale-free, and lattice networks. Obviously, deterministic models yield a single trajectory for each parameter set, while stochastic models yield a distribution of outcomes. More interestingly, the DE and mean AB dynamics differ for several metrics relevant to public health, including diffusion speed, peak load on health services infrastructure, and total disease burden. The response of the models to policies can also differ even when their base case behavior is similar. In some conditions, however, these differences in means are small compared to variability caused by stochastic events, parameter uncertainty, and model boundary. We discuss implications for the choice among model types, focusing on policy design. The results apply beyond epidemiology: from innovation adoption to financial panics, many important social phenomena involve analogous processes of diffusion and social contagion. (Paywall; full text of a working version here)
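To make the DE/AB contrast concrete, here’s a minimal sketch in the spirit of the paper (not the authors’ code; parameters are illustrative) comparing a deterministic SIR compartment model with a stochastic agent-based analogue under perfect mixing. Even in this simplest case, the AB runs scatter around, and can differ from, the DE trajectory:

```python
# Deterministic SIR compartment (DE) model vs. a stochastic agent-based (AB)
# analogue with perfect mixing. All parameter values are assumptions chosen
# for illustration, not taken from the paper.
import random

N, beta, gamma, dt, steps = 200, 0.3, 0.1, 0.1, 1000

def de_sir():
    """Euler-integrate the classic SIR compartment model."""
    s, i, r = N - 1.0, 1.0, 0.0
    for _ in range(steps):
        infections = beta * s * i / N * dt
        recoveries = gamma * i * dt
        s -= infections
        i += infections - recoveries
        r += recoveries
    return r  # cumulative disease burden (total ever infected)

def ab_sir(seed):
    """Stochastic agent-based analogue (fully connected network)."""
    rng = random.Random(seed)
    state = ["S"] * (N - 1) + ["I"]  # one initial infective
    for _ in range(steps):
        p_inf = beta * state.count("I") / N * dt  # per-susceptible hazard
        p_rec = gamma * dt                        # per-infective hazard
        state = ["I" if a == "S" and rng.random() < p_inf else
                 "R" if a == "I" and rng.random() < p_rec else a
                 for a in state]
    return N - state.count("S")  # ever infected

de_burden = de_sir()
ab_burdens = [ab_sir(seed) for seed in range(20)]
print(f"DE burden: {de_burden:.0f} of {N}")
print(f"AB burden: mean {sum(ab_burdens) / len(ab_burdens):.0f}, "
      f"range {min(ab_burdens)}-{max(ab_burdens)}")
```

Some AB runs die out early by chance – a phenomenon the deterministic model cannot exhibit – which is one reason the DE and mean AB dynamics can diverge.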

Details, in case anyone reading this can attend – report back here!

Thursday, October 21 at 6:00 – 8:00 PM ** New Time **

Networking 6:00 – 6:45 PM (light refreshments)

Presentation 6:45 – 8:00 PM

Free and open to the public

** NEW Location **

Booz Allen Hamilton – Ballston-Virginia Square

3811 N. Fairfax Drive, Suite 600

Arlington, VA 22203

(703) 816-5200

Between Virginia Square and Ballston Metro stations, between Pollard St. and Nelson St.

On-street parking is available, especially on 10th Street near the Arlington Library.

There will be a Booz Allen representative at the front of the building until 7:00 to greet and escort guests, or call 703-627-5268 to be let in.

RSVP by e-mail to Nicholas Nahas, nahas_nicholas@bah.com, in order to have a rough count of attendees prior to the meeting. Come anyway even if you do not RSVP.

By METRO:

Take the Orange Line to the Ballston station. Exit Metro Station, walk towards the IHOP (right on N. Fairfax) continue for approximately 2-3 blocks. Booz Allen Hamilton (3811 N. Fairfax Dr. Suite 600) is on the left between Pollard St. and Nelson St.

OR Take the Orange Line to the Virginia Square station. Exit Metro Station and go left and walk approximately 2-3 blocks. Booz Allen Hamilton (3811 N. Fairfax Dr. Suite 600) is on the right between Pollard St. and Nelson St.

## Backyard Bear

From our wildlife cam in the woods. This is either bear #4 or 5 this season – not the mother & cub, and not the big black one that hangs out on our porch hoping for cat food. Not sure why, but it’s a very big year for bears.

## Coincidence?

The equinox brought a strange confluence of two recent posts: Brian Eno on intuitive design, via TOP.

The trouble begins with a design philosophy that equates “more options” with “greater freedom.” Designers struggle endlessly with a problem that is almost nonexistent for users: “How do we pack the maximum number of options into the minimum space and price?” In my experience, the instruments and tools that endure (because they are loved by their users) have limited options.

Although designers continue to dream of “transparency” – technologies that just do their job without making their presence felt – both creators and audiences actually like technologies with “personality.” A personality is something with which you can have a relationship. Which is why people return to pencils, violins, and the same three guitar chords.

## Brian Eno, meet Stafford Beer

Brian Eno reflects on feedback and self-organization in musical composition, influenced by the organization of complex systems in Stafford Beer’s The Brain of the Firm.

Stafford Beer was a member of the cybernetics thread of systems thought (if that sounds baffling, read George Richardson’s excellent book on the evolution of thinking about systems).

## The BC carbon tax – good idea, bad implementation

BC’s carbon tax was supposed to be “revenue neutral”, meaning all carbon tax revenue would be “recycled” to British Columbians through personal income tax cuts, corporate income tax cuts and a low-income credit. When the 2008 budget launched the carbon tax, we were provided with a forecast in which revenues precisely matched recycling through tax cuts and credits, with about one-third of revenues going to each of PIT cuts, CIT cuts and the low-income credit.

But recent budgets have shown a carbon tax deficit: tax cuts have completely swamped carbon tax revenues. While some were concerned that the carbon tax would be a “tax grab”, instead we have a carbon tax that is revenue negative, not revenue neutral.

Corporate tax cuts are now absorbing the lion’s share of carbon tax revenues. In 2010/11, they will be equivalent to 57% of carbon tax revenues, compared to one-third in 2008/09. Cutting corporate taxes is the worst possible way of using carbon tax revenues. This is because of the intense concentration of ownership of capital at the top of the income distribution (when you hear corporate tax cuts think upper-income tax cuts), and also because shareholders outside BC, who pay no carbon tax, benefit from corporate tax cuts.

## Superstitious learning about the stimulus

Stimulus regret seems to be pretty widespread now. The undercurrent seems to be that, because unemployment is still 10% etc., the stimulus didn’t work, or at least wasn’t cost effective. This conclusion is based on pattern-matching thinking. Pattern matching assumes a simple A->B correlation: Stimulus->Unemployment. Working backwards from that assumption, one concludes from ongoing high unemployment, and the fact that stimulus did occur, that the correlation between stimulus and unemployment is low.

There are two problems with this logic. First, there are many confounding factors in the A->B relationship that could be responsible for ongoing problems. Second, there’s feedback between A and B, which also means that there are (possibly large) intervening stocks (integrations, accumulations). Stocks decouple the temporal relationship between A and B, so pattern matching doesn’t work.

Consider three possible worlds, schematically below. The blue scenario is the economy’s trajectory with no intervention. In the green scenario, stimulus spending is used, and it works, making recovery faster. In the red scenario, stimulus is counterproductive. If one evaluates the stimulus early, without accounting for delays and accumulation, one can’t help but conclude that the stimulus has failed, because things got worse. Pattern matching doesn’t account for the fact that things might have gotten worse more slowly.
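The three worlds can be sketched with a toy stock-and-flow model (purely illustrative parameters, not an economic model): unemployment is a stock, and stimulus acts on its net flow, so in the early months unemployment rises in all three scenarios even when the stimulus works.

```python
# Toy stock-and-flow sketch of why pattern matching fails when a stock
# intervenes between policy (A) and outcome (B). All numbers are invented.

def simulate(stimulus_effect, months=24):
    """Unemployment as a stock; stimulus shifts the net flow, not the stock."""
    unemployment = 6.0  # percent, initial condition
    trajectory = []
    for month in range(months):
        recession_pressure = 0.4 if month < 12 else 0.0  # exogenous shock
        net_flow = (recession_pressure
                    - 0.05 * (unemployment - 5.0)  # slow self-correction
                    - stimulus_effect)             # policy acts on the flow
        unemployment = max(0.0, unemployment + net_flow)
        trajectory.append(unemployment)
    return trajectory

no_stim   = simulate(0.0)    # blue: no intervention
effective = simulate(0.15)   # green: stimulus works
harmful   = simulate(-0.1)   # red: stimulus is counterproductive

print(f"month 6:  {no_stim[6]:.1f} / {effective[6]:.1f} / {harmful[6]:.1f}")
print(f"month 24: {no_stim[-1]:.1f} / {effective[-1]:.1f} / {harmful[-1]:.1f}")
```

In all three runs unemployment is higher at month 6 than at month 1, so an early pattern-matching evaluation condemns the policy no matter what; only by the end of the run does the ordering of outcomes reveal which world we were in.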

For a politician evaluated by people who ignore system structure, this is a no-win situation. As long as things get worse, blame follows, regardless of what policy is chosen.

I’m not arguing that stimulus works, just that the public debate about it is vacuous. There’s little talk of delays and feedback, let alone model-driven discussion of outcomes; i.e., the only perspective from which one can understand the problem is largely confined to a small circle of wonks.

## Interactive diagrams – obesity dynamics

Food-nutrition-health-exercise-energy interactions are an amazing nest of positive feedbacks, with many win-win opportunities, but more on that another time.

Instead, I’m hoisting an interesting influence diagram about obesity from the comments. At first glance, it’s just another plate of spaghetti.

But when you follow the link (do it now), there’s an interesting innovation: the diagram is interactive. You can zoom, scroll, and highlight particular sectors and dynamics. There’s some narrative here and here. (Update: the interactive link seems to be down, but the diagram is still here: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/295153/07-1177-obesity-system-atlas.pdf)

It took me a while to decide whether I’d call this a causal loop diagram or not. I think the primary distinction between a CLD and other kinds of mindmaps or process diagrams is the use of variables. On a CLD, each label represents a quantity that can vary, with a definite direction – TV Watching, Stress, Use of Medicines. Items on other kinds of diagrams might represent events or fuzzier constellations of concepts. This diagram doesn’t have link polarities (too bad) or loop polarities (which would be pretty incomprehensible anyway), but many other CLDs also avoid such labels for simplicity.

I think there’s a lot of potential for further exploration of this idea. There’s a lot you could do to relate structure to behavior, or at least to explain the rationale for structure (both shortcomings of the diagram). Each link, for example, could have its tale revealed when clicked, and key loops could be animated individually, with stories told. Drill-down could be extended to provide links between top-level subsystem relationships and more microscopic views.

I think huge diagrams like the one above are always going to be overwhelming to a layperson. Also, it’s hard to make even a small CLD good, so making a big one really accurate is tough. Therefore, I’d rather see advanced CLD presentations used to improve the communication of simpler stories, with a few loops. However, big or small, there might be many common technological benefits from dedicated diagramming software.

## Lake Mead

The systems story on Lake Mead deepens (unlike the lake itself). I heard about some more interesting dynamics in a side conversation at the Balaton Group meeting in Iceland.

First, it’s not just Mead that’s impacted; upstream Lake Powell is also low. One consequence of this is that hydro generation is down, because the head is lower. Since both lakes are half full, it might make sense to drain Powell into Mead. That would raise the head at Mead, making up for the loss of generation at Powell. Water losses would also decrease. One possible obstacle to this strategy is that stakeholders in Powell fear that it could never be refilled, because endangered species would reinhabit the empty canyons.

Second, as the lakes get lower, bad things happen. Evidently the deep waters are stratified, and there are plumes of nasty saline gunk near the bottom. If lake levels continue to drop, there’s a possibility of serious water quality problems to go with the quantity issues.

One thing that’s striking about the media coverage of data and projections by agencies is that there’s little discussion of the nature or magnitude of variability. The implicit assumption behind current behavior is that droughts are cyclical or just noise. The hope seems to be that, since we’re in a low period for basin rainfall, the magic of reversion to the mean will soon bring forth the waters again. I don’t think there’s any good reason to act as if that will really happen, especially if climate change makes the distribution nonstationary. Modelers seem to think that the Southwest will move to a drought regime as the earth warms, but what if they’re wrong, and the hydrologic cycle accelerates? Glen Canyon Dam was nearly lost in 1983, so a healthy increase in rainfall wouldn’t necessarily be a blessing either.

Current Bureau of Reclamation projections for Lake Mead elevation. The documentation is pretty opaque, but it looks like the projections are based on quantiles of historic inflows, i.e., they neglect autocorrelation and changes in the distribution of supply.
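To see why sampling historic inflows independently understates risk, here’s a rough sketch (assumed parameters, not Reclamation’s method) comparing reservoir storage under i.i.d. inflow draws versus an AR(1) inflow process with the same marginal mean and variance. Persistence makes multi-year droughts more likely, so the reservoir bottoms out more often:

```python
# Stylized reservoir under i.i.d. vs. autocorrelated (AR(1)) inflows.
# All values (mean inflow, SD, persistence, demand, capacity) are assumptions
# loosely inspired by the Colorado River, not actual Reclamation data.
import random

MEAN, SD, RHO = 12.0, 3.0, 0.7  # annual inflow stats (MAF): assumed
DEMAND, CAP = 12.5, 26.0        # annual draw and live capacity: assumed

def storage_after(years, rho, seed):
    """Reservoir storage after `years`, with AR(1) inflow anomalies."""
    rng = random.Random(seed)
    storage, anomaly = CAP / 2, 0.0  # start half full, like Mead
    for _ in range(years):
        # innovation scaled so the marginal SD of inflow stays equal to SD
        anomaly = rho * anomaly + rng.gauss(0.0, SD * (1 - rho**2) ** 0.5)
        storage = min(CAP, max(0.0, storage + MEAN + anomaly - DEMAND))
    return storage

iid = [storage_after(20, 0.0, s) for s in range(500)]  # quantile-style draws
ar1 = [storage_after(20, RHO, s) for s in range(500)]  # persistent droughts
p_iid = sum(s == 0.0 for s in iid) / len(iid)
p_ar1 = sum(s == 0.0 for s in ar1) / len(ar1)
print(f"P(dead pool at year 20) – i.i.d.: {p_iid:.2f}, AR(1): {p_ar1:.2f}")
```

The same seeds drive both experiments, so the comparison isolates the effect of persistence alone.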

Edward Abbey must be smiling at least a little at this mess.

## Why is the arctic brown?

I’m blogging from a 757, somewhere over the North pole, returning from a sustainability meeting in Iceland. The world below is a wilderness of sea ice and clouds. I’d expect brilliant white, but there’s actually a brown haze over the landscape. It’s stratified, much like the odd sight of half-white, half-brown clouds one occasionally sees when flying into a polluted city. Where does it come from? Chinese coal fumes? Russian fires? American SUV tailpipes? Icelandic airplane exhaust?