There’s an old joke:
Q: Why are the debates so bitter in academia?
A: Because the stakes are so low.
The stakes are actually very high when models intersect with policy, I think, but sometimes academic debates come across as needlessly petty. That joke came to mind when a colleague shared this presentation abstract:
Pathologies of System Dynamics Models or “Why I am Not a System Dynamicist”
by Dr. Robert Axtell
So-called system dynamics (SD) models are typically interpreted as a summary or aggregate representation of a dynamical system composed of a large number of interacting entities. The high dimensional microscopic system is abstracted – notionally if not mathematically – into a ‘compressed’ form, yielding the SD model. In order to be useful, the reduced form representation must have some fidelity to the original dynamical system that describes the phenomena under study. In this talk I demonstrate formally that even so-called perfectly aggregated SD models will in general display a host of pathologies that are a direct consequence of the aggregation process. Specifically, an SD model can exhibit spurious equilibria, false stability properties, modified sensitivity structure, corrupted bifurcation behavior, and anomalous statistical features, all with respect to the underlying microscopic system. Furthermore, perfect aggregation of a microscopic system into an SD representation will generally be either not possible or not unique.
Finally, imperfectly aggregated SD models – surely the norm – can possess still other troublesome features. From these purely mathematical results I conclude that there is a definite sense in which even the best SD models are at least potentially problematical, if not outright mischaracterizations of the systems they purport to describe. Such models may have little practical value in decision support environments, and their use in formulating policy may even be harmful if their inadequacies are insufficiently understood.
In a technical sense, I agree with everything Axtell says.
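To make the point concrete, here’s a toy sketch of the kind of aggregation error Axtell has in mind (my illustration, not his math): two sub-populations draining at different rates, versus a single aggregate stock whose outflow is calibrated to match the micro system exactly at time zero.

```python
import numpy as np

# Toy illustration (mine, not Axtell's): two sub-populations decaying at
# different rates, versus an aggregate first-order model calibrated to
# match the initial total stock and initial outflow exactly.
k = np.array([0.5, 0.05])      # micro decay rates (arbitrary, for illustration)
x0 = np.array([100.0, 100.0])  # initial sub-population stocks

t = np.linspace(0, 40, 401)
micro_total = (x0[:, None] * np.exp(-np.outer(k, t))).sum(axis=0)

# Aggregate model: dX/dt = -k_agg * X, with k_agg the initial stock-weighted
# average of the micro rates, so the models agree perfectly at t = 0.
k_agg = (k * x0).sum() / x0.sum()
aggregate = x0.sum() * np.exp(-k_agg * t)

# The aggregate's decay rate is fixed, but the micro system's effective rate
# drifts toward the slower sub-population as the faster one empties, so the
# trajectories separate even though the calibration at t = 0 was "perfect."
err = np.abs(aggregate - micro_total) / micro_total
print(f"relative error at t=10: {err[100]:.0%}; at t=40: {err[-1]:.0%}")
```

Even this best-case aggregate misstates the dynamics, because the mixture’s effective rate is itself a state that the single-stock model throws away. That’s the flavor of pathology the abstract describes, in miniature.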
However, I could equally well give a talk titled “Pathologies of Agent Models.” The pathologies might include ungrounded representation of agents, overuse of discrete logic and discrete time, failure to nail down alternative hypotheses about agent behavior, and insufficient exploration of sensitivity and robustness. Notice that these are common problems in practice, rather than problems in principle, because in principle one would always prefer a disaggregate representation. The problem is that we don’t build models in principle; we build them in practice. In reality, resources – including data, time, computing, statistical methods, and decision-maker attention – are limited. If you want more disaggregation, you’ve got to have less of something else.
Clearly there are times when an aggregate approach could be misleading. To leap from the fact that one can demonstrate pathological special cases to the idea that aggregate models are dangerous strikes me as a gross overstatement. Is the danger of aggregating agents really any greater than the danger of omitting feedback by reducing scope in order to enable modeling disaggregate agents? Hopefully this talk will illuminate some of the ways that one might think about whether a situation is dangerous or not, and therefore make informed choices of method and tradeoffs between scope and detail.
Also, models seldom inform policy directly; their influence occurs through improvement of mental models. Agent models could have a lot to offer there, but I haven’t seen many instances where authors developed the lingo to communicate insights to decision makers at their level. (Examples appreciated – any links?) That relegates many agent models to the same role as other black-box models: propaganda.
It’s strange that Axtell is picking on SD. Why not tackle economics? Most economic models have the same aggregation issues, plus they assume equilibrium and rationality from the start, so any representational problems with SD are greatly amplified. Plus the economic models are far more numerous and influential on policy. It’s like Axtell is bullying the wimpy kid in the class, because he’s scared to take on the big one who smokes at recess and shaves in 5th grade.
The sad thing about this confrontational framing is that SD and agent-based modeling are a match made in heaven. At some level, disaggregate models still need aggregate representations of agents; modelers could learn a lot from SD about good representation of behavior and dynamics, not to mention good habits like units checking that are seldom followed. At the same time, SD modelers could learn a lot about emergent phenomena and the limitations of aggregate representations. A good example of a non-confrontational approach, recognizing shades of gray:
Heterogeneity and Network Structure in the Dynamics of Diffusion: Comparing Agent-Based and Differential Equation Models
Hazhir Rahmandad, John Sterman
When is it better to use agent-based (AB) models, and when should differential equation (DE) models be used? Whereas DE models assume homogeneity and perfect mixing within compartments, AB models can capture heterogeneity across individuals and in the network of interactions among them. AB models relax aggregation assumptions, but entail computational and cognitive costs that may limit sensitivity analysis and model scope. Because resources are limited, the costs and benefits of such disaggregation should guide the choice of models for policy analysis. Using contagious disease as an example, we contrast the dynamics of a stochastic AB model with those of the analogous deterministic compartment DE model. We examine the impact of individual heterogeneity and different network topologies, including fully connected, random, Watts-Strogatz small world, scale-free, and lattice networks. Obviously, deterministic models yield a single trajectory for each parameter set, while stochastic models yield a distribution of outcomes. More interestingly, the DE and mean AB dynamics differ for several metrics relevant to public health, including diffusion speed, peak load on health services infrastructure, and total disease burden. The response of the models to policies can also differ even when their base case behavior is similar. In some conditions, however, these differences in means are small compared to variability caused by stochastic events, parameter uncertainty, and model boundary. We discuss implications for the choice among model types, focusing on policy design. The results apply beyond epidemiology: from innovation adoption to financial panics, many important social phenomena involve analogous processes of diffusion and social contagion. (Paywall; full text of a working version here)
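For readers who want a feel for this kind of comparison without wading into the paper, here’s a minimal sketch (mine, not the authors’ code) contrasting a deterministic SIR compartment model with a simple stochastic, individual-level version under well-mixed contacts. The parameters, population size, and time step are arbitrary assumptions; the paper’s network and heterogeneity experiments go much further.

```python
import numpy as np

# Minimal sketch (not Rahmandad & Sterman's model): deterministic SIR
# compartments vs. a simple stochastic, individual-level SIR with
# well-mixed contacts. All parameters are illustrative assumptions.
N, I0 = 1000, 5          # population size, initial infectives
beta, gamma = 0.3, 0.1   # infection and recovery rates (per day)
dt, T = 0.25, 200        # time step and horizon (days)
rng = np.random.default_rng(0)

# Deterministic compartment (DE) model, Euler integration
S, I = N - I0, float(I0)
de_peak = I
for _ in range(int(T / dt)):
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    S, I = S - new_inf, I + new_inf - new_rec
    de_peak = max(de_peak, I)

# Stochastic individual-level model, many replications
peaks = []
for _ in range(100):
    s, i = N - I0, I0
    peak = i
    while i > 0:
        p_inf = 1 - np.exp(-beta * i / N * dt)   # per-susceptible infection prob.
        p_rec = 1 - np.exp(-gamma * dt)          # per-infective recovery prob.
        new_inf = rng.binomial(s, p_inf)
        new_rec = rng.binomial(i, p_rec)
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    peaks.append(peak)

print(f"DE peak infected: {de_peak:.0f}")
print(f"AB peak infected: mean {np.mean(peaks):.0f}, sd {np.std(peaks):.0f}")
```

Running something like this a few times makes the paper’s point tangible: the deterministic model gives one trajectory, while the individual-level runs scatter around it (and occasionally fizzle out entirely), and that gap is exactly what matters when judging peak loads and policy robustness.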
Details, in case anyone reading this can attend – report back here!
Thursday, October 21 at 6:00 – 8:00 PM ** New Time **
Networking 6:00 – 6:45 PM (light refreshments)
Presentation 6:45 – 8:00 PM
Free and open to the public
** NEW Location **
Booz Allen Hamilton – Ballston-Virginia Square
3811 N. Fairfax Drive, Suite 600
Arlington, VA 22203
(703) 816-5200
Between Virginia Square and Ballston Metro stations, between Pollard St. and Nelson St.
On-street parking is available, especially on 10th Street near the Arlington Library.
There will be a Booz Allen representative at the front of the building until 7:00 to greet and escort guests, or call 703-627-5268 to be let in.
RSVP by e-mail to Nicholas Nahas, nahas_nicholas@bah.com, in order to have a rough count of attendees prior to the meeting. Come anyway even if you do not RSVP.
By METRO:
Take the Orange Line to the Ballston station. Exit Metro Station, walk towards the IHOP (right on N. Fairfax) continue for approximately 2-3 blocks. Booz Allen Hamilton (3811 N. Fairfax Dr. Suite 600) is on the left between Pollard St. and Nelson St.
OR Take the Orange Line to the Virginia Square station. Exit Metro Station and go left and walk approximately 2-3 blocks. Booz Allen Hamilton (3811 N. Fairfax Dr. Suite 600) is on the right between Pollard St. and Nelson St.
Wouldn’t this critique potentially apply to all the differential equation models built in the various disciplines? I’m thinking of ecology, where there is a deep tradition of aggregate models, as well as biology, genetics, and marketing (the Bass model and all its derivatives). Some of the models in other disciplines have been extensively tested against data and found to be very useful. Axtell is challenging the value of a vast body of literature, and I doubt his research is up to it.
Right. Let’s not throw the baby out with the bathwater.
In some cases, I’m sure that DE model builders would agree that they never got a handle on things until models with networks or agents came along. But other areas truly are lumpy systems where DE models inherently make sense. Still others may be non-pathological at least in the sense that DE models calibrated to aggregate outcomes of fields of agents work OK.
The key question is, how do you detect various failure modes? Have you aggregated something that can’t be? Have you narrowed scope to the exclusion of the endogenous explanation for something, in order to have more disaggregation? Have you replicated some macro phenomenon of interest with a spurious micro theory, but failed to notice for lack of micro data to validate agent behavior?
Shades of Nordhaus!
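On the first of those questions – have you aggregated something that can’t be? – there’s at least one check that can be mechanized for the linear case. A sketch (my construction, not Axtell’s): for a linear micro system, an aggregate variable has self-contained dynamics only if the aggregation matrix satisfies a consistency condition, which is easy to test numerically.

```python
import numpy as np

# Sketch (my construction, not from the talk): for a linear micro system
# dx/dt = A x, an aggregate y = C x obeys a closed equation dy/dt = B y
# only if C A = B C for some B, i.e. the rows of C A lie in the row space
# of C. Projecting C A onto that row space tests the condition.
def perfectly_aggregable(A, C, tol=1e-10):
    CA = C @ A
    return np.allclose(CA, CA @ np.linalg.pinv(C) @ C, atol=tol)

# Two decoupled stocks draining at different rates, aggregated by summing:
A = np.diag([-0.5, -0.05])
C = np.array([[1.0, 1.0]])
print(perfectly_aggregable(A, C))   # False: the total has no closed dynamics

# Identical rates: summing is an exact aggregation.
print(perfectly_aggregable(np.diag([-0.1, -0.1]), C))   # True
```

Real models are nonlinear and the micro structure is uncertain, so this is only a conceptual probe, but it’s the kind of question worth asking before trusting an aggregate.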
Having recently completed Mike Radzicki’s ABM course I now feel confident to state that he who is without fault is welcome to throw the first stone.
Here are some excerpts from a nice rant on economists (I call myself one) in the Financial Times:
Sweep economists off their throne
By Gideon Rachman
…
There has been some self-examination and soul-searching within the economics profession since the onset of the financial crisis. Joseph Stiglitz, another Nobel prize-winning economist, has suggested that: “If science is defined by its ability to forecast the future, the failure of much of the economics profession to see the crisis coming should be a cause of great concern.”
The serious study of history goes all the way back to Herodotus in the fifth century BC. And yet today’s historians are far humbler about what they can hope to achieve than modern economists. Historians know that no big question is ever definitively settled. They know that every big and interesting topic will be revisited, revised and examined from new angles. Each generation will reinterpret the past and deliver its own verdict.
…
This way of looking at the world is less obviously useful to practical men, seeking to make decisions. But maybe it is time for an alternative to the brash certainties, peddled by those pseudo-scientists, otherwise known as economists.
http://www.ft.com/cms/s/0/93d9ff2a-b9e1-11df-8804-00144feabdc0.html
I sometimes play an economist on TV. (Not really, but a county commissioner did once make fun of me for being like the Numb3rs guy.)
Computable general equilibrium models really are the poster child for the problems Axtell raises. Still, they’re not useless in all cases. The general thinking about pressures toward equilibrium and so forth is all very useful; it’s just incomplete.
SD models of the economy take a substantial step away from equilibrium, which raises all kinds of possibilities for gaps between desired and actual states, suboptimality of behavior, etc. The policy implications are pretty big.
The challenge is that there are many possible behavioral theories for various agent decisions, and the original SD resource (ask a manager) breaks down, because the economy is a mix of individual decisions you can ask about and evolutionary processes that you can only observe from afar.
Agent models of the economy help because you can test evolutionary arguments and determine whether they favor some theories of the behavior of individuals and organizations over others. However, this in turn introduces a bunch of new issues. I think we can learn a lot from such models, and we’ve been working on them at Ventana. But I think it’s going to be a long time before anything like https://metasd.com/2010/05/the-model-that-ate-europe/ really works as envisioned. This stuff is in its infancy.
Agent models of the economy are desperately needed as a counterpoint to the CGE models that currently dominate key areas like global climate policy. The problem is that alternative models (aggregate behavioral dynamic or ABM) are expensive, and funders seem to be more interested in driving existing models down their learning curves. Ironically, we are locked into models that don’t do lock-in.
Still, the route to success is cooperation, I think. SD could have a lot to offer to such efforts, because we have a huge library of existing thought on behavioral dynamic formulations that would be useful to the construction of firm-agents and so forth. We also would be good at building transparent meta-models illustrating insights that emerge from bigger ABM models.
John Maynard Smith once referred to the application of agent-based models to biological problems (“artificial life”) as “factless science.” ABMs often rely on “plausibility” rather than “validation,” a similar problem found in SD and other computer models.
What seems most interesting about Axtell’s comment is that a trend in modeling software is to combine SD, ABM and discrete-event simulation (DES) into a single tool that lets the modeler use the most appropriate approach. AnyLogic is one such example, creating “multiscale” models that use SD to focus on the interrelationships among subsystems, while key components may be modeled using ABM.
Anyone know if Axtell has written any articles on the topic? Or if the lecture notes will be posted anywhere?
Interesting comment from Smith. The source essay is here http://www2.econ.iastate.edu/tesfatsi/hogan.complexperplex.htm . Of course, “factless science” isn’t inherently pejorative; math is factless too.
I don’t see a hard line between plausibility and validation. Whether a model has power to predict anything in the real world is basically a function of how hard you tried to break it, through comparison to data, examination of face validity, extreme conditions tests, verification of dimensional consistency and adherence to conservation laws, etc. “How hard” needs to be considered in light of the complexity of the model – more relationships require more examination, but also admit direct comparison with more data. ABMs can go off the rails by positing a lot of micro behavior, while providing only indirect macro validation. But that’s only one of many failure modes.
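As an aside, some of that breaking can be mechanized. Here’s a minimal sketch of an extreme-conditions test on a toy inventory model; the model, parameters, and tolerance are assumptions chosen only to show the pattern, not a prescription.

```python
import numpy as np

# Illustrative sketch (my example): an "extreme conditions" test on a toy
# first-order inventory model. Shipments are limited by stock on hand, so
# inventory should never go negative, no matter how extreme the demand.
def simulate(demand, target=100.0, adj_time=4.0, dt=0.25, T=100.0):
    inv, hist = target, []
    for _ in range(int(T / dt)):
        shipments = min(demand, inv / dt)                # can't ship what you don't have
        production = max(0.0, (target - inv) / adj_time)
        inv += (production - shipments) * dt
        hist.append(inv)
    return np.array(hist)

# Extreme-conditions tests: behavior should stay physically sensible far
# outside the calibration range (zero demand, absurdly large demand).
for demand in (0.0, 1e6):
    inv = simulate(demand)
    assert (inv >= -1e-9).all(), f"negative inventory at demand={demand}"
print("extreme-conditions tests passed")
```

Tests like this don’t establish that a model is right, but they’re cheap to automate and they catch a surprising number of formulation errors.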
I’d like to see the notes too, if anyone finds them or attends the lecture.
FYI, the lecture slides will eventually be posted on this site:
http://winforms.chapter.informs.org/meetings.html
Cool – thanks.
Rob Axtell’s presentation to the WINFORMS meeting has been posted:
http://winforms.chapter.informs.org/presentation/Pathologies_of_System_Dynamics_Models-Axtell-20101021.pdf
I did not attend the talk, but the assumption set (perfect knowledge of micro-behavior) takes us down a track that suits his argument rather than the underlying problem of policy analysis with imperfect information, feedback, and delays – to say nothing of limited resources, uncertain causal theories, and multiple perspectives and mental models…
Is there anyone out there willing to write a rejoinder for the 2011 conference? Axtell’s home base is DC, and perhaps he would join us.
Thanks for the link. At least he references “and other aggregate models” in the title.
I agree with your assessment – the technical argument is correct and important, but not very helpful at dealing with strategies for real-world situations in which we lack micro knowledge.
Axtell’s basic framing of the problem seems to be ecological, which is pretty different from the organizational context in which SD arose. It’s also obviously different from some problems, like greenhouse gas accumulation, that really can be modeled as aggregates.
I would find more comparative work on actual problems, as in Hazhir’s thesis (link above, or http://dspace.mit.edu/handle/1721.1/33658), more helpful than Axtell’s fairly abstract presentation, but Axtell does do a nice job of formalizing the basic issues, which needs to be done at some point.
I like the final prescription, “Only cure if staying with aggregate models: build multi-models (many different models)” and would like to see more exploration of that idea.
I attended this presentation and found it to be very interesting. For what it’s worth, my doctoral research focused on the assessment of SD models, so I felt somewhat qualified to represent the discipline’s perspective. In addition, I have 25 years of modeling experience that has led to an appreciation of the approach Axtell prefers.
As has been mentioned here, while the title mentions SD, the presentation really focused on the challenges that aggregation brings to the modeling disciplines. Axtell presents some interesting mathematical constructs that could and should form the basis for conversations, thinking, and research in the SD and other aggregate modeling arenas.
At the presentation, we discussed some of the terminology that was used and how it could inflame, rather than encourage, the conversation. Axtell did not disagree that the wording was not necessarily conducive to conversation. All things considered, I learned quite a bit and was challenged in many ways.
In particular, the presentation raised the importance and value of considering the implications of the simplifying assumptions that are part of SD and all modeling disciplines. SD modelers have to be very careful not to “lock into” a single mental model (or even two or three) when modeling and analyzing systems.
Even if Axtell were not available for a presentation at the ISDC, this would be a potentially interesting thread to follow. There can only be value in increasing our understanding of the linkages between modeling disciplines.
Thanks for reporting back!
I guess if Axtell hadn’t chosen a provocative title, I would have ignored this.
It does seem like a useful and important conversation, and it would be good if it stayed current.