Diagrams vs. Models

Following Bill Harris’ comment on “Are causal loop diagrams useful?” I went looking for Coyle’s hybrid influence diagrams. I didn’t find them, but instead ran across this interesting conversation in the System Dynamics Review (SDR):

The tradition, one might call it the orthodoxy, in system dynamics is that a problem can only be analysed, and policy guidance given, through the aegis of a fully quantified model. In the last 15 years, however, a number of purely qualitative models have been described, and have been criticised, in the literature. This article briefly reviews that debate and then discusses some of the problems and risks sometimes involved in quantification. Those problems are exemplified by an analysis of a particular model, which turns out to bear little relation to the real problem it purported to analyse. Some qualitative models are then reviewed to show that they can, indeed, lead to policy insights and five roles for qualitative models are identified. Finally, a research agenda is proposed to determine the wise balance between qualitative and quantitative models.

… In none of this work was it stated or implied that dynamic behaviour can reliably be inferred from a complex diagram; it has simply been argued that describing a system is, in itself, a useful thing to do and may lead to better understanding of the problem in question. It has, on the other hand, been implied that, in some cases, quantification might be fraught with so many uncertainties that the model’s outputs could be so misleading that the policy inferences drawn from them might be illusory. The research issue is whether or not there are circumstances in which the uncertainties of simulation may be so large that the results are seriously misleading to the analyst and the client. … This stream of work has attracted some adverse comment. Lane has gone so far as to assert that system dynamics without quantified simulation is an oxymoron and has called it ‘system dynamics lite (sic)’. …

Coyle (2000) Qualitative and quantitative modelling in system dynamics: some research questions

Jack Homer and Rogelio Oliva aren’t buying it:

Geoff Coyle has recently posed the question as to whether or not there may be situations in which computer simulation adds no value beyond that gained from qualitative causal-loop mapping. We argue that simulation nearly always adds value, even in the face of significant uncertainties about data and the formulation of soft variables. This value derives from the fact that simulation models are formally testable, making it possible to draw behavioral and policy inferences reliably through simulation in a way that is rarely possible with maps alone. Even in those cases in which the uncertainties are too great to reach firm conclusions from a model, simulation can provide value by indicating which pieces of information would be required in order to make firm conclusions possible. Though qualitative mapping is useful for describing a problem situation and its possible causes and solutions, the added value of simulation modeling suggests that it should be used for dynamic analysis whenever the stakes are significant and time and budget permit.

Homer & Oliva (2001) Maps and models in system dynamics: a response to Coyle

Coyle rejoins:

This rejoinder clarifies that there is significant agreement between my position and that of Homer and Oliva as elaborated in their response. Where we differ is largely the extent to which quantification offers worthwhile benefit over and above qualitative analysis (diagrams and discourse) alone. Quantification may indeed offer potential value in many cases, though even here it may not actually represent “value for money”. However, even more concerning is that in other cases the risks associated with attempting to quantify multiple and poorly understood soft relationships are likely to outweigh whatever potential benefit there might be. To support these propositions I add further citations to published work that recount effective qualitative-only based studies, and I offer a further real-world example where any attempts to quantify “multiple softness” could have led to confusion rather than enlightenment. My proposition remains that this is an issue that deserves real research to test the positions of Homer and Oliva, myself, and no doubt others, which are at this stage largely based on personal experiences and anecdotal evidence.

Coyle (2001) Rejoinder to Homer and Oliva

My take: I agree with Coyle that qualitative models can often lead to insight. However, I don’t buy the argument that the risks of quantifying poorly understood soft variables exceed the benefits. First, if the variables in question are really too squishy to get a grip on, that part of the modeling effort will fail. Even so, the modeler will have some other working pieces that are more physical or certain, providing insight into the context in which the soft variables operate. Second, as long as the modeler is doing things right, which means spending ample effort on validation and sensitivity analysis, dodgy quantification will reveal itself as large uncertainties in behavior subject to the assumptions in question. Third, the mere attempt to quantify the qualitative is likely to yield some insight into the uncertain variables, which exceeds that derived from the purely qualitative approach. In fact, I would argue that the greater danger lies in the qualitative approach, because it is quite likely that plausible-looking constructs on a diagram will go unchallenged, yet harbor deep conceptual problems that would be revealed by modeling.
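To make the second point concrete, here’s a minimal sketch of the kind of sensitivity experiment I mean. The model, names, and numbers are all invented for illustration (a single “morale” stock eroded by a poorly understood soft effect), not drawn from any of the studies above:

```python
import numpy as np

# Toy model, invented for illustration: one stock ("morale") eroded by an
# uncertain soft effect and restored by a known recovery process. We sweep
# a wide prior over the soft gain; a large spread in outcomes is the model
# telling us exactly where better information is needed.

def simulate(effect_gain, policy_effort, dt=0.25, horizon=40.0):
    """Euler integration of a minimal one-stock model."""
    morale = 1.0                              # dimensionless index
    for _ in range(int(horizon / dt)):
        erosion = effect_gain * (1.0 - policy_effort) * morale
        recovery = 0.1 * (1.0 - morale)       # drift back toward normal
        morale += dt * (recovery - erosion)
    return morale

rng = np.random.default_rng(1)
gains = rng.uniform(0.02, 0.4, size=200)      # wide prior on the soft gain

for effort in (0.0, 0.5):
    outcomes = np.array([simulate(g, effort) for g in gains])
    print(f"effort={effort}: final morale mean={outcomes.mean():.2f}, "
          f"spread={outcomes.min():.2f}..{outcomes.max():.2f}")
```

If the outcome range is wide enough to flip the policy ranking, the quantification hasn’t misled anyone; it has flagged which soft relationship needs better information before firm conclusions are possible.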

I see this as a cost-benefit question. With infinite resources, a model always beats a diagram. The trouble is that in many cases time, money and the will of participants are in short supply, or can’t be justified given the small scale of a problem. Often in those cases a qualitative approach is justified, and diagramming or other elicitation of structure is likely to yield a better outcome than pure talk. Also, where resources are limited, an overzealous modeling attempt could lead to narrow focus, overemphasis on easily quantifiable concepts, and implementation failure due to too much model and not enough process. If there’s a risk to modeling, that’s it – but that’s a risk of bad modeling, and there are many of those.
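One more note on why a model beats a diagram when resources allow: loop dominance is a quantitative question that a diagram alone can’t settle, but even a crude quantification can. Here’s a sketch, again with invented structure and numbers, in which one and the same causal-loop diagram (a reinforcing adoption loop plus a balancing attrition loop) produces opposite behavior modes depending on a single unmeasured parameter:

```python
# The same CLD, two behavior modes: a reinforcing adoption loop (R) and a
# balancing attrition loop (B). Which loop dominates depends entirely on
# an unmeasured parameter. Structure and numbers are invented.

def simulate(adoption_gain, attrition_rate, dt=0.25, horizon=20.0):
    """Euler integration: one stock, one R loop, one B loop."""
    users, capacity = 1.0, 100.0
    for _ in range(int(horizon / dt)):
        adoption = adoption_gain * users * (1.0 - users / capacity)  # R loop
        attrition = attrition_rate * users                           # B loop
        users += dt * (adoption - attrition)
    return users

print("R dominant:", round(simulate(0.5, 0.1), 1))   # grows toward ~80
print("B dominant:", round(simulate(0.1, 0.5), 1))   # decays toward 0
```

Both parameterizations are consistent with the same diagram; only the numbers distinguish growth from collapse.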

8 thoughts on “Diagrams vs. Models”

  1. Tom, I remember that discussion. I don’t have anything direct to add, but I can perhaps offer an analogy that’s helped me think about it.

    When I was a practicing electrical engineer doing circuit and systems design, I had many methods at my disposal. I could use rules of thumb to develop circuit topologies and select component values; I could use Kirchhoff’s laws and simple active device models to write better equations as part of the design process; I could use simple simulation (with simple, generic device models); I could use more sophisticated simulation with specific, tested models; I could build prototype circuits; or I could build lots of prototypes to test over temperature, random device selection, and the like. Obviously there’s a progression of quality and cost. As a practicing engineer, I had to balance utility and cost in solving a problem.

    With experience, I developed a feel for when I could stick to the simple, when I had to start simple and get increasingly more rigorous, and when I had to start at the more rigorous end.

    I think that’s what we do with system dynamics models. Despite your recent comment about not getting out of bed without a model, I doubt we simulate how to do most of the things we do during the day; we’d simply be ineffective. So I largely agree with your comments. I’d only add that, since time isn’t an infinite resource for any one of us or our organizations, time (and thus cost) _is_ a factor, and so simulation _could_ be worse if it solves one problem at the expense of not getting to even more important problems.

  2. Yes … that’s the point I was trying to get at in the last paragraph. I think it’s pretty easy to sink too much time into a model, and leave too little for reflection on the boundary and limits of the model and for the implementation process. It’s also quite possible that a modeling team convened to solve a problem will have the wrong mix of skills (stereotypically, a bunch of nerds trying to solve a people problem).

    I also know of a number of cases where models were essential to the solution of messy problems. The common thread of those is that there were competing, plausible theories about a situation. All could probably have been well expressed by a CLD. The trouble was, the truth could only be ascertained by measuring some uncertain quantities that determined the relative gains of the various critical feedback loops, and surfacing some structure that wasn’t very explicit in mental models. There was plenty of systemic thinking involved that failed to solve the problem; only systems modeling worked. One such example is here http://ventanasystems.com/examplepage.php?exampleID=6

  3. I think we’re in agreement. Despite my advocating for the insights one can get from CLDs, I, too, realize that making a problem and its proposed solution concrete enough to simulate brings with it distinct benefits, and the results of a good simulation used well can break prior, unhelpful mental models.

  4. I am now ‘retired’ and have more or less lost interest in SD, but here are some answers to questions raised in this blog.

    The detail on influence diagrams is in my ‘System Dynamics Modelling: A Practical Approach’, Chapman and Hall/CRC Press 1996.

    A Causal Loop Diagram is equivalent to a simplified ID drawn at Level 1 in the cone of IDs.

    My question has always been ‘What is the value added by the extra work of quantification, and what is the value lost by not quantifying?’ That needs to be asked on a case-by-case basis and seems perfectly reasonable.

    And, no, I’m not going to follow this blog and waste time on the metaphysics of SD. It’s only a problem-solving method after all (and I bet that raises some hackles!)

