## Loopy

I just gave Loopy a try, after seeing Gene Bellinger’s post about it.

It’s cool for diagramming, and fun. There are some clever features, like drawing a circle to create a node (though I was too dumb to figure that out right away). Its shareability and remixing are certainly useful.

However, I think one must be very cautious about simulating causal loop diagrams directly. A causal loop diagram is fundamentally underspecified, which is why no method of automated conversion of CLDs to models has been successful.

In this tool, behavior is animated by initially perturbing the system (e.g., increasing the number of rabbits in a predator-prey system). Then you can follow the story around a loop via animated arrow polarity changes – more rabbits cause more foxes, more foxes cause fewer rabbits. This is essentially the storytelling method of determining loop polarity, which I’ve used many times to good effect.

However, as soon as the system has multiple loops, you’re in trouble. Link polarity tells you the direction of change, but not the gain or nonlinearity. So, when multiple loops interact, there’s no way to determine which is dominant. Also, in a real system it matters which nodes are stocks; it’s not sufficient to assume that there must be at least one integration somewhere around a loop.
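The gain problem is easy to demonstrate with a toy sketch (my own construction, not Loopy’s internals): one stock acted on by a reinforcing loop and a balancing loop. The causal loop diagram is identical in both runs below; only the gains, which a CLD doesn’t show, determine whether the stock grows or decays.

```python
def simulate(x0, growth_gain, drain_gain, dt=0.1, steps=100):
    """One stock x, two loops of opposite polarity; returns final x."""
    x = x0
    for _ in range(steps):
        inflow = growth_gain * x      # reinforcing (+) loop
        outflow = drain_gain * x      # balancing (-) loop
        x += dt * (inflow - outflow)  # x is a stock: it integrates net flow
    return x

print(simulate(1.0, growth_gain=0.3, drain_gain=0.1))  # net gain positive: grows
print(simulate(1.0, growth_gain=0.1, drain_gain=0.3))  # net gain negative: decays
```

Swap the two gains and the behavior reverses, with no change whatsoever to the diagram’s arrows or polarities.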

You can test this for yourself by starting with the predator-prey example on the home page. The initial model is a discrete oscillator (more rabbits -> more foxes -> fewer rabbits). But the real system is nonlinear, with oscillation and other possible behaviors, depending on parameters. In Loopy, if you start adding explicit births and deaths, which should get you closer to the real system, simulations quickly result in a sea of arrows in conflicting directions, with no way to know which tendency wins. So, the loop polarity simulation could be somewhere between incomprehensible and dead wrong.

Similarly, if you consider an SIR infection model, there are three loops of interest: spread of infection by contact, saturation from running out of susceptibles, and recovery of infected people. Depending on the loop gains, it can exhibit different behaviors. If recovery is stronger than spread, the infection dies out. If spread is initially stronger than recovery, the infection shifts from exponential growth to goal seeking behavior as dominance shifts nonlinearly from the spread loop to the saturation loop.
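A minimal SIR sketch (illustrative parameters of my own, not from any particular source) makes the dominance shift visible: with beta/gamma > 1, the spread loop dominates at first and infection grows exponentially, then the shrinking susceptible pool hands dominance to the saturation loop and the epidemic peaks and dies away; with beta/gamma < 1, recovery dominates from the start.

```python
def sir(beta, gamma, s0=0.99, i0=0.01, dt=0.05, steps=4000):
    """Euler-integrate a classic SIR model; returns the infected trajectory."""
    s, i = s0, i0
    infected = []
    for _ in range(steps):
        infection = beta * s * i   # spread loop (reinforcing while s is large)
        recovery = gamma * i       # recovery loop (balancing)
        s -= dt * infection
        i += dt * (infection - recovery)
        infected.append(i)
    return infected

epidemic = sir(beta=0.5, gamma=0.2)  # beta/gamma > 1: rise, peak, decline
fizzle = sir(beta=0.1, gamma=0.2)    # beta/gamma < 1: dies out immediately
```

Identical loop structure, identical polarities – the qualitative behavior is decided entirely by the relative loop gains and how they shift as the susceptible stock drains.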

I think it would be better if the tool restricted itself to telling the story of one loop at a time, without making the leap to system simulations that are bound to be incorrect in many multiloop cases. With that simplification, I’d consider this a useful item in the toolkit. As is, I think it could be used judiciously for explanations, but for conceptualization it seems likely to prove dangerous.

My mind goes back to Barry Richmond’s approach to systems here. Causal loop diagrams promote thinking about feedback, but they aren’t very good at providing an operational description of how things work. When you’re trying to figure out something that you don’t understand a priori, you need the bottom-up approach to synthesize the parts you understand into the whole you’re grasping for, so you can test whether your understanding of processes explains observed behavior. That requires stocks and flows, explicit goals and actual states, and all the other things system dynamics is about. If we could get to that as elegantly as Loopy gets to CLDs, that would be something.
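As a tiny illustration of what “operational” means here, consider the simplest stock-and-flow structure with an explicit goal and actual state (a generic sketch, not drawn from any particular model): the corrective flow is formulated from the gap between goal and state, and the stock integrates it.

```python
def goal_seek(state, goal, adjustment_time, dt=0.25, steps=40):
    """First-order goal seeking: the stock closes the gap exponentially."""
    history = [state]
    for _ in range(steps):
        flow = (goal - state) / adjustment_time  # corrective flow from the gap
        state += dt * flow                       # the stock integrates the flow
        history.append(state)
    return history

path = goal_seek(state=0.0, goal=100.0, adjustment_time=5.0)
```

A CLD would show this as a single balancing loop; the operational version forces you to say where the goal comes from, what the state actually is, and how fast the correction operates.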

I think it was George Richardson who coined the term “dead buffalo” to refer to a diagram that surrounds a central concept with a hail of inbound causal arrows explaining it. This arrangement can be pretty useful as a list of things to think about, but it’s not much help toward solving a systemic problem from an endogenous point of view.

I recently found the granddaddy of them all:

## Circling the Drain

“It’s Time to Retire ‘Crap Circles’,” argues Gardiner Morse in the HBR. I wholeheartedly agree. He’s assembled a lovely collection of examples. Some violate causality amusingly:

“Through some trick of causality, termination leads to deployment.”

Morse ridicules one diagram that actually shows an important process:

The friendly-looking sunburst that follows, captured from the website of a solar energy advocacy group, shows how to create an unlimited market for your product. Here, as the supply of solar energy increases, so does the demand — in an apparently endless cycle. If these folks are right, we’re all in the wrong business.

This is not a particularly well-executed diagram, but the positive feedback process (reinforcing loop) of increasing demand driving economies of scale, lowering costs and further increasing demand, is real. Obviously there are other negative loops that restrain this one from delivering infinite solar, but not every diagram needs to show every loop in a system.

Unfortunately, Morse’s prescription, “We could all benefit from a little more linear thinking,” is nearly as alarming as the illness. The vacuous linear processes are right there next to the cycles in PowerPoint’s Smart Art:

Linear thinking isn’t a get-out-of-chartjunk-free card. It’s an invitation to event-driven unidirectional causal thinking, laundry lists, and George Richardson’s Dead Buffalo Syndrome. What we really need is more understanding of causality and feedback, and more operational thinking, so that people draw meaningful graphics, employing cycles where they appropriately describe causality.

h/t John Sterman for pointing this out.

## Disinfographics

I cringed when I saw the awful infographics in a recent GreenBiz report, highlighted in a Climate Progress post. A site that (rightly) criticizes the scientific illiteracy of the GOP field shouldn’t be gushing over chartjunk that would make USA Today blush. Climate Progress dumped my mildly critical comment into eternal moderation queue purgatory, so I have to rant about this a bit.

Here’s one of the graphics, with my overlay of the data plotted correctly (in green):

“What We Found: The energy consumed per dollar of gross domestic product grew slightly in 2010, the first increase after steady declines for more than half a century.”

Notice that:

• No, there really wasn’t a great cosmic coincidence that caused energy intensity to progress at a uniform rate from 1950-1970 and 1980-2009, despite the impression given by the arrangement of points on the wire.
• The baseline of the original was apparently some arbitrary nonzero value.
• The original graphic vastly overstates the importance of the last two data points by using a nonuniform time axis.
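The time-axis point is easy to quantify (the numbers below are illustrative, not the report’s actual layout): when a ten-year interval and a one-year interval get equal page width, the recent interval occupies several times the width its duration warrants.

```python
# Illustrative spans: a decade-long segment and a year-long segment
# drawn with equal horizontal spacing, as on a nonuniform time axis.
segments = [("1950-1960", 10), ("2009-2010", 1)]  # (label, duration in years)
total_years = sum(years for _, years in segments)
drawn_share = 1 / len(segments)  # equal spacing gives each segment half the width

for label, years in segments:
    fair_share = years / total_years      # width proportional to duration
    inflation = drawn_share / fair_share  # >1 means visually overweighted
    print(label, round(inflation, 2))
```

With these spans, the one-year segment is drawn at 5.5 times its fair width, which is exactly how a blip gets promoted to a trend.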

The issues are not merely aesthetic; the bad graphics contribute to distorted interpretations of reality, as the caption above indicates. From another graphic (note the short horizon and nonzero baseline), CP extracts the headline, “US carbon intensity is flat lining.”

From any reasonably long sample of the data it should be clear that the 2009-2011 “flat lining” is just a blip, having little to do with the long term emission trends we need to modify to achieve deep emissions reductions.

The other graphics in the article are each equally horrific in their own special way.

My advice to analysts is simple. If you want to communicate information, find someone numerate who’s read Tufte to make your plots. If you must have a pretty picture for eye candy, use it as a light background to an accurate plot. If you want pretty pictures to persuade people without informing them, skip the data and use a picture of a puppy. Here, you can even use my puppy:

## Diagramming for thinking

Should science learners be challenged to draw more? Certainly making visualizations is integral to scientific thinking. Scientists do not use words only but rely on diagrams, graphs, videos, photographs, and other images to make discoveries, explain findings, and excite public interest. From the notebooks of Faraday and Maxwell to current professional practices of chemists, scientists imagine new relations, test ideas, and elaborate knowledge through visual representations.

Drawing to Learn in Science, Shaaron Ainsworth, Vaughan Prain, Russell Tytler (this link might not be paywalled)

Continuing,

However, in the science classroom, learners mainly focus on interpreting others’ visualizations; when drawing does occur, it is rare that learners are systematically encouraged to create their own visual forms to develop and show understanding. Drawing includes constructing a line graph from a table of values, sketching cells observed through a microscope, or inventing a way to show a scientific phenomenon (e.g., evaporation). Although interpretation of visualizations and other information is clearly critical to learning, becoming proficient in science also requires learners to develop many representational skills. We suggest five reasons why student drawing should be explicitly recognized alongside writing, reading, and talking as a key element in science education. …

The paper goes on to list a lot of reasons why this is important.

## Debt crisis in the European Minifigure Union

A clever visualization from a 9-year-old:

Click through to the original .pdf for the numbered legend.

This isn’t quite a causal loop diagram; arrows indicate “where each entity would shift the burden of bailout costs,” but the network of relationships implies a lot of interesting dynamics.

Via 4D Pie Charts.

## Big data and the power of personal feedback

In a recent conversation about data requirements for future Vensim, a colleague observed that the availability of ready access to ‘big data’ in corporations has had curious side effects. One might have hoped for a flowering of model-driven conversations about the firm. Instead, ubiquitous access to data has led managers to spend less time contemplating what data might actually be important. Crucial data for model calibration are often harder to get than they were in the bad old days, because:

• The perceived time scale of relevance is shorter than ever; there are no enduring generic structures, only transient details, so old data gets tossed or ignored.
• Prevalent databases are still lousy at constructing aggregate time series.
• Zombie managerial instincts for hoarding data still walk the earth.
• Users are riveted by slick graphics which conceal quality issues in the underlying data.

Perhaps this is a consequence of the fact that data collection has become incredibly cheap. In the short run, business is about execution of essentially fixed strategies, and raw data is pretty darn useful for that. The problem is that the long run challenge of formulating strategies requires an investment of time to turn data into models (mental or formal), but modeling hasn’t experienced the same productivity revolution. This could leave companies more strategically blind than ever, and therefore accelerate the process of inadvertently walking off a cliff.

Around the same time, I ran into this Wired article about the power of feedback to change behavior. It details a variety of interesting innovations, from radar speed signs to brainwave headbands. I’ve experimented with similar stuff, like Daytum (found here, clever, but soon abandoned) and the Kill-a-watt (still used occasionally).

In the past two or three years, the plunging price of sensors has begun to foster a feedback-loop revolution. …

And today, their promise couldn’t be greater. The intransigence of human behavior has emerged as the root of most of the world’s biggest challenges. Witness the rise in obesity, the persistence of smoking, the soaring number of people who have one or more chronic diseases. Consider our problems with carbon emissions, where managing personal energy consumption could be the difference between a climate under control and one beyond help. And feedback loops aren’t just about solving problems. They could create opportunities. Feedback loops can improve how companies motivate and empower their employees, allowing workers to monitor their own productivity and set their own schedules. They could lead to lower consumption of precious resources and more productive use of what we do consume. They could allow people to set and achieve better-defined, more ambitious goals and curb destructive behaviors, replacing them with positive actions. Used in organizations or communities, they can help groups work together to take on more daunting challenges. In short, the feedback loop is an age-old strategy revitalized by state-of-the-art technology. As such, it is perhaps the most promising tool for behavioral change to have come along in decades.

But the applications don’t quite live up to these big ambitions:

… The GreenGoose concept starts with a sheet of stickers, each containing an accelerometer labeled with a cartoon icon of a familiar household object—a refrigerator handle, a water bottle, a toothbrush, a yard rake. But the secret to GreenGoose isn’t the accelerometer; that’s a less-than-a-dollar commodity. The key is the algorithm that Krejcarek’s team has coded into the chip next to the accelerometer that recognizes a particular pattern of movement. For a toothbrush, it’s a rapid back-and-forth that indicates somebody is brushing their teeth. … In essence, GreenGoose uses sensors to spray feedback loops like atomized perfume throughout our daily life—in our homes, our vehicles, our backyards. “Sensors are these little eyes and ears on whatever we do and how we do it,” Krejcarek says. “If a behavior has a pattern, if we can calculate a desired duration and intensity, we can create a system that rewards that behavior and encourages more of it.” Thus the first component of a feedback loop: data gathering.

Then comes the second step: relevance. GreenGoose converts the data into points, with a certain amount of action translating into a certain number of points, say 30 seconds of teeth brushing for two points. And here Krejcarek gets noticeably excited. “The points can be used in games on our website,” he says. “Think FarmVille but with live data.” Krejcarek plans to open the platform to game developers, who he hopes will create games that are simple, easy, and sticky. A few hours of raking leaves might build up points that can be used in a gardening game. And the games induce people to earn more points, which means repeating good behaviors. The idea, Krejcarek says, is to “create a bridge between the real world and the virtual world. This has all got to be fun.”

This strikes me as a rehash of the corporate experience: use cheap data to solve execution problems, but leave the big strategic questions unaddressed. The torrent of the measurable might even push the crucial intangibles – love, justice, happiness, wisdom – further toward the unmanaged margins of our existence.

My guess is that these technologies can help us solve our universal personal problems, particularly in areas like health and fitness where rewards are proximate in time and space. There might even be beneficial spillovers from healthier, happier personal lifestyles to reduced resource demand.

But I don’t see them doing much to solve global environmental problems, or even large-scale universal problems like urban decay and poverty. Those problems exist, not for lack of data, but for lack of feedback that is compelling to the same degree as the pressures of markets and other financial and social systems, which aren’t all about fun. In the US, we’re not even willing to entertain the idea of creating climate feedback loops. I suspect that the solutions to our biggest problems await some other technology that makes us much more productive at devising good strategies based on shared mental models.