Elk, wolves and dynamic system visualization

Bret Victor’s video of a slick iPad app for interactive visualization of the Lotka-Volterra equations has been making the rounds:

Coincidentally, this came to my notice around the same time that I got interested in the debate over wolf reintroduction here in Montana. Even simple models say interesting things about wolf-elk dynamics, which I’ll write about some other time (I need to get vaccinated for rabies first).

To ponder the implications of the video and predator-prey dynamics, I built a version of the Lotka-Volterra model in Vensim.

After a second look at the video, I still think it’s excellent. Victor’s two design principles, ubiquitous visualization and in-context manipulation, are powerful for communicating a model. Some aspects of what’s shown have been in Vensim since the introduction of SyntheSim a few years ago, though with less Tufte/iPad sexiness. But other features, like Causal Tracing, are not so easily discovered – they’re effective for pros, but not for new users. The way controls appear at one’s fingertips in the iPad app is very elegant. The “sweep” mode is also clever, so I implemented a similar approach (randomized initial conditions across an array dimension) in my version of the model. My favorite trick, though, is the 2D control of initial conditions via the phase diagram, which makes discovery of the system’s equilibrium easy.
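For those who want to play along without an iPad, the underlying math is tiny. Here’s a minimal Python sketch of the Lotka-Volterra equations with a “sweep” over randomized initial conditions; the parameter values are illustrative, not those of my Vensim model:

```python
import random

# Euler integration of the classic Lotka-Volterra predator-prey equations:
#   d(prey)/dt = birth*prey - predation*prey*pred
#   d(pred)/dt = efficiency*predation*prey*pred - death*pred
def simulate(prey0, pred0, dt=0.01, steps=5000,
             birth=1.0, predation=0.1, efficiency=0.5, death=0.5):
    prey, pred = prey0, pred0
    trajectory = [(prey, pred)]
    for _ in range(steps):
        dprey = birth * prey - predation * prey * pred
        dpred = efficiency * predation * prey * pred - death * pred
        prey += dt * dprey
        pred += dt * dpred
        trajectory.append((prey, pred))
    return trajectory

# "Sweep" mode: one run per randomized initial condition, like an
# arrayed Vensim model with random initial populations
random.seed(1)
runs = [simulate(random.uniform(5, 20), random.uniform(2, 10))
        for _ in range(10)]
```

Plotting prey against predators for each run traces the nested orbits of the phase diagram; the equilibrium sits at prey = death/(efficiency × predation) and pred = birth/predation, i.e. (10, 10) with these numbers.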

The slickness of the video has led some to wonder whether existing SD tools are dinosaurs. From a design standpoint, I’d agree in some respects, but I think SD has also developed many practices – only partially embodied in tools – that address learning gaps that aren’t directly tackled by the app in the video:

Who moved my eigenvalues?

Change management is one of the great challenges in modeling projects. I don’t mean this in the usual sense of getting people to change on the basis of model results. That’s always a challenge, but there’s another.

Over the course of a project, the numerical results and maybe even the policy conclusions given by a model are going to change. This is how we learn from models. If the results don’t change, either we knew the answer from the outset (a perception that should raise lots of red flags), or the model isn’t improving.

The problem is that model consumers are likely to get anchored to the preliminary results of the work, and resist change when it arrives later in the form of graphs that look different or insights that contradict early, tentative conclusions.

Fortunately, there are remedies:

  • Start with the assumption that the model and the data are wrong, and to some extent will always remain so.
  • Recognize that the modeler is not the font of all wisdom.
  • Emphasize extreme conditions tests and reality checks throughout the modeling process, not just at the end, so bugs don’t get baked in while insights remain hidden.
  • Do lots of sensitivity analysis to determine the circumstances under which insights are valid.
  • Keep the model simpler than you think it needs to be, so that you have some hope of understanding it, and time for reflecting on behavior and communicating results.
  • Involve a broad team of model consumers, and set appropriate expectations about what the model will be and do from the start.

Vonnegut does the reference modes of stories

Via NPR,

All of us, even if we have no knack for science, look at the weather, at our children, at our markets, at the sky, and we see rhythms and patterns that seem to repeat, that give us the ability to predict. …

Do any of us live beyond pattern? …

I don’t think so. Artists may be, oddly, the most pattern-aware. Case in point: The totally unpredictable, one-of-a-kind novelist Kurt Vonnegut … once gave a lecture in which he presented — in graphic form — the basic plots of all the world’s great stories. Every story you’ve ever heard, he said, is a reflection of a few classic story shapes. They are so elementary, he said, he could draw them on an X/Y axis.

Systems thinkers, watch for:

  • one big reference mode diagram
  • quantification without measurement
  • a discrete event, modeled with finite slope

Cool videos of dynamics

I just discovered the Harvard Natural Sciences Lecture Demonstrations – a catalog of ways to learn and play with science. It’s all fun, but a few of the videos provide nice demonstrations of dynamic phenomena.

Here’s a pretty array of pendulums of different lengths and therefore different natural frequencies:

This is a nice demonstration of how structure (length) causes behavior (period of oscillation). You can also see a variety of interesting behavior patterns, like beats, as the oscillations move in and out of phase with one another.
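The structure-behavior link here is just the small-angle period formula, T = 2π√(L/g). A quick sketch (the lengths are arbitrary):

```python
import math

g = 9.81  # m/s^2

def period(length_m):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A row of pendulums of increasing length has increasing periods
lengths = [0.20 + 0.05 * i for i in range(10)]   # meters (arbitrary)
periods = [period(L) for L in lengths]

# Two nearby natural frequencies drift in and out of phase; the beat
# period is 2*pi / |omega1 - omega2|
w1, w2 = 2 * math.pi / periods[0], 2 * math.pi / periods[1]
beat_period = 2 * math.pi / abs(w1 - w2)
```

Adjacent pendulums have nearly equal frequencies, so their relative phase drifts slowly; that slow drift is what produces the traveling-wave and beat patterns in the video.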

Synchronized metronomes:

These metronomes move in and out of sync as they’re coupled and uncoupled. This is interesting because it’s a fundamentally nonlinear process. Sync provides a nice account of such things, and there’s a nifty interactive coupled pendulum demo here.

Mousetrap fission:

This is a physical analog of an infection model or the Bass diffusion model. It illustrates shifting loop dominance – initially, positive feedback dominates due to the chain reaction of balls tripping new traps, ejecting more balls. After a while, negative feedback takes over as the number of live traps is depleted, and the reaction slows.
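A minimal infection-style sketch captures the loop dominance shift. Everything here is invented, but the behavior – explosive growth, then burnout as live traps deplete – is the point:

```python
# Euler integration; parameters are made up, chosen only to show the
# shift from reinforcing to balancing loop dominance
dt = 0.01
live, flying = 500.0, 2.0    # untripped traps; balls currently in flight
k, settle_rate = 0.01, 1.0   # trips per ball-trap pair; balls coming to rest
history = []
for _ in range(2000):
    trips = k * live * flying             # R: more balls in flight -> more trips
    settling = settle_rate * flying       # B: balls come to rest
    live -= dt * trips                    # B: live traps deplete
    flying += dt * (2 * trips - settling) # each tripped trap ejects two balls
    history.append(live)
```

Plot `history` and you get the classic depletion curve: the reinforcing loop dominates early, the balancing loop late, and almost no live traps remain at the end.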

The danger of path-dependent information flows on the web

Eli Pariser argues that “filter bubbles” are bad for us and bad for democracy:

As web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a “filter bubble” and don’t get exposed to information that could challenge or broaden our worldview.

Filter bubbles are close cousins of confirmation bias, groupthink, polarization and other cognitive and social pathologies.

A key feedback is this reinforcing loop, from Sterman & Wittenberg’s model of path dependence in Kuhnian scientific revolutions:

[Causal loop diagram: Anomalies]

As confidence in an idea grows, the delay in recognition (or frequency of outright rejection) of anomalous information grows larger. As a result, confidence in the idea – flat earth, 100mpg carburetor – can grow far beyond the level that would be considered reasonable, if contradictory information were recognized.
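As a caricature only (this is not Sterman & Wittenberg’s model, and every number is invented), the loop fits in a few lines:

```python
# Euler integration of a single confidence stock; all parameters invented
dt = 0.1
confidence = 0.2        # belief in the idea, 0..1
anomaly_rate = 1.0      # contradictory observations arriving per unit time
history = []
for _ in range(500):
    rejection = confidence                  # high confidence -> anomalies dismissed
    recognized = anomaly_rate * (1 - rejection)
    growth = 0.2 * confidence * (1 - confidence)   # word of mouth, apparent successes
    erosion = 0.05 * recognized * confidence       # recognized anomalies erode belief
    confidence += dt * (growth - erosion)
    history.append(confidence)
```

Because rejection rises with confidence, the erosion term is throttled exactly when it is most needed, and confidence saturates near 1 even though anomalies never stop arriving.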

The dynamics resulting from this and other positive feedbacks play out in many spheres. Wittenberg & Sterman give an example:

The dynamics generated by the model resemble the life cycle of intellectual fads. Often a promising new idea rapidly becomes fashionable through excessive optimism, aggressive marketing, media hype, and popularization by gurus. Many times the rapid influx of poorly trained practitioners, or the lack of established protocols and methods, causes expectations to outrun achievements, leading to a backlash and disaffection. Such fads are commonplace, especially in (quack) medicine and most particularly in the world of business, where “new paradigms” are routinely touted in the pages of popular journals of management, only to be displaced in the next issue by what many business people have cynically come to call the next “flavor of the month.”

Typically, a guru proposes a new theory, tool, or process promising to address persistent problems facing businesses (that is, a new paradigm claiming to solve the anomalies that have undermined the old paradigm.) The early adopters of the guru’s method spread the word and initiate some projects. Even in cases where the ideas of the guru have little merit, the energy and enthusiasm a team can bring to bear on a problem, coupled with Hawthorne and placebo effects and the existence of “low hanging fruit” will often lead to some successes, both real and apparent. Proponents rapidly attribute these successes to the use of the guru’s ideas. Positive word of mouth then leads to additional adoption of the guru’s ideas. (Of course, failures are covered up and explained away; as in science there is the occasional fraud as well.) Media attention further spreads the word about the apparent successes, further boosting the credibility and prestige of the guru and stimulating additional adoption.

As people become increasingly convinced that the guru’s ideas work, they are less and less likely to seek or attend to disconfirming evidence. Management gurus and their followers, like many scientists, develop strong personal, professional, and financial stakes in the success of their theories, and are tempted to selectively present favorable and suppress unfavorable data, just as scientists grow increasingly unable to recognize anomalies as their familiarity with and confidence in their paradigm grows. Positive feedback processes dominate the dynamics, leading to rapid adoption of those new ideas lucky enough to gain a sufficient initial following. …

The wide range of positive feedbacks identified above can lead to the swift and broad diffusion of an idea with little intrinsic merit because the negative feedbacks that might reveal that the tools don’t work operate with very long delays compared to the positive loops generating the growth. …

For filter bubbles, I think the key positive loops are as follows:

[Diagram: filter bubble feedback loops]

Loops R1 are the user’s well-worn path. We preferentially visit sites presenting information (theory x or y) in which we have confidence. In doing so, we consider only a subset of all information, building our confidence in the visited theory. This is a built-in part of our psychology, and to some extent a necessary part of the process of winnowing the world’s information fire hose down to a usable stream.

Loops R2 involve the information providers. When we visit a site, advertisers and other observers (Nielsen) notice, and this provides the resources (ad revenue) and motivation to create more content supporting theory x or y. This has also been a part of the information marketplace for a long time.

R1 and R2 are stabilized by some balancing loops (not shown). Users get bored with an all-theory-y diet, and seek variety. Providers seek out controversy (real or imagined) and sensationalize x-vs-y battles. As Pariser points out, there’s less scope for the positive loops to play out in an environment with a few broad media outlets, like city newspapers. The front page of the Bozeman Daily Chronicle has to work for a wide variety of readers. If the paper let the positive loops run rampant, it would quickly lose half its readership. In the online world, with information customized at the individual level, there’s no such constraint.

Individual filtering introduces R3: the filter observes site visit patterns and preferentially serves up information consistent with past preferences. This adds a third set of reinforcing feedback processes: as users see more of what they prefer, they also learn to prefer what they see. In addition, on Facebook and other social networking sites every person is essentially a site, and people include one another in networks preferentially. This is another mechanism implementing loop R1 – birds of a feather flock together, share information consistent with their mutual preferences, and potentially follow one another down conceptual rabbit holes.
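Loops R1 and R3 can be caricatured in a toy simulation; nothing here is calibrated to any real recommender, and every parameter is invented:

```python
# "preference" is the probability the user clicks theory-x items
# rather than theory-y items
def clamp(v, lo=0.0, hi=1.0):
    return max(lo, min(hi, v))

preference = 0.55    # nearly indifferent, leaning slightly toward x
history = [preference]
for _ in range(300):
    # R3: the filter over-serves whichever side the user already clicks
    served_share = clamp(0.5 + 1.2 * (preference - 0.5))
    # R1: seeing mostly x, the user comes to prefer x
    preference = clamp(preference + 0.1 * (served_share - preference))
    history.append(preference)
```

Starting from near-indifference, the preference ratchets monotonically toward 1: the user sees what they prefer, and learns to prefer what they see.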

The result of the social web and algorithmic filtering is to upset the existing balance of positive and negative feedback. The question is, were things better before, or are they better now?

I’m not exactly sure how to tell. Presumably one could observe trends in political polarization and duration of fads for an indication of the direction of change, but that still leaves open the question of whether we have more or less than the “optimal” quantity of pet rocks, anti-vaccine campaigns and climate skepticism.

My suspicion is that we now have too much positive feedback. This is consistent with Wittenberg & Sterman’s insight from the modeling exercise, that the positive loops are fast, while the negative loops are weak or delayed. They offer a prescription for that,

The results of our model suggest that the long-term success of new theories can be enhanced by slowing the positive feedback processes, such as word of mouth, marketing, media hype, and extravagant claims of efficacy by which new theories can grow, and strengthening the processes of theory articulation and testing, which can enhance learning and puzzle-solving capability.

In the video, Pariser implores the content aggregators to carefully ponder the consequences of filtering. I think that also implies more negative feedback in algorithms. It’s not clear that providers have an incentive to do that, though. The positive loops tend to reward individuals for successful filtering, while the risks (e.g., catastrophic groupthink) accrue partly to society. At the same time, it’s hard to imagine a regulatory approach that does not flirt with censorship.

Absent a global fix, I think it’s incumbent on individuals to practice good mental hygiene, by seeking diverse information that stands some chance of refuting their preconceptions once in a while. If enough individuals demand transparency in filtering, as Pariser suggests, it may even be possible to gain some local control over the positive loops we participate in.

I’m not sure that goes far enough though. We need tools that serve the social equivalent of “strengthening the processes of theory articulation and testing” to improve our ability to think and talk about complex systems. One such attempt is the “collective intelligence” behind Climate Colab. It’s not quite Facebook-scale yet, but it’s a start. Semantic web initiatives are starting to help by organizing detailed data, but we’re a long way from having a “behavioral dynamic web” that translates structure into predictions of behavior in a shareable way.

Update: From Tech Review, technology for breaking the bubble

Better Lies

Hoisted from the comments, Miles Parker has a nice reflection on modeling in this video, Why Model Reality.

It might be subtitled, “Better Lies,” a reference to modeling as the pursuit of better stories about the world, which remain never quite true (a variation on the famous Box quote, “All models are wrong but some are useful.”). A few nice points that I picked out along the way,

  • All thinking, even about the future, is retrospective.
  • Big Data is Big Dumb, because we’re collecting more and more detail about a limited subset of reality, and thus suffer from sampling and “if your only tool is a hammer …” bias.
  • A crucial component of a modeling approach is a “bullshit detector” – reality checks that identify problems at various levels on the ladder of inference.
  • Model design is more than software engineering.
  • Often the modeling process is a source of key insights, and you don’t even need to run the model.
  • Modeling is a social process.

Coming back to the comment,

I think one of the greatest values of a model is that it can bring you to the point where you say “There isn’t any way to build a model within this methodology that is not self-contradicting. Therefore everyone in this room is contradicting themselves before they even open their mouths.”

I think that’s close to what Dana Meadows was talking about when she placed paradigms and transcendence of paradigms on the list of places to intervene in systems.

It reminds me of Gödel’s incompleteness theorems. With that as a model, I’d argue that one can construct fairly trivial models that aren’t self-contradictory. They might contradict a lot of things we think we know about the world, but by virtue of their limited expressiveness remain at least true to themselves.

Going back to the elasticity example, if I assert that oilConsumption = oilPrice^epsilon, there’s no internal contradiction as long as I use the same value of epsilon for each proposition I consider. I’m not even sure what an internal contradiction would look like in such a simple framework. However, I could come up with a long list of external consistency problems with the model: dimensional inconsistency, lack of dynamics, omission of unobserved structure, failure to conform to data ….
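To make the toy concrete, here it is in runnable form; epsilon and the reference point are hypothetical values for illustration:

```python
# Constant-elasticity demand: consumption responds to price with a
# fixed elasticity epsilon
def oil_consumption(oil_price, epsilon=-0.2,
                    reference_price=1.0, reference_consumption=1.0):
    # Normalizing by reference values at least patches the dimensional
    # inconsistency of a bare price**epsilon
    return reference_consumption * (oil_price / reference_price) ** epsilon

# With elasticity -0.2, a 10% price increase trims consumption by ~1.9%
drop = 1 - oil_consumption(1.10)
```

Internal consistency here just means using the same epsilon for every proposition; the external problems listed above all remain.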

In the same way, I would tend to argue that general equilibrium is an internally consistent modeling paradigm that just happens to have relatively little to do with reality, yet is sometimes useful. I suppose that Frank Ackerman might disagree with me, on the grounds that equilibria are not necessarily unique or stable, which could raise an internal contradiction by violating the premise of the modeling exercise (welfare maximization).

Once you step beyond models with algorithmically simple decision making (like CGE), the plot thickens. There’s Condorcet’s paradox and Arrow’s impossibility theorem, the indeterminacy of Arthur’s El Farol bar problem, and paradoxes of zero discount rates on welfare.

It’s not clear to me that all interesting models of phenomena that give rise to self-contradictions must be self-contradicting though. For example, I suspect that Sterman & Wittenberg’s model of Kuhnian scientific paradigm succession is internally consistent.

Maybe the challenge is that the universe is self-referential and full of paradoxes and irreconcilable paradigms. Therefore as soon as we attempt to formalize our understanding of such a mess, either with nontrivial models, or trivial models assisting complex arguments, we are dragged into the quagmire of self-contradiction.

Personally, I’m not looking for the cellular automaton that runs the universe. I’m just hoping for a little feedback control on things that might make life on earth a little better. Maybe that’s a paradoxical quest in itself.

Lakoff on “The Country We Believe In”

George Lakoff has an interesting take on the president’s April 13 budget speech,

Last week, on April 13, 2011, President Obama gave all Democrats and all progressives a remarkable gift. Most of them barely noticed. They looked at the President’s speech as if it were only about budgetary details. But the speech went well beyond the budget. It went to the heart of progressive thought and the nature of American democracy, and it gave all progressives a model of how to think and talk about every issue.

I’m definitely in the “barely noticed” category. The interesting thing, George argues, is that the speech is really about systems. Part concerns a system of values:

The policy topic happened to be the budget, but he called it “The Country We Believe In” for a reason. The real topic was how the progressive moral system defines the democratic ideals America was founded on, and how those ideals apply to specific issues.

More interesting to me, another key theme is systems in the “systems thinking” sense:

Systems Thinking

President Obama, in the same speech, laid the groundwork for another crucial national discussion: systems thinking, which has shown up in public discourse mainly in the form of “systemic risk” of the sort that led to the global economic meltdown. The president brought up systems thinking implicitly, at the center of his budget proposal. He observed repeatedly that budget deficits and “spending” do not occur in isolation. The choice of what to cut and what to keep is a matter of factors external to the budget per se. Long-term prosperity, economic recovery, and job creation, he argued, depend upon maintaining “investments” — investments in infrastructure (roads, bridges, long-distance rail), education, scientific research, renewable energy, and so on. The maintenance of American values, he argued, is outside of the budget in itself, but is at the heart of the argument about what to cut. The fact is that the rich have gotten rich because of the government — direct corporate subsidies, access to publicly-owned resources, access to government research, favorable trade agreements, roads and other means of transportation, education that provides educated workers, tax loopholes, and innumerable government resources are taken advantage of by the rich, but paid for by all of us. What is called a “tax break” for the rich is actually a redistribution of wealth from the poor and middle class—whose incomes have gone down—to those who have considerably more money than they need, money they have made because of tax investments by the rest of America.

The President provided a beautiful example of systems thinking. Under the Republican budget plan, the President would get a $200,000 a year tax break, which would be paid for by cutting programs for seniors, with the result that 33 seniors would be paying $6,000 more a year for health care to pay for his tax break. To see this, you have to look outside of the federal budget to the economic system at large, in which you can see what budget cuts will be balanced by increases in costs to others. A cut here in the budget is balanced by an increase outside the federal budget for real human beings.

When a system has causal effects, as in the above cases, we speak of “systemic causation.” “Systemic risks” are the risks created when there is systemic causation. Systemic causation contrasts with direct causation, as when, say, someone lifts something, or throws something, or shoots someone.

Linguists have discovered that every language studied has direct causation in its grammar, but no language has systemic causation in its grammar. Systemic causation is a harder concept and has to be learned either through socialization or education.

This got me interested in the original speech (transcript, video).

From our first days as a nation, we have put our faith in free markets and free enterprise as the engine of America’s wealth and prosperity. More than citizens of any other country, we are rugged individualists, a self-reliant people with a healthy skepticism of too much government.

But there has always been another thread running throughout our history – a belief that we are all connected; and that there are some things we can only do together, as a nation. We believe, in the words of our first Republican president, Abraham Lincoln, that through government, we should do together what we cannot do as well for ourselves.

There’s some feedback:

Ultimately, all this rising debt will cost us jobs and damage our economy. It will prevent us from making the investments we need to win the future. We won’t be able to afford good schools, new research, or the repair of roads and bridges – all the things that will create new jobs and businesses here in America. Businesses will be less likely to invest and open up shop in a country that seems unwilling or unable to balance its books. And if our creditors start worrying that we may be unable to pay back our debts, it could drive up interest rates for everyone who borrows money – making it harder for businesses to expand and hire, or families to take out a mortgage.

And recognition of systemic pressures for deficits:

But that starts by being honest about what’s causing our deficit. You see, most Americans tend to dislike government spending in the abstract, but they like the stuff it buys. Most of us, regardless of party affiliation, believe that we should have a strong military and a strong defense. Most Americans believe we should invest in education and medical research. Most Americans think we should protect commitments like Social Security and Medicare. And without even looking at a poll, my finely honed political skills tell me that almost no one believes they should be paying higher taxes.

Because all this spending is popular with both Republicans and Democrats alike, and because nobody wants to pay higher taxes, politicians are often eager to feed the impression that solving the problem is just a matter of eliminating waste and abuse – that tackling the deficit issue won’t require tough choices. Or they suggest that we can somehow close our entire deficit by eliminating things like foreign aid, even though foreign aid makes up about 1% of our entire budget.

There’s a bit of dynamics implicit in the discussion (e.g., the role of debt accumulation), but I think one thing is missing: straightforward grappling with worse-before-better behavior. The president proposes to go after waste (a favorite of all politicians) and tax breaks for the rich (far more sensible than the Ryan proposal), but doesn’t quite come to grips with the underlying question of how we can continue to feel prosperous and secure, when fundamentally we can’t (or at least shouldn’t) return to a previous pattern of unsustainable consumption in excess of our income funded by budget, trade and environmental deficits. What we really need, per yesterday’s post, is a reframing of what is now perceived as austerity as an opportunity to live with better health, relationships and security.

I part ways with Lakoff a bit on one topic:

Progressives tend to think more readily in terms of systems than conservatives. We see this in the answers to a question like, “What causes crime?” Progressives tend to give answers like economic hardship, or lack of education, or crime-ridden neighborhoods. Conservatives tend more to give an answer like “bad people — lock ‘em up, punish ‘em.” This is a consequence of a lifetime of thinking in terms of social connection (for progressives) and individual responsibility (for conservatives). Thus conservatives did not see the President’s plan, which relied on systemic causation, as a plan at all for directly addressing the deficit.

Differences in systemic thinking between progressives and conservatives can be seen in issues like global warming and financial reform. Conservatives have not recognized human causes of global warming, partly because they are systemic, not direct. When a huge snowstorm occurred in Washington DC recently, many conservatives saw it as disproving the existence of global warming — “How could warming cause snow?” Similarly, conservatives, thinking in terms of individual responsibility and direct causation, blamed homeowners for foreclosures on their homes, while progressives looked to systemic explanations, seeking reform in the financial system.

Certainly it is true that self-interested denial of feedback (or externalities, as an economist might describe some feedbacks) has found its home in the conservative and libertarian movements. But that doesn’t mean all conservative thought is devoid of systems thinking, and one can easily look back at history and find progressive or liberal policies that have also ignored systemic effects. Indeed, the conservative critique of progressive policies addressing crime and poverty issues has often rested on evolutionary arguments about the effects of incentives – a very systemic view. The problem is, words don’t provide enough formalism or connection to data to determine whose favorite feedback loops might dominate, so philosophical arguments about the merits of turn-the-other-cheek or an-eye-for-an-eye can go on forever. Models can assist with resolving these philosophical debates. However, at present public discourse is almost devoid of thinking, and often anti-intellectual, which makes it tough to contemplate sophisticated solutions to our problems.

Thanks to James McFarland for the tip.

Positive Feedback Pricing

Hat tip to John Sterman & Travis Franck for passing along this cool example of positive feedback, discovered on Amazon by evolutionary biologist Michael Eisen. Two sellers apparently used algorithmic pricing that led to exponential growth of the price of a book:

[Graph: the book’s price rising exponentially]

This reminds me of a phenomenon that’s puzzled me for some time: “new economy” firms have at least as many opportunities for systemic problems as any others, yet modeling remains somewhat “old economy” focused on physical products and supply chains and more traditional services like health care. Perhaps this is just my own observational sampling bias; I’d be curious to know whether others see things the same way.
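The mechanics are easy to reproduce. Using the repricing rules Eisen inferred (multipliers as reported, and possibly approximate; the starting price is made up):

```python
# One repricing round per day: two sellers' bots react to each other
price_a, price_b = 100.0, 100.0   # arbitrary starting prices
history = []
for day in range(30):
    price_a = 0.9983 * price_b      # seller A: price just under seller B
    price_b = 1.270589 * price_a    # seller B: premium over seller A
    history.append(price_b)
```

Each round multiplies the price by 0.9983 × 1.270589 ≈ 1.268, so the price grows about 27% per repricing cycle – exponential growth that continues until a human intervenes.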

Candy Causality Confusion

Candy Professor is confused:

Contagious Cavities

One of the favorite themes of the candy alarmists is dental decay: candy causes cavities! How many times have you heard that one? But it just ain’t so.

From no less an authority than the New York Times, this week’s Science section:

While candy and sugar get all the blame, cavities are caused primarily by bacteria that cling to teeth and feast on particles of food from your last meal.

Your last meal. Did you hear that? Not candy, not at all. It’s food, just plain old food, that those cavity-causing bacteria crave.

This is just what we’d all like to hear – cavities are a random act of bacterial promiscuity, so we can gorge on candy as much as we want without dental repercussions!

Unfortunately, this is highly misleading.

The NYT article mentions that Streptococcus mutans is one of the common cavity precursor bacteria. A quick trip to Wikipedia and MicrobeWiki reveals all. Here’s a rough picture of the process:

[Causal diagram: pathways from candy to cavities]

At top left, food (including candy) goes in. The output of this system that we’re interested in is healthy tooth enamel – i.e. the opposite of cavities. There are many causal pathways between candy and cavities. The simplest (in red) starts when candy (i.e. sugars) goes into the mouth. There, in the presence of bacteria, it’s metabolized to acid, which is neutralized by eroding enamel. That’s bad.

Things get worse if the candy contains sucrose. Sucrose is enzymatically degraded to fructose and glucose (green path), directly fueling the acid process. More importantly, S. mutans preferentially hijacks sucrose, consuming the fructose for energy and using the glucose to make a sticky polysaccharide scaffolding for its colonies, which we come to know as plaque. That plaque becomes a home for other less hardy bacteria (orange path). The existence of food and housing allows bacterial populations of all sorts to flourish (blue paths). All of this increases enamel-eroding acid metabolism.

Admittedly, none of this would happen without bacteria around to metabolize sugars. But that’s a feedback loop – sugar intake fuels the growth of the bacterial populations. The idea that “It’s food, just plain old food, that those cavity-causing bacteria crave” is surely nonsense, because there’s a metabolic penalty and a delay in converting complex carbohydrates into cavity-causing sugars. That delay means that the shorter time constant, of chewing and swallowing your food, dominates, so that the primary fuel for bacteria must be simpler (or stickier) carbohydrates.

The existence of at least half a dozen causal pathways from candy intake to loss of tooth enamel gives the lie to the notion that it’s “Not candy, not at all.” You can blame the bacteria if you like, but that’s a victim’s approach to policy. Absent an S. mutans vaccine or similar innovations, there’s not much we can do about our resident bacteria. We can, however, choose not to feed them substances that are uniquely suited to fueling their populations and the destructive processes that result.

The Secret of the Universe in 6 sentences

Niall Palfreyman wrote this on the board to introduce a course in differential equations:

  1. The Secret of the Universe in 6 sentences
  2. Nature always integrates flows over time
  3. Flows always differentiate fields over space
  4. Structure determines behaviour
  5. Algebra is the study of structure
  6. Dynamics is the study of behaviour

I like it.

A little explanation is in order. I have my morning coffee in hand. It’s warmer than the room, so it’s cooling off. Its heat winds up in the room. If I want to manage my coffee well, neither burning my tongue nor gagging down cold sludge, I need to be able to make some predictions about the future behavior of my cuppa joe. I won’t get far by postulating demons randomly stealing calorics from my cup, though that might provide a soothingly fatalistic outlook. I’m much better off if I understand how and why coffee cools.

#2, the “nature integrates flows” part of the system looks like this:

[Stock-flow diagram: coffee cooling]

Each box represents an accumulation of heat (that’s the integral). Each pipe represents a flow of heat from one place to another. The heat currently in the house is simply the net result of all the inflows from coffee cups, and all the losses to the outside world, over all time (of course, there are other flows to consider, like my computers warming the room, and losses to the snowy outside).

In the same way, the number of people in a room is the net accumulation of all the people who ever entered, less all those who ever left. A neat thing about this is that the current heat in the cup, or count of people in a room, is a complete description of the state of the system. You don’t need to know the detailed history of inflows and outflows, because you can simply take the temperature of the cup or count the people in the room to measure the accumulated effects of all the past events.
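In code, “nature integrates flows” is just a running sum. The people-in-a-room version, with made-up flows:

```python
# Made-up entry and exit counts, one pair per time step
entries = [3, 2, 0, 5, 1]
exits   = [0, 1, 4, 2, 3]

people_in_room = 0       # the stock: an accumulation of net flow
trajectory = []
for inflow, outflow in zip(entries, exits):
    people_in_room += inflow - outflow   # the integration step
    trajectory.append(people_in_room)

# The current count summarizes the entire history of flows
assert people_in_room == sum(entries) - sum(exits)
```

The final count is a complete description of the state: you can measure it directly without replaying the history of individual entries and exits.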

The next question is, why does the heat flow? That’s what #3 is about. Heat follows temperature gradients, as water flows downhill. Here’s a temperature field for a coffee cup:

[Infrared image of a coffee cup and apple pie – Wikimedia Commons]

Heat will flow from the hot (red) cup into the cool (green) environment. The flow will be fastest where the gradient is steepest – i.e. where there’s the greatest temperature difference over a unit of space. That’s the “flows differentiate fields” part. Other properties also matter, like the thermal conductivity of the cup, air currents in the room, insulation in the wall, and heat capacity of coffee, and these can also be described as distributions over space or fields. That adds the blue to the model above:

[Stock-flow diagram: coffee cooling, with gradients and transfer coefficients added in blue]

The blue arrows describe why the flows flow. These are algebraic expressions, like Heat Transfer from Cup to Room = Cup to Room Gradient/Cup-Room Heat Transfer Coefficient. They describe the structure – the “why” – of the system (#5).

The behavior of the system, i.e. how fast my coffee cools, is determined by the structure described above (#4). If you change the structure, by using an insulated mug to change the cup-room heat transfer coefficient for example, you change the behavior – the coffee cools more slowly.* The search for understanding about coffee cups, nuclear reactors, and climate is essentially an effort to identify structures that explain the dynamics or patterns of behavior that we observe in the world.
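A sketch of the cup as a one-stock model makes the point testable (all values are illustrative): change the structure, here the time constant, and you change the behavior.

```python
def cool(t_cup=90.0, t_room=20.0, tau=10.0, dt=0.1, minutes=30):
    """Euler-integrate Newton's law of cooling, dT/dt = -(T - t_room)/tau.

    tau is the cup-room time constant in minutes; all values illustrative.
    """
    temps = [t_cup]
    for _ in range(int(minutes / dt)):
        t_cup += dt * (-(t_cup - t_room) / tau)
        temps.append(t_cup)
    return temps

plain = cool(tau=10.0)       # ordinary cup
insulated = cool(tau=30.0)   # insulated mug: same structure, weaker coupling
```

After 30 minutes the ordinary cup has decayed most of the way to room temperature, while the insulated mug is still pleasantly hot – same structure, different parameter, different behavior.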

* Update: added a sentence for clarification, and corrected numbering.