Election Reflection

Jay Forrester’s 1971 “Counterintuitive Behavior of Social Systems” sums up this election pretty well for me.

… social systems are inherently insensitive to most policy changes that people choose in an effort to alter the behavior of systems. In fact, social systems draw attention to the very points at which an attempt to intervene will fail. Human intuition develops from exposure to simple systems. In simple systems, the cause of a trouble is close in both time and space to symptoms of the trouble. If one touches a hot stove, the burn occurs here and now; the cause is obvious. However, in complex dynamic systems, causes are often far removed in both time and space from the symptoms. True causes may lie far back in time and arise from an entirely different part of the system from when and where the symptoms occur. However, the complex system can mislead in devious ways by presenting an apparent cause that meets the expectations derived from simple systems. A person will observe what appear to be causes that lie close to the symptoms in both time and space—shortly before in time and close to the symptoms. However, the apparent causes are usually coincident occurrences that, like the trouble symptom itself, are being produced by the feedback-loop dynamics of a larger system.

Translation: economy collapses under a Republican administration. Democrats fail to fix it, partly for lack of knowledge of correct action but primarily because it’s unfixable on a two-year time scale. Voters who elected the Dems by a large margin forget the origins of the problem, become dissatisfied and throw the bums out, but replace them with more clueless bums.

… social systems seem to have a few sensitive influence points through which behavior can be changed. These high-influence points are not where most people expect. Furthermore, when a high-influence policy is identified, the chances are great that a person guided by intuition and judgment will alter the system in the wrong direction.

Translation: everyone suddenly becomes a deficit hawk at the worst possible time, even though they don’t know whether Obama is a Keynesian.

The root of the problem:

Mental models are fuzzy, incomplete, and imprecisely stated. Furthermore, within a single individual, mental models change with time, even during the flow of a single conversation. The human mind assembles a few relationships to fit the context of a discussion. As debate shifts, so do the mental models. Even when only a single topic is being discussed, each participant in a conversation employs a different mental model to interpret the subject. Fundamental assumptions differ but are never brought into the open. Goals are different but left unstated.

It is little wonder that compromise takes so long. And even when consensus is reached, the underlying assumptions may be fallacies that lead to laws and programs that fail.

… there is hope. It is now possible to gain a better understanding of dynamic behavior in social systems. Progress will be slow. There are many cross-currents in the social sciences which will cause confusion and delay. … If we proceed expeditiously but thoughtfully, there is a basis for optimism.

Now cap & trade is REALLY dead

From the WaPo:

[Obama] also virtually abandoned his legislation – hopelessly stalled in the Senate – featuring economic incentives to reduce carbon emissions from power plants, vehicles and other sources.

“I’m going to be looking for other means of addressing this problem,” he said. “Cap and trade was just one way of skinning the cat,” he said, strongly implying there will be others.

In the campaign, Republicans slammed the bill as a “national energy tax” and jobs killer, and numerous Democrats sought to emphasize their opposition to the measure during their own re-election races.

Brookings reflects, Toles nails it.

Modelers: you’re not competing

Well, maybe a little, but it doesn’t help.

From time to time we at Ventana encounter consulting engagements where the problem space is already occupied by other models. Typically, these are big, detailed models from academic or national lab teams who’ve been working on them for a long time. For example, in an aerospace project we ran into detailed point-to-point trip generation models and airspace management simulations with every known airport and aircraft in them. They were good, but cumbersome and expensive to run. Our job was to take a top-down look at the big picture, integrating the knowledge from the big but narrow models. At first there was a lot of resistance to our intrusion, because we consumed some of the budget, until it became evident that the existence of the top-down model added value to the bottom-up models by placing them in context, making their results more relevant. The benefit was mutual, because the bottom-up models provided grounding for our model that otherwise would have been very difficult to establish. I can’t quite say that we became one big happy family, but we certainly developed a productive working relationship.

I think situations involving complementary models are more common than head-to-head competition among models that serve the same purpose. Even where head-to-head competition does exist, it’s healthy to have multiple models, especially if they embody different methods. (The trouble with global climate policy is that we have many models that mostly embody the same general equilibrium assumptions, and thus differ only in detail.) Rather than getting into methodological pissing matches, modelers should be seeking the synergy among their efforts and making it known to decision makers. That helps to grow the pie for all modeling efforts, and produces better decisions.

Certainly there are exceptions. I once ran across a competing vendor doing marketing science for a big consumer products company. We were baffled by the high R^2 values they were reporting (.92 to .98), so we reverse engineered their model from the data and some slides (easy, because it was a linear regression). It turned out that the great fits were due to the use of 52 independent parameters to capture seasonal variation on a weekly basis. Since there were only 3 years of weekly data (i.e. 3 observations per parameter), we dubbed that the “variance eraser.” Replacing the 52 parameters with a few targeted at holidays and broad variations resulted in more realistic fits, and also revealed problems with inverted signs (presumably due to collinearity) and other typical pathologies. That model deserved to be displaced. Still, we learned something from it: when we looked cross-sectionally at several variants for different products, we discovered that coefficients describing the sales response to advertising were dependent on the scale of the product line, consistent with our prior assertion that effects of marketing and other activities were multiplicative, not additive.
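The “variance eraser” effect is easy to reproduce on synthetic data. This is a minimal sketch (with hypothetical numbers, not the vendor’s actual data or model): the true process has only a baseline and a holiday bump, yet a regression with one dummy per week of the year absorbs the noise in each of its 3-observation groups and reports a flattering in-sample R^2.

```python
import numpy as np

rng = np.random.default_rng(42)

n_weeks = 156                      # 3 years of weekly observations
week = np.arange(n_weeks) % 52     # week-of-year index

# Hypothetical data-generating process: flat baseline plus a
# holiday-season bump plus noise -- only 2 parameters really matter.
holiday = (week >= 48).astype(float)
y = 100.0 + 30.0 * holiday + rng.normal(0.0, 10.0, n_weeks)

def r_squared(X, y):
    """In-sample R^2 from an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# "Variance eraser": one dummy per week of the year -- 52 parameters,
# each estimated from just 3 observations.
X_52 = np.zeros((n_weeks, 52))
X_52[np.arange(n_weeks), week] = 1.0

# Targeted alternative: intercept + holiday dummy -- 2 parameters.
X_2 = np.column_stack([np.ones(n_weeks), holiday])

print(f"52-dummy model R^2:    {r_squared(X_52, y):.3f}")
print(f"2-parameter model R^2: {r_squared(X_2, y):.3f}")
```

Because the 2-parameter design lies in the column space of the 52 dummies, the big model’s R^2 is guaranteed to be at least as high in-sample, even though all of the improvement here is fitted noise that would evaporate out of sample.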

The reality is that the need for models is almost unlimited.  The physical sciences are fairly well formalized, but models span a discouragingly small fraction of the scope of human behavior and institutions. We need to get the cost of providing insight down, not restrict the supply through infighting. The real enemy is seldom other models, but rather superstition, guesswork and propaganda.