The seven-track melee

In boiled frogs I explored the implications of using local weather to reason about global climate. The statistical fallacies involved (treating local as global, and weather as climate) are examples of the kinds of failures on my list of reasons for science denial.

As I pondered the challenge of upgrading mental models to cope with big problems like climate, I ran across a great paper by Barry Richmond (creator of STELLA, and my first SD teacher long ago). He inventories seven systems thinking skills, which nicely dovetail with my thinking about coping with complex problems.

Some excerpts:

Skill 1: dynamic thinking

Dynamic thinking is the ability to see and deduce behavior patterns rather than focusing on, and seeking to predict, events. It’s thinking about phenomena as resulting from ongoing circular processes unfolding through time rather than as belonging to a set of factors. …

Skill 2: closed-loop thinking

The second type of thinking process, closed-loop thinking, is closely linked to the first, dynamic thinking. As already noted, when people think in terms of closed loops, they see the world as a set of ongoing, interdependent processes rather than as a laundry list of one-way relations between a group of factors and a phenomenon that these factors are causing. But there is more. When exercising closed-loop thinking, people will look to the loops themselves (i.e., the circular cause-effect relations) as being responsible for generating the behavior patterns exhibited by a system. …

Skill 3: generic thinking

Just as most people are captivated by events, they are generally locked into thinking in terms of specifics. … was it Hitler, Napoleon, Joan of Arc, Martin Luther King who determined changes in history, or tides in history that swept these figures along on their crests? … Apprehending the similarities in the underlying feedback-loop relations that generate a predator-prey cycle, a manic-depressive swing, the oscillation in an L-C circuit, and a business cycle can demonstrate how generic thinking can be applied to virtually any arena.
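As an aside from me (not Richmond), the predator-prey case makes the "generic" point easy to demonstrate: the oscillation comes from the coupled-stock feedback structure, not from anything specific to ecology. A minimal sketch, with purely illustrative parameters:

```python
# A minimal generic-oscillator sketch (my illustration, not from Richmond's
# paper): two coupled stocks in a feedback loop, integrated with Euler's
# method. Parameters are illustrative, not calibrated to any real system.

def simulate(alpha=1.0, beta=0.5, delta=0.2, gamma=0.6,
             prey=2.0, predators=1.0, dt=0.01, t_end=50.0):
    """Lotka-Volterra-style structure: prey grow and are consumed;
    predators grow by consuming prey and decline on their own."""
    series = []
    for step in range(int(t_end / dt)):
        d_prey = alpha * prey - beta * prey * predators
        d_pred = delta * prey * predators - gamma * predators
        prey += d_prey * dt
        predators += d_pred * dt
        series.append((step * dt, prey, predators))
    return series

if __name__ == "__main__":
    for t, x, y in simulate()[::500]:   # print every 5 time units
        print(f"t={t:5.1f}  prey={x:6.2f}  predators={y:6.2f}")
```

Relabel the stocks as inventory and workforce, or charge and current, and qualitatively similar cycles fall out of analogous loop structures, which is the generic lesson.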

Skill 4: structural thinking

Structural thinking is one of the most disciplined of the systems thinking tracks. It’s here that people must think in terms of units of measure, or dimensions. Physical conservation laws are rigorously adhered to in this domain. The distinction between a stock and a flow is emphasized. …
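A tiny stock-and-flow sketch (my example, with made-up names and numbers) shows what the discipline buys: the stock changes only through its flows, and the units have to balance at every step.

```python
# Illustrative stock-and-flow sketch (my example, not from Richmond's paper).
# Units are tracked in comments; a stock [widgets] can only change through
# flows [widgets/week], so conservation holds by construction.

dt = 0.25                 # [weeks] integration time step
inventory = 100.0         # [widgets] stock
production = 20.0         # [widgets/week] inflow
shipment_time = 4.0       # [weeks] average residence time

for step in range(int(20 / dt)):             # simulate 20 weeks
    shipments = inventory / shipment_time    # [widgets/week] outflow
    inventory += (production - shipments) * dt   # [widgets/week]*[weeks] = [widgets]

print(f"Inventory after 20 weeks: {inventory:.1f} widgets")
print(f"Steady state implied by the units: {production * shipment_time:.1f} widgets")
```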

Skill 5: operational thinking

Operational thinking goes hand in hand with structural thinking. Thinking operationally means thinking in terms of how things really work—not how they theoretically work, or how one might fashion a bit of algebra capable of generating realistic-looking output. …

Skill 6: continuum thinking

Continuum thinking is nourished primarily by working with simulation models that have been built using a continuous, as opposed to discrete, modeling approach. … Although, from a mechanical standpoint, the differences between the continuous and discrete formulations may seem unimportant, the associated implications for thinking are quite profound. An “if, then, else” view of the world tends to lead to “us versus them” and “is versus is not” distinctions. Such distinctions, in turn, tend to result in polarized thinking.
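To see the mechanical difference Richmond is pointing at, here's a toy comparison of my own (not from the paper): the same pressure-to-respond relationship written first as an either/or rule, then as a continuous, graded one.

```python
# Toy comparison (mine, not Richmond's): a discrete if/then/else rule versus
# a continuous formulation of the same pressure-to-act relationship.

import math

def discrete_response(shortfall, threshold=10.0):
    # Either/or: no action below the threshold, full action above it.
    return 1.0 if shortfall > threshold else 0.0

def continuous_response(shortfall, threshold=10.0, steepness=0.3):
    # Graded: effort rises smoothly as the shortfall grows past the threshold.
    return 1.0 / (1.0 + math.exp(-steepness * (shortfall - threshold)))

for s in [0, 5, 9, 11, 15, 25]:
    print(f"shortfall={s:3d}  discrete={discrete_response(s):.1f}  "
          f"continuous={continuous_response(s):.2f}")
```

The discrete rule partitions the world into two camps; the continuous one keeps the whole spectrum in view, which is the habit of mind Richmond is after.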

Skill 7: scientific thinking

… Let me begin by saying what scientific thinking is not. My definition of scientific thinking has virtually nothing to do with absolute numerical measurement. … To me, scientific thinking has more to do with quantification than measurement. … Thinking scientifically also means being rigorous about testing hypotheses. … People thinking scientifically modify only one thing at a time and hold all else constant. They also test their models from steady state, using idealized inputs to call forth “natural frequency responses.”
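The steady-state testing practice in that last sentence is easy to illustrate with a toy model (my construction, illustrative numbers): initialize a simple inventory-adjustment loop in equilibrium, apply one idealized step input, and watch the loop's own response, changing nothing else.

```python
# Toy step test (my illustration of the practice described above): start a
# simple inventory-adjustment loop in steady state, apply one idealized step
# in demand, and watch the loop's own response, changing nothing else.

dt = 0.25                    # [weeks]
target_coverage = 4.0        # [weeks] of demand to hold as inventory
adjustment_time = 2.0        # [weeks] to close inventory gaps
demand = 10.0                # [widgets/week]; shipments assumed equal to demand

inventory = demand * target_coverage    # initialize in steady state (40 widgets)

for step in range(int(40 / dt)):        # simulate 40 weeks
    t = step * dt
    if t >= 5.0:                        # idealized step input at week 5
        demand = 12.0
    desired_inventory = demand * target_coverage
    orders = demand + (desired_inventory - inventory) / adjustment_time
    inventory += (orders - demand) * dt  # net flow: orders in, shipments out
    if step % 20 == 0:
        print(f"t={t:5.1f}  demand={demand:5.1f}  inventory={inventory:6.2f}")
```

A first-order loop like this just relaxes exponentially to its new equilibrium; richer structures with delays reveal their characteristic oscillation under the same kind of test, which is what "natural frequency responses" refers to.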

When one becomes aware that good systems thinking involves working on at least these seven tracks simultaneously, it becomes a lot easier to understand why people trying to learn this framework often go on overload. When these tracks are explicitly organized, and separate attention is paid to develop each skill, the resulting bite-sized pieces make the fare much more digestible. …

The connections among the various physical, social, and ecological subsystems that make up our reality are tightening. There is indeed less and less “away,” both spatially and temporally, to throw things into. Unfortunately, the evolution of our thinking capabilities has not kept pace with this growing level of interdependence. The consequence is that the problems we now face are stubbornly resistant to our interventions. To “get back into the foot race,” we will need to coherently evolve our educational system …

… By viewing systems thinking within the broader context of critical thinking skills, and by recognizing the multidimensional nature of the thinking skills involved in systems thinking, we can greatly reduce the time it takes for people to apprehend this framework. As this framework increasingly becomes the context within which we think, we will gain much greater leverage in addressing the pressing issues that await us …

Source: Barry Richmond, “Systems thinking: critical thinking skills for the 1990s and beyond,” System Dynamics Review, Vol. 9, No. 2, Summer 1993.

That was 18 years ago, and I’d argue that we’re still not back in the race. Maybe recognizing the inherent complexity of the challenge and breaking it down into digestible chunks will help, though.

Hand over your cell phones

Adam Frank @NPR says, “Science Deniers: Hand Over Your Cellphones!”

I’m sympathetic to the notion that attitudes toward science are often a matter of ideological convenience rather than skeptical reasoning. However, we don’t have a cell phone denial problem. Why? I think it helps to identify the contributing factors in circumstances in which denial occurs:

  • Non-experimental science (reliance on observations of natural experiments; no controls or randomized assignment)
  • Infrequent replication (few examples within the experience of an individual or community)
  • High noise (more specifically, low signal-to-noise ratio)
  • Complexity (nonlinearity, integrations or long delays between cause and effect, multiple agents, emergent phenomena)
  • “Unsalience” (you can’t touch, taste, see, hear, or smell the variables in question)
  • Cost (there’s some social or economic penalty imposed by the policy implications of the theory)
  • Commons (the risk of being wrong accrues to society more than the individual)

It’s easy to believe in the radio waves used by cell phones, or the relativistic corrections applied in GPS, because their only problematic feature is invisibility. Calling grandma is a pretty compelling experiment, which one can repeat as often as needed to dispel any doubts about those mysterious electromagnetic waves.

At one time, the debate over the structure of the solar system was subject to these problems. There was a big social cost to believing the heliocentric model (the Inquisition), and little practical benefit to being right. Theory relied on observations that were imprecise and not salient to the casual observer. Now that we have low-noise observations, replicated experiments (space probe launches), and so on, there aren’t too many geocentrists around.

Climate, on the other hand, has all of these problems. Of particular importance, the commons and long-time-scale aspects of the problem shelter individuals from selection pressure against wrong beliefs.

Selection for deception?

Eric R. Weinstein on Edge’s 2011 question:

The sophisticated “scientific concept” with the greatest potential to enhance human understanding may be argued to come not from the halls of academe, but rather from the unlikely research environment of professional wrestling.

Evolutionary biologists Richard Alexander and Robert Trivers have recently emphasized that it is deception rather than information that often plays the decisive role in systems of selective pressures. Yet most of our thinking continues to treat deception as something of a perturbation on the exchange of pure information, leaving us unprepared to contemplate a world in which fakery may reliably crowd out the genuine. In particular, humanity’s future selective pressures appear likely to remain tied to economic theory which currently uses as its central construct a market model based on assumptions of perfect information.

If we are to take selection more seriously within humans, we may fairly ask what rigorous system would be capable of tying together an altered reality of layered falsehoods in which absolutely nothing can be assumed to be as it appears. Such a system, in continuous development for more than a century, is known to exist and now supports an intricate multi-billion dollar business empire of pure hokum. It is known to wrestling’s insiders as “Kayfabe”.

Were Kayfabe to become part of our toolkit for the twenty-first century, we would undoubtedly have an easier time understanding a world in which investigative journalism seems to have vanished and bitter corporate rivals cooperate on everything from joint ventures to lobbying efforts. Perhaps confusing battles between “freshwater” Chicago macro economists and Ivy league “Saltwater” theorists could be best understood as happening within a single “orthodox promotion” given that both groups suffered no injury from failing (equally) to predict the recent financial crisis. …

Reasoning was not designed to pursue the truth

Uh oh:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found. – Mercier & Sperber via Edge.org, which has a video conversation with coauthor Mercier.

This makes sense to me, but I think it can’t be the whole story. There must be at least a little evolutionary advantage to an ability to predict the consequences of one’s actions. The fact that it appears to be dominated by confirmation bias and other pathologies may be indicative of how much we are social animals, and how long we’ve been that way.

It’s easy to see why this might occur by looking at the modern evolutionary landscape for ideas. There’s immediate punishment for touching a hot stove, but for any complex system, attribution is difficult. It’s easy to see how the immediate rewards from telling your fellow tribesmen crazy things might exceed the delayed and distant rewards of actually being right. In addition, wherever there are stocks of resources lying about, there are strong incentives to succeed by appropriation rather than creation. If you’re really clever with your argumentation, you can even make appropriation resemble creation.

The solution is to use our big brains to raise the bar, by making better use of models and other tools for analysis of and communication about complex systems.

Nothing that you will learn in the course of your studies will be of the slightest possible use to you in after life, save only this, that if you work hard and intelligently you should be able to detect when a man is talking rot, and that, in my view, is the main, if not the sole, purpose of education. – John Alexander Smith, Oxford, 1914

So far, though, models seem to be serving argumentation as much as reasoning. Are we stuck with that?

The myth of optimal depletion

Fifteen years ago, when I was working on my dissertation, I read a lot of the economic literature on resource management. I was looking for a behavioral model of the management of depletable resources like oil and gas. I never did find one (and still haven’t, though I haven’t been looking as hard in the last few years).

Instead, the literature focused on optimal depletion models. Essentially these characterize the extraction of resources that would occur in an idealized market – a single, infinitely-lived resource manager, perfect information about the resource base and about the future (!), no externalities, no lock-in effects.

It’s always useful to know the optimal trajectory for a managed resource – it identifies the upper bound for improvement and suggests strategic or policy changes to achieve the ideal. But many authors have transplanted these optimal depletion models into real-world policy frameworks directly, without determining whether the idealized assumptions hold in reality.

The problem is that they don’t. There are some obvious failings – for example, I’m pretty certain a priori that no resource manager actually knows the future. Unreal assumptions are reflected in unreal model behavior – I’ve seen dozens of papers that discuss results matching the classic Hotelling framework – prices rising smoothly at the interest rate, with the extraction rate falling to match, as if it had something to do with what we observe.
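For reference, the Hotelling behavior being invoked is easy to reproduce. In the frictionless model the owner is indifferent about when to extract only if the net price rises at the rate of interest; with a constant-elasticity demand curve, extraction then falls smoothly as the price climbs. A back-of-the-envelope sketch with made-up numbers (and skipping the condition that the initial price be set so cumulative extraction just exhausts the stock):

```python
# Back-of-the-envelope Hotelling sketch, with made-up numbers: the net price
# rises at the interest rate, and a constant-elasticity demand curve then
# implies a smoothly declining extraction rate. A full model would also pin
# down p0 so that cumulative extraction exactly exhausts the resource.

r = 0.05               # interest rate [1/year]
p0 = 20.0              # initial net price [$/unit]
a, eps = 1000.0, 1.5   # demand scale and price elasticity (illustrative)

for year in range(0, 51, 10):
    price = p0 * (1 + r) ** year       # Hotelling rule: price grows at r
    extraction = a * price ** (-eps)   # demand falls as price rises
    print(f"year {year:2d}: price = {price:7.2f}  extraction = {extraction:6.2f}")
```

That smooth exponential is exactly the pattern that bears little resemblance to what we actually observe.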

The fundamental failure is valuing the normative knowledge about small, analytically tractable problems above the insight that arises from experiments with a model that describes actual decision making – complete with cognitive limitations, agency problems, and other foibles.

In typical optimal depletion models, an agent controls a resource, and extracts it to maximize discounted utility. Firms succeed in managing other assets reasonably well, so why not? Well, there’s a very fundamental problem: in most places, firms don’t control resources. They control reserves. Governments control resources. As a result, firms’ ownership of the long-term depletion challenge extends only as far as their asset exposure – a few decades at most. If there are principal-agent problems within firms, their effective horizon is even shorter – only as long as the tenure of a manager (worse things can happen, too).

Governments are no better; politicians and despots both have incentives to deplete resources to raise money to pacify the populace. This encourages a “sell low” strategy – when oil prices are low, governments have to sell more to meet fixed obligations (the other end of the backward-bending supply curve). And, of course, a government that wisely shepherds its resources can always lose them to a neighbor that extracts its resources quickly and invests the proceeds in military hardware.

The US is unusual in that many mineral rights are privately held, but still the government’s management of its share is instructive. I’ll just skip over the circus at the MMS and go to Montana’s trust lands. The mission of the trust is to provide a permanent endowment for public schools. But the way the trust is run could hardly be less likely to maximize or even sustain school revenue.

Fundamentally, the whole process is unmanaged – the trust makes no attempt to control the rate at which parcels are leased for extraction. Instead, trust procedures put the leasing of tracts in the hands of developers – parcels are auctioned whenever a prospective bidder requests one. Once anyone gets a whiff of information about the prospects of a tract, they must act to bid – if they’re early enough, they may get lucky and face little or no competition in the auction (easier than you’d think, because the trust doesn’t provide much notice of sales). Once buyers obtain a lease, they must drill within five years, or the lease expires. This land-rush mentality leaves the trust with no control over price or the rate of extraction – they just take their paltry 16% cut (plus or minus), whenever developers choose to give it to them. When you read statements from the government resource managers, they’re unapologetically happy about it: they talk about the trust as if it were a jobs program, not an endowment.

This sort of structure is the norm, not the exception. It would be a strange world in which all of the competing biases in the process cancelled each other out, and yielded a globally optimal outcome in spite of local irrationality. The result, I think, is that policy analyses built on climate and energy models are biased, possibly in an unknown direction. On one hand, it seems likely that there’s a negative externality from extraction of public resources above the optimal rate, as in Montana. On the other hand, there might be harmful spillovers from climate or energy policies that increase the use of natural gas, if they exacerbate problems with a suboptimal extraction trajectory.

I’ve done a little sniffing around lately, and it seems that the state of the art in integrated assessment models isn’t too different from what it was in 1995 – most models still use exogenous depletion trajectories or some kind of optimization or equilibrium approach. The only real innovation I’ve seen is a stochastic model-within-a-model approach – essentially, agents know the structure of the system they’re in, but are uncertain about its state, so they make stochastically optimal decisions at each point in time. This is a step in the right direction, but still implies a very high cognitive load and degree of intended rationality that doesn’t square with real institutions. I’d be very interested to hear about anything new that moves toward a true behavioral model of resource management.
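For what it’s worth, a caricature of that model-within-a-model approach (entirely my own toy version, not any published implementation) looks something like this: the agent knows the depletion structure, observes the remaining stock only noisily, updates a belief each period, and then makes a certainty-equivalent extraction decision.

```python
# Caricature of the model-within-a-model idea (my own toy version, not any
# published model): the agent knows the stock-depletion structure, observes
# the remaining stock only with noise, updates a belief with a scalar Kalman
# filter, and makes a certainty-equivalent extraction decision each period.

import random

random.seed(1)

true_stock = 1000.0
belief_mean, belief_var = 800.0, 200.0 ** 2   # prior belief: biased, uncertain
obs_var = 100.0 ** 2                          # measurement noise variance
horizon = 40.0                                # [years] planning horizon

for year in range(20):
    obs = true_stock + random.gauss(0.0, obs_var ** 0.5)   # noisy survey

    # scalar Kalman update of the belief about the remaining stock
    gain = belief_var / (belief_var + obs_var)
    belief_mean += gain * (obs - belief_mean)
    belief_var *= (1.0 - gain)

    # certainty-equivalent rule: spread the believed stock over the horizon
    extraction = min(max(belief_mean, 0.0) / horizon, true_stock)
    true_stock -= extraction
    belief_mean = max(belief_mean - extraction, 0.0)   # belief tracks own extraction

    if year % 5 == 0:
        print(f"year {year:2d}: believed stock = {belief_mean:7.1f}  "
              f"extraction = {extraction:5.1f}  true stock = {true_stock:7.1f}")
```

Even this caricature assumes the agent knows the structure and the noise statistics exactly, which is the cognitive-load objection above; real institutions fall well short of it.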

Football physics & perception

SEED has a nice story on perception of curving shots in football (soccer).

The physics of the curving trajectory is interesting. In short, as drag slows a light, spinning ball, the sideways force from its spin bends the path more and more sharply, so the trajectory can transition from a roughly circular arc to a tighter spiral late in flight, surprising the goalkeeper.
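A crude two-dimensional sketch (my own, in the horizontal plane, ignoring gravity, with roughly plausible but uncalibrated parameters) shows the mechanism: drag grows with the square of speed while the sidespin force grows roughly with speed, so the radius of curvature shrinks as the ball slows and the curve tightens late in flight.

```python
# Crude 2D sketch of a sidespin free kick in the horizontal plane (gravity
# ignored; parameters roughly plausible but not fit to any real kick): drag
# grows with speed squared, the sidespin (Magnus) force grows roughly with
# speed, so the radius of curvature, ~ m*v/k_spin, shrinks as the ball slows.

import math

m = 0.43        # [kg] ball mass
k_drag = 0.006  # [kg/m] lumped quadratic drag coefficient (illustrative)
k_spin = 0.12   # [kg/s] lumped sidespin coefficient, spin held fixed (illustrative)
dt = 0.005      # [s]

x, y = 0.0, 0.0        # [m] downfield and sideways position
vx, vy = 30.0, 0.0     # [m/s] struck straight at the goal

for step in range(1, int(1.2 / dt) + 1):
    speed = math.hypot(vx, vy)
    # drag opposes the velocity; the spin force acts perpendicular to it
    ax = (-k_drag * speed * vx - k_spin * vy) / m
    ay = (-k_drag * speed * vy + k_spin * vx) / m
    vx += ax * dt
    vy += ay * dt
    x += vx * dt
    y += vy * dt
    if step % 60 == 0:
        print(f"t={step*dt:4.2f}s  downfield={x:5.1f} m  "
              f"sideways={y:5.2f} m  turn radius={m*speed/k_spin:5.1f} m")
```

Because the spin typically decays more slowly than the forward speed, the curvature keeps increasing, and the sharpest part of the bend arrives late, right when the keeper has to commit.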

What I find really interesting, though, is that goalkeepers don’t anticipate this.

But goalkeepers see hundreds of free kicks in practice on a daily basis. Surely they’d eventually adapt to bending shots, wouldn’t they?

… Elite professionals from some of the top soccer clubs in the world were shown simulations of straight and bending free kicks, which disappeared from view 10 to 12.5 meters from the goal. They then had to predict the path of the ball. The players were accurate for straight kicks, but they made systematic errors on bending shots. Instead of taking the curve into account, players tended to assume the ball would continue straight along the path it was following when it disappeared. Even more surprisingly, goalkeepers were no better at predicting the path of bending balls than other players. …

I think the interesting question is, could they be trained to anticipate this? It’s fairly easy for the goalie to observe the early trajectory of a ball, but due to the nonlinear transition to a new curvature, that’s not helpful. To guess whether the ball might suddenly take a wicked turn, one would have to judge its spin, which has to be much harder. My guess is that prediction is difficult, so the only option is to take robust action. In the case of the famous Roberto Carlos free kick, one might guess that the goalie should have moved to cover the post, even if he judged that the ball would be wide. (But who am I to say? I’m a lousy soccer player – I find 9-year-olds to be stiff competition.)

SEED has another example:

I wrote about a similar problem on my blog earlier this year: How baseball fielders track fly balls. Researchers found that even when the ball is not spinning, outfielders don’t follow the optimum path to the ball—instead they constantly update their position in response to the ball’s motion.

At first this sounds like a classic lion-gazelle pursuit problem. But there’s one key difference: in pursuit problems I’ve seen, the opponent’s location is known, so the questions are all about physics and (maybe) strategic behavior. In soccer and baseball, at least part of the ball’s state (spin, for example) is at best poorly observed by the receiver. Therefore trajectories that appear to be suboptimal might actually be robust responses to imperfect measurement.
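A toy comparison of my own (not from the SEED piece or the research it cites) makes the robustness point concrete: a fielder who commits to a landing spot predicted from one noisy early read locks in that error, while one who keeps re-predicting from noisy current observations rides the shrinking error down as the ball gets closer.

```python
# Toy comparison, entirely my own construction: a fielder who commits to a
# landing point predicted from one noisy early read, versus one who keeps
# re-predicting from noisy current observations. The point is robustness to
# imperfect measurement, not realism.

import random

G = 9.8  # [m/s^2]

def simulate(strategy, seed):
    random.seed(seed)
    bx, bvx, bz, bvz = 0.0, 25.0, 0.0, 20.0   # ball position/velocity [m, m/s]
    fielder, top_speed = 90.0, 7.0            # fielder start [m] and speed [m/s]
    dt, noise = 0.05, 0.15                    # step [s]; fractional velocity noise

    def predicted_landing():
        vx_obs = bvx * (1 + random.gauss(0.0, noise))
        vz_obs = bvz * (1 + random.gauss(0.0, noise))
        t_rem = (vz_obs + (vz_obs ** 2 + 2 * G * bz) ** 0.5) / G  # time left aloft
        return bx + vx_obs * t_rem

    target = predicted_landing()              # "commit" never revises this
    while bz >= 0.0:
        if strategy == "update":
            target = predicted_landing()      # re-read and re-predict each step
        move = max(-top_speed * dt, min(top_speed * dt, target - fielder))
        fielder += move
        bx += bvx * dt
        bz += bvz * dt
        bvz -= G * dt
    return abs(fielder - bx)                  # miss distance at touchdown

for name in ("commit", "update"):
    misses = [simulate(name, s) for s in range(200)]
    print(f"{name:6s}: mean miss = {sum(misses) / len(misses):4.1f} m")
```

With arbitrary numbers like these the exact gap doesn’t mean much; the qualitative point is that continually updating on imperfect observations can beat committing to a path that would only be optimal under perfect information.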

The problems faced by goalies and outfielders are in some ways much like those facing managers: what do you do, given imperfect information about a surprisingly nonlinear world?