I’ve been watching a variety of explanations of the financial crisis. As a wise friend noticed, the only thing in short supply is acceptance of responsibility. I’ve seen theories that place the seminal event as far back as the Carter administration. Does that make sense, causally?
In a formal sense, it might in some cases. I could have inhaled a carcinogen a decade ago that only leads to cancer a decade from now, without any outside triggers. But I think that sort of system is a rarity. As a practical matter, we have to look elsewhere.
Socioeconomic systems are at a constant slow boil, with many potential threats existing below the threshold of imminent danger at any given time. Occasionally, one grows exponentially and emerges as a real catastrophe. It seems like a surprise, because of the hockey stick behavior of growth (the French riddle of the lily pond again). However, most apparent low-level threats never emerge from the noise. They don’t have enough gain to grow fast, or they get shut down by some unsuspected negative feedback.
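The lily pond riddle makes the hockey stick concrete, and it takes only a few lines to see why the surprise is so universal. The doubling time and 30-day horizon below are the riddle's; the function itself is just an illustration:

```python
# The lily pond riddle: a patch of lilies doubles daily and covers the
# whole pond on day 30. Working backwards, coverage on any earlier day
# is a power of two -- invisible for weeks, then suddenly everywhere.
def coverage(day, full_day=30):
    """Fraction of the pond covered on a given day, doubling daily."""
    return 2.0 ** (day - full_day)

coverage(29)  # 0.5  -- half covered only ONE day before the end
coverage(23)  # ~0.008 -- a week out, under 1%, lost in the noise
```

That last line is the whole problem: a week before catastrophe, the threat occupies less than one percent of the pond, indistinguishable from all the low-level threats that never amount to anything.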
If we tried to address every potential problem at its inception, we’d go crazy. We’d all be wearing tinfoil hats. There might be some cases in which we can predict the consequences of an action and avoid creating some danger, but in general there are simply too many potential problems to worry about. What we need, then, is not observation of initial conditions amidst chaos, but a feedback control system that detects significant threats when they’re big enough for the signal to emerge from the noise, but small enough to manage. “Significant threats” have at least two attributes: they’re propelled by reinforcing feedback, and they’re big enough to matter. You have to be careful about the latter criterion, because of the hockey stick effect and delays. A problem is “big enough to matter” when the time it would take to grow to fruition, accounting for problems already in the pipeline, is comparable to the time delays involved in perception and solution.
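That criterion can be written down. For an exponentially growing problem, the time to fruition follows directly from the growth rate, and the decision rule is a comparison against our total reaction time. Everything here is a sketch of my own devising — the function names, the noise floor, and the parameters are illustrative, not from any agency's actual practice:

```python
import math

def time_to_fruition(current_size, critical_size, growth_rate):
    """Time for an exponentially growing problem to reach critical size:
    solve current * exp(rate * t) = critical for t."""
    return math.log(critical_size / current_size) / growth_rate

def worth_acting(current_size, critical_size, growth_rate,
                 perception_delay, solution_delay, noise_floor):
    """A threat is actionable when (a) it's visible above the noise, and
    (b) its time to fruition is comparable to, or shorter than, the total
    delay in perceiving the problem and implementing a solution."""
    if current_size < noise_floor:
        return False  # can't yet distinguish it from background chatter
    remaining = time_to_fruition(current_size, critical_size, growth_rate)
    return remaining <= perception_delay + solution_delay
```

The uncomfortable corollary: the faster the growth and the longer the institutional delays, the earlier — and therefore the deeper in the noise — the decision point falls.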
So, with respect to the bailout, should we blame the initial condition (some obscure rule change when we were distracted by Billy Beer or Monica) or the control system (agencies and firms who ought to have noticed, at some point, that subprime lending, lack of transparency, and leverage were a deadly combination)? I think the latter. For climate, it would be rather silly to blame the industrial revolution, so again, we should be looking at whether the control system is adequately monitoring the consequences of emissions and acting on that information.
Of course, blame is a fairly pointless indulgence, unless it leads us to fix things. Usually, the proximate causes of problems get fixed. But we don’t seem to have evolved a systemic fix to the way we monitor and react to emerging problems.
It seemed to me that it ought to be simple to work out some rules of thumb for the timing of catastrophe avoidance, as long as the problem could be characterized simply. I built an illustrative Vensim model of a generic problem, and parameterized it for climate and financial crises. I won’t have a chance to write up results for a few days, but here’s the model for now.
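For readers without Vensim, the generic structure can be gestured at in a few lines of Python. To be clear, this is not the model linked above — it's a minimal sketch of the same shape: a threat stock fed by a reinforcing loop, and a balancing control loop that perceives the threat through a first-order delay and responds only once the perceived level clears the noise floor. All parameter values are arbitrary:

```python
# Minimal sketch (NOT the actual Vensim model) of a generic emerging
# threat: reinforcing growth vs. a delayed, threshold-gated control loop.
def simulate(growth_rate=0.5, emergence=0.01, perception_delay=2.0,
             response_strength=1.0, noise_threshold=1.0,
             dt=0.05, horizon=30.0):
    threat, perceived = 0.01, 0.0
    history = []
    for i in range(int(horizon / dt)):
        # first-order smoothing: perception lags the true threat
        perceived += (threat - perceived) / perception_delay * dt
        # the control loop acts only on threats visible above the noise
        response = (response_strength * perceived
                    if perceived > noise_threshold else 0.0)
        # reinforcing growth, a trickle of new problems, minus control;
        # the threat stock can't go negative
        threat = max(threat + (emergence + growth_rate * threat
                               - response) * dt, 0.0)
        history.append((i * dt, threat))
    return history
```

Run it with `response_strength=0` and the threat explodes on the familiar hockey stick; with the control loop enabled, the threat peaks shortly after clearing the noise floor and is pulled back down. The interesting experiments — which I'll leave for the writeup — are in stretching the perception delay relative to the doubling time.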