Thinking systemically about safety

Accidents involve much more than the reliability of parts. Safety emerges from the systemic interactions of devices, people, and organizations. Nancy Leveson’s Engineering a Safer World (free PDF currently at the MIT Press link, lower left) picks up many of the threads in Perrow’s classic Normal Accidents, plus much more, and weaves them into a formal theory of systems safety. It comes to life with many interesting examples and prescriptions for best practice.

So far, I’ve only had time to read this the way I read the New Yorker (cartoons first), but a few pictures give a sense of the richness of systems perspectives that are brought to bear on the problems of safety:

[Figure: Leveson - Pharma safety]
[Figure: Leveson - Safety as control]
[Figure: Leveson - Aviation information flow]
The contrast between the figure above and the one that follows in the book, showing links that were actually in place, is striking. (I won’t spoil the surprise – you’ll have to go look for yourself.)

[Figure: Leveson - Columbia disaster]

The Seven Deadly Sins of Managing Complex Systems

I was rereading The Fifth Discipline on the way to Boston the other day, and something got me started on this. Wrath, greed, sloth, pride, lust, envy, and gluttony are the downfall of individuals, but what about the downfall of systems? Here’s my list, in no particular order:

  1. Information pollution. Sometimes known as lying, but also common in milder forms, such as greenwash. Example: twenty years ago, the “recycled” symbol was redefined to mean “recyclable” – a big dilution of meaning.
  2. Elimination of diversity. Example: overconsolidation of industries (finance, telecom, …). As Jay Forrester reportedly said, “free trade is a mechanism for allowing all regions to reach all limits at once.”
  3. Changing the top-level rules in pursuit of personal gain. Example: the Starpower game. As long as we pretend to want to maximize welfare in some broad sense, the system rules need to provide an equitable framework, within which individuals can pursue self-interest.
  4. Certainty. Planning for it leads to fragile strategies. If you can’t imagine a way you could be wrong, you’re probably a fanatic.
  5. Elimination of slack. Normally this is regarded as a form of optimization, but a system without any slack can’t change (except catastrophically). How are teachers supposed to improve their teaching when every minute is filled with requirements? (There’s a small queueing sketch after this list.)
  6. Superstition. Attribution of cause by correlation or coincidence, including misapplied pattern-matching.
  7. The four horsemen from classic SD work on flawed mental models: linear, static, open-loop, laundry-list thinking.
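To make item 5 concrete, here’s a rough sketch (mine, not from any of the sources above) using a textbook M/M/1 queue: average time in system is 1/(μ − λ), so waiting explodes as utilization approaches 100%. The service rate of 10 jobs per hour is an arbitrary assumption for illustration.

```python
# Sketch: why a system with no slack is fragile.
# In an M/M/1 queue the average time in system is 1 / (mu - lambda),
# which blows up as the arrival rate approaches capacity.

def time_in_system(arrival_rate: float, service_rate: float) -> float:
    """Average time a job spends in an M/M/1 queue."""
    if arrival_rate >= service_rate:
        return float("inf")  # no slack at all: the backlog grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 10.0  # jobs per hour the system can handle (arbitrary)
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilization * service_rate
    print(f"utilization {utilization:.0%}: "
          f"avg time in system = {time_in_system(arrival_rate, service_rate):.2f} h")
```

Going from 50% to 99% utilization multiplies the average time in system by fifty: squeezing out the last bit of slack buys a little throughput, at the price of a system that can no longer absorb disturbances or make time to change.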

That’s seven (cheating a little). But I think there are more candidates that don’t quite make the big time:

  • Impatience. Don’t just do something, stand there. Sometimes.
  • Failure to account for delays (see the sketch after this list).
  • Abstention from top-level decision making (essentially not voting).
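For the delay point, a toy goal-seeking model (again my own sketch, not from the post) makes the mechanism visible: the same corrective rule that converges smoothly on current information overshoots and oscillates when it acts on a measurement that is a few periods stale.

```python
# Sketch: a stock adjusted toward a target of 100, where the correction
# is based on a perception of the stock that is `delay` periods out of date.

def simulate(gain: float, delay: int, steps: int = 20, target: float = 100.0):
    history = [0.0]                               # stock starts at zero
    for t in range(steps):
        perceived = history[max(0, t - delay)]    # stale measurement
        history.append(history[-1] + gain * (target - perceived))
    return history

print("no delay:  ", [round(x) for x in simulate(gain=0.5, delay=0)[:10]])
print("4-step lag:", [round(x) for x in simulate(gain=0.5, delay=4)[:10]])
# With no delay the stock settles at the target; with the lag it
# overshoots well past 100 and swings back, despite the identical rule.
```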

The very idea of compiling such a list only makes sense if we’re talking about the downfall of human systems, or systems managed for the benefit of “us” in some loose sense, but perhaps anthropocentrism is a sin in itself.

I’m sure others can think of more! I’d be interested to hear about them in comments.

Four Legs and a Tail

An effective climate policy needs prices, technology, institutional rules, and preferences.

I’m continually irked by calls for R&D to save us from climate change. Yes, we need it very badly, but it’s no panacea. Without other signals, like a price on carbon, technology isn’t going to do a lot. It’s a one-legged dog. True, we might get lucky with some magic bullet, but I’m not willing to count on that. An effective climate policy needs four legs:

  1. Prices
  2. Technology (the landscape of possibilities on which we make decisions)
  3. Institutional rules and procedures
  4. Preferences, operating within social networks
