We report the sad news that longtime system dynamicist R. Geoffrey Coyle died on November 19, 2012. Geoff was 74. He started his career as a mining engineer. Having completed a PhD in Operations Research, he came to Cambridge, Massachusetts from the UK in the late 1960s and studied with Jay Forrester to learn system dynamics. Upon his return to the UK, he set about developing system dynamics there, founding the first system dynamics group in the UK, at the University of Bradford in 1970. This group grew terrifically and produced some of the most important people in our field. Geoff and his students made enormously important contributions to the field, as has the next generation of their students, all following in Geoff’s footsteps and under his tutelage.
Geoff and the Bradford group also founded the first system dynamics journal, Dynamica. They created DYSMAP, the first system dynamics software with built-in optimization and built-in dimensional consistency checking.
Geoff authored a number of very important books in the field, including Management System Dynamics (1977), System Dynamics Modelling: A Practical Approach (1996) and Practical Strategy: Tools and Techniques (2004). In 1998, he was the first recipient of the Lifetime Achievement Award of the System Dynamics Society. More recently he returned to his first academic love and wrote a highly acclaimed history of mining in the UK, The Riches Beneath Our Feet (2010). This is a wonderful legacy in the field of system dynamics and beyond.
I realized that, while I’ve always enjoyed his irascibly interesting presentations, I’ve only read a few of his works. So, I’ve collected a Coyle reading list:
Socialism. Communism. “Nazism.” American Exceptionalism. Indoctrination. Buddhism. Meditation. “Americanism.” These are not words one would typically expect to hear in a Winston-Salem/Forsyth County School Board meeting. But at the Board’s last meeting, on October 9th, such terms peppered the statements of public commenters and Board members alike.
I know that, as a systems thinker, I should look for the unstated assumptions that led board members to their critiques, and establish a constructive dialog. But I just can’t do it – I have to call out the fools. While there are some voices of reason, several of the board members and commenters apparently have no understanding of the terms they bandy about, and have no business being involved in the education of anyone, particularly children.
The low point of the exchange:
Jeannie Metcalf said she “will never support anything that has to do with Peter Senge… I don’t care what [the teachers currently trained in Systems Thinking] are teaching. I don’t care what lessons they are doing. He is trying to sell a product. Once it insidiously makes its way into our school system, who knows what he’s going to do. Who knows what he’s going to do to carry out his Buddhist way of thinking and his hatred of Capitalism. I know y’all are gonna be thinkin’ I’m a crazy person, but I’ve been around a long time.”
Yep, you’re crazy all right. In your imaginary parallel universe, “hatred of capitalism” must be a synonym for writing one of the most acclaimed business books ever, sitting at one of the best business schools in the world, and consulting at the highest levels of many Fortune 50 companies.
The common thread among the ST critics appears to be a total failure to actually observe classrooms combined with shoot-the-messenger reasoning from consequences. They see, or imagine, a conclusion that they don’t like, something that appears vaguely environmental or socialist, and assume that it must be part of the hidden agenda of the curriculum. In fact, as supporters pointed out, ST is a method, which could as easily be applied to illustrate the benefits of individualism, markets, or whatnot, as long as they are logically consistent. Of course, if one’s pet virtue has limits or nuances, ST may also reveal those – particularly when simulation is used to formalize arguments. That is what the critics are really afraid of.
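To make the point concrete: the classic first exercise in such a curriculum is a neutral “bathtub” stock-and-flow model, which carries no ideology at all. A minimal sketch (all numbers are illustrative, not from any actual curriculum):

```python
# Minimal stock-and-flow ("bathtub") simulation: a single stock with a
# constant inflow and a proportional outflow, integrated by Euler's method.
# Parameters here are hypothetical, chosen only to show the method.

def simulate_stock(inflow, outflow_fraction, initial_stock, steps):
    """Return the stock's trajectory over the given number of steps."""
    stock = initial_stock
    history = [stock]
    for _ in range(steps):
        outflow = outflow_fraction * stock
        stock += inflow - outflow
        history.append(stock)
    return history

# With a fixed inflow, the stock settles where inflow equals outflow:
# equilibrium = inflow / outflow_fraction = 10 / 0.1 = 100.
trajectory = simulate_stock(inflow=10.0, outflow_fraction=0.1,
                            initial_stock=0.0, steps=200)
print(round(trajectory[-1], 2))  # approaches the equilibrium of 100
```

The same machinery could just as well model a market clearing or a budget balancing; the method itself is agnostic about the conclusion.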
Nate Silver of 538 deserves praise for calling the election in all 50 states, using a fairly simple statistical model and lots of due diligence on the polling data. When the dust settles, I’ll be interested to see a more detailed objective evaluation of the forecast (e.g., some measure of skill, like likelihoods).
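One simple skill measure of the kind I have in mind is the Brier score: the mean squared error between forecast probabilities and 0/1 outcomes. A hedged sketch, using made-up probabilities and outcomes rather than Silver’s actual 2012 numbers:

```python
# Brier score for a probabilistic state-by-state forecast.
# All data below are hypothetical, for illustration only.

def brier_score(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; an always-50/50 forecast scores exactly 0.25."""
    assert len(probabilities) == len(outcomes)
    return sum((p - o) ** 2
               for p, o in zip(probabilities, outcomes)) / len(probabilities)

# p = forecast probability that candidate A wins each state,
# o = 1 if A actually won that state, 0 otherwise (hypothetical).
p = [0.92, 0.80, 0.55, 0.30, 0.10]
o = [1,    1,    1,    0,    0]

sharp = brier_score(p, o)
coin = brier_score([0.5] * len(o), o)
print(sharp < coin)  # a confident, well-calibrated forecast beats a coin flip
```

A full evaluation would also look at calibration and at log-likelihoods, which punish overconfident misses more heavily, but the idea is the same: score the probabilities, not just the calls.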
Many have noted that his approach stands in stark contrast to big-ego punditry:
- The Daily Show (source of the title)
- Paul Krugman
- RealClimate (don’t miss the comic)
- The Baseline Scenario framed the problem nicely before the election.
Another impressive model-based forecasting performance occurred just days before the election, with successful prediction of Hurricane Sandy’s turn to landfall on the East Coast, almost a week in advance.
On October 22, you blogged that there was a possibility it could hit the East Coast. How did you know that?
There are a few rather reliable global models. They’re models that run all the time, all year long, so they don’t focus on any one storm. They run for the entire globe, not just for North America. There are two types of runs these models can be configured to do. One is called a deterministic run, and that’s where you get one forecast scenario. Then the other mode, and I think this is much more useful, especially at longer ranges where things become much more uncertain, is ensemble—where 20 or 40 or 50 runs can be done. They are not run at as high a resolution as the deterministic run, otherwise it would take forever, but it’s still incredibly helpful to look at 20 runs.
Because you have variation? Do the ensemble runs include different winds, currents, and temperatures?
You can tweak all sorts of things to initialize the various ensemble members: the initial conditions, the inner workings of the model itself, etc. The idea is to account for observational error, model error, and other sources of uncertainty. So you come up with 20-plus different ways to initialize the model and then let it run out in time. And then, given the very realistic spread of options, 15 of those ensemble members all recurve the storm back to the west when it reaches the East Coast, and only five of them take it northeast. That certainly has some information content. And then, one run after the next, you can watch those. If all of the ensemble members start taking the same track, it doesn’t necessarily make them right, but it does mean it’s more likely to be right. You have much more confidence forecasting a track if the model guidance is in good agreement. If it’s a 50/50 split, that’s a tough call.
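The ensemble idea described above can be sketched in a toy form: perturb the initial state, run many cheap simulations, and read the spread of outcomes as information about uncertainty. This is a deliberately crude two-dimensional drift, not a real weather model, and every parameter below is made up:

```python
# Toy ensemble forecast: many runs of a simple stochastic drift model.
# The Gaussian perturbations stand in for observational and model error;
# the steering vector and noise level are hypothetical.

import random

def run_member(initial_position, steering, steps, noise, rng):
    """Advance one ensemble member from a perturbed start to its endpoint."""
    x, y = initial_position
    for _ in range(steps):
        dx, dy = steering
        x += dx + rng.gauss(0, noise)
        y += dy + rng.gauss(0, noise)
    return x, y

rng = random.Random(42)  # fixed seed so the sketch is reproducible
members = [run_member((0.0, 0.0), steering=(-0.5, 1.0),
                      steps=50, noise=0.8, rng=rng)
           for _ in range(40)]

# Count how many members end up west of the starting longitude
# (the analogue of "recurving back to the west").
west = sum(1 for x, _ in members if x < 0)
print(f"{west}/40 members recurve west")
```

With a strong westward steering term, the members agree and the “forecast” is confident; shrink the steering toward zero relative to the noise and the ensemble splits, which is exactly the tough 50/50 call described above.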