Where are the dynamic project managers?

Project management has been one of the most productive and successful application areas of system dynamics. And yet, when I recently surveyed project management tools and advice, I couldn’t find a hint of SD insights into project dynamics. Lists of reasons for project failure almost entirely neglect endogenous explanations.

Nothing about rework, late change orders, the design/implementation balance, schedule pressure effects on quality and productivity, overtime, burnout and turnover, Brooks’ Law, multiphase resource allocation, firefighting, or tipping points.
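
To make the first of these concrete, here’s a minimal sketch of the classic rework cycle, with purely illustrative parameters (nothing here is calibrated to any real project): schedule pressure speeds up work but degrades quality, and defective tasks resurface as rework after a discovery delay.

```python
# Minimal rework-cycle sketch; all parameters are illustrative, not calibrated.
dt = 0.25             # timestep (weeks)
work_to_do = 100.0    # tasks remaining
undiscovered = 0.0    # latent defective tasks
work_done = 0.0       # tasks completed correctly
workforce = 5.0       # people
normal_prod = 1.0     # tasks/person/week
deadline = 30.0       # weeks
discovery_time = 8.0  # average weeks for a defect to surface

t = 0.0
while work_to_do + undiscovered > 0.5 and t < 200:
    remaining = work_to_do + undiscovered
    time_left = max(deadline - t, 1.0)
    # Schedule pressure: perceived backlog vs. capacity to the deadline.
    pressure = min(remaining / (workforce * normal_prod * time_left), 2.0)
    productivity = normal_prod * pressure                       # haste...
    quality = max(0.95 - 0.25 * max(pressure - 1.0, 0.0), 0.5)  # ...makes waste
    completion = min(workforce * productivity, work_to_do / dt)
    discovery = undiscovered / discovery_time
    work_to_do += (discovery - completion) * dt
    undiscovered += (completion * (1.0 - quality) - discovery) * dt
    work_done += completion * quality * dt
    t += dt

print(f"finished at t = {t:.1f} weeks vs. a {deadline:.0f}-week deadline")
```

The point of the sketch is the loop structure: delay raises pressure, pressure raises the error rate, and errors come back as more delay.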

I think there’s an insight and a puzzle here. The insight is that mismanaged dynamics and misperceptions of feedback aren’t the only way to screw up. There are exogenous and single-cause failure modes, like hiring people with the wrong skill set for a job, building something no one wants, or just failing to keep in touch with your team.

However, I’m pretty sure the dominant cause of execution failure is dynamic. Large projects are like sleeping monsters: they are full of positive feedback loops that, when triggered, cause escalating delays and overruns, perhaps explaining the heavy-tailed distribution of massive project failures. So the puzzle is, how can there be so little mention of, and so few tools for, managing the internal causes of project success?
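
As a toy illustration of that tipping-point behavior (made-up parameters, not data), here’s a Monte Carlo in which completed work spawns rework and the rework burden grows with delay. Projects below a threshold error rate finish near the median; those above it effectively never finish, which is what fattens the right tail.

```python
import random, statistics

def project_duration(error_rate, cap=500):
    """One project: rework feeds back on itself and grows with delay."""
    work, t = 100.0, 0
    while work > 0.5 and t < cap:
        done = 5.0                                    # tasks finished/week
        rework = done * error_rate * (1 + 0.02 * t)   # burden grows with delay
        work += rework - done
        t += 1
    return t

random.seed(1)
durations = sorted(project_duration(random.uniform(0.05, 0.5))
                   for _ in range(1000))
print("median:", statistics.median(durations),
      " 95th pct:", durations[949], " max:", max(durations))
```

Even this crude setup produces the signature pattern: most runs land near the median, while a minority cross the tipping point and blow out to the cap.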

Not coincidentally, this problem is one of the major reasons we built Ventity. We’re currently working on project models that are entirely data-driven, so you can switch from building a house to building a power plant just by changing some tables of input. We think this will be the missing link between data-oriented tools that manage projects statically in exquisite detail and dynamic models that realistically describe projects but have traditionally been hard to build, calibrate, and reuse.
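
To be concrete about what “data-driven” means here (the table below is a hypothetical illustration, not Ventity’s actual input format): the model structure is generic, and rows of data instantiate it, so switching projects means editing the table rather than the model.

```python
# Hypothetical table-driven setup (illustrative only; not Ventity's format).
# The same generic phase structure is instantiated from rows of data, so
# moving from a house to a power plant means swapping tables, not code.
phases = [
    # (phase,       effort_tasks, staff, error_rate)
    ("design",      40.0,         2.0,   0.15),
    ("foundation",  30.0,         4.0,   0.10),
    ("framing",     80.0,         6.0,   0.12),
]

def run_phase(effort, staff, error_rate, productivity=1.0):
    """Generic phase: completed tasks spawn rework at error_rate."""
    t = 0
    while effort > 0.5:
        done = staff * productivity
        effort += done * error_rate - done  # rework offsets some progress
        t += 1
    return t

total = sum(run_phase(e, s, r) for _, e, s, r in phases)
print(f"simulated duration: {total} weeks")
```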

Comments

  1. You may well know this, but one example of data-driven SD project model setup is the work led by Ken Cooper and Gregory Lee on a change impact analysis system for Fluor, which they published and for which they won an SDS Applications Award and were named Edelman laureates. It was implemented via a toolkit that was very fast to recalibrate, reuse, and work with in real time.

    You’re right that the SD project management insights are not widely recognized or applied. In the Fluor case, I think this benefitted situationally from the projects being engineering ones executed by an experienced contractor, and personally from Ken’s great experience in and passion for the SD approach, backed by Greg’s highly involved executive sponsorship and view of projects across the firm.

    Best of luck with your work on this. It would be very nice if an SD project modeling toolkit could be front-ended with something like an R Shiny dashboard.

  2. I am familiar with the players and background of the Fluor work, if not the actual work. But let’s not forget the abundant counterexamples. Ken’s long-time colleagues were contemporaneously billing massive amounts for SD project management of the F-35 program. Now, you can find a lot of interesting and powerful things to say about the F-35… but try to find one extolling how well the development project was run!

    Of course, it may have been much worse without all the SD modeling. But still, if you were seeking a project management role emphasizing an SD approach, I wouldn’t recommend using this as a case study.

    While I believe every one of the factors you cite – rework, undiscovered rework, delays, learning curves, Brooks’ Law, fatigue effects on productivity, and all the others – is real and logical, that doesn’t mean a model will correctly represent and weight them all. Understanding that there is such a thing as friction is a necessary but far from sufficient condition for advancing the state of the art of race car tire design.

    SD modeling, in general, suffers from an ad hoc development discipline in which each modeler builds from scratch. There’s no real repository of well-tested code snippets to employ (no, the Pegasus Archetypes don’t count; too simplistic). Calibration is also a HUGE problem for large models. Huge. Models are dramatically over-fitted to suspect data, and auto-calibration code seeks any answer that matches a historical data set, whether or not it is verifiable in terms of logical structure.

    1. I think the calibration problem is as much about the data as about the algorithms. In a big integrated model, data necessarily comes from a variety of sources that vary widely in quality and conflict with one another to some extent. Blind fitting doesn’t make sense unless you have sensible priors and other constraints to prevent illogical choices, but that generally entails a lot of manual labor (see the toy sketch below). Of course, maintaining a big, detailed work breakdown structure and updating it to reflect the dynamics a PM tool omits is also a lot of manual labor.

      I don’t know if the F-35 was afflicted, but one thing we’ve seen in aerospace is that managers may not want transparency. They want to hide the fact that their component is behind schedule for as long as possible, until someone else cracks. Then there’s a cascade of simultaneous admissions that, yes, pretty much everything is behind schedule. I think it was Daniel Kim who documented that behavior in the auto industry in his thesis. The result is delay and obfuscation, so you can’t get, or can’t trust, the data you need to calibrate.
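
      For what it’s worth, here is a toy sketch of penalized calibration on synthetic data, with a made-up prior and weight (both purely illustrative): the prior term is the knob that trades off trusting the data against trusting prior knowledge.

      ```python
      import random

      random.seed(0)

      def progress(error_rate, weeks=30, staff=5.0):
          """Simulated cumulative progress for a given rework fraction."""
          work, series = 100.0, []
          for _ in range(weeks):
              work = max(work + staff * error_rate - staff, 0.0)
              series.append(100.0 - work)
          return series

      # Synthetic "observed" data: true error rate 0.20 plus noise, a
      # stand-in for the suspect data that real calibrations face.
      observed = [y + random.gauss(0, 5) for y in progress(0.20)]

      def penalized_loss(e, prior_mean=0.25, prior_weight=500.0):
          fit = sum((m - o) ** 2 for m, o in zip(progress(e), observed))
          prior = prior_weight * (e - prior_mean) ** 2  # penalize implausibility
          return fit + prior

      # Crude grid search; a real calibration would use a proper optimizer.
      best_loss, best_e = min((penalized_loss(k / 100), k / 100)
                              for k in range(1, 60))
      print(f"estimated error rate: {best_e:.2f} (true value 0.20)")
      ```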
