Who moved my eigenvalues?

Change management is one of the great challenges in modeling projects. I don’t mean this in the usual sense of getting people to change on the basis of model results. That’s always a challenge, but there’s another kind of change to manage.

Over the course of a project, the numerical results and maybe even the policy conclusions given by a model are going to change. This is how we learn from models. If the results don’t change, either we knew the answer from the outset (a perception that should raise lots of red flags), or the model isn’t improving.

The problem is that model consumers are likely to get anchored to the preliminary results of the work, and resist change when it arrives later in the form of graphs that look different or insights that contradict early, tentative conclusions.

Fortunately, there are remedies:

  • Start with the assumption that the model and the data are wrong, and to some extent will always remain so.
  • Recognize that the modeler is not the fount of all wisdom.
  • Emphasize extreme conditions tests and reality checks throughout the modeling process, not just at the end, so bugs don’t get baked in while insights remain hidden.
  • Do lots of sensitivity analysis to determine the circumstances under which insights are valid (see the sketch after this list).
  • Keep the model simpler than you think it needs to be, so that you have some hope of understanding it, and time for reflecting on behavior and communicating results.
  • Involve a broad team of model consumers, and set appropriate expectations about what the model will be and do from the start.
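
As a rough illustration of the sensitivity-analysis and extreme-conditions points above, here is a minimal sketch in Python. The toy stock-adjustment model, the parameter ranges, and the tentative "insight" being checked are all hypothetical stand-ins, not taken from any particular project; the point is only the workflow of sweeping parameters and spot-checking extremes early and often.

```python
import numpy as np

def simulate(adjustment_time, goal, initial_stock=0.0, dt=0.25, horizon=40.0):
    """Toy first-order stock-adjustment model (hypothetical stand-in).

    The stock closes the gap to `goal` over `adjustment_time`; we return
    the whole trajectory so behavior can be inspected, not just the endpoint.
    """
    steps = int(horizon / dt)
    stock = np.empty(steps + 1)
    stock[0] = initial_stock
    for t in range(steps):
        inflow = (goal - stock[t]) / adjustment_time
        stock[t + 1] = stock[t] + dt * inflow
    return stock

# Sensitivity sweep: does the tentative insight ("the stock reaches 90% of
# the goal within the horizon") survive across a plausible parameter range?
rng = np.random.default_rng(seed=1)
n_runs = 1000
adjustment_times = rng.uniform(2.0, 20.0, n_runs)   # assumed plausible range
goals = rng.uniform(50.0, 150.0, n_runs)            # assumed plausible range

holds = 0
for at, g in zip(adjustment_times, goals):
    traj = simulate(at, g)
    if traj[-1] >= 0.9 * g:
        holds += 1
print(f"Insight holds in {holds}/{n_runs} runs ({holds / n_runs:.0%}).")

# Extreme-conditions spot check: with an absurdly long adjustment time the
# stock should barely move over the horizon; if it does, a bug is baked in.
extreme = simulate(adjustment_time=1e6, goal=100.0)
assert extreme[-1] < 1.0, "stock moved far more than the structure allows"
```

Reporting the fraction of runs in which the conclusion holds, rather than a single base-case trajectory, is one way to keep consumers from anchoring on a preliminary picture that later changes.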