Some thoughts I’ve been collecting, primarily oriented toward system dynamics modeling in Vensim, but relevant to any modeling endeavor:
- Know why you’re building the model.
- If you’re targeting a presentation or paper, write the skeleton first, so you know how the model will fill in the answers as you go.
- Organize your data first.
- No data? No problem. But surely you have some reference mode in mind, and some constraints on behavior, at least in extreme conditions.
- In Vensim, dump it all into a spreadsheet, database, or text file and import it into a data model, using the Model>Import data… feature, GET XLS DATA functions, or ODBC (see the sketch below).
- Don’t put data in lookups (table functions) unless you must for some technical reason; they’re a hassle to edit and update, and lousy at distinguishing real data points from interpolation.
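For the GET XLS DATA route, a data equation might look roughly like this (the file name, sheet layout, variable name, and units are hypothetical):

  Historical Sales = GET XLS DATA('sales.xlsx', 'Data', '1', 'B2')
    ~ widgets/Month
    ~ Observed sales; time values run across row 1 and the data starts in cell B2.

Typically this lives in a small, separate data model whose run produces a dataset that the main model then loads as driving data.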
- Keep a lab notebook; an open word processor window while you work does the job nicely. Write down hypotheses before you run, so that you won’t rationalize surprises.
- Keep your models organized, and keep versions as you go.
- The simple way: keep all the files associated with a model in one directory. Save each major model improvement with a new number (mymodel v16.mdl). Periodically copy the whole directory and start fresh, so you don’t have too many models and ancillary files in one place.
- The sophisticated way: use a version control system like Subversion.
- Enter and check units as you go. Units checking prevents a lot of errors. Really.
- Document as you go, at least for nontrivial equations. Otherwise you’ll hate yourself in the morning, when you have to do it all at once.
- Use transparent variable names with a clear directional interpretation (e.g., “delivery delay” rather than “delivery performance,” so there’s no doubt what an increase means).
- Keep your diagrams organized.
- Clean up spaghetti from time to time.
- Use color and styles to identify key constants, driving data, etc.
- Build graphs to make comparison of your outputs with data and expectations easy.
- Test new structures in submodels, not in their full-feedback context. That way you can easily perform equilibrium and extreme conditions tests without the overhead of a full model.
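A typical way to do this is to copy the new structure into a scratch model and drive it with exogenous test inputs instead of its usual feedback; a minimal sketch, with all names hypothetical:

  Inventory = INTEG(Production - Shipments, Initial Inventory)
    ~ widgets
    ~ The stock under test, isolated from the full model.
  Production = Test Production Input
    ~ widgets/Month
    ~ Exogenous driver standing in for the feedback that normally sets production.

With the loops cut this way, equilibrium, step, and extreme-conditions tests take seconds.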
- Look at the output of every variable from time to time. If you can’t do that, your model is probably too big.
- Keep a “todo” list in your model, or in your lab notebook. Include puzzles and anomalies to be solved later.
- Run frequently. Build large structures incrementally so that what you have is always tested and understood.
- Avoid nested IF…THEN logic and other confusing, discrete structures where possible.
- Break up long equations into digestible chunks. If it won’t fit in the equation editor’s text field, no human can understand it.
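As a sketch of both points (names invented for illustration), a capacity limit is usually clearer as a MIN with a named intermediate term than as nested IF THEN ELSE logic:

  Shipments = MIN(Desired Shipments, Max Feasible Shipments)
    ~ widgets/Month
    ~ Ship what’s wanted, up to what inventory allows.
  Max Feasible Shipments = Inventory/Minimum Shipping Time
    ~ widgets/Month
    ~ Intermediate term pulled out of a longer expression for readability.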
- Never embed nontrivial constants – especially those with implicit units – in equations. Pull them out as separate constants.
- Bad: Price = Variable_Cost*1.234
- Better: Price = Variable_Cost*(1+Markup)
- OK: Time_Constant = 1/Fractional_Rate
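Spelled out in full Vensim form, the “Better” version might look like this (the 0.234 value and the names are placeholders), with the markup pulled out as a documented, testable constant:

  Markup = 0.234
    ~ Dmnl
    ~ Fractional markup over variable cost; value is illustrative only.
  Price = Variable Cost*(1 + Markup)
    ~ $/widget
    ~ Unit price covers variable cost plus the markup.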
- Test those nontrivial constants for sensitivity.
- You can perform many tests manually, in SyntheSim, by wiggling sliders.
- Automate brute force tests of every parameter in your model using the optimizer.
- Test decision inputs with particular thoroughness.
- Try every control at its extremes, and in combination with others.
- Try perturbing your decisions and looking for improved performance, to determine whether your simulated agents are systematically biased or just too dumb.
- Test everything in extreme conditions. SyntheSim overrides are a handy way to do this: freeze a stock at its initial level, and voilà, you’ve broken all the feedback loops involving it.
- Build checksums for complex array operations (e.g., shares that sum to 1). Set limits on the variables concerned, so that you get a runtime warning if the checksum is violated.
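For example (subscript and variable names hypothetical), a checksum over subscripted shares can carry range limits in its units field, so any run in which it drifts from 1 triggers a warning:

  Share Checksum = SUM(Market Share[Segment!])
    ~ Dmnl [0.999,1.001]
    ~ Should always equal 1; the bounds produce a runtime warning if it doesn’t.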
- Write Reality Checks to watch your checksums and automate other kinds of tests. You’re likely to perform a test only once manually, but it’s easy to repeat if automated as an RC. Also, RC’s don’t slow down conventional model runs.
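A Reality Check pairs a Test Input with a Constraint; here’s a rough sketch, with the names and the particular test invented for illustration:

  Zero Workforce :TEST INPUT: Workforce = 0
  No Output Without Workers :THE CONDITION: Workforce = 0 :IMPLIES: Production = 0

Once written, such assertions can be re-verified any time without touching your normal simulation setup.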
- Rules were made to be broken.
Questions? Additions? Comment!