A number of developments are making model quality control increasingly crucial.
- Models are playing a wider role in policy debates, and efforts like the Climate CoLab are making them accessible to wide audiences for interactive use.
- The use of automated stochastic optimization and exploratory modeling and analysis (EMA) is likely to take models into parts of their parameter spaces that the modeler herself has not explored (a minimal sketch of the idea follows this list).
- Standards like SMILE/XMILE will make models and model components more reusable and shareable.
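To make the EMA point concrete, here is a minimal Monte Carlo sketch in Python: a toy stock-and-flow model is sampled well outside the ranges a modeler would typically test, and runs that violate basic sanity checks are flagged. Everything here (the model structure, parameter names, and thresholds) is invented for illustration; real EMA tools are far more capable.

```python
import random

def project_emissions(growth_rate, decay_time, horizon=50):
    """Toy stock-and-flow model: emissions grow, a sink drains the stock.

    Purely illustrative -- the names and structure are invented for this
    sketch, not taken from any published model.
    """
    stock, emissions = 100.0, 10.0
    trajectory = []
    for _ in range(horizon):
        stock += emissions - stock / decay_time   # inflow minus first-order outflow
        emissions *= 1.0 + growth_rate
        trajectory.append(stock)
    return trajectory

# Explore parameter space well beyond the modeler's "comfortable" ranges,
# then flag runs that violate simple sanity checks.
random.seed(1)
suspect_runs = []
for _ in range(1000):
    growth = random.uniform(-0.1, 0.2)     # includes extreme decline and growth
    decay = random.uniform(0.5, 200.0)     # includes implausibly fast and slow sinks
    traj = project_emissions(growth, decay)
    if min(traj) < 0 or max(traj) > 1e6:   # negative stock or runaway blowup
        suspect_runs.append((growth, decay))

print(f"{len(suspect_runs)} of 1000 runs violated basic sanity checks")
```

Each flagged parameter pair points at a region where the model's formulation breaks down, exactly the territory an automated optimizer or EMA sweep may wander into.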
I think this could all come to a bad end, in which priesthoods are paid to develop competing models that are incomprehensible to the general public, thus reducing modeling to a sophisticated form of propaganda.
Fortunately, some elements of an antidote to this dystopia are at hand, including documentation standards, and tools and languages for expressing Reality Checks on model behavior. But I think we need a lot more. For example:
- Standards could include metadata, so that model components are self-documenting in ways that let users easily discover their limitations (a hypothetical metadata sketch follows this list).
- EMA tools could be directed toward discovering model problems, as in the sampling sketch above, before policy analysis begins.
- Tools that present models online could expose their innards as well as results.
- Languages are needed for meta-reality checks that describe and test higher-level assumptions, like perfect foresight (or the lack thereof); one possible check is sketched below.
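As a sketch of what self-documenting metadata might look like, here is a hypothetical Python wrapper that records a component's units, validated parameter ranges, and known limitations, and warns when a user drives it outside its tested range. The field names and the carbon_cycle_v2 component are assumptions for illustration, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class ComponentMetadata:
    """Hypothetical self-documenting wrapper for a model component."""
    name: str
    units: dict              # parameter -> unit string
    validated_ranges: dict   # parameter -> (low, high) the author actually tested
    known_limitations: list = field(default_factory=list)

    def warn_if_untested(self, **params):
        """Warn when a parameter leaves the range the author validated."""
        for key, value in params.items():
            low, high = self.validated_ranges.get(key, (float("-inf"), float("inf")))
            if not low <= value <= high:
                print(f"WARNING: {self.name}: {key}={value} outside "
                      f"validated range [{low}, {high}]")

carbon_cycle = ComponentMetadata(
    name="carbon_cycle_v2",
    units={"airborne_fraction": "dimensionless"},
    validated_ranges={"airborne_fraction": (0.3, 0.7)},
    known_limitations=["ignores permafrost feedback"],
)
carbon_cycle.warn_if_untested(airborne_fraction=0.95)  # triggers the warning
```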
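And here is one way a meta-reality check on foresight might work, again as a hedged sketch: perturb an exogenous input after some time T and assert that the model's behavior before T is unchanged. A model with perfect foresight would react to the shock before it happens. The toy inventory model and all names here are hypothetical.

```python
def simulate(demand, horizon):
    """Toy inventory model whose ordering policy sees only past demand.

    Illustrative only; a real check would wrap an actual simulation run.
    """
    inventory, history = 50.0, []
    for t in range(horizon):
        order = demand[t - 1] if t > 0 else 10.0   # reacts, never anticipates
        inventory += order - demand[t]
        history.append(inventory)
    return history

def check_no_perfect_foresight(model, horizon=20, shock_time=10):
    """Meta-reality check: behavior before a shock must not depend on it."""
    base = [10.0] * horizon
    shocked = base[:shock_time] + [30.0] * (horizon - shock_time)
    run_a = model(base, horizon)
    run_b = model(shocked, horizon)
    assert run_a[:shock_time] == run_b[:shock_time], \
        "model anticipates future inputs (perfect foresight detected)"
    print("no-foresight check passed")

check_no_perfect_foresight(simulate)
```

The same pattern inverts cleanly: a model that is *supposed* to embody perfect foresight should fail this check, so the assumption becomes something stated and testable rather than buried in equations.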
Perhaps most importantly, model quality needs to become a pervasive part of the culture of model building and consumption in all disciplines.