Hoisted from the comments, Miles Parker has a nice reflection on modeling in this video, Why Model Reality.
It might be subtitled “Better Lies,” a reference to modeling as the pursuit of better stories about the world that are never quite true (a variation on the famous Box quote, “All models are wrong, but some are useful”). A few nice points I picked out along the way:
- All thinking, even about the future, is retrospective.
- Big Data is Big Dumb, because we’re collecting more and more detail about a limited subset of reality, and thus suffer from sampling bias and “if your only tool is a hammer …” bias.
- A crucial component of a modeling approach is a “bullshit detector” – reality checks that identify problems at various levels on the ladder of inference.
- Model design is more than software engineering.
- Often the modeling process is a source of key insights, and you don’t even need to run the model.
- Modeling is a social process.
Coming back to the comment,
I think one of the greatest values of a model is that it can bring you to the point where you say “There isn’t any way to build a model within this methodology that is not self-contradicting. Therefore everyone in this room is contradicting themselves before they even open their mouths.”
I think that’s close to what Dana Meadows was talking about when she placed paradigms and transcendence of paradigms on the list of places to intervene in systems.
It reminds me of Gödel’s incompleteness theorems. With that as a model, I’d argue that one can construct fairly trivial models that aren’t self-contradictory. They might contradict a lot of things we think we know about the world, but by virtue of their limited expressiveness remain at least true to themselves.
Going back to the elasticity example, if I assert that oilConsumption = oilPrice^epsilon, there’s no internal contradiction as long as I use the same value of epsilon for each proposition I consider. I’m not even sure what an internal contradiction would look like in such a simple framework. However, I could come up with a long list of external consistency problems with the model: dimensional inconsistency, lack of dynamics, omission of unobserved structure, failure to conform to data ….
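For concreteness, here is a minimal sketch of that constant-elasticity toy model in code (the epsilon value and names are purely illustrative, not taken from any real model); the same list of external problems applies to it:

```python
# Minimal sketch of the constant-elasticity toy model discussed above.
# The epsilon value is purely illustrative. The relation is internally
# consistent as long as one epsilon is used for every proposition, but
# externally it is dimensionally inconsistent, static, and omits
# unobserved structure.

def oil_consumption(oil_price: float, epsilon: float = -0.3) -> float:
    """Constant-elasticity demand: consumption = price ** epsilon."""
    return oil_price ** epsilon

# One epsilon used throughout yields mutually consistent predictions:
for price in (50.0, 100.0, 150.0):
    print(f"price={price:6.1f}  consumption={oil_consumption(price):.4f}")
```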
In the same way, I would tend to argue that general equilibrium is an internally consistent modeling paradigm that just happens to have relatively little to do with reality, yet is sometimes useful. I suppose that Frank Ackerman might disagree with me, on the grounds that equilibria are not necessarily unique or stable, which could raise an internal contradiction by violating the premise of the modeling exercise (welfare maximization).
Once you step beyond models with algorithmically simple decision making (like CGE), the plot thickens. There’s Condorcet’s paradox and Arrow’s impossibility theorem, the indeterminacy of Arthur’s El Farol bar problem, and paradoxes of zero discount rates on welfare.
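As a toy illustration of the first of those (the ballots below are invented for the example, not drawn from any of the cited work), three voters with perfectly transitive rankings are enough to make pairwise majority rule cycle:

```python
# Toy Condorcet paradox: each voter's ranking is transitive, yet pairwise
# majority rule produces a cycle, so the "group preference" contradicts itself.

ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True: A beats B, B beats C, and C beats A.
```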
It’s not clear to me that all interesting models of phenomena that give rise to self-contradictions must themselves be self-contradictory, though. For example, I suspect that Sterman & Wittenberg’s model of Kuhnian scientific paradigm succession is internally consistent.
Maybe the challenge is that the universe is self-referential and full of paradoxes and irreconcilable paradigms. Therefore as soon as we attempt to formalize our understanding of such a mess, either with nontrivial models, or trivial models assisting complex arguments, we are dragged into the quagmire of self-contradiction.
Personally, I’m not looking for the cellular automaton that runs the universe. I’m just hoping for a little feedback control on things that might make life on earth a little better. Maybe that’s a paradoxical quest in itself.
Thanks Tom. I’m sure you share this experience — simply having one person who understands what you are trying to say is the best reward for putting effort into it. To riff on another famous quote, “I would have written an article, but I didn’t have the time.” But yes, I think this all comes — in our local part of space-time anyway — from Gödel. The most astonishing thing to my mind is how few scientists and software people actually have even a passing familiarity with this stuff. To me, that’s like calling yourself a Mechanical Engineer without having heard of the Second Law of Thermodynamics. Which explains why so many people working on the Semantic Web seem to be attempting to build perpetual motion machines. 🙂 Anyway, I wouldn’t be a bit surprised to find out that the subject isn’t even covered in a typical undergrad CS curriculum.
So yep, there is nothing at all keeping us from constructing internally consistent models. It is only when we actually want to do something with them that things fall apart. And there is — though I’m becoming more and more uncertain about this 😉 — little question that some simple models do match reality closely enough, often enough, that we can feel some comfort in them. But note that they are always about *very* simple things. If we throw a rock in the air, we can plot its trajectory. But if we throw a dozen pebbles up in the air together, that’s a different story. The weird thing is that we take the fact that we have an ok handle on this very small part as some kind of lazy proof that we can apply it to all things. You’ve also had the El Farol Koolaid so I don’t need to belabor that.
As my argument has been developing over the last couple of years, I’m actually coming to the point where I think I’d make a stronger argument than “all models are wrong, but …” The question is what do we mean when we say they are useful? What are we trying to use them for? I like the word “helpful” because it makes the point that we can no longer pretend that we don’t have an agenda with our models, and that the reason we think we are modeling is also completely contextualized. In other words, our intention — our heart — is actually what is driving the model, and that’s something we have to be aware of and really appreciate. As a scientist, I guess I should be embarrassed to admit such a normative point of view, except that I know I share it with many others. I think looking for a “little feedback control on things that might make life on earth a little better” is a very good goal indeed.
I’m not sure about the Sterman & Wittenberg argument — baby kept me up last night so I don’t trust myself to be open-minded about it, but again, more and more I am taking a hard look at any sort of march-of-progress style arguments. I wasn’t sure from my bleary-eyed reading of the abstract whether this was one of those or not. 😉
Regarding Wolfram, I guess it should be surprising that someone who is obviously so intelligent and creative can be so blind and simple-minded, but somehow it isn’t. It is clearly demonstrable that any such thing is impossible — for what it is worth, Nagarjuna made a devastating attack on that kind of nonsense in the third century — but what I don’t understand is why no one seems to get that.
cheers,
Miles
S/W is definitely not a ‘march of progress’ fan piece. They argue that survival in the marketplace is a very weak enforcer of quality,
Most important, however, competition does not serve to weed out the weak paradigms so the strong may grow. On the contrary, competition decimates the strong and weak alike—we found that intrinsic capability has but a weak effect on survival.
Oh, I forgot to clarify — though it is probably not necessary — what I meant by “…Therefore everyone in this room is contradicting themselves before they even open their mouths.” I am making a presumption there — that the very fact of being in that same room implies that the participants have already bought into a particular methodology for how to study systems, and that that methodology has implicit assumptions built in, e.g., “prices in the ‘real world’ are governed by elasticity.”
Actually that helps … more (obliquely) related to this topic tomorrow, if I have time.