Most systems thinkers are familiar with the parable of the blind men and the elephant.
There are two ways to go about building a model.
- Plan A proceeds slowly. You build small, simple, aggregate components, testing each thoroughly before moving on.
- Plan B builds a rough model spanning the large scope that you think encompasses the problem, then incrementally improves the solution.
Ideally, both approaches converge to the same point.
Plan B is attractive, for several reasons. It helps you to explore a wide range of ideas. It gives a satisfying illusion of rapid progress. And, most importantly, it pleases stakeholders, who typically have a voracious appetite for detail and a limited appreciation of dynamics.
The trouble is, Plan B does not really exist. When you build a lot of structure quickly, the sacrifice you have to make is ignoring lots of potential interactions, consistency checks, and other relationships between components. You’re creating a large backlog of undiscovered rework, which the extensive SD literature tells us is fatal. So, you’re really on Path C, which leads to disaster: a large, incomprehensible, low-quality model.
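The rework backlog dynamic can be sketched as a single stock and flow. This is a deliberately minimal illustration with made-up parameters, not the full rework-cycle models from the SD literature:

```python
# Minimal sketch of the undiscovered-rework backlog that Plan B creates.
# The structure and all parameter values are illustrative assumptions,
# not the published SD rework models.

def rework_backlog(weeks=52, work_rate=10.0, error_fraction=0.3,
                   discovery_time=8.0, dt=1.0):
    """Track hidden defects: errors are injected as work proceeds fast,
    but only surface slowly (after ~discovery_time weeks on average)."""
    undiscovered = 0.0
    history = []
    for _ in range(int(weeks / dt)):
        errors_in = work_rate * error_fraction        # new hidden defects
        errors_found = undiscovered / discovery_time  # defects surfacing
        undiscovered += (errors_in - errors_found) * dt
        history.append(undiscovered)
    return history
```

The backlog climbs toward `work_rate * error_fraction * discovery_time`: the faster you build and the slower defects surface, the bigger the pile of hidden rework you're sitting on.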
In addition, you rarely have as much time as you think you do. When your work gets cut short, only Path A gives you an end product that you can be proud of.
So, resist pressures to include every detail. Embrace elegant simplicity and rich feedback. Check your units regularly, test often, and “always be done” (as Jim Hines puts it). Your life will be easier, and you’ll solve more problems in the long run.
I’ve been working with pharma brand tracking data, used to calibrate a part of an integrated model of prescriptions in a disease class. Understanding docs’ perceptions of drugs is pretty important, because it’s the major driver of rx. Drug companies spend a lot of money on this data; vendors collect it by conducting quarterly interviews with doctors in a variety of specialties.
Unfortunately, most of the data is poorly targeted for dynamic modeling. It seems to be collected to track and guide ad messaging, but the resulting turbulence prevents drawing any long-term conclusions from the data, which invites reactive decision making. Here’s how to minimize strategic information content:
- Ask a zillion questions. Be sure that interviewees have thorough decision fatigue by the time you get to anything important.
- Ask numerical questions that require recall of facts no one can remember (how many patients did you treat with X in the last 3 months?).
- Change the questions as often as possible, to ensure that you never revisit the same topic twice. (Consistency is so 2015.)
- Don’t document those changes.
- Avoid cardinal scales. Use vague nominal categories wherever possible. Don’t waste time documenting those categories.
- Keep the sample small, but report results in lots of segments.
- Confidence bounds? Bah! Never show weakness.
- Archive the data in PowerPoint.
On the other hand, please don’t! A few consistent, well-quantified questions are pure gold if you want to untangle causality that plays out over more than a quarter.
As a few people nearly guessed, the left side is “things a linear system can do” and the right side is “(additional) things a nonlinear system can do.”
On the left:
- decaying oscillation
- exponential decay
- simple accumulation
- exponential growth
- 2nd order goal seeking with damped oscillation
On the right:
- S-shaped growth
- chaos (as in the Lorenz model)
- punctuated equilibrium
- a limit cycle
- another limit cycle (like a predator-prey system)
- overshoot & collapse
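One left/right contrast is easy to see in a few lines of code: exponential growth is within reach of a linear system, but S-shaped growth requires a nonlinearity. The parameters here (r, K) are arbitrary illustrative choices:

```python
# Linear vs. nonlinear behavior: exponential growth (linear system) vs.
# S-shaped logistic growth (nonlinear). Parameters are illustrative.

def exponential_growth(x0=1.0, r=0.1, steps=100, dt=1.0):
    x, out = x0, []
    for _ in range(steps):
        x += r * x * dt                  # linear: dx/dt = r*x
        out.append(x)
    return out

def logistic_growth(x0=1.0, r=0.1, K=100.0, steps=100, dt=1.0):
    x, out = x0, []
    for _ in range(steps):
        x += r * x * (1 - x / K) * dt    # nonlinear term caps growth at K
        out.append(x)
    return out
```

The linear trajectory grows without bound; the nonlinear one bends over and saturates near the carrying capacity K, which no linear system can do.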
Bongard problems test visual pattern recognition, but there’s no reason to be strict about that. Here’s a slightly nontraditional Bongard problem:
The six on the left conform to a pattern or rule, and your task is to discover it. As an aid, the six boxes on the right do not conform to the same pattern. They might conform to a different pattern, or simply reflect the negation of the rule on the left. It’s possible that more than one rule discriminates between the sets, but the one that I have in mind is not strictly visual (that’s a hint).
The NY Times has a terrific obituary of economist Kenneth Arrow, who died yesterday at age 95. It’s a great read, from the discussion of the Impossibility Theorem and General Equilibrium to the personal anecdote at the end.
Paul Romer (of endogenous growth fame) has a new, scathing critique of macroeconomics.
For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as “tight monetary policy can cause a recession.” Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes. A parallel with string theory from physics hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.
Notice the Kuhnian finish: “a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.” This is one of the key features of Sterman & Wittenberg’s model of Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution:
The focal point of the model is a construct called “confidence.” Confidence captures the basic beliefs of practitioners regarding the epistemological status of their paradigm—is it seen as a provisional model or revealed truth? Encompassing logical, cultural, and emotional factors, confidence influences how anomalies are perceived, how practitioners allocate research effort to different activities (puzzle solving versus anomaly resolution, for example), and recruitment to and defection from the paradigm. …. Confidence rises when puzzle-solving progress is high and when anomalies are low. The impact of anomalies and progress is mediated by the level of confidence itself. Extreme levels of confidence hinder rapid changes in confidence because practitioners, utterly certain of the truth, dismiss any evidence contrary to their beliefs. ….
The external factors affecting confidence encompass the way in which practitioners in one paradigm view the accomplishments and claims of other paradigms against which they may be competing. We distinguish between the dominant paradigm, defined as the school of thought that has set the norms of inquiry and commands the allegiance of the most practitioners, and alternative paradigms, the upstart contenders. The confidence of practitioners in a new paradigm tends to increase if its anomalies are less than those of the dominant paradigm, or if it has greater explanatory power, as measured by cumulative solved puzzles. Confidence tends to decrease if the dominant paradigm has fewer anomalies or more solved puzzles. Practitioners in alternative paradigms assess their paradigms against one another as well as against the dominant paradigm. Confidence in an alternative paradigm tends to decrease (increase) if it has more (fewer) anomalies or fewer (more) solved puzzles than the most successful of its competitors.
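The core confidence feedback in the excerpt can be rendered as a toy update rule. To be clear, this is my own simplified formulation, not Sterman & Wittenberg’s actual equations; the `gain` parameter and the logistic “openness” term are assumptions:

```python
# Toy sketch of the confidence dynamic described above (my simplification,
# not the actual Sterman & Wittenberg model).

def update_confidence(confidence, progress, anomalies, gain=0.5, dt=0.1):
    """Confidence (0..1) rises with puzzle-solving progress and falls with
    anomalies. The openness term c*(1-c) makes extreme confidence
    self-sealing: the utterly certain barely respond to contrary evidence."""
    openness = confidence * (1 - confidence)
    dc = gain * openness * (progress - anomalies)
    return min(1.0, max(0.0, confidence + dc * dt))
```

Note the mediation effect: at confidence near 0 or 1, the same flow of anomalies moves beliefs far less than it does at moderate confidence.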
In spite of its serious content, Romer’s paper is really quite fun, particularly if you get a little Schadenfreude from watching Real Business Cycles and Dynamic Stochastic General Equilibrium take a beating:
To allow for the possibility that monetary policy could matter, empirical DSGE models put sticky-price lipstick on this RBC pig.
But let me not indulge too much in hubris. Every field is subject to the same dynamics, and could benefit from Romer’s closing advice.
A norm that places an authority above criticism helps people cooperate as members of a belief field that pursues political, moral, or religious objectives. As Jonathan Haidt (2012) observes, this type of norm had survival value because it helped members of one group mount a coordinated defense when they were attacked by another group. It is supported by two innate moral senses, one that encourages us to defer to authority, another which compels self-sacrifice to defend the purity of the sacred.
Science, and all the other research fields spawned by the enlightenment, survive by “turning the dial to zero” on these innate moral senses. Members cultivate the conviction that nothing is sacred and that authority should always be challenged. In this sense, Voltaire is more important to the intellectual foundation of the research fields of the enlightenment than Descartes or Newton.
Trump pledges 4%/yr economic growth (but says his economists don’t want him to). His economists are right – political tinkering with growth is a fantasy:
The growth rate of real per capita GDP in the US, and all leading industrial nations, has been nearly constant since the industrial revolution, at about 2% per year. Over that time, marginal tax rates, infrastructure investments and a host of other policies have varied dramatically, without causing the slightest blip.
On the other hand, there are ways you can screw up, like having a war or revolution, or failing to provide rule of law and functioning markets. The key is to preserve the conditions that allow the engine of growth – innovation – to function. Trump seems utterly clueless about innovation. His view of the economy is zero-sum: that value is something you extract from your suppliers and customers, not something you create. That view, plus an affinity for authoritarianism and conflict and neglect of the Constitution, bodes ill for a Trump economy.
My posting rate unintentionally fell off a cliff a couple years back. I got busy working on Ventity, and one thing led to another …
Anyhow, I’ve migrated the site to a new host, and merged my Model Library into the content. I’m working on some substantive posts – it’s a good opportunity to reflect on new developments.
Stay tuned …
I’ve just acquired a pair of 18″ Dell XPS portable desktop tablets. Each is one slick piece of hardware that makes my iPad seem about as sexy as a beer coaster.
They came with Win8 installed. Now I know why everyone hates it. It makes a good first impression with pretty colors and a simple layout. But after a few minutes, you wonder, where’s all my stuff? There’s no obvious way to run a desktop application, so you end up scouring the web for ways to resurrect the Start menu.
It’s bizarre that Microsoft seems to have forgotten the dynamics that made it a powerhouse in the first place. It’s basically this:
Software is a big nest of positive feedbacks, producing winner-take-all behavior. A few key loops are above. The bottom pair is the classic Bass diffusion model – reinforcing feedback from word of mouth, and balancing feedback from saturation (running out of potential customers). The top loop is an aspect of complementary infrastructure – the more users you have on your platform, the more attractive it is to build apps for it; the more apps there are, the more users you get.
There are lots of similar loops involving accumulation of knowledge, standards, etc. More importantly, this is not a one-player system; there are multiple platforms competing for users, each with its own reinforcing loops. That makes this a success-to-the-successful situation. Microsoft gained huge advantage from these reinforcing loops early in the PC game. Being the first to acquire a huge base of users and applications carried it through many situations in which its tech was not the most exciting thing out there.
So, if you’re Microsoft, and Apple throws you a curve ball by launching a new, wildly successful platform, what should you do? It seems to me that the first imperative should be to preserve the advantages conferred by your gigantic user and application base.
Win8 does exactly the opposite of that:
- Hiding the Start menu means that users have to struggle to find their familiar stuff, effectively chucking out a vast resource, in favor of new apps that are slicker, but pathetically few in number.
- That, plus other decisions, enrages committed users and causes them to consider switching platforms, when a smoother transition would have kept them comfortably loyal.
This strategy seems totally bonkers.