I’ve been working with pharma brand tracking data, used to calibrate part of an integrated model of prescriptions in a disease class. Understanding doctors’ perceptions of drugs is important, because perception is the major driver of prescriptions. Drug companies spend a lot of money on this data; vendors collect it through quarterly interviews with doctors across a variety of specialties.
Unfortunately, most of the data is poorly targeted for dynamic modeling. It seems to be collected to track and guide ad messaging, but the resulting turbulence prevents drawing any long-term conclusions from the data, which in turn encourages reactive decision making. Here’s how to minimize strategic information content:
- Ask a zillion questions. Be sure that interviewees have thorough decision fatigue by the time you get to anything important.
- Ask numerical questions that require recall of facts no one can remember (how many patients did you treat with X in the last 3 months?).
- Change the questions as often as possible, to ensure that you never revisit the same topic twice. (Consistency is so 2015.)
- Don’t document those changes.
- Avoid cardinal scales. Use vague nominal categories wherever possible. Don’t waste time documenting those categories.
- Keep the sample small, but report results in lots of segments.
- Confidence bounds? Bah! Never show weakness.
- Archive the data in PowerPoint.
On the other hand, please don’t! A few consistent, well-quantified questions are pure gold if you want to untangle causality that plays out over more than a quarter.
As a few people nearly guessed, the left side is “things a linear system can do” and the right side is “(additional) things a nonlinear system can do.”
On the left:
- decaying oscillation
- exponential decay
- simple accumulation
- exponential growth
- 2nd order goal seeking with damped oscillation
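All of the left-side behaviors come from linear ordinary differential equations. As a sketch (the integrator and all parameter values are mine, not from the original figure), a minimal Euler simulation that generates each one:

```python
def simulate(deriv, state, dt=0.01, steps=1000):
    """Euler-integrate d(state)/dt = deriv(state); return trajectory."""
    traj = [list(state)]
    for _ in range(steps):
        rates = deriv(state)
        state = [s + dt * r for s, r in zip(state, rates)]
        traj.append(list(state))
    return traj

# exponential decay: dx/dt = -k*x
decay = simulate(lambda s: [-0.5 * s[0]], [1.0])

# exponential growth: dx/dt = +k*x
growth = simulate(lambda s: [0.5 * s[0]], [1.0])

# simple accumulation: dx/dt = constant inflow
accum = simulate(lambda s: [1.0], [0.0])

# 2nd order goal seeking with damped oscillation (zeta < 1):
#   x'' = -2*zeta*wn*x' - wn**2 * (x - goal)
wn, zeta, goal = 2.0, 0.2, 1.0
damped = simulate(
    lambda s: [s[1], -2 * zeta * wn * s[1] - wn ** 2 * (s[0] - goal)],
    [0.0, 0.0],
)
```

Decaying oscillation is the same second-order structure with goal = 0. Every run is just a different constant coefficient matrix, which is what puts them all on the linear side of the ledger.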
On the right:
Bongard problems test visual pattern recognition, but there’s no reason to be strict about that. Here’s a slightly nontraditional Bongard problem:
The six boxes on the left conform to a pattern or rule, and your task is to discover it. As an aid, the six boxes on the right do not conform to the same pattern. They might conform to a different pattern, or simply reflect the negation of the rule on the left. It’s possible that more than one rule discriminates between the sets, but the one that I have in mind is not strictly visual (that’s a hint).
The original problem was here.
The NY Times has a terrific obituary of economist Kenneth Arrow, who died yesterday at age 95. It’s a great read, from the discussion of the Impossibility Theorem and General Equilibrium to the personal anecdote at the end.
Paul Romer (of endogenous growth fame) has a new, scathing critique of macroeconomics.
For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as “tight monetary policy can cause a recession.” Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes. A parallel with string theory from physics hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.
Notice the Kuhnian finish: “a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.” This is one of the key features of Sterman & Wittenberg’s model of Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution:
The focal point of the model is a construct called “confidence.” Confidence captures the basic beliefs of practitioners regarding the epistemological status of their paradigm—is it seen as a provisional model or revealed truth? Encompassing logical, cultural, and emotional factors, confidence influences how anomalies are perceived, how practitioners allocate research effort to different activities (puzzle solving versus anomaly resolution, for example), and recruitment to and defection from the paradigm. …. Confidence rises when puzzle-solving progress is high and when anomalies are low. The impact of anomalies and progress is mediated by the level of confidence itself. Extreme levels of confidence hinder rapid changes in confidence because practitioners, utterly certain of the truth, dismiss any evidence contrary to their beliefs. ….
The external factors affecting confidence encompass the way in which practitioners in one paradigm view the accomplishments and claims of other paradigms against which they may be competing. We distinguish between the dominant paradigm, defined as the school of thought that has set the norms of inquiry and commands the allegiance of the most practitioners, and alternative paradigms, the upstart contenders. The confidence of practitioners in a new paradigm tends to increase if its anomalies are less than those of the dominant paradigm, or if it has greater explanatory power, as measured by cumulative solved puzzles. Confidence tends to decrease if the dominant paradigm has fewer anomalies or more solved puzzles. Practitioners in alternative paradigms assess their paradigms against one another as well as against the dominant paradigm. Confidence in an alternative paradigm tends to decrease (increase) if it has more (fewer) anomalies or fewer (more) solved puzzles than the most successful of its competitors.
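For intuition only, here’s a toy sketch of the confidence mechanism described above. This is not Sterman & Wittenberg’s published model; the functional form and parameters are invented to illustrate one feature: extreme confidence is self-insulating.

```python
def step_confidence(conf, progress, anomalies, dt=0.1):
    """One Euler step of a toy 'confidence' stock on [0, 1].

    Confidence rises with puzzle-solving progress and falls with
    anomalies.  The conf*(1-conf) factor makes the stock sluggish
    near its extremes: practitioners utterly certain of the truth
    (conf near 1) barely respond to contrary evidence.
    """
    openness = conf * (1.0 - conf)          # lowest at conf = 0 or 1
    return conf + dt * openness * (progress - anomalies)

# the same stream of pure anomalies hits two communities:
c_extreme, c_moderate = 0.98, 0.5
for _ in range(20):
    c_extreme = step_confidence(c_extreme, progress=0.0, anomalies=1.0)
    c_moderate = step_confidence(c_moderate, progress=0.0, anomalies=1.0)
# moderate confidence collapses; extreme confidence barely moves
```

After twenty identical shocks, the moderately confident community has largely abandoned its paradigm while the true believers have scarcely budged — the “dismiss any evidence contrary to their beliefs” effect in miniature.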
In spite of its serious content, Romer’s paper is really quite fun, particularly if you get a little Schadenfreude from watching Real Business Cycles and Dynamic Stochastic General Equilibrium take a beating:
To allow for the possibility that monetary policy could matter, empirical DSGE models put sticky-price lipstick on this RBC pig.
But let me not indulge too much in hubris. Every field is subject to the same dynamics, and could benefit from Romer’s closing advice.
A norm that places an authority above criticism helps people cooperate as members of a belief field that pursues political, moral, or religious objectives. As Jonathan Haidt (2012) observes, this type of norm had survival value because it helped members of one group mount a coordinated defense when they were attacked by another group. It is supported by two innate moral senses, one that encourages us to defer to authority, another which compels self-sacrifice to defend the purity of the sacred.
Science, and all the other research fields spawned by the enlightenment, survive by “turning the dial to zero” on these innate moral senses. Members cultivate the conviction that nothing is sacred and that authority should always be challenged. In this sense, Voltaire is more important to the intellectual foundation of the research fields of the enlightenment than Descartes or Newton.
Get the rest from Romer’s blog.
Trump pledges 4%/yr economic growth (but says his economists don’t want him to). His economists are right – political tinkering with growth is a fantasy:
The growth rate of real per capita GDP in the US, and all leading industrial nations, has been nearly constant since the industrial revolution, at about 2% per year. Over that time, marginal tax rates, infrastructure investments and a host of other policies have varied dramatically, without causing the slightest blip.
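For scale, the compounding arithmetic behind those growth rates (a quick check, not from the post):

```python
import math

def doubling_time(rate):
    """Years for output to double at a constant compound growth rate."""
    return math.log(2) / math.log(1 + rate)

print(round(doubling_time(0.02), 1))  # ~35 years at 2%/yr
print(round(doubling_time(0.04), 1))  # ~18 years at 4%/yr
```

At the historical 2%, real per capita GDP doubles roughly every 35 years; a sustained 4% would halve that — which is exactly why the near-constant 2% record makes the pledge implausible.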
On the other hand, there are ways you can screw up, like having a war or revolution, or failing to provide rule of law and functioning markets. The key is to preserve the conditions that allow the engine of growth – innovation – to function. Trump seems utterly clueless about innovation. His view of the economy is zero-sum: that value is something you extract from your suppliers and customers, not something you create. That view, plus an affinity for authoritarianism and conflict and neglect of the Constitution, bodes ill for a Trump economy.
My posting rate unintentionally fell off a cliff a couple years back. I got busy working on Ventity, and one thing led to another …
Anyhow, I’ve migrated the site to a new host, and merged my Model Library into the content. I’m working on some substantive posts – it’s a good opportunity to reflect on new developments.
Stay tuned …
I’ve just acquired a pair of 18″ Dell XPS portable desktop tablets. They’re slick pieces of hardware that make my iPad seem about as sexy as a beer coaster.
They came with Win8 installed. Now I know why everyone hates it. It makes a good first impression with pretty colors and a simple layout. But after a few minutes, you wonder, where’s all my stuff? There’s no obvious way to run a desktop application, so you end up scouring the web for ways to resurrect the Start menu.
It’s bizarre that Microsoft seems to have forgotten the dynamics that made it a powerhouse in the first place. It’s basically this:
Software is a big nest of positive feedbacks, producing winner-take-all behavior. A few key loops are above. The bottom pair is the classic Bass diffusion model – reinforcing feedback from word of mouth, and balancing feedback from saturation (running out of potential customers). The top loop is an aspect of complementary infrastructure – the more users you have on your platform, the more attractive it is to build apps for it; the more apps there are, the more users you get.
There are lots of similar loops involving accumulation of knowledge, standards, etc. More importantly, this is not a one-player system; there are multiple platforms competing for users, each with its own reinforcing loops. That makes this a success-to-the-successful situation. Microsoft gained huge advantage from these reinforcing loops early in the PC game. Being the first to acquire a huge base of users and applications carried it through many situations in which its tech was not the most exciting thing out there.
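A minimal sketch of those loops (all parameter values are invented for illustration): the classic Bass adoption structure, with the word-of-mouth term scaled by an attractiveness multiplier that grows with the app stock, which in turn grows with the user base.

```python
def simulate_platform(N=1_000_000, p=0.001, q=0.4,
                      app_gain=0.5, dt=0.25, years=20):
    """Bass diffusion plus a complementary-apps reinforcing loop.

    A     -- adopters (installed base) out of potential market N
    apps  -- application stock, built in proportion to the base
    p, q  -- Bass innovation and imitation coefficients
    """
    A, apps = 100.0, 0.0
    for _ in range(int(years / dt)):
        # more apps -> more attractive platform (saturating effect)
        attractiveness = 1.0 + app_gain * apps / (apps + 1000.0)
        # classic Bass adoption; word of mouth scaled by attractiveness
        adoption = (p + q * attractiveness * A / N) * (N - A)
        A += dt * adoption
        apps += dt * 0.001 * A        # developers follow users
    return A

final = simulate_platform()           # approaches, never exceeds, N
```

Run two of these side by side with users and developers choosing between them, and the platform that gets an early lead in A tends to keep it — the success-to-the-successful dynamic described below.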
So, if you’re Microsoft, and Apple throws you a curve ball by launching a new, wildly successful platform, what should you do? It seems to me that the first imperative should be to preserve the advantages conferred by your gigantic user and application base.
Win8 does exactly the opposite of that:
- Hiding the Start menu means that users have to struggle to find their familiar stuff, effectively chucking out a vast resource, in favor of new apps that are slicker, but pathetically few in number.
- That, plus other decisions, enrages committed users and causes them to consider switching platforms, when a smoother transition would have kept them comfortably loyal.
This strategy seems totally bonkers.
We recently discovered – after 8 years without TV – that we actually can get reception. We watched a bit of the Olympics. My sons were amused and amazed by the ads, which they otherwise seldom see.
That led them to postulate the beer-TV feedback loop, which is a self-reinforcing descent into ignorance and drunken sloth: TV watching -> + beer ad viewing -> + beer drinking -> – cognitive capacity, motivation -> TV watching.
The loop makes a cameo appearance in this CLD we dreamed up during a conversation about education, skill and motivation:
It’s a good thing we don’t get Fox, or they’d probably have a lot more to say.
Why ask why?
Forward causal inference and reverse causal questions
Andrew Gelman & Guido Imbens
The statistical and econometrics literature on causality is more focused on “effects of causes” than on “causes of effects.” That is, in the standard approach it is natural to study the effect of a treatment, but it is not in general possible to determine the causes of any particular outcome. This has led some researchers to dismiss the search for causes as “cocktail party chatter” that is outside the realm of science. We argue here that the search for causes can be understood within traditional statistical frameworks as a part of model checking and hypothesis generation. We argue that it can make sense to ask questions about the causes of effects, but the answers to these questions will be in terms of effects of causes.
I haven’t had a chance to digest this yet, but it’s an interesting topic. It’s particularly relevant to system dynamics modeling, where we are seldom seeking only y = f(x), but rather an endogenous theory where x = g(y) also.
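A toy example of the distinction (structure and coefficients are invented): cut the x = g(y) link and the same model becomes an open-loop y = f(x) system, with a visibly different trajectory.

```python
def simulate(feedback=True, dt=0.01, steps=500):
    """Euler-integrate dy/dt = x - y, with dx/dt = -x + b*y
    when the feedback link is on, or dx/dt = -x when it is cut."""
    b = 0.5                      # strength of the x <- y link (invented)
    x, y = 1.0, 0.0
    for _ in range(steps):
        dx = -x + (b * y if feedback else 0.0)
        dy = x - y
        x, y = x + dt * dx, y + dt * dy
    return y

y_open = simulate(feedback=False)    # y = f(x): x is exogenous
y_closed = simulate(feedback=True)   # x = g(y) too: endogenous loop
```

Asked “what caused y?”, the open-loop model answers “x”; in the closed-loop model the honest answer is the loop itself — which is one way of reading Gelman & Imbens’ point that answers to causes-of-effects questions come back phrased as effects of causes.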
See also: Causality in Nonlinear Systems
h/t Peter Christiansen.