Another great conversation at the Edge weaves together a number of themes I’ve been thinking about lately, like scientific revolutions, big data, learning from models, filter bubbles and the balance between content creation and consumption. I can’t embed the video or do it full justice here, so go watch it or read the transcript (the latter is a nice rarity these days).
Pagel’s fundamental hypothesis is that humans, as social animals, are wired for imitation more than innovation, for the very good reason that imitation is easy, while innovation is hard, error-prone and sometimes dangerous. Better communication intensifies the advantage to imitators, as it has become incredibly cheap to observe our fellows in large networks like Facebook. There are a variety of implications of this, including the possibility that, more than ever, large companies have strong incentives to imitate through acquisition of small innovators rather than to risk innovating themselves. This resonates very much with Ventana colleague David Peterson’s work on evolutionary simulation of the origins of economic growth and creativity.
At one point Pagel describes innovation as a combination of a generative process and a testing process. In nature, the generative process is mutation, while the testing process is natural selection. For humans, fortunately, the testing process often involves mental models, and lately formal models, that permit simulated selection; as Pagel notes, citing Popper, “our hypotheses die in our stead.” Pagel wonders whether the generative process in human creativity is little more than random. I suspect that it’s a mix. There are lots of problems for which we have somewhat routinized generative solutions. But, in some domains, we know so little that we have to resort to essentially random recombination or trial of ideas. I think of optimization as an analogy: for many problems, like finding the maximum of y = 3 - 5*x + 7*x^2 - 2*x^3 - 5*x^4 + x^5 - 2*x^6, we have an explicit formula or at least an algorithm (like hill climbing) for finding the answer. But for others, the dimensionality is too great, there are multiple optima, and we have no model of what the objective function should look like, so we have to resort to quasi-random approaches like simulated annealing.
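To make the analogy concrete, here’s a minimal sketch of simulated annealing applied to that polynomial. The parameter choices (starting temperature, cooling rate, Gaussian proposal width) are illustrative assumptions of mine, not anything from the conversation; the point is just that random “mutations” plus a selection rule can find a maximum without any model of the function’s shape.

```python
import math
import random

def f(x):
    """The example objective from the text: y = 3 - 5x + 7x^2 - 2x^3 - 5x^4 + x^5 - 2x^6."""
    return 3 - 5*x + 7*x**2 - 2*x**3 - 5*x**4 + x**5 - 2*x**6

def simulated_annealing(f, x0=0.0, t0=5.0, cooling=0.995, steps=5000, seed=42):
    """Maximize f: propose random moves (generation), accept or reject them (selection)."""
    rng = random.Random(seed)
    x, y = x0, f(x0)
    best_x, best_y = x, y
    t = t0
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.5)   # random "mutation" of the candidate
        y_new = f(x_new)
        dy = y_new - y
        # always accept improvements; while "hot", sometimes accept worse moves,
        # which lets the search escape local optima
        if dy > 0 or rng.random() < math.exp(dy / t):
            x, y = x_new, y_new
            if y > best_y:
                best_x, best_y = x, y
        t *= cooling                       # gradually cool: less randomness over time
    return best_x, best_y

best_x, best_y = simulated_annealing(f)
print(f"maximum near x = {best_x:.3f}, y = {best_y:.3f}")
```

For a smooth one-dimensional function like this one, hill climbing from a few starting points would do just as well; annealing earns its keep when the landscape is high-dimensional and full of local optima, which is exactly the “clueless” regime the analogy describes.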
As David’s simulations showed, the corporate world is not much different. There are tactical situations where firms have enough of a model of the situation to innovate effectively (e.g., figuring out the cheapest way to make a widget), and there are strategic situations where firms are clueless, and even large established companies are apt to walk off a cliff. To avoid such spectacular displays of evolution, large organizations evolved bureaucracy and hierarchy to prevent dangerous innovation. The problem with imitation as a fallback strategy in the absence of deliberate innovation is that, in dynamically complex situations, it’s not clear whom to imitate. Is a big, fast-growing firm successful because it has a good idea, or because it is drawing down some unseen stock in a way that can’t go on forever at scale? (Think Enron.)
Still, I’m not convinced that rapid communication is making us dumber in aggregate. It seems like we ought to benefit from cheaper imitation, even if it shifts the balance between copying and innovation. I suspect that the real threat is not the scale of networks, but their homogeneity. Facebook makes it easier for the whole planet to aspire to imitate the dumber aspects of lifestyle in apparently successful places like the US. I’d like to see some things copied (democracy, while we still have some) while others (carbon intensity) remain subject to diverse local experiments. However, global-scale copying does not seem to favor that.