Fixed and Variable Limits

After I wrote my last post, it occurred to me that perhaps I should cut Ellis some slack. I still don’t think most people who ponder limits think of them as fixed. But, as a kind of shorthand, we sometimes talk about them that way. Consider my slides from the latest SD conference, in which I reflected on World Dynamics:

It would be easy to get the wrong impression here.

Of course, I was talking about World Dynamics, which doesn’t have an explicit technology stock – Forrester considered technology to be part of the capital accumulation process. That glosses over an important point: it fixes the ratios of economic activity to resource consumption and pollution. World3 shares this limitation, except in some specific technology experiments.

So, it’s really no wonder that, in 1973, it was hard to talk to economists, who were operating with exogenous technical progress (the Solow residual) and substitution along continuous production functions in mind.

Unlimited or exogenous technology doesn’t really make any more sense than no technology, so who’s right?

As I said last time, the answer boils down to whether technology proceeds faster than growth or not. That in turn depends on what you mean by “technology”. Narrowly, there’s fairly abundant evidence that the intensity of use of a variety of materials (per capita, or per unit of GDP) is declining more slowly than activity is growing. As a result, resource consumption (fossil fuels, metals, phosphorus, gravel, etc.) and persistent pollution (CO2, for example) are increasing steadily. By these metrics, sustainability requires a reversal in the relative magnitudes of the growth and technology trends.
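To make that arithmetic concrete, here’s a minimal sketch (the 3% growth and 1% intensity decline are illustrative assumptions, not data): total use tracks GDP times intensity, so throughput keeps rising whenever intensity falls more slowly than GDP grows.

```python
# Illustrative growth-vs-intensity arithmetic (assumed rates, not data).
g = 0.03  # GDP growth, 3%/year
i = 0.01  # material intensity decline, 1%/year

for year in (10, 50, 100):
    gdp = (1 + g) ** year        # GDP relative to today
    intensity = (1 - i) ** year  # use per unit GDP relative to today
    print(f"year {year}: GDP x{gdp:.1f}, intensity x{intensity:.2f}, "
          f"use x{gdp * intensity:.1f}")
```

At these rates, a century of intensity improvement still leaves resource use about seven times higher than today.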

But what does that mean if we take a broad view of technology, including expansions of product scope and changes in lifestyle? The consequences of these material trends don’t matter if we can upload ourselves into computers or escape to space fast enough. Space doesn’t look very exponential yet, and I haven’t really seen credible singularity metrics. This is really the problem with the Marchetti paper that Ellis links, which describes a global carrying capacity of 1 trillion humans, with more room for nature than today, living in floating cities. The question we face is not, can we imagine some future global equilibrium with spectacular performance, but, can we get there from here?

Nriagu, Tales Told in Lead, Science

For the Romans, there was undoubtedly a more technologically advanced future state (modern Europe), but they failed to realize it, because social and environmental feedbacks bit first. So, while technology was important then as now, the possibility of a high-tech future state does not guarantee its achievement.

For Ellis, I think this means that he has to specify much more clearly what he means by future technology and adaptive capacity. Will we geoengineer our way out of climate constraints, for example? For proponents of limits, I think we need to be clearer in our communication about the technical aspects of limits.

For all sides of the debate, models need to improve. Many aspects of technology remain inadequately formulated, and therefore many mysteries remain. Why does the diminishing adoption time for new technologies not translate to increasing GDP growth? What do technical trends look like when measured by welfare indices rather than GDP? To what extent does social IT change the game, vs. serving as the icing on a classical material cake?

Are there limits?

Several people have pointed out Erle Ellis’ NYT opinion, Overpopulation Is Not the Problem:

MANY scientists believe that by transforming the earth’s natural landscapes, we are undermining the very life support systems that sustain us. Like bacteria in a petri dish, our exploding numbers are reaching the limits of a finite planet, with dire consequences. Disaster looms as humans exceed the earth’s natural carrying capacity. Clearly, this could not be sustainable.

This is nonsense.

There really is no such thing as a human carrying capacity. We are nothing at all like bacteria in a petri dish.

In part, this is just a rhetorical trick. When Ellis explains himself further, he says,

There are no environmental/physical limits to humanity.

Of course our planet has limits.

Clear as mud, right?

Here’s the petri dish view of humanity:

I don’t actually know anyone working on sustainability who operates under this exact mental model; it’s substantially a strawdog.

What Ellis has identified is technology.

Yet these claims demonstrate a profound misunderstanding of the ecology of human systems. The conditions that sustain humanity are not natural and never have been. Since prehistory, human populations have used technologies and engineered ecosystems to sustain populations well beyond the capabilities of unaltered “natural” ecosystems.

Well, duh.

The structure Ellis adds is essentially the green loops below:

Of course, the fact that the green structure exists does not mean that the blue structure does not exist. It just means that there are multiple causes competing for dominance in this system.

Ellis talks about improvements in adaptive capacity as if they were coincident with the expansion of human activity. In one sense, that’s true, as having more agents to explore fitness landscapes increases the probability that some will survive. But that’s a Darwinian view that isn’t very promising for human welfare.

Ellis glosses over the fact that technology is a stock (red) – really a chain of stocks that impose long delays:

With this view, one must ask whether technology accumulates more quickly than the source/sink exhaustion driven by the growth of human activity. For early humans, this was evidently possible. But as they say in finance, past performance does not guarantee future returns. Although certain technical measures of progress are extremely rapid (Moore’s Law), aggregate technological progress (as measured by energy intensity or the Solow residual, for example) appears to be fairly slow – at most a couple of percent per year. It hasn’t been fast enough to permit increasing welfare with decreasing material throughput.
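As a rough illustration of that race, here’s a minimal stock-and-flow sketch (hypothetical parameters, not a calibrated model): activity grows exponentially, a technology stock accumulates and reduces resource intensity, and a fixed resource is drawn down by their product.

```python
# Minimal sketch of the technology-vs-depletion race (hypothetical numbers).
dt = 1.0
activity, tech, resource = 1.0, 1.0, 500.0
growth, tech_rate = 0.03, 0.02  # assumed: activity outpaces technology

for year in range(201):
    intensity = 1.0 / tech             # tech accumulation cuts intensity
    throughput = activity * intensity  # resource use per year
    if year % 50 == 0:
        print(f"year {year}: resource {resource:.0f}, throughput {throughput:.2f}")
    resource = max(resource - throughput * dt, 0.0)  # source exhaustion
    activity *= 1 + growth * dt
    tech *= 1 + tech_rate * dt
```

With activity growing at 3%/year and technology at 2%/year, throughput still grows at roughly 1%/year, and the fixed resource is gone within a couple of centuries. Reverse the two rates and throughput shrinks instead, which is the whole point.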

Ellis half recognizes the problem,

Who knows what will be possible with the technologies of the future?

Somehow he’s certain, even in absence of recent precedent or knowledge of the particulars, that technology will outrace constraints.

To answer the question properly, one must really decompose technology into constituents that affect different transformations (resources to economic output, output to welfare, welfare to lifespan, etc.), and identify the social signals that will guide the development of technology and its embodiment in products and services. One should interpret technology broadly – it’s not just knowledge of physics and device blueprints; it’s also technology for organizing human activity, embodied in social institutions.

When you look at things this way, I think it becomes obvious that the kinds of technical problems solved by neolithic societies and imperial China could be radically different from, and uninformative about, those we face today. Further, one should take the history of early civilizations, like the Mayans, as evidence that there are social multipliers that enable collapse even in the absence of definitive physical limits. That implies that, far from being irrelevant, brushes with carrying capacity can easily have severe welfare implications even when physical fundamentals are not binding in principle.

The fact that carrying capacity varies with technology does not free us from the fact that, for any given level of technology, it’s easier to deliver a given level of per capita welfare to fewer people than to more. So the only loops that argue in favor of a larger population involve the links from population to increased learning and adaptive capacity (essentially Simon’s Ultimate Resource hypothesis). But Ellis doesn’t present any evidence that population growth has a causal effect on technology that outweighs its direct material implications. So, one might much better say, “overpopulation is not the only problem.”

Ultimately, I wonder why Ellis and many others are so eager to press the “no limits” narrative.

Most people I know who believe that limits are relevant are essentially advocating internalizing the externalities that arise from failing to recognize limits, in order to guide market allocations, technology and preferences in a direction that avoids constraints. Ellis seems to be asking for an emphasis on the same outcome: technology or adaptive capacity that evades limits. It’s hard to imagine how one would get such technology without signals that promote its development and adoption. So, in a sense, both camps are pursuing compatible policy agendas. The difference is that proclaiming “no limits” makes it a lot harder to make the case for internalizing externalities. If we aren’t willing to make our desire to avoid limits explicit in market signals and social institutions, then we’re relying on luck to deliver the tech we need. That strikes me as a spectacular failure to adopt one of the major technical breakthroughs of our time, the ability to understand earth systems.

Update: Gene Bellinger replicated this in InsightMaker. Replication is a great way to force yourself to think deeply about a model, and often reveals insights and mistakes you’d never get otherwise (short of building the model from scratch yourself). True to form, Gene found issues. In the last diagram, there should be a link from population to output, and maybe consuming should be driven by output rather than capital, as it’s the use, not the equipment, that does the consuming.

Pindyck on Integrated Assessment Models

Economist Robert Pindyck takes a dim view of the state of integrated assessment modeling:

Climate Change Policy: What Do the Models Tell Us?

Robert S. Pindyck

NBER Working Paper No. 19244

Issued in July 2013

Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.

Freepers seem to think that this means the whole SCC enterprise is GIGO. But this is not a case where uncertainty is your friend. Bear in mind that the deficiencies Pindyck discusses, discounting welfare and ignoring extreme outcomes, create a one-sided bias toward an SCC that is too low. Zero (the de facto internalized SCC in most places) is one number that’s virtually certain to be wrong.
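A toy calculation shows why the discount rate alone can swing the answer by orders of magnitude (the damage figure and horizon here are hypothetical, not an SCC estimate):

```python
# Illustrative discounting arithmetic (hypothetical damage, not an SCC estimate).
damage, horizon = 100e12, 100  # $100T damage, 100 years from now

for r in (0.01, 0.03, 0.05, 0.07):
    pv = damage / (1 + r) ** horizon  # present value at discount rate r
    print(f"r = {r:.0%}: PV = ${pv / 1e12:.2f}T")
```

The same future damage is worth about $37T at a 1% discount rate but only about $0.12T at 7%, so the “arbitrary” discounting assumptions Pindyck complains about dominate everything else in the analysis.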

ISDC 2013 Capen quiz results

Participants in my Vensim mini-course at the 2013 System Dynamics Conference outperformed their colleagues from 2012 on the Capen Quiz (mean of 5 right vs. 4 last year).

Five right is well above the typical performance of the public, but sadly this means that few among us are destined to be CEOs, who are often wildly overconfident (console yourself – abject failure on the quiz can make you a titan of industry).

Take the quiz and report back!

Tasty Menu

From the WPI online graduate program and courses in system dynamics:

Truly a fine lineup!

The IAMs that ate the poor

Discounting has long been controversial in climate integrated assessment models (IAMs), with prevailing assumptions less than favorable to future generations.

The evidence in favor of aggressive discounting has generally been macro in nature – observed returns appear to be consistent with discounting of welfare, so that’s what we should do. To swallow this, you have to believe that markets faithfully reveal preferences and that only on-market returns count. Even then, there’s still the problem of confounding of time preference with inequality aversion. Given that this perspective is contradicted by micro behavior, i.e. actually asking people what they want, it’s hard to see a reason other than convenience for its upper hand in decision making. Ultimately, the situation is neatly self-fulfilling. We observe inflated returns consistent with myopia, so we set myopic hurdles for social decisions, yielding inflated short-term returns.
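The confounding is easiest to see in the standard Ramsey rule for the consumption discount rate (a textbook formulation, not specific to any one IAM):

$$ r = \rho + \eta g $$

where $\rho$ is the pure rate of time preference, $g$ is per capita consumption growth, and $\eta$ is the elasticity of marginal utility. Observed returns pin down only $r$; $\eta$ measures aversion to consumption inequality both across time and across people, so the same market evidence is consistent with very different ethical combinations of $\rho$ and $\eta$.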

It gets worse.

Back in 1997, I attended a talk on an early version of the RICE model, a regional version of DICE. In an optimization model with uniform utility functions, there’s an immediate drive to level incomes across all the regions. That’s obviously contrary to the observed global income distribution. A “solution” is to use Negishi weights, which scale each region’s welfare in proportion to the inverse of the marginal utility of consumption there. That prevents income leveling, by explicitly assuming that the rich are rich because they deserve it.
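Concretely, with the constant-relative-risk-aversion utility typical of these models (a standard assumption, not something specific to that talk), the weights work out as:

$$ w_i \propto \frac{1}{U'(c_i)}, \qquad U(c) = \frac{c^{1-\eta}}{1-\eta} \;\Rightarrow\; w_i \propto c_i^{\eta} $$

With log utility ($\eta = 1$), a region with 10x the per capita consumption gets 10x the welfare weight, and the optimizer no longer gains anything by leveling incomes.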

This is a reasonable practical choice if you don’t think you can do anything about income distribution, and you’re not worried that it confounds equity with human capital differences. But when you use the same weights to identify an optimal emissions trajectory, you’re baking the inequity of the current market order into climate policy. In other words, people in developed countries are implicitly worth 10x more than people in developing countries.

Way back when, I didn’t have the words at hand to gracefully ask why it was a good idea to model things this way, but I sure wish I’d had the courage to forge ahead anyway.

The silly thing is that there’s no need to make such inequitable assumptions to model this problem. Elizabeth Stanton analyzes Negishi weighting and suggests alternatives. Richard Tol explored alternative frameworks some time before. And there are still more options, I think.

In the intertemporal optimization framework, one could treat the situation as a game between self-interested regions (with Negishi weights) and an equitable regulator (with equal weights to welfare). In that setting, mitigation by the rich might look like a form of foreign aid that couldn’t be squandered by the elites of poor regions, and thus I would expect deep emissions cuts.

Better still, dump notions of equilibrium and explore the problem with behavioral models, reserving optimization for policy analysis with fair objectives.

Thanks to Ramon Bueno for passing along the Stanton article.

There’s just enough time

In response to the question, “is there still time for a transition to sustainability,” John Sterman cited Donella Meadows,

The truth of the matter is that no one knows.

We have said many times that the world faces not a preordained future, but a choice. The choice is between different mental models, which lead logically to different scenarios. One mental model says that this world for all practical purposes has no limits. Choosing that mental model will encourage extractive business as usual and take the human economy even farther beyond the limits. The result will be collapse.

Another mental model says that the limits are real and close, and that there is not enough time, and that people cannot be moderate or responsible or compassionate. At least not in time. That model is self-fulfilling. If the world’s people choose to believe it, they will be proven right. The result will be collapse.

A third mental model says that the limits are real and close and in some cases below our current levels of throughput. But there is just enough time, with no time to waste. There is just enough energy, enough material, enough money, enough environmental resilience, and enough human virtue to bring about a planned reduction in the ecological footprint of humankind: a sustainability revolution to a much better world for the vast majority.

That third scenario might very well be wrong. But the evidence we have seen, from world data to global computer models, suggests that it could conceivably be made right. There is no way of knowing for sure, other than to try it.

Global modeling & C-ROADS

At the 2013 ISDC, John Sterman, Drew Jones and I presented a plenary talk on Global Models from Malthus to C-ROADS and Beyond. Our slides are in SDS 2013 Global Models Sterman Fid Jones.pdf and my middle section, annotated, is in SDS 2013 Global+ v12 TF excerpt.pdf.

There wasn’t actually much time to get into Malthus, but one thing struck me as I was reading his Essay on the Principle of Population. He identified the debate over limits as a paradigm conflict:

It has been said that the great question is now at issue, whether man shall henceforth start forwards with accelerated velocity towards illimitable, and hitherto unconceived improvement, or be condemned to a perpetual oscillation between happiness and misery, and after every effort remain still at an immeasurable distance from the wished-for goal.

Yet, anxiously as every friend of mankind must look forwards to the termination of this painful suspense, and eagerly as the inquiring mind would hail every ray of light that might assist its view into futurity, it is much to be lamented that the writers on each side of this momentous question still keep far aloof from each other. Their mutual arguments do not meet with a candid examination. The question is not brought to rest on fewer points, and even in theory scarcely seems to be approaching to a decision.

The advocate for the present order of things is apt to treat the sect of speculative philosophers either as a set of artful and designing knaves who preach up ardent benevolence and draw captivating pictures of a happier state of society only the better to enable them to destroy the present establishments and to forward their own deep-laid schemes of ambition, or as wild and mad-headed enthusiasts whose silly speculations and absurd paradoxes are not worthy the attention of any reasonable man.

The advocate for the perfectibility of man, and of society, retorts on the defender of establishments a more than equal contempt. He brands him as the slave of the most miserable and narrow prejudices; or as the defender of the abuses of civil society only because he profits by them. He paints him either as a character who prostitutes his understanding to his interest, or as one whose powers of mind are not of a size to grasp any thing great and noble, who cannot see above five yards before him, and who must therefore be utterly unable to take in the views of the enlightened benefactor of mankind.

In this unamicable contest the cause of truth cannot but suffer. The really good arguments on each side of the question are not allowed to have their proper weight. Each pursues his own theory, little solicitous to correct or improve it by an attention to what is advanced by his opponents.

Not much has changed in 200 years.

Much of the criticism of Limits to Growth remains completely spurious, and even its serious critics mostly failed to recognize that Limits discussed growth in material rather than economic/technological terms. Still, I think the SD field missed some opportunities for learning and constructive dialog amid all the furor.

For example, one of the bitterest critics of Limits, William Nordhaus, wrote in 1974,

Economists have for the most part ridiculed the new view of growth, arguing that it is merely Chicken Little Run Wild. I think that the new view of growth must be taken seriously and analyzed carefully.

And he has, at least from the lens of the economic paradigm.

There are also legitimate technical critiques of the World3 model, as in Wil Thissen’s thesis, later published in IEEE Transactions, that have never been properly integrated into global modeling.

Through this failure to communicate, we find ourselves forty years down the road, without a sufficiently improved global model that permits exploration of both sides of the debate. Do exponential growth, finite limits, delays, and erosion of carrying capacity yield persistent overshoot and collapse, or will technology take care of the problem by itself?
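For anyone who wants to poke at that question directly, here’s a minimal sketch of the overshoot mechanism (hypothetical parameters, nothing like World3’s detail): exponential growth that responds to its limit only after a long perception delay, with sustained overshoot eroding the carrying capacity itself.

```python
# Minimal overshoot-and-collapse sketch (hypothetical parameters, not World3).
dt = 0.25
pop, capacity, perceived = 1.0, 10.0, 0.1   # perceived = delayed pop/capacity
growth, erosion, delay = 0.03, 0.02, 20.0   # assumed rates; 20-year delay

for step in range(int(300 / dt)):
    ratio = pop / capacity
    perceived += (ratio - perceived) / delay * dt  # first-order perception delay
    pop += growth * pop * (1 - perceived) * dt     # growth restrained with a lag
    if ratio > 1:                                  # overshoot erodes the limit
        capacity -= erosion * capacity * (ratio - 1) * dt
    if step % int(50 / dt) == 0:
        print(f"t={step * dt:5.0f}  pop={pop:6.2f}  capacity={capacity:6.2f}")
```

Whether technology “takes care of the problem” then amounts to whether it shortens the delay, or raises the capacity faster than overshoot erodes it.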

Model quality: the missing link

A number of developments are making model quality control increasingly crucial.

  • Models are generally playing a wider role in policy debates. Efforts like the Climate CoLab are making models accessible to wide audiences for interactive use.
  • The use of automated stochastic optimization and exploratory modeling and analysis (EMA) is likely to take models into parts of their parameter spaces that the modeler herself has not explored.
  • Standards like SMILE/XMILE will make models and model components more reusable and shareable.

I think this could all come to a bad end, in which priesthoods are paid to develop competing models that are incomprehensible to the general public, thus reducing modeling to a sophisticated form of propaganda.

Fortunately, some elements of an antidote to this dystopia are at hand, including documentation standards, and tools and languages for expressing Reality Checks on model behavior. But I think we need a lot more. For example,

  • Standards could include metadata standards, so that model components are self-documenting in ways that make it possible for users to easily discover their limitations.
  • EMA tools could be directed towards discovery of model problems before policy analysis commences.
  • Tools that present models online could expose their innards as well as results.
  • Languages are needed for meta-reality checks that describe and test higher-level assumptions, like perfect foresight (or lack thereof) – see the sketch below.
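As a sketch of the flavor of such automated checks (a hypothetical model and check in Python, not Vensim’s actual Reality Check language):

```python
# Hypothetical sketch of an automated reality check (illustrative only).
def run_model(resource_available: float) -> float:
    """Stand-in model: economic output for a given resource availability."""
    productivity = 2.0  # assumed fixed productivity for this stand-in
    return productivity * resource_available

def check_no_resources_no_output(model) -> None:
    """Reality check: zero resource input must imply zero output."""
    assert model(0.0) == 0.0, "model produces output from nothing"

check_no_resources_no_output(run_model)  # passes for this stand-in model
```

The same pattern extends to meta-level assumptions: drive the model with an extreme or counterfactual test input, and assert that the physically or behaviorally necessary consequence follows.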

Perhaps most importantly, model quality needs to become a pervasive part of the culture of model building and consumption in all disciplines.