A case for strict unit testing

Over on the Vensim forum, Jean-Jacques Laublé points out an interesting bug in the World3 population sector. His forum post includes the model, with a revealing extreme conditions test and a correction. I think it’s important enough to copy my take here:

This is a very interesting discovery. The equations in question are:

maturation 14 to 15 =
 ( ( Population 0 To 14 ) )
 * ( 1
 - mortality 0 to 14 )
 / 15
 Units: Person/year
 The fractional rate at which people aged 0-14 mature into the
 next age cohort (MAT1#5).

**************************************************************
 mortality 0 to 14=
 IF THEN ELSE(Time = 2020 * one year, 1 / one year, mortality 0 to 14 table
 ( life expectancy/one year ) )
 Units: 1/year
 The fractional mortality rate for people aged 0-14 (M1#4).

**************************************************************

(The second is the one modified for the pulse mortality test.)

In the ‘maturation 14 to 15’ equation, the obvious issue is that ‘15’ is a hidden dimensioned parameter. One might argue that this instance is ‘safe’ because 15 years is definitionally the residence time of people in the 0 to 14 cohort – but I would still avoid this usage, and make the 15 yrs a named parameter, like “child cohort duration”, with a corresponding name change to the stock. If nothing else, this would make the structure easier to reuse.

The sneaky bit here, revealed by JJ’s test, is that the ‘1’ in the term (1 – mortality 0 to 14) is not a benign dimensionless number, as we often assume in constructions like 1/(1+a*x). This 1 actually represents the maximum feasible stock outflow rate, in fraction/year, implying that a mortality rate of 1/yr, as in the test input, would consume the entire outflow, leaving no children alive to mature into the next cohort. This is incorrect, because the maximum feasible outflow rate is 1/TIME STEP, and TIME STEP = 0.5, so that 1 should really be 2 ~ frac/year. This is why maturation wrongly goes to 0 in JJ’s experiment, when in fact some children should remain to age into the next cohort.
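To see why, consider the Euler stock update performed each time step. A minimal sketch in Python (my illustration, not JJ’s model):

# Euler integration: stock(t+dt) = stock(t) - outflow * dt, so the largest
# outflow that just empties the stock is stock/dt, i.e. a fractional rate
# of 1/TIME STEP (2/year here), not 1/year.
stock = 100.0   # people in the cohort
dt = 0.5        # TIME STEP, years
outflow = 1.0 * stock           # a 1/year fractional outflow
print(stock - outflow * dt)     # 50.0 people left after one step
print(stock - (stock/dt) * dt)  # 0.0 - only a 2/year rate empties it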

In addition, this construction means that the units in the equation are incorrect – the ‘15’ has to be assumed to be dimensionless for this to work. If we assign correct units to the inputs, we have a problem:

maturation 14 to 15 = ~ people/year/year
 ( ( Population 0 To 14 ) ) ~ people
 * ( 1 - mortality 0 to 14 ) ~ fraction/year
 / 15 ~ 1/year

Obviously the left side of this equation, maturation, cannot be people/year/year.
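This kind of error is exactly what mechanical units checking catches. A minimal sketch of the check using Python’s pint units library (my construction, not part of the original post or model):

import pint

ureg = pint.UnitRegistry()
ureg.define("person = [person]")  # add a base dimension for people

population = 1000 * ureg.person   # Population 0 To 14
mortality = 0.01 / ureg.year      # mortality 0 to 14 ~ 1/year
hidden_15 = 15 * ureg.year        # the '15', wearing its true units

try:
    maturation = population * (1 - mortality) / hidden_15
except pint.DimensionalityError as err:
    print(err)  # can't subtract 1/year from a dimensionless 1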

JJ’s correction is:

maturation 14 to 15=
 ( ( Population 0 To 14 ) )
 * ( 1 - (mortality 0 to 14 * TIME STEP))
 / size of the 0 to 14 population

In this case, the ‘1’ represents the maximum fraction of the population that can flow out in a time step, so it really is dimensionless. (mortality 0 to 14 * TIME STEP) represents the fractional outflow from mortality within the time step, so it too is properly dimensionless (1/year * year). You could also write this term as:

( 1/TIME STEP - mortality 0 to 14 ) / (1/TIME STEP)

In this case you can see that the term is reducing maturation by the fraction of cohort residents who don’t make it to the next age group. 1/TIME STEP represents the maximum feasible outflow, i.e. 2/year if TIME STEP = 0.5 year. In this form, it’s easy to see that this term approaches 1 (no effect) in the continuous time limit as TIME STEP approaches 0.
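A quick numerical check of that limit, using the 1/year pulse mortality from JJ’s test (my sketch):

mortality = 1.0                             # 1/year, JJ's pulse test value
for dt in (1.0, 0.5, 0.25, 0.125, 0.0625):  # TIME STEP, years
    print(dt, (1/dt - mortality) / (1/dt))  # = 1 - mortality*dt
# prints 0.0, 0.5, 0.75, 0.875, 0.9375: the term approaches 1 as dt -> 0,
# while the original (1 - mortality) is stuck at 0 for any dt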

I should add that these issues probably have only a tiny influence on the kind of experiments performed in Limits to Growth and certainly wouldn’t change the qualitative conclusions. However, I think there’s still a strong argument for careful attention to units: a model that’s right for the wrong reasons is a danger to future users (including yourself), who might use it in unanticipated ways that challenge the robustness in extremes.

AI, population and limits

Elon says we’re in danger of a population crash.

Interestingly, he invokes Little’s Law: “UN projections are utter nonsense. Just multiply last year’s births by life expectancy.” Doing his math, 135 million births/year * 71 years life expectancy = 9.6 billion people in equilibrium. Hardly a crash. And, of course, life expectancy is going up (US excepted).
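For what it’s worth, the arithmetic checks out. Little’s Law says the steady-state stock equals the arrival rate times the residence time, L = λW:

births_per_year = 135e6    # last year's births, per the quote
life_expectancy = 71       # years, the residence time in the population
equilibrium = births_per_year * life_expectancy  # Little's Law: L = lambda * W
print(f"{equilibrium / 1e9:.1f} billion")        # 9.6 billion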

But Elon also says AI is going to do all the work.

So what exactly do we need all those people for? A lower population, with no work to do and more fun resources per capita sounds pretty good to me. But apparently, they’re not for here. “If there aren’t enough people for Earth, then there definitely won’t be enough for Mars.”

Surely he knows that the physics of moving a significant chunk of Earth’s population to Mars is sketchy, and that it will likely be a homegrown effort, unconstrained by the availability of Earthlings?


Nature Reverses on Limits

Last week Nature editorialized,

Are there limits to economic growth? It’s time to call time on a 50-year argument

Fifty years ago this month, the System Dynamics group at the Massachusetts Institute of Technology in Cambridge had a stark message for the world: continued economic and population growth would deplete Earth’s resources and lead to global economic collapse by 2070. This finding was from their 200-page book The Limits to Growth, one of the first modelling studies to forecast the environmental and social impacts of industrialization.

For its time, this was a shocking forecast, and it did not go down well. Nature called the study “another whiff of doomsday” (see Nature 236, 47–49; 1972). It was near-heresy, even in research circles, to suggest that some of the foundations of industrial civilization — mining coal, making steel, drilling for oil and spraying crops with fertilizers — might cause lasting damage. Research leaders accepted that industry pollutes air and water, but considered such damage reversible. Those trained in a pre-computing age were also sceptical of modelling, and advocated that technology would come to the planet’s rescue. Zoologist Solly Zuckerman, a former chief scientific adviser to the UK government, said: “Whatever computers may say about the future, there is nothing in the past which gives any credence whatever to the view that human ingenuity cannot in time circumvent material human difficulties.”

“Another Whiff of Doomsday” (unpaywalled: Nature whiff of doomsday 236047a0.pdf) was likely penned by Nature editor John Maddox, who wrote in his 1972 book, The Doomsday Syndrome,

“Tiny though the earth may appear from the moon, it is in reality an enormous object. The atmosphere of the earth alone weighs more than 5,000 million million tons, more than a million tons of air for each human being now alive. The water on the surface of the earth weighs more than 300 times as much – in other words, each living person’s share of the water would just about fill a cube half a mile in each direction… It is not entirely out of the question that human intervention could at some stage bring changes, but for the time being the vast scale on which the earth is built should be a great comfort. In other words, the analogy of space-ship earth is probably not yet applicable to the real world. Human activity, spectacular though it may be, is still dwarfed by the human environment.”

Reciting the scale of earth’s resources hasn’t held up well as a counterargument to Limits, for the reason given by Forrester and Meadows et al. at the time: exponential growth approaches any finite limit in a relatively small number of doublings. The Nature editors were clearly aware of this back in ’72, but ignored its implications.
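To make the point concrete, a back-of-envelope sketch (the 2%/year growth rate and 1000x headroom are arbitrary illustrative values):

import math

growth = 0.02                          # 2%/year exponential growth
doubling_time = math.log(2) / growth   # ~35 years per doubling
headroom = 1000                        # suppose the limit is 1000x today's level
doublings = math.log2(headroom)        # ~10 doublings to reach it
print(f"{doublings * doubling_time:.0f} years to the limit")  # ~345 years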

Instead, they subscribed to a “smooth approach” view, in which “a kind of restraint” limits population all by itself.

There are a lot of problems with this reasoning, not least of which is that economic activity is growing faster than population, yet there is no historic analog of the demographic transition for economies. However, I think the most fundamental problem with the editors’ mental model is that it’s effectively first order. Population is the only stock of interest; to the extent that they mention resources and pollution, it is only to propose that prices and preferences will take care of them. There’s no consideration of the possibility of a laissez-faire demographic transition resulting in absolute levels of population and economic activity requiring resource withdrawals that deplete resources and saturate sinks, leading to eventual overshoot and collapse. I’m reminded of Jay Forrester’s frequent comment, to the effect of, “if you have a model, you’ll be the only person in the room who can speak for 20 minutes without self-contradiction.” The ’72 Nature editorial clearly suffers for lack of a model.

While the ’22 editorial at last acknowledges the existence of the problem, its prescription is “more research.”

Researchers must try to resolve a dispute on the best way to use and care for Earth’s resources.

But the debates haven’t stopped. Although there’s now a consensus that human activities have irreversible environmental effects, researchers disagree on the solutions — especially if that involves curbing economic growth. That disagreement is impeding action. It’s time for researchers to end their debate. The world needs them to focus on the greater goals of stopping catastrophic environmental destruction and improving well-being.

… green-growth and post-growth scientists need to see the bigger picture. Right now, both are articulating different visions to policymakers, and there is a risk this will delay action. In 1972, there was still time to debate, and less urgency to act. Now, the world is running out of time.

If there’s disagreement about the solution, then the solution should be distributed, so that we can learn from different approaches. It’s easy to verify success by checking the equilibrium conditions for sources and sinks: as long as they’re in decline, policies need to adjust. However, I don’t think lack of agreement about the solution is the real problem.

The real problem is that the research “consensus that human activities have irreversible environmental effects” has no counterpart in the political and economic spheres. Neither green-growth nor degrowth has de facto support. This is not a problem that will be solved by more environmental or economic research.

Limits and Markets

Almost fifty years ago, economists claimed that markets would save us from Limits to Growth. Here’s William Nordhaus, writing about World Dynamics in Measurement without Data (1973):

How’s that working out? I would argue, not well.

Certainly there are functional markets for commodities like oil and gas, but even then a substantial share of the resources is allocated by myopic regulators captive to industry interests.

But for practically everything else, the markets that would in theory allocate across resources, time and space simply don’t exist, even today.

Water markets haven’t prevented the decline of Lake Mead, and they’re resisted widely, including here in Bozeman.

Joseph Stiglitz explained in the WSJ:

A similar pattern could unfold again. But economic forces alone may not be able to fix the problems this time around. Societies as different as the U.S. and China face stiff political resistance to boosting water prices to encourage efficient use, particularly from farmers. …

This troubles some economists who used to be skeptical of the premise of “The Limits to Growth.” As a young economist 30 years ago, Joseph Stiglitz said flatly: “There is not a persuasive case to be made that we face a problem from the exhaustion of our resources in the short or medium run.”

Today, the Nobel laureate is concerned that oil is underpriced relative to the cost of carbon emissions, and that key resources such as water are often provided free. “In the absence of market signals, there’s no way the market will solve these problems,” he says. “How do we make people who have gotten something for free start paying for it? That’s really hard. If our patterns of living, our patterns of consumption are imitated, as others are striving to do, the world probably is not viable.”

What is the price of declining rainforests, reefs or insects? What would markets quote for killing a bird with neonicotinoids or a wind turbine, or for your Italian songbird pan-fry? What do gravel pits pay for dust and noise emissions, and what will autonomous EVs pay for increased congestion? The answer is almost universally zero. Even things that have received much attention, like emissions of greenhouse gases and criteria air pollutants, are free in most places.

These public goods aren’t free because they’re abundant or unimportant. They’re free because there are no property rights for them, and people resist creating the market mechanisms needed. Everyone loves the free market, until it applies to them. This might be OK if other negative feedback mechanisms picked up the slack, but those clearly aren’t functioning sufficiently either.

Breakthrough Optimism

From Models of Doom, the Sussex critique of the Limits to Growth:

Real challenges will no doubt arise if world energy consumption continues to grow in the long-term at the current rate, but limited reserves of non-renewable energy resources are unlikely to represent a serious threat on reasonable assumptions about the ultimate size of the reserves and technical progress. …

It is not unreasonable to expect that within 30 years a breakthrough with fusion power will provide virtually inexhaustible cheap energy supplies, but should this breakthrough take considerably longer, pessimism would still be unjustified. There are untapped reserves of non-conventional hydrocarbons which will become economic after further technical development and if prices of conventional fossil fuels continue to rise.

At AAAS in 2005, a fusion researcher pointed out that 1950s predictions of working fusion 50 years out had expired … with fusion prospects still 50 years out.

This MIT Project Says Nuclear Fusion Is 15 Years Away (No, Really, This Time)

Expert: “I’m 100 Percent Confident” Fusion Power Will Be Practical
Companies chasing after the elusive technology hope to build reactors by 2030.

Is fusion finally just around the corner? I wouldn’t count on it. Even if we do get a breakthrough in 10 to 15 years, or tomorrow, it’s still a long way from proof of concept to deployment on a scale that’s helpful for mitigating CO2 emissions and avoiding use of destructive resources like tar sands.

Limits to Growth Redux

Every couple of years, an article comes out reviewing the performance of the World3 model against data, or constructing an alternative, extended model based on World3. Here’s the latest:

Abstract
This study investigates the notion of limits to socioeconomic growth with a specific focus on the role of climate change and the declining quality of fossil fuel reserves. A new system dynamics model has been created. The World Energy Model (WEM) is based on the World3 model (The Limits to Growth, Meadows et al., 2004) with climate change and energy production replacing generic pollution and resources factors. WEM also tracks global population, food production and industrial output out to the year 2100. This paper presents a series of WEM’s projections; each of which represent broad sweeps of what the future may bring. All scenarios project that global industrial output will continue growing until 2100. Scenarios based on current energy trends lead to a 50% increase in the average cost of energy production and 2.4–2.7 °C of global warming by 2100. WEM projects that limiting global warming to 2 °C will reduce the industrial output growth rate by 0.1–0.2%. However, WEM also plots industrial decline by 2150 for cases of uncontrolled climate change or increased population growth. The general behaviour of WEM is far more stable than World3 but its results still support the call for a managed decline in society’s ecological footprint.

The new paper puts economic collapse about a century later than it occurred in Limits. But that presumes that the phrase highlighted above (“climate change and energy production replacing generic pollution and resources factors”) is a legitimate simplification: GHGs are the only pollutant, and energy the only resource, that matters. Are we really past the point of concern over PCBs, heavy metals, etc., with all future chemical and genetic technologies free of risk? Well, maybe … (Note that climate integrated assessment models generally indulge in the same assumption.)

But quibbling over dates is to miss a key point of Limits to Growth: the model, and the book, are not about point prediction of collapse in year 20xx. The central message is about a persistent overshoot behavior mode in a system with long delays and finite boundaries, when driven by exponential growth.

We have deliberately omitted the vertical scales and we have made the horizontal time scale somewhat vague because we want to emphasize the general behavior modes of these computer outputs, not the numerical values, which are only approximately known.

Species Restoration & Policy Resistance

I’ve seen a lot of attention lately to restoration of extinct species. It strikes me as a band-aid, not a solution.

Here’s the core of the system:

Critters don’t go extinct for lack of human intervention. They go extinct because the balance of birth and death rates is unfavorable, so that population declines, and (stochastically) winks out.

That happens naturally of course, but anthropogenic extinctions are happening much faster than usual. The drivers (red) are direct harvest and loss of the resource base on which species rely. The resource base is largely habitat, but also other species and ecosystem services that are themselves harvested, poisoned by pollutants, etc.
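The ‘winks out’ mechanism is easy to illustrate with a toy birth-death process (parameters purely illustrative):

import random

def simulate(pop=50, birth=0.10, death=0.12, years=500, seed=1):
    # Each year every individual reproduces with probability `birth` and
    # dies with probability `death`; deaths outpace births on average, so
    # the population declines ~2%/year and eventually winks out by chance.
    rng = random.Random(seed)
    for year in range(years):
        births = sum(rng.random() < birth for _ in range(pop))
        deaths = sum(rng.random() < death for _ in range(pop))
        pop += births - deaths
        if pop <= 0:
            return year  # extinct
    return None  # survived this run

print(simulate())  # the extinction year varies with the seed, not the outcome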

Reintroducing lost species may be helpful in itself (who wouldn’t want to see millions of passenger pigeons?), but unless the basic drivers of overharvest and resource loss are addressed, species are reintroduced into an environment in which the net balance of births and deaths favors re-extinction. What’s the point of that?

If the drivers of extinction – ultimately population and capital growth plus bad management – were under control, we wouldn’t need much restoration. If they’re out of control, genetic restoration seems likely to be overwhelmed, or perhaps even to contribute to problems through parachuting cats side effects.

This is not where I’d be looking for leverage.

Fixed and Variable Limits

After I wrote my last post, it occurred to me that perhaps I should cut Ellis some slack. I still don’t think most people who ponder limits think of them as fixed. But, as a kind of shorthand, we sometimes talk about them that way. Consider my slides from the latest SD conference, in which I reflected on World Dynamics.

It would be easy to get the wrong impression here.

Of course, I was talking about World Dynamics, which doesn’t have an explicit technology stock – Forrester considered technology to be part of the capital accumulation process. That glosses over an important point by fixing the ratios of economic activity to resource consumption and pollution. World3 shares this limitation, except in some specific technology experiments.

So, it’s really no wonder that, in 1973, it was hard to talk to economists, who were operating with exogenous technical progress (the Solow residual) and substitution along continuous production functions in mind.

Unlimited or exogenous technology doesn’t really make any more sense than no technology, so who’s right?

As I said last time, the answer boils down to whether technology proceeds faster than growth or not. That in turn depends on what you mean by “technology”. Narrowly, there’s fairly abundant evidence that the intensity of use (per capita or per unit of GDP) of a variety of materials is going down more slowly than growth. As a result, resource consumption (fossil fuels, metals, phosphorus, gravel, etc.) and persistent pollution (CO2, for example) are increasing steadily. By these metrics, sustainability requires a reversal in the relative magnitudes of the growth and technology trends.
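The underlying arithmetic is simple growth accounting: throughput = activity × intensity, so growth rates roughly add. With illustrative round numbers:

g_activity = 0.03     # world economic growth, ~3%/year (illustrative)
g_intensity = -0.015  # material intensity decline, ~1.5%/year (illustrative)
g_throughput = g_activity + g_intensity             # rates add for a product
print(f"throughput grows {g_throughput:.1%}/year")  # +1.5%: tech loses the race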

But taking a broad view of technology, including product scope expansions and lifestyle, what does that mean? The consequences of these material trends don’t matter if we can upload ourselves into computers or escape to space fast enough. Space doesn’t look very exponential yet, and I haven’t really seen credible singularity metrics. This is really the problem with the Marchetti paper that Ellis links, describing a global carrying capacity of 1 trillion humans, with more room for nature than today, living in floating cities. The question we face is not, can we imagine some future global equilibrium with spectacular performance, but, can we get there from here?

Nriagu, “Tales Told in Lead,” Science

For the Romans, there was undoubtedly a more technologically advanced future state (modern Europe), but they failed to realize it, because social and environmental feedbacks bit first. So, while technology was important then as now, the possibility of a high tech future state does not guarantee its achievement.

For Ellis, I think this means that he has to specify much more clearly what he means by future technology and adaptive capacity. Will we geoengineer our way out of climate constraints, for example? For proponents of limits, I think we need to be clearer in our communication about the technical aspects of limits.

For all sides of the debate, models need to improve. Many aspects of technology remain inadequately formulated, and therefore many mysteries remain. Why does the diminishing adoption time for new technologies not translate to increasing GDP growth? What do technical trends look like when measured by welfare indices rather than GDP? To what extent does social IT change the game, vs. serving as the icing on a classical material cake?

Are there limits?

Several people have pointed out Erle Ellis’ NYT opinion, Overpopulation Is Not the Problem:

MANY scientists believe that by transforming the earth’s natural landscapes, we are undermining the very life support systems that sustain us. Like bacteria in a petri dish, our exploding numbers are reaching the limits of a finite planet, with dire consequences. Disaster looms as humans exceed the earth’s natural carrying capacity. Clearly, this could not be sustainable.

This is nonsense.

There really is no such thing as a human carrying capacity. We are nothing at all like bacteria in a petri dish.

In part, this is just a rhetorical trick. When Ellis explains himself further, he says,

There are no environmental/physical limits to humanity.

Of course our planet has limits.

Clear as mud, right?

Here’s the petri dish view of humanity:

I don’t actually know anyone working on sustainability who operates under this exact mental model; it’s substantially a strawdog.

What Ellis has identified is technology.

Yet these claims demonstrate a profound misunderstanding of the ecology of human systems. The conditions that sustain humanity are not natural and never have been. Since prehistory, human populations have used technologies and engineered ecosystems to sustain populations well beyond the capabilities of unaltered “natural” ecosystems.

Well, duh.

The structure Ellis adds is essentially the green loops below:

Of course, the fact that the green structure exists does not mean that the blue structure does not exist. It just means that there are multiple causes competing for dominance in this system.

Ellis talks about improvements in adaptive capacity as if they’re coincident with the expansion of human activity. In one sense, that’s true, as having more agents to explore fitness landscapes increases the probability that some will survive. But that’s a Darwinian view that isn’t very promising for human welfare.

Ellis glosses over the fact that technology is a stock (red) – really a chain of stocks that impose long delays:

With this view, one must ask whether technology accumulates more quickly than the source/sink exhaustion driven by the growth of human activity. For early humans, this was evidently possible. But as they say in finance, past performance does not guarantee future returns. In spite of the fact that certain technical measures of progress are extremely rapid (Moore’s Law), it appears that aggregate technological progress (as measured by energy intensity or the Solow residual, for example) is fairly slow – at most a couple % per year. It hasn’t been fast enough to permit increasing welfare with decreasing material throughput.

Ellis half recognizes the problem,

Who knows what will be possible with the technologies of the future?

Somehow he’s certain, even in the absence of recent precedent or knowledge of the particulars, that technology will outrace constraints.

To answer the question properly, one must really decompose technology into constituents that affect different transformations (resources to economic output, output to welfare, welfare to lifespan, etc.), and identify the social signals that will guide the development of technology and its embodiment in products and services. One should interpret technology broadly – it’s not just knowledge of physics and device blueprints; it’s also tech for organization of human activity embodied in social institutions.

When you look at things this way, I think it becomes obvious that the kinds of technical problems solved by neolithic societies and imperial China could be radically different from, and uninformative about, those we face today. Further, one should take the history of early civilizations, like the Mayans, as evidence that there are social multipliers that enable collapse even in the absence of definitive physical limits. That implies that, far from being irrelevant, brushes with carrying capacity can easily have severe welfare implications even when physical fundamentals are not binding in principle.

The fact that carrying capacity varies with technology does not free us from the fact that, for any given level of technology, it’s easier to deliver a given level of per capita welfare to fewer people than to more. So the only loops that argue in favor of a larger population involve the links from population to increased learning and adaptive capacity (essentially Simon’s Ultimate Resource hypothesis). But Ellis doesn’t present any evidence that population growth has a causal effect on technology that outweighs its direct material implications. So, one might much better say, “overpopulation is not the only problem.”

Ultimately, I wonder why Ellis and many others are so eager to press the “no limits” narrative.

Most people I know who believe that limits are relevant are essentially advocating internalizing the externalities that arise from failure to recognize limits, to guide market allocations, technology and preferences in a direction that avoids constraints. Ellis seems to be asking for an emphasis on the same outcome: technology or adaptive capacity to evade limits. It’s hard to imagine how one would get such technology without signals that promote its development and adoption. So, in a sense, both camps are pursuing compatible policy agendas. The difference is that proclaiming “no limits” makes it a lot harder to make the case for internalizing externalities. If we aren’t willing to make our desire to avoid limits explicit in market signals and social institutions, then we’re relying on luck to deliver the tech we need. That strikes me as a spectacular failure to adopt one of the major technical breakthroughs of our time: the ability to understand earth systems.

Update: Gene Bellinger replicated this in InsightMaker. Replication is a great way to force yourself to think deeply about a model, and often reveals insights and mistakes you’d never get otherwise (short of building the model from scratch yourself). True to form, Gene found issues. In the last diagram, there should be a link from population to output, and maybe consuming should be driven by output rather than capital, as it’s the use, not the equipment, that does the consuming.