Environmental Homeostasis

Replicated from

The Emergence of Environmental Homeostasis in Complex Ecosystems

The Earth, with its core-driven magnetic field, convective mantle, mobile lid tectonics, oceans of liquid water, dynamic climate and abundant life is arguably the most complex system in the known universe. This system has exhibited stability in the sense of, bar a number of notable exceptions, surface temperature remaining within the bounds required for liquid water and so a significant biosphere. Explanations for this range from anthropic principles in which the Earth was essentially lucky, to homeostatic Gaia in which the abiotic and biotic components of the Earth system self-organise into homeostatic states that are robust to a wide range of external perturbations. Here we present results from a conceptual model that demonstrates the emergence of homeostasis as a consequence of the feedback loop operating between life and its environment. Formulating the model in terms of Gaussian processes allows the development of novel computational methods in order to provide solutions. We find that the stability of this system will typically increase then remain constant with an increase in biological diversity and that the number of attractors within the phase space exponentially increases with the number of environmental variables while the probability of the system being in an attractor that lies within prescribed boundaries decreases approximately linearly. We argue that the cybernetic concept of rein control provides insights into how this model system, and potentially any system that is comprised of biological to environmental feedback loops, self-organises into homeostatic states.

See my related blog post for details.


Random rein control

An interesting article in PLOS Computational Biology explores the consequences of a system of random feedbacks:

The Emergence of Environmental Homeostasis in Complex Ecosystems


To get a handle on how this works, I replicated the model (see my library).

The basic mechanism of the model is rein control, in which multiple unidirectional forces on a system act together to yield bidirectional feedback control. By analogy, the reins on a horse can only pull in one direction, but with a pair of reins, it’s possible to turn both left and right.

In the model, there’s a large random array of reins, consisting of biotic feedbacks that occur near a particular system state. In the simple one-dimensional case, when you add a bunch of these up, you get a 1D vector field that looks like this:

If this looks familiar, there’s a reason. What’s happening along the E dimension is a lot like what happens along the time dimension in pink noise: at any given point, the sum of a lot of random impulses yields a wiggly net response, with a characteristic scale set by the time constant (pink noise) or the niche width of biotic components (rein control).

What this yields is an alternating series of unstable (tipping) points and stable equilibria. When the system is perturbed by some external force, the disturbance shifts the aggregate response, as below. Generally, a few stable points may disappear, but the large features of the landscape are preserved, so the system resists the disturbance.
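The mechanism above is easy to reproduce in a few lines. This is my own minimal sketch, not the paper’s code, and every parameter value (niche width, number of reins, forcing strength) is an arbitrary illustration: sum many random, localized, unidirectional feedbacks along a 1D environmental axis, then locate the stable equilibria before and after a constant external forcing.

```python
import numpy as np

rng = np.random.default_rng(42)

# Environmental axis
E = np.linspace(0.0, 1.0, 1000)

# Each "rein" is a unidirectional biotic feedback: a Gaussian bump of
# random sign and strength, centered on a random preferred state, with
# a fixed niche width. All parameter values are illustrative.
n_reins, width = 100, 0.05
centers = rng.uniform(0.0, 1.0, n_reins)
signs = rng.choice([-1.0, 1.0], n_reins)
strengths = rng.uniform(0.5, 1.0, n_reins)

def net_feedback(E, forcing=0.0):
    """Sum the reins into a net force dE/dt at each point,
    plus any constant external perturbation."""
    bumps = signs[:, None] * strengths[:, None] * np.exp(
        -(((E[None, :] - centers[:, None]) / width) ** 2)
    )
    return bumps.sum(axis=0) + forcing

f0 = net_feedback(E)               # unperturbed field
f1 = net_feedback(E, forcing=2.0)  # shifted by an external force

def stable_points(E, f):
    """Equilibria are zero crossings; stable ones cross from + to -."""
    s = np.sign(f)
    idx = np.where((s[:-1] > 0) & (s[1:] < 0))[0]
    return E[idx]

print(len(stable_points(E, f0)), "stable points before perturbation")
print(len(stable_points(E, f1)), "stable points after perturbation")
```

Plotting `f0` against `E` reproduces the wiggly net response described above; comparing the two counts shows how a perturbation erases some stable points while the broad features of the landscape persist.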

With a higher-dimensional environmental state, this creates convoluted basins of attraction:

This leads to a variety of conclusions about ecological stability, for which I encourage you to have a look at the full paper. It’s interesting to ponder the applicability and implications of this conceptual model for social systems.

GDP's … something?

While the government is shut down, it seems like a good time for a rousing round of Alan Atkisson’s GDP Song:

The shutdown means GDP measurements are on ice, which is not all bad, though we can expect a 15 basis point drag on GDP per week to include some real harm.

Shutting down our measurement systems strikes me as alarmingly close to turning off the instruments on the flight deck of a plane, due to a route dispute between the pilot and copilot.

Equity, equality and positive feedback

I’m reflecting on Deborah Rogers’ presentation on equity/equality at the Balaton Group meeting, concerning the apparent evolutionary drivers of the transition from a long human prehistory of egalitarian societies to today’s extreme inequity. A key point of terminology is that equity and equality are not quite the same thing – equality implies similar wealth or resource access, while equity implies something more like Rawlsian justice. But you can’t have one without the other, because inequality leads the haves to tilt the tables of justice against the have-nots.

This might not be a deliberate choice to exploit the masses. It could occur as an evolutionary consequence of the inability to predict the outcome of dynamically complex decisions.

I once described a complex theory of the emergence of inequality to Donella Meadows. I no longer remember the details, but perhaps it was the ancestor of this. Her answer was characteristically simple and insightful, to the effect of, “it doesn’t matter what the specific dynamics are, because the rich control the decisions, so the question boils down to how much inequ(al)ity the elite will tolerate.”

Evidence indicates that high inequality is bad for growth, so a possible irony is that policies that transfer wealth to the wealthy in the short run are bad for them in the long run, because growth eventually dominates allocation, even for the richest.

So, for me, the key question for society is, how much positive feedback should a civilization build into its social organization?

A bit of positive feedback can be helpful, if it creates a gradient that guides individuals who aren’t making the best decisions to imitate the habits of their more successful peers.

However, this probably requires a relatively low level of inequality. As soon as there’s stronger positive feedback, it’s likely that dysfunctional feedbacks take hold, as the wealthiest institutions use their market power to block innovation and good governance in service of maintaining their exalted positions.

I think the evidence that this occurs today is fairly simple to see. Look at the distribution of IQ or any other metric that might be an input to productivity in the economy. It’ll be roughly Normal (Gaussian). But the distributions of wealth and power are heavy-tailed (Zipf or double Laplace). That’s a pretty clear indication that there’s a lot of reinforcing feedback at work.
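A toy simulation illustrates the point (all parameter values are purely illustrative): give agents a normally distributed "ability", but let wealth compound multiplicatively, a reinforcing feedback, and the top-1% shares of the two distributions diverge dramatically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps = 100_000, 50

# Normally distributed "ability" inputs (illustrative scale).
ability = rng.normal(loc=100.0, scale=15.0, size=n)

# Wealth with reinforcing feedback: each period's return compounds on
# current wealth, so small advantages snowball multiplicatively.
wealth = np.ones(n)
for _ in range(steps):
    wealth *= np.exp(rng.normal(0.0, 0.2, size=n))

def top1_share(x):
    """Fraction of the total held by the top 1%."""
    x = np.sort(x)
    return x[-len(x) // 100 :].sum() / x.sum()

print(f"top 1% share of ability: {top1_share(ability):.3f}")
print(f"top 1% share of wealth:  {top1_share(wealth):.3f}")
```

The Gaussian inputs give the top 1% only slightly more than a 1% share, while the compounded wealth distribution is heavy-tailed, with the top 1% holding an order of magnitude more. That gap is the signature of reinforcing feedback.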

Fixed and Variable Limits

After I wrote my last post, it occurred to me that perhaps I should cut Ellis some slack. I still don’t think most people who ponder limits think of them as fixed. But, as a kind of shorthand, we sometimes talk about them that way. Consider my slides from the latest SD conference, in which I reflected on World Dynamics,

It would be easy to get the wrong impression here.

Of course, I was talking about World Dynamics, which doesn’t have an explicit technology stock – Forrester considered technology to be part of the capital accumulation process. That glosses over an important point, by fixing the ratios of economic activity to resource consumption and pollution. World3 shares this limitation, except in some specific technology experiments.

So, it’s really no wonder that, in 1973, it was hard to talk to economists, who were operating with exogenous technical progress (the Solow residual) and substitution along continuous production functions in mind.

Unlimited or exogenous technology doesn’t really make any more sense than no technology, so who’s right?

As I said last time, the answer boils down to whether technology proceeds faster than growth or not. That in turn depends on what you mean by “technology”. Narrowly, there’s fairly abundant evidence that the intensity of use of a variety of materials (per capita or per unit of GDP) is declining more slowly than growth. As a result, resource consumption (fossil fuels, metals, phosphorus, gravel, etc.) and persistent pollution (CO2, for example) are increasing steadily. By these metrics, sustainability requires a reversal in the relative magnitudes of the growth and technology trends.
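The arithmetic behind this is simple growth-rate accounting. Since throughput = output × intensity, growth rates roughly add. The numbers below are illustrative, not measured trends:

```python
# Throughput = output * intensity, so in growth rates:
#   g_throughput ≈ g_output + g_intensity
# Illustrative numbers: GDP growing 3%/yr while material intensity
# declines only 1.5%/yr still means rising material throughput.
g_output = 0.03       # GDP growth per year (assumed)
g_intensity = -0.015  # intensity trend per year (assumed)

g_throughput = (1 + g_output) * (1 + g_intensity) - 1
print(f"throughput growth: {g_throughput:+.2%} per year")

# Non-increasing throughput requires the intensity decline to at least
# match the growth rate:
required = 1 / (1 + g_output) - 1
print(f"required intensity trend: {required:+.2%} per year")
```

With these assumed rates, throughput still grows about 1.5% per year; reversing it would require the intensity decline to outpace growth, which is the reversal referred to above.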

But taking a broad view of technology, including product scope expansions and lifestyle, what does that mean? The consequences of these material trends don’t matter if we can upload ourselves into computers or escape to space fast enough. Space doesn’t look very exponential yet, and I haven’t really seen credible singularity metrics. This is really the problem with the Marchetti paper that Ellis links, describing a global carrying capacity of 1 trillion humans, with more room for nature than today, living in floating cities. The question we face is not, can we imagine some future global equilibrium with spectacular performance, but, can we get there from here?

Nriagu, Tales Told in Lead, Science

For the Romans, there was undoubtedly a more technologically advanced  future state (modern Europe), but they failed to realize it, because social and environmental feedbacks bit first. So, while technology was important then as now, the possibility of a high tech future state does not guarantee its achievement.

For Ellis, I think this means that he has to specify much more clearly what he means by future technology and adaptive capacity. Will we geoengineer our way out of climate constraints, for example? For proponents of limits, I think we need to be clearer in our communication about the technical aspects of limits.

For all sides of the debate, models need to improve. Many aspects of technology remain inadequately formulated, leaving many mysteries open. Why does the diminishing adoption time for new technologies not translate to increasing GDP growth? What do technical trends look like when measured by welfare indices rather than GDP? To what extent does social IT change the game, vs. serving as the icing on a classical material cake?

Are there limits?

Several people have pointed out Erle Ellis’ NYT opinion, Overpopulation Is Not the Problem:

MANY scientists believe that by transforming the earth’s natural landscapes, we are undermining the very life support systems that sustain us. Like bacteria in a petri dish, our exploding numbers are reaching the limits of a finite planet, with dire consequences. Disaster looms as humans exceed the earth’s natural carrying capacity. Clearly, this could not be sustainable.

This is nonsense.

There really is no such thing as a human carrying capacity. We are nothing at all like bacteria in a petri dish.

In part, this is just a rhetorical trick. When Ellis explains himself further, he says,

There are no environmental/physical limits to humanity.

Of course our planet has limits.

Clear as mud, right?

Here’s the petri dish view of humanity:

I don’t actually know anyone working on sustainability who operates under this exact mental model; it’s substantially a strawdog.

What Ellis has identified is technology.

Yet these claims demonstrate a profound misunderstanding of the ecology of human systems. The conditions that sustain humanity are not natural and never have been. Since prehistory, human populations have used technologies and engineered ecosystems to sustain populations well beyond the capabilities of unaltered “natural” ecosystems.

Well, duh.

The structure Ellis adds is essentially the green loops below:

Of course, the fact that the green structure exists does not mean that the blue structure does not exist. It just means that there are multiple causes competing for dominance in this system.

Ellis talks about improvements in adaptive capacity as if it’s coincident with the expansion of human activity. In one sense, that’s true, as having more agents to explore fitness landscapes increases the probability that some will survive. But that’s a Darwinian view that isn’t very promising for human welfare.

Ellis glosses over the fact that technology is a stock (red) – really a chain of stocks that impose long delays:

With this view, one must ask whether technology accumulates more quickly than the source/sink exhaustion driven by the growth of human activity. For early humans, this was evidently possible. But as they say in finance, past performance does not guarantee future returns. In spite of the fact that certain technical measures of progress are extremely rapid (Moore’s Law), it appears that aggregate technological progress (as measured by energy intensity or the Solow residual, for example) is fairly slow – at most a couple % per year. It hasn’t been fast enough to permit increasing welfare with decreasing material throughput.
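The race described above can be sketched as a minimal stock-and-flow simulation. This is my own illustration, not the World Dynamics or World3 formulation, and every parameter is hypothetical: a finite resource stock is drawn down by output, a slowly accumulating technology stock lowers resource intensity, and scarcity eventually chokes growth when technology accumulates more slowly than output.

```python
# Minimal Euler integration of the technology-vs-depletion race
# (all parameters hypothetical).
dt, T = 0.25, 200.0
steps = int(T / dt)

resource = 1000.0  # finite source stock
tech = 1.0         # technology stock (lowers resource intensity)
output = 1.0       # economic activity, grows while resources allow

g_output = 0.03    # underlying output growth rate
g_tech = 0.01      # technology accumulation rate (a couple % or less)

history = []
for _ in range(steps):
    intensity = 1.0 / tech            # resource use per unit output
    use = min(output * intensity,     # can't use what isn't there
              resource / dt)
    scarcity = resource / 1000.0      # growth is choked as the stock runs down
    output += dt * g_output * output * scarcity
    tech += dt * g_tech * tech        # slow, delayed accumulation
    resource -= dt * use
    history.append((resource, output))

final_resource, final_output = history[-1]
print(f"final resource: {final_resource:.1f}, final output: {final_output:.2f}")
```

With output growing faster than technology, the resource stock is largely exhausted within the simulated horizon and growth stalls; raising `g_tech` above `g_output` lets throughput per unit output fall fast enough for the resource to last. That is the whole question in miniature.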

Ellis half recognizes the problem,

Who knows what will be possible with the technologies of the future?

Somehow he’s certain, even in absence of recent precedent or knowledge of the particulars, that technology will outrace constraints.

To answer the question properly, one must really decompose technology into constituents that affect different transformations (resources to economic output, output to welfare, welfare to lifespan, etc.), and identify the social signals that will guide the development of technology and its embodiment in products and services. One should interpret technology broadly – it’s not just knowledge of physics and device blueprints; it’s also tech for organization of human activity embodied in social institutions.

When you look at things this way, I think it becomes obvious that the kinds of technical problems solved by neolithic societies and imperial China could be radically different from, and uninformative about, those we face today. Further, one should take the history of early civilizations, like the Mayans, as evidence that there are social multipliers that enable collapse even in the absence of definitive physical limits. That implies that, far from being irrelevant, brushes with carrying capacity can easily have severe welfare implications even when physical fundamentals are not binding in principle.

The fact that carrying capacity varies with technology does not free us from the fact that, for any given level of technology, it’s easier to deliver a given level of per capita welfare to fewer people rather than more. So the only loops that argue in favor of a larger population involve the links from population to increased learning and adaptive capacity (essentially Simon’s Ultimate Resource hypothesis). But Ellis doesn’t present any evidence that population growth has a causal effect on technology that outweighs its direct material implications. So, one might much better say, “overpopulation is not the only problem.”

Ultimately, I wonder why Ellis and many others are so eager to press the “no limits” narrative.

Most people I know who believe that limits are relevant are essentially advocating internalizing the externalities that arise from our failure to recognize limits, to guide market allocations, technology and preferences in a direction that avoids constraints. Ellis seems to be asking for an emphasis on the same outcome: technology, or adaptive capacity, to evade limits. It’s hard to imagine how one would get such technology without signals that promote its development and adoption. So, in a sense, both camps are pursuing compatible policy agendas. The difference is that proclaiming “no limits” makes it a lot harder to make the case for internalizing externalities. If we aren’t willing to make our desire to avoid limits explicit in market signals and social institutions, then we’re relying on luck to deliver the tech we need. That strikes me as a spectacular failure to adopt one of the major technical breakthroughs of our time, the ability to understand earth systems.

Update: Gene Bellinger replicated this in InsightMaker. Replication is a great way to force yourself to think deeply about a model, and often reveals insights and mistakes you’d never get otherwise (short of building the model from scratch yourself). True to form, Gene found issues. In the last diagram, there should be a link from population to output, and maybe consuming should be driven by output rather than capital, as it’s the use, not the equipment, that does the consuming.

Pindyck on Integrated Assessment Models

Economist Robert Pindyck takes a dim view of the state of integrated assessment modeling:

Climate Change Policy: What Do the Models Tell Us?

Robert S. Pindyck

NBER Working Paper No. 19244

Issued in July 2013

Very little. A plethora of integrated assessment models (IAMs) have been constructed and used to estimate the social cost of carbon (SCC) and evaluate alternative abatement policies. These models have crucial flaws that make them close to useless as tools for policy analysis: certain inputs (e.g. the discount rate) are arbitrary, but have huge effects on the SCC estimates the models produce; the models’ descriptions of the impact of climate change are completely ad hoc, with no theoretical or empirical foundation; and the models can tell us nothing about the most important driver of the SCC, the possibility of a catastrophic climate outcome. IAM-based analyses of climate policy create a perception of knowledge and precision, but that perception is illusory and misleading.

Freepers seem to think that this means the whole SCC enterprise is GIGO. But this is not a case where uncertainty is your friend. Bear in mind that the deficiencies Pindyck discusses, discounting welfare and ignoring extreme outcomes, create a one-sided bias toward an SCC that is too low. Zero (the de facto internalized SCC in most places) is one number that’s virtually certain to be wrong.

ISDC 2013 Capen quiz results

Participants in my Vensim mini-course at the 2013 System Dynamics Conference outperformed their colleagues from 2012 on the Capen Quiz (mean of 5 right vs. 4 last year).

5 right is well above the typical performance of the public, but sadly this means that few among us are destined to be CEOs, who are often wildly overconfident (console yourself – abject failure on the quiz can make you a titan of industry).

Take the quiz and report back!

Tasty Menu

From the WPI online graduate program and courses in system dynamics:

Truly a fine lineup!