The IAMs that ate the poor

Discounting has long been controversial in climate integrated assessment models (IAMs), with prevailing assumptions less than favorable to future generations.

The evidence in favor of aggressive discounting has generally been macro in nature: observed market returns appear to be consistent with discounting of welfare, so that’s what we should do. To swallow this, you have to believe that markets faithfully reveal preferences and that only on-market returns count. Even then, observed returns confound pure time preference with inequality aversion. Given that this perspective is contradicted by micro behavior, i.e. by actually asking people what they want, it’s hard to see any reason other than convenience for its upper hand in decision making. Ultimately, the situation is neatly self-fulfilling: we observe inflated returns consistent with myopia, so we set myopic hurdles for social decisions, which in turn yield inflated short-term returns.
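The confounding is easy to state precisely. The standard Ramsey rule (a textbook result, shown here in its simplest deterministic form) decomposes the interest rate as

```latex
r = \rho + \eta g
```

where r is the real return, ρ the pure rate of time preference, η the elasticity of marginal utility (which doubles as inequality aversion), and g per-capita consumption growth. An observed r ≈ 5% with g ≈ 2% is equally consistent with myopia (ρ = 3%, η = 1) and with near-zero time preference plus strong inequality aversion (ρ ≈ 0.1%, η ≈ 2.45). Market returns alone cannot distinguish the two, though they imply very different climate policies.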

It gets worse.

Back in 1997, I attended a talk on an early version of the RICE model, a regional version of DICE. In an optimization model with uniform utility functions, there’s an immediate drive to level incomes across all the regions, because diminishing marginal utility makes an extra dollar worth more to a poor region than to a rich one. That’s obviously contrary to the observed global income distribution. A “solution” is to use Negishi weights, which weight each region’s welfare in proportion to the inverse of the marginal utility of consumption there. That prevents income leveling, by explicitly assuming that the rich are rich because they deserve it.

This is a reasonable practical choice if you don’t think you can do anything about income distribution, and you’re not worried that it confounds equity with human capital differences. But when you use the same weights to identify an optimal emissions trajectory, you’re baking the inequity of the current market order into climate policy. In effect, a person in a developed country counts for roughly 10x as much as a person in a developing country.
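A minimal sketch of the mechanics (my own illustration with hypothetical consumption levels, not RICE’s actual calibration): with CRRA utility, the Negishi weight is the inverse of marginal utility, so under log utility a region with 10x the per-capita consumption gets 10x the welfare weight.

```python
import numpy as np

# Hypothetical per-capita consumption: a poor region at 1 unit, a rich one at 10
consumption = np.array([1.0, 10.0])

def marginal_utility(c, eta=1.0):
    """u'(c) for CRRA utility u(c) = c**(1-eta)/(1-eta); eta=1 is log utility."""
    return c ** -eta

# Negishi weights: inverse marginal utility, normalized to sum to 1
weights = 1.0 / marginal_utility(consumption)
weights /= weights.sum()
print(weights)  # [0.09 0.91] -- the rich region gets ~10x the weight
```

With those weights, the optimizer gains nothing by transferring consumption from rich to poor, so the status quo distribution is locked in.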

Way back when, I didn’t have the words at hand to gracefully ask why it was a good idea to model things this way, but I sure wish I’d had the courage to forge ahead anyway.

The silly thing is that there’s no need to make such inequitable assumptions to model this problem. Elizabeth Stanton analyzes Negishi weighting and suggests alternatives. Richard Tol explored alternative frameworks some time before. And there are still more options, I think.

In the intertemporal optimization framework, one could treat the situation as a game between self-interested regions (with Negishi weights) and an equitable regulator (with equal weights to welfare). In that setting, mitigation by the rich might look like a form of foreign aid that couldn’t be squandered by the elites of poor regions, and thus I would expect deep emissions cuts.
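To make the game concrete, here’s a minimal sketch (my own construction with hypothetical numbers, not an implemented model) of the two objective functions it would juxtapose:

```python
import numpy as np

def welfare(consumption, weights):
    """Weighted social welfare with log utility (CRRA, eta = 1)."""
    return np.dot(weights, np.log(consumption))

c = np.array([1.0, 10.0])              # hypothetical regional per-capita consumption
negishi = c / c.sum()                  # inverse-marginal-utility weights under log utility
equal = np.full(len(c), 1.0 / len(c))  # the equitable regulator's equal weights

# A marginal transfer from rich to poor raises welfare(c, equal) but leaves
# welfare(c, negishi) unchanged at the margin -- the regions' objective
# sanctions the status quo, while the regulator's objective rewards equity.
print(welfare(c, negishi), welfare(c, equal))
```

The interesting dynamics would come from letting regions choose mitigation to maximize the Negishi objective, while the regulator sets policy instruments to maximize the equal-weights objective.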

Better still, dump notions of equilibrium and explore the problem with behavioral models, reserving optimization for policy analysis with fair objectives.

Thanks to Ramon Bueno for passing along the Stanton article.

There's just enough time

In response to the question, “is there still time for a transition to sustainability,” John Sterman cited Donella Meadows:

The truth of the matter is that no one knows.

We have said many times that the world faces not a preordained future, but a choice. The choice is between different mental models, which lead logically to different scenarios. One mental model says that this world for all practical purposes has no limits. Choosing that mental model will encourage extractive business as usual and take the human economy even farther beyond the limits. The result will be collapse.

Another mental model says that the limits are real and close, and that there is not enough time, and that people cannot be moderate or responsible or compassionate. At least not in time. That model is self-fulfilling. If the world’s people choose to believe it, they will be proven right. The result will be collapse.

A third mental model says that the limits are real and close and in some cases below our current levels of throughput. But there is just enough time, with no time to waste. There is just enough energy, enough material, enough money, enough environmental resilience, and enough human virtue to bring about a planned reduction in the ecological footprint of humankind: a sustainability revolution to a much better world for the vast majority.

That third scenario might very well be wrong. But the evidence we have seen, from world data to global computer models, suggests that it could conceivably be made right. There is no way of knowing for sure, other than to try it.

Global modeling & C-ROADS

At the 2013 ISDC, John Sterman, Drew Jones and I presented a plenary talk on Global Models from Malthus to C-ROADS and Beyond. Our slides are in SDS 2013 Global Models Sterman Fid Jones.pdf and my middle section, annotated, is in SDS 2013 Global+ v12 TF excerpt.pdf.

There wasn’t actually much time to get into Malthus, but one thing struck me as I was reading his Essay on the Principle of Population. He identified the debate over limits as a paradigm conflict:

It has been said that the great question is now at issue, whether man shall henceforth start forwards with accelerated velocity towards illimitable, and hitherto unconceived improvement, or be condemned to a perpetual oscillation between happiness and misery, and after every effort remain still at an immeasurable distance from the wished-for goal.

Yet, anxiously as every friend of mankind must look forwards to the termination of this painful suspense, and eagerly as the inquiring mind would hail every ray of light that might assist its view into futurity, it is much to be lamented that the writers on each side of this momentous question still keep far aloof from each other. Their mutual arguments do not meet with a candid examination. The question is not brought to rest on fewer points, and even in theory scarcely seems to be approaching to a decision.

The advocate for the present order of things is apt to treat the sect of speculative philosophers either as a set of artful and designing knaves who preach up ardent benevolence and draw captivating pictures of a happier state of society only the better to enable them to destroy the present establishments and to forward their own deep-laid schemes of ambition, or as wild and mad-headed enthusiasts whose silly speculations and absurd paradoxes are not worthy the attention of any reasonable man.

The advocate for the perfectibility of man, and of society, retorts on the defender of establishments a more than equal contempt. He brands him as the slave of the most miserable and narrow prejudices; or as the defender of the abuses of civil society only because he profits by them. He paints him either as a character who prostitutes his understanding to his interest, or as one whose powers of mind are not of a size to grasp any thing great and noble, who cannot see above five yards before him, and who must therefore be utterly unable to take in the views of the enlightened benefactor of mankind.

In this unamicable contest the cause of truth cannot but suffer. The really good arguments on each side of the question are not allowed to have their proper weight. Each pursues his own theory, little solicitous to correct or improve it by an attention to what is advanced by his opponents.

Not much has changed in 200 years.

Much of the criticism of Limits to Growth remains completely spurious, and even its serious critics mostly failed to recognize that Limits discussed growth in material rather than economic/technological terms. Still, I think the SD field missed some opportunities for learning and constructive dialog amid all the furor.

For example, one of the bitterest critics of Limits, William Nordhaus, wrote in 1974,

Economists have for the most part ridiculed the new view of growth, arguing that it is merely Chicken Little Run Wild. I think that the new view of growth must be taken seriously and analyzed carefully.

And he has, at least through the lens of the economic paradigm.

There are also legitimate technical critiques of the World3 model, as in Wil Thissen’s thesis, later published in IEEE Transactions, that have never been properly integrated into global modeling.

Through this failure to communicate, we find ourselves forty years down the road, without a sufficiently improved global model that permits exploration of both sides of the debate. Do exponential growth, finite limits, delays, and erosion of carrying capacity yield persistent overshoot and collapse, or will technology take care of the problem by itself?

Model quality: the missing link

A number of developments are making model quality control increasingly crucial.

  • Models are playing a wider role in policy debates. Efforts like the Climate CoLab are making models accessible to wide audiences for interactive use.
  • The use of automated stochastic optimization and exploratory modeling and analysis (EMA) is likely to take models into parts of their parameter spaces that the modeler herself has not explored.
  • Standards like SMILE/XMILE will make models and model components more reusable and shareable.

I think this could all come to a bad end, in which priesthoods are paid to develop competing models that are incomprehensible to the general public, thus reducing modeling to a sophisticated form of propaganda.

Fortunately, some elements of an antidote to this dystopia are at hand, including documentation standards and tools, and languages for expressing Reality Checks on model behavior. But I think we need a lot more. For example,

  • Standards could include metadata standards, so that model components are self-documenting in ways that make it possible for users to easily discover their limitations.
  • EMA tools could be directed towards discovery of model problems before policy analysis commences, as in the sketch after this list.
  • Tools that present models online could expose their innards as well as results.
  • Languages are needed for meta-reality checks that describe and test higher-level assumptions, like perfect foresight (or the lack thereof).
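As a sketch of how EMA could serve quality control (my own toy harness, not an existing tool): sample the parameter space broadly and flag runs that violate stated reality checks, before anyone uses the model for policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(growth_rate, capacity, horizon=100):
    """Toy stock with logistic growth -- a stand-in for a real simulation model."""
    x = np.empty(horizon)
    x[0] = 1.0
    for t in range(1, horizon):
        x[t] = x[t-1] + growth_rate * x[t-1] * (1 - x[t-1] / capacity)
    return x

def reality_checks(trajectory, capacity):
    """Behavioral constraints the model should satisfy for any plausible inputs."""
    return {
        "non-negative stock": (trajectory >= 0).all(),
        "bounded by capacity": (trajectory <= 1.05 * capacity).all(),
    }

# Monte Carlo sweep over plausible ranges -- pushing the model into corners
# the modeler may not have tried by hand
for _ in range(1000):
    g, K = rng.uniform(0.0, 3.0), rng.uniform(10.0, 1000.0)
    failures = [name for name, ok in reality_checks(model(g, K), K).items() if not ok]
    if failures:
        print(f"growth_rate={g:.2f}, capacity={K:.0f} violates: {failures}")
```

Here the sweep discovers that large growth rates make the naive integration overshoot its own capacity constraint, exactly the kind of numerical artifact you’d want to find before a policy run.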

Perhaps most importantly, model quality needs to become a pervasive part of the culture of model building and consumption in all disciplines.

Wonderland

The Wonderland model is by Sanderson et al.; see Alexandra Milik, Alexia Prskawetz, Gustav Feichtinger, and Warren C. Sanderson, “Slow-fast Dynamics in Wonderland,” Environmental Modeling and Assessment 1 (1996) 3-17.

Here’s an excerpt from my 1998 critique of this model:

The Temperature-System Dynamics feedback

The recurrent heat waves coincident with system dynamics conferences have led me to some new insights about the co-evolution of systems thinking and climate. I’m hoping that I can get a last minute plenary slot for this blockbuster finding.

A priori, it should be obvious that temperature and system dynamics are linked. Here’s my dynamic hypothesis:

This hardly requires proof, but nevertheless data fully confirm the relationships.

Most obviously, the SD conference always occurs in July, the hottest month. The 2011 conference in Washington DC was the hottest July ever in that locale.

In addition, the timing of major works in SD coincides with warm years near Boston, the birthplace of the field.

I think we can consider this hypothesis definitively proven. All that remains is to put policies in place to ensure the continued health of SD, in order to prevent a global climatic catastrophe.


Population Growth Up

According to Worldwatch, there’s been an upward revision in UN population projections. As things now stand, the end-of-century tally settles out just short of 11 billion (medium forecast of 10.9 billion, with a range of 6.8 to 16.6).

The change is due to higher than expected fertility:

Compared to the UN’s previous assessment of world population trends, the new projected total population is higher, particularly after 2075. Part of the reason is that current fertility levels have been adjusted upward in a number of countries as new information has become available. In 15 high-fertility countries of sub-Saharan Africa, the estimated average number of children per woman has been adjusted upwards by more than 5 per cent.

The projections are essentially open loop with respect to major environmental or other driving forces, so the scenario range doesn’t reflect full uncertainty. Interestingly, the UN varies fertility but not mortality in projections. Small differences in fertility make big differences in population:

The “high-variant” projection, for example, which assumes an extra half of a child per woman (on average) than the medium variant, implies a world population of 10.9 billion in 2050. The “low-variant” projection, where women, on average, have half a child less than under the medium variant, would produce a population of 8.3 billion in 2050. Thus, a constant difference of only half a child above or below the medium variant would result in a global population of around 1.3 billion more or less in 2050 compared to the medium-variant forecast.
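To see why half a child compounds so strongly, here’s a deliberately crude sketch (my own toy, not the UN’s cohort-component method): fertility relative to replacement sets a per-generation growth factor that compounds over decades.

```python
# Toy projection: population grows by roughly TFR/replacement per generation.
# This ignores age structure and population momentum, so it will not reproduce
# the UN's numbers; it only illustrates how fertility differences compound.
def project(pop0, tfr, years, replacement=2.1, generation=30.0):
    annual_factor = (tfr / replacement) ** (1.0 / generation)
    return pop0 * annual_factor ** years

base = 7.2e9  # approximate 2013 world population
for label, tfr in [("low", 2.0), ("medium", 2.5), ("high", 3.0)]:
    print(label, f"{project(base, tfr, 37) / 1e9:.1f} billion in 2050")
```

Because the fertility difference here applies immediately, while the UN phases its variants in and momentum buffers the response, the toy exaggerates the spread; but the compounding mechanism behind the 1.3 billion swing is the same.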

There’s a nice backgrounder on population projections, by Brian O’Neill et al., in Demographic Research. See Fig. 6 for a comparison of projections.

Defense of the 1%?

Digitopoly has an interesting take on Greg Mankiw’s Defending the 1%.

You should go read the sources, but Mankiw’s basic scenario is,

Imagine a society with perfect economic equality. … Then, one day, this egalitarian utopia is disturbed by an entrepreneur with an idea for a new product. Think of the entrepreneur as Steve Jobs as he develops the iPod, …. When the entrepreneur’s product is introduced, everyone in society wants to buy it. They each part with, say, $100. The transaction is a voluntary exchange, so it must make both the buyer and the seller better off. But because there are many buyers and only one seller, the distribution of economic well-being is now vastly unequal.

Mankiw goes on to mention but dismiss other drivers, like rent seeking and monopoly. Krugman rejoins with a strong critique, and Digitopoly raises some interesting complications to the innovation policy arguments.

I think the thought experiment, framing the problem as a matter of innovation policy, oversimplifies and misses major drivers of what’s happening. As I wrote in Fortress USA,

The drivers of rising inequity in the US seem fairly simple. With globalization, capital has become mobile while labor remains tied to geography. So, capital investment flees high wage countries (US) and jobs follow. Asset income goes up, because capital is leveraged by cheaper labor and has good bargaining power among hungry host countries. There’s downward pressure on rich world wages, because with less capital per capita employed, the marginal productivity of labor is lower.

It’s not all bad for the rich world working class, because cheaper goods (WalMart) offset wage losses to some degree. If asset and wage income were uniformly distributed, there might even be a net benefit.

However, asset income and wages aren’t uniformly distributed, so income disparity goes up. Pre-globalization, this wasn’t so noticeable, because there was an implicit deal, in which wage earners knew that, even if they didn’t own all the capital in their country, at least they’d be the beneficiaries of it in some sense through employment and trickle down. Free trade and mobile capital turns the deal into a divorce, which puts a sharp point on questions of property rights allocations that were never quite fair, and sows the seeds of future discontent among the losers.
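The wage mechanism in that quote is just the textbook marginal product of labor. A minimal Cobb-Douglas illustration (my own numbers, purely for shape):

```python
# Cobb-Douglas production Y = A * K**alpha * L**(1-alpha);
# the competitive wage is the marginal product of labor,
# dY/dL = (1 - alpha) * A * (K/L)**alpha.
alpha = 0.3   # capital share (typical textbook value)
A = 1.0       # total factor productivity (arbitrary units)

def wage(k_per_worker):
    return (1 - alpha) * A * k_per_worker ** alpha

print(wage(10.0))  # ~1.40: pre-globalization capital per worker (assumed)
print(wage(7.0))   # ~1.26: after 30% of domestic capital relocates abroad
```

A 30% drop in capital per worker cuts the wage by about 10%, while the return to the relocated capital rises where labor is cheaper.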

In addition to disparities in the fate of labor vs. capital, it’s hard not to see abundant rent seeking in the consolidation of firms and the pervasive role of money in government.

The simple, pure economic thought experiment often brings great insight. But I think this illustrates why models often have to get big before they can get small. Total analytic knowledge of a small model is fairly useless, unless that model encompasses the right structure. It’s hard, a priori, to decide what’s the right structure to include, without distilling that insight from a more complex model.

Do social negative feedbacks achieve smooth adjustment?

I’m rereading some of the history of global modeling, in preparation for the SD conference.

From Models of Doom, the Sussex critique of Limits to Growth:

Marie Jahoda, Chapter 14, Postscript on Social Change

The point is … to highlight a conception of man in world dynamics which seems to have led in all areas considered to an underestimation of negative feedback loops that bend the imaginary exponential growth curves to gentler slopes than “overshoot and collapse”. … Man’s fate is shaped not only by what happens to him but also by what he does, and he acts not just when faced with catastrophe but daily and continuously.

Meadows, Meadows, Randers & Behrens, A Response to Sussex:

The Sussex group confuses the numerical properties of our preliminary World models with the basic dynamic attributes of the world system described in the Limits to Growth. We suggest that exponential growth, physical limits, long adaptive delays, and inherent instability are obvious, general attributes of the present global system.

Who’s right?

I think we could all agree that the US housing market is vastly simpler than the world. It lies within a single political jurisdiction. Most of its value is private rather than a public good. It is fairly well observed, dense with negative feedbacks like price and supply/demand balance, and unfolds on a time scale that is meaningful to individuals. Delays like the pipeline of houses under construction are fairly salient. Do these benign properties “bend the imaginary exponential growth curves to gentler slopes than ‘overshoot and collapse’”?
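The housing bubble suggests an answer. As a minimal sketch of the Meadows et al. claim (my own toy, not World3): even a single negative feedback loop produces overshoot once perception of the limit is delayed.

```python
# Toy delayed-logistic model: growth is checked by a *perceived* load that
# lags the true state, like a pipeline of houses under construction.
dt, horizon = 0.25, 300.0
capacity, growth_rate, delay = 100.0, 0.1, 20.0  # limit, 1/yr, years

stock, perceived, peak, t = 1.0, 1.0, 0.0, 0.0
while t < horizon:
    perceived += (stock - perceived) / delay * dt         # first-order delay
    stock += growth_rate * stock * (1 - perceived / capacity) * dt
    peak = max(peak, stock)
    t += dt

print(f"peak = {peak:.0f} vs capacity = {capacity:.0f}")  # overshoots the limit
```

With the delay removed, the same loop settles smoothly at the limit; with erosion of the capacity stock added, the oscillation becomes collapse. So the Sussex dispute is less about whether negative feedbacks exist than about whether their delays are short relative to growth.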