Hell freezes over: Fox to go carbon neutral

I keep checking, but today is not April 1st:

In the Fox News universe, the world is definitely not warming. Quite the opposite: Climate change is “bunk,” a spectacular hoax perpetrated on the rest of us by a cabal of corrupt scientists. But while embracing climate skepticism may be good for ratings, the execs at Fox News’ parent company, News Corp., don’t see it as good for the long-term bottom line. By the end of this year, News Corp. aims to go carbon neutral — meaning that the home of über-global warming denialists like Sean Hannity and Glenn Beck may soon be one of the greener multinational corporations around.

News Corp. announced its plan in May 2007 with a groundbreaking speech from chairman Rupert Murdoch. “Climate change poses clear, catastrophic threats,” declared Murdoch. “We may not agree on the extent, but we certainly can’t afford the risk of inaction.” Formerly skeptical about global warming, Murdoch was reportedly converted by a presentation from Al Gore — whom Fox News commentators have described as “nuts” and “off his lithium” — and by his green-leaning son James, who is expected to inherit his business empire.

But Murdoch wasn’t acting out of altruism. For News Corp., he said, the move was “simply good business.” (Fox News barely mentioned the boss’ remarks.)

Murdoch’s logic was that higher energy costs are inevitable, given coming carbon regulations and dwindling supplies of conventional fuels such as oil. So why not get ahead of the game? “Whatever [going carbon neutral] costs will be minimal compared to our overall revenues,” the media mogul has remarked, “and we’ll get that back many times over.”

Read More at Wired

Writing a good system dynamics paper II

It’s SD conference paper review time again. Last year I took notes while reviewing, in an attempt to capture the attributes of a good paper. A few additional thoughts:

  • No model is perfect, but it pays to ask yourself whether your model will stand up to critique.
  • Model-data comparison is extremely valuable and too seldom done, but trivial tests are not interesting. Fit to data is a weak test of model validity; it’s often necessary, but never sufficient as a measure of quality. I’d much rather see the response of a model to a step input or an extreme-conditions test than a model-data comparison, because it’s too easy to match the model to the data with exogenous inputs. Unless I see a discussion of a multi-faceted approach to validation, I get suspicious. Consider how your model meets the following criteria (a sketch of the step and extreme-conditions tests follows this list):
    • Do decision rules use information actually available to real agents in the system?
    • Would real decision makers agree with the decision rules attributed to them?
    • Does the model conserve energy, mass, people, money, and other physical quantities?
    • What happens to the behavior in extreme conditions?
    • Do physical quantities always have nonnegative values?
    • Do units balance?
  • If you have time series output, show it with graphs – it takes a lot of work to “see” the behavior in tables. On the other hand, tables can be great for other comparisons of outcomes.
  • If all of your graphs show constant values, linear increases (ramps), or exponentials, my eyes glaze over, unless you can make a compelling case that your model world is really that simple, or that people fail to appreciate the implications of those behaviors.
  • Relate behavior to structure. I don’t care what happens in scenarios unless I know why it happens. One effective way to do this is to run tests with and without certain feedback loops or sectors of the model active.
  • Discuss what lies beyond the boundary of your model. What did you leave out and why? How does this limit the applicability of the results?
  • If you explore a variety of scenarios with your model (as you should), introduce the discussion with some motivation: why are the particular scenarios important, realistic, and worth testing?
  • Take some time to clean up your model diagrams. Eliminate arrows that cross unnecessarily. Hide unimportant parameters. Use clear variable names.
  • It’s easiest to understand behavior in deterministic experiments, so I like to see those. But the real world is noisy and uncertain, so it’s also nice to see experiments with stochastic variation or Monte Carlo exploration of the parameter space (also illustrated in the sketch below). For example, there are typically many papers on water policy in the ENV thread. Water availability is contingent on precipitation, which is variable on many time scales. A system’s response to variation or extremes of precipitation is at least as important as its mean behavior.
  • Modeling aids understanding, which is intrinsically valuable, but usually the real endpoint of a modeling exercise is a decision or policy change. Sometimes, it’s enough to use the model to characterize a problem, after which the solution is obvious. More often, though, the model should be used to develop and test decision rules that solve the problem you set out to conquer. Show me some alternative strategies, discuss their limitations and advantages, and describe how they might be implemented in the real world.
  • If you say that an SD model can’t predict or forecast, be very careful. SD practitioners recognized early on that forecasting was often a fool’s errand, and that insight into behavior modes for design of robust policies was a worthier goal. However, SD is generally about building good dynamic models with appropriate representations of behavior and so forth, and good models are a prerequisite to good predictions. An SD model that’s well calibrated can forecast as well as any other method, and will likely perform better out of sample than pure statistical approaches. More importantly, experimentation with the model will reveal the limits of prediction.
  • It never hurts to look at your paper the way a reviewer will look at it.
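
To make the step-input, extreme-conditions, and Monte Carlo points concrete, here’s a minimal sketch using a toy two-stock inventory model of my own construction (the structure, parameter values, and the uniform distribution for the adjustment time are illustrative assumptions, not anything from a reviewed paper):

```python
import numpy as np

DT = 0.125                # time step (weeks)
T = np.arange(0, 40, DT)
COVERAGE = 4.0            # desired inventory coverage (weeks of demand)

def simulate(demand, adj_time=2.0, prod_delay=1.0):
    """Second-order inventory management: production chases an indicated
    rate (demand plus an inventory correction) with a first-order lag;
    shipments are limited by the stock on hand."""
    inventory = np.zeros_like(T)
    production = np.zeros_like(T)
    inventory[0] = demand[0] * COVERAGE
    production[0] = demand[0]
    for i in range(1, len(T)):
        desired = demand[i - 1] * COVERAGE
        indicated = max(0.0, demand[i - 1] + (desired - inventory[i - 1]) / adj_time)
        production[i] = production[i - 1] + DT * (indicated - production[i - 1]) / prod_delay
        shipments = min(demand[i - 1], inventory[i - 1] / DT)  # can't ship from an empty stock
        inventory[i] = inventory[i - 1] + DT * (production[i - 1] - shipments)
    return inventory

# Step test: demand jumps 20% at t = 10, so the response reveals the
# adjustment dynamics instead of a flat line.
step = np.where(T < 10, 100.0, 120.0)
# Extreme-conditions test: demand collapses to zero; stocks and flows
# must remain nonnegative.
extreme = np.where(T < 10, 100.0, 0.0)
for name, demand in [("step", step), ("extreme", extreme)]:
    assert (simulate(demand) >= 0).all(), f"negative inventory in {name} test"

# Monte Carlo exploration: sample the uncertain adjustment time and see
# how much the peak of the step response varies across the ensemble.
rng = np.random.default_rng(0)
peaks = [simulate(step, adj_time=rng.uniform(0.5, 8.0)).max() for _ in range(200)]
print(f"peak inventory across ensemble: {min(peaks):.0f} to {max(peaks):.0f} units")
```

The asserts encode the extreme-conditions expectations (nonnegative stocks and flows), and the ensemble shows how sensitive the transient is to a single uncertain parameter. The same harness generalizes to any model you can drive programmatically.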

WORLD3-03

This is the latest instance of the WORLD3 model, as in Limits to Growth: The 30-Year Update, from the standard Vensim distribution. It’s not much changed from the 1972 original used in Limits to Growth, which is documented in great detail in Dynamics of Growth in a Finite World (half off at Pegasus as of this moment).

There have been many critiques of this model, including the fairly famous Models of Doom. Many are ideological screeds that miss the point, and many modern critics do not appear to have read the book. The only good, comprehensive technical critique of World3 that I’m aware of is Wil Thissen’s thesis, Investigations into the Club of Rome’s WORLD3 model: lessons for understanding complicated models (Eindhoven, 1978). Portions appeared in IEEE Transactions.

My take on the more sensible critiques is that they show two things:

  • WORLD3 is an imperfect expression of the underlying ideas in Limits to Growth.
  • WORLD3 doesn’t have the policy space to capture competing viewpoints about the global situation; in particular it does not represent markets and technology as many see them.

It doesn’t necessarily follow from those facts that the underlying ideas of Limits are wrong. We still have to grapple with the consequences of exponential growth confronting finite planetary boundaries with long perception and action delays.

I’ve written some other material on limits here.

Files: WORLD3-03 (zipped archive of Vensim models and constant changes)

Another look at inadequate Copenhagen pledges

In a Nature opinion piece, Joeri Rogelj and colleagues argue that the Copenhagen Accord pledges are paltry:

Current national emissions targets can’t limit global warming to 2 °C, calculate Joeri Rogelj, Malte Meinshausen and colleagues — they might even lock the world into exceeding 3 °C warming.

  • Nations will probably meet only the lower ends of their emissions pledges in the absence of a binding international agreement
  • Nations can bank an estimated 12 gigatonnes of CO2 equivalent (GtCO2-eq) in surplus allowances for use after 2012
  • Land-use rules are likely to result in further allowance increases of 0.5 GtCO2-eq per year
  • Global emissions in 2020 could thus be up to 20% higher than today
  • Current pledges mean a greater than 50% chance that warming will exceed 3°C by 2100
  • If nations agree to halve emissions by 2050, there is still a 50% chance that warming will exceed 2°C and will almost certainly exceed 1.5°C

Via Nature’s Climate Feedback, Copenhagen Accord – missing the mark.

Computer models running the EU? Eruptions, models, and clueless reporting

The EU airspace shutdown provides yet another example of ignorance of the role of models in policy:

Computer Models Ruining EU?

Flawed computer models may have exaggerated the effects of an Icelandic volcano eruption that has grounded tens of thousands of flights, stranded hundreds of thousands of passengers and cost businesses hundreds of millions of euros.

The computer models that guided decisions to impose a no-fly zone across most of Europe in recent days are based on incomplete science and limited data, according to European officials. As a result, they may have over-stated the risks to the public, needlessly grounding flights and damaging businesses.

“It is a black box in certain areas,” Matthias Ruete, the EU’s director-general for mobility and transport, said on Monday, noting that many of the assumptions in the computer models were not backed by scientific evidence. European authorities were not sure about scientific questions, such as what concentration of ash was hazardous for jet engines, or at what rate ash fell from the sky, Mr. Ruete said. “It’s one of the elements where, as far as I know, we’re not quite clear about it,” he admitted.

He also noted that early results of the 40-odd test flights conducted over the weekend by European airlines, such as KLM and Air France, suggested that the risk was less than the computer models had indicated. – Financial Times

Other venues picked up similar stories:

Also under scrutiny last night was the role played by an eight-man team at the Volcanic Ash Advisory Centre at Britain’s Meteorological Office. The European Commission said the unit started the chain of events that led to the unprecedented airspace shutdown based on a computer model rather than actual scientific data. – National Post

These reports miss a number of crucial points:

  • The decision to shut down the airspace was political, not scientific. Surely the Met Office team had input, but not the final word, and model results were only one input to the decision.
  • The distinction between computer models and “actual scientific data” is false. All measurements involve some kind of implicit model, required to interpret the result. The 40 test flights are meaningless without some statistical interpretation of sample size and so forth (see the sketch after this list).
  • It’s not uncommon for models to demonstrate that data are wrong or misinterpreted.
  • The fact that every relationship or parameter in a model can’t be backed up with a particular measurement does not mean that the model is unscientific.
    • Numerical measurements are not the only valid source of data; there are also laws of physics, and a subject matter expert’s guess is likely to be better than a politician’s.
    • Calibration of the aggregate result of a model provides indirect measurement of uncertain components.
    • Feedback structure may render some parameters insensitive and therefore unimportant.
  • Good decisions sometimes lead to bad outcomes.
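
On the test-flight point above, here’s a minimal sketch of the missing statistical interpretation (my own arithmetic; only the 40-flight figure comes from the story):

```python
def upper_bound_zero_failures(n_trials, confidence=0.95):
    """One-sided upper confidence bound on a failure probability when
    0 failures are observed in n_trials Bernoulli trials: solve
    (1 - p)**n_trials = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

n = 40  # the test flights mentioned in the FT story
p_max = upper_bound_zero_failures(n)
print(f"0 incidents in {n} flights -> 95% upper bound on per-flight "
      f"incident probability: {p_max:.1%}")  # about 7.2%
# The quick "rule of three" approximation, 3/n, gives 7.5% -- hardly
# proof that flying through the ash cloud was safe.
```

Forty clean flights are consistent with a per-flight incident risk of several percent, which is exactly why raw test results can’t settle the question without some model of exposure and detection.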

The reporters, and maybe also the director-general (covering his you-know-what), have neatly shifted blame, turning a problem of decision making under uncertainty into an anti-science witch hunt. What alternative to models do they suggest? Intuition? Prayer? Models are just a way of integrating knowledge in a formal, testable, shareable way. Sure, there are bad models, but unlike other bad ideas, their problems are at least easy to identify.

Thanks to Jack Dirmann of Green Technology for the tip.

Payments for Environmental Services

From ModelWiki


Model Name: payments, penalties, and environmental ethic

Citation: Dudley, R. 2007. Payments, penalties, payouts, and environmental ethics: a system dynamics examination. Sustainability: Science, Practice, & Policy 3(2):24-35. http://ejournal.nbii.org/archives/vol3iss2/0706-013.dudley.html.

Source: Richard G. Dudley

Copyright: Richard G. Dudley (2007)

License: GNU GPL

Peer reviewed: Yes (presumably as part of the journal’s review process)

Units balance: Yes

Format: Vensim

Target audience: People interested in the concept of payments for environmental services as a means of improving land use and conservation of natural resources.

Questions answered: How might land users’ environmental ethic be influenced by, and in turn influence, payments for environmental services?

Software: Vensim

Files:

http://modelwiki.metasd.com/images/d/db/SSPP_PES_and_Env_Ethic_2007-09-25.vmf

Models in the Special Issue of the System Dynamics Review on Environmental and Resource Systems

Models in the Special Issue of the System Dynamics Review on Environmental and Resource Systems, Andrew Ford & Robert Cavana, Editors. System Dynamics Review, Volume 20, Number 2, Summer 2004.

  • Modeling the Effects of a Log Export Ban in Indonesia by Richard G. Dudley
  • The Dynamics of Water Scarcity in Irrigated Landscapes: Mazarron and Aguilas in South-eastern Spain by Julia Martinez Fernandez & Angel Esteve Selma
  • Misperceptions of Basic Dynamics: The Case of Renewable Resource Management by Erling Moxnes
  • Models for Management of Wildlife Populations: Lessons from Spectacled Bears in Zoos and Grizzly Bears in Yellowstone by Lisa Faust, Rosemary Jackson, Andrew Ford, Joanne Earnhardt and Steven Thompson
  • Modeling a Blue-Green Algae Bloom by Steven Arquitt & Ron Johnstone

See the following web site for article summaries and downloadable models described in this special issue:  http://www.wsu.edu/~forda/SIOpen.html

Submitted by Richard Dudley, 23 April 2008

Rental car stochastic dynamics

This is a little experimental model that I developed to investigate stochastic allocation of rental cars, in response to a Vensim forum question.

There’s a single fleet of rental cars distributed around 50 cities, connected by a random distance matrix (probably not physically realizable on a 2D manifold, but good enough for test purposes). In each city, customers arrive at random, rent a car if available, and return it locally or in another city. Along the way, they dawdle a bit, so returns are essentially a 2nd-order delay of rentals: a combination of transit time and idle time.

The two interesting features here are:

  • Proper use of Poisson arrivals within each time step, so that car flows are dimensionally consistent and preserve the integer constraint (no fractional cars)
  • Use of Vensim’s ALLOC_P/MARKETP functions to constrain rentals when car availability is low. The usual approach, setting actual = MIN(desired, available/TIME STEP), doesn’t work because available is subscripted by 50 cities, while desired has 50 x 50 origin-destination pairs; splitting each city’s available cars proportionally across its 50 destination demands would generally yield fractional cars. The alternative is a randomized first-come, first-served queue, so that any shortfall still preserves the integer constraint. Both ideas are sketched below.
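
Here’s a minimal Python sketch of the two ideas (my own approximation of the randomized first-come, first-served logic, not the Vensim ALLOC_P/MARKETP implementation; the city count, demand rate, and time step are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5          # cities; small for illustration (the model uses 50)
DT = 0.25      # time step (days)
RATE = 0.5     # mean rental requests per origin-destination pair per day

available = np.full(N, 10)   # whole cars on each city's lot
for step in range(8):
    # Poisson arrivals within the time step: expected requests per O-D
    # pair are RATE * DT, and every draw is an integer customer count.
    requests = rng.poisson(RATE * DT, size=(N, N))  # origin x destination
    rentals = np.zeros((N, N), dtype=int)
    for origin in range(N):
        # Randomized first-come, first-served: expand requests into a
        # queue of individual customers, shuffle, and serve until the
        # lot is empty. Any shortfall falls on whole customers, never
        # on fractional cars.
        queue = np.repeat(np.arange(N), requests[origin])
        rng.shuffle(queue)
        for dest in queue[: available[origin]]:
            rentals[origin, dest] += 1
        available[origin] -= rentals[origin].sum()
    # Returns are omitted here; in the model, rented cars re-enter city
    # stocks after a 2nd-order transit-plus-idle delay.
    print(f"t = {step * DT:4.2f}  cars on lots: {available}")
```

The key property is that both the arrivals and the allocation operate on whole cars, so the fleet total is conserved as an integer no matter how scarce cars become.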

The interesting experiment with this model is to lower the fleet until it becomes a constraint (at around 10,000 cars).

Documentation is sparse, but units balance.

Requires an advanced Vensim version (for arrays) or the free Model Reader.

carRental.vpm carRental.vmf

Update, with improved distribution choice and smaller array dimensions for convenience:

carRental2.mdl carRental2.vpm

Cascading failures in interconnected networks

Wired covers a new article in Nature, investigating massive failures in linked networks.


The interesting thing is that feedback between the connected networks destabilizes the whole:

“When networks are interdependent, you might think they’re more stable. It might seem like we’re building in redundancy. But it can do the opposite,” said Eugene Stanley, a Boston University physicist and co-author of the study, published April 14 in Nature.

The interconnections fueled a cascading effect, with the failures coursing back and forth. A damaged node in the first network would pull down nodes in the second, which crashed nodes in the first, which brought down more in the second, and so on. And when they looked at data from a 2003 Italian power blackout, in which the electrical grid was linked to the computer network that controlled it, the patterns matched their models’ math.

Wired
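
The back-and-forth mechanism is easy to reproduce. Below is a minimal sketch, assuming networkx and one-to-one coupling between two Erdős–Rényi networks (my own illustration of mutual percolation in the spirit of the paper, not the authors’ code; the node count, mean degree, and removal fractions are arbitrary):

```python
import random
import networkx as nx

def giant_component(G, alive):
    """Members of `alive` in the largest connected cluster of G restricted
    to `alive`; nodes cut off in small fragments count as failed."""
    sub = G.subgraph(alive)
    if sub.number_of_nodes() == 0:
        return set()
    return set(max(nx.connected_components(sub), key=len))

def cascade(n=1000, k=4.0, frac_removed=0.4, seed=1):
    random.seed(seed)
    # Two random networks on the same node set; node i in A depends on
    # node i in B and vice versa (one-to-one coupling).
    A = nx.gnp_random_graph(n, k / n, seed=seed)
    B = nx.gnp_random_graph(n, k / n, seed=seed + 1)
    alive = set(random.sample(range(n), int(n * (1 - frac_removed))))
    while True:
        # A node functions only if it sits in the giant component of its
        # own network AND its partner in the other network still works;
        # iterating this back and forth is the cascade.
        alive_next = giant_component(A, alive) & giant_component(B, alive)
        if alive_next == alive:
            return alive   # the cascade has settled
        alive = alive_next

for frac in (0.30, 0.45):
    print(f"remove {frac:.0%}: {len(cascade(frac_removed=frac))} of 1000 survive")
```

Sweeping the removed fraction shows how abruptly the mutually connected component can collapse once the cascade takes hold, in contrast to the more graceful degradation of a single network.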

Interestingly, the interconnection alters the relationship between network structure (degree distribution) and robustness:

Surprisingly, a broader degree distribution increases the vulnerability of interdependent networks to random failure, which is opposite to how a single network behaves.

Nature

Chalk one up for counter-intuitive behavior of complex systems.

What looks like last year’s version of the paper is on arXiv.