Summer driving is an emergency?

A coordinated release of emergency oil stockpiles is underway. It’s almost as foolish as that timeless chain email, the Great American Gasout (now migrated to Facebook, it seems), and for the same stock-flow reasons.

Like the Gasout, strategic reserve operations don’t do anything about demand; they just shuffle supply around in time. Releasing oil does add to supply on top of current production, which causes a short-term price break. But at some point you have to refill the reserve. All else equal, storing oil has to come at the expense of producing it for consumption, which means that the price goes back up at some other time.

The implicit mental model here is that governments are going to buy low and sell high, releasing oil at high prices when there’s a crisis, and storing it when peaceful market conditions return. I rather doubt that political entities are very good at such things, but more importantly, where are the prospects for cheap refills, given tight supplies, strategic behavior by OPEC, and (someday) global recovery? It’s not even clear that agencies were successful at keeping the release secret, so a few market players may have captured a hefty chunk of the benefits of the release.

Setting dynamics aside, the strategic reserve release is hardly big enough to matter – the 60 million barrels planned isn’t even a day of global production. It’s only 39 days of Libyan production. Even if you have extreme views on price elasticity, that’s not going to make a huge difference – unless the release is extended. But extending the release through the end of the year would consume almost a quarter of world strategic reserves, without any clear emergency at hand.
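A back-of-envelope check of those magnitudes, with the caveat that everything except the 60 million barrel total is a rough 2011 figure assumed for illustration:

```python
# Rough magnitude check. Only the 60 million barrel total comes from the
# announcement as described above; the production rates, the ~2 million
# barrel/day release pace, and the size of world strategic stocks are
# approximate 2011 figures, assumed here for illustration.
release_total = 60e6        # barrels, announced release
world_production = 88e6     # barrels/day, approx. world oil production
libya_production = 1.55e6   # barrels/day, approx. pre-war Libyan output
strategic_stocks = 1.5e9    # barrels, approx. total public strategic stocks

print(release_total / world_production)   # ~0.7 days of world production
print(release_total / libya_production)   # ~39 days of Libyan output

# Extending a ~2 million barrel/day release through the rest of the year:
print(2e6 * 180 / strategic_stocks)       # ~0.24, about a quarter of stocks
```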

We should be saving those reserves for a real rainy day, and increasing the end-use price through taxes, to internalize environmental and security costs and recapture OPEC rents.

The overconfidence of nuclear engineers

Rumors that the Fort Calhoun nuclear power station is subject to a media blackout appear to be overblown, given that the NRC is blogging the situation.

Apparently floodwaters at the plant were at 1006 feet ASL yesterday, a fair margin below the 1014-foot design standard for the plant. That margin might have been a lot less if the NRC hadn’t cited the plant for design violations last year – violations it estimated would have led to certain core damage at a flood level of 1010 feet.

Still, engineers say things like this:

“We have much more safety measures in place than we actually need right now,” Jones continued. “Even if the water level did rise to 1014 feet above mean sea level, the plant is designed to handle that much water and beyond. We have additional steps we can take if we need them, but we don’t think we will. We feel we’re in good shape.” – suite101

The “and beyond” sounds like pure embellishment. The design flood elevation for the plant is 1014 feet. I’ve read some NRC documents on the plant, and there’s no other indication that higher design standards were used. Presumably there are safety margins in systems, but those are designed to offset unanticipated failures, e.g. from design deviations like those discovered by the NRC. Surely the risk of unanticipated problems would rise dramatically above the maximum anticipated flood level of 1014 feet.

Overconfidence is a major contributor to accidents in complex systems. How about a little humility?

Currently the Missouri River forecast is pretty flat, so hopefully we won’t test the limits of the plant design.

Setting up Vensim compiled simulation on Windows

If you don’t use Vensim DSS, you’ll find this post rather boring and useless. If you do, prepare for heart-pounding acceleration of your big model runs:

  • Get Vensim DSS.
  • Get a C compiler. Most flavors of Microsoft compilers are compatible; MS Visual C++ 2010 Express is a good choice (and free). You could probably use gcc, but I’ve never set it up. I’ve heard reports of issues with 2005 and 2008 versions, so it may be worth your while to upgrade.
  • Install Vensim, if you haven’t already, being sure to check the Install external function and compiled simulation support box.
  • Launch the program and go to Tools>Options…>Startup and set the Compiled simulation path to C:\Documents and Settings\All Users\Vensim\comp32 (WinXP) or C:\Users\Public\Vensim\comp32 (Vista/7).
    • Check your mdl.bat in the location above to be sure that it points to the right compiler. This is a simple matter of making sure that all compiler options are commented out with “REM ” statements, except the one you’re using (a quick way to check this is sketched just after this list).
  • Move to the Advanced tab and set the compilation options to Query or Compile (you may want to skip this for normal Simulation, and just do it for Optimization and Sensitivity, where speed really counts).
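For what it’s worth, here’s a minimal Python sketch of that mdl.bat check: it just prints whatever isn’t REM’d out, which should be only the entry for your compiler. The comp32 path is the Vista/7 default set above; adjust it for your installation.

```python
# Print the lines of Vensim's mdl.bat that are not commented out with REM,
# so you can confirm that only your compiler's entry is active.
from pathlib import Path

COMP32 = Path(r"C:\Users\Public\Vensim\comp32")  # Vista/7 default from above

def active_lines(bat_path: Path):
    """Return the non-blank lines that are not commented out with REM."""
    return [line for line in bat_path.read_text().splitlines()
            if line.strip() and not line.strip().upper().startswith("REM")]

if __name__ == "__main__":
    for line in active_lines(COMP32 / "mdl.bat"):
        print(line)  # expect only the entry for the compiler you installed
```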

This is well worth the hassle if you’re working with a large model in SyntheSim or doing a lot of simulations for sensitivity analysis and optimization. The speedup is typically 4-5x.

Elk, wolves and dynamic system visualization

Bret Victor’s video of a slick iPad app for interactive visualization of the Lotka-Volterra equations has been making the rounds.

Coincidentally, this came to my notice around the same time that I got interested in the debate over wolf reintroduction here in Montana. Even simple models say interesting things about wolf-elk dynamics, which I’ll write about some other time (I need to get vaccinated for rabies first).

To ponder the implications of the video and predator-prey dynamics, I built a version of the Lotka-Volterra model in Vensim.
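For reference, the classic Lotka-Volterra equations, with x as prey (elk), y as predators (wolves), and α, β, δ, γ as the prey birth, predation, conversion, and predator death parameters:

\[ \frac{dx}{dt} = \alpha x - \beta x y, \qquad \frac{dy}{dt} = \delta x y - \gamma y \]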

After a second look at the video, I still think it’s excellent. Victor’s two design principles, ubiquitous visualization and in-context manipulation, are powerful for communicating a model. Some aspects of what’s shown have been in Vensim since the introduction of SyntheSim a few years ago, though with less Tufte/iPad sexiness. But other features, like Causal Tracing, are not so easily discovered – they’re effective for pros, but not for new users. The way controls appear at one’s fingertips in the iPad app is very elegant. The “sweep” mode is also clever, so I implemented a similar approach (randomized initial conditions across an array dimension) in my version of the model. My favorite trick, though, is the 2D control of initial conditions via the phase diagram, which makes discovery of the system’s equilibrium easy.
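Here’s a minimal sketch of that sweep idea outside of Vensim: simulate the equations above from randomized initial conditions and overlay the phase trajectories. It assumes numpy and matplotlib, uses arbitrary parameter values, and relies on crude Euler integration, so treat it as an illustration rather than a reproduction of the video or my Vensim model.

```python
# A sketch of the "sweep": simulate the Lotka-Volterra equations above
# from randomized initial conditions and overlay the phase trajectories.
# Parameters are arbitrary, and simple Euler integration drifts a bit
# over long runs - good enough for a picture, not for analysis.
import numpy as np
import matplotlib.pyplot as plt

alpha, beta, delta, gamma = 1.0, 0.1, 0.02, 0.5   # arbitrary rate constants
dt, steps, n_runs = 0.01, 5000, 20

rng = np.random.default_rng(0)
for _ in range(n_runs):
    x, y = rng.uniform(5, 50), rng.uniform(2, 20)  # random elk, wolf start
    xs, ys = [x], [y]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt       # prey births minus predation
        dy = (delta * x * y - gamma * y) * dt      # predator births minus deaths
        x, y = x + dx, y + dy                      # Euler step
        xs.append(x)
        ys.append(y)
    plt.plot(xs, ys, linewidth=0.7)

plt.xlabel("prey (elk)")
plt.ylabel("predators (wolves)")
plt.show()
```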

The slickness of the video has led some to wonder whether existing SD tools are dinosaurs. From a design standpoint, I’d agree in some respects, but I think SD has also developed many practices – only partially embodied in tools – that address learning gaps that aren’t directly tackled by the app in the video.

The future

IBM was founded a hundred years ago today. Its stock has appreciated by a factor of 40 since 1962 (about 5 doublings in 50 years, or roughly 7%/yr).
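For the record, the arithmetic behind that parenthetical, taking the span as roughly 1962–2011 (49 years):

\[ 40 \approx 2^{5.3}, \qquad \frac{5 \ln 2}{50\ \text{yr}} \approx 6.9\%/\text{yr}, \qquad 40^{1/49} - 1 \approx 7.8\%/\text{yr} \]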

Perhaps more importantly, the Magna Carta turned 796 yesterday. It was a major milestone in the long ascent of the rule of law and civil liberties.

What will the next century and millennium bring?

Et tu, EJ?

I’m not a cap & trade fan, but I find it rather bizarre that the most successful opposition to California’s AB32 legislation comes from the environmental justice (EJ) movement, on the grounds that cap & trade might make emissions go up in areas that are already disadvantaged, and that the Air Resources Board (ARB) failed to adequately consider alternatives like a carbon tax.

I think carbon taxes did get short shrift in the AB32 design. Taxes were a second-place favorite among economists in the early days, but ultimately the MAC analysis focused on cap & trade, partly because it provided the environmental certainty needed to meet legal targets (oops), but also because it was political suicide to say “tax” out loud at the time.

While cap & trade has issues with dynamic stability, allocation wrangling and complexity, it’s hard to imagine how those drawbacks would change the fundamental relationship between the price signal’s effect on GHGs and its effect on criteria air pollutants. In fact, GHG and other pollutant emissions are highly correlated, so it’s quite likely that cap & trade will have ancillary benefits from other pollutant reductions.

To get specific, think of large point sources like refineries and power plants. For the EJ argument to make sense, you’d have to think that emitters would somehow meet their greenhouse compliance obligations by increasing their emissions of nastier things, or at least concentrating them all at a few facilities in disadvantaged areas. (An analogy might be removing catalytic converters from cars to increase efficiency.) But this can’t really happen, because the air quality permitting process is not superseded by the cap & trade system. In the long run, it’s also inconceivable that it could occur, because there’s no way you could meet compliance obligations for deep cuts by increasing emissions. A California with 80% cuts by 2050 isn’t going to have 18 refineries, and therefore it’s not going to emit as much.

The ARB concludes as much in a supplement to the AB32 scoping plan, released yesterday, which considers alternatives to cap & trade. There’s some nifty stuff in the analysis, including a table of existing emissions taxes (page 89).

It seems that ARB has tilted the playing field a bit by evaluating a dumb tax – i.e. one that doesn’t adapt its price level to meet environmental objectives without legislative intervention – and by heightening leakage concerns that strike me as equally applicable to cap & trade. But they do raise legitimate legal concerns: a tax is not a legal option for ARB without a vote of the legislature, which would likely fail because it requires a supermajority, and tax-equivalent fees are a dubious proposition.

If there’s no Plan B alternative to cap & trade, I wonder what the EJ opposition was after. Surely failure to address emissions is not compatible with a broad notion of justice.

Hand over your cell phones

Adam Frank @NPR says, “Science Deniers: Hand Over Your Cellphones!”

I’m sympathetic to the notion that attitudes toward science are often a matter of ideological convenience rather than skeptical reasoning. However, we don’t have a cell phone denial problem. Why not? I think it helps to identify the factors common to circumstances in which denial does occur:

  • Non-experimental science (reliance on observations of natural experiments; no controls or randomized assignment)
  • Infrequent replication (few examples within the experience of an individual or community)
  • High noise (more specifically, low signal-to-noise ratio)
  • Complexity (nonlinearity, integrations or long delays between cause and effect, multiple agents, emergent phenomena)
  • “Unsalience” (you can’t touch, taste, see, hear, or smell the variables in question)
  • Cost (there’s some social or economic penalty imposed by the policy implications of the theory)
  • Commons (the risk of being wrong accrues to society more than the individual)

It’s easy to believe in the radio waves used by cell phones, or the relativistic corrections applied in GPS, because their only problematic feature is invisibility. Calling grandma is a pretty compelling experiment, which one can repeat as often as needed to dispel any doubts about those mysterious electromagnetic waves.

At one time, the debate over the structure of the solar system was subject to these problems. There was a big social cost to believing the heliocentric model (the Inquisition), and little practical benefit to being right. Theory relied on observations that were imprecise and not salient to the casual observer. Now that we have low-noise observations, replicated experiments (space probe launches), and so on, there aren’t too many geocentrists around.

Climate, on the other hand, has all of these problems. Of particular importance, the commons and long-time-scale aspects of the problem shelter individuals from selection pressure against wrong beliefs.

Selection for deception?

Eric R. Weinstein on Edge’s 2011 question:

The sophisticated “scientific concept” with the greatest potential to enhance human understanding may be argued to come not from the halls of academe, but rather from the unlikely research environment of professional wrestling.

Evolutionary biologists Richard Alexander and Robert Trivers have recently emphasized that it is deception rather than information that often plays the decisive role in systems of selective pressures. Yet most of our thinking continues to treat deception as something of a perturbation on the exchange of pure information, leaving us unprepared to contemplate a world in which fakery may reliably crowd out the genuine. In particular, humanity’s future selective pressures appear likely to remain tied to economic theory which currently uses as its central construct a market model based on assumptions of perfect information.

If we are to take selection more seriously within humans, we may fairly ask what rigorous system would be capable of tying together an altered reality of layered falsehoods in which absolutely nothing can be assumed to be as it appears. Such a system, in continuous development for more than a century, is known to exist and now supports an intricate multi-billion dollar business empire of pure hokum. It is known to wrestling’s insiders as “Kayfabe”.

Were Kayfabe to become part of our toolkit for the twenty-first century, we would undoubtedly have an easier time understanding a world in which investigative journalism seems to have vanished and bitter corporate rivals cooperate on everything from joint ventures to lobbying efforts. Perhaps confusing battles between “freshwater” Chicago macro economists and Ivy league “Saltwater” theorists could be best understood as happening within a single “orthodox promotion” given that both groups suffered no injury from failing (equally) to predict the recent financial crisis. …

Reasoning was not designed to pursue the truth

Uh oh:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found. – Mercier & Sperber via Edge.org, which has a video conversation with coauthor Mercier.

This makes sense to me, but I think it can’t be the whole story. There must be at least a little evolutionary advantage to an ability to predict the consequences of one’s actions. The fact that it appears to be dominated by confirmation bias and other pathologies may be indicative of how much we are social animals, and how long we’ve been that way.

It’s easy to see why this might occur by looking at the modern evolutionary landscape for ideas. There’s immediate punishment for touching a hot stove, but for any complex system, attribution is difficult. It’s easy to see how the immediate rewards from telling your fellow tribesmen crazy things might exceed the delayed and distant rewards of actually being right. In addition, wherever there are stocks of resources lying about, there are strong incentives to succeed by appropriation rather than creation. If you’re really clever with your argumentation, you can even make appropriation resemble creation.

The solution is to use our big brains to raise the bar, by making better use of models and other tools for analysis of and communication about complex systems.

Nothing that you will learn in the course of your studies will be of the slightest possible use to you in after life, save only this, that if you work hard and intelligently you should be able to detect when a man is talking rot, and that, in my view, is the main, if not the sole, purpose of education. – John Alexander Smith, Oxford, 1914

So far, though, models seem to be serving argumentation as much as reasoning. Are we stuck with that?

Who moved my eigenvalues?

Change management is one of the great challenges in modeling projects. I don’t mean this in the usual sense of getting people to change on the basis of model results. That’s always a challenge, but there’s another.

Over the course of a project, the numerical results and maybe even the policy conclusions given by a model are going to change. This is how we learn from models. If the results don’t change, either we knew the answer from the outset (a perception that should raise lots of red flags), or the model isn’t improving.

The problem is that model consumers are likely to get anchored to the preliminary results of the work, and resist change when it arrives later in the form of graphs that look different or insights that contradict early, tentative conclusions.

Fortunately, there are remedies:

  • Start with the assumption that the model and the data are wrong, and to some extent will always remain so.
  • Recognize that the modeler is not the font of all wisdom.
  • Emphasize extreme conditions tests and reality checks throughout the modeling process, not just at the end, so bugs don’t get baked in while insights remain hidden.
  • Do lots of sensitivity analysis to determine the circumstances under which insights are valid (a small sketch of the idea follows this list).
  • Keep the model simpler than you think it needs to be, so that you have some hope of understanding it, and time for reflecting on behavior and communicating results.
  • Involve a broad team of model consumers, and set appropriate expectations about what the model will be and do from the start.
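To make the sensitivity-analysis point concrete, here’s a minimal Python sketch (numpy assumed), using an invented two-stock feedback loop rather than any real project model: sample the uncertain parameters, look at the eigenvalues of the linearized system, and report how often the qualitative insight survives. The point isn’t the toy model; it’s that an insight reported together with the share of parameter space where it holds is much harder to dislodge when the base run changes.

```python
# A toy version of the sensitivity-analysis bullet: sample two uncertain
# parameters of a simple negative feedback loop with a perception delay,
# and check how often the qualitative conclusion ("the loop oscillates")
# holds. The model and parameter ranges are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_samples, oscillatory = 1000, 0

for _ in range(n_samples):
    adjust_time = rng.uniform(1.0, 10.0)   # time to close the perceived gap
    delay = rng.uniform(0.5, 4.0)          # perception/reporting delay
    # States: x1 = actual gap, x2 = perceived gap
    # dx1/dt = -x2/adjust_time ; dx2/dt = (x1 - x2)/delay
    A = np.array([[0.0, -1.0 / adjust_time],
                  [1.0 / delay, -1.0 / delay]])
    if np.any(np.abs(np.linalg.eigvals(A).imag) > 1e-9):
        oscillatory += 1

print(f"'the loop oscillates' holds in {oscillatory / n_samples:.0%} of samples")
```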