Cap & trade is dead. Long live cap & trade?

Democrats have pulled the plug on a sweeping energy bill this year. There is no heir apparent. This is not cause for panic. In climate, as in education, there are no emergencies.

However, the underlying reasons may be cause for panic. It seems that voters are unwilling to accept any policy that will significantly raise the price of emissions. Given that price is a predominant information carrier in our economy, other policies are unlikely to work efficiently, absent a price signal. That leaves us in a bit of a pickle. What to do?

If you don’t want to buck public opinion, advise the people to invest in (then pray for) a technological miracle. Ask yourself, “Do I feel lucky?” It might even work.

Alternatively, you might conclude that the public hasn’t quite grasped the nature of the problem – that wait and see is not a good policy in systems with long delays. But then you’d be accused of scientism, for the equivalent of challenging the efficient market hypothesis or the notion that the customer is always right. That’s rather puzzling, given that there’s direct evidence that people don’t intuitively appreciate the dynamics of accumulation, and that snowstorms in the East cause half of Americans to question the reality of climate change.

The anti-scientism, pro-technology crowd takes opposition to meaningful mitigation policy as a sure sign that the public is on to something. The wisdom of crowds is powerful when there’s diverse information and rapid feedback, as in price discovery through a market. But it has a pretty disastrous history in the runup to bubbles and other catastrophes, as we’ve recently seen. Surely there are some legitimate worries about current climate proposals (I’ve expressed a number here), but it doesn’t follow that pricing emissions is a bad decision.

So, what’s a modeler to do? Opening up political debates is a good idea, though not quite in the way that I think proponents intend. We already have plenty of political debates. The problem is that they tend to lack ready access to scientific or other information that can be agreed upon or at least presented in a way that permits testing of hypotheses against data or evaluation of decisions against contingencies. That means that questions of values and distribution of benefits (which politics is rightfully about) get mixed up with muddled thinking about science, economics, and social system dynamics.

The solution typically proposed is to open up science and models to more public scrutiny. That’s a good idea for a variety of reasons, but by itself it’s a losing proposition for scientists: they get all the criticism, and the public process doesn’t assimilate much of their insight. What’s needed is a fair exchange, where everyone shows their hand. Scientists make their stuff accessible, and in return participants in policy debates actually use it, and additionally submit to formalization of their arguments to facilitate shared understanding and testing.

Coming back to cap & trade, I don’t see that the major political players are willing to do that. Following a successful round of multi-stakeholder workshops that brought a systems perspective to conversations about climate policy, funded by the petro industry in California, we spent a fair amount of time marketing the idea of a model-assisted deliberation process targeted at shared design of federal climate policy. Lobbyists at some of the big stakeholders told us very forthrightly that they were unwilling to engage in any process with an outcome that they couldn’t predict and control.

In an environment where everyone’s happy with their own entrenched position, there isn’t much hope for a good solution to emerge. The only solution I see is to make an end run around the big players, and go straight to the public with better information, in order to expand the set of things they’ll accept. I hope there’s time for that to work.

Policy Resistance – Immigration & Prohibition

Complex systems find many ways of resisting or evading pressures, resulting in policy failure, backlashes, whack-a-mole games and other unintended consequences. Some great examples just wandered by my desk:

Via Economist’s View:

Immigration reform has a long history of unintended consequences: More than two decades of increased enforcement since the passage of the Immigration Reform and Control Act of 1986 has done little to reduce the number of illegal immigrants. In fact, it seems to have increased their numbers. …

Princeton University sociologist Douglas Massey pointed out … that measures to secure the border seemed to produce almost the opposite of what was intended. … With increasing border enforcement, workers who used to shuttle between jobs in California or Texas and home in Zacatecas or Michoacán simply began to stay put and sent for their families, becoming permanent, if sometimes reluctant, residents. According to Massey, post-IRCA border enforcement may have increased the size of the permanent Mexican population in the United States by a factor of nearly four.

From a great article on Wayne Wheeler, The Man Who Turned Off the Taps, in Smithsonian:

But for all his political might, Wheeler could not do what he and all the other Prohibitionists had set out to do: they could not purge alcoholic beverages from American life. Drinking did decline at first, but a combination of legal loopholes, personal tastes and political expediency conspired against a dry regime.

As declarative as the 18th Amendment was—forbidding “the manufacture, sale, or transportation of intoxicating liquors”—the Volstead Act allowed exceptions. You were allowed to keep (and drink) liquor you had in your possession as of January 16, 1920; this enabled the Yale Club in New York, for instance, to stockpile a supply large enough to last the full 14 years that Prohibition was in force. Farmers and others were allowed to “preserve” their fruit through fermentation, which placed hard cider in cupboards across the countryside and homemade wine in urban basements. “Medicinal liquor” was still allowed, enriching physicians (who generally charged by the prescription) and pharmacists (who sold such “medicinal” brands as Old Grand-Dad and Johnnie Walker). A religious exception created a boom in sacramental wines, leading one California vintner to sell communion wine—legally—in 14 different varieties, including port, sherry, tokay and cabernet sauvignon.

By the mid-’20s, those with a taste for alcohol had no trouble finding it, especially in the cities of the East and West coasts and along the Canadian border. At one point the New York police commissioner estimated there were 32,000 illegal establishments selling liquor in his city. In Detroit, a newsman said, “It was absolutely impossible to get a drink…unless you walked at least ten feet and told the busy bartender what you wanted in a voice loud enough for him to hear you above the uproar.” Washington’s best-known bootlegger, George L. Cassiday (known to most people as “the man in the green hat”), insisted that “a majority of both houses” of Congress bought from him, and few thought he was bragging.

Worst of all, the nation’s vast thirst gave rise to a new phenomenon—organized crime, in the form of transnational syndicates that controlled everything from manufacture to pricing to distribution. A corrupt and underfunded Prohibition Bureau couldn’t begin to stop the spread of the syndicates, which considered the politicians who kept Prohibition in place their greatest allies. Not only did Prohibition create their market, it enhanced their profit margins: from all the billions of gallons of liquor that changed hands illegally during Prohibition, the bootleggers did not pay, nor did the government collect, a single penny of tax.

The prohibition article also poses an interesting puzzle. If prohibition became broadly unpopular so quickly, how did it get passed by such landslide margins in the first place? I can’t believe that ignorance of the possible outcome was universal, so there must have been some powerful positive feedback behind the initial passage of the policy. Perhaps it was a tipping point effect: once a vote becomes sufficiently lopsided, fewer and fewer politicians want to be on the losing side of a landslide, so they join the herd. A modern analogy might be the post-9/11 authorization of the Iraq war.

Get a lawyer

That’s really the only advice I can give on models and copyrights.

Nevertheless, here are some examples of contract language that may be illuminating. Bear in mind that I AM NOT A LAWYER AND THIS IS NOT LEGAL ADVICE. I provide no warranty of any kind and assume no liability for your use or misuse of these examples. There are lots of deadly details, regional differences, and variations in opinion about good contract terms. Also, these terms have been slightly adapted to conceal their origins, which may have unintended consequences. Get an IP lawyer to review your plans before proceeding.

Continue reading “Get a lawyer”

Models and copyrights

Or, Friends don’t let friends work for hire.

opencontent

Image Copyright 2004 Lawrence Liang, Piet Zwart Institute, licensed under a Creative Commons License

Photographers and other media workers hate work for hire, because it’s often a bad economic tradeoff, giving up future income potential for work that’s underpaid in the first place. But at least when you give up rights to a photo, that’s the end of it. You can take future photos without worrying about past ones.

For models and software, that’s not the case, and therefore work for hire makes modelers a danger to themselves and to future clients. The problem is that models draw on a constrained space of possible formulations of a concept, and tend to incorporate a lot of prior art. Most of the author’s prior art is probably, in turn, things learned from other modelers. But when a modeler reuses a bit of structure – say, a particular representation of a supply chain or a consumer choice decision – under a work for hire agreement, title to those equations becomes clouded, because the work-for-hire client owns the new work, and it’s hard to distinguish new from old.

The next time you reuse components that have been used for work-for-hire, the previous client can sue for infringement, threatening both you and future clients. It doesn’t matter if the claim is legitimate; the lawsuit could be debilitating, even if you could ultimately win. Clients are often much bigger, with deeper legal pockets, than freelance modelers. You also can’t rely on a friendly working relationship, because bad things can happen in spite of good intentions: a hostile party might acquire copyright through a bankruptcy, for example.

The only viable approach, in the long run, is to retain copyright to your own stuff, and grant clients all the license they need to use, reproduce, produce derivatives, or whatever. You can relicense a snippet of code as often as you want, so no client is ever threatened by another client’s rights or your past agreements.

Things are a little tougher when you want to collaborate with multiple parties. One apparent option, joint ownership of copyright to the model, is conceptually nice but actually not such a hot idea. First, there’s legal doctrine to the effect that individual owners have a responsibility not to devalue joint property, which is a problem if one owner subsequently wants to license or give away the model. Second, in some countries, joint owners have special responsibilities, so it’s hard to write a joint ownership contract that works worldwide.

Again, a viable approach is cross-licensing, where creators retain ownership of their own contributions, and license contributions to their partners. That’s essentially the approach we’ve taken within the C-ROADS team.

One thing to avoid at all costs is agreements that require equation-level tracking of ownership. It’s fairly easy to identify individual contributions to software code, because people tend to work in containers, contributing classes, functions or libraries that are naturally modular. Models, by contrast, tend to be fairly flat and tightly interconnected, so contributions can be widely scattered and difficult to attribute.

Part of the reason this is such a big problem is that we now have too much copyright protection, and it lasts way too long. That makes it hard for copyright agreements to recognize that we see far because we stand on the shoulders of giants, and it distorts the balance of incentives intended by the framers of the Constitution.

In the academic world, model copyright issues have historically been ignored for the most part. That’s good, because copyright is a hindrance to progress (as long as there are other incentives to create knowledge). That’s also bad, because it means that there are a lot of models out there that have not been placed in the public domain, but which are treated as if they were. If people start asserting their copyrights to those, things could get messy in the future.

A solution to all of this could be open source or free software. Copyleft licenses like the GPL and permissive licenses like Apache facilitate collaboration and reuse of models. That would enable the field to move faster as a whole through open extension of prior work. C-ROADS and C-LEARN and component models are going out under an open license, and I hope to do more such experiments in the future.

Update: I’ve posted some examples.

Other bathtubs – capital

China is rapidly eliminating old coal generating capacity, according to Technology Review.

Draining Bathtub

Coal still meets 70 percent of China’s energy needs, but the country claims to have shut down 60 gigawatts’ worth of inefficient coal-fired plants since 2005. Among them is the one shown above, which was demolished in Henan province last year. China is also poised to take the lead in deploying carbon capture and storage (CCS) technology on a large scale. The gasifiers that China uses to turn coal into chemicals and fuel emit a pure stream of carbon dioxide that is cheap to capture, providing “an excellent opportunity to move CCS forward globally,” says Sarah Forbes of the World Resources Institute in Washington, DC.

That’s laudable. However, the inflow of new coal capacity must be even greater. Here’s the latest on China’s coal output:

ChinaCoalOutput

China Statistical Yearbook 2009 & 2009 main statistical data update

That’s just a hair short of 3 billion tons in 2009, with 8%/yr growth from ’07-’09, in spite of the recession. On a per capita basis, US output and consumption are still higher, but at those staggering growth rates, it won’t take China long to catch up.
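To put that growth rate in perspective, here’s a back-of-the-envelope doubling-time calculation (a sketch in Python; the ~8%/yr figure is the one quoted above, and extrapolating it is of course an assumption):

```python
import math

def doubling_time(growth_rate):
    """Years for a quantity growing at `growth_rate` (fraction/yr) to double."""
    return math.log(2) / math.log(1 + growth_rate)

# At 8%/yr, output doubles in roughly 9 years.
print(round(doubling_time(0.08), 1))  # → 9.0
```

At that pace, two decades of sustained growth would quadruple output, which is why the per-capita gap closes so fast.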

A simple model of capital turnover involves two parallel bathtubs, a “coflow” in SD lingo:

CapitalTurnover

Every time you build some capital, you also commit to the energy needed to run it (unless you don’t run it, in which case why build it?). If you get fancy, you can consider 3rd order vintaging and retrofits, as here:

Capital Turnover 3o

To get fancier still, see the structure in John Sterman’s thesis, which provides for limited retrofit potential (that Gremlin just isn’t going to be a Prius, no matter what you do to the carburetor).

The basic challenge is that, while it helps to retire old dirty capital quickly (increasing the outflow from the energy requirements bathtub), energy requirements will go up as long as the inflow of new requirements is larger, which is likely when capital itself is growing and the energy intensity of new capital is well above zero. In addition, when capital is growing rapidly, there just isn’t much old stuff around (proportionally) to throw away, because the age structure of capital will be biased toward new vintages.
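The dynamics above can be sketched as a toy coflow simulation (a Python sketch with made-up parameters for illustration, not a calibrated model of China): capital and its committed energy requirements are parallel stocks, and requirements keep rising as long as the inflow from new construction outruns the outflow from retirements, even though new capital is cleaner and old plants retire on schedule.

```python
def simulate(years=30, dt=1.0,
             capital0=100.0, growth=0.08, lifetime=25.0,
             old_intensity=1.0, new_intensity=0.6):
    """Toy coflow: a capital stock paired with its committed energy requirements.

    New capital arrives with lower energy intensity, but as long as
    construction outpaces retirement, total requirements still grow.
    """
    capital = capital0
    energy_req = capital0 * old_intensity  # energy committed by existing stock
    history = []
    for _ in range(int(years / dt)):
        build = capital * growth + capital / lifetime   # gross investment
        retire = capital / lifetime                     # first-order retirement
        avg_intensity = energy_req / capital            # intensity of the mix
        capital += (build - retire) * dt
        energy_req += (build * new_intensity - retire * avg_intensity) * dt
        history.append(energy_req)
    return history

req = simulate()
# Requirements rise every year, despite cleaner new capital and steady retirements.
```

Average intensity drifts down toward the new-capital value, but total requirements climb the whole time, which is the bathtub point: retiring dirty capacity faster only helps if the inflow shrinks too.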

Hat tip: Travis Franck

Would you like fries with that?

Education is a mess, and well-motivated policy changes are making it worse.

I was just reading this and this, and the juices got flowing, so my wife and I brainstormed this picture:

Education CLD

Click to enlarge

Yep, it’s spaghetti, like a lot of causal brainstorming efforts. The underlying problem space is very messy and hard to articulate quickly, but I think the essence is simple. Educational outcomes are substandard, creating pressure to improve. In at least some areas, outcomes slipped a lot because the response to pressure was to erode learning goals rather than to improve (blue loop through the green goal). One benefit of No Child Left Behind testing is to offset that loop, by making actual performance salient and restoring the pressure to improve. Other intuitive responses (red loops) also have some benefit: increasing school hours provides more time for learning; standardization yields economies of scale in materials and may improve teaching of low-skill teachers; core curriculum focus aligns learning with measured goals.

The problem is that these measures have devastating side effects, especially in the long run. Measurement obsession eats up time for reflection and learning. Core curriculum focus cuts out art and exercise, so that lower student engagement and poorer health diminish learning productivity. Low engagement means more sit-down-and-shut-up, which eats up teacher time and makes teaching unattractive. Increased hours lead to burnout of both students and teachers. Long hours and standardization make teaching unattractive. Degrading the attractiveness of teaching makes it hard to attract quality teachers. Students aren’t mindless blank slates; they know when they’re being fed rubbish, and check out. When a bad situation persists, an anti-intellectual culture of resistance to education evolves.

The nest of reinforcing feedbacks within education meshes with one in broader society. Poor education diminishes future educational opportunity, and thus the money and knowledge available to provide future schooling. Economic distress drives crime, and prison budgets eat up resources that could otherwise go to schools. Dysfunction reinforces the perception that government is incompetent, leading to reduced willingness to fund schools, ensuring future dysfunction. This is augmented by flight of the rich and smart to private schools.

I’m far from having all the answers here, but it seems that standard SD advice on the counter-intuitive behavior of social systems applies. First, any single policy will fail, because it gets defeated by other feedbacks in the system. Perhaps that’s why technology-led efforts haven’t lived up to expectations; high tech by itself doesn’t help if teachers have no time to reflect on and refine its use. Therefore intervention has to be multifaceted and targeted to activate key loops. Second, things get worse before they get better. Making progress requires more resources, or a redirection of resources away from things that produce the short-term measured benefits that people are watching.

I think there are reasons to be optimistic. All of the reinforcing feedback loops that currently act as vicious cycles can run the other way, if we can just get over the hump of the various delays and irreversibilities to start the process. There’s enormous slack in the system, in a variety of forms: time wasted on discipline and memorization, burned-out teachers who could be re-energized, and students with an unmet thirst for knowledge.

The key is, how to get started. I suspect that the conservative approach of privatization half-works: it successfully exploits reinforcing feedback to provide high quality for those who opt out of the public system. However, I don’t want to live in a two class society, and there’s evidence that high inequality slows economic growth. Instead, my half-baked personal prescription (which we pursue as homeschooling parents) is to make schools more open, connecting students to real-world trades and research. Forget about standardized pathways through the curriculum, because children develop at different rates and have varied interests. Replace quantity of hours with quality, freeing teachers’ time for process improvement and guidance of self-directed learning. Suck it up, and spend the dough to hire better teachers. Recover some of that money, and avoid lengthy review, by using schools year ’round. I’m not sure how realistic all of this is as long as schools function as day care, so maybe we need some reform of work and parental attitudes to go along.

[Update: There are of course many good efforts that can be emulated, by people who’ve thought about this more deeply than I. Pegasus describes some here. Two of note are the Waters Foundation and Creative Learning Exchange. Reorganizing education around systems is a great way to improve productivity through learner-directed learning, make learning exciting and relevant to the real world, and convey skills that are crucial for society to confront its biggest problems.]

Dynamics on the iPhone

Scott Johnson asks about C-LITE, an ultra-simple version of C-ROADS, built in Processing – a cool visually-oriented language.

C-LITE

(Click the image to try it).

With this experiment, I was striving for a couple things:

  • A reduced-form version of the climate model, with “good enough” accuracy and interactive speed, as in Vensim’s Synthesim mode (no client-server latency).
  • Tufte-like simplicity of the UI (no grids or axis labels to waste electrons). Moving the mouse around changes the emissions trajectory, and sweeps an indicator line that gives the scale of input and outputs.
  • Pervasive representation of uncertainty (indicated by shading on temperature as a start).

This is just a prototype, but it’s already more fun than models with traditional interfaces.

I wanted to run it on the iPhone, but was stymied by problems translating the model to Processing.js (javascript) and had to set it aside. Recently Travis Franck stepped in and did a manual translation, proving the concept, so I took another look at the problem. In the meantime, a neat export tool has made it easy. It turns out that my code problem was as simple as replacing “float []” with “float[]” so now I have a javascript version here. It runs well in Firefox, but there are a few glitches on Safari and iPhones – text doesn’t render properly, and I don’t quite understand the event model. Still, it’s cool that modest dynamic models can run realtime on the iPhone. [Update: forgot to mention that I used Michael Schieben’s touchmove function modification to processing.js.]

The learning curve for all of this is remarkably short. If you’re familiar with Java, it’s very easy to pick up Processing (it’s probably easy coming from other languages as well). I spent just a few days fooling around before I had the hang of building this app. The core model is just standard Euler ODE code:

initialize parameters
initialize levels
do while time < final time
    compute rates & auxiliaries
    compute levels
The only hassle is that equations have to be ordered manually. I built a Vensim prototype of the model halfway through, in order to stay clear on the structure as I flew seat-of-the-pants.
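As a minimal illustration of that loop pattern (sketched here in Python rather than Processing; the model and names are illustrative, not C-LITE itself), here’s the skeleton applied to a first-order goal-seeking stock:

```python
def euler_run(final_time=10.0, dt=0.25, goal=1.0, tau=2.0):
    """Standard Euler loop: a level adjusts toward `goal` with time constant `tau`.

    Equations are evaluated in order each step (rates before levels),
    which is exactly the manual ordering chore mentioned above.
    """
    # initialize parameters & levels
    level = 0.0
    t = 0.0
    trajectory = [level]
    # do while time < final time
    while t < final_time:
        rate = (goal - level) / tau   # compute rates & auxiliaries
        level += rate * dt            # compute levels
        t += dt
        trajectory.append(level)
    return trajectory

traj = euler_run()
# The level approaches the goal exponentially, without overshoot (since dt < tau).
```

With dt well below the smallest time constant, plain Euler is plenty accurate for a “good enough”, interactive-speed model like this.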

With the latest Processing.js tools, it’s very easy to port to javascript, which runs on nearly everything. Getting it running on the iPhone (almost) was just a matter of discovering viewport meta tags and a line of CSS to set zero margins. The total codebase for my most complicated version so far is only 500 lines. I think there’s a lot of potential for sharing model insights through simple, appealing browser tools and handheld platforms.

As an aside, I always wondered why javascript didn’t seem to have much to do with Java. The answer is in this funny programming timeline. It’s basically false advertising.

Complexity is not the enemy

Following its misguided attack on complex CLDs, a few of us wrote a letter to the NYTimes. Since they didn’t publish, here it is:

Dear Editors, Systemic Spaghetti Slide Snookers Scribe. Powerpoint Pleases Policy Players

“We Have Met the Enemy and He Is PowerPoint” clearly struck a deep vein of resentment against mindless presentations. However, the lead “spaghetti” image, while undoubtedly too much to absorb quickly, is in fact packed with meaning for those who understand its visual lingo. If we can’t digest a mere slide depicting complexity, how can we successfully confront the underlying problem?

The diagram was not created in Powerpoint. It is a “causal loop diagram,” one of several ways to describe relationships that influence the evolution of messy problems like the war in the Middle East. It’s a perfect illustration of General McMaster’s observation that, “Some problems in the world are not bullet-izable.” Diagrams like this may not be intended for public consumption; instead they serve as a map that facilitates communication within a group. Creating such diagrams allows groups to capture and improve their understanding of very complex systems by sharing their mental models and making them open to challenge and modification. Such diagrams, and the formal computer models that often support them, help groups to develop a more robust understanding of the dynamics of a problem and to develop effective and elegant solutions to vexing challenges.

It’s ironic that so many call for a return to pure verbal communication as an antidote for Powerpoint. We might get a few great speeches from that approach, but words are ill-suited to describe some data and systems. More likely, a return to unaided words would bring us a forgettable barrage of five-pagers filled with laundry-list thinking and unidirectional causality.

The excess supply of bad presentations does not exist in a vacuum. If we want better presentations, then we should determine why organizational pressures demand meaningless propaganda, rather than blaming our tools.

Tom Fiddaman of Ventana Systems, Inc. & Dave Packer, Kristina Wile, and Rebecca Niles Peretz of The Systems Thinking Collaborative

Other responses of note:

We have met an ally and he is Storytelling (Chris Soderquist)

Why We Should be Suspect of Bullet Points and Laundry Lists (Linda Booth Sweeney)