Arab Spring

Hard on the heels of commitment comes another interesting, small social dynamics model on Arxiv. This one’s about the dynamics of the Arab Spring.

The self-immolation of Mohamed Bouazizi on December 17, 2010, in the small Tunisian city of Sidi Bouzid set off a sequence of events culminating in the revolutions of the Arab Spring. It is widely believed that the Internet and social media played a critical role in the growth and success of protests that led to the downfall of the regimes in Egypt and Tunisia. However, the precise mechanisms by which these new media affected the course of events remain unclear. We introduce a simple compartmental model for the dynamics of a revolution in a dictatorial regime such as Tunisia or Egypt which takes into account the role of the Internet and social media. An elementary mathematical analysis of the model identifies four main parameter regions: stable police state, meta-stable police state, unstable police state, and failed state. We illustrate how these regions capture, at least qualitatively, a wide range of scenarios observed in the context of revolutionary movements by considering the revolutions in Tunisia and Egypt, as well as the situation in Iran, China, and Somalia, as case studies. We pose four questions about the dynamics of the Arab Spring revolutions and formulate answers informed by the model. We conclude with some possible directions for future work.

http://arxiv.org/abs/1210.1841

The model has two levels, but since non-revolutionaries = 1 – revolutionaries, they’re not independent, so it’s effectively first order. This permits thorough analytical exploration of the dynamics.

This model differs from typical SD practice in that the formulations for visibility and policing use simple discrete logic – policing either works or it doesn’t, for example. There are also no explicit perception processes or delays. This keeps things simple for analysis, but also makes the behavior somewhat bang-bang. An interesting extension of this model would be to explore more operational, behavioral decision rules.
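The flavor of this discrete logic can be sketched in a few lines. The following is my own illustrative one-state version of the idea, not the paper’s actual equations – the parameter names and values are assumptions chosen to show the bang-bang behavior:

```python
def simulate(r0=0.01, visibility_threshold=0.05, police_capacity=0.2,
             recruitment=2.0, arrest_rate=1.0, dt=0.01, t_end=20.0):
    """One-state sketch: r = fraction of revolutionaries
    (non-revolutionaries = 1 - r, so they add no independent state).

    Bang-bang logic in the spirit of the paper: protest is contagious
    only when visible (above a threshold), and policing works only
    while the movement is within police capacity.
    """
    r = r0
    traj = []
    t = 0.0
    while t < t_end:
        visible = 1.0 if r > visibility_threshold else 0.0   # all-or-nothing visibility
        policed = 1.0 if r < police_capacity else 0.0        # policing works or it doesn't
        growth = recruitment * visible * r * (1.0 - r)       # contagion when visible
        decay = arrest_rate * policed * r                    # arrests while police cope
        r = min(max(r + dt * (growth - decay), 0.0), 1.0)
        traj.append(r)
        t += dt
    return traj
```

A small seed of revolutionaries below the visibility threshold simply gets arrested away (stable police state), while a movement that starts beyond police capacity grows essentially unopposed – exactly the kind of knife-edge behavior that more operational, delayed decision rules would soften.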

The model can be used as is to replicate the experiments in Figs. 8 & 9. Further experiments in the paper – including parameter changes that reflect social media – should also be replicable, but would take a little extra structure or SyntheSim overrides.

This model runs with any recent Vensim version.

ArabSpring1.mdl

ArabSpring1.vpm

I’d especially welcome comments on the model and analysis from people who know the history of events better than I do.

Kon-Tiki & the STEM workforce

I don’t know if Thor Heyerdahl had Polynesian origins or Rapa Nui right, but he did nail the stovepiping of thinking in organizations:

“And there’s another thing,” I went on.
“Yes,” said he. “Your way of approaching the problem. They’re specialists, the whole lot of them, and they don’t believe in a method of work which cuts into every field of science from botany to archaeology. They limit their own scope in order to be able to dig in the depths with more concentration for details. Modern research demands that every special branch shall dig in its own hole. It’s not usual for anyone to sort out what comes up out of the holes and try to put it all together.

Carl was right. But to solve the problems of the Pacific without throwing light on them from all sides was, it seemed to me, like doing a puzzle and only using the pieces of one color.

Thor Heyerdahl, Kon-Tiki

This reminds me of a few of my consulting experiences, in which large firms’ departments jealously guarded their data, making global understanding or optimization impossible.

This is also common in public policy domains. There’s typically an abundance of micro research that doesn’t add up to much, because no one has bothered to build the corresponding macro theory, or to target the micro work at the questions you need to answer to build an integrative model.

An example: I’ve been working on STEM workforce issues – for DOE five years ago, and lately for another agency. There are a few integrated models of workforce dynamics – we built several, the BHEF has one, and I’ve heard of efforts at several aerospace firms and agencies like NIH and NASA. But the vast majority of education research we’ve been able to find is either macro correlation studies (not much causal theory, hard to operationalize for decision making) or micro examination of a zillion factors, some of which must really matter, but in a piecemeal approach that makes them impossible to integrate.

An integrated model needs three things: what, how, and why. The “what” is the state of the system – stocks of students, workers, teachers, etc. in each part of the system. Typically this is readily available – Census, NSF and AAAS do a good job of curating such data. The “how” is the flows that change the state. There’s not as much data on this, but at least there’s good tracking of graduation rates in various fields, and the flows actually integrate to the stocks. Outside the educational system, it’s tough to understand the matrix of flows among fields and economic sectors, and surprisingly difficult even to get decent measurements of attrition from a single organization’s personnel records. The glaring omission is the “why” – the decision points that govern the aggregate flows. Why do kids drop out of science? What attracts engineers to government service, or the finance sector, or leads them to retire at a given age? I’m sure there are lots of researchers who know a lot about these questions in small spheres, but there’s almost nothing about the “why” questions that’s usable in an integrated model.
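To make the what/how/why distinction concrete, here’s a deliberately minimal pipeline sketch – the stocks, flows, and behavioral rule are all illustrative numbers of my own, not drawn from any of the models mentioned above:

```python
# "What": stocks (students, workers). "How": flows that integrate into the
# stocks (enrollment, graduation, attrition, retirement). "Why": behavioral
# rules governing the flows - here, an assumed attrition fraction that
# responds to the field's relative attractiveness.

def step(state, dt=1.0, attractiveness=1.0):
    students, workers = state["students"], state["workers"]
    enrollment = 100.0                        # new students per year (assumed)
    graduation = students / 4.0               # ~4-year pipeline
    # "Why": attrition falls when the field is more attractive (assumed rule)
    attrition = workers * 0.05 / attractiveness
    retirement = workers / 35.0               # ~35-year career
    state["students"] += dt * (enrollment - graduation)
    state["workers"] += dt * (graduation - attrition - retirement)
    return state

state = {"students": 400.0, "workers": 2000.0}
for year in range(50):
    state = step(state)
```

The “what” and “how” here are easy to parameterize from the kinds of data Census and NSF curate; the single `attractiveness` rule stands in for all the missing “why” research – and it’s precisely that rule that determines where the workforce equilibrates.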

I think the current situation is a result of practicality rather than a fundamental philosophical preference for analysis over synthesis. It’s just easier to create, fund and execute standalone micro research than it is to build integrated models.

The bad news is that vast amounts of detailed knowledge go to waste because they can’t be put into a framework that supports better decisions. The good news is that, for people who are inclined to tackle big problems with integrated models, there’s lots of material to work with and a high return to answering the key questions in a way that informs policy.

Encouraging Moderation

An interesting paper on Arxiv caught my eye the other day. It uses a simple model of a bipolar debate to explore policies that encourage moderation.

Some of the most pivotal moments in intellectual history occur when a new ideology sweeps through a society, supplanting an established system of beliefs in a rapid revolution of thought. Yet in many cases the new ideology is as extreme as the old. Why is it then that moderate positions so rarely prevail? Here, in the context of a simple model of opinion spreading, we test seven plausible strategies for deradicalizing a society and find that only one of them significantly expands the moderate subpopulation without risking its extinction in the process.

This is a very simple and stylized model, but in the best tradition of model-based theorizing, it yields provocative counter-intuitive results and raises lots of interesting questions. Technology Review’s Arxiv Blog has a nice qualitative take on the work.

See also: Dynamics of Scientific Revolutions, Bifurcations & Filter Bubbles

The model runs in discrete time, but I’ve added implicit rate constants for dimensional consistency in continuous time.
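The generic pattern for that conversion (this is the standard trick, not the paper’s specific equations) is to turn a discrete map x(t+1) = f(x(t)) into dx/dt = (f(x) – x)/tau, where the implicit rate constant tau carries the units of time:

```python
# Discrete-time map made dimensionally consistent in continuous time:
# x_{t+1} = f(x_t)  becomes  dx/dt = (f(x) - x) / tau,
# where tau is the implicit rate constant (units: time).

def f(x):
    return 0.5 * x + 1.0   # example discrete update, fixed point at x = 2

def integrate(x0=0.0, tau=1.0, dt=0.1, t_end=20.0):
    x = x0
    t = 0.0
    while t < t_end:
        x += dt * (f(x) - x) / tau   # Euler step of the continuous analog
        t += dt
    return x
```

With tau = 1 the continuous version shares the discrete map’s fixed points, but the trajectory now has a well-defined time dimension, so time constants can be compared across equations.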

commitment2.mdl & commitment2.vpm

These should be runnable with any Vensim version.

If you add the asymmetric generalizations in the paper’s Supplemental Material, add your name to the model diagram, forward a copy back to me, and I’ll post the update.

The model that ate Europe is back, and it's bigger than ever

The FuturICT Knowledge Accelerator, a grand unified model of everything, is back in the news.

What if global scale computing facilities were available that could analyse most of the data available in the world? What insights could scientists gain about the way society functions? What new laws of nature would be revealed? Could society discover a more sustainable way of living? Developing planetary scale computing facilities that could deliver answers to such questions is the long term goal of FuturICT.

I’ve been rather critical of this effort before, but I think there’s also much to like.

  • An infrastructure for curated public data would be extremely useful.
  • There’s much to be gained through a multidisciplinary focus on simulation, which is increasingly essential and central to all fields.
  • Providing a public portal into the system could have valuable educational benefits.
  • Creating more modelers, and more sophisticated model users, helps build capacity for science-based self governance.

But I still think the value of the project is more about creating an infrastructure, within which interesting models can emerge, than it is in creating an oracle that decision makers and their constituents will consult for answers to life’s pressing problems.

  • Even with Twitter and Google, usable data spans only a small portion of human existence.
  • We’re not even close to having all the needed theory to go with the data. Consider that general equilibrium is the dominant modeling paradigm in economics, yet equilibrium is not a prevalent feature of reality.
  • Combinatorial explosion can overwhelm any increase in computing power for the foreseeable future, so the very idea of simulating everything social and physical at once is laughable.
  • Even if the technical hurdles can be overcome,
    • People are apparently happy to hold beliefs that are refuted by the facts, as long as buffering stocks afford them the luxury of a persistent gap between reality and mental models.
    • Decision makers are unlikely to cede control to models that they don’t understand or can’t manipulate to generate desired results.

I don’t think you need to look any further than the climate debate and the history of Limits to Growth to conclude that models are a long way from catalyzing a sustainable world.

If I had a billion Euros to spend on modeling, I think less of it would go into a single platform and more would go into distributed efforts that are working incrementally. It’s easier to evolve a planetary computing platform than to design one.

With the increasing accessibility of computing and visualization, we could be on the verge of a model-induced renaissance. Or, we could be on the verge of an explosion of fun and pretty but vacuous, non-transparent and unvalidated model rubbish that lends itself more to propaganda than thinking. So, I’d be plowing a BIG chunk of that billion into infrastructure and incentives for model and data quality.

On the usefulness of big models

Steven Wright’s “life size map” joke is a lot older than I thought:

On Exactitude in Science
Jorge Luis Borges, Collected Fictions, translated by Andrew Hurley.
…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.
—Suarez Miranda,Viajes de varones prudentes, Libro IV,Cap. XLV, Lerida, 1658

It’s no less relevant to big models, though.

h/t Benjamin Blonder

In search of SD conference excellence

I was pleasantly surprised by the quality of presentations I attended at the SD conference in St. Gallen. Many of the posters were also very good – the society seems to have been successful in overcoming the booby-prize stigma, making it a pleasure to graze on the often-excellent work in a compact format (if only the hors d’oeuvre line had had brevity to match its tastiness…).

In anticipation of an even better array of papers next year, here’s my quasi-annual reminder about resources for producing good work in SD:

I suppose I should add posts on good presentation technique and poster development (thoughts welcome).

Thanks to the organizers for a well-run enterprise in a pleasant venue.

Beggaring ourselves through coal mining

Old joke: How do you make a small fortune breeding horses? Start with a large fortune ….

It appears that the same logic applies to coal mining here in the Northern Rockies.

With US coal use in slight decline, exports are the growth market. Metallurgical and steam coal currently export for about $140 and $80 per short ton, respectively. But the public will see almost none of that, because unmanaged quantity and “competitive” auctions that are uncompetitive (just like Montana trust land oil & gas), plus low royalty, rent and bonus rates, result in a tiny slice of revenue accruing to the people (via federal and state governments) who actually own the resource.

For the Powder River Basin, here’s how it pencils out in rough terms:

Item                                                                    $/ton
Minemouth price                                                          $10
Royalty, rents & bonus                                                    $2
Social Cost of Carbon (@ $21/tonCO2 medium value)                       -$55
US domestic SCC (at 15% of global, average of 7% damage share
  and 23% GDP share)                                                     -$8
Net US public benefit                                                 < -$6

In other words, the US public loses at least $3 for every $1 of coal revenue earned. The reality is probably worse, because the social cost of carbon estimate is extremely conservative, and other coal externalities are omitted. And of course the global harm is much greater than the US’ narrow interest.
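The per-ton accounting checks out directly from the table; the per-worker figure below uses an assumed productivity on the order of 60,000 tons per miner per year – my illustrative number for Wyoming-scale surface mining, not a figure from the table:

```python
# Rough check of the per-ton accounting, from the US public's perspective.
royalty_rents_bonus = 2.0    # $/ton accruing to the public
us_domestic_scc = 8.0        # $/ton US share of climate damage

net_public_benefit = royalty_rents_bonus - us_domestic_scc        # $/ton
loss_per_dollar_earned = -net_public_benefit / royalty_rents_bonus

# Per-worker climate subsidy, assuming ~60,000 tons per miner per year
# (illustrative productivity, not from the post):
tons_per_worker_year = 60_000
climate_subsidy_per_worker = us_domestic_scc * tons_per_worker_year
```

That yields a net public benefit of –$6/ton, a net loss of $3 per $1 of public revenue, and a climate subsidy on the order of $480,000 per worker – consistent with the “almost half a million dollars” figure, and conservative, since it counts only the US domestic share of the damage.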

Even if you think of coal mining as a jobs program, at Wyoming productivity, the climate subsidy alone is almost half a million dollars per worker.

This makes it hard to get enthusiastic about the planned expansion of exports.

Global lukewarming

Fred Krupp, President of EDF, has an opinion on climate policy in the WSJ. I have to give him credit for breaking into a venue that is staunchly ignorant of the realities of climate change. An excerpt:

If both sides can now begin to agree on some basic propositions, maybe we can restart the discussion. Here are two:

The first will be uncomfortable for skeptics, but it is unfortunately true: Dramatic alterations to the climate are here and likely to get worse—with profound damage to the economy—unless sustained action is taken. As the Economist recently editorialized about the melting Arctic: “It is a stunning illustration of global warming, the cause of the melt. It also contains grave warnings of its dangers. The world would be mad to ignore them.”

The second proposition will be uncomfortable for supporters of climate action, but it is also true: Some proposed climate solutions, if not well designed or thoughtfully implemented, could damage the economy and stifle short-term growth. As much as environmentalists feel a justifiable urgency to solve this problem, we cannot ignore the economic impact of any proposed action, especially on those at the bottom of the pyramid. For any policy to succeed, it must work with the market, not against it.

If enough members of the two warring climate camps can acknowledge these basic truths, we can get on with the hard work of forging a bipartisan, multi-stakeholder plan of action to safeguard the natural systems on which our economic future depends.

I wonder, though, if the price of admission was too high. Krupp equates two risks: climate impacts, and policy side effects. But this is a form of false balance – these risks are not in the same league.

Policy side effects are certainly real – I’ve warned against inefficient policies multiple times (e.g., overuse of standards). But the effects of a policy are readily visible to well-defined constituencies, mostly short term, and diverse across jurisdictions with different implementations. This makes it easy to learn what’s working and to stop doing what’s not working (and there’s never a shortage of advocates for the latter), without suffering large cumulative effects. Most of the inefficient approaches (like banning the bulb) are economically minuscule.

Climate risk, on the other hand, accrues largely to people in far away places, who aren’t even born yet. It’s subject to reinforcing feedbacks (like civil unrest) and big uncertainties, known and unknown, that lend it a heavy tail of bad outcomes, which are not economically marginal.

The net balance of these different problem characteristics is that there’s little chance of catastrophic harm from climate policy, but a substantial chance from failure to have a climate policy. There’s also almost no chance that we’ll implement a too-stringent climate policy, or that it would stick if we did.

The ultimate irony is that EDF’s preferred policy is cap & trade, which trades illusory environmental certainty for considerable economic inefficiency.

Does this kind of argument reach a wavering middle ground? Or does it fail to convince skeptics, while weakening the position of climate policy proponents by conceding strawdog growth arguments?

Algebra, Eroding Goals and Systems Thinking

A NY Times editorial wonders, Is Algebra Necessary?*

I think the short answer is, “yes.”

The basic point of having a brain is to predict the consequences of actions before taking them, particularly where those actions might be expensive or fatal. There are two ways to approach this:

  • pattern matching or reinforcement learning – hopefully with storytelling as a conduit for cumulative experience with bad judgment on the part of some to inform the future good judgment of others.
  • inference from operational specifications of the structure of systems, i.e. simulation, mental or formal, on the basis of theory.

If you lack a bit of algebra and calculus, you’re essentially limited to the first option. That’s bad, because a lot of situations require the second for decent performance.

The evidence the article amasses to support abandonment of algebra does not address the fundamental utility of algebra. It comes in two flavors:

  • no one needs to solve certain arcane formulae
  • setting the bar too high for algebra discourages large numbers of students

I think too much reliance on the second point risks creating an eroding goals trap. If you can’t raise the performance, lower the standard:

[Figure: eroding goals causal loop diagram – B. Jana, Wikimedia Commons, Creative Commons Attribution-Share Alike 3.0 Unported]

This is potentially dangerous, particularly when you also consider that math performance is coupled with a lot of reinforcing feedback.
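The archetype in the diagram boils down to two coupled adjustments, which can be sketched generically – the parameters here are illustrative, not a calibrated model of math standards:

```python
# Eroding goals archetype: performance adjusts toward the goal, but the
# goal also drifts toward actual performance. Whenever performance falls
# short, the standard ratchets downward. Parameters are illustrative.

def erode(goal=100.0, performance=60.0, goal_adjust_time=5.0,
          perf_adjust_time=10.0, dt=0.1, t_end=50.0):
    t = 0.0
    while t < t_end:
        gap = goal - performance
        performance += dt * gap / perf_adjust_time   # effort closes the gap...
        goal += dt * (-gap) / goal_adjust_time       # ...but the goal erodes too
        t += dt
    return goal, performance

final_goal, final_perf = erode()
```

With these numbers the standard settles well below where it started – the gap closes mostly by lowering the goal, because the goal adjusts faster than performance does. Add the reinforcing feedback around math performance and the erosion compounds.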

As an alternative to formal algebra, the editorial suggests more practical math,

It could, for example, teach students how the Consumer Price Index is computed, what is included and how each item in the index is weighted — and include discussion about which items should be included and what weights they should be given.

I can’t really fathom how one could discuss weighting the CPI in a meaningful way without some elementary algebra, so it seems to me that this doesn’t really solve the problem.

However, I think there is a bit of wisdom here. What earthly purpose does memorizing the quadratic formula serve, until one is able to map it to some practical problem space? There is growing evidence that even high-performing college students can manipulate symbols without gaining the underlying intuition needed to solve real-world problems.

I think the obvious conclusion is not that we should give up on teaching algebra, but that we should teach it quite differently. It should emerge as a practical requirement, motivated by a student-driven search for the secrets of life and systems thinking in particular.

* Thanks to Richard Dudley for pointing this out.
