Announcing Leverage Networks

Leverage Networks is filling the gap left by the shutdown of Pegasus Communications:

We are excited to announce our new company, Leverage Networks, Inc. We have acquired most of the assets of Pegasus Communications and are looking forward to driving its reinvention. Below is our official press release, which provides more details. We invite you to visit our interim website at leveragenetworks.com to see what we have planned for the upcoming months. You will soon be able to access most of the existing Pegasus products through a newly revamped online store that offers customer reviews, improved categorization, and helpful suggestions for additional products that you might find interesting. Features and applications will include a calendar of events, a service marketplace, and community forums.

As we continue the reinvention, we encourage suggestions, thoughts, inquiries and any notes on current and future products, services or resources that you feel support our mission of bringing the tools of Systems Thinking, System Dynamics, and Organizational Learning to the world.

Please share or forward this email to friends and colleagues and watch for future emails as we roll out new initiatives.

Thank you,

Kris Wile, Co-President

Rebecca Niles, Co-President

Kate Skaare, Director

Leverage Networks

As we create the Leverage Networks platform, it is important that the entire community surrounding Organizational Learning, Systems Thinking and System Dynamics be part of the evolution. We envision a virtual space composed of both archival and newly generated (by partners and community members) resources in our Knowledge Base, a peer-supported Service Marketplace where service providers (coaches, graphic facilitators, modelers, and consultants) can hang a virtual “shingle” to connect with new projects, and finally a fully interactive Calendar of events for webinars, seminars, live conferences and trainings.

If you are interested in working with us as a partner or vendor, please email partners@leveragenetworks.com.

Why ask why?

Why ask why?

Forward causal inference and reverse causal questions

Andrew Gelman & Guido Imbens

The statistical and econometrics literature on causality is more focused on “effects of causes” than on “causes of effects.” That is, in the standard approach it is natural to study the effect of a treatment, but it is not in general possible to determine the causes of any particular outcome. This has led some researchers to dismiss the search for causes as “cocktail party chatter” that is outside the realm of science. We argue here that the search for causes can be understood within traditional statistical frameworks as a part of model checking and hypothesis generation. We argue that it can make sense to ask questions about the causes of effects, but the answers to these questions will be in terms of effects of causes.

I haven’t had a chance to digest this yet, but it’s an interesting topic. It’s particularly relevant to system dynamics modeling, where we are seldom seeking only y = f(x), but rather an endogenous theory where x = g(y) also.

See also: Causality in Nonlinear Systems

h/t Peter Christiansen.

Are all models wrong?

Artem Kaznatcheev considers whether Box’s slogan, “all models are wrong,” should be framed as an empirical question.

Building on the theme of no unnecessary assumptions about the world, @BlackBrane suggested … a position I had not considered before … for entertaining the possibility of a mathematical universe:

[Box’s slogan is] an affirmative statement about Nature that might in fact not be true. Who’s to say that at the end of the day, Nature might not correspond exactly to some mathematical structure? I think the claim is sometimes guilty of exactly what it tries to oppose, namely unjustifiable claims to absolute truth.

I suspect that we won’t learn the answer, at least in my lifetime.

In a sense, the appropriate answer is “who cares?” Whether or not there can in principle be perfect models, the real problem is finding ones that are useful in practice. The slogan isn’t helpful for this. (NIPCC authors seem utterly clueless as well.)

In a related post, AK identifies a 3-part typology of models that suggests a basis for guidance:

  • “Insilications – In physics, we are used to mathematical models that correspond closely to reality. All of the unknown or system dependent parameters are related to things we can measure, and the model is then used to compute dynamics, and predict the future value of these parameters. …
  • Heuristics – … When George Box wrote that “all models are wrong, but some are useful”, I think this is the type of models he was talking about. It is standard to lie, cheat, and steal when you build these sort of models. The assumptions need not be empirically testable (or even remotely true, at times), and statistics and calculations can be used to varying degree of accuracy or rigor. … A theorist builds up a collection of such models (or fables) that they can use as theoretical case studies, and a way to express their ideas. It also allows for a way to turn verbal theories into more formal ones that can be tested for basic consistency. …
  • Abstractions – … These are the models that are most common in mathematics and theoretical computer science. They have some overlap with analytic heuristics, except are done more rigorously and not with the goal of collecting a bouquet of useful analogies or case studies, but of general statements. An abstraction is a model that is set up so that given any valid instantiation of its premises, the conclusions necessarily follow. …”

The social sciences are solidly in the heuristics realm, while a lot of science is in the insilication category. The difficulty is knowing where the boundary lies. Actually, I think it’s a continuum, not a set of discrete categories. One can get some hint by looking at the problem context for models. For example:

|  | Known state variables? | Reality checks (conservation laws, etc.)? | Data per concept? | Structural information from more granular observations or models? | Experiments? | Computation? |
|---|---|---|---|---|---|---|
| Physics | yes | lots | lots | yes | yes | often easy |
| Climate | yes | some | some | for many things | not at scale | limited |
| Economics | no | some | some | flaky microfoundations, often lacking or unused | not at scale | limited |

(Ironically, I’m implying a model here, which is probably wrong, but hopefully useful.)

A lot of our most interesting problems are currently at the heuristics end of the spectrum. Some may migrate toward better model performance, and others probably won’t – particularly models of decision processes that willfully ignore models.

Golf is the answer

Lots of golf.

I couldn’t resist a ClimateDesk article mocking carbon-sucking golf balls, so I took a look at the patent.

I immediately started wondering about the golf ball’s mass balance. There are rules about these things. But the clever Nike engineers thought of everything,

Generally, a salt may be formed as a result of the reaction between the carbon dioxide absorbent and the atmospheric carbon dioxide. The presence of this salt may cause the golf ball to increase in weight. This increase in weight may be largely negligible, or the increase in weight may be sufficient to be measurable and affect the play characteristics of the golf ball. The United States Golf Association (USGA) official Rules of Golf require that a regulation golf ball weigh no more than 45.93 grams. Therefore, a golf ball in accordance with this disclosure may be manufactured to weigh some amount less than 45.93, so that the golf ball may increase in weight as atmospheric carbon dioxide is absorbed. For example, a finished golf ball manufactured in accordance with this disclosure may weigh 45.5 grams before absorbing any significant amount of atmospheric carbon dioxide.

Let’s pretend that 0.43 grams of CO2 is “significant” and do the math here. World energy CO2 emissions were about 32.6 gigatons in 2011. That’s 32.6 petagrams, so you’d need about 76 petaballs per year to absorb it. That’s 76,000,000,000,000,000 balls per year.

It doesn’t sound so bad if you think of it as 11 million balls per capita per year. Think of the fun you could have with 11 million golf balls! Plus, you’d have 22 million next year, except for the ones you whacked into a water trap.

Because the conversion efficiency is so low (less than half a gram of CO2 uptake per 45-gram ball, i.e. about 1%), you need roughly 100 grams of ball per gram of CO2 absorbed. This means that the mass flow of golf balls would have to exceed the total mass flow of food, fuels, minerals and construction materials on the planet by a factor of about 50.

76 petaballs take up about 4850 cubic kilometers, so we’d soon have to decide where to put them. I think Scotland would be appropriate. We’d only have to add a 60-meter layer of balls to the country each year.

A train bringing 10,000 tons of coal to a power plant (three days of fuel for 500MW) would have to make a lot more trips to carry away the 1,000,000 tons of balls needed to offset its emissions. That’s a lot of rail traffic, so it might make sense to equip plants with an array of 820 rotary cannon retrofitted to fire balls into the surrounding countryside. That’s only 90,000 balls per second, after all. Perhaps that’s what analysts mean when they say that there are no silver bullets, only silver buckshot. In any case, the meaning of “climate impacts” would suddenly be very palpable.
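For anyone who wants to check the arithmetic, here’s a rough back-of-envelope sketch in Python. The per-ball CO2 uptake comes from the patent’s example; the packing fraction, Scotland’s area, total global material extraction, and population are my own round-number assumptions.

```python
# Back-of-envelope check of the petaball arithmetic. The per-ball CO2 uptake
# comes from the patent's example; packing fraction, population, Scotland's
# area, and global material extraction are rough assumptions of mine.
import math

co2_per_ball = 45.93 - 45.5                  # g CO2 absorbed per ball (45.93 g limit - 45.5 g as made)
emissions = 32.6e15                          # g CO2/yr (~32.6 Gt world energy emissions, 2011)
balls = emissions / co2_per_ball             # balls needed per year
print(f"balls/yr: {balls:.2e}")              # ~7.6e16, i.e. ~76 petaballs

print(f"balls per capita: {balls / 7e9:.2e}")  # ~1.1e7 for ~7 billion people

ball_mass = 45.5                             # g per finished ball
mass_gt = balls * ball_mass / 1e15           # Gt of balls per year
print(f"ball mass flow: {mass_gt:.0f} Gt/yr "
      f"(~{mass_gt / 70:.0f}x an assumed ~70 Gt/yr of total material extraction)")

radius = 0.04267 / 2                         # m (regulation minimum diameter, 42.67 mm)
vol_m3 = balls * (4 / 3) * math.pi * radius**3 / 0.64   # assume ~64% random packing
print(f"volume: {vol_m3 / 1e9:.0f} km^3; "
      f"layer over Scotland: {vol_m3 / 1e9 / 78800 * 1000:.0f} m/yr")  # ~78,800 km^2

# Disposal rate for one 500 MW plant, taking the post's figure of 1,000,000
# tons of balls per 3-day trainload of coal at face value:
print(f"balls/s: {1e12 / ball_mass / (3 * 86400):,.0f}")
```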

Dealing with this enormous mass flow would be tough, but there would be some silver linings. For starters, the earth’s entire fossil fuel output would be diverted to making plastics, so emissions would plummet, and the whole scale of the problem would shrink to manageable proportions. Golf balls are pretty tough, so those avoided emissions could be sequestered for decades. In addition, golf balls float, and they’re white, so we could release them in the arctic to replace melting sea ice.

Who knows what other creative uses of petaballs the free market will invent?

Update, courtesy of Jonathan Altman:

[Image: the marbles scene from Animal House]

Facebook Reloaded 2013

Facebook has climbed out of its 2012 doldrums to a market cap of $115 billion today. So, I’ve updated my user tracking and valuation model, just for kicks.

As in my last update, user growth continues to modestly exceed the original estimates. The user “carrying capacity” now is about 1.35 billion users, vs. 0.95 billion originally (K950 on the graph) and 1.07 billion in 2012 – within the range of scenarios I originally ran, but well above the “best guess”. My guess is that the model will continue to underpredict for a while, because this is an inevitable pitfall of using a single diffusion process to represent what is surely the aggregate of several processes – stationary vs. mobile, different regions and demographics, etc. Of course, in the long run, users could also go down, which the basic logistic model can’t represent.

You can see what’s going on if you plot growth against users – the right tail doesn’t go to 0 as fast as the logistic assumes:
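For a pure logistic, net growth as a function of users is a symmetric hump that reaches zero at the carrying capacity, so a data tail that falls off more slowly is a sign of mis-fit. Here’s a minimal sketch of that reference curve (illustrative parameters, not the fitted model):

```python
# Illustrative phase plot: logistic growth rate vs. installed base.
# Parameters are made up for illustration, not fitted to Facebook data.
import numpy as np
import matplotlib.pyplot as plt

K = 1.35e9                              # assumed carrying capacity (users)
r = 0.5                                 # assumed fractional growth rate (1/yr)
users = np.linspace(0, K, 200)
growth = r * users * (1 - users / K)    # dU/dt for a pure logistic

plt.plot(users / 1e9, growth / 1e6)
plt.xlabel("users (billions)")
plt.ylabel("user growth (millions/yr)")
plt.title("Pure logistic: growth falls to 0 at the carrying capacity")
plt.show()
```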

User growth probably isn’t a huge component of valuation, because these are modest differences on a percentage basis. Marginal users may be less valuable as well.

With revenue per user constant at $7/user/year, 30% margins, and the current best-guess model, FB is worth about $35 billion. What does it take to get to the ballpark of current market capitalization? Here’s one way:

  • The carrying capacity ceiling for users continues to grow to 2 billion, and
  • revenue per user rises to $25/user/year

This preserves some optimistic base case assumptions,

  • The risk-free interest rate takes 5 more years to rise substantially above 0 to a (still low) long term rate of 3%
  • Margins stay at 30% as in 2009-2011 (vs. 18% y.t.d.)
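To make the arithmetic concrete, here’s a minimal sketch of the logistic-diffusion-plus-discounted-cash-flow logic in Python. The initial user base, adoption rate, discount rate, and horizon are my own assumptions rather than calibrated values; the actual model is the Vensim file linked below.

```python
# Minimal sketch: logistic user growth feeding a discounted cash flow.
# Initial users, adoption rate, discount rate, and horizon are assumptions,
# not the calibrated values from the Vensim model linked below.

def fb_value(K, rev_per_user, margin=0.30, u0=1.2e9, r=0.4,
             discount=0.08, years=30):
    users, pv = u0, 0.0
    for t in range(1, years + 1):
        users += r * users * (1 - users / K)   # logistic adoption step
        cash = users * rev_per_user * margin   # annual operating cash flow
        pv += cash / (1 + discount) ** t       # discount back to today
    return pv

print(f"base case   (K=1.35B, $7/user):  ${fb_value(1.35e9, 7.0) / 1e9:.0f}B")
print(f"optimistic  (K=2.0B,  $25/user): ${fb_value(2.0e9, 25.0) / 1e9:.0f}B")
```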

Think it’ll happen?

facebook 3 update 2.vpm

Summary for Suckers

The NIPCC critique is, ironically, a compelling argument in favor of the IPCC assessment. Why? Well, science is about evaluation of competing hypotheses. The NIPCC report collects a bunch of alternatives to mainstream climate science in one place, where it’s easy to see how pathetic they are. If this is the best climate skeptics can muster, their science must be exceedingly weak.

The NIPCC (Nongovernmental International Panel on Climate Change, a.k.a. Not IPCC) is the Heartland Institute’s rebuttal of the IPCC assessments. Apparently the latest NIPCC report has been mailed to zillions of teachers. As a homeschooling dad, I’m disappointed that I didn’t get mine. Well, not really.

It would probably take more pages to debunk the NIPCC report than it occupies, but others are chipping away at it. Some aspects, like temperature cherry-picking, are like shooting fish in a barrel.

The SPM, and presumably the entire report that it summarizes, seems to labor under the misapprehension that the IPCC is itself a body that conducts science. In fact, the IPCC assessments are basically a giant literature review. So, when the Heartland panel writes,

In contradiction of the scientific method, the IPCC assumes its implicit hypothesis is correct and that its only duty is to collect evidence and make plausible arguments in the hypothesis’s favor.

we must remember that “the IPCC” is shorthand for a vast conspiracy of scientists, coordinated by an invisible hand.

The report organizes the IPCC argument into 3 categories: “Global Climate Model (GCM) projections,” “postulates,” and “circumstantial evidence.” This is a fairly ridiculous caricature of the actual body of work. Most of what is dismissed as postulates could better be described as, “things we’re too lazy to explore properly,” for example. But my eye strays straight to the report’s misconceptions about modeling.

First, the NIPCC seems to have missed the fact that GCMs are not the only models in use. There are EMICs (Earth system models of intermediate complexity) and low-order energy balance models as well.

The NIPCC has taken George Box’s “all models are wrong, some are useful” and run with it:

… Global climate models produce meaningful results only if we assume we already know perfectly how the global climate works, and most climate scientists say we do not (Bray and von Storch, 2010).

How are we to read this … all models are useless, unless they’re perfect? Of course, no models are perfect, therefore all models are useless. Now that’s science!

NIPCC trots out a von Neumann quote that’s almost as tired as Box:

with four parameters I can fit an elephant, and with five I can make him wiggle his trunk

In models with lots of reality checks available (i.e. laws of physics), it just isn’t that easy. And the earth is a very big elephant, which means that there’s a rather vast array of data to be fit.

The NIPCC seems to be aware of only a few temperature series, but the AR5 report devotes 200 pages (Chapter 9) to model evaluation, with results against a wide variety of spatial and temporal distributions of physical quantities. Models are obviously far from perfect, but a lot of the results look good, in ways that exceed the wildest dreams of social system modelers.

NIPCC doesn’t seem to understand how this whole “fit” thing works.

Model calibration is faulty as it assumes all temperature rise since the start of the industrial revolution has resulted from human CO2 emissions.

This is blatantly false, not only because it contradicts the actual practice of attribution, but because there is no such parameter as “fraction of temp rise due to anthro CO2.” One can’t assume the answer to the attribution question without passing through a lot of intermediate checks, like conforming to physics and data other than global temperature. In complex models, where the contribution of any individual parameter to the outcome is likely to be unknown to the modeler, and the model is too big to calibrate by brute force, the vast majority of parameters must be established bottom up, from physics or submodels, which makes it extremely difficult for the modeler to impose preconceptions on the complete model.

Similarly,

IPCC models stress the importance of positive feedback from increasing water vapor and thereby project warming of ~3-6°C, whereas empirical data indicate an order of magnitude less warming of ~0.3-1.0°C.

Data by itself doesn’t “indicate” anything. Data only speaks insofar as it refutes (or fails to refute) a model. So where is the NIPCC model that fits available data and yields very low climate sensitivity?

The bottom line is that, if it were really true that models have little predictive power and admit many alternative calibrations (a la the elephant), it should be easy for skeptics to show model runs that fit the data as well as mainstream results, with assumptions that are consistent with low climate sensitivity. They wouldn’t necessarily need a GCM and a supercomputer; modest EBMs or EMICs should suffice. This they have utterly failed to demonstrate.