Is London a big whale?

Why do cities survive atom bombs, while companies routinely go belly up?

Geoffrey West on The Surprising Math of Cities and Corporations:

There’s another interesting video with West in the conversations at Edge.

West looks at the metabolism of cities, and observes scale-free behavior of good stuff (income, innovation, input efficiency) as well as bad stuff (crime, disease – products of entropy). The destiny of cities, like companies, is collapse, except to the extent that they can innovate at an accelerating rate. Better hope the Singularity is on schedule.

Thanks to whoever it was at the SD conference who pointed this out!

The danger of path-dependent information flows on the web

Eli Pariser argues that “filter bubbles” are bad for us and bad for democracy:

As web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a “filter bubble” and don’t get exposed to information that could challenge or broaden our worldview.

Filter bubbles are close cousins of confirmation bias, groupthink, polarization and other cognitive and social pathologies.

A key feedback is this reinforcing loop, from Sterman & Wittenberg’s model of path dependence in Kuhnian scientific revolutions:

Anomalies

As confidence in an idea grows, the delay in recognition (or frequency of outright rejection) of anomalous information grows larger. As a result, confidence in the idea – flat earth, 100mpg carburetor – can grow far beyond the level that would be considered reasonable if contradictory information were recognized.
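The loop is easy to caricature in a few lines of simulation. This is a hypothetical sketch with made-up parameters and functional forms, not the actual Sterman & Wittenberg model: 70% of incoming evidence confirms the idea, 30% is anomalous, and the only question is whether anomalies get recognized.

```python
# Caricature of the confidence/anomaly-rejection loop. All parameters
# here are hypothetical, chosen only to illustrate the feedback.

def simulate(erode_recognition, steps=2000):
    confidence = 0.5
    for _ in range(steps):
        # With the reinforcing loop active, the chance of recognizing an
        # anomaly shrinks as confidence grows; otherwise it stays at 1.
        p_recognize = (1.0 - confidence) if erode_recognition else 1.0
        confirm = 0.7 * 0.02 * (1.0 - confidence)           # confirming evidence
        disconfirm = 0.3 * 0.04 * p_recognize * confidence  # recognized anomalies
        confidence += confirm - disconfirm
    return confidence

locked_in = simulate(erode_recognition=True)    # runs away toward certainty
calibrated = simulate(erode_recognition=False)  # settles near 0.54
```

With recognition intact, confidence equilibrates where confirming and disconfirming evidence balance; with recognition eroding, the same evidence stream drives confidence toward certainty.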

The dynamics resulting from this and other positive feedbacks play out in many spheres. Wittenberg & Sterman give an example:

The dynamics generated by the model resemble the life cycle of intellectual fads. Often a promising new idea rapidly becomes fashionable through excessive optimism, aggressive marketing, media hype, and popularization by gurus. Many times the rapid influx of poorly trained practitioners, or the lack of established protocols and methods, causes expectations to outrun achievements, leading to a backlash and disaffection. Such fads are commonplace, especially in (quack) medicine and most particularly in the world of business, where “new paradigms” are routinely touted in the pages of popular journals of management, only to be displaced in the next issue by what many business people have cynically come to call the next “flavor of the month.”

Typically, a guru proposes a new theory, tool, or process promising to address persistent problems facing businesses (that is, a new paradigm claiming to solve the anomalies that have undermined the old paradigm.) The early adopters of the guru’s method spread the word and initiate some projects. Even in cases where the ideas of the guru have little merit, the energy and enthusiasm a team can bring to bear on a problem, coupled with Hawthorne and placebo effects and the existence of “low hanging fruit” will often lead to some successes, both real and apparent. Proponents rapidly attribute these successes to the use of the guru’s ideas. Positive word of mouth then leads to additional adoption of the guru’s ideas. (Of course, failures are covered up and explained away; as in science there is the occasional fraud as well.) Media attention further spreads the word about the apparent successes, further boosting the credibility and prestige of the guru and stimulating additional adoption.

As people become increasingly convinced that the guru’s ideas work, they are less and less likely to seek or attend to disconfirming evidence. Management gurus and their followers, like many scientists, develop strong personal, professional, and financial stakes in the success of their theories, and are tempted to selectively present favorable and suppress unfavorable data, just as scientists grow increasingly unable to recognize anomalies as their familiarity with and confidence in their paradigm grows. Positive feedback processes dominate the dynamics, leading to rapid adoption of those new ideas lucky enough to gain a sufficient initial following. …

The wide range of positive feedbacks identified above can lead to the swift and broad diffusion of an idea with little intrinsic merit because the negative feedbacks that might reveal that the tools don’t work operate with very long delays compared to the positive loops generating the growth. …

For filter bubbles, I think the key positive loops are as follows:

FilterBubblesLoops

Loops R1 are the user’s well-worn path. We preferentially visit sites presenting information (theory x or y) in which we have confidence. In doing so, we consider only a subset of all information, building our confidence in the visited theory. This is a built-in part of our psychology, and to some extent a necessary part of the process of winnowing the world’s information fire hose down to a usable stream.

Loops R2 involve the information providers. When we visit a site, advertisers and other observers (Nielsen) notice, and this provides the resources (ad revenue) and motivation to create more content supporting theory x or y. This has also been a part of the information marketplace for a long time.

R1 and R2 are stabilized by some balancing loops (not shown). Users get bored with an all-theory-y diet, and seek variety. Providers seek out controversy (real or imagined) and sensationalize x-vs-y battles. As Pariser points out, there’s less scope for the positive loops to play out in an environment with a few broad media outlets, like city newspapers. The front page of the Bozeman Daily Chronicle has to work for a wide variety of readers. If the paper let the positive loops run rampant, it would quickly lose half its readership. In the online world, with information customized at the individual level, there’s no such constraint.

Individual filtering introduces R3: as the filter observes site visit patterns, it preferentially serves up information consistent with past preferences. This introduces a third set of reinforcing feedback processes: as users begin to see what they prefer, they also learn to prefer what they see. In addition, on Facebook and other social networking sites every person is essentially a site, and people include one another in networks preferentially. This is another mechanism implementing loop R1 – birds of a feather flock together and share information consistent with their mutual preferences, potentially following one another down conceptual rabbit holes.
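Loop R3 behaves like a Pólya urn. Here’s a toy sketch (my construction, not anything from Pariser) of a filter that recommends theory x or y in proportion to past clicks; whatever split of attention emerges early gets locked in.

```python
import random

def filter_bubble(steps=5000, seed=1):
    """Toy filter: serve theory x or y in proportion to past clicks."""
    random.seed(seed)
    clicks = {"x": 1.0, "y": 1.0}  # agnostic starting point
    shares = []
    for _ in range(steps):
        total = clicks["x"] + clicks["y"]
        # The filter serves x with probability equal to x's click share,
        # and the user clicks whatever is served, reinforcing the filter.
        served = "x" if random.random() < clicks["x"] / total else "y"
        clicks[served] += 1
        shares.append(clicks["x"] / (total + 1))
    return shares

share_x = filter_bubble()
```

The share of x wanders early, then freezes at an essentially arbitrary level: the bubble’s eventual content mix is determined by early accidents of browsing history, not by the merits of x or y.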

The result of the social web and algorithmic filtering is to upset the existing balance of positive and negative feedback. The question is, were things better before, or are they better now?

I’m not exactly sure how to tell. Presumably one could observe trends in political polarization and duration of fads for an indication of the direction of change, but that still leaves open the question of whether we have more or less than the “optimal” quantity of pet rocks, anti-vaccine campaigns and climate skepticism.

My suspicion is that we now have too much positive feedback. This is consistent with Wittenberg & Sterman’s insight from the modeling exercise, that the positive loops are fast, while the negative loops are weak or delayed. They offer a prescription for that,

The results of our model suggest that the long-term success of new theories can be enhanced by slowing the positive feedback processes, such as word of mouth, marketing, media hype, and extravagant claims of efficacy by which new theories can grow, and strengthening the processes of theory articulation and testing, which can enhance learning and puzzle-solving capability.

In the video, Pariser implores the content aggregators to carefully ponder the consequences of filtering. I think that also implies more negative feedback in algorithms. It’s not clear that providers have an incentive to do that though. The positive loops tend to reward individuals for successful filtering, while the risks (e.g., catastrophic groupthink) accrue partly to society. At the same time, it’s hard to imagine a regulatory remedy that does not flirt with censorship.

Absent a global fix, I think it’s incumbent on individuals to practice good mental hygiene, by seeking diverse information that stands some chance of refuting their preconceptions once in a while. If enough individuals demand transparency in filtering, as Pariser suggests, it may even be possible to gain some local control over the positive loops we participate in.

I’m not sure that goes far enough though. We need tools that serve the social equivalent of “strengthening the processes of theory articulation and testing” to improve our ability to think and talk about complex systems. One such attempt is the “collective intelligence” behind Climate Colab. It’s not quite Facebook-scale yet, but it’s a start. Semantic web initiatives are starting to help by organizing detailed data, but we’re a long way from having a “behavioral dynamic web” that translates structure into predictions of behavior in a shareable way.

Update: From Tech Review, technology for breaking the bubble

The alien Hail Mary, and other climate policy plays

Cap & Trade is suspended in Europe and dead in the US, and the techno delusion may not be far behind. Some strange bedfellows have lined up behind the idea of R&D-driven climate policy. But now it appears that clean energy research is not a bipartisan no-brainer after all. Energy committee member Rand Paul’s bill would not only cut energy R&D funding by eliminating DOE altogether, it would cut our ability to even monitor the global environment by gutting NOAA and NASA. That only leaves one option:

13 In the otherwise dull year 2327, mankind successfully contacts aliens. Well, technically their answering machine, as the aliens themselves have gone to Alpha Centauri for the summer.

14 Desperate for help, humans leave increasingly stalker-y messages, turning off the aliens with how clingy our species is.

15 The aliens finally agree to equip Earth with a set of planet-saving carbon neutralizers, but work drags on as key parts must be ordered from a foreign supplier in the Small Magellanic Cloud.

16 The job comes in $3.7 quadrillion above estimate. Humanity thinks it is being taken advantage of but isn’t sure.

“20 things you didn’t know about the future,” in Discover

Seriously, where does that leave us? In terms of what we should do, I don’t think much has changed. As I wrote a while back, the climate policy table needs four legs:

  1. Prices
  2. Technology (the landscape of possibilities on which we make decisions)
  3. Institutional rules and procedures
  4. Preferences, operating within social networks

Preferences and technology are really the fundamentals among the four. Technology represents the set of options available to us for transforming energy and resources into life and play. Preferences guide how we choose among those options. Prices and rules are really just the information signals that allow us to coordinate those decisions.

However, neither preferences nor technology are as fundamental as they look. Models generally take preferences as a given, but in fact they’re endogenous. What we want on a day to day basis is far removed from our most existential needs. Instead, we construct preferences on the basis of technologies we know about, prices, rules, and the preferences and choices of others. That creates norms, fads, marketing, keep-up-with-the-Joneses and other positive feedback mechanisms. Similarly, technology is more than discovery of principles and invention of devices. Those innovations don’t do anything until they’re woven into the fabric of society, guided by (you guessed it) prices, institutions, and preferences. That creates more positive feedbacks, like the chicken-egg problems of alternative fuel vehicle deployment.

If we could all get up in the morning and work out in our heads how to make Pareto-efficient decisions, we might not need prices and institutions, but we can’t, so we do. Prices matter because they’re a primary carrier of information through the economy. Not every decision is overtly economic, so we also have institutions, rules and routinized procedures to guide behavior. The key is that these signals should serve our values (the deeply held ones we’d articulate upon reflection, which might differ from the preferences revealed by transactions), not the other way around.

Preferences clearly can have a lot of direct leverage on behavior – if we all equated driving a big gas guzzler with breaking wind in a crowded elevator, we’d probably see different cars on the lot. However, most decisions are not so transparent. It’s already hard to choose “paper or plastic?” How about “desktop or server?” When you add multiple layers of supply chain and varied national origins to the picture, it becomes very hard to create a green information system paralleling the price system. It’s probably even harder to get individuals and firms to conform to such a system, when there are overwhelming evolutionary rewards to defection. Borrowing from Giraudoux, the secret to success is sustainability; once you can fake that you’ve got it made.

Similarly, the sheer complexity of society makes it hard to predict which technologies constitute a winning combination for creating low-carbon happiness. A technology-led strategy runs the risk of failing in the attempt to recreate a high-carbon lifestyle with low-carbon inputs.  I don’t think anyone has the foresight to select that portfolio. Even if we could do it, there’s no guarantee that, absent other signals, new technologies will be put to their intended uses, or that they will survive the “valley of death” between R&D and commercialization. It’s like airdropping a tyrannosaurus into an arctic ecosystem – sure, he’s got big teeth, but will he survive?

Complexity also militates against a rules-led approach. It’s simply too cumbersome to codify a rich set of tradeoffs in command-and-control regulations, which can become an impediment to innovation and are subject to regulatory capture. Also, systems like the CAFE standard create shadow prices of compliance, rather than explicit prices. This makes it hard to diagnose the effects of constraints and to coordinate them with other policies. There’s a niche for rules, but they shouldn’t be the big stick (on the other hand, eliminating the legacy of some past measures could be a win-win).

That’s why emissions pricing is really a keystone policy. Once you have prices aligned with the long term value of stable climate (and other resources), it’s easier to align the other legs of the table. Emissions prices create huge incentives for private R&D, leaving a smaller gap for government to fill – just the market failures in appropriation of benefits of technology. The points of pain where institutions are inadequate, or stand in the way of progress, will be more evident and easier to correct, and there will be less burden on policy making institutions, because they won’t have to coordinate many small programs to do the job of one big signal. Preferences will start evolving in a low-carbon direction, with rewards to those who (through luck or altruism) have already done so. Most importantly, emissions pricing gets some changes moving now, not after a decade or two of delay.

Concretely, I still think an upstream, revenue-neutral carbon tax is a practical implementation route. If there’s critical mass among trade partners, it could even evolve into a harmonized global system through the pressure of border carbon adjustments. The question is, how to get started?

The rebound delusion

Lately it’s become fashionable to claim that energy efficiency is useless, because the rebound effect will always eat it up. This is actually hogwash, especially in the short term. James Barrett has a nice critique of the super-rebound position at RCE. Some excerpts:

To be clear, the rebound effect is real. The theory behind it is sound: Lower the cost of anything and people will use more of it, including the cost of running energy consuming equipment. But as with many economic ideas that are sound theory (like the idea that you can raise government revenues by cutting tax rates), the trick is in knowing how far to take them in reality. (Cutting tax rates from 100% to 50% would certainly raise revenues. Cutting them from 50% to 0% would just as surely lower them.)

The problem with knowing how far to take things like this is that unlike real scientists who can run experiments in a controlled laboratory environment, economists usually have to rely on what we can observe in the real world. Unfortunately, the real world is complicated and trying to disentangle everything that’s going on is very difficult.

Owen cleverly avoids this problem by not trying to disentangle anything.

One supposed example of the Jevons paradox that he points to in the article is air conditioning. Citing a conversation with Stan Cox, author of Losing Our Cool, Owen notes that between 1993 and 2005, air conditioners in the U.S. increased in efficiency by 28%, but by 2005, homes with air conditioning increased their consumption of energy for their air conditioners by 37%.

Accounting only for the increased income over the timeframe and fixing Owen’s mistake of assuming that every air conditioner in service is new, a few rough calculations point to an increase in energy use for air conditioning of about 30% from 1993 to 2005, despite the gains in efficiency. Taking into account the larger size of new homes and the shift from room to central air units could easily account for the rest.

All of the increase in energy consumption for air conditioning is easily explained by factors completely unrelated to increases in energy efficiency. All of these things would have happened anyway. Without the increases in efficiency, energy consumption would have been much higher.

It’s easy to be sucked in by stories like the ones Owen tells. The rebound effect is real and it makes sense. Owen’s anecdotes reinforce that common sense. But it’s not enough to observe that energy use has gone up despite efficiency gains and conclude that the rebound effect makes efficiency efforts a waste of time, as Owen implies. As our per capita income increases, we’ll end up buying more of lots of things, maybe even energy. The question is how much higher would it have been otherwise.

Why is the rebound effect suddenly popular? Because an overwhelming rebound effect is needed to make sense of proposals to give up on near-term emissions prices and invest in technology, praying for a clean-energy-supply miracle in a few decades.

As Barrett points out, the notion that energy efficiency increases energy use is an exaggeration of the rebound effect. For efficiency to increase use, energy consumption has to be elastic (e<-1). I don’t remember ever seeing an economic study that came to that conclusion. In a production function, such values aren’t physically plausible, because they imply zero energy consumption at a finite energy price.
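The arithmetic is simple: if service demand scales as the service price raised to the elasticity e, and efficiency divides the price of the service, then energy use scales as efficiency^-(1+e), so backfire requires e < -1. A sketch with illustrative elasticity values (the 28% gain echoes the air conditioner example; the elasticities are hypothetical):

```python
def energy_use_ratio(efficiency_gain, elasticity):
    """Energy use after / before an efficiency improvement.

    Service demand S ~ p_service**elasticity, p_service = p_energy/eff,
    and energy use E = S/eff, so E scales as eff**(-(1 + elasticity)).
    """
    eff = 1.0 + efficiency_gain
    return eff ** (-(1.0 + elasticity))

modest = energy_use_ratio(0.28, -0.3)      # inelastic demand: use falls ~16%
knife_edge = energy_use_ratio(0.28, -1.0)  # unit elastic: no net change
backfire = energy_use_ratio(0.28, -1.5)    # e < -1: use actually rises
```

Only the last case, with demand implausibly elastic, makes efficiency self-defeating; for any elasticity between 0 and -1 there is partial rebound, but net energy savings.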

Therefore, the notion that pursuing energy efficiency makes the climate situation worse is a fabrication. Doubly so, because of an accounting sleight-of-hand. Consider two extremes:

  1. no rebound effects (elasticity ~ 0): efficiency policies work, because they reduce energy use and its associated negative social externalities.
  2. big rebound effects (elasticity < -1): efficiency policies increase energy use, but they do so because there’s a huge private benefit from the increase in mobility or illumination or whatever private purpose the energy is put to.

The super-rebound crowd pooh-poohs #1 and conveniently ignores the welfare outcome of #2, accounting only for the negative side effects.

If rebound effects are modest, as they surely are, it makes much more sense to guide R&D and deployment for both energy supply and demand with a current price signal on emissions. That way, firms make distributed decisions about where to invest, rather than the government picking winners, and appropriate tradeoffs between conservation and clean supply are possible. The price signal can be adapted to meet environmental constraints in the face of rising income. Progress starts now, rather than after decades of waiting for the discover->apply->deploy->embody pipeline.

If the public isn’t ready for it, that doesn’t mean analysts should bargain against their own good sense by recommending things that might be popular, but are unlikely to work. That’s like a doctor advising a smoker to give to cancer research, without mentioning that he really ought to quit.

Update: there’s an excellent followup at RCE.

Technology first?

The idea of a technology-led solution to climate is gaining ground, most recently with a joint AEI-Brookings proposal. Kristen Sheeran has a nice commentary at RCE on the prospects. Go read it.

I’m definitely bearish on the technology-first idea. I agree that technology investment is a winner, with or without environmental externalities. But for high tech to solve the climate problem by itself, absent any emissions pricing, may require technical discontinuities that are less than likely. That makes technology-first the Hail-Mary pass of climate policy: something you do when you’re out of options.

The world isn’t out of options in a physical sense; it’s just that the public has convinced itself otherwise. That’s a pity.

The emerging climate technology delusion

What do you do when feasible policies aren’t popular, and popular policies aren’t feasible?

Let’s start with a joke:

Lenin, Stalin, Khrushchev and Brezhnev are travelling together on a train. Unexpectedly the train stops. Lenin suggests: “Perhaps, we should call a subbotnik, so that workers and peasants fix the problem.” Khrushchev suggests rehabilitating the engineers, and leaves for a while, but nothing happens. Stalin, fed up, steps out to intervene. Rifle shots are heard, but when he returns there is still no motion. Brezhnev reaches over, pulls the curtain, and says, “Comrades, let’s pretend we’re moving.” (Apologies to regulars for the repeat.)

The Soviet approach would be funny, if it weren’t the hottest new trend in climate policy. The latest installment is a Breakthrough article, The emerging climate technology consensus.

Enabling an R&D addiction

I actually mean that in a good way. A society addicted to learning and innovation would be pretty cool.

However, it’s not all about money. Quoting the OSTP Science of Science Policy Roadmap,

Investment in science and technology, however, is only one of the policy instruments available to science policy makers; others include fostering the role of competition and openness in the promotion of discovery, the construction of intellectual property systems, tax policy, and investment in a STEM workforce. However, the probable impact of these various policies and interventions is largely unknown. This lack of knowledge can lead to serious and unintended consequences.

In other words, to spend $16 billion/year wisely, you have to get a number of moving parts coordinated, including:

  1. Prices & tax policy. If prices of natural resources, national security, clean air, health, etc. don’t reflect their true values to society, innovation policy will be pushing against the tide. Innovations will be DOA in the marketplace. The need for markets for products is matched by the need for markets for innovators.
  2. Workforce management. Just throwing money at a problem can create big dislocations in researcher demographics. Put it all into academic research, and you create a big glut of graduates who have no viable career path in science. Put it all into higher education, and your pipeline of talent will be starved by poor science preparation at lower levels. Put it all into labs and industry, and it’ll turn into pay raises for a finite pool of workers. Balance is needed.
  3. Intellectual property law. This needs to reflect the right mix of incentives for private investment and recognition that creations are only possible to the extent that we stand on the shoulders of giants and live in a society with rule of law. Currently I suspect that law has swung too far toward eternal protection that actually hinders innovation.

At the end of the day, #1 is most important. Regardless of the productivity of the science enterprise, someone will probably figure out how to make graphene cables or an aspen tree that bears tomatoes. The key question, then, is how society puts those things to use, to solve its problems and improve welfare. That requires a delicate balancing act, between preserving diversity and individual freedom to explore new ways of doing things, and preventing externalities from harming everyone else.

R&D – crack for techno-optimists

I like R&D. Heck, I basically do R&D. But the common argument, that people won’t do anything hard to mitigate emissions or reduce energy use, so we need lots of R&D to find solutions, strikes me as delusional.

The latest example to cross my desk (via the NYT) is the new American Energy Innovation Council’s recommendations,

Create an independent national energy strategy board.
Invest $16 billion per year in clean energy innovation.
Create Centers of Excellence with strong domain expertise.
Fund ARPA-E at $1 billion per year.
Establish and fund a New Energy Challenge Program to build large-scale pilot projects.

Let’s look at the meat of this – $16 billion per year in energy innovation funding. Historic funding looks like this:

R&D funding

Total public energy R&D, compiled from Gallagher, K.S., Sagar, A, Segal, D, de Sa, P, and John P. Holdren, “DOE Budget Authority for Energy Research, Development, and Demonstration Database,” Energy Technology Innovation Project, John F. Kennedy School of Government, Harvard University, 2007. I have a longer series somewhere, but no time to dig it up. Basically, spending was negligible (or not separately accounted for) before WWII, and ramped up rapidly after 1973.

The data above reflects public R&D; when you consider private spending, the jump to $16 billion represents maybe a factor of 3 or 4 increase. What does that do for you?

Consider a typical model of technical progress, the two-factor learning curve:

cost = (cumulative R&D)^A*(cumulative experience)^B

The A factor represents improvement from deliberate R&D, while the B factor reflects improvement from production experience like construction and installation of wind turbines. A and B are often expressed as learning rates, the multiple on cost that occurs per doubling of the relevant cumulative input. In other words, A,B = ln(learning rate)/ln(2). Typical learning rates reported are .6 to .95, or cost reductions of 40% to 5% per doubling, corresponding with A/B values of about -.74 to -.07, respectively. Most learning rate estimates are on the high end (smaller reductions per doubling), particularly when the two-factor function is used (as opposed to just one component).

Let’s simplify so that

cost = (cumulative R&D)^A

and use an aggressive R&D learning rate (.7), for A=-0.5. In steady state, with R&D growing at the growth rate of the economy (call it g), cost falls at the rate A*g (because the integral of exponentially growing spending grows at the same rate, and exp(g*t)^A = exp(A*g*t)).

That’s insight number one: a change in R&D allocation has no effect on the steady-state rate of progress in cost. Obviously one could formulate alternative models of technology where that is not true, but a compelling argument for this sort of relationship is that the per capita growth rate of GDP has been steady for over 250 years. A technology model with a stronger steady-state spending->cost relationship would grow super-exponentially.

Insight number two is what the multiple in spending (call it M) does get you: a shift in the steady-state growth trajectory to a new, lower-cost path, by M^A. So, for our aggressive parameter, a multiple of 4 as proposed reduces steady-state costs by a factor of about 2. That’s good, but not good enough to make solar competitive with baseload coal electric power soon.
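The arithmetic behind both insights takes only a few lines, using the parameters already in the text (0.7 learning rate, growth at roughly the economy’s rate):

```python
import math

learning_rate = 0.7                        # aggressive R&D learning rate
A = math.log(learning_rate) / math.log(2)  # exponent per doubling, ~ -0.51

# Insight 1: with R&D spending growing at the economy's rate g, cumulative
# R&D also grows at g, so cost falls at the fixed rate -A*g. Raising the
# *level* of spending does not change this steady-state rate.
g = 0.03
steady_decline = -A * g  # ~1.5%/year cost decline

# Insight 2: a sustained multiple M on spending shifts the whole cost
# trajectory down by the factor M**A.
M = 4
cost_shift = M ** A  # ~0.49: quadrupling R&D roughly halves costs
```

So even under an aggressive learning assumption, the proposed quadrupling buys a one-time halving of the cost trajectory, not a faster ongoing rate of progress.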

Given historic cumulative public R&D, 3%/year baseline growth in spending, a 0.8 learning rate (a little less aggressive), a quadrupling of R&D spending today produces cost improvements like this:

R&D future 4x

Those are helpful, but not radical. In addition, even if R&D produces something more miraculous than it has historically, there are still big nontechnical lock-in humps to overcome (infrastructure, habits, …). Overcoming those humps is a matter of deployment more than research. The Energy Innovation Council is definitely enthusiastic about deployment, but without internalizing the externalities associated with energy production and use, how is that going to work? You’d either need someone to pick winners and implement them with a mishmash of credits and subsidies, or you’d have to hope for/wait for cleantech solutions to exceed the performance of conventional alternatives.

The latter approach is the “stone age didn’t end because we ran out of stones” argument. It says that cleantech (iron) will only beat conventional (stone) when it’s unequivocally better, not just for the environment, but also convenience, cost, etc. What does that say about the prospects for CCS, which is inherently (thermodynamically) inferior to combustion without capture? The reality is that cleantech is already better, if you account for the social costs associated with energy. If people aren’t willing to internalize those social costs, so be it, but let’s not pretend we’re sure that there’s a magic technical bullet that will yield a good outcome in spite of the resulting perverse incentives.


Stop talking, start studying?

Roger Pielke Jr. poses a carbon price paradox:

The carbon price paradox is that any politically conceivable price on carbon can do little more than have a marginal effect on the modern energy economy. A price that would be high enough to induce transformational change is just not in the cards. Thus, carbon pricing alone cannot lead to a transformation of the energy economy.

Put another way:

Advocates for a response to climate change based on increasing the costs of carbon-based energy skate around the fact that people react very negatively to higher prices by promising that action won’t really cost that much. … If action on climate change is indeed “not costly” then it would logically follow the only reasons for anyone to question a strategy based on increasing the costs of energy are complete ignorance and/or a crass willingness to destroy the planet for private gain. … There is another view. Specifically that the current ranges of actions at the forefront of the climate debate focused on putting a price on carbon in order to motivate action are misguided and cannot succeed. This argument goes as follows: In order for action to occur costs must be significant enough to change incentives and thus behavior. Without the sugarcoating, pricing carbon (whether via cap-and-trade or a direct tax) is designed to be costly. In this basic principle lies the seed of failure. Policy makers will do (and have done) everything they can to avoid imposing higher costs of energy on their constituents via dodgy offsets, overly generous allowances, safety valves, hot air, and whatever other gimmick they can come up with.

His prescription (and that of the Breakthrough Institute) is low carbon taxes, reinvested in R&D:

We believe that soon-to-be-president Obama’s proposal to spend $150 billion over the next 10 years on developing carbon-free energy technologies and infrastructure is the right first step. … a $5 charge on each ton of carbon dioxide produced in the use of fossil fuel energy would raise $30 billion a year. This is more than enough to finance the Obama plan twice over.

… We would like to create the conditions for a virtuous cycle, whereby a small, politically acceptable charge for the use of carbon emitting energy, is used to invest immediately in the development and subsequent deployment of technologies that will accelerate the decarbonization of the U.S. economy.

Stop talking, start solving

As the nation begins to rely less and less on fossil fuels, the political atmosphere will be more favorable to gradually raising the charge on carbon, as it will have less of an impact on businesses and consumers, this in turn will ensure that there is a steady, perhaps even growing source of funds to support a process of continuous technological innovation.

This approach reminds me of an old joke:

Lenin, Stalin, Khrushchev and Brezhnev are travelling together on a train. Unexpectedly the train stops. Lenin suggests: “Perhaps, we should call a subbotnik, so that workers and peasants fix the problem.” Khrushchev suggests rehabilitating the engineers, and leaves for a while, but nothing happens. Stalin, fed up, steps out to intervene. Rifle shots are heard, but when he returns there is still no motion. Brezhnev reaches over, pulls the curtain, and says, “Comrades, let’s pretend we’re moving.”

I translate the structure of Pielke’s argument like this:

Pielke Loops

Implementation of a high emissions price now would be undone politically (B1). A low emissions price triggers a virtuous cycle (R), as revenue reinvested in technology lowers the cost of future mitigation, minimizing public outcry and enabling the emissions price to go up. Note that this structure implies two other balancing loops (B2 & B3) that serve to weaken the R&D effect, because revenues fall as emissions fall.

If you elaborate on the diagram a bit, you can see why the technology-led strategy is unlikely to work:

PielkeLoopsSF

First, there’s a huge delay between R&D investment and emergence of deployable technology (green stock-flow chain). R&D funded now by an emissions price could take decades to emerge. Second, there’s another huge delay from the slow turnover of the existing capital stock (purple) – even if we had cars that ran on water tomorrow, it would take 15 years or more to turn over the fleet. Buildings and infrastructure last much longer. Together, those delays greatly weaken the near-term effect of R&D on emissions, and therefore also prevent the virtuous cycle of reduced public outcry due to greater opportunities from getting going. As long as emissions prices remain low, the accumulation of commitments to high-emissions capital grows, increasing public resistance to a later change in direction.
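The capital turnover delay alone is easy to quantify. A minimal first-order stock model (illustrative parameters, matching the 15-year fleet lifetime mentioned above) shows how slowly emissions respond even to an instantly available clean technology:

```python
def dirty_fleet_remaining(years, lifetime=15.0, dt=0.25):
    """Fraction of the high-emissions fleet (and its emissions) remaining,
    assuming every retirement is replaced by a zero-emission vehicle."""
    stock = 1.0  # normalized high-emissions fleet
    t = 0.0
    path = []
    while t < years - 1e-9:
        stock -= (stock / lifetime) * dt  # first-order retirement
        t += dt
        path.append((t, stock))
    return path

path = dirty_fleet_remaining(30)
# Even with a perfect technology shipping on day one, roughly 37% of
# fleet emissions remain after one 15-year lifetime.
```

And that’s the best case for the fleet; buildings and infrastructure, with multi-decade lifetimes, respond more slowly still.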

NUMMI – an innovation killed by its host's immune system?

This American Life had a great show on the NUMMI car plant, a remarkable joint venture between Toyota and GM. It sheds light on many of the reasons for the decline of GM and the American labor movement. More generally, it’s a story of a successful innovation that failed to spread, due to policy resistance, inability to confront worse-before-better behavior and other dynamics.

I noticed elements of a lot of system dynamics work in manufacturing. Here’s a brief reading list: