Randomness in System Dynamics

A discrete event simulation guru asked Ventana colleague David Peterson about the representation of randomness in System Dynamics models. In the discrete event world, granular randomness is the whole game, and it often doesn’t make sense to look at something without doing a lot of Monte Carlo experiments, because any single run could be misleading. The reply:

  • Randomness 1:  System Dynamics models often incorporate random components, in two ways:
    • Internal:  the system itself is stochastic (e.g. parts failures, random variations in sales, Poisson arrivals, etc.).
    • External:  All the usual Monte-Carlo explorations of uncertainty from either internal randomness or via replacing constant-but-unknown parameters with probability distributions as a form of sensitivity analysis.
  • Randomness 2:  There is also a kind of probabilistic flavor to the deterministic simulations in System Dynamics.  If one has a stochastic linear differential equation with deterministic coefficients and Gaussian exogenous inputs, it is easy to prove that all the state variables have time-varying Gaussian densities.  Further, the time trajectories of the means of those Gaussian processes can be computed immediately from a deterministic linear differential equation, which is just the original stochastic equation with all random inputs replaced by their mean trajectories.  In System Dynamics, this concept, rigorous in the linear case, is extended informally to the nonlinear case as an approximation.  That is, the deterministic solution of a System Dynamics model is often taken as an approximation of what would be concluded about the mean of a Monte-Carlo exploration (see the sketch after this list).  Of course it is only an approximate notion, and it gives no information at all about the variances of the stochastic variables.
  • Randomness 3:  A third kind of randomness in System Dynamics models is also a bit informal:  delays, which might be naturally modeled as stochastic, are instead modeled as deterministic but distributed.  For example, if procurement orders are received on average 6 months later, with randomness of an unspecified nature, a typical System Dynamics model would represent the procurement delay as a deterministic subsystem, usually a first- or third-order exponential delay.  That is, the output of the delay, in response to a pulse input, has a first- or third-order Erlang shape.  These exponential delays often do a good job of matching data taken from high-volume stochastic processes.
  • Randomness 4:  The Vensim software includes extended Kalman filtering to jointly process a model and data, to estimate the most likely time trajectories of the mean and variance/covariance of the state variables of the model. Vensim also includes the Schweppe algorithm for using such extended filters to compute maximum-likelihood estimates of parameters and their variances and covariances.  The system itself might be completely deterministic, but the state and/or parameters are uncertain trajectories or constants, with the uncertainty coming from a stochastic system, or unspecified model approximations, or measurement errors, or all three.
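Here’s a minimal sketch of the point in Randomness 2, using a first-order delay and illustrative parameters of my own choosing: for a linear system, the single deterministic run (noise replaced by its mean) reproduces the Monte-Carlo ensemble mean, while saying nothing about the spread.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, horizon = 0.25, 60.0            # months
t = np.arange(0.0, horizon, dt)
tau = 6.0                           # 6-month average delay
mean_orders, noise_sd = 100.0, 20.0

def simulate(orders):
    """First-order exponential delay: d(stock)/dt = orders - stock/tau."""
    stock = np.zeros_like(t)
    for i in range(1, len(t)):
        stock[i] = stock[i-1] + dt * (orders[i-1] - stock[i-1] / tau)
    return stock

# Deterministic run: the random input replaced by its mean trajectory.
deterministic = simulate(np.full_like(t, mean_orders))

# Monte-Carlo ensemble with Gaussian noise around the same mean.
runs = np.array([simulate(mean_orders + noise_sd * rng.standard_normal(len(t)))
                 for _ in range(500)])

# Because the system is linear, the ensemble mean converges on the
# deterministic trajectory; the variance it ignores shows up in the spread.
print("max |ensemble mean - deterministic|:",
      np.abs(runs.mean(axis=0) - deterministic).max())
print("terminal ensemble standard deviation:", runs.std(axis=0)[-1])
```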

“Vanilla” SD starts with #2 and #3. That seems weird to people used to the pervasive randomness of discrete event simulation, but it has a huge advantage: it’s easy to understand what’s going on in the model, because there is no obscuring noise. As soon as things are nonlinear or non-Gaussian enough, or variance matters, you’re into the explicit representation of stochastic processes. But even then, I find it easier to build and debug a model deterministically, and then turn on randomness. We explicitly reserve time for this in most projects, but interestingly, in top-down strategic environments, it’s the client demand that lags. Clients are used to point predictions and take a while to get into the Monte Carlo mindset (never mind stochastic processes within simulations). The financial crisis seems to have increased interest in exploring uncertainty, though.

Project Power Laws

An interesting paper finds a heavy-tailed (power law) distribution in IT project performance.

IT projects fall into a similar category. Calculating the risk associated with an IT project using the average cost overrun is like creating building standards using the average size of earthquakes. Both are bound to be inadequate.

These dangers have yet to be fully appreciated, warn Flyvbjerg and Budzier. “IT projects are now so big, and they touch so many aspects of an organization, that they pose a singular new risk….They have sunk whole corporations. Even cities and nations are in peril.”

They point to the IT problems with Hong Kong’s new airport in the late 1990s, which reportedly cost the local economy some $600 million.

They conclude that it’s only a matter of time before something much more dramatic occurs. “It will be no surprise if a large, established company fails in the coming years because of an out-of-control IT project. In fact, the data suggest that one or more will,” predict Flyvbjerg and Budzier.

In a related paper, they identify the distribution of project outcomes:

We argue that these results show that project performance up to the first tipping point is politically motivated and project performance above the second tipping point indicates that project managers and decision-makers are fooled by random outliers, …

I’m not sure I buy the detailed interpretation of the political (yellow) and performance (green) regions, but it’s really the right tail (orange) that’s of interest. The probability of becoming a black swan is 17%, with mean 197% cost increase, 68% schedule increase, and some outcomes much worse.

The paper discusses some generating mechanisms for power law distributions (highly optimized tolerance, preferential attachment, …). A simple recipe for power laws is to start with some benign variation or heterogeneity, and add positive feedback. Voila – power laws on one or both tails.
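Here’s a minimal sketch of that recipe, using a Kesten-type multiplicative process as an illustrative stand-in (not one of the paper’s mechanisms): light-tailed variation in an amplification factor, applied through positive feedback, yields a heavy right tail.

```python
import numpy as np

rng = np.random.default_rng(1)

# Benign variation: a light-tailed random amplification factor a and
# additive noise b. Positive feedback: x compounds through a each period.
# When a occasionally exceeds 1, the stationary distribution of x
# develops a power-law right tail (a Kesten process).
n, steps = 100_000, 200
x = np.ones(n)
for _ in range(steps):
    a = rng.lognormal(mean=-0.1, sigma=0.3, size=n)  # mild multiplicative variation
    b = rng.uniform(0.0, 1.0, size=n)                # benign additive noise
    x = a * x + b

# Crude tail check: upper quantiles spread far faster than for a normal.
for q in (0.5, 0.9, 0.99, 0.999):
    print(f"{q:>6} quantile: {np.quantile(x, q):8.1f}")
```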

What I think is missing in the discussion is some model of how a project actually works. This of course has been a staple of SD for a long time. And SD shows that projects and project portfolios are chock full of positive feedback: the rework cycle, Brooks’ Law, congestion, dilution, burnout, despair.

It would be an interesting experiment to take an SD project or project portfolio model and run some sensitivity experiments to see what kind of tail you get in response to light-tailed inputs (normal or uniform).
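A minimal sketch of what such an experiment might look like (an illustrative toy, not a calibrated project model): the only randomness is a light-tailed, normally distributed rework fraction, but the rework cycle’s positive feedback stretches the right tail of cost overruns.

```python
import numpy as np

rng = np.random.default_rng(2)

def project_cost(rework_frac, scope=100.0, capacity=1.0, dt=1.0):
    """Toy rework cycle: completed work sends a fraction back to the backlog."""
    backlog, cost = scope, 0.0
    while backlog > 0.01:
        work = min(backlog, capacity * dt)
        rework = work * rework_frac          # undiscovered rework returns
        backlog += rework - work             # the positive feedback loop
        cost += work
    return cost

base = project_cost(0.0)                     # cost with no rework at all

# Light-tailed input: a normally distributed rework fraction.
fracs = np.clip(rng.normal(0.3, 0.15, size=10_000), 0.0, 0.95)
overruns = np.array([project_cost(f) for f in fracs]) / base - 1.0

for q in (50, 90, 99, 99.9):
    print(f"{q}th percentile cost overrun: {np.percentile(overruns, q):.0%}")
```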

Circling the Drain

“It’s Time to Retire ‘Crap Circles’,” argues Gardiner Morse in the HBR. I wholeheartedly agree. He’s assembled a lovely collection of examples. Some violate causality amusingly:

“Through some trick of causality, termination leads to deployment.”

Morse ridicules one diagram that actually shows an important process,

The friendly-looking sunburst that follows, captured from the website of a solar energy advocacy group, shows how to create an unlimited market for your product. Here, as the supply of solar energy increases, so does the demand — in an apparently endless cycle. If these folks are right, we’re all in the wrong business.

This is not a particularly well-executed diagram, but the positive feedback process (reinforcing loop) of increasing demand driving economies of scale, lowering costs and further increasing demand, is real. Obviously there are other negative loops that restrain this one from delivering infinite solar, but not every diagram needs to show every loop in a system.

Unfortunately, Morse’s prescription, “We could all benefit from a little more linear thinking,” is nearly as alarming as the illness. The vacuous linear processes are right there next to the cycles in PowerPoint’s Smart Art.

Linear thinking isn’t a get-out-of-chartjunk-free card. It’s an invitation to event-driven unidirectional causal thinking, laundry lists, and George Richardson’s Dead Buffalo Syndrome. What we really need is more understanding of causality and feedback, and more operational thinking, so that people draw meaningful graphics, employing cycles where they appropriately describe causality.

h/t John Sterman for pointing this out.

Thorium Dreams

The NY Times nails it in In Search of Energy Miracles:

Yet not even the speedy Chinese are likely to get a sizable reactor built before the 2020s, and that is true for the other nuclear projects as well. So even if these technologies prove to work, it would not be surprising to see the timeline for widespread deployment slip to the 2030s or the 2040s. The scientists studying climate change tell us it would be folly to wait that long to start tackling the emissions problem.

Two approaches to the issue — spending money on the technologies we have now, or investing in future breakthroughs — are sometimes portrayed as conflicting with one another. In reality, that is a false dichotomy. The smartest experts say we have to pursue both tracks at once, and much more aggressively than we have been doing.

An ambitious national climate policy, anchored by a stiff price on carbon dioxide emissions, would serve both goals at once. In the short run, it would hasten a trend of supplanting coal-burning power plants with natural gas plants, which emit less carbon dioxide. It would drive some investment into low-carbon technologies like wind and solar power that, while not efficient enough, are steadily improving.

And it would also raise the economic rewards for developing new technologies that could disrupt and displace the ones of today. These might be new-age nuclear reactors, vastly improved solar cells, or something entirely unforeseen.

In effect, our national policy now is to sit on our hands hoping for energy miracles, without doing much to call them forth.

Yep.

h/t Travis Franck

Defense Against the Black Box

Baseline Scenario has a nice account of the role of Excel in the London Whale (aka Voldemort) blowup.

… To summarize: JPMorgan’s Chief Investment Office needed a new value-at-risk (VaR) model for the synthetic credit portfolio (the one that blew up) and assigned a quantitative whiz (“a London-based quantitative expert, mathematician and model developer” who previously worked at a company that built analytical models) to create it. The new model “operated through a series of Excel spreadsheets, which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another.” The internal Model Review Group identified this problem as well as a few others, but approved the model, while saying that it should be automated and another significant flaw should be fixed. After the London Whale trade blew up, the Model Review Group discovered that the model had not been automated and found several other errors. Most spectacularly,

“After subtracting the old rate from the new rate, the spreadsheet divided by their sum instead of their average, as the modeler had intended. This error likely had the effect of muting volatility by a factor of two and of lowering the VaR . . .”

Microsoft Excel is one of the greatest, most powerful, most important software applications of all time. …

As a consequence, Excel is everywhere you look in the business world—especially in areas where people are adding up numbers a lot, like marketing, business development, sales, and, yes, finance. …

But while Excel the program is reasonably robust, the spreadsheets that people create with Excel are incredibly fragile. There is no way to trace where your data come from, there’s no audit trail (so you can overtype numbers and not know it), and there’s no easy way to test spreadsheets, for starters. The biggest problem is that anyone can create Excel spreadsheets—badly. Because it’s so easy to use, the creation of even important spreadsheets is not restricted to people who understand programming and do it in a methodical, well-documented way.

This is why the JPMorgan VaR model is the rule, not the exception: manual data entry, manual copy-and-paste, and formula errors. This is another important reason why you should pause whenever you hear that banks’ quantitative experts are smarter than Einstein, or that sophisticated risk management technology can protect banks from blowing up. …

System Dynamics has a strong tradition of model quality control, dating all the way back to its origins in Industrial Dynamics. Some of it is embodied in software, while other bits are merely habits and traditions. If the London Whale model had been an SD model, would the crucial VaR error have occurred? Since the model might not have employed much feedback, one might also ask, had it been built with SD software, like Vensim, would the error have occurred?
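For concreteness, here’s a minimal sketch of the reported bug, with made-up rates (the actual spreadsheet values weren’t published): dividing the change in a rate by the sum of the old and new rates, instead of their average, mutes the result by exactly a factor of two.

```python
# Illustrative rates, not JPMorgan's actual numbers.
old_rate, new_rate = 0.040, 0.050
change = new_rate - old_rate

intended = change / ((old_rate + new_rate) / 2)   # divide by the average
as_built = change / (old_rate + new_rate)         # the spreadsheet's formula

print(f"intended: {intended:.1%}")   # 22.2%
print(f"as built: {as_built:.1%}")   # 11.1% - volatility muted by 2x
```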

There are multiple lines of defense against model errors:

  • Seeing the numbers. This is Excel’s strong suit. It apparently didn’t help in this case though.
  • Separation of model and data. A model is a structure that one can populate with different sets of parameters and data. In Excel, the structure and the data are intermingled, so it’s tough to avoid accidental replacement of structure (an equation) by data (a number), and tough to compare versions of models or model runs to recover differences. Vensim is pretty good at that. But it’s not clear that such comparisons would have revealed the VaR structure error.
  • Checking units of measure. When I was a TA for the MIT SD course, I graded a LOT of student models. I think units checking would have caught about a third of conceptual errors (see the sketch after this list). In this case though, the sum and average of a variable have the same units, so it wouldn’t have helped.
  • Fit to data. Generally, people rely far too much on R^2, and too little on other quality checks, but the VaR error is exactly the kind of problem that might be revealed by comparison to history. However, if the trade was novel, there might not be any relevant data to use. In any case, there’s no real obstacle to evaluating fit in Excel, though the general difficulties of building time series models are an issue where time is relevant.
  • Conservation laws. SD practitioners are generally encouraged to observe conservation of people, money, material, etc. Software supports this with the graphical stock-flow convention, though it ought to be possible to do more. Excel doesn’t provide any help in this department, though it’s not clear whether it would have mattered to the Whale trade model.
  • Extreme conditions tests. “Kicking the tires” of models has been a good idea since the beginning. This is an ingrained SD habit, and Vensim provides Reality Check™ to automate it. It’s not clear that this would have revealed the VaR sum vs. average error, because that’s a matter of numerical sensitivity that might not reveal itself as a noticeable change in behavior. But I bet it would reveal lots of other problems with the model boundary and limitations to validity of relationships.
  • Abstraction. System Dynamics focuses on variables as containers for time series, and distinguishes stocks (state variables) from flows and other auxiliary conversions. Most SD languages also include some kind of array facility, like subscripts in Vensim, for declarative processing of detail complexity. Excel basically lacks such conventions, except for named ranges that are infrequently used. Time and other dimensions exist spatially as row-column layout. This means that an Excel model is full of a lot of extraneous material for handling dynamics, is stuck in discrete time, can’t be checked for DT stability, and requires a lot of manual row-column fill operations to express temporal phenomena that are trivial in SD and many other languages. With less busywork needed, it might have been much easier for auditors to discover the VaR error.
  • Readable equations. It’s not uncommon to encounter =E1*EXP($D$3)*SUM(B32:K32)^2/(1+COUNT(A32:K32)) in Excel. While it’s possible to create such gobbledygook in Vensim, it’s rare to actually encounter it, because SD software and habits encourage meaningful variable names and “chunking” equations into comprehensible components. Again, this might have made it much easier for auditors to discover the VaR error.
  • Graphical representation of structure. JPMorgan should get some credit for having a model audit process at all, even though it failed to prevent the error. Auditors’ work is much easier when they can see what the heck is going on in the model. SD software provides useful graphical conventions for revealing model structure; Excel has nothing comparable. There’s an audit tool, but it’s hampered by the lack of a variable concept, and it’s slower to use than Vensim’s Causal Tracing™.
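Returning to the units point above, here’s a minimal sketch of what automated units checking buys you. It uses Python’s pint library as a stand-in for Vensim’s built-in checker; the model variables are illustrative.

```python
import pint

u = pint.UnitRegistry()
u.define("widget = [widget]")            # a custom unit of material

inventory = 500 * u.widget               # a stock
shipments = 50 * u.widget / u.week       # a flow
dt = 1 * u.week

inventory -= shipments * dt              # dimensionally consistent: OK

try:
    inventory -= shipments               # stock minus flow: units clash
except pint.DimensionalityError as err:
    print("units error caught:", err)
```

As noted in the list, this kind of check is blind to the sum-vs-average error, since both expressions carry the same units.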

I think the score’s Forrester 8, Gates 1. Excel is great for light data processing and presentation, but it’s way down my list of tools to choose for serious modeling. The secret to its success, cell-level processing that’s easy to learn and adaptable to many problems, is also its Achilles heel. Add in some agency problems and confirmation bias, and it’s a deadly combination:

There’s another factor at work here. What if the error had gone the wrong way, and the model had incorrectly doubled its estimate of volatility? Then VaR would have been higher, the CIO wouldn’t have been allowed to place such large bets, and the quants would have inspected the model to see what was going on. That kind of error would have been caught. Errors that lower VaR, allowing traders to increase their bets, are the ones that slip through the cracks. That one-sided incentive structure means that we should expect VaR to be systematically underestimated—but since we don’t know the frequency or the size of the errors, we have no idea of how much.

Sadly, the loss on this single trade would probably just about pay for all the commercial SD that’s ever been done.

Related:

The Trouble with Spreadsheets

Fuzzy VISION

Zombies in Great Falls and the SRLI

The undead are rising from their graves to attack the living in Montana, and people are still using the Static Reserve Life Index.

http://youtu.be/c7pNAhENBV4

The SRLI calculates the expected lifetime of reserves at a constant usage rate, as life = reserves/production. For optimistic gas reserves and resources of about 2200 Tcf (double the USGS estimate), and consumption of 24 Tcf/year (gross production is a bit more than that), the SRLI is about 90 years – hence claims of 100 years of gas.

How much natural gas does the United States have and how long will it last?

EIA estimates that there are 2,203 trillion cubic feet (Tcf) of natural gas that is technically recoverable in the United States. At the rate of U.S. natural gas consumption in 2011 of about 24 Tcf per year, 2,203 Tcf of natural gas is enough to last about 92 years.

Notice the conflation of SRLI as indicator with a prediction of the actual resource trajectory. The problem is that constant usage is a stupid assumption. Whenever you see someone citing a long SRLI, you can be sure that a pitch to increase consumption is not far behind. Use gas to substitute for oil in transportation or coal in electricity generation!

Substitution is fine, but increasing use means that the actual dynamic trajectory of the resource will show greatly accelerated depletion. For logistic growth in exploitation of the resource remaining, and a 10-year depletion trajectory for fields, the future must hold something like the following:
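Here’s a minimal numerical sketch of that logic, using a Hubbert-style logistic as an illustrative stand-in for the post’s model (the assumed past cumulative production and the calibration are mine, not from the assessments):

```python
remaining = 2200.0                 # Tcf, optimistic remaining resource
P0 = 24.0                          # Tcf/year, current consumption
Q0 = 1100.0                        # Tcf, assumed past cumulative production
URR = Q0 + remaining               # ultimately recoverable resource

print(f"SRLI: {remaining / P0:.0f} years")        # ~92 years

k = P0 / (Q0 * (1 - Q0 / URR))     # calibrate so production today = P0
dt, Q, t = 0.25, Q0, 0.0
while True:
    P = k * Q * (1 - Q / URR)      # production grows logistically, then falls
    if Q > URR / 2 and P < P0:     # past peak and below today's rate
        break
    Q += P * dt
    t += dt

print(f"production back below today's rate after ~{t:.0f} years")
```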

That’s production below today’s levels in less than 50 years. Naturally, faster growth now means less production later. Even with a hypothetical further doubling of resources (4400 Tcf, SRLI = 180 years), production growth would exhaust resources in well under 100 years. My guess is that “peak gas” is already on the horizon within the lifetime of long-lived capital like power plants.

Limits to Growth actually devoted a whole section to the silliness of the SRLI, but that was widely misinterpreted as a prediction of resource exhaustion by the turn of the century. So, the SRLI lives on, feasting on the brains of the unwary.

Energy rich or poor?

The Energy Collective echoes amazement at unconventional oil and gas,

Yergin, vice chairman of IHS CERA:

“The United States is in the midst of the ‘unconventional revolution in oil and gas’ that, it becomes increasingly apparent, goes beyond energy itself.

“Owing to the scale and impact of shale gas and tight oil, it is appropriate to describe their development as the most important energy innovation so far of the 21st century. … It is striking to think back to the hearings of even just half a decade ago, during the turmoil of 2008, when it was widely assumed that a permanent era of energy shortage was at hand. How different things look today.”

Mary J. Hutzler, Institute for Energy Research:

“The United States has vast resources of oil, natural gas, and coal. In a few short years, a forty-year paradigm – that we were energy resource poor – has been disproven. Instead of being resource poor, we are incredibly energy rich.”

Abundance is often attributed to a technical miracle, brought about by government R&D into unconventional fossil fuels. The articulated mental model is something like: government R&D → technology breakthroughs → abundance.

But is this really a revolutionary transition from scarcity to abundance, was it a surprise, and should technology get all the credit? I don’t think so.

(Abundance/Scarcity) = 1.03?

Contrast the 1995 and 2012 USGS National Assessments of onshore resources:

Resources, on an energy basis (EJ). Cumulative production from EIA; note that gas production data begins in 1980, so gas cumulative production is understated.

In spite of increasing unconventional resources, there’s actually less oil than there was, mainly because a lot of the 1995 resource has since been produced. (Certainly there are also other differences, including method changes.) For gas, where one can make a stronger case for a miracle due to the large increase in unconventional resources, the top line is up a whopping 3%. Even if you go with EIA/INTEK‘s ~2x larger estimate for shale gas, resources are up only 35%.

Call me conservative, but I think an abundance revolution that “disproves” scarcity would be a factor of 10 increase, not these piddly changes.

You could argue that the USGS hasn’t gotten the memo, and therefore has failed to appreciate new, vast unconventional resources. But given that they have reams of papers assessing unconventional fields, I think it more likely that they’re properly accounting for low recoverability, and not being bamboozled by large resources in place.

Reserves involve less guesswork, but more confounding dynamics. Still, they tell about the same story as resources. Oil reserves are more than 40% off their 1970 peak. Even gas reserves have only just regained the levels achieved 40 years ago.

(Source: EIA)

Surprise?

In 1991, USGS’ Thomas Ahlbrandt wrote:

Unconventional natural gas resources are also becoming increasingly viable. Coalbed methane, which accounts for about 25 percent of potential natural gas resources in the U.S., will displace nearly a trillion cubic feet (TCF) of gas from conventional resources in the near term and perhaps several TCF by the turn of the century. Similarly, production of gas from low permeability resources may displace some production of conventional gas as increasingly smaller conventional accumulations are developed. Coalbed methane and tight gas, both abundant in the Rocky Mountain and Appalachian regions, will likely experience significant production increases. Optimistic scenarios suggest that tight gas and coalbed methane resources may provide more domestic natural gas production than conventional resources by the year 2010. Horizontal drilling technology will most likely unlock the large currently uneconomic gas resources in tight reservoirs. Technologies like this will most certainly change the status of what are presently considered unconventional resources.

I’d call that a “no.”

Should we be surprised to see supply increasing in the current price environment? Again, I’d say no. The idea that oil and gas have supply curves is certainly much older than its appearance in the 1995 USGS assessment. Perhaps the ongoing increase in shale gas development, when prices have collapsed, is a bit surprising. But then you have to consider that (a) drilling costs have tanked alongside the economy, (b) there are lags between price, perception, capital allocation, and production, and (c) it’s expectations of price, not current prices, that drive investment.

Does tech get the credit?

Certainly tech gets some credit. For example, the Bakken oil boom owes much to horizontal drilling:

(Source: EIA)

But there’s more than tech going on. And much of the tech evolution is surely a function of industry activity funded out of revenue or accumulated through production experience, rather than pure government R&D.

If tech is the exclusive driver of increasing abundance, you’d expect costs and prices to be falling. Gas prices are indeed well off their recent peak, though one could wonder whether that’s a durable circumstance. Even so, gas is no cheaper than it was in the 90s, and more costly than in the pre-OPEC era. Oil isn’t cheap at all – it’s close to its historic highs.

So, if there’s anything here that one might call a tech fingerprint, it would have to be the decline in gas prices post-mid-2008. But that coincides better with the financial crisis than with the gas boom.

Cost data are less current, but if anything the cost picture is less sanguine. “Real gas equipment costs are 12 percent higher and operating costs are 37 percent higher than for the base year of 1976,” says EIA.

Bottom Line

First, let’s not kid ourselves. There’s less oil and gas under US soil than there has ever been.

Technology has at best done a little more than keep the wolf from the door, by lowering the cost of exploration and development by enough to offset the increases that would result from increasing physical scarcity.

It’s possible that the effects on shale and tight gas cost and availability have been dramatic, but there are plausible alternative hypotheses (financial crisis, moving up supply curves, and delays in production capital investment) for current prices.

Personally, I doubt that technology can keep up with physical scarcity and demand growth forever, so I don’t expect that gas prices will continue walking back to 1970 or 1960 levels. The picture for oil is even worse. But I hope that at some point, we’ll come to our senses and tax CO2 at a level high enough to reverse consumption growth. If that happens abruptly enough, it could drive down wellhead prices.

None of this sounds like the kind of tailfins and big-block V8 abundance that people seem to be hoping for.


Equation Soup

Most climate skepticism I encounter these days has transparently crappy technical content, if it has any at all. It’s become boring to read.

But every once in a while a paper comes along that is sufficiently complex and free of immediately obvious errors that it becomes difficult to evaluate. One recent example that came across my desk is,

Polynomial cointegration tests of anthropogenic impact on global warming

Greek oil taxes – the real story

A guest post from Ventana colleague Marios Kagarlis, who writes about the NYT article on Greek heating oil taxes:

The problems in Greece are interdependent, and all have their roots in the fact that the model of government that has been the status quo in Greece since WWII isn’t working and needs radical change, but the people who run the system know no other way, so the problems keep compounding with no solution in sight.

There used to be two tiers of taxation for oil: one for heating oil, which was relatively low, and one for oil used for all other purposes (e.g. for diesel cars), which was taxed at about 100% over the fuel cost.

Because of the inability of government institutions to enforce the laws in Greece (which on paper are tough but in practice go unenforced because the system is incompetent), there has been widespread abuse of this: from refineries to gas stations, many oil merchants have been branding diesel as heating oil to evade the tax, and then selling it as non-heating oil, doubling their profit and ripping off both the consumers and the government.

The government has for years been attempting (supposedly) to crack down on this, with pitiable results. The international lenders have demanded that the Greek government crack down on tax evasion via illegal sales of ‘heating oil’ as regular diesel, as a precondition for the continuation of the bailout installments paid every now and then (which go in their entirety toward servicing past debt, as opposed to relieving the economy). Because the tax collection system is broken and cannot control the diesel market or collect the taxes due, the Greek government had to do something quickly to meet the lenders’ demands. And this was the best they could come up with…

So they finally decided to do away with the two separate tiers of taxation and tax all oil as non-heating oil. To make up for the huge rise in cost to the end consumer, they established obscure and bureaucratic criteria for lower-income families to apply to the government for partial reimbursement of the extra tax, the idea being that this would deprive sellers of a means to cheat while still enabling end consumers in need to get reasonably priced heating oil after reimbursement. However, this didn’t work; instead, people massively stopped using oil for heating, which had been by far the most prevalent heating fuel in Greece (another government failure, for a country with no oil resources and lots of sun and wind). Entire older building blocks in the cities were built without fireplaces (which until recently, in modern city apartments, were more a symbol of affluence than of any practical use); many buildings have now turned off heating altogether, and fights among tenants over whether to turn on the heat are commonplace (in older buildings heating is collective, so it’s heating for all or for none). Those who cannot afford it just don’t pay, so sooner or later most buildings in working-class neighborhoods are forced to abandon central heating and endure the cold or improvise.

Because the government again hadn’t foreseen any of this, and wood burning was never particularly widespread in Greece, there were no standards for wood- or pellet-burning stoves. The market is now flooded with low-quality wood-burning stoves that are inefficient and polluting. Since December, the larger cities in Greece have been filled with smog and particulates for the first time, from inefficient wood-burning stoves and from burning inappropriate wood (e.g. people burn disused lacquered furniture in their fireplaces, which is very polluting). Cases of asthma and respiratory illness in the larger cities have skyrocketed since December. Meanwhile, forests and even city parks are raided daily by desperate unemployed people who cannot afford heating (especially in northern Greece), who cut down any trees they can get their hands on.

It’s hard to see that there can be any short term solution to this, in the middle of the worst economic crisis Greece has faced since WWII.

Marios lives in Athens.

Oil tax forces single cause attribution folly

A silly NYT headline claims that Rise in Oil Tax Forces Greeks to Face Cold as Ancients Did.

The tax raised the cost of heating oil by 46%, which hardly sends Greece back to the Bronze Age. Surely the run-up in crude prices by a factor of 5 and a depression with 26% unemployment have a bit to do with the affordability of heat as well? And doesn’t the current unavailability of capital make it difficult for people to respond sensibly with conservation, whereas a proactive energy policy in years past would have left them much less vulnerable?

The kernel of wisdom here is that abrupt implementation of policies, or intrusion of realities, can be disruptive. The conclusion one ought to draw is that policies need to anticipate economic, thermodynamic, or environmental constraints that one must eventually face. But the headline instead plays into the hands of those who claim that energy taxes will doom the economy. In the long run, taxes are part of the solution, not the problem, and it’s the inability to organize ourselves to price externalities that will really hurt us.

Update: the real story.