Wedge furor

Socolow is quoted in Nat Geo as claiming the stabilization wedges were a mistake,

“With some help from wedges, the world decided that dealing with global warming wasn’t impossible, so it must be easy,” Socolow says.  “There was a whole lot of simplification, that this is no big deal.”

Pielke quotes & gloats:

Socolow’s strong rebuke of the misuse of his work is a welcome contribution and, perhaps optimistically, marks a positive step forward in the climate debate.

Romm refutes,

I spoke to Socolow today at length, and he stands behind every word of that — including the carefully-worded title.  Indeed, if Socolow were king, he told me, he’d start deploying some 8 wedges immediately. A wedge is a strategy and/or technology that over a period of a few decades ultimately reduces projected global carbon emissions by one billion metric tons per year (see Princeton website here). Socolow told me we “need a rising CO2 price” that gets to a serious level in 10 years.  What is serious?   “$50 to $100 a ton of CO2.”

Revkin weighs in with a broader view, but the tone is a bit Pielkeish,

From the get-go, I worried about the gushy nature of the word “solving,” particularly given that there was then, and remains, no way to solve the climate problem by 2050.

David Roberts wonders what the heck Socolow is thinking.

Who’s right? I think it’s best in Socolow’s own words (posted by Revkin):

1. Look closely at what is in quotes, which generally comes from my slides, and what is not in quotes. What is not in quotes is just enough “off” in several places to result in my messages being misconstrued. I have given a similar talk about ten times, starting in December 2010, and this is the first time that I am aware of that anyone in the audience so misunderstood me. I see three places where what is being attributed to me is “off.”

a. “It was a mistake, he now says.” Steve Pacala’s and my wedges paper was not a mistake. It made a useful contribution to the conversation of the day. Recall that we wrote it at a time when the dominant message from the Bush Administration was that there were no available tools to deal adequately with climate change. I have repeated maybe a thousand times what I heard Spencer Abraham, Secretary of Energy, say to a large audience in Alexandria, Virginia, early in 2004. Paraphrasing, “it will take a discovery akin to the discovery of electricity” to deal with climate change. Our paper said we had the tools to get started, indeed the tools to “solve the climate problem for the next 50 years,” which our paper defined as achieving emissions 50 years from now no greater than today. I felt then and feel now that this is the right target for a world effort. I don’t disown any aspect of the wedges paper.

b. “The wedges paper made people relax.” I do not recognize this thought. My point is that the wedges paper made some people conclude, not surprisingly, that if we could achieve X, we could surely achieve more than X. Specifically, in language developed after our paper, the path we laid out (constant emissions for 50 years, emissions at stabilization levels after a second 50 years) was associated with “3 degrees,” and there was broad commitment to “2 degrees,” which was identified with an emissions rate of only half the current one in 50 years. In language that may be excessively colorful, I called this being “outflanked.” But no one that I know of became relaxed when they absorbed the wedges message.

c. “Well-intentioned groups misused the wedges theory.” I don’t recognize this thought. I myself contributed the Figure that accompanied Bill McKibben’s article in National Geographic that showed 12 wedges (seven wedges had grown to eight to keep emissions level, because of emissions growth post-2006, and the final four wedges drove emissions to half their current levels), to enlist the wedges image on behalf of a discussion of a two-degree future. I am not aware of anyone misusing the theory.

2. I did say “The job went from impossible to easy.” I said (on the same slide) that “psychologists are not surprised,” invoking cognitive dissonance. All of us are more comfortable with believing that any given job is impossible or easy than hard. I then go on to say that the job is hard. I think almost everyone knows that. Every wedge was and is a monumental undertaking. The political discourse tends not to go there.

3. I did say that there was and still is a widely held belief that the entire job of dealing with climate change over the next 50 years can be accomplished with energy efficiency and renewables. I don’t share this belief. The fossil fuel industries are formidable competitors. One of the points of Steve’s and my wedges paper was that we would need contributions from many of the available options. Our paper was a call for dialog among antagonists. We specifically identified CO2 capture and storage as a central element in climate strategy, in large part because it represents a way of aligning the interests of the fossil fuel industries with the objective of climate change.

It is distressing to see so much animus among people who have common goals. The message of Steve’s and my wedges paper was, above all, ecumenical.

My take? It’s rather pointless to argue the merits of 7 or 14 or 25 wedges. We don’t really know the answer in any detail. Do a little, learn, do some more. Socolow’s $50 to $100 a ton would be a good start.

2011 Climate CoLab contest – How should the 21st century economy evolve bearing in mind the reality of climate change?

From my friends at the MIT Climate CoLab, a cool experiment in collective intelligence:

To the members of the Climate CoLab,

We are pleased to announce the launch of the 2011 Climate CoLab Contest. This year, the question that the CoLab poses is:

How should the 21st century economy evolve bearing in mind the reality of climate change?

This year’s contest will feature two competition pools:

  • Global, whose proposals outline how a feature of the world economy should evolve,
  • Regional/national, whose proposals outline how a feature of a regional or national economy should evolve.

The contest will run for six months from May 16 to November 15. Winners will be selected based on voting by community members and review by the judges.

The winning teams will present their proposals at briefings at the United Nations in New York City and U.S. Congress in Washington, D.C. The Climate CoLab will sponsor one representative from each of the winning teams.

We encourage you to form teams with other CoLab members who share your regional or global interests. Fill out your profile and start debating and brainstorming. If you would like to join a team, please send me a message.

Learn more about this year’s contest at http://climatecolab.org. Please tell your friends!

Best,

Lisa Jing
For the Climate CoLab Team

Modeling the Ryan proposal

Thanks, Pete, for pointing out that there is modeling behind the Ryan proposal after all. Macroeconomic Advisers has the kind of in-depth scrutiny of the model results that I love, in The Economic Effects of the Ryan Plan: Assuming the Answer?

You really should read it, but here are some of the juicier excerpts:

Peek-a-boo

There were actually two sets of results. The first showed real GDP immediately rising by $33.7 billion in 2012 (or 0.2%) relative to the baseline, with total employment rising 831 thousand (or 0.6%) and the civilian unemployment rate falling a stunning 2 percentage points, a decline that persisted for a decade. (This path for the unemployment rate is labeled “First Result” in the table.) The decline in the unemployment rate was greeted — quite correctly, in our view — with widespread incredulity. Shortly thereafter, the initial results were withdrawn and replaced with a second set of results that made no mention of the unemployment rate, but not before we printed a hardcopy! (This is labeled “Second Result” in the table.)

Multiplier Mischief

The simulation shows real federal non-defense purchases down by $37.4 billion in 2012, but real GDP up by $33.7 billion, so the short-run “fiscal multiplier” is negative.[11] As noted above, that analysis was prepared using the GI model of the US economy. We are not intimately familiar with this model but have the impression it is a structural macro model in which near-term movements in GDP are governed by aggregate demand while long-term trends in output are determined by the labor force, the capital stock, and total factor productivity. Obviously we can’t object to this paradigm, since we rely on it, too.

However, precisely because we are so familiar with the characteristics of such systems, we doubt that the GI model, used as intended, shows a negative short-run fiscal multiplier. Indeed, GI’s own discussion of its model makes clear the system does, in fact, have a positive short-run fiscal multiplier.[12] This made us wonder how and on what grounds analysts at Heritage manipulated the system to produce the results reported.

Crowding Out Credibility

So, as we parsed the simulation results, we couldn’t see what was stimulating aggregate demand at unchanged interest rates and in the face of large cuts in government consumption and transfer payments…until we read this:

“Economic studies repeatedly find that government debt crowds out private investment, although the degree to which it does so can be debated. The structure of the model does not allow for this direct feedback between government spending and private investment variables. Therefore, the add factors on private investment variables were also adjusted to reflect percentage changes in publicly held debt (MA italics).”

In sum, we have never seen an investment equation specified this way and, in our judgment, adjusting up investment demand in this manner is tantamount to assuming the answer. If Heritage wanted to show more crowding in, it should have argued for a bigger drop in interest rates or more interest-sensitive investment, responses over which there is legitimate empirical debate. These kinds of adjustments would not have reversed the sign of the short-run fiscal multiplier in the manner that simply adjusting up investment spending did.
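
To see concretely why this is “assuming the answer,” here’s a toy income-expenditure sketch of my own (made-up parameters, not the GI model): a spending cut normally lowers GDP, but an add factor on investment, indexed to the cut, flips the sign of the implied multiplier.

```python
# Toy income-expenditure model: GDP = C + I + G, with C = MPC * GDP.
# Made-up parameters, not the GI model; the point is only that an
# "add factor" on investment, scaled to the spending cut, can flip
# the sign of the implied short-run fiscal multiplier.

MPC = 0.6       # marginal propensity to consume (illustrative)
I0 = 2000.0     # autonomous investment, $B (illustrative)
G0 = 3000.0     # baseline government purchases, $B (illustrative)
CUT = 37.4      # cut in purchases, $B (from the MA excerpt above)

def gdp(g, invest_add=0.0):
    """Solve GDP = MPC*GDP + I0 + invest_add + g for GDP."""
    return (I0 + invest_add + g) / (1.0 - MPC)

base = gdp(G0)

# Used as intended, the cut lowers GDP: a positive multiplier.
plain = gdp(G0 - CUT)
print("multiplier without add factor:", round((plain - base) / -CUT, 2))

# Add-factor investment up by ~2x the cut (standing in for assumed
# "crowding in" from lower debt): GDP now rises despite the cut,
# i.e. the implied multiplier turns negative.
boosted = gdp(G0 - CUT, invest_add=1.9 * CUT)
print("implied multiplier with add factor:", round((boosted - base) / -CUT, 2))
```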

Hilarious Housing?

In the simulation, the component of GDP that initially increases most, both in absolute and in percentage terms, is residential investment. This is really hard to fathom. There’s no change in pre-tax interest rates to speak of, hence the after-tax mortgage rate presumably rises with the decline in marginal tax rates even as the proposed tax reform curtails some or all of the mortgage interest deduction. …

The list of problems goes on and on. Macroeconomic Advisers’ bottom line:

In our opinion, however, the macroeconomic analysis released in conjunction with the House Budget Resolution is not relevant to the coming discussion. We believe that the main result — that aggressive deficit reduction immediately raises GDP at unchanged interest rates — was generated by manipulating a model that would not otherwise produce this result, and that the basis for this manipulation is not supported either theoretically or empirically. Other features of the results — while perhaps unintended — seem highly problematic to us and seriously undermine the credibility of the overall conclusions.

This is really unfortunate, both for the policy debate and the modeling profession. Using models as arguments from authority, while manipulating them to produce propagandistic output, poisons the well for all rational inputs to policy debates. Unfortunately, there’s a long history of such practice, particularly in economic forecasting:

Not surprisingly, the forecasts produced by econometric models often don’t square with the modeler’s intuition. When they feel the model output is wrong, many modelers, including those at the “big three” econometric forecasting firms – Chase Econometrics, Wharton Econometric Forecasting Associates, and Data Resources – simply adjust their forecasts. This fudging, or add factoring as they call it, is routine and extensive. The late Otto Eckstein of Data Resources admitted that their forecasts were 60 percent model and 40 percent judgment (“Forecasters Overhaul Models of Economy in Wake of 1982 Errors,” Wall Street Journal, 17 February 1983). Business Week (“Where Big Econometric Models Go Wrong,” 30 March 1981) quotes an economist who points out that there is no way of knowing where the Wharton model ends and the model’s developer, Larry Klein, takes over. Of course, the adjustments made by add factoring are strongly colored by the personalities and political philosophies of the modelers. In the article cited above, the Wall Street Journal quotes Otto Eckstein as conceding that his forecasts sometimes reflect an optimistic view: “Data Resources is the most influential forecasting firm in the country… If it were in the hands of a doom-and-gloomer, it would be bad for the country.”

– John Sterman, A Skeptic’s Guide to Computer Models

As a historical note, GI – Global Insight, maker of the model used by Heritage CDA for the Ryan analysis – is the product of a Wharton/DRI merger, though it appears that the use of the GI model may have been outside their purview in this case.

What’s the cure? I’m not sure there is one as long as people are cherry-picking plausible sounding arguments to back up their preconceived notions or narrow self-interest. But assuming that some people do want intelligent discourse, it’s fairly easy to get it by having high standards for model transparency and quality. This means more than peer review, which often entails only weak checks of face validity of output. It means actual interaction with models, supported by software that makes it easy to identify causal relationships and perform tests in extreme conditions. It also means archiving of models and results for long-term replication and quality improvement. It requires that modelers invest more in testing the limits of their own insights, communicating their learnings and tools, and fostering understanding of principles that help raise the average level of debate.
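
For instance, an extreme-conditions test can be as simple as the sketch below, run against a toy stock-flow model. The model and the pass/fail criteria are made up for illustration; a real model warrants a much fuller battery.

```python
# Sketch of an extreme-conditions test on a toy stock-flow model
# (a generic reservoir with an inflow and a proportional outflow).
# The model and the pass/fail criteria are illustrative assumptions.

def simulate(inflow, drain_frac, stock0=100.0, dt=0.25, horizon=40.0):
    """Euler integration of d(stock)/dt = inflow - drain_frac * stock."""
    stock, t, path = stock0, 0.0, []
    while t < horizon:
        outflow = drain_frac * stock
        stock += dt * (inflow - outflow)
        t += dt
        path.append(stock)
    return path

def extreme_conditions_ok():
    # 1. Shut the inflow off entirely: the stock should decay toward
    #    zero and never go negative.
    no_input = simulate(inflow=0.0, drain_frac=0.1)
    if min(no_input) < 0:
        return False
    # 2. Make the inflow absurdly large: the stock should grow but
    #    remain finite over the run (no numerical blowup to inf/nan).
    flood = simulate(inflow=1e9, drain_frac=0.1)
    if not all(x == x and x < float("inf") for x in flood):
        return False
    return True

print("extreme-conditions test passed:", extreme_conditions_ok())
```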

The delusional revenue side of the Ryan budget proposal

I think the many chapters of health care changes in the Ryan proposal are actually a distraction from the primary change. It’s this:

  • Provides individual income tax payers a choice of how to pay their taxes – through existing law, or through a highly simplified code …
  • Simplifies tax rates to 10 percent on income up to $100,000 for joint filers, and $50,000 for single filers; and 25 percent on taxable income above these amounts. … [A minor quibble: it’s stupid to have a stepwise tax rate, especially with a huge jump from 10 to 25%. Why can’t congress get a grip on simple ideas like piecewise linearity?]
  • Eliminates the alternative minimum tax [AMT].
  • Promotes saving by eliminating taxes on interest, capital gains, and dividends; also eliminates the death tax.
  • Replaces the corporate income tax – currently the second highest in the industrialized world – with a border-adjustable business consumption tax of 8.5 percent. …

This ostensibly results in a revenue trajectory that rises to a little less than 19% of GDP, roughly the postwar average. The CBO didn’t analyze this; it used a trajectory from Ryan’s staff. The numbers appear to me to be delusional.

For sub-$50k returns in the new 10% bracket, this does not appear to be a break. Of those returns, currently over 2/3 pay less than a 5% average tax rate. It’s not clear what the distribution of income is within this bracket, but an individual would only have to make about $25k to be worse off than the median earner, it appears. The same appears to be true in the $100k-200k bracket. A $150k return with a $39k exemption for a family of four would pay 18.5% on average, while the current median is 10-15%. This is certainly not a benefit to wage earners, though the net effect is ambiguous (to me at least) because of the change in treatment of asset income.
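
To make the arithmetic explicit, here’s a quick sketch of the average rate for a joint return under the simplified code, reading the 25% rate as the stepwise cliff described above (which reproduces the 18.5% figure); a marginal-bracket reading is included for comparison. The $39k exemption and $100k threshold are the ones in the proposal.

```python
# Back-of-the-envelope average tax rates under the simplified code,
# for a joint return with the $39,000 exemption discussed above.
# Two readings of "25 percent on taxable income above these amounts":
#  - "cliff": all taxable income is taxed at 25% once it exceeds $100k
#    (this reproduces the ~18.5% figure for a $150k return)
#  - "marginal": 10% on the first $100k, 25% only on the excess

EXEMPTION = 39_000
THRESHOLD = 100_000  # joint filers

def tax_cliff(income):
    taxable = max(income - EXEMPTION, 0)
    rate = 0.25 if taxable > THRESHOLD else 0.10
    return rate * taxable

def tax_marginal(income):
    taxable = max(income - EXEMPTION, 0)
    return 0.10 * min(taxable, THRESHOLD) + 0.25 * max(taxable - THRESHOLD, 0)

for income in (50_000, 150_000):
    print(income,
          "cliff: %.1f%%" % (100 * tax_cliff(income) / income),
          "marginal: %.1f%%" % (100 * tax_marginal(income) / income))
```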

The elimination of tax on interest, dividends and capital gains is really the big story here. For returns over $200k, wages are less than 42% of AGI. Interest, dividends and gains are over 35%. The termination of asset taxes means that taxes fall by about a third on high income returns (the elimination of the mortgage interest deduction does little to change that). The flat 25% marginal rate can’t possibly make up for this, because it’s not different enough from the ~20% median effective tax rate in that bracket. For the top 400 returns in the US, exemption of asset income would reduce the income basis by 70%, and reduce the marginal tax rate from the ballpark of 35% to 25%.

It seems utterly delusional to imagine that this somehow returns to something resembling the postwar average tax burden, unless setting taxes on assets to zero is accompanied by a net increase in other taxes (i.e. wages, which constitute about 70% of total income). That in turn implies a tax increase for the lower brackets, a substantial cut on returns over $200k, and a ginormous cut for the very highest earners.

This is all exacerbated by the simultaneous elimination of corporate taxes, which are already historically low and presumably have roughly the same incidence as individual asset income, making the cut another gift to the top decile. Rates fall from 35% at the margin to 8.5% on “consumption” (a misnomer – the title calls it a “business consumption tax,” but the language actually taxes “gross profits,” which is in turn a misnomer because investment is treated as a current-year expense). The repeal of the estate tax, of which 80% is currently collected on estates over $5 million (essentially 0% below $2 million), has a similar distributional effect.

I think it’s reasonable to discuss cutting corporate taxes, which do appear to be cross-sectionally high. But if you’re going to do that, you need to somehow maintain the distributional characteristics of the tax system, or come up with a rational reason not to, in the face of increasing inequity of wealth.

I can’t help wondering whether there’s any analysis behind these numbers, or if they were just pulled from a hat by lawyers and lobbyists. This simply isn’t a serious proposal, except for people who are serious about top-bracket tax cuts and drowning the government in a bathtub.

Given that the IRS knows the distribution of individual income in exquisite detail, and that much of the aggregate data needed to analyze proposals like those above is readily available on the web, it’s hard to fathom why anyone would even entertain the idea of discussing a complex revenue proposal like Ryan’s without some serious analytic support and visualization. This isn’t rocket science, or even bathtub dynamics. It’s just basic accounting – perfect stuff for a spreadsheet. So why are we reviewing this proposal with 19th century tools – an overwhelming legal text surrounded by a stew of bogus rhetoric?

The Ryan health care proposal

The Ryan budget proposal achieves the bulk of its savings by cutting health care outlays, particularly Medicare and Medicaid. The mechanism sounds a lot like a firm’s transition from a defined benefits pension plan to a defined contribution scheme. Medicaid becomes a system of block grants to states, and Medicare becomes a system of flat-rate vouchers. Along the way, it has some useful aspirations: to separate health insurance from employment and eliminate health’s favored tax status.

Reading some of the finer print, though, I don’t think it really fixes the fundamental flaws of the current system. It’s billed as “universal access” but that’s a misnomer. It guarantees universal access to a tax credit or voucher that can be used to purchase coverage, but not universal access to coverage. That’s because it doesn’t solve the adverse selection problem. As a result, any provider that doesn’t play the usual game of excluding anyone who’s really sick from coverage (using preexisting conditions and rotating plan changes) will suffer a variant of the utility death spiral: increasing costs drive the healthy out of the plan, leaving it to serve a diminishing set of members who had the misfortune to get sick, at an escalating cost.
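
The death spiral is easy to reproduce in a toy simulation: premiums set to cover last year’s average cost price the healthiest members out, which raises average cost, and so on. The parameters below are purely illustrative.

```python
import random

# Toy adverse-selection "death spiral": a pool of members with
# heterogeneous expected annual costs; the insurer sets next year's
# premium to cover this year's average cost plus a loading; members
# drop out when the premium exceeds their expected cost plus a
# willingness-to-pay margin. All parameters are illustrative.

random.seed(1)
members = [random.lognormvariate(7.5, 1.2) for _ in range(10_000)]  # expected cost, $
LOADING = 1.15       # admin/profit load on average cost
WTP_MARGIN = 2_000   # how much more than expected cost people will pay to be insured

premium = LOADING * sum(members) / len(members)
for year in range(1, 11):
    members = [c for c in members if c + WTP_MARGIN >= premium]
    if not members:
        print(f"year {year}: pool empty")
        break
    avg_cost = sum(members) / len(members)
    print(f"year {year}: {len(members):5d} members, premium ${premium:,.0f}")
    premium = LOADING * avg_cost
```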

Universal access to coverage is left to the states, who can create assigned risk pools or other methods to cover the uncoverable. Leaving things to the states strikes me as a reasonable strategy, because the health system is so complex that evolutionary learning is likely to beat the kind of deliberate design we’ll get out of congress. But it’s not clear to me that the proposal creates any real authority to raise money to support these assigned risk pools; without money, the state mechanisms will be rather perfunctory.

The real challenge seems to me to be to address three features of health:

  • Prevention beats cure by a long shot, in terms of both cost and quality of life. In the current system, patient churn through providers eliminates most of the provider-side incentive to address this. Patients have contributed by abdicating responsibility for their own health, and insurance exacerbates the problem by obscuring the costs of the quadruple bypass that follows from a life of Big Macs.
  • Health care expenditures are extremely skewed over one’s lifetime and within age cohorts. Good behavior can’t mitigate all risk, particularly the risk of getting old. (See below for a peek at the data.)
  • In some circumstances, the health care system is capable of expending an extremely large amount of resources on a person – sometimes for a miraculous outcome, and sometimes for rather marginal end-of-life extension.

What’s needed is a distributed way to share risk (which is why it’s called insurance), while preserving incentives for good behavior and matching total expenditures to resources. That’s a tall order. It’s not clear to me that the Ryan proposal tackles it in any serious way; it just extends the flaws of the current system to Medicare patients.

Per capita annual medical expenditures from the MEPS panel, by age and income. There’s surprisingly little variation by income, but a lot by age. The bill terminates the agency that collects this data.

Health expenditures by age and decile of cohort, showing the extreme concentration of expenditures at all ages.

The really fine print, the text of the bill itself, is daunting – 629 pages. This strikes me as simply unmanageable (like the deceased cap and trade legislation). There are simply too many opportunities for unintended consequences, and hidden agendas, in such a multifaceted approach, especially with the opaque analytic support available. Surely this could be tackled in a series of smaller bites – health, revenue, other expenditures. It calls to mind the criticism of the FAA’s repeated failure to redesign the air traffic control system, “you can’t design a system that evolved.” Well, maybe you can, but not with the kind of tools and discourse that now prevail.

A walk through the Ryan budget proposal

Since the budget deal was announced, I’ve been wondering what was in it. It’s hard to imagine that it really works like this:

“This is an agreement to invest in our country’s future while making the largest annual spending cut in our history,” Obama said.

However, it seems that there isn’t really much substance to the deal yet, so I thought I’d better look instead at one target: the Ryan budget roadmap. The CBO recently analyzed it, and put the $ conveniently in a spreadsheet.

Like most spreadsheets, this is very good at presenting the numbers, and lousy at revealing causality. The projections are basically open-loop, but they run to 2084. There’s actually some justification for open-loop budget projections, because many policies are open loop. The big health and social security programs, for example, are driven by demographics, cutoff ages and inflation adjustment formulae. The demographics and cutoff ages are predictable. It’s harder to fathom the possible divergence between inflation adjustments and broad inflation (which affects the health sector share) and future GDP growth. So, over long horizons, it’s a bit bonkers to look at the system without considering feedback, or at least uncertainty in the future trajectory of some key drivers.
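
Even a crude feedback makes a big difference over a 70-year horizon. Here’s a minimal debt-dynamics sketch, with and without interest on accumulated debt feeding back into the deficit; the parameters are stand-ins, not CBO or Ryan-roadmap values.

```python
# Minimal illustration of open-loop vs. closed-loop budget projection.
# Open loop: debt/GDP accumulates a fixed primary deficit.
# Closed loop: interest on the accumulated debt feeds back into the
# deficit. All parameters are stand-ins, not CBO or roadmap values.

PRIMARY_DEFICIT = 0.02   # primary deficit, fraction of GDP per year
INTEREST = 0.05          # nominal interest rate on debt
GDP_GROWTH = 0.04        # nominal GDP growth rate
YEARS = 70               # roughly the 2014-2084 horizon

def project(with_interest_feedback):
    debt = 0.70           # initial debt/GDP
    for _ in range(YEARS):
        interest = INTEREST * debt if with_interest_feedback else 0.0
        # debt/GDP evolves with the deficit, deflated by GDP growth
        debt = (debt + PRIMARY_DEFICIT + interest) / (1 + GDP_GROWTH)
    return debt

print("open-loop debt/GDP after %d yr:  %.2f" % (YEARS, project(False)))
print("with interest feedback:          %.2f" % project(True))
```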

There’s also a confounding annoyance in the presentation, with budgets and debt as percentages of GDP. Here’s revenue and “other” expenditures (everything but social security, health and interest):

There’s a huge transient in each, due to the current financial mess. (Actually this behavior is to some extent deliberately Keynesian – the loss of revenue in a recession is amplified over the contraction of GDP, because people fall into lower tax brackets and profits are more volatile than gross activity. Increased borrowing automatically takes up the slack, maintaining more stable spending.) The transient makes it tough to sort out what’s real change, and what is merely the shifting sands of the GDP denominator. This graph also points out another irritation: there’s no history. Is this plausible, or unprecedented behavior?

The Ryan team actually points out some of the same problems with budgets and their analyses:

One reason the Federal Government’s major entitlement programs are difficult to control is that they are designed that way. A second is that current congressional budgeting provides no means of identifying the long-term effects of near-term program expansions. A third is that these programs are not subject to regular review, as annually appropriated discretionary programs are; and as a result, Congress rarely evaluates the costs and effectiveness of entitlements except when it is proposing to enlarge them. Nothing can substitute for sound and prudent policy choices. But an improved budget process, with enforceable limits on total spending, would surely be a step forward. This proposal calls for such a reform.

Unfortunately the proposed reforms don’t seem to change anything about the process for analyzing the budget or designing programs. We need transparent models with at least a little bit of feedback in them, and programs that are robust because they’re designed with that little bit of feedback in mind.

Setting aside these gripes, here’s what I can glean from the spreadsheet.

The Ryan proposal basically flatlines revenue at 19% of GDP, then squashes programs to fit. By contrast, the CBO Extended Baseline scenario expands programs per current rules and then raises revenue to match (very roughly – the Ryan proposal actually winds up with slightly more public debt 20 years from now).

It’s not clear how the 19% revenue level arises; the CBO used a trajectory from Ryan’s staff, not its own analysis. Ryan’s proposal says:

  • Provides individual income tax payers a choice of how to pay their taxes – through existing law, or through a highly simplified code that fits on a postcard with just two rates and virtually no special tax deductions, credits, or exclusions (except the health care tax credit).
  • Simplifies tax rates to 10 percent on income up to $100,000 for joint filers, and $50,000 for single filers; and 25 percent on taxable income above these amounts. Also includes a generous standard deduction and personal exemption (totaling $39,000 for a family of four).
  • Eliminates the alternative minimum tax [AMT].
  • Promotes saving by eliminating taxes on interest, capital gains, and dividends; also eliminates the death tax.
  • Replaces the corporate income tax – currently the second highest in the industrialized world – with a border-adjustable business consumption tax of 8.5 percent. This new rate is roughly half that of the rest of the industrialized world.

It’s not clear that there’s any analysis to back up the effects of this proposal. Certainly it’s an extremely regressive shift. Real estate fans will flip when they find out that the mortgage interest deduction is gone (actually a good idea, I think).

On the outlay side, here’s the picture (CBO in solid lines; Ryan proposal with dashes):

You can see several things here:

  • Social security is untouched until some time after 2050. CBO says that the proposal doesn’t change the program; Ryan’s web site partially privatizes it after about a decade and “eventually” raises the retirement age. There seems to be some disconnect here.
  • Health care outlays are drastically lower; this is clearly where the bulk of the savings originate. Even so, there’s not much change in the trend until at least 2025 (the initial absolute difference is definitional – inclusion of programs other than Medicare/Medicaid in the CBO version).
  • Other noninterest outlays also fall substantially – presumably this means that all other expenditures would have to fit into a box not much bigger than today’s defense budget, which seems like a heroic assumption even if you get rid of unemployment, SSI, food stamps, Section 8, and all similar support programs.

You can also look at the ratio of outlays under Ryan vs. CBO’s Extended Baseline:

[Figure: ratio of outlays under the Ryan proposal vs. the CBO Extended Baseline]

Since health care carries the flag for savings, the question is, will the proposal work? I’ll take a look at that next.

Then & Now

Time has an interesting article on the climate policy positions of the GOP front runners. It’s amazing how far we’ve backed away from regulating greenhouse emissions:

  • Then: Pawlenty signed the Next Generation Energy Act of 2007 in Minnesota, which called for a plan to “recommend how the state could adopt a regulatory system that imposes a cap on the aggregate air pollutant emissions of a group of sources.” Now: the current Tim Pawlenty line on carbon is that “cap and trade would be a disaster.”
  • Then: here was Mitt Romney in Iowa in 2007, voicing concern about man-made global warming while supporting more government subsidies for new energy sources, new efficiency standards, and a new global carbon treaty. Now: Romney regularly attacks Barack Obama for pushing a cap and trade system through Congress.

And so on…

I can’t say that I’ve ever been much of a cap and trade fan, and I’d lay a little of the blame for our current sorry state at the door of cap and trade supporters who were willing to ignore what a bloated beast the bills had become. Not much, though. Most of the blame falls to the anti-science and let’s-pretend-externalities-don’t-exist crowds, who wouldn’t give a carbon tax the time of day either.

How to be confused about nuclear safety

There’s been a long running debate about nuclear safety, which boils down to, what’s the probability of significant radiation exposure? That in turn has much to do with the probability of core meltdowns and other consequential events that could release radioactive material.

I asked my kids about an analogy to the problem: determining whether a die was fair. They concluded that it ought to be possible to simply roll the die enough times to observe whether the outcome was fair. Then I asked them how that would work for rare events – a thousand-sided die, for example. No one wanted to roll the dice that much, but they quickly hit on the alternative: use a computer. But then, they wondered, how do you know if the computer model is any good?

Those are basically the choices for nuclear safety estimation: observe real plants (slow, expensive), or use models of plants.

If you go the model route, you introduce an additional layer of uncertainty, because you have to validate the model, which in itself is difficult. It’s easy to misjudge reactor safety by doing six things:

  • Ignore the dynamics of the problem. For example, use a statistical model that doesn’t capture feedback. Presumably there have been a number of reinforcing feedbacks operating at the Fukushima site, causing spillovers from one system to another, or one plant to another:
    • Collateral damage (catastrophic failure of part A damages part B)
    • Contamination (radiation spewed from one reactor makes it unsafe to work on others)
    • Exhaustion of common resources (operators, boron)
  • Ignore the covariance matrix. This can arise in part from ignoring the dynamics above. But there are other possibilities as well: common design elements, or colocation of reactors, that render failure events non-independent (see the sketch after this list).
  • Model an idealized design, not a real plant: ignore components that don’t perform to spec, nonlinearities in responses to extreme conditions, and operator error.
  • Draw a narrow boundary around the problem. Over the last week, many commentators have noted that reactor containment structures are very robust, and explicitly designed to prevent a major radiation release from a worst-case core meltdown. However, that ignores spent fuel stored outside of containment, which is apparently a big part of the Fukushima hazard now.
  • Ignore the passage of time. This can both help and hurt: newer reactor designs should benefit from learning about problems with older ones; newer designs might introduce new problems; life extension of old reactors introduces its own set of engineering issues (like neutron embrittlement of materials).
  • Ignore the unknown unknowns (easy to say, hard to avoid).
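
To see why the covariance point matters, here’s a crude Monte Carlo sketch comparing the chance of multiple units failing at one site under independence vs. a shared common-cause shock. All the probabilities are made up for illustration.

```python
import random

# Crude Monte Carlo: probability that two or more of N colocated
# reactors fail in a year, assuming independent failures only vs.
# adding a shared common-cause shock (e.g. loss of offsite power for
# the whole site). All probabilities are made up for illustration.

random.seed(42)
N_UNITS = 4
P_INTERNAL = 1e-4        # per-unit, per-year "internal" failure probability
P_SHOCK = 5e-4           # per-site, per-year common-cause event
P_FAIL_GIVEN_SHOCK = 0.5 # chance each unit fails, given the shared shock
TRIALS = 200_000

def failures(common_cause):
    shock = common_cause and (random.random() < P_SHOCK)
    p = P_INTERNAL + (P_FAIL_GIVEN_SHOCK if shock else 0.0)
    return sum(random.random() < p for _ in range(N_UNITS))

for label, cc in (("independent only", False), ("with common cause", True)):
    multi = sum(failures(cc) >= 2 for _ in range(TRIALS))
    # Under independence, >=2 simultaneous failures is so rare it will
    # usually show up as zero here; the common-cause case is orders of
    # magnitude more likely.
    print(f"{label}: P(>=2 units fail in a year) ~ {multi / TRIALS:.1e}")
```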

I haven’t read much of the safety literature, so I can’t say to what extent the above issues apply to existing risk analyses based on statistical models or detailed plant simulation codes. However, I do see a bit of a disconnect between actual performance and risk numbers that are often bandied about from such studies: the canonical risk of 1 meltdown per 10,000 reactor years, and other even smaller probabilities on the order of 1 per 100,000 or 1,000,000 reactor years.

I built myself a little model to assess the data, using WNA data to estimate reactor-years of operation and a wiki list of accidents. One could argue at length which accidents should be included. Only light water reactors? Only modern designs? I tend to favor a liberal policy for including accidents. As soon as you start coming up with excuses to exclude things, you’re headed toward an idealized world view, where operators are always faithful, plants are always shiny and new, or at least retired on schedule, etc. Still, I was a bit conservative: I counted 7 partial or total meltdown accidents in commercial or at least quasi-commercial reactors, including Santa Susana, Fermi, TMI, Chernobyl, and Fukushima (I think I missed Chapelcross). Then I looked at maximum likelihood estimates of meltdown frequency over various intervals. Using all the data, assuming Poisson arrivals of meltdowns, you get .6 failures per thousand reactor-years (95% confidence interval .3 to 1). That’s up from .4 [.1,.8] before Fukushima. Even if you exclude the early incidents and Fukushima, you’re looking at .2 [.04,.6] meltdowns per thousand reactor years – twice the 1-per-10,000 target. For the different subsets of the data, the estimates translate to an expected meltdown frequency of about once to thrice per decade, assuming continuing operations of about 450 reactors. That seems pretty bad.
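
For reference, the frequency estimate itself is simple. The sketch below uses the 7 events counted above; the reactor-years figure is just a placeholder to be replaced with the WNA-derived exposure, and the interval is the standard exact Poisson confidence interval.

```python
from scipy.stats import chi2

# Maximum likelihood estimate of meltdown frequency, with an exact
# Poisson confidence interval. The event count (7) is from the text;
# REACTOR_YEARS is a placeholder -- substitute the WNA-derived
# exposure for the interval of interest.

EVENTS = 7
REACTOR_YEARS = 12_000   # placeholder, not the WNA number
ALPHA = 0.05

mle = EVENTS / REACTOR_YEARS
lower = chi2.ppf(ALPHA / 2, 2 * EVENTS) / (2 * REACTOR_YEARS)
upper = chi2.ppf(1 - ALPHA / 2, 2 * (EVENTS + 1)) / (2 * REACTOR_YEARS)

print("meltdowns per 1000 reactor-years: "
      f"{1000*mle:.2f} (95% CI {1000*lower:.2f} to {1000*upper:.2f})")

# Expected meltdowns per decade with ~450 reactors operating:
print(f"expected per decade at 450 reactors: {mle * 450 * 10:.1f}")
```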

In other words, the actual experience of rolling the dice seems to be yielding a riskier outcome than risk models suggest. One could argue that most of the failing reactors were old, built long ago, or poorly designed. Maybe so, but will we ever have a fleet of young reactors, designed and operated by demigods? That’s not likely, but surely things will get somewhat better with the march of technology. So, the question is, how much better? Areva’s 10x improvement seems inadequate if it’s measured against the performance of existing plants, at least if we plan to grow the plant fleet by much more than a factor of 10 to replace fossil fuels. There are newer designs around, but they depart from the evolutionary path of light water reactors, which means that “past performance is no indication of future returns” applies – will greater passive safety outweigh the effects of jumping to a new, less mature safety learning curve?

It seems to me that we need models of plant safety that square with the actual operational history of plants, to reconcile projected risk with real-world risk experience. If engineers promote analysis that appears unjustifiably optimistic, the public will do what it always does: discount the results of formal models, in favor of mental models that may be informed by superstition and visions of mushroom clouds.

The House Climate Science Hearing

Science’s Eli Kintisch and Gavin Schmidt liveblogged the House hearing on climate science this morning. My favorite tidbits:

Gavin Schmidt:

One theme that will be constant is that unilateral action by the US is meaningless if everyone else continues with business as usual. However, this is not an ethical argument for not doing anything. Edmund Burke (an original conservative) rightly said: “Nobody made a greater mistake than he who did nothing because he could do only a little.” http://www.realclimate.org/index.php/archives/2009/05/the-tragedy-of-climate-commons/

Eli Kintisch:

If my doctor told me I had cancer, says Waxman, “I wouldn’t scour the country to find someone who said I didn’t need [treatment]”

[Comment from Roger Pielke, Jr.:]

Because Congress has granted EPA authority to regulate, and the agency has followed its legislative mandate. If Congress wants to change how EPA operates, fine, but it must do it comprehensively, not by seeking to overturn the endangerment finding via fiat.

[Comment from Steven Leibo, Ph.D.:]

If republicans thought this hearing would be helpful for their cause, it was surely a big mistake… that from a non-scientist.

[Comment from J Bowers:]

There are no car parks or air conditioners in space.

Eli Kintisch:

Burress: US had popular “revulsion” against the Waxman Markey bill. “Voting no was not enough…people wanted us to stop that thing dead in its tracks” No action by India and China…

[Comment from thingsbreak:]

This India and China bashing is perverse, from an emissions “pie slicing” perspective.

Eli Kintisch:

Inslee: “embarrassment” that “chronic anti-science” syndrome by Republicans. Colleagues in GOP won’t believe, he says, “until the entire Antarctic ice sheet has melted or hell has frozen over”

Eli Kintisch:

Rep Griffith (R-Va): Asks about melting ice caps on Mars. Is sun getting brighter, he asks?

[Comment from thingsbreak:]

Mars ice caps melting. Drink!

[Comment from Roger Pielke, Jr.:]

Mars ice caps, snore!

Eli Kintisch:

In general I would say this hearing is a disappointment: the issue of whether congress can/should have a close control on EPA decisions is at least an interesting one that different people who are reasonable can disagree about.

So far little discussion of that issue at all. 🙁

Maybe because these are scientists the real issue is just not coming up. Weird hearing.

Eli Kintisch:

Waxman: I would hate to see Congress take a position “that the science was false” by passing/marking up HR 910; wants to slow mark up on tuesday. But Whitfield disagrees; says that markup on thursday will proceed and debate will go on then…

Eli Kintisch:

Rush (who is the ranking member on this subcommittee) also asks Whitfield to delay the thursday markup. “Force.. the American people…we should be more deliberative”

Gavin Schmidt:

So that’s that. I can’t say I was particularly surprised at how it went. Far too much cherry-picking, strawman arguments and posturing. Is it possible to have substantive discussion in public on these issues?

I think I shouldn’t have peeked into the sausage machine.