Will complex designs win the nuclear race?

Areva pursues “defense in depth” for reactor safety:

Areva SA (CEI) Chief Executive Officer Anne Lauvergeon said explosions at a Japanese atomic power site in the wake of an earthquake last week underscore her strategy to offer more complex reactors that promise superior safety.

“Low-cost reactors aren’t the future,” Lauvergeon said on France 2 television station yesterday. “There was a big controversy for one year in France about the fact that our reactors were too safe.”

Lauvergeon has been under pressure to hold onto her job amid delays at a nuclear plant under construction in Finland. The company and French utility Electricite de France SA, both controlled by the state, lost a contract in 2009 worth about $20 billion to build four nuclear stations in the United Arab Emirates, prompting EDF CEO Henri Proglio to publicly question the merits of Areva’s more complex and expensive reactor design.

Areva’s new EPR reactors, being built in France, Finland and China, boast four independent safety sub-systems that are supposed to reduce core accidents by a factor of 10 compared with previous reactors, according to the company.

The design has a double concrete shell to withstand missiles or a commercial plane crash, systems designed to prevent hydrogen accumulation that may cause radioactive release, and a core catcher in the containment building in the case of a meltdown. To withstand severe earthquakes, the entire nuclear island stands on a single six-meter (19.6 feet) thick reinforced concrete base, according to Paris-based Areva.

via Bloomberg

I don’t doubt that the Areva design is far better than the reactors now in trouble in Japan. But I wonder if this is really the way forward. Big, expensive hardware that uses multiple redundant safety systems to offset the fundamentally marginal stability of the reaction might indeed work safely, but it doesn’t seem very deployable on the kind of scale needed for either GHG emissions mitigation or humanitarian electrification of the developing world. The financing comes in overly large bites, huge piles of concrete lengthen the energy and emissions payback periods, and it would take ages to ramp up construction and training enough to make a dent in the global challenge.

I suspect that the future – if there is one – lies with simpler designs that come in smaller portions and trade some performance for inherent stability and antiproliferation features. I can’t say whether their technology can actually deliver on the promises, but at least TerraPower – for example – has the right attitude:

“A cheaper reactor design that can burn waste and doesn’t run into fuel limitations would be a big thing,” Mr. Gates says.

However, even simple/small-is-beautiful may come rather late in the game from a climate standpoint:

While Intellectual Ventures has caught the attention of academics, the commercial industry – hoping to stimulate interest in an energy source that doesn’t contribute to global warming – is focused on selling its first reactors in the U.S. in 30 years. The designs it’s proposing, however, are essentially updates on the models operating today. Intellectual Ventures thinks that the traveling-wave design will have more appeal a bit further down the road, when a nuclear renaissance is fully under way and fuel supplies look tight. – Technology Review

Not surprisingly, the evolution of the TerraPower design relies on models,

Myhrvold: When you put a software guy on an energy project he turns it into a software project. One of the reasons we’re innovating around nuclear is that we put a huge amount of energy into computer modeling. We do very extensive computer modeling and have better computer modeling of reactor internals than anyone in the world. No one can touch us on software for designing the reactor. Nuclear is really expensive to do experiments on, so when you have good software it’s way more efficient and a shorter design cycle.

Computing is something that is very important for nuclear. The first fast reactors, which TerraPower is, were basically designed in the slide rule era. It was stunning to us that the guys back then did what they did. We have these incredibly accurate simulations of isotopes and these guys were all doing it with slide rules. My cell phone has more computing power than the computers that were used to design the world’s nuclear plants.

It’ll be interesting to see whether current events kindle interest in new designs, or throw the baby out with the bathwater (is it a regular baby, or a baby Godzilla?). From a policy standpoint, the trick is to create a level playing field for competition among nuclear and non-nuclear technologies, where government participation in the fuel cycle has been overwhelming and risks are thoroughly socialized.

The Rise and Fall of the Saturday Evening Post

Replicated by David Sirkin and Julio Gomez from Hall, R. I. 1976. A system pathology of an organization: The rise and fall of the old Saturday Evening Post. Administrative Science Quarterly 21(2): 185-211. (JSTOR link). Just updated for newer Vensim versions.

This is one of the classic models on the Desert Island Dynamics list.

There are some units issues, preserved from the original by David and Julio. As I update it, I also wonder if there are some inconsistencies in the accounting for the subscription pipeline. Please report back here if you find anything interesting.

satevepost2011b.mdl

satevepost2011b.vmf

Fortunately, the core ended up on the floor

I’ve been sniffing around for more information on the dynamics of boiling water reactors, particularly in extreme conditions. Here’s what I can glean (caveat: I’m not a nuclear engineer).

It turns out that there’s quite a bit of literature on reduced-form models of reactor operations. Most of this, though, is focused on operational issues that arise from nonlinear dynamics, on a time scale of less than a second or so. (Update: I’ve posted an example of such a model here.)

[Figure: block diagram of a reduced-form BWR model]

Source: Instability in BWR NPPs – F. Maggini 2004

Those are important – it was exactly those kinds of fast dynamics that led to disaster when operators took the Chernobyl plant into unsafe territory. (Fortunately, the Chernobyl design is not widespread.)

However, I don’t think those are the issues that are now of interest. The Japanese reactors are now far from their normal operating point, and the dynamics of interest have time scales of hours, not seconds. Here’s a map of the territory:

[Figure: map of core power vs. coolant flow during shutdown]

Source: Instability in BWR NPPs – F. Maggini 2004
colored annotations by me.

The horizontal axis is coolant flow through the core, and the vertical axis is core power – i.e. the rate of heat generation. The green dot shows normal full-power operation. The upper left part of the diagram, above the diagonal, is the danger zone, where high power output and low coolant flow create the danger of a meltdown – like driving your car over a mountain pass with nothing in the radiator.

It’s important to realize that there are constraints on how you move around this diagram. You can quickly turn off the nuclear chain reaction in a reactor, by inserting the control rods, but it takes a while for the power output to come down, because there’s a lot of residual heat from nuclear decay products.

On the other hand, you can turn off the coolant flow pretty fast – turn off the electricity to the pumps, and the flow will stop as soon as the momentum of the fluid is dissipated. If you were crazy enough to turn off the cooling without turning down the power (yellow line), you’d have an immediate catastrophe on your hands.

In an orderly shutdown, you turn off the chain reaction, then wait patiently for the power to come down, while maintaining coolant flow. That’s initially what happened at the Fukushima reactors (blue line). Seismic sensors shut down the reactors, and an orderly cool-down process began.

After an hour, things went wrong when the tsunami swamped backup generators. Then the reactor followed the orange line to a state with near-zero coolant flow (whatever convection provides) and nontrivial power output from the decay products. At that point, things start heating up. The process takes a while, because there’s a lot of thermal mass in the reactor, so if cooling is quickly restored, no harm done.
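For a rough sense of the time scales involved, here’s a back-of-the-envelope sketch using the Way-Wigner rule of thumb for decay heat. It’s a textbook approximation, and the power level, operating history, and water inventory below are my own illustrative assumptions, not plant data:

```python
# Back-of-envelope decay heat and boil-off time after a scram.
# Decay power uses the Way-Wigner rule of thumb:
#   P(t)/P0 ~ 0.066 * (t^-0.2 - (t + T_op)^-0.2),  t in seconds since shutdown.
# All plant numbers below are rough assumptions, for illustration only.

P_FULL = 1.4e9            # W, assumed full thermal power of one unit
T_OP = 1.5 * 365 * 86400  # s, assumed prior operating time (~1.5 years)
WATER_MASS = 3.0e5        # kg, assumed water inventory in/around the core
LATENT_HEAT = 2.26e6      # J/kg, to boil water already near saturation

def decay_fraction(t):
    """Fraction of full thermal power remaining t seconds after shutdown."""
    return 0.066 * (t ** -0.2 - (t + T_OP) ** -0.2)

for label, t in [("1 minute", 60), ("1 hour", 3600),
                 ("1 day", 86400), ("1 week", 7 * 86400)]:
    print(f"{label:>8}: {100 * decay_fraction(t):4.2f}% of full power")

# If cooling stops an hour after shutdown, roughly how long until the
# assumed water inventory boils away?
decay_power = P_FULL * decay_fraction(3600)   # W
print(f"rough boil-off time: {WATER_MASS * LATENT_HEAT / decay_power / 3600:.0f} hours")
```

The point isn’t the specific numbers – it’s that decay heat falls off slowly, so the heat-up unfolds over hours, and every hour gained makes the cooling problem a little easier.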

If cooling isn’t restored, a number of positive feedbacks (nasty vicious cycles) can set in. Boiling in the reactor vessel necessitates venting (releasing small amounts of mostly short-lived radioactive materials); if venting fails, the reactor vessel can fail from overpressure. Boiling reduces the water level in the reactor and makes heat transfer less efficient; fuel rods that boil dry heat up much faster. As fuel rods overheat, their zirconium cladding reacts with water to make hydrogen – which can explode when vented into the reactor building, as we apparently saw at reactors 1 & 3. That can cause collateral damage to systems or people, making it harder to restore cooling.

Things get worse as heat continues to accumulate. Melting fuel rods dump debris in the reactor, obstructing coolant flow, again making it harder to restore cooling. Ultimately, melted fuel could concentrate in the bottom of the reactor vessel, away from the control rods, making power output go back up (following the red line). At that point, it’s likely that the fuel is going to end up in a puddle on the floor of the containment building. Presumably, at that point negative feedback reasserts dominance, as fuel is dispersed over a large area, and can cool passively. I haven’t seen any convincing descriptions of this endgame, but nuclear engineers seem to think it benign – at least compared to Chernobyl. At Chernobyl, there was one less balancing feedback loop (ineffective containment) and an additional reinforcing feedback: graphite in the reactor, which caught fire.

So, the ultimate story here is a race against time. The bad news is that if the core is dry and melting, time is not on your side as you progress faster and faster up the red line. The good news is that, as long as that hasn’t happened yet, time is on the side of the operators – the longer they can hold things together with duct tape and seawater, the less decay heat they have to contend with. Unfortunately, it sounds like we’re not out of the woods yet.

Boiling Water Reactor Dynamics

Replicated from “Hybrid Simulation of Boiling Water Reactor Dynamics Using A University Research Reactor” by James A. Turso, Robert M. Edwards, Jose March-Leuba, Nuclear Technology vol. 110, Apr. 1995.

This is a simple 5th-order representation of the operation of a boiling water reactor around its normal operating point, which is subject to interesting limit cycle dynamics.

The original article documents the model well, with the exception of the bifurcation parameter K and a nonlinear term, for which I’ve identified plausible values by experiment.

TursoNuke1.mdl
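For what it’s worth, here’s a sketch of the kind of parameter sweep involved – this is not the Turso model, just the Hopf normal form, where a limit cycle appears once the bifurcation parameter K crosses zero:

```python
# Not the Turso et al. model -- just an illustration of identifying a
# plausible bifurcation parameter by experiment: sweep K in the Hopf
# normal form
#   dx/dt = K*x - y - x*(x^2 + y^2)
#   dy/dt = x + K*y - y*(x^2 + y^2)
# and record the amplitude the trajectory settles into.  A limit cycle
# (nonzero amplitude) appears once K crosses zero.
import numpy as np

def settled_amplitude(K, dt=0.01, steps=20000):
    x, y = 0.1, 0.0                      # small initial disturbance
    for _ in range(steps):               # crude Euler integration
        dx = K * x - y - x * (x**2 + y**2)
        dy = x + K * y - y * (x**2 + y**2)
        x, y = x + dt * dx, y + dt * dy
    return np.hypot(x, y)

for K in [-0.2, -0.05, 0.0, 0.05, 0.2]:
    print(f"K = {K:+.2f}  ->  settled amplitude ~ {settled_amplitude(K):.3f}")
```

The procedure for the full model is the same idea: sweep the parameter and watch for the onset of sustained oscillation.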

Nuclear accident dynamics

There’s been a lot of wild speculation about the nuclear situation in Japan. Reporters were quick to start a “countdown to meltdown” based on only the sketchiest information about problems at plants, and then were quick to wonder if our troubles were over because the destruction of the containment structure at Fukushima I-1 didn’t breach the reactor vessel, based on equally sketchy information. Now the cycle repeats for reactor 3. Here’s my take on the fundamentals of the situation.

Boiling water reactors (BWRs), like those at Fukushima, are not inherently stable in all states. For a system analogy, think of a pendulum. It’s stable when it’s hanging, as in a grandfather clock. If you disturb it, it will oscillate for a while, but eventually return to hanging quietly. On the other hand, an inverted pendulum, where the arm stands above the pivot, like a broom balanced on your palm, is unstable – a small disturbance that starts it tipping is reinforced by gravity, and it quickly falls over.

Still, it is possible to balance a broom on your palm for a long time, if you’re diligent about it. The system of an inverted broomstick plus a careful person controlling it is stable, at least over a reasonable range of disturbances. Similarly, a BWR is at times dependent on a functional control system to maintain stability. Damage the control system (or tickle the broom-balancer), and the system may spiral out of control.
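To make the analogy concrete, here’s a minimal sketch (my illustration, not a reactor model): linearize the hanging and inverted pendulum, check eigenvalues, then add simple proportional-derivative feedback to the inverted case:

```python
# Hanging vs. inverted pendulum, linearized about vertical
# (state = [angle, angular velocity]):
#   hanging:  angle'' = -(g/L)*angle - d*angle'   (stable)
#   inverted: angle'' = +(g/L)*angle - d*angle'   (unstable)
# Simple PD feedback u = -kp*angle - kd*angle' stabilizes the inverted case --
# the sense in which the "broom + careful person" system is stable.
import numpy as np

g, L, d = 9.81, 1.0, 0.2
kp, kd = 20.0, 5.0   # assumed controller gains, for illustration

systems = {
    "hanging":             [[0, 1], [-(g / L), -d]],
    "inverted":            [[0, 1], [+(g / L), -d]],
    "inverted + feedback": [[0, 1], [(g / L) - kp, -d - kd]],
}

for name, A in systems.items():
    eig = np.linalg.eigvals(np.array(A, dtype=float))
    stable = all(e.real < 0 for e in eig)
    print(f"{name:>20}: eigenvalues {np.round(eig, 2)}, stable={stable}")
```

The inverted system has an eigenvalue in the right half plane – only the feedback keeps it upright, which is exactly the sense in which the plant depends on its control system.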

An inverted broom is, of course, an imperfect analogy for a nuclear power plant. A broom can be described by just a few variables – its angular and translational position and momentum. Those are all readily observable within a tenth of a second or so. A BWR, on the other hand, has hundreds of relevant state variables – pressure and temperature at various points, the open or closed states of valves, etc. Presumably some have a lot of inertia, implying long delays in changing them. Many states are not directly observable – they have to be inferred from measurements at other points in the system. Unfortunately, those measurements are sometimes unreliable, leaving operators wondering whether the water in area A is rising because valve B failed to close, or whether it’s just a faulty sensor.

No one can manage a 10th or 100th order differential equation with uncertain measurements in their head – yet that is essentially the task facing the Fukushima operators now. Their epic challenge is compounded by a number of reinforcing feedbacks.

  • First, there’s collateral damage, which creates a vicious cycle: part A breaks down, causing part B to overheat, causing part C to blow up, which ignites adjacent (but unrelated) part D, and so on. The destruction of the containment building around reactor 1 has to be the ultimate example of this. It’s hard to imagine that much of the control system remains functional after such a violent event – and that makes escalation of problems all the more likely.
  • Second, there are people in the loop. Managing a BWR in routine conditions is essentially boring. Long periods of boredom, punctuated by brief periods of panic, do not create conditions for good management decisions. Mistakes cause irreversible damage, worsening the circumstances under which further decisions must be made – another vicious cycle.
  • Third, there’s contamination. If things get bad enough, you can’t even safely approach the system to measure or fix it.

It appears that the main fallback for the out-of-control reactors is to exploit the most basic balancing feedback loop: pump a lot of water in to carry off heat, while you figure out what to do next. I hope it works.
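For a sense of scale, here’s a rough estimate of the water flow needed to carry off the decay heat (assumed numbers, for illustration only):

```python
# Rough estimate of the water flow needed to carry off decay heat,
# assuming ~1% of an assumed 1.4 GW(thermal) unit remains as decay power.
DECAY_POWER = 0.01 * 1.4e9   # W, assumed
CP_WATER = 4186.0            # J/(kg K)
DELTA_T = 60.0               # K, assumed temperature rise of injected water
LATENT_HEAT = 2.26e6         # J/kg, if the water is simply boiled off

sensible_flow = DECAY_POWER / (CP_WATER * DELTA_T)   # kg/s, if only warmed
boiloff_flow = DECAY_POWER / LATENT_HEAT             # kg/s, if boiled away
print(f"flow to absorb heat by warming water ~ {sensible_flow:.0f} kg/s")
print(f"flow if the water is boiled off      ~ {boiloff_flow:.0f} kg/s")
```

Flows of that order are modest – the challenge is sustaining them with damaged equipment.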

Meanwhile, on the outside, some observers seem inexplicably optimistic – they cheerfully conclude that, because the reactor vessel itself remains intact (hopefully), the system works due to its redundant safety measures. Commentators on past accidents have said much the same thing. The problem was that, when the dust settled, the situation often proved much worse than thought at the time, and safety systems sometimes contributed as much to problems as they solved – not a huge surprise in a very complex system.

We seem to be learning the wrong lessons from such events:

The presidential commission investigating the Three Mile Island accident learned that the problems rested with people, not technology. http://www.technologyreview.com/article/23907/

This strikes me as absurd. No technology exists in a vacuum; technologies must be appropriate to the people who operate them. A technology that requires perfect controllers for safe operation is a problem, because there’s no such thing.

If there’s a future for nuclear, I think it’ll have to lie with designs that incorporate many more passive safety features – the reactor system, absent control inputs, has to look a lot more like a hanging pendulum than a balanced broom, so that when the unlikely happens, it reacts benignly.

Earthquake stats & complex systems

I got curious about the time series of earthquakes around the big one in Japan after a friend posted a link to the USGS quake map of the area.

The data actually show a swarm of quakes before the big one – but looking at the data, it appears that those are a separate chain of events, beginning with a magnitude 7.2 on the 9th. By the 10th, it seemed like those events were petering out, though perhaps they set up the conditions for the 8.9 on the 11th. You can also see this on the USGS movie.

[Figure: earthquake magnitudes over time around the main shock]

Compared with recent global seismicity, the event is remarkably big by the count of significant-magnitude quakes:

[Figure: count of significant quakes, Honshu vs. rest of world]

(Honshu is the region USGS reports for the quake, and ROW = Rest of World; honshu.xlsx)

The graph looks similar if you make a rough translation to units of energy dissipated (energy grows roughly as 10^(1.5×magnitude), so each additional unit of magnitude is about a factor of 32 in energy). It would be interesting to see even longer time series, but I suspect that this is actually not surprising, given that earthquake magnitudes have a roughly power law distribution. The heavy tail means “expect the unexpected” – as with financial market movements.
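To see why the heavy tail makes giants expected rather than anomalous, here’s a quick synthetic illustration of the Gutenberg-Richter relation (assumed b-value of 1; not USGS data):

```python
# Why "expect the unexpected": under the Gutenberg-Richter relation the
# count of quakes above magnitude M falls off as 10^(-b*M) (b ~ 1), but
# energy rises as ~10^(1.5*M), so rare large events dominate the total.
# Synthetic illustration with an assumed b-value -- not USGS data.
import numpy as np

rng = np.random.default_rng(0)
b, m_min, n = 1.0, 5.0, 100_000

# Magnitudes above m_min are exponentially distributed with rate b*ln(10)
mags = m_min + rng.exponential(scale=1 / (b * np.log(10)), size=n)
energy = 10 ** (1.5 * mags)                 # relative energy per event

top = np.argsort(energy)[-int(0.01 * n):]   # largest 1% of events
print(f"largest magnitude drawn: {mags.max():.1f}")
print(f"share of total energy in the top 1% of events: "
      f"{energy[top].sum() / energy.sum():.0%}")
```

With counts falling as 10^(-bM) but energy rising as 10^(1.5M), the largest few events carry most of the total energy.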

Interestingly, geophysicist-turned-econophysicist Didier Sornette, who famously predicted the bursting of the Shanghai bubble, and colleagues recently looked at Japan’s earthquake distribution and estimated distributions of future events. By their estimates, the 8.9 quake was quite extreme, even given the expectation of black swans:

[Figure: estimated distribution of earthquake magnitudes]

The authors point out that predicting the frequency of earthquakes beyond the maximum magnitude in the data is problematic:

The main problem in the statistical study of the tail of the distribution of earthquake magnitudes (as well as in distributions of other rarely observable extremes) is the estimation of quantiles, which go beyond the data range, i.e. quantiles of level q > 1 – 1/n, where n is the sample size. We would like to stress once more that the reliable estimation of quantiles of levels q > 1 – 1/n can be made only with some additional assumptions on the behavior of the tail. Sometimes, such assumptions can be made on the basis of physical processes underlying the phenomena under study. For this purpose, we used general mathematical limit theorems, namely, the theorems of EVT. In our case, the assumptions for the validity of EVT boil down to assuming a regular (power-like) behavior of the tail 1 – F(m) of the distribution of earthquake magnitudes in the vicinity of its rightmost point Mmax. Some justification of such an assumption can serve the fact that, without them, there is no meaningful limit theorem in EVT. Of course, there is no a priori guarantee that these assumptions will hold in some concrete situation, and they should be discussed and possibly verified or supported by other means. In fact, because EVT suggests a statistical methodology for the extrapolation of quantiles beyond the data range, the question whether such interpolation is justified or not in a given problem should be investigated carefully in each concrete situation. But EVT provides the best statistical approach possible in such a situation.
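In practice, the peaks-over-threshold machinery they’re describing looks something like this sketch (synthetic data and an arbitrary threshold – just the mechanics, not the authors’ catalog or analysis):

```python
# Sketch of the peaks-over-threshold idea behind EVT quantile estimates:
# fit a generalized Pareto distribution to exceedances over a threshold and
# extrapolate a quantile beyond the data range.  Synthetic data, purely
# illustrative.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
b = 1.0
mags = 5.0 + rng.exponential(scale=1 / (b * np.log(10)), size=5000)

u = 6.0                                    # threshold choice (assumed)
exceed = mags[mags > u] - u
zeta_u = exceed.size / mags.size           # fraction of data above threshold

c, _, scale = genpareto.fit(exceed, floc=0)

p = 1e-4                                   # target exceedance probability
# P(X > x) = zeta_u * P(exceedance > x - u)
x_p = u + genpareto.ppf(1 - p / zeta_u, c, loc=0, scale=scale)
print(f"shape={c:.2f}, scale={scale:.2f}, "
      f"estimated magnitude exceeded with prob {p:g}: {x_p:.1f}")
```

As the quote says, the extrapolated quantile is only as trustworthy as the assumed regularity of the tail.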

Sornette also made some interesting remarks about self-organized criticality and quakes in a 1999 Nature debate.

The House Climate Science Hearing

Science’s Eli Kintishch and Gavin Schmidt liveblogged the House hearing on climate science this morning. My favorite tidbits:

Gavin Schmidt:

One theme that will be constant is that unilateral action by the US is meaningless if everyone else continues with business as usual. However, this is not an ethical argument for not doing anything. Edmund Burke (an original conservative) rightly said: “Nobody made a greater mistake than he who did nothing because he could do only a little.” http://www.realclimate.org/index.php/archives/2009/05/the-tragedy-of-climate-commons/

Eli Kintisch:

If my doctor told me I had cancer, says Waxman, “I wouldn’t scour the country to find someone who said I didn’t need [treatment]”

[Comment from Roger Pielke, Jr.:]

Because Congress has granted EPA authority to regulate, and the agency has followed its legislative mandate. If Congress wants to change how EPA operates, fine, but it must do it comprehensively, not by seeking to overturn the endangerment finding via fiat.

[Comment from Steven Leibo, Ph.D.:]

If republicans thought this hearing would be helpful for their cause it was surely a big mistake..that from a non scientist

[Comment from J Bowers:]

There are no car parks or air conditioners in space.

Eli Kintisch:

Burgess: US had popular “revulsion” against the Waxman-Markey bill. “Voting no was not enough…people wanted us to stop that thing dead in its tracks” No action by India and China…

[Comment from thingsbreak:]

This India and China bashing is perverse, from an emissions “pie slicing” perspective.

Eli Kintisch:

Inslee: “embarrassment” that “chronic anti-science” syndrome by Republicans. Colleagues in GOP won’t believe, he says, “until the entire Antarctic ice sheet has melted or hell has frozen over”

Eli Kintisch:

Rep Griffith (R-Va): Asks about melting ice caps on Mars. Is sun getting brighter, he asks?

[Comment from thingsbreak:]

Mars ice caps melting. Drink!

[Comment from Roger Pielke, Jr.:]

Mars ice caps, snore!

Eli Kintisch:

In general I would say this hearing is a disappointment: the issue of whether congress can/should have a close control on EPA decisions is at least an interesting one that different people who are reasonable can disagree about.

So far little discussion of that issue at all. 🙁

Maybe because these are scientists the real issue is just not coming up. Weird hearing.

Eli Kintisch:

Waxman: I would hate to see Congress take a position “that the science was false” by passing/marking up HR 910; wants to slow mark up on tuesday. But Whitfield disagrees; says that markup on thursday will proceed and debate will go on then…

Eli Kintisch:

Rush (who is the ranking member on this subcommittee) also asks Whitfield to delay the thursday markup. “Force.. the American people…we should be more deliberative”

Gavin Schmidt:

So that’s that. I can’t say I was particularly surprised at how it went. Far too much cherry-picking, strawman arguments and posturing. Is it possible to have substantive discussion in public on these issues?

I think I shouldn’t have peeked into the sausage machine.

The myth of optimal depletion

Fifteen years ago, when I was working on my dissertation, I read a lot of the economic literature on resource management. I was looking for a behavioral model of the management of depletable resources like oil and gas. I never did find one (and still haven’t, though I haven’t been looking as hard in the last few years).

Instead, the literature focused on optimal depletion models. Essentially these characterize the extraction of resources that would occur in an idealized market – a single, infinitely-lived resource manager, perfect information about the resource base and about the future (!), no externalities, no lock-in effects.

It’s always useful to know the optimal trajectory for a managed resource – it identifies the upper bound for improvement and suggests strategic or policy changes to achieve the ideal. But many authors have transplanted these optimal depletion models into real-world policy frameworks directly, without determining whether the idealized assumptions hold in reality.

The problem is that they don’t. There are some obvious failings – for example, I’m pretty certain a priori that no resource manager actually knows the future. Unreal assumptions are reflected in unreal model behavior – I’ve seen dozens of papers that discuss results matching the classic Hotelling framework – prices rising smoothly at the interest rate, with the extraction rate falling to match, as if it had something to do with what we observe.
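For reference, the Hotelling benchmark those papers echo is easy to reproduce – a sketch with made-up parameters (zero extraction cost, isoelastic demand):

```python
# The textbook Hotelling benchmark: with a fixed stock S, zero extraction
# cost, interest rate r, and isoelastic demand q = A*p^(-eps), the price
# rises at the interest rate and extraction decays to exhaust the stock
# exactly.  Parameters are made up for illustration.
import numpy as np

r, eps, A, S = 0.05, 1.5, 100.0, 2000.0

# Initial price chosen so cumulative extraction equals the stock:
#   integral_0^inf A*(p0*e^(r*t))^(-eps) dt = A*p0^(-eps)/(eps*r) = S
p0 = (A / (eps * r * S)) ** (1 / eps)

t = np.arange(0, 101, 10.0)
price = p0 * np.exp(r * t)               # rises smoothly at the interest rate
extraction = A * price ** (-eps)         # falls to match

for ti, pi, qi in zip(t, price, extraction):
    print(f"t={ti:5.0f}  price={pi:8.2f}  extraction={qi:8.2f}")

# sanity check: cumulative extraction over all time equals the stock
print("implied cumulative extraction:", A * p0 ** (-eps) / (eps * r))
```

The smooth exponential price path is the part that’s hard to find in the data.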

The fundamental failure is valuing the normative knowledge about small, analytically tractable problems above the insight that arises from experiments with a model that describes actual decision making – complete with cognitive limitations, agency problems, and other foibles.

In typical optimal depletion models, an agent controls a resource, and extracts it to maximize discounted utility. Firms succeed in managing other assets reasonably well, so why not? Well, there’s a very fundamental problem: in most places, firms don’t control resources. They control reserves. Governments control resources. As a result, firms’ ownership of the long term depletion challenge extends only as far as their asset exposure – a few decades at most. If there are principal-agent problems within firms, their effective horizon is even shorter – only as long as the tenure of a manager (worse things can happen, too).

Governments are no better; politicians and despots both have incentives to deplete resources to raise money to pacify the populace. This encourages a “sell low” strategy – when oil prices are low, governments have to sell more to meet fixed obligations (the other end of the backward-bending supply curve). And, of course, a government that wisely shepherds its resources can always lose them to a neighbor that extracts its resources quickly and invests the proceeds in military hardware.

The US is unusual in that many mineral rights are privately held, but still the government’s management of its share is instructive. I’ll just skip over the circus at the MMS and go to Montana’s trust lands. The mission of the trust is to provide a permanent endowment for public schools. But the way the trust is run could hardly be less likely to maximize or even sustain school revenue.

Fundamentally, the whole process is unmanaged – the trust makes no attempt to control the rate at which parcels are leased for extraction. Instead, trust procedures put the leasing of tracts in the hands of developers – parcels are auctioned whenever a prospective bidder requests.  Once anyone gets a whiff of information about the prospects of a tract, they must act to bid – if they’re early enough, they may get lucky and face little or no competition in the auction (easier than you’d think, because the trust doesn’t provide much notice of sales). Once buyers obtain a lease, they must drill within five years, or the lease expires. This land rush mentality leaves the trust with no control over price or the rate of extraction – they just take their paltry 16% cut (plus or minus), whenever developers choose to give it to them. When you read statements from the government resource managers, they’re unapologetically happy about it: they talk about the trust as if it were a jobs program, not an endowment.

This sort of structure is the norm, not the exception. It would be a strange world in which all of the competing biases in the process cancelled each other out, and yielded a globally optimal outcome in spite of local irrationality. The result, I think, is that policies in climate and energy models are biased, possibly in an unknown direction. On one hand, it seems likely that there’s a negative externality from extraction of public resources above the optimal rate, as in Montana. On the other hand, there might be harmful spillovers from climate or energy policies that increase the use of natural gas, if they exacerbate problems with a suboptimal extraction trajectory.

I’ve done a little sniffing around lately, and it seems that the state of the art in integrated assessment models isn’t too different from what it was in 1995 – most models still use exogenous depletion trajectories or some kind of optimization or equilibrium approach. The only real innovation I’ve seen is a stochastic model-within-a-model approach – essentially, agents know the structure of the system they’re in, but are uncertain about its state, so they make stochastically optimal decisions at each point in time. This is a step in the right direction, but it still implies a very high cognitive load and degree of intended rationality that doesn’t square with real institutions. I’d be very interested to hear about anything new that moves toward a true behavioral model of resource management.

The rabble in the streets, calling for more power to the monarchy

When I see policies that formally allocate political power in proportion to wealth, I think back to a game I played in college, called Starpower.

It’s a simple trading game, using plastic chips. It starts with a trading round, where everyone has a chance to improve their lot by swapping to get better combinations of size and color. After trading, scores are tallied, and the players are divided into three groups: Triangles, Circles, and Squares. Then there’s a vote, in which players get to change the rules of the game. There’s a catch though: the Squares, who reaped the most points in the first round, get more votes. Subsequent rounds follow the same steps (lather, rinse, repeat).

When I played, I was lucky enough in the first round to wind up in the top group, the Squares. In the subsequent vote, no one proposed any significant rule changes, so we went back to trading. One of our fellow Squares was unlucky or incautious enough to make a few bad trades, and wound up demoted to the Circles when scores were tallied. That was a wake-up call – a challenge to the camaraderie of Squares. We promptly changed the rules, to slightly favor the accumulation of chips by those who already had many; we bribed the middle Circles to go along with it. We breathed a collective sigh of relief when, after the next trading round, we found that we were all still Squares. Then, we Squares abandoned all egalitarian thoughts. With our increased wealth, we voted to allocate future chip distributions so that the Circle and Triangle classes would perpetually trade places, never accumulating enough wealth to reach elite Square status. It worked, at least until the end of class (we were probably “saved by the bell” from having a revolution).

The interesting thing about the game is that it’s a perfect market economy. Property rights in chips are fully allocated, everyone walks in with a similar initial endowment of brains and chips, and there are mutual benefits to trade, even when wealth is distributed unequally. Yet the libertarian ideals are completely undone when the unequal allocation of wealth spills over to become an unequal allocation of power, where votes are weighted by money. That creates a reinforcing feedback:

[Figure: reinforcing feedback loop linking wealth, votes, and the rules]
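Here’s a minimal sketch of that loop (arbitrary groups and parameters – the loop structure, not Starpower’s actual scoring rules):

```python
# Minimal sketch of the reinforcing loop in the diagram: wealth share buys
# vote share, votes tilt the distribution rules, and the tilted rules feed
# wealth back to the groups that already have it.  Groups, growth, and the
# bias parameter are arbitrary.
wealth = {"Squares": 40.0, "Circles": 35.0, "Triangles": 25.0}  # % shares
GROWTH = 10.0  # new chips handed out each round, in share points
BIAS = 2.0     # >1: winners write rules giving themselves a more-than-proportional cut

for round_no in range(10):
    total = sum(wealth.values())
    votes = {g: w / total for g, w in wealth.items()}   # votes follow wealth
    # rules written by vote: distribution weights tilted toward vote share
    weights = {g: max(0.0, 1/3 + BIAS * (votes[g] - 1/3)) for g in wealth}
    wealth = {g: wealth[g] + GROWTH * weights[g] for g in wealth}
    total = sum(wealth.values())                        # renormalize to 100%
    wealth = {g: 100 * w / total for g, w in wealth.items()}
    print(round_no + 1, {g: round(w, 1) for g, w in wealth.items()})
```

With the bias parameter above one, any group that starts above the average share pulls away, and the group at the bottom gets squeezed toward nothing – the Squares’ rule change in miniature.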

Allocating votes in a zoning protest in proportion to acreage, or any other policy that matches power to wealth, has the same properties as the Starpower game, and will lead to the same ugly outcome if left unchecked. As Donella Meadows put it,

The wise Squares whom we call Founding Fathers, who set up the rules of our national game, knew that. They invented ingenious devices for giving everyone a chance to win — democratic elections, universal education, and a Bill of Rights. Out of their structure have come further methods for interrupting accumulations of power — anti-trust regulations, progressive taxation, inheritance restrictions, affirmative action programs.

All of which, you might note, have been weakened over the past decade or so. We have moved a long way toward a Starpower structure. One of the worst steps in that direction was the evolution of expensive, television-mediated election campaigns, which permit only Squares to run for office. That puts Squares increasingly in control of the rules, and they make rules to benefit Squares.

Is that the game we want to be playing?