Nuclear safety follies

I find panic-fueled iodine marketing and disingenuous comparisons of Fukushima to Chernobyl deplorable.

But those are balanced by pronouncements like this:

Telephone briefing from Sir John Beddington, the UK’s chief scientific adviser, and Hilary Walker, deputy director for emergency preparedness at the Department of Health:

“Unequivocally, Tokyo will not be affected by the radiation fallout of explosions that have occurred or may occur at the Fukushima nuclear power stations.”

Surely the probability of a large-scale radiation release is very low, but it’s not approximately zero – which is what “unequivocally not” would imply.

On my list of the seven deadly sins of complex systems management, number four is:

Certainty. Planning for it leads to fragile strategies. If you can’t imagine a way you could be wrong, you’re probably a fanatic.

Nuclear engineers would disagree, but some seem to have a near-fanatical faith in plant safety. Normal Accidents documents some bizarrely cheerful post-accident reflections on safety. I found another when reading up over the last few days:


Fortunately, the core ended up on the floor

I’ve been sniffing around for more information on the dynamics of boiling water reactors, particularly in extreme conditions. Here’s what I can glean (caveat: I’m not a nuclear engineer).

It turns out that there’s quite a bit of literature on reduced-form models of reactor operations. Most of this, though, is focused on operational issues that arise from nonlinear dynamics, on a time scale of less than a second or so. (Update: I’ve posted an example of such a model here.)

[Figure: block diagram of BWR core dynamics]

Source: Instability in BWR NPPs – F. Maggini 2004

Those are important – it was exactly those kinds of fast dynamics that led to disaster when operators took the Chernobyl plant into unsafe territory. (Fortunately, the Chernobyl design is not widespread.)

However, I don’t think those are the issues that are now of interest. The Japanese reactors are now far from their normal operating point, and the dynamics of interest have time scales of hours, not seconds. Here’s a map of the territory:

[Figure: BWR power–flow operating map with shutdown trajectories]

Source: Instability in BWR NPPs – F. Maggini 2004; colored annotations by me.

The horizontal axis is coolant flow through the core, and the vertical axis is core power – i.e. the rate of heat generation. The green dot shows normal full-power operation. The upper left part of the diagram, above the diagonal, is the danger zone, where high power output and low coolant flow create the danger of a meltdown – like driving your car over a mountain pass with nothing in the radiator.

It’s important to realize that there are constraints on how you move around this diagram. You can quickly turn off the nuclear chain reaction in a reactor, by inserting the control rods, but it takes a while for the power output to come down, because there’s a lot of residual heat from nuclear decay products.
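For a rough sense of how slowly that residual heat declines, the classic Way–Wigner approximation for decay heat is a useful back-of-envelope model. This is a sketch, not plant data – the coefficient is the textbook value, and the assumed year of prior operation is illustrative:

```python
def decay_heat_fraction(t_s: float, T_s: float = 3.15e7) -> float:
    """Way-Wigner approximation: fraction of full thermal power still
    generated t_s seconds after shutdown, following T_s seconds of
    prior operation (default ~1 year)."""
    return 0.0622 * (t_s ** -0.2 - (t_s + T_s) ** -0.2)

# Decay heat falls fast at first, then very slowly:
for hours in (1 / 60, 1, 24, 24 * 7):
    f = decay_heat_fraction(hours * 3600)
    print(f"{hours:8.2f} h after shutdown: {100 * f:5.2f}% of full power")
```

A reactor running at a couple of gigawatts thermal is still producing tens of megawatts an hour after scram – which is why cooling must continue long after the chain reaction stops.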

On the other hand, you can turn off the coolant flow pretty fast – turn off the electricity to the pumps, and the flow will stop as soon as the momentum of the fluid is dissipated. If you were crazy enough to turn off the cooling without turning down the power (yellow line), you’d have an immediate catastrophe on your hands.

In an orderly shutdown, you turn off the chain reaction, then wait patiently for the power to come down, while maintaining coolant flow. That’s initially what happened at the Fukushima reactors (blue line). Seismic sensors shut down the reactors, and an orderly cool-down process began.

After an hour, things went wrong when the tsunami swamped backup generators. Then the reactor followed the orange line to a state with near-zero coolant flow (whatever convection provides) and nontrivial power output from the decay products. At that point, things start heating up. The process takes a while, because there’s a lot of thermal mass in the reactor, so if cooling is quickly restored, no harm done.
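That race against thermal mass can be caricatured with a one-state heat balance: decay heat in, cooling (when pumps run) out. Every parameter below is a made-up round number chosen only to show the qualitative behavior, not plant data:

```python
def simulate(cooling_on: bool, hours: float = 10.0, dt: float = 60.0) -> float:
    """Euler-integrate core temperature (deg C) for `hours`, with or
    without forced cooling. Lumped, illustrative parameters only."""
    temp = 100.0          # start near saturation temperature (deg C)
    heat_capacity = 5e8   # J/K, lumped thermal mass of core + coolant
    decay_power = 1e7     # W, residual decay heat (held constant here,
                          #    ignoring its slow decline)
    cooling_coeff = 2e5   # W/K, heat removal when the pumps run
    t = 0.0
    while t < hours * 3600:
        cooling = cooling_coeff * (temp - 30.0) if cooling_on else 0.0
        temp += (decay_power - cooling) * dt / heat_capacity
        t += dt
    return temp

print(f"after 10 h with cooling:    {simulate(True):5.0f} deg C")
print(f"after 10 h without cooling: {simulate(False):5.0f} deg C")
```

With cooling, temperature settles near an equilibrium where heat removal matches decay heat; without it, temperature just ramps – slowly at first, thanks to the thermal mass, which is exactly the grace period the operators have to exploit.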

If cooling isn’t restored, a number of positive feedbacks (nasty vicious cycles) can set in. Boiling in the reactor vessel necessitates venting (releasing small amounts of mostly short-lived radioactive materials); if venting fails, the reactor vessel can fail from overpressure. Boiling reduces the water level in the reactor and makes heat transfer less efficient; fuel rods that boil dry heat up much faster. As fuel rods overheat, their zirconium cladding reacts with water to make hydrogen – which can explode when vented into the reactor building, as we apparently saw at reactors 1 & 3. That can cause collateral damage to systems or people, making it harder to restore cooling.

Things get worse as heat continues to accumulate. Melting fuel rods dump debris in the reactor, obstructing coolant flow, again making it harder to restore cooling. Ultimately, melted fuel could concentrate in the bottom of the reactor vessel, away from the control rods, making power output go back up (following the red line). At that point, it’s likely that the fuel is going to end up in a puddle on the floor of the containment building. Presumably, at that point negative feedback reasserts dominance, as fuel is dispersed over a large area, and can cool passively. I haven’t seen any convincing descriptions of this endgame, but nuclear engineers seem to think it benign – at least compared to Chernobyl. At Chernobyl, there was one less balancing feedback loop (ineffective containment) and an additional reinforcing feedback: graphite in the reactor, which caught fire.

So, the ultimate story here is a race against time. The bad news is that if the core is dry and melting, time is not on your side as you progress faster and faster up the red line. The good news is that, as long as that hasn’t happened yet, time is on the side of the operators – the longer they can hold things together with duct tape and seawater, the less decay heat they have to contend with. Unfortunately, it sounds like we’re not out of the woods yet.

Nuclear accident dynamics

There’s been a lot of wild speculation about the nuclear situation in Japan. Reporters were quick to start a “countdown to meltdown” based on only the sketchiest information about problems at plants, and then were quick to wonder if our troubles were over because the destruction of the containment structure at Fukushima I-1 didn’t breach the reactor vessel, based on equally sketchy information. Now the cycle repeats for reactor 3. Here’s my take on the fundamentals of the situation.

Boiling water reactors (BWRs), like those at Fukushima, are not inherently stable in all states. For a system analogy, think of a pendulum. It’s stable when it’s hanging, as in a grandfather clock. If you disturb it, it will oscillate for a while, but eventually return to hanging quietly. On the other hand, an inverted pendulum, where the arm stands above the pivot, like a broom balanced on your palm, is unstable – a small disturbance that starts it tipping is reinforced by gravity, and it quickly falls over.

Still, it is possible to balance a broom on your palm for a long time, if you’re diligent about it. The system of an inverted broomstick plus a careful person controlling it is stable, at least over a reasonable range of disturbances. Similarly, a BWR is at times dependent on a functional control system to maintain stability. Damage the control system (or tickle the broom-balancer), and the system may spiral out of control.
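The difference between the two regimes shows up directly in the linearized dynamics. A minimal sketch (small-angle approximation, nothing reactor-specific):

```python
import numpy as np

g, L = 9.81, 1.0  # gravity (m/s^2), pendulum length (m)

# State x = [angle, angular velocity]; linearized dynamics dx/dt = A @ x.
A_hanging = np.array([[0.0, 1.0], [-g / L, 0.0]])   # restoring torque
A_inverted = np.array([[0.0, 1.0], [g / L, 0.0]])   # destabilizing torque

for name, A in (("hanging", A_hanging), ("inverted", A_inverted)):
    growth = np.linalg.eigvals(A).real.max()
    print(f"{name:8s}: fastest growth rate = {growth:+.2f} per second")
```

The hanging pendulum’s eigenvalues are purely imaginary – it oscillates and, with any friction, settles. The inverted one has a positive real eigenvalue, so a small disturbance grows exponentially unless a controller reacts faster than that growth rate.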

An inverted broom is, of course, an imperfect analogy for a nuclear power plant. A broom can be described by just a few variables – its angular and translational position and momentum. Those are all readily observable within a tenth of a second or so. A BWR, on the other hand, has hundreds of relevant state variables – pressure and temperature at various points, the open or closed states of valves, etc. Presumably some have a lot of inertia – implying long delays in changing them. Many states are not directly observable – they have to be inferred from measurements at other points in the system. Unfortunately, those measurements are sometimes unreliable, leaving operators wondering whether the water in area A is rising because valve B failed to close, or if it’s just a faulty sensor.

No one can manage a 10th- or 100th-order differential equation with uncertain measurements in their head – yet that is essentially the task facing the Fukushima operators now. Their epic challenge is compounded by a number of reinforcing feedbacks.

  • First, there’s collateral damage, which creates a vicious cycle: part A breaks down, causing part B to overheat, causing part C to blow up, which ignites adjacent (but unrelated) part D, and so on. The destruction of the containment building around reactor 1 has to be the ultimate example of this. It’s hard to imagine that much of the control system remains functional after such a violent event – and that makes escalation of problems all the more likely.
  • Second, there are people in the loop. Managing a BWR in routine conditions is essentially boring. Long periods of boredom, punctuated by brief periods of panic, do not create conditions for good management decisions. Mistakes cause irreversible damage, worsening the circumstances under which further decisions must be made – another vicious cycle.
  • Third, there’s contamination. If things get bad enough, you can’t even safely approach the system to measure or fix it.

It appears that the main fallback for the out-of-control reactors is to exploit the most basic balancing feedback loop: pump a lot of water in to carry off heat, while you figure out what to do next. I hope it works.

Meanwhile, on the outside, some observers seem inexplicably optimistic – they cheerfully conclude that, because the reactor vessel itself remains intact (hopefully), the system works, thanks to its redundant safety measures. Commentators on past accidents have said much the same thing. The problem is that, when the dust settled, the situation often proved much worse than it seemed at the time, and safety systems sometimes caused as many problems as they solved – not a huge surprise in a very complex system.

We seem to be learning the wrong lessons from such events:

The presidential commission investigating the Three Mile Island accident learned that the problems rested with people, not technology. http://www.technologyreview.com/article/23907/

This strikes me as absurd. No technology exists in a vacuum; technologies must be appropriate to the people who operate them. A technology that requires perfect controllers for safe operation is a problem, because there is no such thing as a perfect controller.

If there’s a future for nuclear, I think it’ll have to lie with designs that incorporate many more passive safety features – the reactor system, absent control inputs, has to look a lot more like a hanging pendulum than a balanced broom, so that when the unlikely happens, it reacts benignly.

Earthquake stats & complex systems

I got curious about the time series of earthquakes around the big one in Japan after a friend posted a link to the USGS quake map of the area.

The data actually show a swarm of quakes before the big one, but on closer inspection those appear to be a separate chain of events, beginning with a magnitude 7.2 on the 9th. By the 10th, those events seemed to be petering out, though perhaps they set up the conditions for the 8.9 on the 11th. You can also see this in the USGS movie.

[Figure: quake magnitudes over time, before and after the main shock]

If you look at the event on a recent global scale, it’s amazingly big by count of events of significant magnitude:

[Figure: count of significant quakes, Honshu vs. rest of world]

(Honshu is the region USGS reports for the quake, and ROW = Rest of World; honshu.xlsx)

The graph looks similar if you make a rough translation to units of energy dissipated (radiated energy scales roughly as 10^(1.5·magnitude)). It would be interesting to see even longer time series, but I suspect that this is actually not surprising, given that earthquake magnitudes have a roughly power law distribution. The heavy tail means “expect the unexpected” – as with financial market movements.
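The heavy tail is easy to reproduce from the standard Gutenberg–Richter picture: with b ≈ 1, magnitudes above a threshold are roughly exponentially distributed, and converting to energy yields a power-law tail so heavy that the largest event dominates the total. The threshold and sample size below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Gutenberg-Richter with b = 1: tenfold fewer quakes per unit magnitude,
# i.e. magnitudes above a threshold are roughly exponential.
b = 1.0
mags = 5.0 + rng.exponential(scale=1.0 / b, size=100_000)

# Radiated energy grows as 10^(1.5 * M), so energy has a power-law tail
# with exponent b / 1.5 ~= 0.67 -- heavy enough that the mean diverges.
energy = 10.0 ** (1.5 * mags)
share_of_biggest = energy.max() / energy.sum()
print(f"largest event's share of total energy: {share_of_biggest:.0%}")
```

In a sample of 100,000 synthetic quakes, a single event typically carries a large fraction of all the energy released – the statistical sense in which one 8.9 can dwarf years of background seismicity.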

Interestingly, geophysicist-turned-econophysicist Didier Sornette, who famously predicted the bursting of the Shanghai bubble, and colleagues recently looked at Japan’s earthquake distribution and estimated distributions of future events. By their estimates, the 8.9 quake was quite extreme, even given the expectation of black swans:

[Figure: estimated distribution of earthquake magnitudes, with tail extrapolation]

The authors point out that predicting the frequency of earthquakes beyond the maximum magnitude in the data is problematic:

The main problem in the statistical study of the tail of the distribution of earthquake magnitudes (as well as in distributions of other rarely observable extremes) is the estimation of quantiles, which go beyond the data range, i.e. quantiles of level q > 1 – 1/n, where n is the sample size. We would like to stress once more that the reliable estimation of quantiles of levels q > 1 – 1/n can be made only with some additional assumptions on the behavior of the tail. Sometimes, such assumptions can be made on the basis of physical processes underlying the phenomena under study. For this purpose, we used general mathematical limit theorems, namely, the theorems of EVT. In our case, the assumptions for the validity of EVT boil down to assuming a regular (power-like) behavior of the tail 1 – F(m) of the distribution of earthquake magnitudes in the vicinity of its rightmost point Mmax. Some justification of such an assumption can serve the fact that, without them, there is no meaningful limit theorem in EVT. Of course, there is no a priori guarantee that these assumptions will hold in some concrete situation, and they should be discussed and possibly verified or supported by other means. In fact, because EVT suggests a statistical methodology for the extrapolation of quantiles beyond the data range, the question whether such interpolation is justified or not in a given problem should be investigated carefully in each concrete situation. But EVT provides the best statistical approach possible in such a situation.
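The q > 1 − 1/n point is easy to illustrate: the empirical distribution of n samples carries no information past its maximum, so any quantile beyond that level rests entirely on tail assumptions. A toy sketch with a known power-law tail (so the true answer is available for comparison):

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard Pareto with tail index 1: P(X > x) = 1/x for x >= 1.
n = 1000
x = 1.0 / rng.uniform(size=n)     # inverse-CDF sampling

q_level = 1.0 - 1.0 / (10 * n)    # a quantile level beyond 1 - 1/n
true_q = 1.0 / (1.0 - q_level)    # exact Pareto quantile: 10 * n
empirical_q = x.max()             # the most the data alone can offer

print(f"true {q_level:.5f} quantile: {true_q:,.0f}")
print(f"sample maximum:            {empirical_q:,.0f}")
```

Here the true quantile is known to be 10,000, but nothing in the sample pins it down – which is why the authors lean on EVT’s limit theorems, and why such extrapolations need the caveats they give.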

Sornette also made some interesting remarks about self-organized criticality and quakes in a 1999 Nature debate.

We're All Going to Die

Well, at least me and a few fellow Montanans. There’s an earthquake swarm in Yellowstone right now. The supervolcano is sure to blow us all to Kingdom Come. This elk my wife met seems unconcerned though:

[Photo: an unconcerned elk]

A guy at Wolfram made a nice visualization example out of the data, though it’s not exactly a gripping movie.