Fortunately, the core ended up on the floor

I’ve been sniffing around for more information on the dynamics of boiling water reactors, particularly in extreme conditions. Here’s what I can glean (caveat: I’m not a nuclear engineer).

It turns out that there’s quite a bit of literature on reduced-form models of reactor operations. Most of this, though, is focused on operational issues that arise from nonlinear dynamics, on a time scale of less than a second or so. (Update: I’ve posted an example of such a model here.)

[Figure: reactor block diagram]

Source: Instability in BWR NPPs – F. Maggini 2004

Those are important – it was exactly those kinds of fast dynamics that led to disaster when operators took the Chernobyl plant into unsafe territory. (Fortunately, the Chernobyl design is not widespread.)

However, I don’t think those are the issues that are now of interest. The Japanese reactors are now far from their normal operating point, and the dynamics of interest have time scales of hours, not seconds. Here’s a map of the territory:

[Figure: reactor power vs. coolant flow map, with shutdown and accident trajectories]

Source: Instability in BWR NPPs – F. Maggini 2004; colored annotations by me.

The horizontal axis is coolant flow through the core, and the vertical axis is core power – i.e. the rate of heat generation. The green dot shows normal full-power operation. The upper left part of the diagram, above the diagonal, is the danger zone, where high power output and low coolant flow create the danger of a meltdown – like driving your car over a mountain pass with nothing in the radiator.

It’s important to realize that there are constraints on how you move around this diagram. You can quickly turn off the nuclear chain reaction in a reactor by inserting the control rods, but it takes a while for the power output to come down, because the decay of fission products keeps generating heat long after the chain reaction stops.
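How slowly does that decay heat fall off? Here’s a minimal back-of-envelope sketch using the standard Way-Wigner textbook approximation (my own illustration; the operating time before shutdown is an assumption, not plant data):

```python
# Way-Wigner approximation for decay heat after shutdown (illustrative only).
# P/P0 ~ 0.066 * [t^-0.2 - (t + T_op)^-0.2], with t = seconds since shutdown
# and T_op = assumed seconds of prior operation at full power.
def decay_heat_fraction(t_s, t_op_s=365 * 24 * 3600):  # assume ~1 year at power
    return 0.066 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

for hours in [0.01, 1, 24, 24 * 7]:
    t = hours * 3600
    print(f"{hours:7.2f} h after scram: ~{100 * decay_heat_fraction(t):.2f}% of full power")
```

An hour after shutdown that’s still on the order of 1% of full power, which for a gigawatt-scale plant means tens of megawatts of heat that still has to go somewhere.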

On the other hand, you can turn off the coolant flow pretty fast – turn off the electricity to the pumps, and the flow will stop as soon as the momentum of the fluid is dissipated. If you were crazy enough to turn off the cooling without turning down the power (yellow line), you’d have an immediate catastrophe on your hands.

In an orderly shutdown, you turn off the chain reaction, then wait patiently for the power to come down, while maintaining coolant flow. That’s initially what happened at the Fukushima reactors (blue line). Seismic sensors shut down the reactors, and an orderly cool-down process began.

After an hour, things went wrong when the tsunami swamped backup generators. Then the reactor followed the orange line to a state with near-zero coolant flow (whatever convection provides) and nontrivial power output from the decay products. At that point, things start heating up. The process takes a while, because there’s a lot of thermal mass in the reactor, so if cooling is quickly restored, no harm done.
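For a rough sense of that time scale, here’s a sketch with purely illustrative numbers (the thermal rating and water inventory are assumptions, not Fukushima plant data): decay heat at roughly 1% of full power boils off the vessel inventory over hours, not minutes.

```python
# Back-of-envelope: with no cooling, how long does decay heat take to boil off
# the water covering the core?  All numbers are illustrative assumptions.
P_decay = 0.01 * 1380e6   # ~1% of an assumed 1380 MW(thermal) rating [W]
m_water = 3.0e5           # assumed water inventory above the fuel [kg]
h_vap = 2.26e6            # latent heat of vaporization of water [J/kg]

boiloff_rate = P_decay / h_vap                  # kg/s of steam generated
hours_to_dry = m_water / boiloff_rate / 3600
print(f"~{boiloff_rate:.1f} kg/s of steam; ~{hours_to_dry:.0f} hours to boil dry")
```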

If cooling isn’t restored, a number of positive feedbacks (nasty vicious cycles) can set in. Boiling in the reactor vessel necessitates venting (releasing small amounts of mostly short-lived radioactive materials); if venting fails, the reactor vessel can fail from overpressure. Boiling reduces the water level in the reactor and makes heat transfer less efficient; fuel rods that boil dry heat up much faster. As fuel rods overheat, their zirconium cladding reacts with water to make hydrogen – which can explode when vented into the reactor building, as we apparently saw at reactors 1 & 3. That can cause collateral damage to systems or people, making it harder to restore cooling.

Things get worse as heat continues to accumulate. Melting fuel rods dump debris in the reactor, obstructing coolant flow, again making it harder to restore cooling. Ultimately, melted fuel could concentrate in the bottom of the reactor vessel, away from the control rods, making power output go back up (following the red line). At that point, it’s likely that the fuel is going to end up in a puddle on the floor of the containment building. Presumably, at that point negative feedback reasserts dominance, as fuel is dispersed over a large area, and can cool passively. I haven’t seen any convincing descriptions of this endgame, but nuclear engineers seem to think it benign – at least compared to Chernobyl. At Chernobyl, there was one less balancing feedback loop (ineffective containment) and an additional reinforcing feedback: graphite in the reactor, which caught fire.

So, the ultimate story here is a race against time. The bad news is that if the core is dry and melting, time is not on your side as you progress faster and faster up the red line. The good news is that, as long as that hasn’t happened yet, time is on the side of the operators – the longer they can hold things together with duct tape and seawater, the less decay heat they have to contend with. Unfortunately, it sounds like we’re not out of the woods yet.

Nuclear accident dynamics

There’s been a lot of wild speculation about the nuclear situation in Japan. Reporters were quick to start a “countdown to meltdown” based on only the sketchiest information about problems at plants, and then were quick to wonder if our troubles were over because the destruction of the containment structure at Fukushima I-1 didn’t breach the reactor vessel, based on equally sketchy information. Now the cycle repeats for reactor 3. Here’s my take on the fundamentals of the situation.

Boiling water reactors (BWRs), like those at Fukushima, are not inherently stable in all states. For a system analogy, think of a pendulum. It’s stable when it’s hanging, as in a grandfather clock. If you disturb it, it will oscillate for a while, but eventually return to hanging quietly. On the other hand, an inverted pendulum, where the arm stands above the pivot, like a broom balanced on your palm, is unstable – a small disturbance that starts it tipping is reinforced by gravity, and it quickly falls over.

Still, it is possible to balance a broom on your palm for a long time, if you’re diligent about it. The system of an inverted broomstick plus a careful person controlling it is stable, at least over a reasonable range of disturbances. Similarly, a BWR is at times dependent on a functional control system to maintain stability. Damage the control system (or tickle the broom-balancer), and the system may spiral out of control.
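For the analogy-minded, the difference shows up immediately if you linearize the pendulum around each equilibrium: hanging, a small disturbance just oscillates; inverted, it grows exponentially. A minimal sketch (my own illustration of the analogy, not anything specific to BWR dynamics):

```python
# Linearized pendulum about each equilibrium: theta'' = -(g/L)*theta (hanging)
# vs. theta'' = +(g/L)*theta (inverted).  The eigenvalues tell the story.
import numpy as np

g, L = 9.8, 1.0
A_hanging = np.array([[0.0, 1.0], [-g / L, 0.0]])   # stable: purely imaginary eigenvalues
A_inverted = np.array([[0.0, 1.0], [+g / L, 0.0]])  # unstable: one positive real eigenvalue

print("hanging :", np.linalg.eigvals(A_hanging))    # +/- 3.13j  -> oscillation
print("inverted:", np.linalg.eigvals(A_inverted))   # +/- 3.13   -> exponential divergence
```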

An inverted broom is, of course, an imperfect analogy for a nuclear power plant. A broom can be described by just a few variables – its angular and translational position and momentum. Those are all readily observable within a tenth of a second or so. A BWR, on the other hand, has hundreds of relevant state variables – pressure and temperature at various points, the open or closed states of valves, etc. Presumably some have a lot of inertia – implying long delays in changing them. Many states are not directly observable – they have to be inferred from measurements at other points in the system. Unfortunately, those measurements are sometimes unreliable, leaving operators wondering whether the water in area A is rising because valve B failed to close, or whether it’s just a faulty sensor.

No one can manage a 10th- or 100th-order differential equation with uncertain measurements in their head – yet that is essentially the task facing the Fukushima operators now. Their epic challenge is compounded by a number of reinforcing feedbacks.

  • First, there’s collateral damage, which creates a vicious cycle: part A breaks down, causing part B to overheat, causing part C to blow up, which ignites adjacent (but unrelated) part D, and so on. The destruction of the containment building around reactor 1 has to be the ultimate example of this. It’s hard to imagine that much of the control system remains functional after such a violent event – and that makes escalation of problems all the more likely.
  • Second, there are people in the loop. Managing a BWR in routine conditions is essentially boring. Long periods of boredom, punctuated by brief periods of panic, do not create conditions for good management decisions. Mistakes cause irreversible damage, worsening the circumstances under which further decisions must be made – another vicious cycle.
  • Third, there’s contamination. If things get bad enough, you can’t even safely approach the system to measure or fix it.

It appears that the main fallback for the out-of-control reactors is to exploit the most basic balancing feedback loop: pump a lot of water in to carry off heat, while you figure out what to do next. I hope it works.

Meanwhile, on the outside, some observers seem inexplicably optimistic – they cheerfully conclude that, because the reactor vessel itself remains intact (hopefully), the system works due to its redundant safety measures. Commentators on past accidents have said much the same thing. The problem was that, when the dust settled, the situation often proved much worse than thought at the time, and safety systems sometimes contributed as much to problems as they solved – not a huge surprise in a very complex system.

We seem to be learning the wrong lessons from such events:

The presidential commission investigating the Three Mile Island accident learned that the problems rested with people, not technology. http://www.technologyreview.com/article/23907/

This strikes me as absurd. No technology exists in a vacuum; a technology must be appropriate to the people who operate it. A technology that requires perfect controllers for safe operation is a problem, because there’s no such thing as a perfect controller.

If there’s a future for nuclear, I think it’ll have to lie with designs that incorporate many more passive safety features – the reactor system, absent control inputs, has to look a lot more like a hanging pendulum than a balanced broom, so that when the unlikely happens, it reacts benignly.

Earthquake stats & complex systems

I got curious about the time series of earthquakes around the big one in Japan after a friend posted a link to the USGS quake map of the area.

The data actually show a swarm of quakes before the big one – but looking at the data, it appears that those are a separate chain of events, beginning with a magnitude 7.2 on the 9th. By the 10th, it seemed like those events were petering out, though perhaps they set up the conditions for the 8.9 on the 11th. You can also see this on the USGS movie.

[Figure: earthquake magnitude time series around the main shock]

Viewed against recent global activity, the event is amazingly big, even just by the count of significant-magnitude quakes:

[Figure: count of significant quakes, Honshu vs. rest of world]

(Honshu is the region USGS reports for the quake, and ROW = Rest of World; honshu.xlsx)

The graph looks similar if you make a rough translation to units of energy dissipated (radiated energy scales roughly as 10^(1.5 × magnitude), so each unit of magnitude is about a 32-fold increase in energy). It would be interesting to see even longer time series, but I suspect that this is actually not surprising, given that earthquake magnitudes have a roughly power law distribution. The heavy tail means “expect the unexpected” – as with financial market movements.
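The magnitude-to-energy translation, roughly (my own arithmetic, using the standard log10 E ≈ 1.5M + const relation):

```python
# Relative radiated energy from the Gutenberg-Richter energy relation:
# log10(E) ~ 1.5*M + const, so E2/E1 = 10**(1.5*(M2 - M1)).
def energy_ratio(m2, m1):
    return 10 ** (1.5 * (m2 - m1))

print(f"one magnitude unit: ~{energy_ratio(1, 0):.0f}x the energy")       # ~32x
print(f"M8.9 main shock vs. the M7.2 on the 9th: ~{energy_ratio(8.9, 7.2):.0f}x")  # ~350x
```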

Interestingly, geophysicist-turned-econophysicist Didier Sornette, who famously predicted the bursting of the Shanghai bubble, and colleagues recently looked at Japan’s earthquake distribution and estimated distributions of future events. By their estimates, the 8.9 quake was quite extreme, even given the expectation of black swans:

[Figure: estimated distribution of Japanese earthquake magnitudes, with the 8.9 event marked]

The authors point out that predicting the frequency of earthquakes beyond the maximum magnitude in the data is problematic:

The main problem in the statistical study of the tail of the distribution of earthquake magnitudes (as well as in distributions of other rarely observable extremes) is the estimation of quantiles, which go beyond the data range, i.e. quantiles of level q > 1 – 1/n, where n is the sample size. We would like to stress once more that the reliable estimation of quantiles of levels q > 1 – 1/n can be made only with some additional assumptions on the behavior of the tail. Sometimes, such assumptions can be made on the basis of physical processes underlying the phenomena under study. For this purpose, we used general mathematical limit theorems, namely, the theorems of EVT. In our case, the assumptions for the validity of EVT boil down to assuming a regular (power-like) behavior of the tail 1 – F(m) of the distribution of earthquake magnitudes in the vicinity of its rightmost point Mmax. Some justification of such an assumption can serve the fact that, without them, there is no meaningful limit theorem in EVT. Of course, there is no a priori guarantee that these assumptions will hold in some concrete situation, and they should be discussed and possibly verified or supported by other means. In fact, because EVT suggests a statistical methodology for the extrapolation of quantiles beyond the data range, the question whether such interpolation is justified or not in a given problem should be investigated carefully in each concrete situation. But EVT provides the best statistical approach possible in such a situation.
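To make the EVT machinery concrete, here’s a minimal sketch of the kind of tail extrapolation described above: fit a Generalized Pareto Distribution to magnitude exceedances over a threshold, then read off a quantile beyond the data range. The catalog is synthetic and the threshold arbitrary; this is my illustration of the general method, not the authors’ code:

```python
# Sketch of EVT tail extrapolation: fit a GPD to exceedances over a threshold,
# then extrapolate a quantile beyond the observed range.  Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic Gutenberg-Richter-like catalog: exponential magnitudes with b ~ 1
mags = 4.0 + rng.exponential(scale=1.0 / np.log(10), size=20000)

threshold = 6.0
exceed = mags[mags > threshold] - threshold

# Fit the Generalized Pareto Distribution to exceedances (shape < 0 => finite Mmax)
shape, _, scale = stats.genpareto.fit(exceed, floc=0)
print(f"GPD shape={shape:.3f}, scale={scale:.3f}, n_exceed={exceed.size}")

# A quantile beyond the data range is meaningful only under the assumed tail
# model, which is exactly the caveat the authors stress.
m_extreme = threshold + stats.genpareto.ppf(0.9999, shape, loc=0, scale=scale)
print(f"extrapolated magnitude at q=0.9999 of exceedances: {m_extreme:.2f}")
```

With real catalogs the fitted shape parameter and its uncertainty are the whole game: a negative shape implies a finite maximum magnitude, while a shape near zero implies an unbounded, exponential-like tail.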

Sornette also made some interesting remarks about self-organized criticality and quakes in a 1999 Nature debate.

The House Climate Science Hearing

Science’s Eli Kintisch and Gavin Schmidt liveblogged the House hearing on climate science this morning. My favorite tidbits:

Gavin Schmidt:

One theme that will be constant is that unilateral action by the US is meaningless if everyone else continues with business as usual. However, this is not an ethical argument for not doing anything. Edward Burke (an original conservative) rightly said: “Nobody made a greater mistake than he who did nothing because he could do only a little.” http://www.realclimate.org/index.php/archives/2009/05/the-tragedy-of-climate-commons/

Eli Kintisch:

If my doctor told me I had cancer, says Waxman, “I wouldn’t scour the country to find someone who said I didn’t need [treatment]”

[Comment From Roger Pielke, Jr.:]

Because Congress has granted EPA authority to regulate, and the agency has followed its legislative mandate. If Congress wants to change how EPA operates, fine, but it must do it comprehensively, not by seeking to overturn the endangerment finding via fiat.

[Comment From Steven Leibo Ph.D.:]

If republicans thought this hearing would be helpful for their cause it was surely a big mistake..that from a non scientist

[Comment From J Bowers:]

There are no car parks or air conditioners in space.

Eli Kintisch:

Burress: US had popular “revulsion” against the Waxman Markey bill. “Voting no was not enough…people wanted us to stop that thing dead in its tracks” No action by India and China…

[Comment From thingsbreak:]

This India and China bashing is perverse, from an emissions “pie slicing” perspective.

Eli Kintisch:

Inslee: “embarassment” that “chronic anti-science” syndrome by Republicans. Colleagues in GOP won’t believe, he says, “until the entire antarctic ice sheet has melted or hell has frozen over”

Eli Kintisch:

Rep Griffith (R-Va): Asks about melting ice caps on Mars. Is sun getting brighter, he asks?

[Comment From thingsbreak:]

Mars ice caps melting. Drink!

[Comment From Roger Pielke, Jr.:]

Mars ice caps, snore!

Eli Kintisch:

In general I would say this hearing is a disappointment: the issue of whether congress can/should have a close control on EPA decisions is at least an interesting one that different people who are reasonable can disagree about.

So far little discussion of that issue at all. 🙁

Maybe because these are scientists the real issue is just not coming up. Weird hearing.

Eli Kintisch:

Waxman: I would hate to see Congress take a position “that the science was false” by passing/marking up HR 910; wants to slow mark up on tuesday. But Whitfield disagrees; says that markup on thursday will proceed and debate will go on then…

Eli Kintisch:

Rush (who is the ranking member on this subcommittee) also asks Whitfield to delay the thursday markup. “Force.. the American people…we should be more deliberative”

Gavin Schmidt:

So that’s that. I can’t say I was particularly surprised at how it went. Far too much cherry-picking, strawman arguments and posturing. Is it possible to have substantive discussion in public on these issues?

I think I shouldn’t have peeked into the sausage machine.

The myth of optimal depletion

Fifteen years ago, when I was working on my dissertation, I read a lot of the economic literature on resource management. I was looking for a behavioral model of the management of depletable resources like oil and gas. I never did find one (and still haven’t, though I haven’t been looking as hard in the last few years).

Instead, the literature focused on optimal depletion models. Essentially these characterize the extraction of resources that would occur in an idealized market – a single, infinitely-lived resource manager, perfect information about the resource base and about the future (!), no externalities, no lock-in effects.

It’s always useful to know the optimal trajectory for a managed resource – it identifies the upper bound for improvement and suggests strategic or policy changes to achieve the ideal. But many authors have transplanted these optimal depletion models into real-world policy frameworks directly, without determining whether the idealized assumptions hold in reality.

The problem is that they don’t. There are some obvious failings – for example, I’m pretty certain a priori that no resource manager actually knows the future. Unreal assumptions are reflected in unreal model behavior – I’ve seen dozens of papers that discuss results matching the classic Hotelling framework – prices rising smoothly at the interest rate, with the extraction rate falling to match, as if it had something to do with what we observe.
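For reference, here is the textbook Hotelling trajectory those papers match: with zero extraction cost and a well-behaved demand curve, the price (scarcity rent) rises at the interest rate and extraction falls accordingly. A minimal sketch under those idealized assumptions (the interest rate, demand elasticity, and initial price are arbitrary illustrative numbers):

```python
# Textbook Hotelling path: price rises at the interest rate, p(t) = p0 * exp(r*t),
# and with iso-elastic demand q = A * p**(-eps), extraction falls over time.
# Purely illustrative parameters; in the full problem p0 is pinned down by
# requiring cumulative extraction to just exhaust the reserve.
import numpy as np

r, eps, A, p0 = 0.05, 1.5, 100.0, 10.0
years = np.arange(0, 51, 10)

price = p0 * np.exp(r * years)
extraction = A * price ** (-eps)

for t, p, q in zip(years, price, extraction):
    print(f"year {t:2d}: price {p:6.1f}, extraction {q:6.2f}")
```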

The fundamental failure is valuing the normative knowledge about small, analytically tractable problems above the insight that arises from experiments with a model that describes actual decision making – complete with cognitive limitations, agency problems, and other foibles.

In typical optimal depletion models, an agent controls a resource, and extracts it to maximize discounted utility. Firms succeed in managing other assets reasonably well, so why not? Well, there’s a very fundamental problem: in most places, firms don’t control resources. They control reserves. Governments control resources. As a result, firms’ ownership of the long term depletion challenge extends only as far as their asset exposure – a few decades at most. If there are principal-agent problems within firms, their effective horizon is even shorter – only as long as the tenure of a manager (worse things can happen, too).

Governments are no better; politicians and despots both have incentives to deplete resources to raise money to pacify the populace. This encourages a “sell low” strategy – when oil prices are low, governments have to sell more to meet fixed obligations (the other end of the backward-bending supply curve). And, of course, a government that wisely shepherds its resources can always lose them to a neighbor that extracts its resources quickly and invests the proceeds in military hardware.

The US is unusual in that many mineral rights are privately held, but still the government’s management of its share is instructive. I’ll just skip over the circus at the MMS and go to Montana’s trust lands. The mission of the trust is to provide a permanent endowment for public schools. But the way the trust is run could hardly be less likely to maximize or even sustain school revenue.

Fundamentally, the whole process is unmanaged – the trust makes no attempt to control the rate at which parcels are leased for extraction. Instead, trust procedures put the leasing of tracts in the hands of developers – parcels are auctioned whenever a prospective bidder requests.  Once anyone gets a whiff of information about the prospects of a tract, they must act to bid – if they’re early enough, they may get lucky and face little or no competition in the auction (easier than you’d think, because the trust doesn’t provide much notice of sales). Once buyers obtain a lease, they must drill within five years, or the lease expires. This land rush mentality leaves the trust with no control over price or the rate of extraction – they just take their paltry 16% cut (plus or minus), whenever developers choose to give it to them. When you read statements from the government resource managers, they’re unapologetically happy about it: they talk about the trust as if it were a jobs program, not an endowment.

This sort of structure is the norm, not the exception. It would be a strange world in which all of the competing biases in the process cancelled each other out and yielded a globally optimal outcome in spite of local irrationality. The result, I think, is that the depletion behavior assumed in climate and energy models is biased, possibly in an unknown direction. On one hand, it seems likely that there’s a negative externality from extraction of public resources above the optimal rate, as in Montana. On the other hand, there might be harmful spillovers from climate or energy policies that increase the use of natural gas, if they exacerbate problems with a suboptimal extraction trajectory.

I’ve done a little sniffing around lately, and it seems that the state of the art in integrated assessment models isn’t too different from what it was in 1995 – most models still use exogenous depletion trajectories or some kind of optimization or equilibrium approach. The only real innovation I’ve seen is a stochastic model-within-a-model approach – essentially, agents know the structure of the system they’re in, but are uncertain about its state, so they make stochastically optimal decisions at each point in time. This is a step in the right direction, but still implies a very high cognitive load and degree of intended rationality that doesn’t square with real institutions. I’d be very interested to hear about anything new that moves toward a true behavioral model of resource management.

The rabble in the streets, calling for more power to the monarchy

When I see policies that formally allocate political power in proportion to wealth, I think back to a game I played in college, called Starpower.

It’s a simple trading game, using plastic chips. It starts with a trading round, where everyone has a chance to improve their lot by swapping to get better combinations of size and color. After trading, scores are tallied, and the players are divided into three groups: Triangles, Circles, and Squares. Then there’s a vote, in which players get to change the rules of the game. There’s a catch though: the Squares, who reaped the most points in the first round, get more votes. Subsequent rounds follow the same steps (lather, rinse, repeat).

When I played, I was lucky enough in the first round to wind up in the top group, the Squares. In the subsequent vote, no one proposed any significant rule changes, so we went back to trading. One of our fellow Squares was unlucky or incautious enough to make a few bad trades, and wound up demoted to the Circles when scores were tallied. That was a wake-up call – a challenge to the camaraderie of Squares. We promptly changed the rules, to slightly favor the accumulation of chips by those who already had many; we bribed the middle Circles to go along with it. We breathed a collective sigh of relief when, after the next trading round, we found that we were all still Squares. Then, we Squares abandoned all egalitarian thoughts. With our increased wealth, we voted to allocate future chip distributions so that the Circle and Triangle classes would perpetually trade places, never accumulating enough wealth to reach elite Square status. It worked, at least until the end of class (we were probably “saved by the bell” from having a revolution).

The interesting thing about the game is that it’s a perfect market economy. Property rights in chips are fully allocated, everyone walks in with a similar initial endowment of brains and chips, and there are mutual benefits to trade, even when wealth is distributed unequally. Yet the libertarian ideals are completely undone when the unequal allocation of wealth spills over to become an unequal allocation of power, where votes are weighted by money. That creates a reinforcing feedback:

[Figure: Starpower reinforcing feedback loop, wealth to votes to rules to wealth]
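A toy version of that loop (my own illustration, not the game’s actual scoring rules): wealth confers voting weight, votes tilt the payout rule, and the tilted rule feeds wealth back to the group that already leads.

```python
# Toy reinforcing loop: wealth -> voting weight -> rules favoring the wealthy -> wealth.
# Not the actual Starpower scoring; purely illustrative.
import numpy as np

wealth = np.array([12.0, 10.0, 8.0])   # Squares, Circles, Triangles: a small initial edge
tilt = 0.0                              # how far the rules favor the top group

for rnd in range(8):
    votes = wealth / wealth.sum()                     # chips buy votes
    tilt += 0.5 * (votes[0] - 1.0 / 3.0)              # the leading group shifts the rules
    payout = 10.0 * (np.ones(3) / 3.0 + tilt * np.array([1.0, 0.0, -1.0]))
    wealth += np.clip(payout, 0.0, None)              # everyone still gains, but unequally
    print(rnd, np.round(wealth, 1))
```

Run it and the gap between the top group and the rest widens every round, even though every trade and every payout is individually "fair."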

Allocating votes in a zoning protest in proportion to acreage, or any other policy that matches power to wealth, has the same properties as the Starpower game, and will lead to the same ugly outcome if left unchecked. As Donella Meadows put it,

The wise Squares whom we call Founding Fathers, who set up the rules of our national game, knew that. They invented ingenious devices for giving everyone a chance to win — democratic elections, universal education, and a Bill of Rights. Out of their structure have come further methods for interrupting accumulations of power — anti-trust regulations, progressive taxation, inheritance restrictions, affirmative action programs.

All of which, you might note, have been weakened over the past decade or so. We have moved a long way toward a Starpower structure. One of the worst steps in that direction was the evolution of expensive, television-mediated election campaigns, which permit only Squares to run for office. That puts Squares increasingly in control of the rules, and they make rules to benefit Squares.

Is that the game we want to be playing?

The new Montana House of Lords

Feudalism is back in Montana – or at least if SB379 passes, we’ll be well on the way.

SB379 “is to protect real property owners from unreasonable land use restrictions and reductions in land value due to county zoning.” Translation: make zoning impossible by allowing a superminority of owners to protest its implementation.

The real devil is in the details:

Section 2.  Definitions. For purposes of [sections 1, 2, 4 through 9, 11, and 12], the following definitions apply:

(1) (a) “Affected property” means property taxed on an ad valorem basis on the county tax rolls and directly subject to a proposed zoning action.

(2) “Affected property owner” means the owner of affected property, including natural persons, corporations, trusts, partnerships, incorporated or unincorporated associations, and any other legal entity owning land in fee simple, as joint tenants, or as tenants in common.

(3) “Protest override procedure” means the procedures described in [sections 6 through 9].

(4) “Protesting landowner” means an affected property owner who protests a zoning action.

(5) “Successful protest” means a protest by owners of 25% or more of the affected property.

Section 5.  Protest. (1) Within 60 days of the date that notice of passage of the resolution of intention to take a zoning action pursuant to [section 4] is first published, affected property owners may protest the proposed zoning action by delivering written notification to the board of county commissioners.

Notice how this assigns the right to protest to owners on the basis of area. Owners don’t even have to be people. A protest is a de facto vote. In other words, this policy is “one acre, one vote.” This bill elevates property rights, as in the 5th Amendment,

No person shall … be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.

above the Equal Protection Clause of the 14th Amendment,

No State shall make or enforce any law which shall … deny to any person within its jurisdiction the equal protection of the laws.

If the drafters of this bill are unclear as to which principle is the more fundamental, they could consult the Declaration of Independence,

We hold these truths to be self-evident, that all men are created equal, …

Update: one could also check the Montana constitution,

Section 1. Popular sovereignty. All political power is vested in and derived from the people. All government of right originates with the people, is founded upon their will only, and is instituted solely for the good of the whole.

I’ll gladly admit that zoning is a blunt instrument. But a de facto ban on zoning, with the idea that it’s a taking, guarantees a tragedy-of-the-commons outcome (unless you live in the faux-libertarian lalaland where property rights are fully allocated, markets are complete and there are no externalities). Even if that’s the road we choose to take, the governing principle must be “one person, one vote.”

Let’s see: government of the landowners, by the landowners, for the landowners – check. Elevation of politics above science – check. Montana is two thirds of the way to the Middle Ages! All we need now is to get rid of separation of church and state.

Legislators' vision for Montana

This is it: a depleted mining wasteland:


Berkeley Pit, Butte MT, NASA Earth Observatory

The spearhead is an assault on the MT constitution’s language on the environment,

All persons are born free and have certain inalienable rights. They include the right to a clean, and healthful, and economically productive environment and the rights of pursuing life’s basic necessities, enjoying and defending their lives and liberties, acquiring, possessing and protecting property, and seeking their safety, health and happiness in all lawful ways. In enjoying these rights, all persons recognize corresponding responsibilities.

What does “economically productive” add that wasn’t already covered by “pursuing … acquiring … possessing” anyway? Ironically, this could cut both ways – would it facilitate restrictions on future resource extraction, because depleted mines become economically unproductive?

Other bills attempt to legalize gravel pits in residential areas, sell coal at discount prices, and dismantle or cripple any other environmental protection you could think of.

The real kicker is Joe Read’s HB 549, AN ACT STATING MONTANA’S POSITION ON GLOBAL WARMING:

Section 1.  Public policy concerning global warming. (1) The legislature finds that to ensure economic development in Montana and the appropriate management of Montana’s natural resources it is necessary to adopt a public policy regarding global warming.

At least we’re clear up front that the coal industry is in charge!

(2) The legislature finds:

I’m sure you can guess how many qualified climate scientists are in the Montana legislature.

(a) global warming is beneficial to the welfare and business climate of Montana;

I guess Joe didn’t get the memo, that skiing and fishing could be hard hit. Maybe he thinks crops and trees do just fine with too little water and warmth, or too much.

(b) reasonable amounts of carbon dioxide released into the atmosphere have no verifiable impacts on the environment; and

Yeah, and pi is 3.2, just like it was in Indiana in 1897. I guess you could argue about the meaning of “reasonable,” but apparently Joe even rejects chemistry (ocean acidification) and biology (CO2 fertilization) along with atmospheric science.

(c) global warming is a natural occurrence and human activity has not accelerated it.

Ahh, now we’re doing detection & attribution. Legislating the answers to scientific questions is a fool’s errand. How did this text go through peer review?

(3) (a) For the purposes of this section, “global warming” relates to an increase in the average temperature of the earth’s surface.

Well, at least one sentence in this bill makes sense – at least if you assume that “average” is over time as well as space.

(b) It does not include a one-time, catastrophic release of carbon dioxide.

Where did that strawdog come from? Apparently there’s a catastrophic release of CO2 every time Joe Read opens his mouth.

A few parts per million

[Photo: glass of water tinted with ~450 ppm food coloring]

There’s a persistent rumor that CO2 concentrations are too small to have a noticeable radiative effect on the atmosphere. (It appears here, for example, though mixed with so much other claptrap that it’s hard to wrap your mind around the whole argument – which would probably cause your head to explode due to an excess of self-contradiction anyway.)

To fool the innumerate, one must simply state that CO2 constitutes only about 390 parts per million, or .039%, of the atmosphere. Wow, that’s a really small number! How could it possibly matter? To be really sneaky, you can exploit stock-flow misperceptions by talking only about the annual increment (~2 ppm) rather than the total, which makes things look another 100x smaller (apparently a part of the calculation in Joe Bastardi’s width of a human hair vs. a 1km bridge span).

Anyway, my kids and I got curious about this, so we decided to put 390ppm of food coloring in a glass of water. Our precision in shaving dye pellets wasn’t very good, so we actually ended up with about 450ppm. You can see the result above. It’s very obviously blue, in spite of the tiny dye concentration. We think this is a conservative visual example, because a lot of the tablet mass was apparently a fizzy filler, and the atmosphere is 1000 times less dense than water, but effectively 100,000 times thicker than this glass. However, we don’t know much about the molecular weight or radiative properties of the dye.
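The arithmetic behind the experiment, roughly (the glass size is an assumption, and atmospheric CO2 is measured by volume while the dye is measured by mass, so this is only a visual analogy, as noted above):

```python
# Rough arithmetic behind the dyed-water experiment (approximate numbers).
glass_g = 300.0                      # assumed ~300 mL glass of water ~ 300 g
target_ppm = 390.0                   # atmospheric CO2 concentration (by volume), ~2011
dye_g = glass_g * target_ppm / 1e6   # grams of dye for 390 ppm by mass
print(f"{dye_g * 1000:.0f} mg of dye gives {target_ppm:.0f} ppm in a {glass_g:.0f} g glass")
# We overshot to ~450 ppm, i.e. roughly 135 mg, a barely weighable shaving of a tablet.
```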

This doesn’t prove much about the atmosphere, but it does neatly disprove the notion that an effect is automatically small, just because the numbers involved sound small. If you still doubt this, try ingesting a few nanograms of the toxin infused into the period at the end of this sentence.

Monday tidbits – tools, courses

I neglected to cross-post an interesting new Vensim model documentation tool that’s in my model library.

Shameless commerce dept.: I’m teaching Vensim courses in Palo Alto in April and Bozeman in June. Following the June offering, Ventana’s Bill Arthur will be teaching “SMLOD” – Small Models with Lots of Data – a deep technical dive into the extraction of insight from large datasets.