DeSmogBlog documents scientists’ outrage at inclusion on Dennis Avery’s list of 500 Scientists with Documented Doubts of Man-Made Global Warming Scares. Amusingly, the scientists concerned, a few of whom are deceased, are listed as “Co-Authors”. I’m going to put this new “open coauthoring” concept to work – I’m already working on abstracts. First, I think I’ll lower my Erdős number – Paul, dude, you can be my second. Next, I think I’ll team up with Einstein and Bohr to finish up quantum gravity.
A pair of papers in Science this week refines the understanding of the acceleration of glacier flow from lubrication by meltwater. The bottom line:
Now a two-pronged study–both broader and more focused than the one that sounded the alarm–has confirmed that meltwater reaches the ice sheet’s base and does indeed speed the ice’s seaward flow. The good news is that the process is more leisurely than many climate scientists had feared. “Is it, ‘Run for the hills, the ice sheet is falling in the ocean’?” asks glaciologist Richard Alley of Pennsylvania State University in State College. “No. It matters, but it’s not huge.” The finding should ease concerns that Greenland ice could raise sea level a disastrous meter or more by the end of the century. Experts remain concerned, however, because meltwater doesn’t explain why Greenland’s rivers of ice have recently surged forward.
A remarkable excerpt:
The meltwater monitoring caught a 4-kilometer-long, 8-meter-deep lake disappearing into the ice in an hour and a half. As theorists had supposed, once the lake water was deep enough, its weight began to wedge open existing cracks, which only increased the weight of overlying water on the crack tip and accelerated cracking downward. Once the main crack reached the bottom of the ice, heat from churning water flow melted out parts of the fracture, and drainage took off. The lake disappeared in about 1.4 hours at an average rate of 8700 cubic meters per second, exceeding the average flow over Niagara Falls. That’s almost four Olympic pools a second.
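The quoted figures are easy to sanity-check. A quick back-of-the-envelope calculation reproduces the pool comparison, taking a nominal 2,500 m³ Olympic pool (50 m × 25 m × 2 m) — an assumed value, not one stated in the article:

```python
# Back-of-the-envelope check of the lake drainage figures quoted above.
# The Olympic pool volume (2500 m^3) is an assumption, not from the article.

DRAIN_RATE = 8700    # m^3/s, average drainage rate (from the article)
DURATION_H = 1.4     # hours, drainage time (from the article)
POOL_VOLUME = 2500   # m^3, nominal Olympic pool (assumed)

total_volume = DRAIN_RATE * DURATION_H * 3600   # total water drained, m^3
pools_per_second = DRAIN_RATE / POOL_VOLUME     # pool-equivalents per second

print(f"Total water drained: {total_volume / 1e6:.0f} million m^3")
print(f"Olympic pools per second: {pools_per_second:.1f}")  # ~3.5, "almost four"
```

That works out to roughly 44 million cubic meters of water in under an hour and a half, consistent with the article's "almost four Olympic pools a second."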
Check it out (subscription required).
As an experiment, I’ve created a wiki for dynamic models. I’m gradually migrating my existing model library into the wiki, in the hope that it will be easier to maintain and more useful for visitors. I’ve also created a public model library section so that users can submit new material. Check it out!
Two interesting abstracts I ran across today:
A decrease in the globally averaged low level cloud cover, deduced from the ISCCP infrared data, as the cosmic ray intensity decreased during the solar cycle 22 was observed by two groups. The groups went on to hypothesize that the decrease in ionization due to cosmic rays causes the decrease in cloud cover, thereby explaining a large part of the currently observed global warming. We have examined this hypothesis to look for evidence to corroborate it. None has been found and so our conclusions are to doubt it. From the absence of corroborative evidence, we estimate that less than 23%, at the 95% confidence level, of the 11 year cycle change in the globally averaged cloud cover observed in solar cycle 22 is due to the change in the rate of ionization from the solar modulation of cosmic rays.
Almost one-quarter of carbon dioxide released to the atmosphere is emitted in the production of internationally traded goods and services. Trade therefore represents an unrivalled, and unused, tool for reducing greenhouse gas emissions.
The Fed has just doled out over $300 billion in loans to bail out Bear Stearns and other bad actors in the subprime mortgage mess. It’s hard to say what fraction of that capital is really at risk, but let’s say 10%. That’s a pretty big transfer to shareholders, especially considering that there’s nothing in it for the general public other than avoidance of financial contagion effects.

If this were an environmental or public health issue, skeptics would be lined up to question whether contagion in fact exists, whether fixing it does more harm than good (e.g., by creating future moral hazard), and whether there’s a better way to spend the money. Contagion would have to be proven with models, subject to infinite scrutiny and delay. Yet here, billions are doled out with no visible analysis or public process, based on policies invented ad hoc.

Perhaps a little feedback control is needed here: let’s create a bailout fund, supported by taxes on firms that are deemed too big to fail by some objective criteria. Then two negative feedbacks will operate: firms that get too large will be encouraged to split themselves into manageable chunks, and the potential beneficiaries of bailouts will have to ask themselves how badly they really want insurance. Let’s try it, and see how long the precautionary principle lasts in the financial sector.
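The two proposed feedbacks can be sketched as a toy model. Everything here — the threshold, the tax rate, the growth rate, the splitting rule — is invented for illustration; only the loop structure matters: an insurance tax that scales with size above a "too big to fail" threshold both caps growth and rewards splitting up.

```python
# Toy sketch of the proposed bailout-fund feedbacks. All parameters
# (threshold, tax rate, growth rate) are invented for illustration.

TOO_BIG_THRESHOLD = 100.0  # size above which the insurance tax applies (assumed)
TAX_RATE = 0.2             # tax per unit of size above the threshold (assumed)
GROWTH_RATE = 0.1          # untaxed fractional growth per period (assumed)

def step(size):
    """One period: grow, then pay the size-based insurance tax."""
    size *= 1 + GROWTH_RATE
    tax = TAX_RATE * max(0.0, size - TOO_BIG_THRESHOLD)
    return size - tax

def simulate(size, periods=100):
    """Feedback 1: the tax offsets growth, so size converges instead of exploding."""
    for _ in range(periods):
        size = step(size)
    return size

def split_incentive(size):
    """Feedback 2: tax saved by splitting into two half-size firms.
    Positive means splitting pays."""
    whole_tax = TAX_RATE * max(0.0, size - TOO_BIG_THRESHOLD)
    split_tax = 2 * TAX_RATE * max(0.0, size / 2 - TOO_BIG_THRESHOLD)
    return whole_tax - split_tax

print(f"Size with tax after 100 periods: {simulate(10.0):.0f}")
print(f"Size without tax after 100 periods: {10 * (1 + GROWTH_RATE) ** 100:.0f}")
print(f"Tax saved by a size-150 firm splitting in two: {split_incentive(150.0):.0f}")
```

With these made-up numbers the taxed firm levels off near the threshold rather than growing without bound, and any firm above the threshold saves its entire insurance tax by splitting into two sub-threshold pieces — the two negative loops described above.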
Update: Paul Krugman has a nice editorial on the problem.
And if financial players like Bear are going to receive the kind of rescue previously limited to deposit-taking banks, the implication seems obvious: they should be regulated like banks, too.
Olive Heffernan has an interesting tidbit on Climate Feedback about unintended consequences of climate policy.
It’s worth noting that most of these side-effects are not consequences of climate policy per se. They are consequences of pursuing climate policy piecemeal, from the bottom up, and seeking technological fixes in the absence of market signals. If climate policy were pursued as part of a general agenda of internalizing environmental and social externalities through market signals, some of these perverse behaviors would not occur.
The side effects of the corn ethanol boom should not be laid at the door of climate policy. Apart from hopes for cellulosic, ethanol has little to offer with respect to greenhouse gas emissions, and perhaps much to answer for. Its real motivations are oil independence and largesse to the ag sector.
I’ve been stymied for some time over how to start this blog. Finally (thanks to my wife) I’ve realized that it’s really the same problem as conceptualizing a model, with the same solution.
Beginning modelers frequently face a blank sheet of paper with trepidation … where to begin? There’s lots of good advice that I should probably link here. Instead I’ll just observe that there’s really no good answer … you just have to start. The key is to remember that modeling is highly iterative. It’s OK if the first 10 attempts are bad; their purpose is not to achieve perfection. Colleagues and I are currently working on a model that is in version 99, and still full of challenges. The purpose of those first few rounds is to explore the problem space and capture as much of the “mess” as possible. As long as the modeling process exposes the work-in-progress to lots of user feedback and reality checks, and captures insight along the way, there’s nothing to worry about.