Dumb and Dumber

Not to be outdone by Utah, South Dakota has passed its own climate resolution.

They raise the ante – where Utah cherry-picked twelve years of data, South Dakotans are happy with only eight. Even better, their pattern-matching heuristic violates bathtub dynamics:

WHEREAS, the earth has been cooling for the last eight years despite small increases in anthropogenic carbon dioxide
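Here's a minimal sketch of the bathtub point, in Python with entirely made-up parameters (a toy stock-and-flow model, not a calibrated climate model): CO2 is a stock that rises as long as emissions are positive, temperature adjusts toward the accumulated forcing with inertia, and a little year-to-year variability on top can readily produce eight-year stretches of apparent "cooling" while the CO2 stock keeps filling.

```python
import numpy as np

rng = np.random.default_rng(1)

# All numbers below are made up for illustration -- a toy stock-and-flow model, not a climate model.
years = 120
emissions = 0.5          # CO2 added to the atmospheric stock each year (arbitrary units)
forcing_per_co2 = 0.05   # forcing per unit of accumulated CO2 (illustrative)
feedback = 1.0           # radiative response per deg C of warming (illustrative)
heat_capacity = 8.0      # thermal inertia: years to adjust toward equilibrium (illustrative)
weather_noise = 0.1      # deg C of year-to-year natural variability (illustrative)

co2, temp, temps = 0.0, 0.0, []
for _ in range(years):
    co2 += emissions                                     # the CO2 bathtub only fills
    net_flux = forcing_per_co2 * co2 - feedback * temp   # accumulated forcing minus response
    temp += net_flux / heat_capacity + rng.normal(0, weather_noise)
    temps.append(temp)
temps = np.array(temps)

# Despite CO2 rising every single year, look for 8-year windows with a negative temperature trend
window = 8
slopes = [np.polyfit(np.arange(window), temps[i:i + window], 1)[0]
          for i in range(years - window + 1)]
print(f"overall trend: {np.polyfit(np.arange(years), temps, 1)[0]:+.3f} degC/yr")
print(f"{sum(s < 0 for s in slopes)} of {len(slopes)} eight-year windows trend downward")
```

A short downward wiggle in a noisy, lagging stock tells you nothing about the sign of the accumulation driving it, which is exactly the point the resolution misses.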

They have taken the skeptic claim that there’s little warming in the tropical troposphere and bumped it up a notch:

WHEREAS, there is no evidence of atmospheric warming in the troposphere where the majority of warming would be taking place

Nope, no trend here:

Satellite tropospheric temperature (RSS, TLT)

Continue reading “Dumb and Dumber”

Sea level update – newish work

I linked some newish work on sea level by Aslak Grinsted et al. in my last post. There are some other new developments:

On the data front, Rohling et al. investigate sea level over the last half a million years and in the Pliocene (3+ million years ago). Here’s the relationship between CO2 and Antarctic temperatures:

Rohling Fig 2A

Two caveats and one interesting observation here:

  • The axes are flipped; if you think causally with CO2 on the x-axis, you need to mentally reflect this picture.
  • TAA refers to Antarctic temperature, which is subject to polar amplification.
  • Notice that the empirical line (red) is much shallower than the relationship in model projections (green). Since the axes are flipped, that means empirical Antarctic temperatures are much more sensitive to CO2 than the projections suggest, if it’s valid to extrapolate, and if we wait long enough (see the quick check below).
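To make the flipped-axes point concrete, here's a trivial check with hypothetical slopes (not values from Rohling et al.): with temperature on the x-axis and CO2 on the y-axis, the plotted slope is ΔCO2/ΔT, so the sensitivity ΔT/ΔCO2 is its reciprocal, and the shallower plotted line implies the bigger temperature response.

```python
# Hypothetical plotted slopes (CO2 per deg C) -- purely illustrative, not values from the paper
plotted_slopes = {"empirical (red)": 4.0, "model projections (green)": 12.0}

for name, slope in plotted_slopes.items():
    sensitivity = 1.0 / slope   # deg C per unit CO2, because the axes are flipped
    print(f"{name}: plotted slope {slope:4.1f} CO2/degC -> {sensitivity:.2f} degC per unit CO2")
# The shallower (empirical) line implies the larger temperature response per unit CO2.
```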

Continue reading “Sea level update – newish work”

Sea level update – Grinsted edition

I’m waaayyy overdue for an update on sea level models.

I’ve categorized my 6 previous posts on the Rahmstorf (2007) and Grinsted et al. models under sea level.

I had some interesting correspondence last year with Aslak Grinsted.

I agree with the ellipsis idea that you show in the figure on page IV. However, i conclude that if i use the paleo temperature reconstructions then the long response times are ‘eliminated’. You can sort of see why on this page: Fig2 here illustrates one problem with having a long response time:

http://www.glaciology.net/Home/Miscellaneous-Debris/rahmstorf2007lackofrealism

It seems it is very hard to make the turn at the end of the LIA with a large inertia.

I disagree with your statement “this suggests to me that G’s confidence bounds, +/- 67 years on the Moberg variant and +/- 501 years on the Historical variant are most likely slices across the short dimension of a long ridge, and thus understate the true uncertainty of a and tau.”

The inverse monte carlo method is designed not to “slice across” the distributions. I think the reason we get so different results is that your payoff function is very different from my likelihood function – as you also point out on page VI.

Aslak is politely pointing out that I screwed up one aspect of the replication. We agree that the fit payoff surface is an ellipse (I think the technical term I used was “banana-ridge”). However, my hypothesis about the inexplicably narrow confidence bounds in the Grinsted et al. paper was wrong. It turns out that the actual origin of the short time constant and narrow confidence bounds is a constraint that I neglected to implement. The constraint involves the observation that variations in sea level over the last two millennia have been small. That basically chops off most of the long-time-constant portion of the banana, leaving the portion described in the paper. I’ve confirmed this with a quick experiment.
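For anyone who wants to poke at this, here's a rough sketch of the kind of quick experiment involved. It is not a replication: it assumes the Grinsted-style semi-empirical form dS/dt = (Seq − S)/tau with Seq = a*T, uses a crude stand-in temperature history rather than the Moberg or Historical reconstruction, and simply scales a for each candidate tau so the simulated 20th-century rise roughly matches observations. The punch line is structural: along the ridge, a long tau needs a large a to explain the modern rise, and a large a implies large pre-industrial sea level swings, which the "little variation over the last two millennia" observation then rules out.

```python
import numpy as np

def simulate(temp, a, tau, dt=1.0):
    """Integrate dS/dt = (Seq - S)/tau with Seq = a*T (intercept absorbed; starts in equilibrium)."""
    s, out = a * temp[0], []
    for T in temp:
        s += ((a * T - s) / tau) * dt
        out.append(s)
    return np.array(out)

# Crude stand-in temperature anomaly for 0-2000 AD: flat, a Little Ice Age dip, then modern
# warming. This is a placeholder, NOT the Moberg or Historical reconstruction used in the paper.
years = np.arange(0, 2001)
temp = np.where(years < 1400, 0.0,
       np.where(years < 1850, -0.4, -0.4 + 0.8 * (years - 1850) / 150.0))

target_modern_rise = 0.17   # meters over 1900-2000, roughly the observed 20th-century rise

for tau in [300, 1000, 3000]:                   # candidate response times, years
    s_unit = simulate(temp, a=1.0, tau=tau)     # response is linear in a, so one unit run suffices
    rise_unit = s_unit[years == 2000][0] - s_unit[years == 1900][0]
    a = target_modern_rise / rise_unit          # scale a so the 20th-century rise matches
    s = a * s_unit
    pre_range = s[years < 1850].max() - s[years < 1850].min()
    print(f"tau = {tau:4d} yr needs a = {a:5.1f} m/degC; "
          f"implied pre-1850 sea level range = {pre_range:.2f} m")
# Long-tau parameter sets imply large pre-industrial swings, which conflict with the observation
# that sea level varied little over the last two millennia -- so they get chopped off.
```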

Continue reading “Sea level update – Grinsted edition”

Earthquakes != climate

Daniel Sarewitz has a recent column in Nature (paywall, unfortunately). It contains some wisdom, but the overall conclusion is bonkers.

First, the good stuff: Sarewitz rightly points out the folly of thinking that more climate science (like regional downscaling) will lead to action where existing science has failed to yield any. Similarly, he observes that good scientific information about the vulnerability of New Orleans didn’t lead to avoidance of catastrophe.

For complex, long-term problems such as climate change or nuclear-waste disposal, the accuracy of predictions is often unknowable, uncertainties are difficult to characterize and people commonly disagree about the outcomes they desire and the means to achieve them. For such problems, the belief that improved scientific predictions will compel appropriate behaviour and lead to desired outcomes is false.

Then things go off the rails. Continue reading “Earthquakes != climate”

The Health Care Death Spiral

Paul Krugman documents an ongoing health care death spiral in California:

Here’s the story: About 800,000 people in California who buy insurance on the individual market — as opposed to getting it through their employers — are covered by Anthem Blue Cross, a WellPoint subsidiary. These are the people who were recently told to expect dramatic rate increases, in some cases as high as 39 percent.

Why the huge increase? It’s not profiteering, says WellPoint, which claims instead (without using the term) that it’s facing a classic insurance death spiral.

Bear in mind that private health insurance only works if insurers can sell policies to both sick and healthy customers. If too many healthy people decide that they’d rather take their chances and remain uninsured, the risk pool deteriorates, forcing insurers to raise premiums. This, in turn, leads more healthy people to drop coverage, worsening the risk pool even further, and so on.

A death spiral arises when a positive feedback loop runs as a vicious cycle. Another example is Andy Ford’s utility death spiral. The existence of the positive feedback leads to counter-intuitive policy prescriptions: Continue reading “The Health Care Death Spiral”
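Here's a minimal sketch of the loop with invented numbers (nothing here is calibrated to the California individual market): each year the insurer prices to the average expected cost of whoever remains in the pool, plus a load, and the healthiest members, for whom the premium now looks like a bad deal, drop out, raising next year's average cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented annual expected health costs for 10,000 individuals (illustrative, not California data)
costs = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)   # dollars per year
insured = np.ones(costs.size, dtype=bool)
load = 1.15               # administrative load and margin on expected costs (illustrative)
drop_threshold = 0.6      # drop coverage if your expected cost is below 60% of the premium

for year in range(1, 9):
    premium = load * costs[insured].mean()      # price to the average cost of the remaining pool
    # The healthiest members, for whom coverage now looks like a bad deal, leave the pool
    insured &= costs >= drop_threshold * premium
    print(f"year {year}: premium = ${premium:,.0f}, insured = {insured.sum():,}")
# Premiums ratchet up and enrollment ratchets down until only the sickest remain insured.
```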

Legislating Science

The Utah House has declared that CO2 is harmless. The essence of the argument in HJR 12: temperature’s going down, climategate shows that scientists are nefarious twits, whose only interest is in riding the federal funding gravy train, and emissions controls hurt the poor. While it’s reassuring that global poverty is a big concern of Utah Republicans, the scientific observations are egregiously bad:

WHEREAS, global temperatures have been level and declining in some areas over the past 12 years;
WHEREAS, the “hockey stick” global warming assertion has been discredited and climate alarmists’ carbon dioxide-related global warming hypothesis is unable to account for the current downturn in global temperatures;
WHEREAS, there is a statistically more direct correlation between twentieth century temperature rise and Chlorofluorocarbons (CFCs) in the atmosphere than CO2;
WHEREAS, outlawed and largely phased out by 1978, in the year 2000 CFC’s began to decline at approximately the same time as global temperatures began to decline;

[…]

WHEREAS, Earth’s climate is constantly changing with recent warming potentially an indication of a return to more normal temperatures following a prolonged cooling period from 1250 to 1860 called the “Little Ice Age”;

The list cherry-picks skeptic arguments that rely on a few papers (if that), nearly all thoroughly discredited. There are so many things wrong here that it’s not worth the electrons to refute them one by one. The quality of their argument calls to mind the 1897 attempt in Indiana to legislate that pi = 3.2. It’s sad that this resolution’s supporters are too scientifically illiterate to notice, or too dishonest to care. There are real uncertainties about climate; it would be nice to see a legislative body really grapple with the hard questions, rather than chasing red herrings.

The Dynamics of Science

First, check out SEED’s recent article, which asks: “When it comes to scientific publishing and fame, the rich get richer and the poor get poorer. How can we break this feedback loop?”

For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.
—Matthew 25:29

Author John Wilbanks proposes to use richer metrics to evaluate scientists, going beyond publications to consider data, code, etc. That’s a good idea per se, but it’s a static solution to a dynamic problem. It seems to me that it spreads around the effects of the positive feedback from publications->resources->publications a little more broadly, but doesn’t necessarily change the gain of the loop. A better solution, if meritocracy is the goal, might be greater use of blind evaluation and changes to allocation mechanisms themselves.
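As a minimal illustration of that loop (purely invented parameters): allocate resources each period in proportion to a merit metric, let resources generate more of whatever the metric counts, and small early differences compound. Swapping in a richer metric changes what feeds the loop, but as long as allocation stays proportional to past success, the reinforcing loop is still there.

```python
import numpy as np

rng = np.random.default_rng(42)

n_researchers = 100
periods = 30
total_funding = 1000.0        # resources allocated each period (arbitrary units)
papers_per_funding = 0.1      # productivity: expected papers per unit of funding (illustrative)

publications = np.ones(n_researchers)   # everyone starts out equal

for _ in range(periods):
    # Resources are allocated in proportion to the chosen merit metric (here, publication count).
    share = publications / publications.sum()
    funding = total_funding * share
    # More resources -> more publications (with some luck), closing the reinforcing loop.
    publications += rng.poisson(papers_per_funding * funding)

top_share = np.sort(publications)[-10:].sum() / publications.sum()
print(f"after {periods} periods, the top 10% of researchers hold {top_share:.0%} of all publications")
```

Blind evaluation or different allocation mechanisms act on the `share` line, which is where the gain of the loop actually lives.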

The reason we care about this is that we’d like science to progress as quickly as possible. That involves crafting a reward system with some positive feedback, but not so much that it easily locks in to suboptimal paths. That’s partly a matter of the individual researcher, but there’s a larger question: how to ensure that good theories out-compete bad ones?

Now check out the work of John Sterman and Jason Wittenberg on Kuhnian scientific revolutions.

Update: also check out filter bubbles.