Nordhaus on Subsidies

I’m not really a member of the neoclassical economics fan club, but I think this is on point:

“Subsidies pose a more general problem in this context. They attempt to discourage carbon-intensive activities by making other activities more attractive. One difficulty with subsidies is identifying the eligible low-carbon activities. Why subsidize hybrid cars (which we do) and not biking (which we do not)? Is the answer to subsidize all low carbon activities? Of course, that is impossible because there are just too many low-carbon activities, and it would prove astronomically expensive. Another problem is that subsidies are so uneven in their impact. A recent study by the National Academy of Sciences looked at the impact of several subsidies on GHG emissions. It found a vast difference in their effectiveness in terms of CO2 removed per dollar of subsidy. None of the subsidies were efficient; some were horribly inefficient; and others such as the ethanol subsidy were perverse and actually increased GHG emissions. The net effect of all the subsidies taken together was effectively zero! So in the end, it is much more effective to penalize carbon emissions than to subsidize everything else.” (Nordhaus, 2013, p. 266)

(Via a W. Hogan paper, https://scholar.harvard.edu/whogan/files/hogan_hepg_100418r.pdf)

Climate Skeptics in Search of Unity

The most convincing thing about mainstream climate science is not that the models are so good, but that the alternatives are so bad.

Climate skeptics have been at it for 40 years, but have produced few theories or predictions that have withstood the test of time. Even worse, where there were once legitimate measurement issues and model uncertainties to discuss, as those have fallen one by one, the skeptics are doubling down on theories that rely on “alternative” physics. The craziest ideas get the best acronyms and metaphors. The allegedly skeptical audience welcomes these bizarre proposals with enthusiasm. As they turn inward, they turn on each other.

The latest example is in the Lungs of Gaia at WUWT:

A fundamental concept at the heart of climate science is the contention that the solar energy that the disk of the Earth intercepts from the Sun’s irradiance must be diluted by a factor of 4. This is because the surface area of a globe is 4 times the interception area of the disk silhouette (Wilde and Mulholland, 2020a).

This geometric relationship of divide by 4 for the insolation energy creates the absurd paradox that the Sun shines directly onto the surface of the Earth at night. The correct assertion is that the solar energy power intensity is collected over the full surface area of a lit hemisphere (divide by 2) and that it is the thermal radiant exhaust flux that leaves from the full surface area of the globe (divide by 4).

Setting aside the weird pedantic language that seems to infect those with Galileo syndrome, these claims are simply a collection of errors. The authors seem to be unable to understand the geometry of solar flux, even though this is taught in first-year physics.

Some real college physics (divide by 4).

The “divide by 4” arises because the solar flux intercepted by the earth is over an area pi*r^2 (the disk of the earth as seen from the sun) while the average flux normal to the earth’s surface is over an area 4*pi*r^2 (the area of a sphere).

The authors’ notion of “divide by 2” resulting in 1368/2 = 684 W/m^2 average is laughable because it implies that the sun is somehow like a luminous salad bowl that delivers light at 1368 W/m^2 normal to the surface of one side of the earth only. That would make for pretty interesting sunsets.
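As a quick sanity check on the geometry (my own sketch, not part of the original post), you can average the instantaneous flux over a sphere numerically, and the factor of 4 falls right out:

```python
import numpy as np

# Monte Carlo check of the "divide by 4" geometry (illustrative sketch).
# Sample points uniformly on a sphere; the sunlit side receives S0*cos(zenith),
# the night side receives nothing. The average over the whole surface is S0/4.
S0 = 1368.0                             # solar constant value used in the post, W/m^2
rng = np.random.default_rng(0)
mu = rng.uniform(-1.0, 1.0, 2_000_000)  # cosine of the angle from the subsolar point
flux = S0 * np.clip(mu, 0.0, None)      # dark hemisphere contributes zero
print(flux.mean(), S0 / 4)              # both ~342 W/m^2
```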

In any case, none of this has much to do with the big climate models, which don’t “dilute” anything, because they have explicit geometry of the earth and day/night cycles with small time steps. So, all of this is already accounted for.

To his credit, Roy Spencer – a hero of the climate skeptics movement of the same magnitude as Richard Lindzen – arrives early to squash this foolishness:

How can some people not comprehend that the S/4 value of solar flux does NOT represent the *instantaneous* TOA illumination of the whole Earth, but instead the time-averaged (1-day or longer) solar energy available to the whole Earth. There is no flat-Earth assumption involved (in fact, dividing by 4 is because the Earth is approximately spherical). It is used in only simplistic treatments of Earth’s average energy budget. Detailed calculations (as well as 4D climate models as well as global weather forecast models) use the full day-night (and seasonal) cycle in solar illumination everywhere on Earth. The point isn’t even worth arguing about.

Responding to the clueless authors:

Philip Mulholland, you said: “Please confirm that the TOA solar irradiance value in a climate model cell follows the full 24 hour rotational cycle of daytime illumination and night time darkness.”

Oh, my, Philip… you cannot be serious.

Every one of the 24+ climate models run around the world have a full diurnal cycle at every gridpoint. This is without question. For example, for models even 20+ years ago start reading about the diurnal cycles in the models on page 796 of the following, which was co-authored by representatives from all of the modeling groups: https://www.ipcc.ch/site/assets/uploads/2018/02/WG1AR5_Chapter09_FINAL.pdf

Finally:

Philip, Ed Bo has hit the nail on the head. Your response to him suggests you do not understand even the basics of climate modeling, and I am a little dismayed that your post appeared on WUWT.

Undeterred, the WUWT crowd then proceeds to savage anyone, including their erstwhile hero Spencer, who dares to challenge the new “divide by 2” orthodoxy.

Dr roy with his fisher price cold warms hot physics tried to hold the line for the luke-warmers, but soon fecked off when he knew he would be embarrassed by the grown-ups in the room…..

This is not the first time a WUWT post has claimed to overturn climate science. There are others, like the 2011 Unified Theory of Climate. It’s basically technobabble, notable primarily for its utter obscurity in the nine years following. It’s not really worth analyzing, though I am a little curious how a theory driven by static atmospheric mass explains dynamics. Also, I notice that the perfect fit to the data for 7 planets in Fig. 5 has 7 parameters – ironic, given that accusations of overparameterization are a perennial favorite of skeptics. Amusingly, one of the authors of the “divide by two” revolution (Wilde) appears in the comments to point out his alternative “Unifying” Theory of Climate.

Are these alternate theories in agreement, mutually exclusive, or just not even wrong? It would be nice if skeptics would get together and decide which of their grand ideas is the right one. Does atmospheric pressure run the show, or is it sunspots? And which fundamentals that mathematicians and physicists screwed up have eluded verification for all these years? Is it radiative transfer, or the geometry of spheres and disks? Is energy itself misdefined? Inquiring minds want to know.

The bottom line is that Roy Spencer is right. It isn’t worth arguing about these things, any more than it’s worth arguing with flat earthers or perpetual motion enthusiasts. Engaging will just leave you wondering if proponents are serious, as in seriously deluded, or just yanking your chain while keeping a straight face.

 

Emissions Pricing vs. Standards

You need an emissions price in your portfolio to balance effort across all tradeoffs in the economy.

The energy economy consists of many tradeoffs. Some of these are captured in the IPAT framework:

Emissions = Population x GDP per Capita x Energy per GDP x Emissions per Energy

IPAT shows that, to reduce emissions, there are multiple points of intervention. One could, for example, promote lower energy intensity, or reduce the carbon intensity of energy, or both.
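To make the identity concrete, here’s a minimal sketch with placeholder numbers (roughly the right orders of magnitude, but not actual data):

```python
# Kaya/IPAT-style decomposition with illustrative placeholder values.
population     = 8e9      # people
gdp_per_cap    = 12_000   # $ per person per year
energy_per_gdp = 5        # MJ per $
co2_per_energy = 0.07     # kg CO2 per MJ

emissions = population * gdp_per_cap * energy_per_gdp * co2_per_energy  # kg CO2/yr
print(f"{emissions / 1e12:.0f} Gt CO2/yr")  # ~34 Gt/yr; cut any factor and emissions fall proportionally
```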

An ideal policy, or portfolio of policies, would:

  • Cover all the bases – ensure that no major opportunity is left unaddressed.
  • Balance the effort – an economist might express this as leveling the shadow prices across areas.

We have a lot of different ways to address each tradeoff: tradeable permits, taxes, subsidies, quantity standards, performance standards, command-and-control, voluntary limits, education, etc. So far, in the US, we have basically decided that taxes are a non-starter, and instead pursued subsidies and tax incentives, portfolio and performance standards, with limited use of tradeable permits.

Here’s the problem with that approach. You can decompose the economy a lot more than IPAT does, into thousands of decisions that have energy consequences. I’ve sampled a tiny fraction below.

Is there an incentive?

| Decision | Standards | Emissions Price |
|---|---|---|
| Should I move to the city or the suburbs? | No | Yes |
| Should I telecommute? | No | Yes |
| Drive, bike, bus or metro today? | No | Yes |
| Car, truck or SUV? | No (CAFE gets this wrong) | Yes |
| Big SUV or small SUV? | CAFE (again) | Yes |
| Gasoline, diesel, hybrid or electric? | ZEV, tax credits | Yes |
| Regular or biofuel? | LCFS, CAFE credits | Yes |
| Detached house or condo? | No | Yes |
| Big house or small? | No | Yes |
| Gas or heat pump? | No | Yes |
| High performance building envelope or granite countertops? | Building codes (lowest common denominator) | Yes |
| Incandescent or LED lighting? | Bulb ban | Yes |
| LEDs are cheap – use more? | No | Yes |
| Get up to turn out an unused light? | No | Yes |
| Fridge: top freezer, bottom freezer or side by side? | No | Yes |
| Efficient appliances? | Energy Star (badly) | Yes |
| Solar panels? | Building codes, net metering, tax credits, cap & trade | Yes |
| Green electricity? | Portfolio standards | Yes |
| 2 kids or 8? | No | Yes |

The beauty of an emissions price – preferably charged at the minemouth and wellhead – is that it permeates every economic aspect of life. The extent to which it does so depends on the emissions intensity of the subject activity – when it’s high, there’s a strong price signal, and when it’s low, there’s a weak signal, leaving users free to decide on other criteria. But the signal is always there. Importantly, the signal can’t be cheated: you can fake your EPA mileage rating – for a while – but it’s hard to evade costs that arrive packaged with your inputs, be they fuel, capital, services or food.

The rules and standards we have, on the other hand, form a rather moth-eaten patchwork. They cover a few of the biggest energy decisions with policies like renewable portfolio standards for electricity. Some of those have been pretty successful at lowering emissions. But others, like CAFE and Energy Star, are deficient or perverse in a variety of ways. As a group, they leave out a number of decisions that are extremely consequential. Effort is by no means uniform – what is the marginal cost of a ton of carbon avoided by CAFE, relative to a state’s renewable energy portfolio? No one knows.

So, how is the patchwork working? Not too well, I’d say. Some, like the CAFE standard, have been diluted by loopholes and stalled due to lack of political will:

[Chart: BTS]

Others are making some local progress. The California LCFS, for example, has reduced the carbon intensity of fuels by 3.5% since its authorization by AB32 in 2006:

[Chart: ARB]

But the LCFS’ progress has been substantially undone by rising vehicle miles traveled (VMT). The only thing that put a real dent in driving was the financial crisis:

[Charts: AFDC, Caltrans]


In spite of this, the California patchwork has worked – it has reached its GHG reduction target:
[Chart: SF Chronicle]

This is almost entirely due to success in the electric power sector. Hopefully, there’s more to come, as renewables continue to ride down their learning curves. But how long can the power sector carry the full burden? Not long, I think.

The problem is that the electricity supply side is the “easy” part of the problem. There are relatively few technologies and actors to worry about. There’s a confluence of federal and state incentives. The technology landscape is favorable, with cost-effective emerging technologies.

The technology landscape for clean fuels is not easy. That’s why LCFS credits are trading at $195/ton while electricity cap & trade allowances are at $16/ton. The demand side has more flexibility, but it is technically diverse and organizationally fragmented (like the questions in my table above), making it harder to regulate. Problems are coupled: getting people out of their cars isn’t just a car problem; it’s a land use problem. Rebound effects abound: every LED light bulb is just begging to be left on all the time, because it’s so cheap to do so, and electricity subsidies make it even cheaper.

Command-and-control regulators face an unpleasant choice. They can push harder and harder in a few major areas, widening the performance gap – and the shadow price gap – between regulated and unregulated decisions. Or, they can proliferate regulations to cover more and more things, increasing administrative costs and making innovation harder.

As long as economic incentives scream that the price of carbon is zero, every performance standard, subsidy, or limit is fighting an uphill battle. People want to comply, but evolution selects for those who can figure out how to comply the least. Every idea that’s not covered by a standard faces a deep “valley of death” when it attempts to enter the market.

At present, we can’t let go of this patchwork of standards (wingwalker’s rule – don’t let go of one thing until you have hold of another). But in the long run, we need to start activating every possible tradeoff that improves emissions. That requires a uniform price signal that pervades the economy. Then rules and standards can backfill the remaining market failures, resulting in a system of regulation that’s more effective and less intrusive.

The end of the world is free!

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

In the New York Times, David Leonhardt ponders,

The Problem With Putting a Price on the End of the World

Economists have workable policy ideas for addressing climate change. But what if they’re politically impossible?

I wrote about this exact situation nearly ten years ago, when the Breakthrough Institute (and others) proposed energy R&D as an alternative to politically-infeasible carbon taxes. What has R&D accomplished since then? All kinds of wonderful things, but the implications for climate are … diddly squat.

The emerging climate technology delusion

Leonhardt observes that emissions pricing programs have already failed to win approval several times, which is true. However, I think the diagnosis is partly incorrect. Cap and trade programs like Waxman-Markey failed not because they imposed prices, but because they were incredibly complex and involved big property-rights giveaways. Anyone who actually understands the details of such a program is right to wonder whether anyone other than traders will profit from it.

In other cases, like the Washington carbon tax initiatives, I think the problem may be that potential backers required that it solve not only climate, but also environmental justice and income inequality more broadly. That’s an impossible task for a single policy.

Leonhardt proposes performance standards and a variety of other economically “second best” measures as alternatives.

The better bet seems to be an “all of the above” approach: Organize a climate movement around meaningful policies with a reasonable chance of near-term success, but don’t abandon the hope of carbon pricing.

At first blush, this seems reasonable to me. Performance standards and information policies have accomplished a lot over the years. Energy R&D is a good investment.

On second thought, these alternatives have already failed. The sum total of all such policies over the last few decades has been to reduce CO2 emissions intensity by 2% per year.

That’s slower than GDP growth, so emissions have actually risen. That’s far short of what we need to accomplish, and it’s not all attributable to policy. Even with twice the political will, and twice the progress, it wouldn’t be nearly enough.
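The arithmetic is simple; here’s a back-of-envelope version (the GDP growth rate is an assumption for illustration):

```python
# Emissions growth ~ GDP growth plus emissions-intensity growth.
gdp_growth       = 0.03    # ~3%/yr, assumed for illustration
intensity_growth = -0.02   # ~ -2%/yr intensity decline cited above
emissions_growth = (1 + gdp_growth) * (1 + intensity_growth) - 1
print(f"{emissions_growth:+.2%} per year")   # about +0.9%/yr: emissions still rising
```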

All of the above have some role to play, but without prices as a keystone economic signal, they’re fighting the tide. Moreover, together they have a large cost in administrative complexity, which gives opponents a legitimate reason to whine about bureaucracy and promotes regulatory capture. This makes it hard to innovate, advantages large incumbents, and contributes to worsening inequality.

Adapted from Tax Time

So, I think we need to do a lot more than not “abandon the hope” of carbon pricing. Every time we push a stopgap, second-best policy, we must also be building the basis for implementation of emissions prices. This means we have to get smarter about carbon pricing, and address the cognitive and educational gaps that explain failure so far. Leonhardt identifies one key point:

‘If we’re going to succeed on climate policy, it will be by giving people a vision of what’s in it for them.’

I think that vision has several parts.

  • One is multisolving – recognizing that clever climate policy can improve welfare now as well as in the future through health and equity cobenefits. This is tricky, because a practical policy can’t do everything directly; it just has to be compatible with doing everything.
  • Another is decentralization. The climate-economy system is too big to permit monolithic solution designs. We have to preserve diversity and put signals in place that allow it to evolve in beneficial directions.

Finally, emissions pricing has to be more than a vision – it has to be designed so that it’s actually good for the median voter:

As Nordhaus acknowledged in his speech, curbing dirty energy by raising its price “may be good for nature, but it’s not actually all that attractive to voters to reduce their income.”

Emissions pricing doesn’t have to be harmful to most voters, even neglecting cobenefits, as long as green taxes include equitable rebates, revenue finances good projects, and green sectors have high labor intensity. (The median voter has to understand this as well.)
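A toy fee-and-dividend calculation shows why (made-up household numbers, chosen only to illustrate the skew, not data):

```python
import numpy as np

# Five example households, with emissions skewed toward the top.
emissions = np.array([5, 8, 10, 14, 30])   # tCO2 per household per year (illustrative)
tax = 50                                   # $/tCO2, assumed
payments = emissions * tax
dividend = payments.mean()                 # revenue returned equally per household
print(dividend - payments)                 # [ 420.  270.  170.  -30. -830.]
# The median household (10 tCO2) nets +$170; only the heaviest emitters pay on net.
```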

Personally, I’m frustrated by decades of excuses for ineffective, complicated, inequitable policies. I don’t know how to put it in terms that don’t trigger cognitive dissonance, but I think there’s a question that needs to be asked over and over, until it sinks in:

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Why should emitting greenhouse gases be free, when it contributes to the destruction of so much we care about?

Forest Tipping in the Rockies

Research shows that some forests in the Rockies aren’t recovering from wildfires.

Evidence for declining forest resilience to wildfires under climate change

Abstract
Forest resilience to climate change is a global concern given the potential effects of increased disturbance activity, warming temperatures and increased moisture stress on plants. We used a multi‐regional dataset of 1485 sites across 52 wildfires from the US Rocky Mountains to ask if and how changing climate over the last several decades impacted post‐fire tree regeneration, a key indicator of forest resilience. Results highlight significant decreases in tree regeneration in the 21st century. Annual moisture deficits were significantly greater from 2000 to 2015 as compared to 1985–1999, suggesting increasingly unfavourable post‐fire growing conditions, corresponding to significantly lower seedling densities and increased regeneration failure. Dry forests that already occur at the edge of their climatic tolerance are most prone to conversion to non‐forests after wildfires. Major climate‐induced reduction in forest density and extent has important consequences for a myriad of ecosystem services now and in the future.

I think this is a simple example of a tipping point in action.

Forest Cover Tipping Points

Using an example from Hirota et al., in my toy model article above, here’s what happens:

At high precipitation, a fire (red arrow, top) takes the forest down to zero tree cover, but regrowth (green arrow, top) restores the forest. At lower precipitation, due to climate change, the forest remains stable, until fire destroys it (lower red arrow). Then regrowth can’t get past the newly-stable savanna state (lower green arrow). No amount of waiting will take the trees from 30% cover to the original 90% tree cover. (The driving forces might be more complex than precipitation and fire; things like insects, temperature, snowpack and evaporation also matter.)
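Here’s a cartoon version of that bistability in code (hand-picked equilibria, purely to illustrate the hysteresis; this is not the Hirota et al. model or the toy model from the linked article):

```python
def dCdt(C, wet):
    """Toy rate of change of tree cover C (fraction of area)."""
    if wet:
        # old, wetter climate: a single stable state near 90% cover
        return 2.0 * (0.9 - C) * (C + 0.02)
    # drier climate: stable savanna at ~30%, unstable threshold at ~55%, stable forest at ~90%
    return -8.0 * (C - 0.3) * (C - 0.55) * (C - 0.9)

def regrow(C0, wet, dt=0.01, years=500):
    C = C0
    for _ in range(int(years / dt)):
        C += dCdt(C, wet) * dt
    return round(C, 2)

print(regrow(0.02, wet=True))    # 0.9 - after a fire, the forest comes back
print(regrow(0.02, wet=False))   # 0.3 - regrowth stalls at the savanna state
print(regrow(0.95, wet=False))   # 0.9 - yet existing forest still looks perfectly stable
```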

The insidious thing about this is that you can’t tell that the forest state has become destabilized until the tipping event happens. That means the complexity of the system defeats any simple heuristic for managing the trees. The existence of healthy, full tree cover doesn’t imply that they’ll grow back to the same state after a catastrophe or clearcut.

Climate Bathtub Chartjunk

I just ran across Twist and Shout: Images and Graphs in Skeptical Climate Media, a compendium of cherry picking and other chartjunk abuses.

I think it misses a large class of (often willful) errors: ignoring the climate bathtub. Such charts typically plot CO2 emissions or concentration against temperature, with the implication that any lack of correlation indicates a problem with the science. But this combines a pattern-matching fallacy with the fallacy of the single cause. Sometimes these things make it into the literature, but most live on swampy skeptic sites.

An example, reportedly from John Christy, who should know better:

Notice how we’re supposed to make a visual correlation between emissions and temperature (even though two integrations separate them, and multiple forcings and noise influence temperature). Also notice how the nonzero minimum axis crossing for CO2 exaggerates the effect. That’s in addition to the usual tricks of inserting an artificial trend break at the 1998 El Nino and truncating the rest of history.
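For the record, here’s what the bathtub structure looks like in a few lines (a deliberately crude sketch with assumed parameters, not a real climate model): emissions accumulate into concentration, and temperature lags the resulting forcing, so temperature keeps rising even after emissions stop growing.

```python
import numpy as np

years = np.arange(1900, 2101)
E = np.where(years < 2000, 0.05 * (years - 1900), 5.0)  # emissions ramp up, then hold flat

C = 280.0 + 0.2 * np.cumsum(E)        # stock 1: atmospheric CO2 (toy units, no sinks)
F = 5.35 * np.log(C / 280.0)          # standard CO2 forcing approximation, W/m^2

T = np.zeros_like(F)
lam, tau = 0.8, 30.0                  # sensitivity (K per W/m^2) and thermal lag (yr), assumed
for i in range(1, len(T)):
    T[i] = T[i - 1] + (lam * F[i] - T[i - 1]) / tau   # stock 2: temperature chases forcing

print(round(T[years == 2000][0], 2), round(T[-1], 2))  # warming continues long after emissions flatten
```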

Silver Lining to the White House Climate Panel?

The White House is reportedly convening a panel to reexamine the scientific consensus on climate. How does that work, exactly? Are they going to publish thousands of new papers to shift the apparent balance of opinion in the scientific literature? And hasn’t analysis of consensus already been done to death, with a null result for the skeptics?

The problem is that there isn’t much for skeptics to work with. There aren’t any models that make useful predictions with very low climate sensitivity. In fact, skeptical predictions haven’t really panned out at all. Lindzen’s Adaptive Iris is still alive – sort of – but doesn’t result in a strong negative feedback. The BEST reanalysis didn’t refute previous temperature data. The surfacestations.org effort used crowdsourcing to reveal some serious weather station siting problems, which ultimately amounted to nothing.

And those are really the skeptics’ Greatest Hits. After that, it’s a rapid fall from errors to nuts. No, satellite temperatures don’t show a negative trend. Yes, Fourier and wavelet analyses are typically silly, but fortunately tend to refute themselves quickly. This list could grow long quickly, though skeptics are usually pretty reluctant to make testable models or predictions. That’s why even prominent outlets for climate skepticism have to resort to simple obfuscation.

So, if there’s a silver lining to the proposed panel, it’s that they’d have to put the alleged skeptics’ best foot forward, by collecting and identifying the best models, data and predictions. Then it would be readily apparent what a puny body of evidence that yielded.

 

Future Climate of the Bridgers

Ten years ago, I explored future climate analogs for my location in Montana:

When things really warm up, to +9 degrees F (not at all implausible in the long run), 16 of the top 20 analogs are in CO and UT, …

Looking at a lot of these future climate analogs on Google Earth, their common denominator appears to be rattlesnakes. I’m sure they’re all nice places in their own way, but I’m worried about my trees. I’ll continue to hope that my back-of-the-envelope analysis is wrong, but in the meantime I’m going to hedge by managing the forest to prepare for change.

I think there’s a lot more to worry about than trees. Fire, wildlife, orchids, snowpack, water availability, …

Recently I decided to take another look, partly inspired by the Bureau of Reclamation’s publication of downscaled data. This solves some of the bias correction issues I had in 2008. I grabbed the model output (36 runs from CMIP5) and observations for the 1/8 degree gridpoint containing Bridger Bowl:

Then I used Vensim to do a little data processing, converting the daily time series (which are extremely noisy weather) into 10-year moving averages (i.e., climate).
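For anyone wanting to replicate the smoothing step without Vensim, a pandas equivalent might look like this (a sketch only: the file name and column names are hypothetical stand-ins for the downscaled daily series):

```python
import pandas as pd

# Hypothetical daily downscaled series for the Bridger Bowl gridpoint.
df = pd.read_csv("bridger_bowl_downscaled.csv", parse_dates=["date"]).set_index("date")

# Convert noisy daily weather into climate: a ~10-year (3650-day) moving average.
climate = df["tavg"].rolling(3650, center=True, min_periods=1800).mean()
print(climate.dropna().tail())
```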

The Nordhaus Nobel

Congratulations to William Nordhaus for winning a Nobel in Economics for work on climate. However … I find that this award leaves me conflicted. I’m happy to see the field proclaim that it’s optimal to do something about climate change. But if this is the best economics has to offer, it’s also an indication of just how far divorced the field is from reality. (Or perhaps not; not all economists agree that we have reached a Neoclassical nirvana.)

Nordhaus was probably the first big name in economics to tackle the problem, and has continued to refine the work over more than two decades. At the same time, Nordhaus’ work has never recommended more than a modest effort to solve the climate problem. In the original DICE model, the optimal policy reduced emissions about 10%, with a tiny carbon tax of $10-15/tonC – a lot less than a buck a gallon on gasoline, for example. (Contrast this perspective with Stopping Climate Change Is Hopeless. Let’s Do It.)
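To see just how small that is, here’s the rough conversion to a gasoline price (my arithmetic; the ~8.9 kg CO2 per gallon figure is approximate):

```python
# Approximate translation of a $/tonC tax into $/gallon of gasoline.
co2_per_gal = 8.9                          # kg CO2 per gallon of gasoline, approximate
c_per_gal = co2_per_gal * 12 / 44 / 1000   # ~0.0024 metric tons of carbon per gallon
for tax in (10, 15):                       # $/tonC range cited above
    print(f"${tax}/tC is about ${tax * c_per_gal:.2f} per gallon")
# -> roughly 2 to 4 cents per gallon
```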

Nordhaus’ mild prescription for action emerges naturally from the model’s assumptions. Ask yourself if you agree with the following statements:

If you find yourself agreeing, congratulations – you’d make a successful economist! All of these and more were features of the original DICE and RICE models, and the ones that most influence the low optimal price of carbon survive to this day. That low price waters down real policies, like the US government’s social cost of carbon.

In any case, you’re not off the hook; even with these rosy assumptions Nordhaus finds that we still ought to have a real climate policy. Perhaps that is the greatest irony here – that even the most Neoclassical view of climate that economics has to offer still recommends action. The perspective that climate change doesn’t exist or doesn’t matter requires assumptions even more contorted than those above, in a mythical paradise where fairies and unicorns cavort with the invisible hand.

Limits to Growth Redux

Every couple of years, an article comes out reviewing the performance of the World3 model against data, or constructing an alternative, extended model based on World3. Here’s the latest:

Abstract
This study investigates the notion of limits to socioeconomic growth with a specific focus on the role of climate change and the declining quality of fossil fuel reserves. A new system dynamics model has been created. The World Energy Model (WEM) is based on the World3 model (The Limits to Growth, Meadows et al., 2004) with climate change and energy production replacing generic pollution and resources factors. WEM also tracks global population, food production and industrial output out to the year 2100. This paper presents a series of WEM’s projections; each of which represent broad sweeps of what the future may bring. All scenarios project that global industrial output will continue growing until 2100. Scenarios based on current energy trends lead to a 50% increase in the average cost of energy production and 2.4–2.7 °C of global warming by 2100. WEM projects that limiting global warming to 2 °C will reduce the industrial output growth rate by 0.1–0.2%. However, WEM also plots industrial decline by 2150 for cases of uncontrolled climate change or increased population growth. The general behaviour of WEM is far more stable than World3 but its results still support the call for a managed decline in society’s ecological footprint.

The new paper puts economic collapse about a century later than it occurred in Limits. But that presumes that one phrase in the abstract (“with climate change and energy production replacing generic pollution and resources factors”) is a legitimate simplification: GHGs are the only pollutant, and energy the only resource, that matters. Are we really past the point of concern over PCBs, heavy metals, etc., with all future chemical and genetic technologies free of risk? Well, maybe … (Note that climate integrated assessment models generally indulge in the same assumption.)

But quibbling over dates is to miss a key point of Limits to Growth: the model, and the book, are not about point prediction of collapse in year 20xx. The central message is about a persistent overshoot behavior mode in a system with long delays and finite boundaries, when driven by exponential growth.

We have deliberately omitted the vertical scales and we have made the horizontal time scale somewhat vague because we want to emphasize the general behavior modes of these computer outputs, not the numerical values, which are only approximately known.