No, Climate Change CAN’T Be Stopped by Turning Air Into Gasoline

My award for dumbest headline of the week goes to The Atlantic:

Climate Change Can Be Stopped by Turning Air Into Gasoline

A team of scientists from Harvard University and the company Carbon Engineering announced on Thursday that they have found a method to cheaply and directly pull carbon-dioxide pollution out of the atmosphere.

If their technique is successfully implemented at scale, it could transform how humanity thinks about the problem of climate change. It could give people a decisive new tool in the race against a warming planet, but could also unsettle the issue’s delicate politics, making it all the harder for society to adapt.

Their research seems almost to smuggle technologies out of the realm of science fiction and into the real. It suggests that people will soon be able to produce gasoline and jet fuel from little more than limestone, hydrogen, and air. It hints at the eventual construction of a vast, industrial-scale network of carbon scrubbers, capable of removing greenhouse gases directly from the atmosphere.

The underlying article that triggered the story has nothing to do with turning CO2 into gasoline. It’s purely about lower-cost direct capture of CO2 from the air (DAC). Even if we assume that the article’s right, and DAC is now cheaper, that in no way means “climate change can be stopped.” There are several huge problems with that notion:

First, if you capture CO2 from the air, make a liquid fuel out of it, and burn that in vehicles, you’re putting the CO2 back in the air. This doesn’t reduce CO2 in the atmosphere; it just reduces the growth rate of CO2 in the atmosphere by displacing the fossil carbon that would otherwise be used. With constant radiative forcing from elevated CO2, temperature will continue to rise for a long time. You might get around this by burning the fuel in stationary plants and sequestering the CO2, but there are huge problems with that as well. There are serious sink constraint problems, and lots of additional costs.
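
To make the accounting concrete, here's a minimal stock-and-flow sketch (my own illustration with round numbers, in carbon units, natural sinks omitted): the capture and combustion fluxes of recycled fuel cancel, so the atmospheric stock keeps climbing with whatever fossil emissions remain.

```python
def simulate(years=50, fossil=10.0, recycled=5.0, sequestered=0.0, stock=880.0):
    """Atmospheric carbon stock (GtC) under fossil burning plus recycled fuel.

    Illustrative round numbers only; natural sinks are omitted for clarity.
    """
    for _ in range(years):
        capture_for_fuel = recycled     # CO2 pulled from the air to make fuel...
        fuel_combustion = recycled      # ...goes right back when the fuel is burned
        net_flux = fossil + fuel_combustion - capture_for_fuel - sequestered
        stock += net_flux               # the stock integrates the net flux
    return stock

# With 10 GtC/yr of remaining fossil emissions, the stock climbs by 10 GtC/yr
# no matter how much CO2 is recycled into synthetic fuel.
print(simulate() - 880.0)   # -> 500.0 GtC added over 50 years
```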

Second, just how do you turn all that CO2 into fuel? The additional step is not free, nor is it conventional Fischer-Tropsch technology, which starts with syngas from coal or gas. You need relatively vast amounts of energy and hydrogen to do it on the necessary gigatons/year scale. One estimate puts the cost of such fuels at $3.80-9.20 a gallon (some of the costs overlap, but it’ll be more at the pump, after refining and marketing).

Third, who the heck is going to pay for all of this? If you want to just offset global emissions of ~40 gigatons CO2/year at the most optimistic cost of $100/ton, with free fuel conversion, that’s $4 trillion a year. If you’re going to cough up that kind of money, there are a lot of other things you could do first, but no one has an incentive to do it when the price of emissions is approximately zero.
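
The arithmetic behind that number is straightforward:

```python
# Back-of-envelope for the offset bill, using the numbers above:
emissions_Gt_per_yr = 40        # rough global CO2 emissions, Gt CO2/yr
cost_per_ton = 100              # optimistic direct air capture cost, $/ton CO2
annual_bill = emissions_Gt_per_yr * 1e9 * cost_per_ton
print(f"${annual_bill:,.0f} per year")   # -> $4,000,000,000,000, i.e. ~$4 trillion a year
```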

Ironically, the Carbon Engineering team seems to be aware of these problems:

Keith said it was important to still stop emitting carbon-dioxide pollution where feasible. “My view is we should stick to trying to cut emissions first. As a voter, my view is it’s cheaper not to emit a ton of [carbon dioxide] than it is to emit it and recapture it.”

I think there are two bottom lines here:

  1. Anyone who claims to have a silver bullet for a problem that pervades all human enterprise is probably selling snake oil.
  2. Without a substantial emissions price as the primary incentive guiding market decisions about carbon intensity, all large scale abatement efforts are a fantasy.

Fancy Stats and Silly Climate Contests

Climate skeptics seem to have a thing for contests and bets. For example, there’s Armstrong’s proposed bet, baiting Al Gore. Amusingly (for data nerds anyway), the bet, which pitted a null forecast against the taker’s chosen climate model, could have been beaten easily by either a low-order climate model or a less-naive null forecast. And, of course, it completely fails to understand that climate science is not about fitting a curve to the global temperature record.

Another instance of such foolishness recently came to my attention. It doesn’t have a name that I know of, but here’s the basic idea:

  • The author generates 1000 time series:

Each series has length 135: the same length as that of the most commonly studied series of global temperatures (which span 1880–2014). The 1000 series were generated as follows. First, 1000 random series were obtained (for more details, see below). Then, some of those series were randomly selected and had a trend added to them. Each added trend was either 1°C/century or −1°C/century. For comparison, a trend of 1°C/century is greater than the trend that is claimed for global temperatures.

  • The challenger pays $10 for the privilege of attempting to detect which of the 1000 series are perturbed by a trend, winning $100,000 for correctly identifying 90% or more.

The best challenger managed to identify 860 series, so the prize went unclaimed. But only two challenges are described, so I have to wonder how many serious attempts were made. Had I known about the contest in advance, I would not have tried it. I know plenty about fitting dynamic models to data, though abstract statistical methods aren’t really my thing. But I still have to ask myself some questions:

  • Is there really money to be made, or will the author simply abscond to the pub with my $10? For the sake of argument, let’s assume that the author really has $100k at stake.
  • Is it even possible to win? The author did not reveal the process used to generate the series in advance. That alone makes this potentially a sucker bet. If you’re in control of the noise and structure of the process, it’s easy to generate series that are impossible to reliably disentangle; the sketch after this list illustrates why. (Tellingly, the author later revealed the code to generate the series, but it appears there’s no code to successfully identify 90%!)
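
Here's the sort of thing I have in mind. This is my own toy version, not the contest's actual generating process: random-walk noise, with a 1°C/century trend added to half of the series, classified with a naive least-squares slope test.

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, length = 1000, 135
trend_per_year = 0.01                    # 1 deg C / century, as in the contest

# My stand-in for the (undisclosed) generating process: random-walk noise.
noise = rng.normal(0.0, 0.1, size=(n_series, length)).cumsum(axis=1)
has_trend = rng.random(n_series) < 0.5   # half the series get a trend added
sign = rng.choice([-1.0, 1.0], size=n_series)
t = np.arange(length)
series = noise + np.where(has_trend, sign, 0.0)[:, None] * trend_per_year * t

# Naive detector: fit a least-squares slope, call a series "trended" if the
# slope magnitude exceeds half the true trend.
slopes = np.polyfit(t, series.T, 1)[0]
guess = np.abs(slopes) > trend_per_year / 2
accuracy = np.mean(guess == has_trend)
print(f"correctly classified: {accuracy:.0%}")   # far short of the required 90%
```

For this particular toy process, even the best possible detector (the mean of the first differences) separates trend from no-trend by only about one standard deviation, so no method gets anywhere near 90%.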

For me, the statistical properties of the contest make it an obvious non-starter. But does it have any redeeming social value? For example, is it an interesting puzzle that has something to do with actual science? Sadly, no.

The hidden assumption of the contest is that climate science is about estimating the trend of the global temperature time series. Yes, people do that. But it’s a tiny fraction of climate science, and it’s a diagnostic of models and data, not a real model in itself. Science in general is not about such things. It’s about getting a good model, not a good fit. In some places the author talks about real physics, but ultimately seems clueless about this – he’s content with unphysical models:

Moreover, the Contest model was never asserted to be realistic.

Are ARIMA models truly appropriate for climatic time series? I do not have an opinion. There seem to be no persuasive arguments for or against using ARIMA models. Rather, studying such models for climatic series seems to be a worthy area of research.

Liljegren’s argument against ARIMA is that ARIMA models have a certain property that the climate system does not have. Specifically, for ARIMA time series, the variance becomes arbitrarily large, over long enough time, whereas for the climate system, the variance does not become arbitrarily large. It is easy to understand why Liljegren’s argument fails.

It is a common aphorism in statistics that “all models are wrong”. In other words, when we consider any statistical model, we will find something wrong with the model. Thus, when considering a model, the question is not whether the model is wrong—because the model is certain to be wrong. Rather, the question is whether the model is useful, for a particular application. This is a fundamental issue that is commonly taught to undergraduates in statistics. Yet Liljegren ignores it.

As an illustration, consider a straight line (with noise) as a model of global temperatures. Such a line will become arbitrarily high, over long enough time: e.g. higher than the temperature at the center of the sun. Global temperatures, however, will not become arbitrarily high. Hence, the model is wrong. And so—by an argument essentially the same as Liljegren’s—we should not use a straight line as a model of temperatures.

In fact, a straight line is commonly used for temperatures, because everyone understands that it is to be used only over a finite time (e.g. a few centuries). Over a finite time, the line cannot become arbitrarily high; so, the argument against using a straight line fails. Similarly, over a finite time, the variance of an ARIMA time series cannot become arbitrarily large; so, Liljegren’s argument fails.

Actually, no one in climate science uses straight lines to predict future temperatures, because forcing is rising, and therefore warming will accelerate. But that’s a minor quibble, compared to the real problem here. If your model is:

global temperature = f( time )

you’ve just thrown away 99.999% of the information available for studying the climate. (Ironically, the author’s entire point is that annual global temperatures don’t contain a lot of information.)

No matter how fancy your ARIMA model is, it knows nothing about conservation laws, robustness in extreme conditions, dimensional consistency, or real physical processes like heat transfer. In other words, it fails every reality check a dynamic modeler would normally apply, except the weakest – fit to data. Even its fit to data is near-meaningless, because it ignores all other series (forcings, ocean heat, precipitation, etc.) and has nothing to say about replication of spatial and seasonal patterns. That’s why this contest has almost nothing to do with actual climate science.
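
For contrast, here's the kind of minimal physical structure a dynamic modeler would start from: a zero-dimensional energy balance, where temperature responds to forcing through a heat capacity and a feedback parameter. It's a sketch with illustrative parameter values, not a calibrated climate model, but unlike a pure time-series fit it conserves energy and its parameters have physical meaning and units.

```python
def energy_balance(forcing, heat_capacity=8.0, feedback=1.3, dt=1.0):
    """Zero-dimensional energy balance: C * dT/dt = F - lambda * T.

    F is radiative forcing (W/m^2), T is the temperature anomaly (deg C),
    heat_capacity is in W*yr/m^2/K (roughly an atmosphere plus ocean mixed
    layer), and feedback lambda is in W/m^2/K. Parameter values are
    illustrative; the point is that the state obeys a conservation law and
    every parameter has physical units.
    """
    T, out = 0.0, []
    for F in forcing:
        T += (F - feedback * T) / heat_capacity * dt
        out.append(T)
    return out

# Example: a linear forcing ramp reaching ~4 W/m^2 after 140 years.
ramp = [4.0 * t / 140 for t in range(140)]
temps = energy_balance(ramp)
print(round(temps[-1], 2))   # warming lags the forcing because of thermal inertia
```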

This is also why data-driven machine learning approaches have a long way to go before they can handle general problems. It’s comparatively easy to learn to recognize the cats in a database of photos, because the data spans everything there is to know about the problem. That’s not true for systemic problems, where you need a web of data and structural information at multiple scales in order to understand the situation.

The CO2 record is no surprise

The 2016 record in CO2 concentration and increment is exactly what you’d expect for a system driven by growing emissions.

Here’s the data. The CO2 concentration at Mauna Loa has increased steadily since records began in 1958. Superimposed on the trend is a seasonal oscillation, which you can remove with a 12-month moving average:

In a noiseless system driven by increasing emissions, you’d expect every year to be a concentration record, and that’s nearly true here. Almost 99% of 12-month intervals exceed all previous records.

If you look at the year-on-year difference in monthly concentrations, you can see that not only is the concentration rising, but the rate of increase is increasing as well:

This first difference is noisier, but consistent. As a natural consequence, you’d expect a typical recent value to be higher than the average over any preceding interval.

In other words, a record concentration coinciding with a record increase is not unusual, dynamically or statistically. Until emissions decline significantly, news outlets might as well post a standing item to this effect.
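
If you want to convince yourself, here's a quick synthetic illustration (not the actual Mauna Loa data): a roughly quadratic ramp with modest noise sets concentration records almost every year, and its recent increments sit above the average of the earlier ones.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(60.0)                             # ~60 years, annual steps
# Synthetic stand-in for the Mauna Loa record (ppm): roughly quadratic growth
# (linearly rising emissions, integrated) plus modest interannual noise.
conc = 315 + 0.8 * t + 0.0125 * t**2 + rng.normal(0, 0.4, t.size)

# How often does a year exceed every year before it?
records = conc[1:] > np.maximum.accumulate(conc[:-1])
print(f"record-setting years: {records.mean():.0%}")   # most years set a new record

# With growth accelerating, recent increments also tend to exceed the
# average of all the increments that came before.
inc = np.diff(conc)
print(round(inc[-1], 2), round(inc[:-1].mean(), 2))
```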

The CO2 concentration trajectory is, incidentally, closer to parabolic than to exponential. That’s because emissions have risen more or less linearly in recent decades:

[Chart: CO2 emissions, GtC/yr]

CO2 concentration (roughly) integrates emissions, so if emissions = c1*time, concentration = c2*time^2 is expected. The cause for concern here is that a peak in the rate of increase has occurred at a time when emissions have been roughly flat for a few years, suggesting that saturation of natural sinks may be to blame. I think it’s premature to draw that conclusion, given the level of noise in the system. But sooner or later our luck will run out, so reducing emissions is as important as ever.
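
A quick numerical sketch of that integration (illustrative round numbers, constant airborne fraction):

```python
# Why roughly parabolic: concentration integrates emissions. With emissions
# rising linearly and a constant airborne fraction, the integral of c1*t is
# a t^2 curve. Illustrative round numbers only:
ppm_per_GtC = 1 / 2.13          # conversion from airborne GtC to ppm
airborne_fraction = 0.5         # rough long-run share of emissions staying in the air
conc, emissions = 315.0, 2.5    # approximate 1958 starting points: ppm, GtC/yr
for _ in range(60):
    conc += airborne_fraction * emissions * ppm_per_GtC
    emissions += 0.12           # roughly linear emissions growth, GtC/yr per yr
print(round(conc, 1), round(emissions, 1))   # the path is ~quadratic because it integrates a ramp
```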

After emissions do peak, you’d expect CO2 difference records to become rare. However, for CO2 concentrations to stop setting records requires that emissions fall below natural uptake, which will take longer to achieve.

Does statistics trump physics?

My dissertation was a critique and reconstruction of William Nordhaus’ DICE model for climate-economy policy (plus a look at a few other models). I discovered a lot of issues, for example that having a carbon cycle that didn’t conserve carbon led to a low bias in CO2 projections, especially in high-emissions scenarios.

There was one sector I didn’t critique: the climate itself. That’s because Nordhaus used an established model, from climatologists Schneider & Thompson (1981). It turns out that I missed something important: Nordhaus reestimated the parameters of the model from time series temperature and forcing data.

Nordhaus’ estimation focused on a parameter representing the thermal inertia of the atmosphere/surface ocean system. The resulting value was about 3x higher than Schneider & Thompson’s physically-based parameter choice. That delays the effects of GHG emissions by about 15 years. Since the interest rate in the model is about 5%, that lag substantially diminishes the social cost of carbon and the incentive for mitigation.
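
A back-of-envelope calculation shows why the lag matters so much (my own rough arithmetic, assuming continuous discounting at the model's ~5% rate):

```python
import math

discount_rate = 0.05   # roughly the effective rate in the model
extra_lag = 15         # years of additional delay from the high-inertia estimate
print(round(math.exp(-discount_rate * extra_lag), 2))   # ~0.47: present value roughly halved
```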

[Figure: DICE climate sector – the climate subsystem of the DICE model, implemented in Vensim]

So … should an economist’s measurement of a property of the climate, from statistical methods, overrule a climatologist’s parameter choice, based on physics and direct observations of structure at other scales?

I think the answer could be yes, IF the statistics are strong and reconcilable with physics or the physics is weak and irreconcilable with observations. So, was that the case?


Climate and Competitiveness

Trump gets well-deserved criticism for denying having claimed that the Chinese invented climate change to make US manufacturing non-competitive.


The idea is absurd on its face. Climate change was proposed long before (or long after) China figured on the global economic landscape. There was only one lead author from China out of the 34 in the first IPCC Scientific Assessment. The entire climate literature is heavily dominated by the US and Europe.

But another big reason to doubt its veracity is that climate policy, like emissions pricing, would make Chinese manufacturing less competitive. In fact, at the time of the first assessment, China was the most carbon-intensive economy in the world, according to the World Bank:

[Chart: carbon intensity by country, World Bank data]

Today, China’s carbon intensity remains more than twice that of the US. That makes a carbon tax with a border adjustment an attractive policy for US competitiveness. What conspiracy theory makes it rational for China to promote that?

Models, data and hidden hockey sticks

NPR takes a harder look at the much-circulated xkcd temperature reconstruction cartoon.

[Image: xkcd cartoon header]

The criticism:

Epic Climate Cartoon Goes Viral, But It Has One Key Problem

[…]

As you scroll up and down the graphic, it looks like the temperature of Earth’s surface has stayed remarkably stable for 10,000 years. It sort of hovers around the same temperature for some 10,000 years … until — bam! The industrial revolution begins. We start producing large amounts of carbon dioxide. And things heat up way more quickly.

Now look a bit closer at the bottom of the graphic. See how all of a sudden, around 150 years ago, the dotted line depicting average Earth temperature changes to a solid line. Munroe makes this change because the data used to create the lines come from two very different sources.

The solid line comes from real data — from scientists actually measuring the average temperature of Earth’s surface. These measurements allow us to see temperature fluctuations that occur over a very short timescale — say, a few decades or so.

But the dotted line comes from computer models — from scientists reconstructing Earth’s surface temperature. This gives us very, very coarse information. It averages Earth’s temperature over hundreds of years. So we can see temperature fluctuations that occur only over longer periods of time, like a thousand years or so. Any upticks, spikes or dips that occur in shorter time frames get smoothed out.

So in a way the graphic is really comparing apples and oranges: measurements of the recent past versus reconstructions of more ancient times.

Here’s the bit in question:

[Image: the recent portion of the cartoon, where the dotted reconstruction becomes a solid instrumental line]

The fundamental point is well taken, that fruit are mixed here. The cartoon even warns of that:

[Image: the cartoon’s caveat about the limits of the data]

I can’t fault the technical critique, but I take issue with a couple aspects of the tone of the piece. It gives the impression that “real data” is somehow exalted and models are inferior, thereby missing the real issues. And it lends credence to the “sh!t happens” theory of climate, specifically that the paleoclimate record could be full of temperature “hockey sticks” like the one we’re in now.

There’s no such thing as pure, assumption-free “real data.” Measurement processes involve – gasp! – models. Even the lowly thermometer requires a model to be read, with the position of a mercury column converted to temperature via a calibrated scale, making various assumptions about the physics of thermal expansion, linearity, etc.

There are no infallible “scientists actually measuring the average temperature of Earth’s surface.” Earth is a really big place, measurements are sparse, and instruments and people make mistakes. Reducing station observations to a single temperature involves reconstruction, just as it does for longer term proxy records. (If you doubt this, check the methodology for the Berkeley Earth Surface Temperature.)

Data combined with a model gives a better measurement than the raw data alone. That’s why a GPS unit combines measurements from satellites with a model of the device’s motion and noise processes to estimate position with greater accuracy than any single data point can provide.
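
The GPS example is a textbook state-estimation problem. Here's a minimal one-dimensional Kalman filter sketch (generic, not any particular receiver's algorithm) showing how blending a simple motion model with noisy fixes beats the raw measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# True position drifts slowly; the measurements (think raw position fixes)
# are noisy. q and r are the process and measurement variances.
n, q, r = 200, 0.01, 4.0
truth = np.cumsum(rng.normal(0, np.sqrt(q), n))
z = truth + rng.normal(0, np.sqrt(r), n)

# Scalar Kalman filter: predict with the random-walk model, correct with data.
x, P, est = 0.0, 1.0, []
for zk in z:
    P += q                   # predict: the model says the position drifts a little
    K = P / (P + r)          # gain: how much to trust the new measurement
    x += K * (zk - x)        # correct: blend the prediction and the measurement
    P *= 1 - K
    est.append(x)

est = np.array(est)
print("raw measurement RMSE:", round(float(np.sqrt(np.mean((z - truth) ** 2))), 2))
print("model + data RMSE:   ", round(float(np.sqrt(np.mean((est - truth) ** 2))), 2))
```

The filtered error comes out well below the raw measurement error, which is the GPS point in miniature.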

In fact, there are three sources here:

  1. recent global temperature, reconstructed from land and sea measurements with high resolution in time and space (the solid line)
  2. long term temperature, reconstructed from low resolution proxies (the top dotted line)
  3. projections from models that translate future emissions scenarios into temperature

If you take the recent, instrumental global temperature record as the gold standard, there are then two consistency questions of interest. Does the smoothing in the long term paleo record hide previous hockey sticks? Are the models accurate prognosticators of the future?

On the first point, the median temporal resolution of the records contributing to the Marcott 11,300-year reconstruction is 120 years. So, a century-scale temperature spike would be attenuated by a factor of 2. There is then some reason to think that missing high-frequency variation makes the paleo record look different. But there are also good reasons to think that this is not terribly important. Marcott et al. address this:

Our results indicate that global mean temperature for the decade 2000–2009 (34) has not yet exceeded the warmest temperatures of the early Holocene (5000 to 10,000 yr B.P.). These temperatures are, however, warmer than 82% of the Holocene distribution as represented by the Standard 5×5 stack, or 72% after making plausible corrections for inherent smoothing of the high frequencies in the stack. In contrast, the decadal mean global temperature of the early 20th century (1900–1909) was cooler than >95% of the Holocene distribution under both the Standard 5×5 and high-frequency corrected scenarios. Global temperature, therefore, has risen from near the coldest to the warmest levels of the Holocene within the past century, reversing the long-term cooling trend that began ~5000 yr B.P.
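
As a rough check on that factor-of-2 attenuation, here's a toy illustration (my own, treating a century-scale spike as a 100-year half-sine pulse and the proxy resolution as a 120-year moving average; the real reconstruction's smoothing is more complicated):

```python
import numpy as np

years = np.arange(1000)
# A 1-degree, century-long warm spike, idealized as a half-sine pulse.
spike = np.zeros(years.size)
spike[450:550] = np.sin(np.pi * np.arange(100) / 100)

# Crude stand-in for ~120-year proxy resolution: a centered moving average.
window = np.ones(120) / 120
smoothed = np.convolve(spike, window, mode="same")

print(round(spike.max(), 2), round(smoothed.max(), 2))   # the peak drops to roughly half
```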

Even if there were hockey sticks in the past, that’s not evidence for a natural origin for today’s warming. We know little about paleo forcings, so it would be hard to discern the origin of those variations. One might ask, if they are happening now, why can’t we observe them? Similarly, evidence for higher natural variability is evidence for less damping of the climate system, which favors higher climate sensitivity.

Finally, the question of the validity of model projections is too big to tackle, but I should point out that the distinction between a model that generates future projections and a model that assimilates historic measurements is not as great as one might think. Obviously the future hasn’t happened yet, so future projections are subject to an additional source of uncertainty: we don’t know all the inputs (future solar output, volcanic eruptions, etc.), whereas in the past those have been realized, even if we didn’t measure them. Also, models that project the future may face somewhat different challenges (like getting atmospheric physics right) than data-driven models (which might focus more on statistical methods). But projection models and data-assimilation models have one thing in common: there’s no way to be sure that the model structure is right. In one case, it’s because the future hasn’t happened yet, and in the other because there’s no oracle to reveal the truth about what did happen.

So, does the “one key problem” with the cartoon invalidate the point, that something abrupt and unprecedented in the historical record is underway or about to happen? Not likely.

Missing the point about efficiency rebounds … again

Breakthrough’s Nordhaus and Shellenberger (N&S) spot a bit of open-loop thinking about LED lighting:

ON Tuesday, the Royal Swedish Academy of Sciences awarded the 2014 Nobel Prize in Physics to three researchers whose work contributed to the development of a radically more efficient form of lighting known as light-emitting diodes, or LEDs.

In announcing the award, the academy said, “Replacing light bulbs and fluorescent tubes with LEDs will lead to a drastic reduction of electricity requirements for lighting.” The president of the Institute of Physics noted: “With 20 percent of the world’s electricity used for lighting, it’s been calculated that optimal use of LED lighting could reduce this to 4 percent.”

The problem of course is that lighting energy use would fall from 20% to 4% only if there’s no feedback, so that LEDs replace incandescents 1 for 1 (and of course the multiplier can’t be that big, because CFLs and other efficient technologies already supply a lot of light).

N&S go on to argue:

But it would be a mistake to assume that LEDs will significantly reduce overall energy consumption.

Why? Because rebound effects will eat up the efficiency gains:

“The growing evidence that low-cost efficiency often leads to faster energy growth was recently considered by both the Intergovernmental Panel on Climate Change and the International Energy Agency.”

“The I.E.A. and I.P.C.C. estimate that the rebound could be over 50 percent globally.”

Notice the sleight-of-hand: the first statement implies a rebound effect greater than 100%, while the evidence they’re citing describes a rebound of 50%, i.e. 50% of the efficiency gain is preserved, which seems pretty significant.
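
To put numbers on it, using the figures quoted above (lighting at ~20% of electricity, a technical potential of 4%, and a 50% rebound):

```python
lighting_share = 0.20     # share of electricity used for lighting (from the quote)
potential_share = 0.04    # share if the efficiency gain were fully realized
rebound = 0.5             # the IEA/IPCC-scale global rebound cited above

technical_saving = lighting_share - potential_share        # 16 points of electricity
realized_saving = technical_saving * (1 - rebound)         # 8 points survive the rebound
print(f"lighting ends up at ~{lighting_share - realized_saving:.0%} of electricity")
```

That's still a big saving, which is the point: a 50% rebound is not the same as futility.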

Presumably the real evidence they have in mind is http://iopscience.iop.org/0022-3727/43/35/354001 – authors Tsao & Saunders are Breakthrough associates. Saunders describes a 100% rebound for lighting here http://thebreakthrough.org/index.php/programs/energy-and-climate/understanding-energy-efficiency-rebound-interview-with-harry-saunders

Now the big non sequitur:

But LED and other ultraefficient lighting technologies are unlikely to reduce global energy consumption or reduce carbon emissions. If we are to make a serious dent in carbon emissions, there is no escaping the need to shift to cleaner sources of energy.

Let’s assume the premise is true – that the lighting rebound effect is 100% or more. That implies that lighting use is highly price elastic, which in turn means that an emissions price like a carbon tax will have a strong influence on lighting energy. Therefore pricing can play a major role in reducing emissions. It’s probably still true that a shift to clean energy is unavoidable, but it’s not an exclusive remedy, and a stronger rebound effect actually weakens the argument for clean sources.

Their own colleagues point this out:

In fact, our paper shows that, for the two 2030 scenarios (with and without solid-state lighting), a mere 12% increase in real electricity prices would result in a net decline in electricity-for-lighting consumption.

What should the real takeaway be?

  • Subsidizing lighting efficiency is ineffective, and possibly even counterproductive.
  • Subsidizing clean energy lowers the cost of delivering lighting and other services, and therefore will also be offset by rebound effects.
  • Emissions pricing is a win-win, because it encourages efficiency, counteracts rebound effects and promotes substitution of clean sources.

Reflections on Virgin Earth

Colleagues just pointed out the Virgin Earth Challenge, “a US$25 million prize for an environmentally sustainable and economically viable way to remove greenhouse gases from the atmosphere.”

John Sterman writes:

I think it inevitable that we will see more and more interest in CO2 removal. And IF it can be done without undermining mitigation I’d be all for it. I do like biochar as a possibility; though I am very skeptical of direct air capture and CCS. But the IF in the prior sentence is clearly not true: if there were effective removal technology it would create moral hazard leading to less mitigation and more emissions.

Even more interesting, direct air capture is not thermodynamically favored; needs lots of energy. All the finalists claim that they will use renewable energy or “waste” heat from other processes to power their removal technology, but how about using those renewable sources and waste heat to directly offset fossil fuels and reduce emissions instead of using them to power less efficient removal processes? Clearly, any wind/solar/geothermal that is used to power a removal technology could have been used directly to reduce fossil emissions, and will be cheaper and offset more net emissions. Same for waste heat unless the waste heat is too low temp to be used to offset fossil fuels. Result: these capture schemes may increase net CO2 flux into the atmosphere.

Every business knows it’s always better to prevent the creation of a defect than to correct it after the fact. No responsible firm would say “our products are killing the customers; we know how to prevent that, but we think our money is best spent on settling lawsuits with their heirs.” (Oh: GM did exactly that, and look how it is damaging them). So why is it ok for people to say “fossil fuel use is killing us; we know how to prevent that, but we’ve decided to spend even more money to try to clean up the mess after the pollution is already in the air”?

To me, many of these schemes reflect a serious lack of systems thinking, and the desire for a technical solution that allows us to keep living the way we are living without any change in our behavior. Can’t work.

I agree with John, and I think there are some additional gaps in systemic thinking about these technologies. Here are some quick reflections, in pictures.

[Diagram: emitting and capturing]
A basic point for any system is that you can lower the level of a stock (all else equal) by reducing the inflow or increasing the outflow. So the idea of capturing CO2 is not totally bonkers. In fact, it lets you do at least one thing that you can’t do by reducing emissions. When emissions fall to 0, there’s no leverage to reduce CO2 in the atmosphere further. But capture could actively draw down the CO2 stock. However, we are very far from 0 emissions, and this is harder than it seems:

[Diagram: air capture pushback]
Natural sinks have been graciously absorbing roughly half of our CO2 emissions for a long time. If we reduce emissions dramatically, and begin capturing, nature will be happy to give us back that CO2, ton for ton. So, the capture problem is actually twice as big as you’d think from looking at the excess CO2 in the atmosphere.
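
Here's a minimal two-box sketch of that pushback (my own illustration with made-up rate constants and round initial excesses, not a calibrated carbon cycle):

```python
def drawdown(atm_excess=300.0, sink_excess=300.0, capture=10.0, k=0.02):
    """Excess carbon (GtC) in the atmosphere and in fast natural sinks.

    The exchange flux relaxes the two boxes toward each other (rate k/yr),
    so pulling carbon out of the air also pulls stored carbon back out of
    the sinks. All numbers are illustrative, not a calibrated carbon cycle.
    """
    removed = 0.0
    while atm_excess > 0:
        exchange = k * (sink_excess - atm_excess)   # positive = outgassing to the air
        atm_excess += exchange - capture
        sink_excess -= exchange
        removed += capture
    return removed, sink_excess

removed, sink_left = drawdown()
print(round(removed), round(sink_left))
# Zeroing out a 300 GtC atmospheric excess takes well over 300 GtC of removal,
# and the carbon still stored in the sinks keeps leaking back out afterwards,
# so the full job approaches the combined excess: roughly twice the visible one.
```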

Currently, there’s also a problem of scale. Emissions are something like two orders of magnitude larger than potential markets for CO2, so there’s a looong way to go. And capture doesn’t scale like a service running on Amazon Elastic Cloud servers; it’s bricks and mortar.

[Diagram: emissions vs. capture scale]
And where does that little cloud go, anyway? Several proposals gloss over this, as in:

The process involves a chemical solution (that naturally absorbs CO2) being brought into contact with the air. This solution, now containing the captured CO2, is sent through a regeneration cycle which simultaneously extracts the CO2 as a high-pressure pipeline-quality product (ready to be put to numerous commercial uses) …

The biggest commercial uses I know of are beverage carbonation and enhanced oil recovery (EOR). Consider the beverage system:

[Diagram: beverage CO2]
CO2 sequestered in beverages doesn’t stay there very long! You’d have to start stockpiling vast quantities of Coke in salt mines to accumulate a significant quantity. This reminds me of Nike’s carbon-sucking golf ball. EOR is just as bad, because you put CO2 down a hole (hopefully it stays there), and oil and gas come back up, which are then burned … emitting more CO2. Fortunately the biochar solutions do not suffer so much from this problem.

Next up, delays and moral hazard:

[Diagram: CO2 moral hazard]
This is a cartoonish view of the control system driving mitigation and capture effort. The good news is that air capture gives us another negative loop (blue, top) by which we can reduce CO2 in the atmosphere. That’s good, especially if we mismanage the green loop. The moral hazard side effect is that the mere act of going through the motions of capture R&D reduces the perceived scale of the climate problem (red link), and therefore reduces mitigation, which actually makes the problem harder to solve.

Capture also competes with mitigation for resources, as in John’s process heat example:

[Diagram: process heat]

It’s even worse than that, because a lot of mitigation efforts have fairly rapid effects on emissions. There are certainly long-lived aspects of energy and infrastructure that must be considered, but behavior can change a lot of emissions quickly and with off-the-shelf technology. The delay between air capture R&D and actual capturing, on the other hand, is bound to be fairly long, because it’s in its infancy, and has to make it through multiple discover/develop/deploy hurdles.

One of those hurdles is cost. Why would anyone bother to pay for air capture, especially in cases where it’s a sure loser in terms of thermodynamics and capital costs? Altruism is not a likely candidate, so it’ll take a policy driver. There are essentially two choices: standards and emissions pricing.

A standard might mandate (as the EPA and California have) that new power plants above a certain emissions intensity must employ some kind of offsetting capture. If coal wants to stay in business, it has to ante up. The silly thing about this, apart from inevitable complexity, is that any technology that meets the standard without capture, like combined cycle gas electricity currently, pays 0 for its emissions, even though they too are harmful.

Similarly, you could place a subsidy or bounty on tons of CO2 captured. That would be perverse, because taxpayers would then have to fund capture – not likely a popular measure. The obvious alternative would be to price emissions in general – positive for emissions, negative for capture. Then all sources and sinks would be on a level playing field. That’s the way to go, but of course we ought to do it now, so that mitigation starts working, and air capture joins in later if and when it’s a viable competitor.

I think it’s fine if people work on carbon capture and sequestration, as long as they don’t pretend that it’s anywhere near a plausible scale, or even remotely possible without comprehensive changes in incentives. I won’t spend my own time on a speculative, low-leverage policy when there are more effective, immediate and cheaper mitigation alternatives. And I’ll certainly never advise anyone to pursue a geoengineered world, any more than I’d advise them to keep smoking but invest in cancer research.


Climate Interactive – #12 climate think tank

Climate Interactive is #12 (out of 210) in the International Center for Climate Governance’s Standardized Ranking of climate think tanks (by per capita productivity):

  1. Woods Hole Research Center (WHRC)
  2. Basque Centre for Climate Change (BC3)
  3. Centre for European Policy Studies (CEPS)*
  4. Centre for European Economic Research (ZEW)*
  5. International Institute for Applied Systems Analysis (IIASA)
  6. Worldwatch Institute
  7. Fondazione Eni Enrico Mattei (FEEM)
  8. Resources for the Future (RFF)
  9. Mercator Research Institute on Global Commons and Climate Change (MCC)
  10. Centre International de Recherche sur l’Environnement et le Développement (CIRED)
  11. Institut Pierre Simon Laplace (IPSL)
  12. Climate Interactive
  13. The Climate Institute
  14. Buildings Performance Institute Europe (BPIE)
  15. International Institute for Environment and Development (IIED)
  16. Center for Climate and Energy Solutions (C2ES)
  17. Global Climate Forum (GCF)
  18. Potsdam Institute for Climate Impact Research (PIK)
  19. Sandbag Climate Campaign
  20. Civic Exchange

That’s some pretty illustrious company! Congratulations to all at CI.