Nature Reverses on Limits

Last week Nature editorialized,

Are there limits to economic growth? It’s time to call time on a 50-year argument

Fifty years ago this month, the System Dynamics group at the Massachusetts Institute of Technology in Cambridge had a stark message for the world: continued economic and population growth would deplete Earth’s resources and lead to global economic collapse by 2070. This finding was from their 200-page book The Limits to Growth, one of the first modelling studies to forecast the environmental and social impacts of industrialization.

For its time, this was a shocking forecast, and it did not go down well. Nature called the study “another whiff of doomsday” (see Nature 236, 47–49; 1972). It was near-heresy, even in research circles, to suggest that some of the foundations of industrial civilization — mining coal, making steel, drilling for oil and spraying crops with fertilizers — might cause lasting damage. Research leaders accepted that industry pollutes air and water, but considered such damage reversible. Those trained in a pre-computing age were also sceptical of modelling, and advocated that technology would come to the planet’s rescue. Zoologist Solly Zuckerman, a former chief scientific adviser to the UK government, said: “Whatever computers may say about the future, there is nothing in the past which gives any credence whatever to the view that human ingenuity cannot in time circumvent material human difficulties.”

“Another Whiff of Doomsday” (unpaywalled: Nature whiff of doomsday 236047a0.pdf) was likely penned by Nature editor John Maddox, who wrote in his 1972 book, The Doomsday Syndrome,

“Tiny though the earth may appear from the moon, it is in reality an enormous object. The atmosphere of the earth alone weighs more than 5,000 million million tons, more than a million tons of air for each human being now alive. The water on the surface of the earth weighs more than 300 times as much – in other words, each living person’s share of the water would just about fill a cube half a mile in each direction… It is not entirely out of the question that human intervention could at some stage bring changes, but for the time being the vast scale on which the earth is built should be a great comfort. In other words, the analogy of space-ship earth is probably not yet applicable to the real world. Human activity, spectacular though it may be, is still dwarfed by the human environment.”

Reciting the scale of earth’s resources hasn’t held up well as a counterargument to Limits, for the reason given by Forrester and Meadows et al. at the time: exponential growth approaches any finite limit in a relatively small number of doublings. (Even a resource stock 1,000 times current annual consumption is exhausted after roughly ten doublings of demand, since 2^10 ≈ 1024.) The Nature editors were clearly aware of this back in ’72, but ignored its implications.

Instead, they subscribed to a “smooth approach” view, in which “a kind of restraint” limits population all by itself:

There are a lot of problems with this reasoning, not least of which is that economic activity is growing faster than population, yet there is no historic analog of the demographic transition for economies. However, I think the most fundamental problem with the editors’ mental model is that it’s effectively first order. Population is the only stock of interest; to the extent that they mention resources and pollution, it is only to propose that prices and preferences will take care of them. There’s no consideration of the possibility of a laissez-faire demographic transition resulting in absolute levels of population and economic activity requiring resource withdrawals that deplete resources and saturate sinks, leading to eventual overshoot and collapse. I’m reminded of Jay Forrester’s frequent comment, to the effect of, “if you have a model, you’ll be the only person in the room who can speak for 20 minutes without self-contradiction.” The ’72 Nature editorial clearly suffers for lack of a model.
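To see what a second-order mental model adds, here’s a minimal toy in Python (my own sketch, emphatically not World3, with made-up parameters): population and a nonrenewable resource as coupled stocks. Even with scarcity throttling per-capita use, a lagged mortality feedback lets population overshoot and collapse – behavior a first-order, population-only model cannot produce:

    # Toy overshoot model: two stocks, population and a nonrenewable resource.
    # All parameters are illustrative, not calibrated values.
    DT = 0.25
    pop, resource = 1.0, 1000.0              # arbitrary units
    peak_t, peak_pop = 0.0, 0.0

    for step in range(int(600 / DT)):
        t = step * DT
        adequacy = max(resource / 1000.0, 0.01)  # fraction of initial resource remaining
        births = 0.03 * pop
        deaths = 0.02 * pop / adequacy           # mortality rises as the resource depletes
        use = pop * adequacy                     # scarcity throttles use, but only gradually
        pop = max(pop + (births - deaths) * DT, 0.0)
        resource = max(resource - use * DT, 0.0)
        if pop > peak_pop:
            peak_t, peak_pop = t, pop

    print("population peaks at t=%.0f (%.1fx initial), then declines" % (peak_t, peak_pop))

With these made-up numbers, population keeps growing long after the resource has begun an irreversible slide, peaks, and then collapses – the dynamics the ’72 editors’ first-order reasoning ruled out by construction.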

While the ’22 editorial at last acknowledges the existence of the problem, its prescription is “more research.”

Researchers must try to resolve a dispute on the best way to use and care for Earth’s resources.

But the debates haven’t stopped. Although there’s now a consensus that human activities have irreversible environmental effects, researchers disagree on the solutions — especially if that involves curbing economic growth. That disagreement is impeding action. It’s time for researchers to end their debate. The world needs them to focus on the greater goals of stopping catastrophic environmental destruction and improving well-being.

… green-growth and post-growth scientists need to see the bigger picture. Right now, both are articulating different visions to policymakers, and there is a risk this will delay action. In 1972, there was still time to debate, and less urgency to act. Now, the world is running out of time.

If there’s disagreement about the solution, then the solution should be distributed, so that we can learn from different approaches. It’s easy to verify success by checking the equilibrium conditions for sources and sinks: as long as they’re in decline, policies need to adjust. However, I don’t think lack of agreement about the solution is the real problem.

The real problem is that the research “consensus that human activities have irreversible environmental effects” has no counterpart in the political and economic spheres. Neither green-growth nor degrowth has de facto support. This is not a problem that will be solved by more environmental or economic research.

Escalator Solutions

As promised, here’s my solution to the escalator problem … several, actually.

Before getting into the models, a point about simulation vs. analytic solutions. You can solve this problem on pencil and paper with simple algebra. This has some advantages. First, you can be completely data free, by using symbols exclusively. You don’t need to know the height of the stair or a person’s climbing speed, because you can call these Hs and Vc and solve the problem for all possible values. A simulation, by contrast, needs at least notional values for these things. Second, you may be able to draw general conclusions about the solution from its structure. For example, if it takes the form t = H/V, you know there’s some kind of singularity at V=0. With a simulation, if you don’t think to test V=0, you might miss an important special case. It’s easy to miss these special cases in a parameter space with many dimensions.
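For example, here’s that algebra (my notation: stair height H1, escalator height H2, climb speed Vc, escalator speed Ve, rest duration T, assuming the rest ends before you top the escalator). Comparing a rest on the stairs to a rest on the escalator:

    t_{\text{rest on stairs}} = \frac{H_1}{V_c} + T + \frac{H_2}{V_c + V_e}

    t_{\text{rest on escalator}} = \frac{H_1}{V_c} + T + \frac{H_2 - V_e T}{V_c + V_e}

    \Delta t = t_{\text{stairs}} - t_{\text{escalator}} = \frac{V_e T}{V_c + V_e} > 0

Resting on the up escalator always wins, by the height the escalator carries you during the rest divided by your combined speed on it; the stair segments before and after cancel out entirely.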

On the other hand, if there are many dimensions, this may imply that the problem will be difficult or impossible to solve analytically, so simulation may be the only fallback. A simulation also makes it easier to play with the model interactively (e.g., Vensim’s Synthesim mode) and to incorporate features like model-data comparisons and optimization. The ability to play invites experimentation with parameter values you might not otherwise think of. Also, drawing a stock-flow diagram may allow you to access other forms of visual thinking, or analogies with structurally similar systems in different domains.

With that prelude, here’s how I conceived of the problem:

  • You’re in a building, at height=0 (feet in my model, but the particular unit doesn’t matter as long as you have and check units).
  • Stairs rise to height=100.
  • There’s an escalator from 100 to 200 ft.
  • Then stairs resume, to infinite height.
  • The escalator ascends at 1 ft/sec, and the climber climbs at 1 ft/sec whether on the stairs or the escalator.
  • At some point, the climber rests for 60sec, at which point their rate of climb is 0, but they continue to ascend if on the escalator.

Of course all the numbers can be changed on the fly, but these concepts at least have to exist.

I think of this as a problem of pure accumulation, with height as a stock. But it turned out that I still needed some feedback to determine where the climber was – on the stairs, or on the escalator:
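Here’s a minimal sketch of that logic in Python (my own rendition using the notional parameters above – the actual models are the Vensim/Ventity files linked below):

    # Escalator model sketch: height is a stock; the climb rate depends on
    # where the climber is, which is the feedback loop described above.
    DT = 0.25            # time step (sec)
    ESC_BOTTOM = 100.0   # escalator starts (ft)
    ESC_TOP = 200.0      # escalator ends (ft)
    V_CLIMB = 1.0        # climbing speed (ft/sec)
    V_ESC = 1.0          # escalator speed (ft/sec)
    REST_LEN = 60.0      # rest duration (sec)

    def time_to(target, rest_start):
        """Time to reach target height, resting during [rest_start, rest_start + REST_LEN)."""
        t, height = 0.0, 0.0
        while height < target:
            on_escalator = ESC_BOTTOM <= height < ESC_TOP  # feedback: position depends on the stock
            resting = rest_start <= t < rest_start + REST_LEN
            rate = (0.0 if resting else V_CLIMB) + (V_ESC if on_escalator else 0.0)
            height += rate * DT  # Euler integration of the height stock
            t += DT
        return t

    # Rest at t=10 (on the stairs) vs. t=120 (on the escalator):
    print(time_to(300.0, rest_start=10.0), time_to(300.0, rest_start=120.0))

With these parameters the stair-rester reaches 300 ft in about 310 seconds and the escalator-rester in about 280 – a 30-second advantage, matching the algebra above (VeT/(Vc+Ve) = 60/2).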

At first it struck me that this was “fake” feedback – an accounting artifact – and that it might go away with an alternate conception. Here’s my implementation of Pradeesh Kumar’s idea, from the SDS Discussion Group on Facebook, with the height to be climbed on the stairs and escalator as a stock, with an outflow as climbing is accomplished:

The logical loop is still there, and the rest of the accounting is more complex, so I think it’s inevitable.

Finally, I built the same model in Ventity, so I could use multiple entities to quickly store and replicate several scenarios:

Looking at the Ventity output, resting on the escalator is preferable:

While resting on the stairs, nothing happens. While resting on the escalator, you continue to make gains.

There’s an unstated assumption present in all the twitter answers I’ve seen: the escalator is the up escalator. I actually prefer to go up the down escalator, though it attracts weird looks. If you do that, resting on the escalator is catastrophic, because you lose ground that you previously gained.
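(In the sketch above, this is just V_ESC = -0.5 or similar – the magnitude must be less than V_CLIMB, or you never top the escalator – and a 60-second rest then costs 30 feet of previously gained height.)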

I suspect there are other interesting edge cases to explore.

The models:

Vensim (any version): Escalator 1.mdl

Vensim, alternate conception: Escalator 1 alt.mdl

Vensim Pro/DSS/Model Reader – subscripted for multiple experiments: escalator 2.mdl

Ventity: Escalator 1.zip

JJ Lauble has also created a version, posted at the Vensim forum. I haven’t had a chance to explore it yet, but it looks like he may have used Vensim to explore the algebraic solution, with the time axis as a way to scan the solution space with Synthesim overrides.

Modeling Chronic Wasting Disease

I’ve been too busy to post much lately, because I’ve been busy with projects in city energy planning, web interfaces, and chronic wasting disease (CWD) in deer, plus a lot of Vensim and Ventity testing.

I’m hoping to write a little more about CWD, because it’s very interesting (and very nasty). We’ve been very successful at blending Structured Decision Making (SDM) with SD modeling in Wisconsin’s 10-yr plan review. We’ve been able to use models live in a rather diverse stakeholder group, including non-modelers. The model has worked well as a shared thinking tool, triggering some really good discussions, without getting mired in black-box problems.

The video below is from an “under the hood” session that looked into the details of the model for an interested subset of participants, so it’s probably nerdier than the other, more policy-oriented discussions, but also, I hope, of greater interest to modelers.

I’ll have more to say about SD in CWD policy and the marriage of SD and SDM soon, I hope.

Escalator Problems

@stevenstrogatz reposts a clever, simple problem:

two people climb a staircase and then climb an escalator. One person rests a minute on the staircase and the other rests a minute on the escalator, but otherwise they climb stairs at the same rate. Who is faster or are they equally fast?

There’s also an airport-walkway version that adds special relativity as a twist.

It’s interesting to see the varied thought processes in the comments. Pencil and paper is often quicker and yields useful analytic insight, but these are both accumulation problems, and therefore good candidates for SD simulation.

How would you model this situation? (I’ll post my answer in a day or two.)

Mask Mandates and One Study Syndrome

The evidence base for Montana’s new order promoting parental opt-out from school mask mandates relies heavily on two extremely weak studies.

Montana Governor Gianforte just publicized a new DPHHS order requiring schools to provide a parental opt-out for mask requirements.

Underscoring the detrimental impact that universal masking may have on children, the rule cites a body of scientific literature that shows side effects and dangers from prolonged mask wearing.

The order purports to be evidence based. But is the evidence any good?

Mask Efficacy

The order cites:

The scientific literature is not conclusive on the extent of the impact of masking on reducing the spread of viral infections. The department understands that randomized control trials have not clearly demonstrated mask efficacy against respiratory viruses, and observational studies are inconclusive on whether mask use predicts lower infection rates, especially with respect to children.[1]

The supporting footnote is basically a dog’s breakfast,

[1] See, e.g., Guerra, D. and Guerra, D., Mask mandate and use efficacy for COVID-19 containment in US States, MedRX, Aug. 7, 2021, https://www.medrxiv.org/content/10.1101/2021.05.18.21257385v2 (“Randomized control trials have not clearly demonstrated mask efficacy against respiratory viruses, and observational studies conflict on whether mask use predicts lower infection rates.”). Compare CDC, Science Brief: Community Use of Cloth Masks to Control the Spread of SARS-CoV-2, last updated May 7, 2021, https://www.cdc.gov/coronavirus/2019-ncov/science/science-briefs/masking-science-sars-cov2.html, last visited Aug. 30, 2021 (mask wearing reduces new infections, citing studies) …

(more stuff of declining quality)

This is not an encouraging start; it’s blatant cherry picking. Guerra & Guerra is an observational statistical test of mask mandates. The statement DPHHS quotes, “Randomized control trials have not clearly demonstrated mask efficacy…” isn’t even part of the study; it’s merely an introductory remark in the abstract.

Much worse, G&G isn’t a “real” model. It’s just a cheap regression of growth rates against mask mandates, with almost no other controls. Specifically, it omits NPIs, weather, prior history of the epidemic in each state, and basically every other interesting covariate, except population density. It’s not even worth critiquing the bathtub statistics issues.

G&G finds no effects of mask mandates. But is that the whole story? No. Among the many covariates they omit is mask compliance. It turns out that matters, as you’d expect. From Leech et al. (one of many better studies DPHHS ignored):

Across these analyses, we find that an entire population wearing masks in public leads to a median reduction in the reproduction number R of 25.8%, with 95% of the medians between 22.2% and 30.9%. In our window of analysis, the median reduction in R associated with the wearing level observed in each region was 20.4% [2.0%, 23.3%]. We do not find evidence that mandating mask-wearing reduces transmission. Our results suggest that mask-wearing is strongly affected by factors other than mandates.

We establish the effectiveness of mass mask-wearing, and highlight that wearing data, not mandate data, are necessary to infer this effect.
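A toy calculation makes the omitted-variable problem concrete (simulated data and made-up coefficients in Python, loosely inspired by the Leech et al. effect size – not anyone’s actual analysis):

    # Toy illustration with simulated data: if mask *wearing* drives
    # transmission but is only weakly linked to *mandates*, regressing growth
    # on mandates finds almost nothing, while wearing shows a clear effect.
    import random
    random.seed(1)

    regions = []
    for _ in range(200):
        mandate = 1.0 if random.random() < 0.5 else 0.0
        # wearing varies widely regardless of mandates (assumed weak link)
        wearing = min(1.0, max(0.0, random.gauss(0.5 + 0.1 * mandate, 0.25)))
        r_eff = 1.3 * (1.0 - 0.26 * wearing) + random.gauss(0.0, 0.05)
        regions.append((mandate, wearing, r_eff))

    def ols_slope(xs, ys):
        # simple one-variable OLS slope
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

    print("R vs mandate:", round(ols_slope([m for m, _, _ in regions], [r for _, _, r in regions]), 3))
    print("R vs wearing:", round(ols_slope([w for _, w, _ in regions], [r for _, _, r in regions]), 3))

The mandate coefficient comes out tiny (mandates shift wearing only slightly here), while the wearing coefficient recovers the assumed effect – which is Leech’s point: you need wearing data, not mandate data.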

Meanwhile, the DPHHS downplays its second citation, the CDC Science Brief, which cites 65 separate papers, including a number of observational studies that are better than G&G. It concludes that masks work, based on multiple lines of evidence, including mechanistic studies, CFD simulations and laboratory experiments.

Verdict: Relying on a single underpowered, poorly designed regression to make sweeping conclusions about masks is poor practice. In effect, DPHHS has chosen the one earwax-flavored jellybean from a bag of more attractive choices.

Mask Safety

The department order goes on,

The department understands, however, that there is a body of literature, scientific as well as survey/anecdotal, on the negative health consequences that some individuals, especially some children, experience as a result of prolonged mask wearing.[2]

The footnote refers to Kisielinski et al. – again, a single study in a sea of evidence. At least this time it’s a meta-analysis. But was it done right? I decided to spot check.

K et al. tabulate a variety of claims conveniently in Fig. 2:

The first claim assessed is that masks reduce O2, so I followed those citations.

| Citation | Claim | Assessment/Notes |
| --- | --- | --- |
| Beder 2008 | Effect | Effect, but you can’t draw any causal conclusion because there’s no control group. |
| Butz 2005 | No effect | PhD thesis, not available for review. |
| Epstein 2020 | No effect | No effect (during exercise). |
| Fikenzer 2020 | Effect | Effect. |
| Georgi 2020 | Effect | Gray literature, not available for review. |
| Goh 2019 | No effect | No effect; RCT, n≈100 children. |
| Jagim 2018 | Effect | Not relevant – this concerns a mask designed for elevation training, i.e. deliberately impeding O2. |
| Kao 2004 | Effect | Effect. End-stage renal patients. |
| Kyung 2020 | Effect | Dead link. Flaky journal? COPD patients. |
| Liu 2020 | Effect | Small effect – <1% SpO2. Nonmedical conference paper, so dubious peer review. N=12. |
| Mo 2020 | No effect | No effect. Gray lit. COPD patients. |
| Person 2018 | No effect | No effect. 6-minute walking test. |
| Pifarre 2020 | Effect | Small effect. Tiny sample (n=8). Questionable control of order of test conditions. Exercise. |
| Porcari 2016 | Effect | Irrelevant – like Jagim, concerns an elevation training mask. |
| Rebmann 2013 | Effect | No effect. “There were no changes in nurses’ blood pressure, O2 levels, perceived comfort, perceived thermal comfort, or complaints of visual difficulties compared with baseline levels.” Also, no control group, as in Beder. |
| Roberge 2012 | No effect | No effect. N=20. |
| Roberge 2014 | No effect | No effect. N=22. Pregnancy. |
| Tong 2015 | Effect | Effect. Exercise during pregnancy. |

If there’s a pattern here, it’s lots of underpowered, small-sample studies with design defects. Moreover, there are some blatant errors in assessment of relevance (Jagim, Porcari) and inclusion of uncontrolled studies (Beder, Rebmann, maybe Pifarre). In other words, this is 30% rubbish, and the rubbish is all on the “effect” side of the scale.

If the authors did a poor job assessing the studies they included, I also have to wonder whether they did a bad screening job. That turns out to be hard to determine without more time. But a quick search does reveal that there has been an explosion of interest in the topic, with a number of new studies in high-quality journals with better control designs. Regrettably, sample sizes still tend to be small, but the results are generally not kind to the assertions in the health order:

Mapelli et al. 2021:

Conclusions: Protection masks are associated with significant but modest worsening of spirometry and cardiorespiratory parameters at rest and peak exercise. The effect is driven by a ventilation reduction due to an increased airflow resistance. However, since exercise ventilatory limitation is far from being reached, their use is safe even during maximal exercise, with a slight reduction in performance.

Chan, Li & Hirsch 2020:

In this small crossover study, wearing a 3-layer nonmedical face mask was not associated with a decline in oxygen saturation in older participants. Limitations included the exclusion of patients who were unable to wear a mask for medical reasons, investigation of 1 type of mask only, SpO2 measurements during minimal physical activity, and a small sample size. These results do not support claims that wearing nonmedical face masks in community settings is unsafe.

Lubrano et al. 2021:

This cohort study among infants and young children in Italy found that the use of facial masks was not associated with significant changes in SaO2 or PetCO2, including among children aged 24 months and younger.

Shein et al. 2021:

The risk of pathologic gas exchange impairment with cloth masks and surgical masks is near-zero in the general adult population.

A quick trip to PubMed or Google Scholar provides many more.

Verdict: a sloppy meta-analysis is garbage-in, garbage-out.

Bottom Line

Montana DPHHS has failed to verify its sources, ignores recent literature and therefore relies on far less than the best available science in the construction of its flawed order. Its sloppy work will fan the flames of culture-war conspiracies and endanger the health of Montanans.

I CAN HAS SYSTEM DYNAMICZ?

IM PRETTY SURE THIS IS THE FURST EVAH SYSTEM DYNAMICZ SIMULASHUN MODEL WRITTEN IN LOLCODE.

HAI 1.2
    VISIBLE "HAI, JWF!"
    
    OBTW
     ==========================================================================
     SYSTEM DYNAMICZ INVENTORY MODEL IN LOLCODE
     TOM FIDDAMAN, METASD.COM, 2021
     INSPIRED BY THE CLASSIC BEER GAME
     AND MODEL 3.10 OF MICHAEL GOODMAN'S 
     'STUDY NOTES IN SYSTEM DYNAMICS'
     ==========================================================================
    TLDR
    
    BTW FUNKTION 4 INTEGRATIN STOCKZ WITH NET FLOW INOUT
    HOW IZ I INTEGRATIN YR STOCK AN YR INOUT AN YR TIMESTEP
        FOUND YR SUM OF STOCK AN PRODUKT OF INOUT AN TIMESTEP
    IF U SAY SO
    
    BTW FUNKTION 4 CHARACTER PLOTZ
    HOW IZ I PLOTTIN YR X AN YR SYMBOL
        I HAS A STRING ITZ ""
        I HAS A COUNT ITZ 0
        IM IN YR XLOOP
            BOTH SAEM COUNT AN BIGGR OF COUNT AN X, O RLY?
                YA RLY, GTFO
                NO WAI, STRING R SMOOSH " " STRING MKAY
            OIC
            COUNT R SUM OF COUNT AN 1
        IM OUTTA YR XLOOP
        VISIBLE SMOOSH STRING SYMBOL MKAY
    IF U SAY SO
    
    BTW INISHUL TIME - DEKLARE SUM VARIABLZ AND INIT STOCKZ

    I HAS A INV ITZ 0.0         BTW INVENTORY (WIDGETS)
    I HAS A MAKIN               BTW PRODUCTION RATE (WIDGETS/WEEK)
    I HAS A SELLIN              BTW SALES RATE (WIDGETS/WEEK)
    I HAS A TIME ITZ 0.0        BTW LOL I WISH (WEEK)
    I HAS A TIMESTEP ITZ 1.0    BTW SIMULATION TIME STEP (WEEK)
    I HAS A ZEND ITZ 50.0       BTW FINAL TIME OF THE SIM (WEEK)
    I HAS A TARGET ITZ 20.0     BTW DESIRED INVENTORY (WIDGETS)
    I HAS A ADJTIME ITZ 4.0     BTW INVENTORY ADJUSTMENT TIME (WEEK)
    I HAS A ORDERIN             BTW ORDER RATE (WIDGETS/WEEK)
    I HAS A INIORDERS ITZ 10.0  BTW INITIAL ORDER RATE (WIDGETS/WEEK)
    I HAS A STEPTIME ITZ 30.0   BTW TIME OF STEP IN ORDERS (WEEK)
    I HAS A STEPSIZE ITZ 5.0    BTW SIZE OF STEP IN ORDERS (WIDGETS/WEEK)
    I HAS A INVADJ              BTW INVENTORY ADJUSTMENT NEEDED (WIDGETS)
    I HAS A WIP ITZ 0.0         BTW WORK IN PROGRESS INVENTORY (WIDGETS)
    I HAS A SHIPPIN             BTW DELIVERIES FROM WIP (WIDGETS/WEEK)
    I HAS A PRODTIME ITZ 4.0    BTW TIME TO PRODUCE (WEEK)
    
    VISIBLE "SHOWIN RESULTZ FOR PRODUKSHUN"
    
    IM IN YR SIMLOOP        BTW MAIN SIMULASHUN LOOP
        
        BTW CALCULATE RATES AND AUXILIARIES
        
        BTW STEP IN CUSTOMER ORDERS
        BOTH SAEM TIME AN BIGGR OF TIME AN STEPTIME, O RLY?
            YA RLY, ORDERIN R SUM OF INIORDERS AN STEPSIZE
            NO WAI, ORDERIN R INIORDERS
        OIC
        
        SELLIN R SMALLR OF ORDERIN AN QUOSHUNT OF INV AN TIMESTEP
        INVADJ R DIFF OF TARGET AN INV
        MAKIN R SUM OF SELLIN AN QUOSHUNT OF INVADJ AN ADJTIME
        MAKIN R BIGGR OF MAKIN AN 0.0
        SHIPPIN R QUOSHUNT OF WIP AN PRODTIME
        
        BTW PLOT
        VISIBLE SMOOSH TIME " " MAKIN MKAY
        BTW PRODUKT WITH SCALE FACTOR FOR SIZING
        I IZ PLOTTIN YR PRODUKT OF MAKIN AN 4.0 AN YR "+" MKAY
                
        BTW INTEGRATE STOCKS
        
        TIME R I IZ INTEGRATIN YR TIME AN YR 1.0 AN YR TIMESTEP MKAY
        INV R I IZ INTEGRATIN YR INV AN YR DIFF OF SHIPPIN AN SELLIN AN YR TIMESTEP MKAY
        WIP R I IZ INTEGRATIN YR WIP AN YR DIFF OF MAKIN AN SHIPPIN AN YR TIMESTEP MKAY
        
        BTW CHECK STOPPING CONDISHUN
        BOTH SAEM TIME AN BIGGR OF TIME AN SUM OF ZEND AN TIMESTEP, O RLY?
            YA RLY, GTFO
        OIC
        
    IM OUTTA YR SIMLOOP
    
    
KTHXBYE

YOU CAN RUN IT IN THE TUTORIALSPOINT ONLINE INTERPRETER, OR GET JUSTIN MEZA’S DESKTOP LCI.

SD INVENTORY LOLCODE.TXT

$lci main.lo
HAI, JWF!
SHOWIN RESULTZ FOR PRODUKSHUN
0.00 5.00
                    +
1.00 5.00
                    +
2.00 5.93
                        +
3.00 6.64
                           +
4.00 7.34
                              +
5.00 8.00
                                 +
6.00 8.62
                                   +
7.00 9.22
                                     +
8.00 9.78
                                        +
9.00 10.31
                                          +
10.00 10.82
                                            +
11.00 11.30
                                              +
12.00 11.75
                                                +
13.00 12.18
                                                 +
14.00 12.46
                                                  +
15.00 12.30
                                                  +
16.00 12.03
                                                 +
17.00 11.68
                                               +
18.00 11.29
                                              +
19.00 10.89
                                            +
20.00 10.51
                                           +
21.00 10.17
                                         +
22.00 9.89
                                        +
23.00 9.66
                                       +
24.00 9.49
                                      +
25.00 9.39
                                      +
26.00 9.35
                                      +
27.00 9.35
                                      +
28.00 9.40
                                      +
29.00 9.47
                                      +
30.00 14.56
                                                           +
31.00 15.91
                                                                +
32.00 14.12
                                                         +
33.00 14.07
                                                         +
34.00 14.45
                                                          +
35.00 14.73
                                                           +
36.00 15.01
                                                             +
37.00 15.27
                                                              +
38.00 15.51
                                                               +
39.00 15.75
                                                                +
40.00 15.97

I THINK THIS SHOULD BE A PART OF EVERY SYSTEM THINKERZ LITTERBOX TOOLBOX.

Limits and Markets

Almost fifty years ago, economists claimed that markets would save us from Limits to Growth. Here’s William Nordhaus, writing about World Dynamics in Measurement without Data (1973):

How’s that working out? I would argue, not well.

Certainly there are functional markets for commodities like oil and gas, but even then a substantial share of the resources is allocated by myopic regulators captive to industry interests.

But for practically everything else, the markets that would in theory allocate across resources, time and space simply don’t exist, even today.

Water markets haven’t prevented the decline of Lake Mead, and they’re resisted widely, including here in Bozeman.

Joseph Stiglitz explained in the WSJ:

A similar pattern could unfold again. But economic forces alone may not be able to fix the problems this time around. Societies as different as the U.S. and China face stiff political resistance to boosting water prices to encourage efficient use, particularly from farmers. …

This troubles some economists who used to be skeptical of the premise of “The Limits to Growth.” As a young economist 30 years ago, Joseph Stiglitz said flatly: “There is not a persuasive case to be made that we face a problem from the exhaustion of our resources in the short or medium run.”

Today, the Nobel laureate is concerned that oil is underpriced relative to the cost of carbon emissions, and that key resources such as water are often provided free. “In the absence of market signals, there’s no way the market will solve these problems,” he says. “How do we make people who have gotten something for free start paying for it? That’s really hard. If our patterns of living, our patterns of consumption are imitated, as others are striving to do, the world probably is not viable.”

What is the price of declining rainforests, reefs or insects? What would markets quote for killing a bird with neonicotinoids, or a wind turbine, or for your Italian songbird pan-fry? What do gravel pits pay for dust and noise emissions, and what will autonomous EVs pay for increased congestion? The answer is almost universally zero. Even things that have received much attention, like emissions of greenhouse gases and criteria air pollutants, are free in most places.

These public goods aren’t free because they’re abundant or unimportant. They’re free because there are no property rights for them, and people resist creating the market mechanisms needed. Everyone loves the free market, until it applies to them. This might be OK if other negative feedback mechanisms picked up the slack, but those clearly aren’t functioning sufficiently either.

Lytton Burning

By luck and a contorted Jet Stream, Montana more or less escaped the horrific heat that gripped the Northwest at the end of June. You probably heard, but this culminated in temperatures in Lytton BC breaking all-time records for Canada and the globe north of latitude 50 by huge margins. The next day, the town burned to the ground.

I wondered just how big this was, so when GHCN temperature records from KNMI became available, I pulled the data for a quick and dirty analysis. Here’s the daily Tmax for Lytton:

That’s about 3.5 standard deviations above the recent mean. Lytton’s records are short and fragmented, so I also pulled Kamloops (the closest station with a long record):

You can see how bizarre the recent event was, even in a long term context. In Kamloops, it’s a +4 standard deviation event, which means a likelihood of 1 in 16,000 if this were simply random. Even if you start adjusting for selection and correlations, it still looks exceedingly rare – perhaps a 1000-year event in a 70-year record.
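For reference, here’s the back-of-envelope tail arithmetic (my calculation, under the same naive i.i.d.-normal assumption):

    # Tail probability of a 4-sigma event under a normal assumption.
    import math

    z = 4.0
    one_sided = 0.5 * math.erfc(z / math.sqrt(2))          # P(Z > 4)
    print("one-sided: 1 in %.0f" % (1 / one_sided))        # ~1 in 31,600
    print("two-sided: 1 in %.0f" % (1 / (2 * one_sided)))  # ~1 in 15,800 - the "1 in 16,000"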

Clearly it’s not simply random. For one thing, there’s a pretty obvious long term trend in the Kamloops record. But a key question is, what will happen to the variance of temperature in the future? The simplest thermodynamic argument is that energy in partitions of a system has a Boltzmann distribution and therefore that variance should go up with the mean. However, feedback might alter this.

This paper argues that variance goes up:

Extreme summertime temperatures are a focal point for the impacts of climate change. Climate models driven by increasing CO2 emissions project increasing summertime temperature variability by the end of the 21st century. If credible, these increases imply that extreme summertime temperatures will become even more frequent than a simple shift in the contemporary probability distribution would suggest. Given the impacts of extreme temperatures on public health, food security, and the global economy, it is of great interest to understand whether the projections of increased temperature variance are credible. In this study, we use a theoretical model of the land surface to demonstrate that the large increases in summertime temperature variance projected by climate models are credible, predictable from first principles, and driven by the effects of warmer temperatures on evapotranspiration. We also find that the response of plants to increased CO2 and mean warming is important to the projections of increased temperature variability.

But Zeke Hausfather argues for stable variance:

summer variability, where extreme heat events are more of a concern, has been essentially flat. These results are similar to those found in a paper last fall by Huntingford et al published in the journal Nature. Huntingford and colleagues looked at both land and ocean temperature records and found no evidence of increasing variability. They also analyzed the outputs of global climate models, and reported that most climate models actually predict a slight decline in temperature variability over the next century as the world warms. The figure below, from Huntingford, shows the mean and spread of variability (in standard deviations) for the models used in the latest IPCC report (the CMIP5 models).

This is good news overall; increasing mean temperatures and variability together would lead to even more extreme heat events. But “good news” is relative, and the projected declines in variability are modest, so rising mean temperatures by the end of this century will still push the overall temperature distribution well outside of what society has experienced in the last 12,000 years.

If he’s right, stable variance implies that the mean temperature in climate scenarios is representative of what we’ll experience – nothing further to worry about from fattening tails. I hope this is true, but I also hope it takes a long time to find out, because I really don’t want to experience what Lytton just did.

Lake Mead and incentives

Since I wrote about Lake Mead ten years ago (1 2 3), things have not improved. It’s down to 1068 feet, holding fairly steady after a brief boost in the wet year 2011-12. The Reclamation outlook has it losing another 60 feet in the next two years.

The stabilization has a lot to do with successful conservation. In Phoenix, for example, water use is down even though population is up. Some of this is technology and habits, and some of it is banishment of “useless grass” and other wasteful practices. MJ describes water cops in Las Vegas:

Investigator Perry Kaye jammed the brakes of his government-issued vehicle to survey the offense. “Uh oh this doesn’t look too good. Let’s take a peek,” he said, exiting the car to handle what has become one of the most existential violations in drought-stricken Las Vegas—a faulty sprinkler.

“These sprinklers haven’t popped up properly, they are just oozing everywhere,” muttered Kaye. He has been policing water waste for the past 16 years, issuing countless fines in that time. “I had hoped I would’ve worked myself out of a job by now. But it looks like I will retire first.”

Enforcement undoubtedly helps, but it strikes me as a band-aid where a tourniquet is needed. While the city is out checking sprinklers, people are free to waste water in a hundred less-conspicuous ways. That’s because standards say “conserve” but the market says “consume” – water is still cheap. As long as that’s true, technology improvements are offset by rebound effects.

Often, cheap water is justified as an equity issue: the poor need low-cost water. But there’s nothing equitable about water rates. The symptom is in the behavior of the top users:

Total and per-capita water use in Southern Nevada has declined over the last decade, even as the region’s population has increased by 14%. But water use among the biggest water users — some of the valley’s wealthiest, most prominent residents — has held steady.

The top 100 residential water users serviced by the Las Vegas Valley Water District used more than 284 million gallons of water in 2018 — over 11 million gallons more than the top 100 users of 2008 consumed at the time, records show. …

Properties that made the top 100 “lists” — which the Henderson and Las Vegas water districts do not regularly track, but compiled in response to records requests — consumed between 1.39 million gallons and 12.4 million gallons. By comparison, the median annual water consumption for a Las Vegas water district household was 100,920 gallons in 2018.

In part, I’m sure the top 100 users consume 10 to 100x as much water as the median user because they have 10 to 100x as much money (or more). But this behavior is also baked into the rate structure. At first glance, it’s nicely progressive, like the price tiers for a 5/8″ meter:

A top user (>20k gallons a month) pays almost 4x as much as a first-tier user (up to 5k gallons a month). But … not so fast. There’s a huge loophole. High users can buy down the rate by installing a bigger meter. That means the real rate structure looks like this:

A high user can consume 20x as much water with a 2″ meter before hitting the top rate tier. There’s really no economic justification for this – transaction costs and economies of scale are surely tiny compared to these discounts. The seller (the water district) certainly isn’t trying to push more sales to high-volume users to make a profit.
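To see how the loophole works, here’s a stylized version of tiered pricing with meter-scaled breakpoints (tier prices and thresholds are invented for illustration – the actual LVVWD schedule differs):

    # Stylized tiered water pricing: breakpoints scale with meter size, so a
    # big meter keeps a heavy user in the cheap tiers. Numbers are made up.
    TIERS = [(5_000, 1.0), (10_000, 2.0), (20_000, 3.0), (float("inf"), 4.0)]  # gal/mo, $/kgal

    def bill(gallons, meter_scale=1.0):
        """Monthly bill when tier breakpoints scale with meter capacity."""
        total, prev = 0.0, 0.0
        for limit, price in TIERS:
            cap = limit * meter_scale
            used = min(gallons, cap) - prev
            if used <= 0:
                break
            total += used / 1000.0 * price
            prev = min(gallons, cap)
        return total

    use = 100_000  # gallons/month -- a top-100-style user
    print(bill(use, 1.0))   # 5/8" meter: pays mostly top-tier rates
    print(bill(use, 20.0))  # 2" meter (20x breakpoints): never leaves tier 1

With these made-up numbers, the same 100,000 gallons costs $365 on the small meter but only $100 on the big one – an average rate barely above the bottom tier.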

To me, this looks a lot like CAFE, which allocates more fuel consumption rights to vehicles with larger footprints, and Energy Star, which sets a lower bar for larger refrigerators. It’s no wonder that these policies have achieved only modest gains over multiple decades, while equity has worsened. Until we’re willing to align economic incentives with standards, financing and other measures, I fear that we’re just not serious enough to solve water or energy problems. Meanwhile, exhorting virtue is just a way to exhaust altruism.

The real reason the lights went out in Texas

I think TikTokers have discovered the real reason for the Texas blackouts: the feds stole the power to make snow.

Here’s the math:

The area of Texas is about 695,663 km^2. They only had to cover the settled areas, typically about 1% of land, or about 69 trillion cm^2. A 25mm snowfall over that area (i.e. about an inch), with 10% water content, would require freezing 17 trillion cubic centimeters of water. At 334 Joules per gram, that’s 5800 TeraJoules. If you spread that over a day (86400 seconds), that’s 67.2313 GigaWatts. Scale that up for 3% transmission losses, and you’d need 69.3 GW of generation at plant busbars.
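(For fellow nerds, the arithmetic really does check out – a quick sketch:)

    # Verifying the satirical arithmetic above (same assumptions).
    area_km2 = 695_663
    settled_cm2 = area_km2 * 0.01 * 1e10       # 1% of Texas, in cm^2 (1 km^2 = 1e10 cm^2)
    water_cm3 = settled_cm2 * 2.5 * 0.10       # 25 mm snow = 2.5 cm depth, 10% water content
    terajoules = water_cm3 * 334 / 1e12        # 334 J/g latent heat; 1 cm^3 of water ~ 1 g
    gw = terajoules * 1e12 / 86_400 / 1e9      # spread over one day
    print(round(terajoules), round(gw, 4), round(gw / 0.97, 1))  # ~5809 TJ, ~67.23 GW, ~69.3 GW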

Now, guess what the peak load on the grid was on the night of the 15th, just before the lights went out? 69.2 GW. Coincidence? I think not.

How did this work? Easy. They beamed the power up to the Jewish Space Laser, and used that to induce laser cooling in the atmosphere. This tells us another useful fact: Soros’ laser has almost 70 GW output – more than enough to start lots of fires in California.

And that completes the final piece of the puzzle. Why did the Texas PUC violate free market principles and intervene to raise the price of electricity? They had to, or they would have been fried by 70 GW of space-based Liberal fury.

Now you know the real reason they call leftists “snowflakes.”