Mask Mandates and One Study Syndrome

The evidence base for Montana’s new order promoting parental opt-out from school mask mandates relies heavily on two extremely weak studies.

Montana Governor Gianforte just publicized a new DPHHS order requiring schools to provide a parental opt-out for mask requirements.

Underscoring the detrimental impact that universal masking may have on children, the rule cites a body of scientific literature that purportedly shows side effects and dangers from prolonged mask wearing.

The order purports to be evidence based. But is the evidence any good?

Mask Efficacy

The order cites:

The scientific literature is not conclusive on the extent of the impact of masking on reducing the spread of viral infections. The department understands that randomized control trials have not clearly demonstrated mask efficacy against respiratory viruses, and observational studies are inconclusive on whether mask use predicts lower infection rates, especially with respect to children.[1]

The supporting footnote is basically a dog’s breakfast:

[1] See, e.g., Guerra, D. and Guerra, D., Mask mandate and use efficacy for COVID-19 containment in US States, MedRX, Aug. 7, 2021, https://www.medrxiv.org/content/10.1101/2021.05.18.21257385v2 (“Randomized control trials have not clearly demonstrated mask efficacy against respiratory viruses, and observational studies conflict on whether mask use predicts lower infection rates.”). Compare CDC, Science Brief: Community Use of Cloth Masks to Control the Spread of SARS-CoV-2, last updated May 7, 2021, https://www.cdc.gov/coronavirus/2019-ncov/science/science-briefs/masking-science-sars-cov2.html, last visited Aug. 30, 2021 (mask wearing reduces new infections, citing studies) …

(more stuff of declining quality)

This is not an encouraging start; it’s blatant cherry picking. Guerra & Guerra is an observational statistical test of mask mandates. The statement DPHHS quotes, “Randomized control trials have not clearly demonstrated mask efficacy…” isn’t even part of the study; it’s merely an introductory remark in the abstract.

Much worse, G&G isn’t a “real” model. It’s just a cheap regression of growth rates against mask mandates, with almost no other controls. Specifically, it omits NPIs, weather, prior history of the epidemic in each state, and basically every other interesting covariate except population density. That’s before you even get to the bathtub statistics issues.

G&G finds no effects of mask mandates. But is that the whole story? No. Among the many covariates they omit is mask compliance. It turns out that matters, as you’d expect. From Leech et al. (one of many better studies DPHHS ignored):

Across these analyses, we find that an entire population wearing masks in public leads to a median reduction in the reproduction number R of 25.8%, with 95% of the medians between 22.2% and 30.9%. In our window of analysis, the median reduction in R associated with the wearing level observed in each region was 20.4% [2.0%, 23.3%]. We do not find evidence that mandating mask-wearing reduces transmission. Our results suggest that mask-wearing is strongly affected by factors other than mandates.

We establish the effectiveness of mass mask-wearing, and highlight that wearing data, not mandate data, are necessary to infer this effect.
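
To see why that omission matters, here’s a minimal sketch with synthetic data (all numbers invented, not estimates from either study): if mandates only weakly move actual mask wearing, and wearing is what reduces transmission, then a G&G-style regression of growth on mandates reports roughly nothing even where masks work.

    # Synthetic illustration of omitted-variable bias in a mandate regression.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                      # hypothetical states

    mandate = rng.integers(0, 2, n)             # 0/1 mandate in force
    # Wearing is only loosely driven by mandates (cf. Leech et al.)
    wearing = np.clip(0.5 + 0.1 * mandate + rng.normal(0, 0.25, n), 0, 1)
    # "True" model: growth depends on wearing, not on the mandate itself
    growth = 0.2 - 0.15 * wearing + rng.normal(0, 0.05, n)

    # G&G-style regression: growth ~ mandate (wearing omitted)
    X = np.column_stack([np.ones(n), mandate])
    b_mandate = np.linalg.lstsq(X, growth, rcond=None)[0][1]

    # Regression on what actually matters: growth ~ wearing
    Xw = np.column_stack([np.ones(n), wearing])
    b_wearing = np.linalg.lstsq(Xw, growth, rcond=None)[0][1]

    print(f"mandate coefficient: {b_mandate:+.3f}  (looks like ~no effect)")
    print(f"wearing coefficient: {b_wearing:+.3f}  (recovers the real effect)")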

Meanwhile, DPHHS downplays its second citation, the CDC Science Brief, which cites 65 separate papers, including a number of observational studies that are better than G&G. The brief concludes that masks work, drawing on a variety of lines of evidence, including mechanistic studies, CFD simulations and laboratory experiments.

Verdict: Relying on a single underpowered, poorly designed regression to make sweeping conclusions about masks is poor practice. In effect, DPHHS has chosen the one earwax-flavored jellybean from a bag of more attractive choices.

Mask Safety

The department order goes on,

The department understands, however, that there is a body of literature, scientific as well as survey/anecdotal, on the negative health consequences that some individuals, especially some children, experience as a result of prolonged mask wearing.[2]

The footnote refers to Kisielinski et al. – again, a single study in a sea of evidence. At least this time it’s a meta-analysis. But was it done right? I decided to spot check.

K et al. tabulate a variety of claims conveniently in Fig. 2:

The first claim assessed is that masks reduce O2, so I followed those citations.

Citation | Claim | Assessment/Notes
Beder 2008 | Effect | Effect, but you can’t draw any causal conclusion because there’s no control group.
Butz 2005 | No effect | PhD thesis, not available for review.
Epstein 2020 | No effect | No effect (during exercise).
Fikenzer 2020 | Effect | Effect.
Georgi 2020 | Effect | Gray literature, not available for review.
Goh 2019 | No effect | No effect; RCT, n≈100 children.
Jagim 2018 | Effect | Not relevant – concerns a mask designed for elevation training, i.e. deliberately impeding O2.
Kao 2004 | Effect | Effect. End-stage renal patients.
Kyung 2020 | Effect | Dead link. Flaky journal? COPD patients.
Liu 2020 | Effect | Small effect – <1% SpO2. Nonmedical conference paper, so dubious peer review. N=12.
Mo 2020 | No effect | No effect. Gray lit. COPD patients.
Person 2018 | No effect | No effect. 6-minute walking test.
Pifarre 2020 | Effect | Small effect. Tiny sample (n=8). Questionable control of order of test conditions. Exercise.
Porcari 2016 | Effect | Irrelevant – like Jagim, concerns an elevation training mask.
Rebmann 2013 | Effect | No effect. “There were no changes in nurses’ blood pressure, O2 levels, perceived comfort, perceived thermal comfort, or complaints of visual difficulties compared with baseline levels.” Also, no control, as in Beder.
Roberge 2012 | No effect | No effect. N=20.
Roberge 2014 | No effect | No effect. N=22. Pregnancy.
Tong 2015 | Effect | Effect. Exercise during pregnancy.

If there’s a pattern here, it’s lots of underpowered, small-sample studies with design defects. Moreover, there are some blatant errors in assessment of relevance (Jagim, Porcari) and inclusion of uncontrolled studies (Beder, Rebmann, maybe Pifarre). In other words, this is 30% rubbish, and the rubbish is all on the “effect” side of the scale.

If the authors did a poor job assessing the studies they included, I also have to wonder whether they did a bad screening job. That turns out to be hard to determine without more time. But a quick search does reveal that there has been an explosion of interest in the topic, with a number of new studies in high-quality journals with better control designs. Regrettably, sample sizes still tend to be small, but the results are generally not kind to the assertions in the health order:

Mapelli et al. 2021:

Conclusions: Protection masks are associated with significant but modest worsening of spirometry and cardiorespiratory parameters at rest and peak exercise. The effect is driven by a ventilation reduction due to an increased airflow resistance. However, since exercise ventilatory limitation is far from being reached, their use is safe even during maximal exercise, with a slight reduction in performance.

Chan, Li & Hirsch 2020:

In this small crossover study, wearing a 3-layer nonmedical face mask was not associated with a decline in oxygen saturation in older participants. Limitations included the exclusion of patients who were unable to wear a mask for medical reasons, investigation of 1 type of mask only, Spo2 measurements during minimal physical activity, and a small sample size. These results do not support claims that wearing nonmedical face masks in community settings is unsafe.

Lubrano et al. 2021:

This cohort study among infants and young children in Italy found that the use of facial masks was not associated with significant changes in Sao2 or Petco2, including among children aged 24 months and younger.

Shein et al. 2021:

The risk of pathologic gas exchange impairment with cloth masks and surgical masks is near-zero in the general adult population.

A quick trip to PubMed or Google Scholar provides many more.

Verdict: a sloppy meta-analysis is garbage-in, garbage-out.

Bottom Line

Montana DPHHS has failed to verify its sources and has ignored recent literature, relying on far less than the best available science in constructing its flawed order. Its sloppy work will fan the flames of culture-war conspiracies and endanger the health of Montanans.

I CAN HAS SYSTEM DYNAMICZ?

IM PRETTY SURE THIS IS THE FURST EVAH SYSTEM DYNAMICZ SIMULASHUN MODEL WRITTEN IN LOLCODE.

HAI 1.2
    VISIBLE "HAI, JWF!"
    
    OBTW
     ==========================================================================
     SYSTEM DYNAMICZ INVENTORY MODEL IN LOLCODE
     TOM FIDDAMAN, METASD.COM, 2021
     INSPIRED BY THE CLASSIC BEER GAME
     AND MODEL 3.10 OF MICHAEL GOODMAN'S 
     'STUDY NOTES IN SYSTEM DYNAMICS'
     ==========================================================================
    TLDR
    
    BTW FUNKTION 4 INTEGRATIN STOCKZ WITH NET FLOW INOUT
    HOW IZ I INTEGRATIN YR STOCK AN YR INOUT AN YR TIMESTEP
        FOUND YR SUM OF STOCK AN PRODUKT OF INOUT AN TIMESTEP
    IF U SAY SO
    
    BTW FUNKTION 4 CHARACTER PLOTZ
    HOW IZ I PLOTTIN YR X AN YR SYMBOL
        I HAS A STRING ITZ ""
        I HAS A COUNT ITZ 0
        IM IN YR XLOOP
            BOTH SAEM COUNT AN BIGGR OF COUNT AN X, O RLY?
                YA RLY, GTFO
                NO WAI, STRING R SMOOSH " " STRING MKAY
            OIC
            COUNT R SUM OF COUNT AN 1
        IM OUTTA YR XLOOP
        VISIBLE SMOOSH STRING SYMBOL MKAY
    IF U SAY SO
    
    BTW INISHUL TIME - DEKLARE SUM VARIABLZ AND INIT STOCKZ

    I HAS A INV ITZ 0.0         BTW INVENTORY (WIDGETS)
    I HAS A MAKIN               BTW PRODUCTION RATE (WIDGETS/WEEK)
    I HAS A SELLIN              BTW SALES RATE (WIDGETS/WEEK)
    I HAS A TIME ITZ 0.0        BTW LOL I WISH (WEEK)
    I HAS A TIMESTEP ITZ 1.0    BTW SIMULATION TIME STEP (WEEK)
    I HAS A ZEND ITZ 50.0       BTW FINAL TIME OF THE SIM (WEEK)
    I HAS A TARGET ITZ 20.0     BTW DESIRED INVENTORY (WIDGETS)
    I HAS A ADJTIME ITZ 4.0     BTW INVENTORY ADJUSTMENT TIME (WEEK)
    I HAS A ORDERIN             BTW ORDER RATE (WIDGETS/WEEK)
    I HAS A INIORDERS ITZ 10.0  BTW INITIAL ORDER RATE (WIDGETS/WEEK)
    I HAS A STEPTIME ITZ 30.0   BTW TIME OF STEP IN ORDERS (WEEK)
    I HAS A STEPSIZE ITZ 5.0    BTW SIZE OF STEP IN ORDERS (WIDGETS/WEEK)
    I HAS A INVADJ              BTW INVENTORY ADJUSTMENT NEEDED (WIDGETS)
    I HAS A WIP ITZ 0.0         BTW WORK IN PROGRESS INVENTORY (WIDGETS)
    I HAS A SHIPPIN             BTW DELIVERIES FROM WIP (WIDGETS/WEEK)
    I HAS A PRODTIME ITZ 4.0    BTW TIME TO PRODUCE (WEEK)
    
    VISIBLE "SHOWIN RESULTZ FOR PRODUKSHUN"
    
    IM IN YR SIMLOOP        BTW MAIN SIMULASHUN LOOP
        
        BTW CALCULATE RATES AND AUXILIARIES
        
        BTW STEP IN CUSTOMER ORDERS
        BOTH SAEM TIME AN BIGGR OF TIME AN STEPTIME, O RLY?
            YA RLY, ORDERIN R SUM OF INIORDERS AN STEPSIZE
            NO WAI, ORDERIN R INIORDERS
        OIC
        
        SELLIN R SMALLR OF ORDERIN AN QUOSHUNT OF INV AN TIMESTEP
        INVADJ R DIFF OF TARGET AN INV
        MAKIN R SUM OF SELLIN AN QUOSHUNT OF INVADJ AN ADJTIME
        MAKIN R BIGGR OF MAKIN AN 0.0
        SHIPPIN R QUOSHUNT OF WIP AN PRODTIME
        
        BTW PLOT
        VISIBLE SMOOSH TIME " " MAKIN MKAY
        BTW PRODUKT WITH SCALE FACTOR FOR SIZING
        I IZ PLOTTIN YR PRODUKT OF MAKIN AN 4.0 AN YR "+" MKAY
                
        BTW INTEGRATE STOCKS
        
        TIME R I IZ INTEGRATIN YR TIME AN YR 1.0 AN YR TIMESTEP MKAY
        INV R I IZ INTEGRATIN YR INV AN YR DIFF OF SHIPPIN AN SELLIN AN YR TIMESTEP MKAY
        WIP R I IZ INTEGRATIN YR WIP AN YR DIFF OF MAKIN AN SHIPPIN AN YR TIMESTEP MKAY
        
        BTW CHECK STOPPING CONDISHUN
        BOTH SAEM TIME AN BIGGR OF TIME AN SUM OF ZEND AN TIMESTEP, O RLY?
            YA RLY, GTFO
        OIC
        
    IM OUTTA YR SIMLOOP
    
    
KTHXBYE

YOU CAN RUN IT IN THE TUTORIALSPOINT ONLINE INTERPRETER, OR GET JUSTIN MEZA’S DESKTOP LCI.

SD INVENTORY LOLCODE.TXT

$lci main.lo
HAI, JWF!
SHOWIN RESULTZ FOR PRODUKSHUN
0.00 5.00
                    +
1.00 5.00
                    +
2.00 5.93
                        +
3.00 6.64
                           +
4.00 7.34
                              +
5.00 8.00
                                 +
6.00 8.62
                                   +
7.00 9.22
                                     +
8.00 9.78
                                        +
9.00 10.31
                                          +
10.00 10.82
                                            +
11.00 11.30
                                              +
12.00 11.75
                                                +
13.00 12.18
                                                 +
14.00 12.46
                                                  +
15.00 12.30
                                                  +
16.00 12.03
                                                 +
17.00 11.68
                                               +
18.00 11.29
                                              +
19.00 10.89
                                            +
20.00 10.51
                                           +
21.00 10.17
                                         +
22.00 9.89
                                        +
23.00 9.66
                                       +
24.00 9.49
                                      +
25.00 9.39
                                      +
26.00 9.35
                                      +
27.00 9.35
                                      +
28.00 9.40
                                      +
29.00 9.47
                                      +
30.00 14.56
                                                           +
31.00 15.91
                                                                +
32.00 14.12
                                                         +
33.00 14.07
                                                         +
34.00 14.45
                                                          +
35.00 14.73
                                                           +
36.00 15.01
                                                             +
37.00 15.27
                                                              +
38.00 15.51
                                                               +
39.00 15.75
                                                                +
40.00 15.97

I THINK THIS SHOULD BE A PART OF EVERY SYSTEM THINKERZ LITTERBOX TOOLBOX.

Limits and Markets

Almost fifty years ago, economists claimed that markets would save us from Limits to Growth. Here’s William Nordhaus, writing about World Dynamics in Measurement without Data (1973):

How’s that working out? I would argue, not well.

Certainly there are functional markets for commodities like oil and gas, but even then a substantial share of the resources is allocated by myopic regulators captive to industry interests.

But for practically everything else, the markets that would in theory allocate across resources, time and space simply don’t exist, even today.

Water markets haven’t prevented the decline of Lake Mead, and they’re resisted widely, including here in Bozeman:

Joseph Stiglitz explained in the WSJ:

A similar pattern could unfold again. But economic forces alone may not be able to fix the problems this time around. Societies as different as the U.S. and China face stiff political resistance to boosting water prices to encourage efficient use, particularly from farmers. …

This troubles some economists who used to be skeptical of the premise of “The Limits to Growth.” As a young economist 30 years ago, Joseph Stiglitz said flatly: “There is not a persuasive case to be made that we face a problem from the exhaustion of our resources in the short or medium run.”

Today, the Nobel laureate is concerned that oil is underpriced relative to the cost of carbon emissions, and that key resources such as water are often provided free. “In the absence of market signals, there’s no way the market will solve these problems,” he says. “How do we make people who have gotten something for free start paying for it? That’s really hard. If our patterns of living, our patterns of consumption are imitated, as others are striving to do, the world probably is not viable.”

What is the price of declining rainforests, reefs or insects? What would markets quote for killing a bird with neonicotinoids, or a wind turbine, or for your Italian songbird pan-fry? What do gravel pits pay for dust and noise emissions, and what will autonomous EVs pay for increased congestion? The answer is almost universally zero. Even things that have received much attention, like emissions of greenhouse gases and criteria air pollutants, are free in most places.

These public goods aren’t free because they’re abundant or unimportant. They’re free because there are no property rights for them, and people resist creating the market mechanisms needed. Everyone loves the free market, until it applies to them. This might be OK if other negative feedback mechanisms picked up the slack, but those clearly aren’t functioning sufficiently either.

Lytton Burning

By luck and a contorted Jet Stream, Montana more or less escaped the horrific heat that gripped the Northwest at the end of June. You probably heard, but this culminated in temperatures in Lytton BC breaking all-time records for Canada and the globe north of latitude 50 by huge margins. The next day, the town burned to the ground.

I wondered just how big this was, so when GHCN temperature records from KNMI became available, I pulled the data for a quick and dirty analysis. Here’s the daily Tmax for Lytton:

That’s about 3.5 standard deviations above the recent mean. Lytton’s records are short and fragmented, so I also pulled Kamloops (the closest station with a long record):

You can see how bizarre the recent event was, even in a long term context. In Kamloops, it’s a +4 standard deviation event, which means a likelihood of 1 in 16,000 if this were simply random. Even if you start adjusting for selection and correlations, it still looks exceedingly rare – perhaps a 1000-year event in a 70-year record.
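
For the curious, here’s the back-of-envelope tail probability, treating daily Tmax as i.i.d. normal (which it certainly isn’t; one- versus two-tailed accounts for a factor of two around the 1-in-16,000 figure above):

    # Tail probability of a 4-sigma draw from a normal distribution.
    import math

    z = 4.0
    p_one_tail = 0.5 * math.erfc(z / math.sqrt(2))          # P(Z > 4)
    print(f"one-tailed: 1 in {1 / p_one_tail:,.0f}")         # ~1 in 31,600
    print(f"two-tailed: 1 in {1 / (2 * p_one_tail):,.0f}")   # ~1 in 15,800

    # A ~70-year daily record holds roughly 25,000 observations, so even one
    # such draw is a stretch under pure chance - hence "1000-year event."
    print(f"observations in 70 years: {70 * 365:,}")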

Clearly it’s not simply random. For one thing, there’s a pretty obvious long term trend in the Kamloops record. But a key question is, what will happen to the variance of temperature in the future? The simplest thermodynamic argument is that energy in partitions of a system has a Boltzmann distribution and therefore that variance should go up with the mean. However, feedback might alter this.

This paper argues that variance goes up:

Extreme summertime temperatures are a focal point for the impacts of climate change. Climate models driven by increasing CO2 emissions project increasing summertime temperature variability by the end of the 21st century. If credible, these increases imply that extreme summertime temperatures will become even more frequent than a simple shift in the contemporary probability distribution would suggest. Given the impacts of extreme temperatures on public health, food security, and the global economy, it is of great interest to understand whether the projections of increased temperature variance are credible. In this study, we use a theoretical model of the land surface to demonstrate that the large increases in summertime temperature variance projected by climate models are credible, predictable from first principles, and driven by the effects of warmer temperatures on evapotranspiration. We also find that the response of plants to increased CO2 and mean warming is important to the projections of increased temperature variability.

But Zeke Hausfather argues for stable variance:

summer variability, where extreme heat events are more of a concern, has been essentially flat. These results are similar to those found in a paper last fall by Huntingford et al published in the journal Nature. Huntingford and colleagues looked at both land and ocean temperature records and found no evidence of increasing variability. They also analyzed the outputs of global climate models, and reported that most climate models actually predict a slight decline in temperature variability over the next century as the world warms. The figure below, from Huntingford, shows the mean and spread of variability (in standard deviations) for the models used in the latest IPCC report (the CMIP5 models).

This is good news overall; increasing mean temperatures and variability together would lead to even more extreme heat events. But “good news” is relative, and the projected declines in variability are modest, so rising mean temperatures by the end of this century will still push the overall temperature distribution well outside of what society has experienced in the last 12,000 years.

If he’s right, stable variance implies that the mean temperature of scenarios is representative of what we’ll experience – nothing further to worry about. I hope this is true, but I also hope it takes a long time to find out, because I really don’t want to experience what Lytton just did.

Lake Mead and incentives

Since I wrote about Lake Mead ten years ago (1 2 3), things have not improved. It’s down to 1068 feet, holding fairly steady after a brief boost in the wet year 2011-12. The Reclamation outlook has it losing another 60 feet in the next two years.

The stabilization has a lot to do with successful conservation. In Phoenix, for example, water use is down even though population is up. Some of this is technology and habits, and some of it is banishment of “useless grass” and other wasteful practices. MJ describes water cops in Las Vegas:

Investigator Perry Kaye jammed the brakes of his government-issued vehicle to survey the offense. “Uh oh this doesn’t look too good. Let’s take a peek,” he said, exiting the car to handle what has become one of the most existential violations in drought-stricken Las Vegas—a faulty sprinkler.

“These sprinklers haven’t popped up properly, they are just oozing everywhere,” muttered Kaye. He has been policing water waste for the past 16 years, issuing countless fines in that time. “I had hoped I would’ve worked myself out of a job by now. But it looks like I will retire first.”

Enforcement undoubtedly helps, but it strikes me as a band-aid where a tourniquet is needed. While the city is out checking sprinklers, people are free to waste water in a hundred less-conspicuous ways. That’s because standards say “conserve” but the market says “consume” – water is still cheap. As long as that’s true, technology improvements are offset by rebound effects.

Often, cheap water is justified as an equity issue: the poor need low-cost water. But there’s nothing equitable about water rates. The symptom is in the behavior of the top users:

Total and per-capita water use in Southern Nevada has declined over the last decade, even as the region’s population has increased by 14%. But water use among the biggest water users — some of the valley’s wealthiest, most prominent residents — has held steady.

The top 100 residential water users serviced by the Las Vegas Valley Water District used more than 284 million gallons of water in 2018 — over 11 million gallons more than the top 100 users of 2008 consumed at the time, records show. …

Properties that made the top 100 “lists” — which the Henderson and Las Vegas water districts do not regularly track, but compiled in response to records requests — consumed between 1.39 million gallons and 12.4 million gallons. By comparison, the median annual water consumption for a Las Vegas water district household was 100,920 gallons in 2018.

In part, I’m sure the top 100 users consume 10 to 100x as much water as the median user because they have 10 to 100x as much money (or more). But this behavior is also baked into the rate structure. At first glance, it’s nicely progressive, like the price tiers for a 5/8″ meter:

A top user (>20k gallons a month) pays almost 4x as much as a first-tier user (up to 5k gallons a month). But … not so fast. There’s a huge loophole. High users can buy down the rate by installing a bigger meter. That means the real rate structure looks like this:

A high user can consume 20x as much water with a 2″ meter before hitting the top rate tier. There’s really no economic justification for this – transaction costs and economies of scale are surely tiny compared to these discounts. The seller (the water district) certainly isn’t trying to push more sales to high-volume users to make a profit.
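
A toy calculation makes the point (rates and thresholds are invented for illustration, not LVVWD’s actual tariff): scaling the tier thresholds with meter capacity lets a heavy user buy a bigger meter and pay a much lower average price per gallon, without ever leaving the bottom tier.

    # Illustrative tiered water bill; thresholds scale with meter size.
    def bill(gallons, meter_scale=1.0, rates=(1.0, 2.0, 3.0, 4.0),
             thresholds=(5_000, 10_000, 20_000)):
        """Monthly bill, $ per 1,000 gallons by tier; hypothetical numbers."""
        cuts = [t * meter_scale for t in thresholds] + [float("inf")]
        total, prev = 0.0, 0.0
        for rate, cut in zip(rates, cuts):
            use = max(0.0, min(gallons, cut) - prev)
            total += rate * use / 1000.0
            prev = cut
            if gallons <= cut:
                break
        return total

    use = 100_000                            # gallons/month, a heavy user
    small = bill(use, meter_scale=1.0)       # 5/8" meter
    large = bill(use, meter_scale=20.0)      # 2" meter: thresholds ~20x higher
    print(f'5/8" meter: ${small:.0f}/month (avg ${small / use * 1000:.2f} per kgal)')
    print(f'2" meter:   ${large:.0f}/month (avg ${large / use * 1000:.2f} per kgal)')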

To me, this looks a lot like CAFE, which allocates more fuel consumption rights to vehicles with larger footprints, and Energy Star, which sets a lower bar for larger refrigerators. It’s no wonder that these policies have achieved only modest gains over multiple decades, while equity has worsened. Until we’re willing to align economic incentives with standards, financing and other measures, I fear that we’re just not serious enough to solve water or energy problems. Meanwhile, exhorting virtue is just a way to exhaust altruism.

The real reason the lights went out in Texas

I think TikTokers have discovered the real reason for the Texas blackouts: the feds stole the power to make snow.

Here’s the math:

The area of Texas is about 695,663 km^2. They only had to cover the settled areas, typically about 1% of land, or about 69 trillion cm^2. A 25mm snowfall over that area (i.e. about an inch), with 10% water content, would require freezing 17 trillion cubic centimeters of water. At 334 Joules per gram, that’s 5800 TeraJoules. If you spread that over a day (86400 seconds), that’s 67.2313 GigaWatts. Scale that up for 3% transmission losses, and you’d need 69.3 GW of generation at plant busbars.
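
For anyone who wants to check the arithmetic, here’s the same calculation as a quick script:

    # Back-of-envelope check of the snow-making energy budget.
    area_tx_km2 = 695_663
    settled_frac = 0.01          # ~1% of land is settled
    snow_depth_cm = 2.5          # ~1 inch of snow
    water_content = 0.10         # 10% of snow depth as liquid water
    latent_heat = 334.0          # J per gram of water frozen

    area_cm2 = area_tx_km2 * settled_frac * 1e10         # km^2 -> cm^2
    water_g = area_cm2 * snow_depth_cm * water_content   # ~1 g per cm^3
    energy_J = water_g * latent_heat
    power_GW = energy_J / 86_400 / 1e9                   # spread over one day
    busbar_GW = power_GW / 0.97                          # ~3% transmission losses

    print(f"water frozen: {water_g / 1e12:.1f} trillion g")
    print(f"energy: {energy_J / 1e12:.0f} TJ")
    print(f"power: {power_GW:.1f} GW, at busbar: {busbar_GW:.1f} GW")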

Now, guess what the peak load on the grid was on the night of the 15th, just before the lights went out? 69.2 GW. Coincidence? I think not.

How did this work? Easy. They beamed the power up to the Jewish Space Laser, and used that to induce laser cooling in the atmosphere. This tells us another useful fact: Soros’ laser has almost 70 GW output – more than enough to start lots of fires in California.

And that completes the final piece of the puzzle. Why did the Texas PUC violate free market principles and intervene to raise the price of electricity? They had to, or they would have been fried by 70 GW of space-based Liberal fury.

Now you know the real reason they call leftists “snowflakes.”

Feedback solves the Mars problem

Some folks apparently continue the Apollo tradition, doubting the latest Mars rover landing.

Perfect timing of release into space? Perfect speed to get to Mars? Perfect angle? Well, there are actually lots of problems like this that get solved, in spite of daunting challenges. Naval gunnery is an extremely hard problem:

USN via Math Encounters blog

Yet somehow WWII battleships could hit targets many miles away. The enabling technology was a good predictive model of the trajectory of the shell, embodied in an analog fire-control computer or just a big stack of tables.

However, framing a Mars landing as a problem in ballistics is just wrong. We don’t simply point a rocket at Mars and fire the rover like a huge shell, hoping it will arrive on target. That really would be hard: the aiming precision needed to hit a target area of <1km at a range of >100 million km would be ridiculous, even from solid ground. But that’s not the problem, because the mission has opportunities to course correct along the way.

Measurements of the spacecraft range to Earth and the rate of change of this distance are collected during every DSN station contact and sent to the navigation specialists of the flight team for analysis. They use this data to determine the true path the spacecraft is flying, and determine corrective maneuvers needed to maintain the desired trajectory. The first of four Trajectory Correction Maneuvers (TCMs) is scheduled on January 4th, 1997 to correct any errors collected from launch. The magnitude of this maneuver is less than 75 meters per second (m/s). Navigation is an ongoing activity that will continue until the spacecraft enters the atmosphere of Mars.

NASA

The ability to measure and correct the trajectory along the way turns the impossible ballistics problem into a manageable feedback control problem. You still need a good model of many aspects of the problem to design the control systems, but we do that all the time. Imagine a world without feedback control:

  • Your house has no thermostat; you turn on the furnace when you install it and let it run for 20 years.
  • Cars have no brakes or steering, and the accelerator is on-off.
  • After you flush the toilet, you have to wait around and manually turn off the water before the tank overflows.
  • Forget about autopilot or automatic screen brightness on your phone, and definitely avoid nuclear reactors.

Without feedback, lots of things would seem impossible. But fortunately that’s not the real world, and it doesn’t prevent us from getting to Mars.
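
Here’s a toy illustration of the difference (all numbers invented): with a few measure-and-correct cycles, the final miss is on the order of the tracking uncertainty, not the initial aiming error.

    # Open-loop "ballistics" vs. feedback course correction, schematically.
    import random

    random.seed(1)
    initial_error = 50_000.0    # km, hypothetical initial aiming error
    tracking_noise = 10.0       # km, measurement uncertainty at each fix
    n_corrections = 4           # cf. the four planned TCMs

    open_loop_miss = initial_error   # no corrections: land where you were aimed

    miss = initial_error
    for i in range(n_corrections):
        measured = miss + random.gauss(0, tracking_noise)
        miss -= measured             # correction burn cancels the measured error
        print(f"after TCM {i + 1}: predicted miss ~ {abs(miss):.1f} km")

    print(f"open-loop miss:   {open_loop_miss:,.0f} km")
    print(f"closed-loop miss: {abs(miss):.1f} km")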

Did the Texas PUC stick it to ratepayers?

I’ve been reflecting further on yesterday’s post, in which I noticed that the PUC intervened in ERCOT’s market pricing.

Here’s what happened. Starting around the 12th, prices ran up from their usual $20/MWh ballpark to the $1000 range typical of peak hours on the 14th, hitting the $9000/MWh market cap overnight on the 14th/15th, then falling midday on the 15th. Then the night of the 15th/16th, prices spiked back up to the cap and stayed there for several days.

ERCOT via energyonline

Zooming in,

On the 16th, the PUC issued an order to ERCOT, directing it to set prices at the $9000 level, even retroactively. Evidently they later decided that the retroactive aspect was unwise (and probably illegal) and rescinded that portion of the order.

ERCOT has informed the Commission that energy prices across the system are clearing at less than $9,000, which is the current system-wide offer cap pursuant to 16 TAC §25.505(g)(6)(B). At various times today, energy prices across the system have been as low as approximately $1,200. The Commission believes this outcome is inconsistent with the fundamental design of the ERCOT market. Energy prices should reflect scarcity of the supply. If customer load is being shed, scarcity is at its maximum, and the market price for the energy needed to serve that load should also be at its highest.

Griddy, who’s getting the blame for customers exposed to wholesale prices, argues that the PUC erred:

At Griddy, transparency has always been our goal. We know you are angry and so are we. Pissed, in fact. Here’s what’s been going down:

On Monday evening the Public Utility Commission of Texas (PUCT) cited its “complete authority over ERCOT” to direct that ERCOT set pricing at $9/kWh until the grid could manage the outage situation after being ravaged by the freezing winter storm.

Under ERCOT’s market rules, such a pricing scenario is only enforced when available generation is about to run out (they usually leave a cushion of around 1,000 MW). This is the energy market that Griddy was designed for – one that allows consumers the ability to plan their usage based on the highs and lows of wholesale energy and shift their usage to the cheapest time periods.

However, the PUCT changed the rules on Monday.

As of today (Thursday), 99% of homes have their power restored and available generation was well above the 1,000 MW cushion. Yet, the PUCT left the directive in place and continued to force prices to $9/kWh, approximately 300x higher than the normal wholesale price. For a home that uses 2,000 kWh per month, prices at $9/kWh work out to over $640 per day in energy charges. By comparison, that same household would typically pay $2 per day.

See (below) the difference between the price set by the market’s supply-and-demand conditions and the price set by the PUCT’s “complete authority over ERCOT.” The PUCT used their authority to ensure a $9/kWh price for generation when the market’s true supply and demand conditions called for far less. Why?
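
Griddy’s numbers are easy to verify; the only assumption below is the “normal” wholesale price of about 3 cents/kWh implied by their “300x” comparison:

    # Daily bill for a 2,000 kWh/month home at the cap vs. normal prices.
    monthly_kwh = 2_000
    days = 28                          # February
    daily_kwh = monthly_kwh / days     # ~71 kWh/day

    at_cap = daily_kwh * 9.00          # $9/kWh = the $9,000/MWh cap
    normal = daily_kwh * 0.03          # ~3 cents/kWh (cap / 300)

    print(f"at the cap: ${at_cap:.0f}/day")   # ~$643, matching "over $640"
    print(f"normally:   ${normal:.2f}/day")   # ~$2/day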

There’s one part of Griddy’s story I can’t make sense of. Their capacity chart shows substantial excess capacity from the 15th forward.

Griddy’s capacity chart – I believe the x-axis is hours on the 18th, not Feb 1-24.

It’s a little hard to square that with generation data showing a gap between forecast conditions and actual generation persisting on the 18th, suggesting ongoing scarcity with a lot more than 1% of load offline.

ERCOT via EIA gridmonitor

This gap is presumably what the PUC relied upon to justify its order. Was it real, or illusory? One might ask, if widespread blackouts or load below projections indicate scarcity, why didn’t the market reflect the value placed on that shed load naturally? Specifically, why didn’t those who needed power simply bid for it? I can imagine a variety of answers. Maybe they couldn’t use it due to other systemic problems. Maybe they didn’t want it at such an outrageous price.

Whatever the answer, the PUC’s intervention was not a neutral act. There are winners and losers from any change in transfer pricing. The winners in this case were presumably generators. The losers were (a) customers exposed to spot prices, and (b) utilities with fixed retail rates but some exposure to spot prices. In the California debacle two decades ago, (b) led to bankruptcies. Losses for customers might be offset by accelerated restoration of power, but it doesn’t seem very plausible that pricing at the cap was a prerequisite for that.

The PUC’s mission is,

We protect customers, foster competition, and promote high quality infrastructure.

I don’t see anything about “protecting generators” and it’s hard to see how fixing prices fosters competition, so I have to agree … the PUC erred. Ironically, it’s ERCOT board members who are resigning, even though ERCOT’s actions were guided by the PUC’s assertion of total authority.

Texas masters and markets

The architect of Texas’ electricity market says it’s working as planned. Critics compare it to late Soviet Russia.

Yahoo – The Week

Who’s right? Both and neither.

I think there’s little debate about what actually happened, though probably much remains to be discovered. But the general features are known: bad weather hit, wind output was unusually low, gas plants and infrastructure failed in droves, and coal and nuclear generation also took a hit. Dependencies may have amplified problems, as for example when electrified gas infrastructure couldn’t deliver gas to power plants due to blackouts. Contingency plans were ready for low wind but not correlated failures of many thermal plants.

The failures led to a spectacular excursion in the market. Normally Texas grid prices are around $20/MWh (2 cents a kWh wholesale). Sometimes they’re negative (due to subsidized renewable abundance) and for a few hours a year they spike into the 100s or 1000s:

But last week, prices hit the market cap of $9000/MWh and stayed there for days:

“The year 2011 was a miserable cold snap and there were blackouts,” University of Houston energy fellow Edward Hirs tells the Houston Chronicle. “It happened before and will continue to happen until Texas restructures its electricity market.” Texans “hate it when I say that,” but the Texas grid “has collapsed in exactly the same manner as the old Soviet Union,” or today’s oil sector in Venezuela, he added. “It limped along on underinvestment and neglect until it finally broke under predictable circumstances.”

I think comparisons to the Soviet Union are misplaced. Yes, any large scale collapse is going to have some common features, as positive feedbacks on a network lead to cascades of component failures. But that’s where the similarities end. Invoking the USSR invites thoughts of communism, which is not a feature of the Texas electricity market. It has a central operator out of necessity, but it doesn’t have central planning of investment, and it does have clear property rights, private ownership of capital, a transparent market, and rule of law. Until last week, most participants liked it the way it was.

The architect sees it differently:

William Hogan, the Harvard global energy policy professor who designed the system Texas adopted seven years ago, disagreed, arguing that the state’s energy market has functioned as designed. Higher electricity demand leads to higher prices, forcing consumers to cut back on energy use while encouraging power plants to increase their output of electricity. “It’s not convenient,” Hogan told the Times. “It’s not nice. It’s necessary.”

Essentially, he’s taking a short-term functional view of the market: for the set of inputs given (high demand, low capacity online), it produces exactly the output intended (extremely high prices). You can see the intent in ERCOT’s ORDC offer curve:

W. Hogan, 2018

(This is a capacity reserve payment, but the same idea applies to regular pricing.)

In a technical sense, Hogan may be right. But I think this takes too narrow a view of the market. I’m reminded of something I heard from Hunter Lovins a long time ago: “markets are good servants, poor masters, and a lousy religion.” We can’t declare victory when the market delivers a designed technical result; we have to decide whether the design served any useful social purpose. If we fail to do that, we are the slaves, with the markets our masters. Looking at things more broadly, it seems like there are some big problems that need to be addressed.

First, it appears that the high prices were not entirely a result of the market clearing process. According to S&P Global Platts, the PUC put its finger on the scale:

The PUC met Feb. 15 to address the pricing issue and decided to order ERCOT to set prices administratively at the $9,000/MWh systemwide offer cap during the emergency.

“At various times today (Feb. 15), energy prices across the system have been as low as approximately $1,200[/MWh],” the order states. “The Commission believes this outcome is inconsistent with the fundamental design of the ERCOT market. Energy prices should reflect scarcity of the supply. If customer load is being shed, scarcity is at its maximum, and the market price for the energy needed to serve that load should also be at its highest.”

The PUC also ordered ERCOT “to correct any past prices such that firm load that is being shed in [Energy Emergency Alert Level 3] is accounted for in ERCOT’s scarcity pricing signals.”

S&P Global Platts

Second, there’s some indication that exposure to the market was extremely harmful to some customers, who now face astronomical power bills. Exposing customers to almost-unlimited losses, in the face of huge information asymmetries between payers and utilities, strikes me as predatory and unethical. You can take a Darwinian view of that, but it’s hardly a Libertarian triumph if PUC intervention in the market transferred a huge amount of money from customers to utilities.

Third, let’s go back to the point of good price signals expressed by Hogan above:

Higher electricity demand leads to higher prices, forcing consumers to cut back on energy use while encouraging power plants to increase their output of electricity. “It’s not convenient,” Hogan told the Times. “It’s not nice. It’s necessary.”

It may have been necessary, but it apparently wasn’t sufficient in the short run, because demand was not curtailed much (except by blackouts), and high prices could not keep capacity online when it failed for technical reasons.

I think the demand side problem is that there’s really very little retail price exposure in the market. The customers of Griddy and other services with spot price exposure apparently didn’t have the tools to observe realtime prices and conserve before their bills went through the roof. Customers with fixed rates may soon find that their utilities are bankrupt, as happened in the California debacle.

Hogan diagrams the situation like this:

This is just a schematic, but in reality I think there are too many markets where the red demand curves are nearly vertical, because very few customers see realtime prices. That’s very destabilizing.
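
Here’s a stylized sketch (supply shortfall and demand curves invented) of why that matters: with the same capacity shortfall, near-zero demand elasticity drives the clearing price toward the cap, while even modest elasticity keeps it tame.

    # Clearing price under a capacity shortfall for two demand elasticities.
    import numpy as np

    def clearing_price(capacity_mw, elasticity, cap=9_000.0, p0=20.0, q0=60_000.0):
        """First price at which demand falls to available capacity, else the cap."""
        prices = np.linspace(p0, cap, 10_000)
        demand = q0 * (prices / p0) ** (-elasticity)   # constant-elasticity demand
        met = demand <= capacity_mw
        return prices[met][0] if met.any() else cap

    available = 45_000   # MW online vs. 60,000 MW wanted at normal prices

    for e in (0.30, 0.05):
        p = clearing_price(available, elasticity=e)
        print(f"demand elasticity {e:.2f}: clearing price ~ ${p:,.0f}/MWh")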

Strangely, the importance of retail price elasticity has long been known. In their seminal work on Spot Pricing of Electricity, Schweppe, Caramanis, Tabors & Bohn write, right in the introduction:

Five ingredients for a successful marketplace are

  1. A supply side with varying supply costs that increase with demand
  2. A demand side with varying demands which can adapt to price changes
  3. A market mechanism for buying and selling
  4. No monopsonistic behavior on the demand side
  5. No monopolistic behavior on the supply side

I find it puzzling that there isn’t more attention to creation of retail demand response. I suspect the answer may be that utilities don’t want it, because flat rates create cross-subsidies that let them sell more power overall, by spreading costs from high peak users across the entire rate base.

On the supply side, I think the question is whether the expectation that prices could one day go to the $9000/MWhr cap induced suppliers to do anything to provide greater contingency power by investing in peakers or resiliency of their own operations. Certainly any generator who went offline on Feb. 15th due to failure to winterize left a huge amount of money on the table. But it appears that that’s exactly what happened.
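
A rough illustration of the stakes (plant size and outage length invented): a single unit offline while prices sat at the cap forgoes revenue that would dwarf the cost of winterizing it.

    # Revenue forgone by an offline generator during the price-cap days.
    capacity_mw = 500            # hypothetical mid-size unit
    hours_offline = 4 * 24       # roughly how long prices were pinned at the cap
    price_cap = 9_000            # $/MWh
    normal_price = 20            # $/MWh

    forgone = capacity_mw * hours_offline * price_cap
    normal = capacity_mw * hours_offline * normal_price
    print(f"forgone at the cap: ${forgone / 1e6:.0f} million")
    print(f"same hours at normal prices: ${normal / 1e6:.2f} million")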

Presumably there are some good behavioral reasons for this. No one expected correlated failures across the system, and thus they underestimated the challenge of staying online in the worst conditions. There’s lots of evidence that perception of risk of rare events is problematic. Even a sophisticated investor who understood the prospects would have had a hard time convincing financiers to invest in resilience: imagine walking into a bank, “I’d like a loan for this piece of equipment, which will never be used, until one day in a couple years when it will pay for itself in one go.”

I think legislators and regulators have their work cut out for them. Hopefully they can resist the urge to throw the baby out with the bathwater. It’s wrong to indict communism, capitalism, renewables, or any single actor; this was a systemic failure, and similar events have happened under other regimes, and will happen again. ERCOT has been a pioneering design in many ways, and it would be a shame to revert to a regulated, average-cost-pricing model. The cure for ills like demand inelasticity is more market exposure, not less. The market may require more than a little tinkering around the edges, but catastrophes are rare, so there ought to be time to do that.

Nordhaus on Subsidies

I’m not really a member of the neoclassical economics fan club, but I think this is on point:

“Subsidies pose a more general problem in this context. They attempt to discourage carbon-intensive activities by making other activities more attractive. One difficulty with subsidies is identifying the eligible low-carbon activities. Why subsidize hybrid cars (which we do) and not biking (which we do not)? Is the answer to subsidize all low carbon activities? Of course, that is impossible because there are just too many low-carbon activities, and it would prove astronomically expensive. Another problem is that subsidies are so uneven in their impact. A recent study by the National Academy of Sciences looked at the impact of several subsidies on GHG emissions. It found a vast difference in their effectiveness in terms of CO2 removed per dollar of subsidy. None of the subsidies were efficient; some were horribly inefficient; and others such as the ethanol subsidy were perverse and actually increased GHG emissions. The net effect of all the subsidies taken together was effectively zero! So in the end, it is much more effective to penalize carbon emissions than to subsidize everything else.” (Nordhaus, 2013, p. 266)

(Via a W. Hogan paper, https://scholar.harvard.edu/whogan/files/hogan_hepg_100418r.pdf)