Excel, lost in the cloud

Excel is rapidly becoming unusable as Microsoft tries to shift everyone into the OneDrive/SharePoint cloud. Here’s a very simple equation from a population model:

='https://ventanasystems-my.sharepoint.com/personal/vrbo_onmicrosoft_com/Documents/_Mkt/lxpgi/Model/Model/[Cohort Model Natural Increase.xlsx]Boston'!S135+'https://ventanasystems-my.sharepoint.com/personal/vrbo_onmicrosoft_com/Documents/_Mkt/lxpgi/Model/Model/[Cohort Model Immigration.xlsx]Boston'!S119+('https://ventanasystems-my.sharepoint.com/personal/vrbo_onmicrosoft_com/Documents/_Mkt/lxpgi/Model/Model/[Cohort Model NPR.xlsx]Boston'!S119-'https://ventanasystems-my.sharepoint.com/personal/vrbo_onmicrosoft_com/Documents/_Mkt/lxpgi/Model/Model/[Cohort Model.xlsx]Boston'!R119

URLs as equation terms? What were they thinking? This is an interface choice that makes things easy for programmers, and impossible for users.

Coronavirus Roundup II

Some things I’ve found interesting and useful lately:

R0

What I think is a pretty important article from LANL: High Contagiousness and Rapid Spread of Severe Acute Respiratory Syndrome Coronavirus 2. This tackles the questions I wondered about in my steady-state growth post, i.e. that high observed growth rates imply a high R0 if the duration of infectiousness is long.

Earlier in the epidemic, this was already a known problem:

The time scale of asymptomatic transmission affects estimates of epidemic potential in the COVID-19 outbreak

The reproductive number of COVID-19 is higher compared to SARS coronavirus
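
For intuition, here’s a minimal sketch (mine, not from the papers above) of the growth-rate/R0 relationship at issue. For an SEIR-type model with exponentially distributed latent and infectious stages, R0 = (1 + r*T_latent)*(1 + r*T_infectious), so the same observed growth rate r implies a much larger R0 when the infectious period is long. The growth rate and stage durations below are illustrative assumptions, not estimates from these papers.

# Quick sketch: same observed growth rate, different R0 depending on how long
# people are latent and infectious. Standard SEIR exponential-growth relation;
# parameter values are illustrative only.
def r0_from_growth(r, latent_days, infectious_days):
    return (1 + r * latent_days) * (1 + r * infectious_days)

r = 0.25  # ~25%/day early exponential growth (assumed for illustration)
for latent, infectious in [(3, 4), (4, 7), (5, 10)]:
    print(f"latent={latent}d, infectious={infectious}d -> "
          f"R0 ≈ {r0_from_growth(r, latent, infectious):.1f}")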

Data

Epiforecasts’ time varying R0 estimates

CMMID’s time varying reporting coverage estimates

NECSI’s daily update for the US

The nifty database of US state policies from Raifman et al. at BU

A similar policy tracker for the world

The covidtracking database. Very useful, if you don’t mind a little mysterious turbulence in variable naming.

The Kinsa thermometer US health weather map

Miscellaneous

Nature’s Special report: The simulations driving the world’s response to COVID-19

Pandemics Depress the Economy, Public Health Interventions Do Not: Evidence from the 1918 Flu

Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period has some interesting dynamics, including seasonality.

Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing looks at requirements for contact tracing and isolation

Models for Count Data With Overdispersion has important considerations for calibration

Variolation: hmm. Filed under “interesting but possibly crazy.”

Creative, and less obviously crazy: An alternating lock-down strategy for sustainable mitigation of COVID-19

How useful are antibody tests?

I just ran across this meta-analysis of antibody test performance on medrxiv:

Antibody tests in detecting SARS-CoV-2 infection: a meta-analysis

In total, we identified 38 eligible studies that include data from 7,848 individuals. The analyses showed that tests using the S antigen are more sensitive than N antigen-based tests. IgG tests perform better compared to IgM ones, and show better sensitivity when the samples were taken longer after the onset of symptoms. Moreover, irrespective of the method, a combined IgG/IgM test seems to be a better choice in terms of sensitivity than measuring either antibody type alone. All methods yielded high specificity with some of them (ELISA and LFIA) reaching levels around 99%. ELISA- and CLIA-based methods performed better in terms of sensitivity (90-94%) followed by LFIA and FIA with sensitivities ranging from 80% to 86%.

The sensitivity results are interesting, but I’m more interested in timing:

Sample quality, low antibody concentrations and especially timing of the test (too soon after a person is infected, when antibodies have not been developed yet, or too late, when IgM antibodies have decreased or disappeared) could potentially explain the low ability of the antibody tests to identify people with COVID-19. According to kinetic measurements in some of the included studies [22, 49, 54], IgM peaks between days 5 and 12 and then drops slowly. IgGs reach peak concentrations after day 20 or so as IgM antibodies disappear. This meta-analysis showed, through meta-regression, that IgG tests did have better sensitivity when the samples were taken longer after the onset of symptoms. This is further corroborated by the lower specificity of IgM antibodies compared to IgG [15]. Only a few of the included studies provided data stratified by the time of onset of symptoms, so a separate stratified analysis was not feasible, but this should be a goal for future studies.

This is an important knowledge gap. Timing really matters, because tests that aren’t sensitive to early asymptomatic transmission have limited utility for preventing spread. Consider the distribution of serial infection times (Ferretti et al., Science):

Testing by itself doesn’t do anything to reduce the spread of infection. It’s an enabler: transmission goes down only if coronavirus-positive individuals identified through testing change their behavior. That implies a chain of delays:

  • Conduct the test and get the results
  • Inform the positive person
  • Get them into a situation where they won’t infect their coworkers, family, etc.
  • Trace their contacts, test them, and repeat

A test that only achieves peak sensitivity at >5 days may not leave much time for these delays to play out, limiting the effectiveness of contact tracing and isolation. A test that peaks at day 20 would be pretty useless (though interesting for surveillance and other purposes).
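
To make the timing issue concrete, here’s a rough back-of-envelope sketch (my own, not taken from the cited papers): given an assumed generation-interval distribution, how much of an index case’s onward transmission has already happened by the time the test result arrives and they isolate? The Weibull parameters below are illustrative stand-ins with a mean of roughly 5 days, not the Ferretti et al. estimates.

# Back-of-envelope: fraction of transmission already "missed" by the time an
# infected person is detected and isolated. The generation interval here is an
# illustrative Weibull, not a published estimate.
from scipy.stats import weibull_min

gen_interval = weibull_min(c=2.8, scale=5.7)  # assumed generation-interval distribution

def transmission_missed(days_to_isolation):
    return gen_interval.cdf(days_to_isolation)

for delay in [2, 5, 10, 20]:
    print(f"isolated on day {delay:>2}: "
          f"{transmission_missed(delay):.0%} of transmission already occurred")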

Consider Long et al., Antibody responses to SARS-CoV-2 in COVID-19 patients: the perspective application of serological tests in clinical practice:

Seroconversion rates of 30% at onset of symptoms seem problematic, given the significant pre-symptomatic transmission implied by the Ferretti, Liu & Nishiura results on serial infection times. I hope the US testing strategy relies on lots of fast tests, not just lots of tests.

See also:

Potential Rapid Diagnostics, Vaccine and Therapeutics for 2019 Novel Coronavirus (2019-nCoV): A Systematic Review.

Antibody surveys suggesting vast undercount of coronavirus infections may be unreliable in Science

h/t Yioryos Stamboulis

A coronavirus prediction you can bank on

How many cases will there be on June 1? Beats me. But there’s one thing I’m sure of.

My confidence bounds on future behavior of the epidemic are still pretty wide. While there’s good reason to be optimistic about a lot of locations, there are also big uncertainties looming. No matter how things shake out, I’m confident in this:

The antiscience crowd will be out in force. They’ll cherry-pick the early model projections of an uncontrolled epidemic, and use that to claim that modelers predicted a catastrophe that didn’t happen, and conclude that there was never a problem. This is the Cassandra’s curse of all successful modeling interventions. (See Nobody Ever Gets Credit for Fixing Problems that Never Happened for a similar situation.)

But it won’t stop there. A lot of people don’t really care what the modelers actually said. They’ll just make stuff up. Just today I saw a comment at the Bozeman Chronicle to the effect of “if this was as bad as they said, we’d all be dead.” Of course that was never in the cards, or the models, but that doesn’t matter in Dunning-Krugerland.

Modelers, be prepared for a lot more of this. I think we need to be thinking more about defensive measures, like forecast archiving and presentation of results only with confidence bounds attached. However, it’s hard to do that and to produce model results at a pace that keeps up with the evolution of the epidemic. That’s something we need more infrastructure for.

Coronavirus Curve-fitting OverConfidence

This is a follow-on to The Normal distribution is a bad COVID19 model.

I understand that the IHME model is now more or less the official tool of the Federal Government. Normally I’m happy to see models guiding policy. It’s better than the alternative: would you fly in a plane designed by lawyers? (Apparently we have been.)

However, there’s nothing magic about a model. Using flawed methods, bad data, the wrong boundary, etc. can make the results GIGO. When a bad model blows up, the consequences can be just as harmful as any other bad reasoning. In addition, the metaphorical shrapnel hits the rest of us modelers. Currently, I’m hiding in my foxhole.

On top of the issues I mentioned previously, I think there are two more problems with the IHME model:

First, they fit the Normal distribution to cumulative cases, rather than incremental cases. Even in a parallel universe where the nonphysical curve fit was optimal, this would lead to understatement of the uncertainty in the projections.
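
A minimal illustration of the point (a generic curve-fitting sketch, not the IHME code): fit the same bell-shaped curve to simulated incident deaths and to their cumulative sum, and compare the parameter uncertainty each fit reports. Cumulating the data induces strong serial correlation in the errors, which an ordinary least-squares fit ignores, so the cumulative fit reports much tighter, and misleading, confidence intervals.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(0)
t = np.arange(100.0)

# Hypothetical "true" incident deaths: a bell-shaped curve plus Poisson noise
true_total, true_peak, true_width = 1000.0, 50.0, 10.0
incident_obs = rng.poisson(true_total * norm.pdf(t, true_peak, true_width)).astype(float)
cumulative_obs = np.cumsum(incident_obs)

def incident_model(t, total, peak, width):
    return total * norm.pdf(t, peak, width)

def cumulative_model(t, total, peak, width):
    return total * norm.cdf(t, peak, width)

p0 = [800.0, 40.0, 15.0]
_, cov_inc = curve_fit(incident_model, t, incident_obs, p0=p0)
_, cov_cum = curve_fit(cumulative_model, t, cumulative_obs, p0=p0)

# The cumulative fit reports much smaller standard errors (especially for the
# total), even though it sees exactly the same information as the incremental fit.
print("incremental fit std errors:", np.sqrt(np.diag(cov_inc)))
print("cumulative fit std errors: ", np.sqrt(np.diag(cov_cum)))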

Second, because the model has no operational mapping of real-world concepts to equation structure, you have no hooks for injecting policy changes and the uncertainty associated with them. You have to construct some kind of arbitrary index and translate it to changes in the size and timing of the peak in an unprincipled way. This defeats the purpose of having a model.

For example, from the methods paper:

A covariate of days with expected exponential growth in the cumulative death rate was created using information on the number of days after the death rate exceeded 0.31 per million to the day when different social distancing measures were mandated by local and national government: school closures, non-essential business closures including bars and restaurants, stay-at-home recommendations, and travel restrictions including public transport closures. Days with 1 measure were counted as 0.67 equivalents, days with 2 measures as 0.334 equivalents and with 3 or 4 measures as 0.

This postulates a relationship that has only the most notional grounding. There’s no concept of compliance, nor any sense of the effect of stringency and exceptions.

In the real world, there’s also no linear relationship between “# policies implemented” and “days of exponential growth.” In fact, I would expect this to be extremely nonlinear, with a threshold effect. Either your policies reduce R0 below 1 and the epidemic peaks and shrinks, or they don’t, and it continues to grow at some positive rate until a large part of the population is infected. I don’t think this structure captures that reality at all.
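
Here’s a minimal SIR sketch of that threshold (a deliberately tiny toy, not the IHME structure or my full model): the eventual attack fraction jumps sharply as R0 crosses 1, rather than scaling smoothly with the number of measures in place. Parameter values are illustrative.

# Toy SIR model illustrating the R0 threshold: below 1 the outbreak fizzles,
# above 1 it grows until a large share of the population has been infected.
# Parameters (infectious period, seed size, horizon) are illustrative.
def sir_attack_fraction(r0, infectious_period=5.0, days=1500, dt=0.1):
    gamma = 1.0 / infectious_period   # recovery rate (1/day)
    beta = r0 * gamma                 # transmission rate implied by R0
    s, i, r = 1.0 - 1e-4, 1e-4, 0.0   # start with a small infected seed
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r + i

for r0 in [0.7, 0.9, 1.1, 1.5, 2.5]:
    print(f"R0 = {r0}: final attack fraction ≈ {sir_attack_fraction(r0):.3f}")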

That’s why, in the IHME figure above (retrieved yesterday), you don’t see any scenarios in which the epidemic fizzles, because we get lucky and warm weather slows the virus, or there are many more mild cases than we thought. You also don’t see any runaway scenarios in which measures fail to bring R0 below 1, resulting in sustained growth. Nor is there any possibility of ending measures too soon, resulting in an echo.

For comparison, I ran some sensitivity runs of my model for North Dakota last night. I included uncertainty from the fit to data (for example, R0 constrained to fit observations via MCMC), some a priori uncertainty about the effectiveness and duration of measures, and uncertainty from the literature about fatality rates, seasonality, and unobserved asymptomatics.

I found that I couldn’t exclude the IHME projections from my confidence bounds, so they’re not completely crazy. However, they understate the uncertainty in the situation by a huge margin. They forecast the peak at a fairly definite time, plus or minus a factor of two. With my hybrid-SEIR model, the 95% bounds include variation by a factor of 10. The difference is that their bounds are derived only from curve fitting, and therefore omit a vast amount of structural uncertainty that is represented in my model.

Who is right? We could argue, but since the IHME model is statistically flawed and doesn’t include any direct effect of uncertainty in R0, prevalence of unobserved mild cases, temperature sensitivity of the virus, effectiveness of measures, compliance, travel, etc., I would not put any money on the future remaining within their confidence bounds.