AI for modeling – what (not) to do

Ali Akhavan and Mohammad Jalali have a nice new article in the SDR on the use of AI (LLMs) to complement simulation modeling.

Generative AI and simulation modeling: how should you (not) use large language models like ChatGPT

Ali Akhavan, Mohammad S. Jalali


Generative Artificial Intelligence (AI) tools, such as Large Language Models (LLMs) and chatbots like ChatGPT, hold promise for advancing simulation modeling. Despite their growing prominence and associated debates, there remains a gap in comprehending the potential of generative AI in this field and a lack of guidelines for its effective deployment. This article endeavors to bridge these gaps. We discuss the applications of ChatGPT through an example of modeling COVID-19’s impact on economic growth in the United States. However, our guidelines are generic and can be applied to a broader range of generative AI tools. Our work presents a systematic approach for integrating generative AI across the simulation research continuum, from problem articulation to insight derivation and documentation, independent of the specific simulation modeling method. We emphasize while these tools offer enhancements in refining ideas and expediting processes, they should complement rather than replace critical thinking inherent to research.

It’s loaded with useful examples of prompts and responses:

I haven’t really digested this yet, but I’m looking forward to writing about it. In the meantime, I’m very interested to hear your take in the comments.

How Beauty Dies

I’m lucky to live in a beautiful place, with lots of wildlife, open spaces, relative quiet and dark skies, and clean water. Keeping it that way is expensive. The cost is not money; it’s the time it takes to keep people from loving it to death, or simply exploiting it until the essence is lost.

Fifty years ago, some far sighted residents realized that development would eventually ruin the area, so they created conservation zoning designed to limit density and preserve natural resources. For a long time, that structure attracted people who were more interested in beauty than money, and the rules enjoyed strong support.

However, there are some side effects. First, the preservation of beauty in a locale, when everywhere else turns to burbs and billboards, raises values. Second, the low density needed to preserve resources creates scarcity, again raising property values. High valuations attract people who come primarily for the money, not for the beauty itself.

Every person who moves in exploits a little or a lot of the remaining resources, with the money people leaning towards extraction of as much as possible. As the remaining beauty degrades, fewer people are attracted for the beauty, and the nature of the place becomes more commercial.

A reinforcing feedback drives the system to a tipping point, when the money people outnumber the beauty people. Then they erode the regulations, and paradise is lost to a development free-for-all.

Defining SD

Open Access Note by Asmeret Naugle, Saeed Langarudi, Timothy Clancy:

A clear definition of system dynamics modeling can provide shared understanding and clarify the impact of the field. We introduce a set of characteristics that define quantitative system dynamics, selected to capture core philosophy, describe theoretical and practical principles, and apply to historical work but be flexible enough to remain relevant as the field progresses. The defining characteristics are: (1) models are based on causal feedback structure, (2) accumulations and delays are foundational, (3) models are equation-based, (4) concept of time is continuous, and (5) analysis focuses on feedback dynamics. We discuss the implications of these principles and use them to identify research opportunities in which the system dynamics field can advance. These research opportunities include causality, disaggregation, data science and AI, and contributing to scientific advancement. Progress in these areas has the potential to improve both the science and practice of system dynamics.

I shared some earlier thoughts here, but my refined view is in the SDR now:

Invited Commentaries by Tom Fiddaman, Josephine Kaviti Musango, Markus Schwaninger, Miriam Spano:

More reasons to love emissions pricing

I was flipping through a recent Tech Review, and it seemed like every other article was an unwitting argument for emissions pricing. Two examples:

Job title of the future: carbon accountant

We need carbon engineers who know how to make emissions go away more than we need bean counters to tally them. Are we also going to have nitrogen accountants, and PFAS accountants, and embodied methane in iridium accountants, and … ? That way lies insanity.

The fact is, if carbon had a nontrivial price attached at the wellhead, it would pervade the economy, and we’d already have carbon accountants. They’re called accountants.

More importantly, behind those accountants is an entire infrastructure of payment systems that enforces conservation of money. You can only cheat an accounting system for so long, before the cash runs out. We can’t possibly construct parallel systems providing the same robustness for every externality we’re interested in.

Here’s what we know about lab-grown meat and climate change

Realistically, no matter how hard we try to work out the relative emissions of natural and laboratory cows, the confidence bounds on the answer will remain wide until the technology is used at scale.

We can’t guide that scaling process by assessments that are already out of date when they’re published. Lab meat innovators need a landscape in which carbon is priced into their inputs, so they can make the right choices along the way.

Model quality: draining the swamp for large models

In my high road post a while ago, I advocated “voluntary simplicity” as a way of avoiding a large model with an insurmountable burden of undiscovered rework.

Sometimes this is not a choice, because you’re asked to repurpose a large model for some related question. Maybe you didn’t build it, or maybe you did, but it’s time to pause and reflect before proceeding. It’s critical to determine where on the spectrum above the model lies – between the Vortex of Confusion and Nirvana.

I will assert from experience that a large model was almost always built with some narrow questions in mind, and that it was exercised mainly over the parts of its state space relevant to those questions. It’s quite likely that a lot of bad behaviors lurk in other parts of the state space. When you repurpose the model, those things are going to bite you.

If you start down the red road (“we’ll just add a little stuff for the new question…”), you will find yourself in a world of hurt later. It’s absolutely essential that you first do some rigorous testing to establish what the limitations of the model might be in the new context.

The reason scope is so dangerous is that its effect on your ability to make quality progress is nonlinear. The number of interactions you have to manage, and therefore the opportunities for errors and the complexity of corrections, grows with the square of scope. The speed of your model falls off as 1/scope, as does the time you have to pay attention to each variable.

My tentative recipe for success, or at least survival:

1. Start early. Errors beget more errors, so the sooner you discover them, the sooner you can arrest that vicious cycle.

2. Be ruthless. Don’t test to see if the model can answer the new question; test to see if you can break it and get nonsense answers.

3. Use your tools. Pay attention to unit errors and runtime warnings. Write Reality Checks to automate tests. Set ranges on key variables to ensure that they’re within reason.

4. Isolate. Because of the nonlinear interaction problem, it’s hard to interpret tests on a full model. Instead, extract components and test them in isolation. You can do this by copy-pasting, or even easier in Vensim, by using Synthesim Overrides to modify inputs to steps, ramps, etc.

5. Don’t let go. When you find a problem, track it back to its root cause.

6. Document. Keep a lab notebook, or an email stream, or a todo list, so you don’t lose the thought when you have multiple issues to chase.

7. Be extreme. Pick a stock and kick it with a pulse or an override. Take all the people out of the factory, or all the ships out of the fleet. What happens? Does anything go negative? Do decisions remain consistent with goals? (See the sketch after this list.)

8. Calibrate. Calibration against data can be a useful way to find issues, but it’s much slower than other options, so this is something to pursue late in the process. Also, remember that model-data gaps are equally likely to reveal a problem with the data.

9. Rebuild. If you’re finding a lot of problems, you may be better off starting clean, using the existing model as a conceptual guide, but reconsidering the detailed design of the implementation.

10. Submodel. It’s often hard to fix something inside the full plate of spaghetti. But you may be able to identify a solution in an external submodel, free of distractions, and then transplant it back into the host.

11. Reduce. If you can’t rebuild the full scope within available resources, cut things out. This may not be appetizing to the client, but it’s certainly better than delivering a fragile model that only works if you don’t touch it.

12. If you find you’re in a hole, stop digging. Don’t add any features until you have things under control, because they’ll only exacerbate the problems.

13. Communicate. Let the client, and your team, know what you’re up to, and why quality is more important than their cherished kitchen sink.
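
Several of these tests are described in Vensim terms, but the “be extreme” idea is tool-agnostic. Here’s a toy sketch of it in Python; the little inventory-workforce structure is invented purely for illustration, not taken from any real model.

```python
import numpy as np

def run_factory(workforce_shock_day=None, days=200, dt=0.25):
    """Toy stock-flow component: inventory is filled by production (proportional
    to workforce) and drained by constant shipments. Purely illustrative."""
    inventory, workforce = 100.0, 50.0
    productivity, shipments = 0.4, 20.0
    inv_history = []
    for step in range(int(days / dt)):
        t = step * dt
        if workforce_shock_day is not None and abs(t - workforce_shock_day) < dt / 2:
            workforce = 0.0  # "take all the people out of the factory"
        production = productivity * workforce
        inventory += dt * (production - shipments)
        inv_history.append(inventory)
    return np.array(inv_history)

base = run_factory()
extreme = run_factory(workforce_shock_day=50)

print(base.min() >= 0)     # True: looks fine under normal conditions
print(extreme.min() >= 0)  # False: inventory goes negative under the extreme test
```

The extreme test immediately reveals a formulation problem that normal conditions never expose: shipments keep flowing after the stock is empty, so the outflow needs a first-order (MIN-style) constraint on the inventory.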

Integration & Correlation

Claims by AI chatbots, engineers and Nobel prize winners notwithstanding, absence of correlation does not prove absence of causation, any more than presence of correlation proves presence of causation. Bard outlines several reasons, from noise to nonlinearity, but misses a key one: bathtub statistics.

Here’s a really simple example of how this reasoning can go wrong. Consider a system with a stock Y(t) that integrates a flow X(t):

X(t) = -t

Y(t) = ∫X(t)dt

We don’t need to simulate to solve for Y(t) = -1/2*t^2 +C.

Over the interval t=[-1,1] the X and Y time series look like this:

The X-Y relationship is parabolic, with correlation zero:

Zero correlation can’t mean “not causal” because we constructed the system to be causal. Even worse, the sign of the relationship depends on the subset of the interval you examine:
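
If you’d rather check this numerically than take the algebra on faith, here’s a minimal sketch (Python/NumPy):

```python
import numpy as np

# X(t) = -t on t in [-1, 1]; Y is the integral of X, so Y(t) = -1/2*t^2 (+ C)
t = np.linspace(-1, 1, 2001)
x = -t
y = -0.5 * t**2   # the constant of integration doesn't affect correlation

# over the full symmetric interval, the correlation is (numerically) zero
print(np.corrcoef(x, y)[0, 1])                  # ~0, despite pure causation

# and the sign of the relationship flips depending on which half you examine
print(np.corrcoef(x[t < 0], y[t < 0])[0, 1])    # strongly negative
print(np.corrcoef(x[t >= 0], y[t >= 0])[0, 1])  # strongly positive
```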

This is not the only puzzling case. Consider instead:

X(t) = 1

Y(t) = ∫X(t)dt = t + C

In this case, X(t) has zero variance. But Corr(X,Y) = Cov(X,Y)/(σ(X)σ(Y)), which is 0/0. What are we to make of that?

This pathology can also arise from feedback. Consider a thermostat that controls a heater that operates in two states (on or off). If the heater is fast, and the thermostat is sensitive with a narrow temperature band, then σ(temperature) will be near 0, even though the heater is cycling with σ(heater state)>0.
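
Here’s a quick numerical look at both pathologies (a Python sketch; the thermostat parameters are made up to illustrate the point, not calibrated to anything):

```python
import numpy as np

# Case 1: constant X has zero variance, so Corr(X, Y) is 0/0
x_const = np.ones(100)
y_ramp = np.cumsum(x_const)                # Y = integral of X = t + C
print(np.corrcoef(x_const, y_ramp)[0, 1])  # nan, with a divide-by-zero warning

# Case 2: bang-bang thermostat with a fast heater and a narrow deadband
dt, setpoint, band = 0.001, 20.0, 0.02
temp, heater = 20.0, 0.0
temps, states = [], []
for _ in range(50000):
    if temp < setpoint - band / 2:
        heater = 1.0                       # switch on below the band
    elif temp > setpoint + band / 2:
        heater = 0.0                       # switch off above it
    temp += dt * (20.0 * heater - 5.0 * (temp - 18.0))  # heating vs. loss to ambient
    temps.append(temp)
    states.append(heater)

temps, states = np.array(temps), np.array(states)
print(temps.std())                       # tiny: temperature is essentially constant
print(states.std())                      # ~0.5: the heater is cycling hard
print(np.corrcoef(states, temps)[0, 1])  # near zero, though the heater drives the temperature
```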

AI Chatbots on Causality

Having recently encountered some major causality train wrecks, I got curious about LLM “understanding” of causality. If AI chatbots are trained on the web corpus, and the web doesn’t “get” causality, there’s no reason to think that AI will make sense either.

TL;DR: ChatGPT and Bing utterly fail this test, for reasons that are evident in Google Bard’s surprisingly smart answer.


Bing: FAIL

Google Bard: PASS

Google gets strong marks for mentioning a bunch of reasons to expect that we might not find a correlation, even though x is known to cause y. I’d probably only give it a B+, because it neglected integration and feedback, but it’s a good answer that properly raises lots of doubts about simplistic views of causality.

Thyroid Dynamics: Hyper Resurgence

In my last thyroid post, I described a classic case of overshoot due to failure to account for delays. I forgot to mention the trigger for this episode.

At point B above, there was a low TSH measurement, at .05, well below the recommended floor of .4. That was taken as a signal for a dose reduction, which is qualitatively reasonable.

Let’s suppose we believe the dose-TSH response to be stable:

Then we have some puzzles. First, at 200mcg, we’d expect TSH=.15, about 3x higher than the observed .05. To get the observed measurement, we’d expect the dose to be more like 225mcg. Second, exactly the same reading of .05 has been observed at a much lower dose (162.5mcg), which is in the sweet spot (yellow box) we should be targeting. Third, also within that sweet spot, at 150mcg, we’ve seen TSH as high as 15 – far out of range in the opposite direction.

I think an obvious conclusion is that noise in the system is extreme, so there’s good reason to respond by discounting the measurement and retesting. But that’s not what happens in general. Here’s a plot of TSH observations (x, log scale) against subsequent dose adjustments (y, %):
There are three clusters of note.

  • The yellow-highlighted points are low TSH values that were followed by large dose reductions, exceeding guidelines.
  • The green points are the large dose increases needed to reverse the yellow changes, once those proved to be errors.
  • The purple points (3 total) are high TSH readings, right at the top of the recommended range, that did not induce a dose increase, even though they were accompanied by symptom complaints.

This is interesting, because the trendline seems to indicate a reasonable, if noisy, strategy of targeting TSH=1. But the operative decision rule for several of the doctors involved seems to be more like:

  • If you get a TSH measurement at the high end of the range, indicating a dose increase might be appropriate, ignore it.
  • If you get a low TSH measurement, PANIC. Cut dose drastically.
  • If you’re the doctor replacing the one just fired for screwing this up, restore the status quo.

Why is this? I think it’s an error in reasoning. Low TSH could be caused by excessive T4 levels, which could arise from (a) overtreatment of a hypothyroid patient, or (b) hyperthyroid activity in a previously hypothyroid patient. In the case described previously, evidence from T4 testing as well as the long term relationship suggested that the dose was 20-30% high, but it was ultimately reduced by 60%. But in two other cases, there was no T4 confirmation, and the dose was right in the middle of its apparent sweet spot. That rules out overtreatment, so the mental model behind a dose reduction has to be (b). But that makes no sense. It’s a negative feedback system, yet somehow the thyroid has increased its activity, in response to a reduction in the hormone that normally signals it to do so? Admittedly, there are possibilities like cancer that could explain such behavior, but no one has ever explored that possibility in N=1’s case.

I think the basic problem here is that it’s hard to keep a mechanistic model of a complex hormone signalling system in your head, which makes it easy to get fooled by delays, feedback, noise and nonlinearity. Bad information systems and TSH monomania contribute to the problem, as does ignoring dose guidelines due to overconfidence.

So what should happen in response to a low TSH measurement in patient N=1? I think it’s more like the following:

  • Don’t panic.
    • It might be a bad measurement (labs don’t correct for seasonality, time of day, and other features that could inflate variance beyond the precision of the test itself).
    • It might be some unknown source of variability driving TSH, like food, medications, or endogenous variation in upstream hormones.
  • Look at the measurement in the context of other information: the past dose-response relationship, T4, symptoms, and the reference dose per unit body mass.
  • Make at most a small move, wait for the guideline-prescribed period, and retest.

Thyroid Dynamics: Dose Management Challenges

In my last two posts about thyroid dynamics, I described two key features of the information environment that set up a perfect storm for dose management:

  1. The primary indicator of the system state for a hypothyroid patient is TSH, which has a nonlinear (exponential) response to T3 and T4. This means you need to think about TSH on a log scale (see the sketch after this list), but test results are normally presented on a linear scale. Information about the distribution is hard to come by. (I didn’t mention it before, but there’s also an element of the Titanic steering problem, because TSH moves in a direction opposite the dose and T3/T4.)
  2. Measurements of TSH are subject to a rather extreme mix of measurement error and driving noise (probably mostly the latter). Test results are generally presented without any indication of uncertainty, and doctors generally have very few data points to work with.
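
To make the log-scale point concrete, here’s a tiny sketch (Python), using TSH values like the ones that appear in these thyroid posts:

```python
import numpy as np

# TSH values like those in these posts: 0.05 (the "panic" low), 1.0 (a rough
# target), and 15 (the high outlier)
tsh = np.array([0.05, 1.0, 15.0])

# linear distance from the target makes the high reading look ~15x more extreme...
print(np.abs(tsh - 1.0))        # [ 0.95  0.   14.  ]

# ...but on a log scale the two extremes are comparably far from the target
# (factors of 20x and 15x, respectively)
print(np.abs(np.log10(tsh)))    # [1.3   0.   1.18]
```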

As if that weren’t enough, the physics of the system is tricky. A change in dose is reflected in T4 and T3, then in TSH, only after a delay. This is a classic “delayed negative feedback loop” situation, much like the EPO-anemia management challenge in the excellent work by Jim Rogers, Ed Gallaher & David Dingli.

If you have a model, like Rogers et al. do, you can make fairly rapid adjustments with confidence. If you don’t, you need to approach the problem like an unfamiliar shower: make small, slow adjustments. If you react too quickly, you’ll excite oscillations. Dose titration guidelines typically reflect this:

Titrate dosage by 12.5 to 25 mcg increments every 4 to 6 weeks, as needed until the patient is euthyroid.

Just how long should you wait before making a move? That’s actually a little hard to work out from the literature. I asked OpenEvidence about this, and the response was typically vague:

The expected time delay between adjusting the thyroid replacement dose and the response of thyroid-stimulating hormone (TSH) is typically around 4 to 6 weeks. This is based on the half-life of levothyroxine (LT4), which reaches steady-state levels by then, and serum TSH, which reaches its nadir at the same time.[1]

The first citation is the ATA guidelines, but when you consult the details, there’s no cited basis for the 4-6 weeks. Presumably this is some kind of 3-tau rule of thumb learned from experience. As an alternative, I tested a dose change in the Eisenberg et al. model:

At the arrow, I double the synthetic T4 dose on a hypothetical person, then observe the TSH trajectory. Normally, you could then estimate the time constant directly from the chart: 63% of the adjustment is realized at 1*tau, 86% at 2*tau, 95% at 3*tau. If you do that here, tau is about 8 days. But not so fast! TSH responds exponentially, so you need to look at this on a log-y scale:

Looking at this correctly, tau is somewhat longer: about 12-13 days. This is still potentially tricky, because the Eisenberg model is not first order. However, it’s reassuring that I get similar time constants when I estimate my own low-order metamodel.
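
Here’s a sketch of that procedure (Python). The “data” is a synthetic response with made-up constants, standing in for the Eisenberg model output, and the roughly log-linear TSH response to T4 is an assumption for illustration:

```python
import numpy as np

# T4 equilibrates first-order with time constant tau after a dose change;
# TSH responds roughly exponentially (log-linearly) to T4. Constants are invented.
tau = 12.5                                   # true adjustment time constant, days
t = np.linspace(0, 90, 9001)
t4 = 1.0 + (1.0 - np.exp(-t / tau))          # T4 rising to a new equilibrium
tsh = 2.0 * 10 ** -(t4 - 1.0)                # TSH falls ~10x as T4 doubles

def time_to_63pct(series, t):
    """Time at which a monotone series has covered 63.2% of its total change."""
    target = series[0] + 0.632 * (series[-1] - series[0])
    return t[np.argmin(np.abs(series - target))]

print(time_to_63pct(tsh, t))           # ~5.7 days: the linear-scale read looks fast
print(time_to_63pct(np.log(tsh), t))   # ~12.5 days: the log-scale read recovers tau
```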

Taking this result at face value, one could roughly say that TSH is 95% equilibrated to a dose change after about 5 weeks, which corresponds pretty well with the ATA guidelines.

This is a long setup for … the big mistake. Referring to the lettered episodes on the chart above, here’s what happened.

  • A: Dose is constant at about 200mcg (a little hard to be sure, because it was a mix of 2 products, and the equivalents aren’t well established).
  • B: New doctor orders a test, which comes out very low (.05), out of the recommended range. Given the long term dose-response relationship, we’d expect about .15 at this dose, so it seems likely that this was a confluence of dose-related factors and noise.
  • C: New doc orders an immediate drastic reduction of dose by 37.5% or 75mcg (3 to 6 times the ATA recommended adjustment).
  • D: Day 14 from dose change, retest is still low (.2). At this point you’d expect that TSH is at most 2/3 equilibrated to the new dose. Over extremely vociferous objections, doc orders another 30% reduction to 88mcg.
  • E: Patient feeling bad, experiencing hair loss and other symptoms. Goes off the reservation and uses remaining 125mcg pills. Coincident test is in range, though one would not expect it to remain so, because the dose changes are not equilibrated.
  • F: Suffering a variety of hypothyroid symptoms at the lower dose.
  • G: Retest after an appropriately long interval is far out of range on the high side (TSH near 7). Doc unresponsive.
  • H: Fired the doc. New doc restores dose to 125mcg immediately.
  • I: After an appropriate interval, retest puts TSH at 3.4, on the high side of the ATA range and above the NACB guideline. Doc adjusts to 175mcg, in part considering symptoms rather than test results.

This is an absolutely classic case of overshooting a goal in a delayed negative feedback system. There are really two problems here: failure to anticipate the delay, and therefore making a second adjustment before the first was stabilized, and making overly aggressive changes, much larger than guidelines recommend.
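
To see how those two problems interact, here’s a minimal simulation sketch (Python). The dose-TSH response, the lag, and the adjustment rules are all invented for illustration; this is not clinical guidance.

```python
import numpy as np

def simulate(review_days, max_step, days=420):
    """Crude dose-adjustment loop in a delayed negative feedback system. TSH
    equilibrates to the dose with a ~12-day lag; every review_days the 'doctor'
    moves the dose toward a TSH target of 1.0, limited to max_step mcg per move."""
    tau, k = 12.5, 0.02            # lag (days) and log-linear dose response (per mcg)
    dose, tsh = 150.0, 5.0         # start hypothyroid, in equilibrium at 150 mcg
    trajectory = []
    for day in range(days):
        if day % review_days == 0:
            # dose correction implied by the equilibrium curve, given today's TSH
            correction = np.log(tsh / 1.0) / k
            dose += np.clip(correction, -max_step, max_step)
        tsh_eq = 5.0 * np.exp(-k * (dose - 150.0))   # where TSH is headed
        tsh += (tsh_eq - tsh) / tau                  # first-order lag, dt = 1 day
        trajectory.append(tsh)
    return np.array(trajectory)

# impatient and aggressive: retest at 2 weeks, take the full implied correction
print(simulate(review_days=14, max_step=1000).min())  # ~0.6: TSH undershoots the target

# guideline-style: small moves, about 6 weeks apart
print(simulate(review_days=42, max_step=25).min())    # ~1.0: settles without undershooting
```

The impatient loop doesn’t just settle more slowly; it drives TSH below the target and oscillates back, which is exactly the overshoot signature in the episode above.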

So, what’s really going on? I’ve been working with a simplified meta version of the Eisenberg model to figure this out. (The full model is hourly, and therefore impractical to run with Kalman filtering over multi-year horizons. It’s silly to use that much computation on a dozen data points.)

The problem is, the model can’t replicate the data without invoking huge driving noise – there simply isn’t anything in the structure that can account for data points far from the median behavior. I’ve highlighted a few above. At each of these points, the model takes a huge jump, not because of any known dynamics, but because of a filter reset of the model state. This is a strong hint that there’s an unobserved state influencing the system.

If we could get docs to provide a retest at these outlier points, we could at least rule out measurement error, but that has almost never happened. Also, if docs would routinely order a full panel including T3 and T4, not just TSH, we might have a better mechanistic explanation, but that has also been hard to get. Recently, a doc ordered a full panel, but office staff unilaterally reduced the scope to TSH only, because they felt that testing T3 and T4 was “unconventional”. No doubt this is because the ATA and some authors have been shouting that TSH is the only metric needed, and any nuances that arise when the evidence contradicts that message get lost.

For our N=1, the instability of the TSH/T4 relationship contradicts the conventional wisdom, which is that individuals have a stable set point, with the observed high population variation arising from diversity of set points across individuals:

I think the obvious explanation in our N=1 is that some individuals have an unstable set point. You could visualize that in the figure above as moving from one intersection of curves to another. This could arise from a change in the T4->TSH curve (e.g. something upstream of TSH in the hypothalamic-pituitary-thyroid axis) or the TSH->T4 relationship (intermittent secretion or conversion). Unfortunately very few treatment guidelines recognize this possibility.

Thyroid Dynamics: Chartjunk

I just ran across a funny instance of TSH nonlinearity. Check out the axis on this chart:

It’s not as bad as you’d think: the irregular axis is actually a decent approximation of a log-linear scale:

My main gripe is that the perceptual midpoint of the ATA range bar on the chart is roughly 0.9, whereas the true logarithmic midpoint is more like 1.6. The NACB bar is similarly distorted.