ChatGPT struggles with pandemics

I decided to try a trickier problem on ChatGPT: epidemiology. This is tougher because it requires some domain knowledge about terminology as well as some math. R0 itself is a slippery concept. It appears that ChatGPT is essentially equating R0 with the transmission rate; perhaps the result would be different had I used a … Continue reading “ChatGPT struggles with pandemics”
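The distinction matters: in the standard SIR model, R0 is the transmission rate divided by the recovery rate, so two diseases with the same transmission rate can have very different R0. A minimal sketch, with illustrative parameters that are not calibrated to any real disease:

```python
# Minimal Euler-integrated SIR model showing why R0 is not the
# transmission rate. Parameters are illustrative only.
beta = 0.3    # transmission rate, per day
gamma = 0.1   # recovery rate (1 / infectious duration), per day
R0 = beta / gamma  # basic reproduction number = 3.0

dt = 0.1
S, I, R = 0.999, 0.001, 0.0  # fractions of the population
for _ in range(int(200 / dt)):
    infections = beta * S * I * dt
    recoveries = gamma * I * dt
    S -= infections
    I += infections - recoveries
    R += recoveries

# With R0 = 3, the epidemic ultimately infects most of the population,
# even though the transmission rate beta is a modest 0.3/day.
print(round(R, 2))
```

Halving the infectious duration (doubling gamma) halves R0 without touching the transmission rate, which is exactly the conflation at issue.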

ChatGPT and the Department Store Problem

Continuing with the theme, I tried the department store problem out on ChatGPT. This is a common test of stock-flow reasoning, in which participants assess the peak stock of people in a store from data on the inflow and outflow. I posed a simplified version of the problem: Interestingly, I had intended to have 6 … Continue reading “ChatGPT and the Department Store Problem”
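The core of the stock-flow logic can be sketched in a few lines: the stock of people peaks when inflow falls below outflow, not when inflow itself peaks. The hourly counts below are made up for illustration, not the data from the actual problem:

```python
# Department store problem sketch: accumulate net flow to find peak stock.
# Hypothetical hourly counts of people entering and leaving.
inflow  = [10, 30, 50, 40, 20, 10, 0]
outflow = [ 0,  5, 20, 35, 40, 35, 25]

stock = 0
peak_stock, peak_hour = 0, 0
for hour, (i, o) in enumerate(zip(inflow, outflow)):
    stock += i - o  # stock integrates (inflow - outflow)
    if stock > peak_stock:
        peak_stock, peak_hour = stock, hour

# Inflow peaks at hour 2, but the stock keeps rising until hour 3,
# when inflow first drops below outflow.
print(peak_hour, peak_stock)  # -> 3 70
```

This is the step people (and apparently chatbots) tend to miss: reading the peak of the inflow curve as the peak of the stock.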

Can ChatGPT generalize Bathtub Dynamics?

Research indicates that insights about stock-flow management don’t necessarily generalize from one situation to another. People can fill their bathtubs without comprehending the federal debt or COVID prevalence. ChatGPT struggles a bit with the climate bathtub, so I wondered if it could reason successfully about real bathtubs. The last sentence is a little tricky, but … Continue reading “Can ChatGPT generalize Bathtub Dynamics?”

ChatGPT does the Climate Bathtub

Following up on our earlier foray into AI conversations about dynamics, I decided to probe ChatGPT’s understanding of bathtub dynamics further. First I repeated our earlier question about climate: This is close, but note that it’s suggesting that a decrease in emissions corresponds with a decrease in concentration. This is not necessarily true in … Continue reading “ChatGPT does the Climate Bathtub”
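The bathtub point is easy to demonstrate numerically: as long as emissions exceed removals, the concentration keeps rising even while emissions fall. A stylized sketch with assumed, round numbers (not a calibrated carbon cycle model):

```python
# Climate bathtub sketch: declining emissions do not imply declining
# concentration. All numbers are stylized for illustration.
concentration = 420.0   # stock, ppm-equivalent (assumed)
removal_rate = 0.005    # fraction of stock removed per year (assumed)
emissions = 5.0         # inflow, ppm-equivalent per year (assumed)

trajectory = []
for year in range(50):
    emissions *= 0.97   # emissions fall 3% per year
    concentration += emissions - removal_rate * concentration
    trajectory.append(concentration)

# Emissions decline from year one, yet the concentration is still
# higher at year 20 than at year 10, and higher at year 10 than at year 0.
print(trajectory[0] < trajectory[10] < trajectory[20])
```

The stock only turns around once the falling inflow drops below the outflow, which is the behavior ChatGPT’s answer glosses over.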

Modeling with ChatGPT

A couple weeks ago my wife started probing ChatGPT’s abilities. An early foray suggested that it didn’t entirely appreciate climate bathtub dynamics. She decided to start with a less controversial topic: If there was a hole that went through the center of the moon, and I jumped in, how long would it take for me … Continue reading “Modeling with ChatGPT”
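The moon-hole question has a clean closed-form answer worth having on hand when grading the bot: inside a uniform-density sphere, gravity is proportional to distance from the center, so the jumper undergoes simple harmonic motion, and the one-way transit time is half the period, t = π·√(R³/GM). A sketch, assuming a uniform-density Moon and ignoring rotation and air (of which the Moon conveniently has none):

```python
import math

# "Hole through the moon" sketch: SHM inside a uniform-density sphere.
# One-way transit time is half the orbital-style period: pi*sqrt(R^3/(G*M)).
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 7.342e22    # mass of the Moon, kg
R = 1.7374e6    # mean radius of the Moon, m

transit = math.pi * math.sqrt(R**3 / (G * M))
print(round(transit / 60, 1))  # one-way transit time, in minutes
```

The answer comes out at roughly 54 minutes, which gives a concrete benchmark against which to judge ChatGPT’s reasoning.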

AI Chatbots on Causality

Having recently encountered some major causality train wrecks, I got curious about LLM “understanding” of causality. If AI chatbots are trained on the web corpus, and the web doesn’t “get” causality, there’s no reason to think that AI will make sense either. TL;DR: ChatGPT and Bing utterly fail this test, for reasons that are evident … Continue reading “AI Chatbots on Causality”

AI doesn’t help modelers

Large language model AI doesn’t help with modeling. At least, that’s my experience so far. DALL-E images from Bing image creator. On the ACM blog, Bertrand Meyer argues that AI doesn’t help programmers either. I think his reasons are very much compatible with what I found attempting to get ChatGPT to discuss dynamics: Here is … Continue reading “AI doesn’t help modelers”

Sources of Information for Modeling

The traditional picture of information sources for modeling is a funnel. For example, in Some Basic Concepts in System Dynamics (2009), Forrester showed: I think the diagram, or at least the concept, is much older than that. However, I think the landscape has changed a lot, with more to come. Generally, the mental database hasn’t … Continue reading “Sources of Information for Modeling”