Et tu, Groupon?

Is Groupon overvalued too? Modeling Groupon actually proved a bit more challenging than modeling Facebook in my last post.

Again, I followed in the footsteps of Cauwels & Sornette, starting with the SEC filing data they used, with an update via Google. C&S fit a logistic to Groupon’s cumulative repeat sales. That’s actually the end of a cascade of participation metrics, all of which show logistic growth:

The variable of greatest interest with respect to revenue is Groupons sold. But the others also play a role in determining costs – it takes money to acquire and retain customers. Also, there are actually two populations growing logistically – users and merchants. Growth is presumably a function of the interaction between these two populations. The attractiveness of Groupon to customers depends on having good deals on offer, and the attractiveness to merchants depends on having a large customer pool.

I decided to start with the customer side. The customer supply chain looks something like this:

The subscribers data includes all three stocks, cumulative customers covers the right two, and cumulative repeat customers is just the rightmost.
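For concreteness, here’s a rough numerical sketch of that chain (in Python, with made-up parameters rather than anything fitted to Groupon’s data); the point is just how the three stocks map to the reported series:

import numpy as np

dt = 0.25                                  # months
t = np.arange(0, 48, dt)
never_bought, one_time, repeat_cust = 1.0, 0.0, 0.0   # stocks, millions of subscribers

acquisition   = 0.5    # new subscribers per month (millions), held constant here
first_buy_fr  = 0.05   # fraction of never-bought subscribers making a first purchase, per month
repeat_buy_fr = 0.10   # fraction of one-time customers buying again, per month

for _ in t:
    first_purchases  = first_buy_fr * never_bought
    repeat_purchases = repeat_buy_fr * one_time
    never_bought += (acquisition - first_purchases) * dt
    one_time     += (first_purchases - repeat_purchases) * dt
    repeat_cust  += repeat_purchases * dt

subscribers          = never_bought + one_time + repeat_cust   # all three stocks
cumulative_customers = one_time + repeat_cust                  # the right two
cumulative_repeat    = repeat_cust                             # just the rightmost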

Continue reading “Et tu, Groupon?”

Time to short some social network stocks?

I don’t want to wallow too long in metaphors, so here’s something with a few equations.

A recent arXiv paper by Peter Cauwels and Didier Sornette examines market projections for Facebook and Groupon, and concludes that they’re wildly overvalued.

We present a novel methodology to determine the fundamental value of firms in the social-networking sector based on two ingredients: (i) revenues and profits are inherently linked to its user basis through a direct channel that has no equivalent in other sectors; (ii) the growth of the number of users can be calibrated with standard logistic growth models and allows for reliable extrapolations of the size of the business at long time horizons. We illustrate the methodology with a detailed analysis of facebook, one of the biggest of the social-media giants. There is a clear signature of a change of regime that occurred in 2010 on the growth of the number of users, from a pure exponential behavior (a paradigm for unlimited growth) to a logistic function with asymptotic plateau (a paradigm for growth in competition). […] According to our methodology, this would imply that facebook would need to increase its profit per user before the IPO by a factor of 3 to 6 in the base case scenario, 2.5 to 5 in the high growth scenario and 1.5 to 3 in the extreme growth scenario in order to meet the current, widespread, high expectations. […]

I’d argue that the basic approach, fitting a logistic to the customer base growth trajectory and multiplying by expected revenue per customer, is actually pretty ancient by modeling standards. (Most system dynamicists will be familiar with corporate growth models based on the mathematically-equivalent Bass diffusion model, for example.) So the surprise for me here is not the method, but that forecasters aren’t using it.

Looking around at some forecasts, it’s hard to say what forecasters are actually doing. There’s lots of handwaving and blather about multipliers, and little revelation of actual assumptions (unlike the paper). It appears to me that a lot of forecasters are counting on big growth in revenue per user, and not really thinking deeply about the user population at all.

To satisfy my curiosity, I grabbed the data out of Cauwels & Sornette, updated it with the latest user count and revenue projection, and repeated the logistic model analysis. A few observations:

I used a generalized logistic, which has one more parameter, capturing possible nonlinearity in the decline of the growth rate of users with increasing saturation of the market. Here’s the core model:
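For those who want to tinker numerically, here’s a rough sketch of a generalized logistic along those lines (in Python, not the Vensim model itself; r, k and nu are purely illustrative, not the fitted values):

import numpy as np

def generalized_logistic(t, x0, r, k, nu):
    # Euler integration of dx/dt = r*x*(1 - (x/k)**nu)
    x = np.empty_like(t)
    x[0] = x0
    dt = t[1] - t[0]
    for i in range(1, len(t)):
        x[i] = x[i-1] + r * x[i-1] * (1 - (x[i-1] / k) ** nu) * dt
    return x

t = np.linspace(0, 10, 500)                                     # years since launch
users = generalized_logistic(t, x0=0.001, r=1.5, k=1.0, nu=0.7)

Setting nu = 1 recovers the plain logistic; nu < 1 makes the growth rate fall off more sharply as the market saturates, and nu > 1 less sharply.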

Continue reading “Time to short some social network stocks?”

Models and metaphors

My last post about metaphors ruffled a few feathers. I was a bit surprised, because I thought it was pretty obvious that metaphors, like models, have their limits.

The title was just a riff on the old George Box quote, “all models are wrong, some are useful.” People LOVE to throw that around. I once attended an annoying meeting where one person said it at least half a dozen times in the space of two hours. I heard it in three separate sessions at STIA (which was fine).

I get nervous when I hear, in close succession, about the limits of formal mathematical models and the glorious attributes of metaphors. Sure, a metaphor (using the term loosely, to include similes and analogies) can be an efficient vehicle for conveying meaning, and might lend itself to serving as an icon in some kind of visualization. But there are several possible failure modes:

  • The mapping of the metaphor from its literal domain to the concept of interest may be faulty (a bathtub vs. a true exponential decay process).
  • The point of the mapping may be missed. (If I compare my organization to the Three Little Pigs, does that mean I’ve built a house of brick, or that there are a lot of wolves out there, or we’re pigs, or … ?)
  • Listeners may get the point, but draw unintended policy conclusions. (Do black swans mean I’m not responsible for disasters, or that I should have been more prepared for outliers?)

These are not all that different from problems with models, which shouldn’t really come as a surprise, because a model is just a special kind of metaphor – a mapping from an abstract domain (a set of equations) to a situation of interest – and neither a model nor a metaphor is the real system.

Models and other metaphors have distinct strengths and weaknesses though. Metaphors are efficient, cheap, and speak to people in natural language. They can nicely combine system structure and behavior. But that comes at a price of ambiguity. A formal model is unambiguous, and therefore easy to test, but potentially expensive to build and difficult to share with people who don’t speak math. The specificity of a model is powerful, but also opens up opportunities for completely missing the point (e.g., building a great model of the physics of a situation when the crux of the problem is actually emotional).

I’m particularly interested in models for their unique ability to generate reliable predictions about behavior from structure and to facilitate comparison with data (using the term broadly, to include more than just the tiny subset of reality that’s available in time series). For example, if I argue that the number of Facebook accounts grows logistically, according to dx/dt = r*x*(k-x) for a certain r, k and x(0), we can agree on exactly what that means. Even better, we can estimate r and k from data, and then check later to verify that the model was correct. Try that with “all the world’s a stage.”
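Here’s a sketch of that estimation step, with synthetic data standing in for the real series (the parameter values and the 5% noise are just illustrative):

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, r, k, x0):
    # closed-form solution of dx/dt = r*x*(k-x)
    return k / (1.0 + ((k - x0) / x0) * np.exp(-r * k * t))

t_obs = np.arange(0.0, 8.0, 0.5)                             # years of data
x_obs = logistic(t_obs, r=1.2, k=1.0, x0=0.01)               # "true" trajectory
x_obs = x_obs * (1 + 0.05 * np.random.randn(len(t_obs)))     # plus 5% noise

(r_hat, k_hat, x0_hat), cov = curve_fit(logistic, t_obs, x_obs, p0=(1.0, 1.5, 0.05))
print(r_hat, k_hat)          # estimates to check against later observations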

If you only have metaphors, you have to be content with not solving a certain class of problems. Consider climate change. I say it’s a bathtub, you say it’s a Random Walk Down Wall Street. To some extent, each is true, and each is false. But there’s simply no way to establish which processes dominate accumulation of heat and endogenous variability, or to predict the outcome of an experiment like doubling CO2, by verbal or visual analogy. It’s essential to introduce some math and data.

Models alone won’t solve our problems either, because they don’t speak to enough people, and we don’t have models for the full range of human concerns. However, I’d argue that we’re already drowning in metaphors, including useless ones (like “the war on [insert favorite topic]”), and in dire need of models and model literacy to tackle our thornier problems.

Forest Cover Tipping Points

There’s an interesting discussion of forest tipping points in a new paper in Science:

Global Resilience of Tropical Forest and Savanna to Critical Transitions

Marina Hirota, Milena Holmgren, Egbert H. Van Nes, Marten Scheffer

It has been suggested that tropical forest and savanna could represent alternative stable states, implying critical transitions at tipping points in response to altered climate or other drivers. So far, evidence for this idea has remained elusive, and integrated climate models assume smooth vegetation responses. We analyzed data on the distribution of tree cover in Africa, Australia, and South America to reveal strong evidence for the existence of three distinct attractors: forest, savanna, and a treeless state. Empirical reconstruction of the basins of attraction indicates that the resilience of the states varies in a universal way with precipitation. These results allow the identification of regions where forest or savanna may most easily tip into an alternative state, and they pave the way to a new generation of coupled climate models.

Science 14 October 2011

The paper is worth a read. It doesn’t present an explicit simulation model, but it does describe the concept nicely. The basic observation is that there’s clustering in the distribution of forest cover vs. precipitation:

Hirota et al., Science 14 October 2011

In the normal regression mindset, you’d observe that some places with 2m rainfall are savannas, and others are forests, and go looking for other explanatory variables (soil, latitude, …) that explain the difference. You might learn something, or you might get into trouble if forest cover is not only nonlinear in various inputs, but state-dependent. The authors pursue the latter thought: that there may be multiple stable states for forest cover at a given level of precipitation.

They use the precipitation-forest cover distribution and the observation that, in a first-order system subject to noise, the distribution of observed forest cover reveals something about the potential function for forest cover. Using kernel smoothing, they reconstruct the forest potential functions for various levels of precipitation:

Hirota et al., Science 14 October 2011

I thought that looked fun to play with, so I built a little model that qualitatively captures the dynamics:

The tricky part was reconstructing the potential function without the data. It turned out to be easier to write the rate equation for forest cover change at medium precipitation (“change function” in the model), and then tilt it with an added term when precipitation is high or low. Then the potential function is reconstructed from its relationship to the derivative, dz/dt = f(z) = -dV/dz, where z is forest cover and V is the potential.
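Here’s a rough sketch of that construction (these are not the model’s actual equations; the quintic and the tilt coefficient are just illustrative). The rate function has roots at the three attractors (0, 0.2, 0.9) and at the unstable thresholds between them (0.1, 0.55), and the potential is recovered by integration:

import numpy as np

def dz_dt(z, precip, tilt=0.1):
    change = -40 * z * (z - 0.1) * (z - 0.2) * (z - 0.55) * (z - 0.9)
    return change + tilt * (precip - 0.5)    # wetter tilts the landscape toward forest

z = np.linspace(0, 1, 501)
potentials = {}
for precip in (0.2, 0.5, 0.8):               # low, medium, high precipitation
    f = dz_dt(z, precip)
    V = -np.cumsum(f) * (z[1] - z[0])        # V(z) = -(integral of f dz), up to a constant
    potentials[precip] = V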

That yields the following potentials and vector fields (rates of change) at low, medium and high precipitation:

If you start this system at different levels of forest cover, for medium precipitation, you can see the three stable attractors at zero trees, savanna (20% tree cover) and forest (90% tree cover).
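With the illustrative rate function from the sketch above, that experiment looks something like this: start a batch of runs at different cover levels, hold precipitation at the medium value, and watch them sort themselves into the three attractors.

import numpy as np

def dz_dt(z, precip, tilt=0.1):              # same illustrative function as above
    change = -40 * z * (z - 0.1) * (z - 0.2) * (z - 0.55) * (z - 0.9)
    return change + tilt * (precip - 0.5)

dt, T = 0.05, 40.0
z = np.linspace(0.01, 0.99, 25)              # a spread of initial tree cover
for _ in range(int(T / dt)):
    z = np.clip(z + dz_dt(z, 0.5) * dt, 0, 1)
print(np.round(z, 2))                        # runs cluster near 0, 0.2 and 0.9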

If you start with a stable forest, and a bit of noise, then gradually reduce precipitation, you can see that the forest response is not smooth.

The forest is stable until about year 8, then transitions abruptly to savanna. Finally, around year 14, the savanna disappears and is replaced by a treeless state. The forest doesn’t transition to savanna until the precipitation index reaches about .3, even though savanna becomes the more stable of the two states much sooner, at precipitation of about .55. And, while the savanna state doesn’t become entirely unstable at low precipitation, noise carries the system over the threshold to the lower-potential treeless state.

The net result is that thinking about such a system from a static, linear perspective will get you into trouble. And, if you live around such a system, subject to a changing climate, transitions could be abrupt and surprising (fire might be one tipping mechanism).

The model is in my library.

Your gut may be leading you astray

An interesting comment on rationality and conservatism:

I think Sarah Palin is indeed a Rorschach test for Conservatives, but it’s about much than manners or players vs. kibbitzes – it’s about what Conservativsm MEANS.

The core idea behind Conservatism is that most of human learning is done not by rational theorizing, but by pattern recognition. Our brain processes huge amounts of data every second, and most information we get out of it is in the form of recognized patterns, not fully logical theories. It’s fair to say that 90% of our knowledge is in patterns, not in theories.

This pattern recognition is called common sense, and over generations, it’s called traditions, conventions etc. Religion is usually a carrier meme for these evolved patterns. It’s sort of an evolutionary process, like a genetic algorithm.

Liberals, Lefties and even many Libertarians want to use only 10% of the human knowledge that’s rational. And because our rational knowledge cannot yet fully explain neither human nature in itself nor everything that happens in society, they fill the holes with myths like that everybody is born good and only society makes people bad etc.

Conservatives are practical people who instinctively recognize the importance of evolved patterns in human learning: because our rational knowledge simply isn’t enough yet, these common sense patterns are our second best option to use. And to use these patterns effectively you don’t particularly have to be very smart i.e. very rational. You have to be _wise_ and you have to have a good character: you have to set hubris and pride aside and be able to accept traditions you don’t fully understand.

Thus, for a Conservative, while smartness never hurts, being wise and having a good character is more important than being very smart. Looking a bit simple simply isn’t a problem, you still have that 90% of knowledge at hand.

Anti-Palin Conservatives don’t understand it. They think Conservativism is about having different theories than the Left, they don’t understand that it’s that theories and rational knowledge isn’t so important.

(via Rabbett Run)

A possible example of the writer’s perspective at work is provided by survey research showing that Tea Partiers are skeptical of anthropogenic climate change (established by models) but receptive to natural variation (vaguely, patterns), and they’re confident that they’re well-informed about it in spite of evidence to the contrary. Another possible data point is Conservapedia’s resistance to relativity, which is essentially a model that contradicts our Newtonian common sense.

As an empirical observation, this definition of conservatism seems plausible at first. Humans are fabulous pattern recognizers. And, there are some notable shortcomings to rational theorizing. However, as a normative statement – that conservatism is better because of the 90%/10% ratio – I think it’s seriously flawed.

The quality of the 90% is quite different from the quality of the 10%. Theories are the accumulation of a lot of patterns put into a formal framework that has been shared and tested, which at least makes it easy to identify the theories that fall short. Common sense, or wisdom or whatever you want to call it, is much more problematic. Everyone knows the world is flat, right?

Sadly, there’s abundant evidence that our evolved heuristics fall short in complex systems. Pattern matching in particular falls short even in simple bathtub systems. Inappropriate mental models and heuristics can lead to decisions that are exactly the opposite of good management, even when property rights are complete; noise only makes things worse.

Real common sense would have the brains to abdicate when faced with situations, like relativity or climate change, where it’s clear that experience (low velocities, local weather) doesn’t provide any patterns that are relevant to the conditions under consideration.

After some reflection, I think there’s more than pattern recognition to conservatism. Liberals, anarchists, etc. are also pattern matchers. We all have our own stylized facts and conventional wisdom, all of which are subject to the same sorts of cognitive biases. So, pattern matching doesn’t automatically lead to conservatism. Many conservatives don’t believe in global warming because they don’t trust models, yet observed warming and successful predictions of models from the 70s (i.e. patterns) also don’t count. So, conservatives don’t automatically respond to patterns either.

In any case, running the world by pattern recognition alone is essentially driving by looking in the rearview mirror. If you want to do better, i.e. to make good decisions at turning points or novel conditions, you need a model.

Elk, wolves and dynamic system visualization

Bret Victor’s video of a slick iPad app for interactive visualization of the Lotka-Volterra equations has been making the rounds:

Coincidentally, this came to my notice around the same time that I got interested in the debate over wolf reintroduction here in Montana. Even simple models say interesting things about wolf-elk dynamics, which I’ll write about some other time (I need to get vaccinated for rabies first).

To ponder the implications of the video and predator-prey dynamics, I built a version of the Lotka-Volterra model in Vensim.

After a second look at the video, I still think it’s excellent. Victor’s two design principles, ubiquitous visualization and in-context manipulation, are powerful for communicating a model. Some aspects of what’s shown have been in Vensim since the introduction of SyntheSim a few years ago, though with less Tufte/iPad sexiness. But other features, like Causal Tracing, are not so easily discovered – they’re effective for pros, but not new users. The way controls appear at one’s fingertips in the iPad app is very elegant. The “sweep” mode is also clever, so I implemented a similar approach (randomized initial conditions across an array dimension) in my version of the model. My favorite trick, though, is the 2D control of initial conditions via the phase diagram, which makes discovery of the system’s equilibrium easy.
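For anyone who wants to tinker without an iPad, here’s a bare-bones sketch of the Lotka-Volterra equations with the sweep idea, i.e. a batch of randomized initial conditions integrated side by side (the parameters are generic textbook values, not calibrated to elk and wolves):

import numpy as np

a, b, c, d = 1.0, 0.1, 1.5, 0.075      # prey growth, predation, predator death, conversion
dt, steps, runs = 0.01, 5000, 20

rng = np.random.default_rng(0)
prey = rng.uniform(10, 30, runs)       # "elk", arbitrary units
pred = rng.uniform(5, 15, runs)        # "wolves"

trajectories = np.empty((steps, runs, 2))
for i in range(steps):
    dprey = a * prey - b * prey * pred
    dpred = d * prey * pred - c * pred
    prey = prey + dprey * dt           # simple Euler steps; fine for a sketch
    pred = pred + dpred * dt
    trajectories[i] = np.stack([prey, pred], axis=1)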

The slickness of the video has led some to wonder whether existing SD tools are dinosaurs. From a design standpoint, I’d agree in some respects, but I think SD has also developed many practices – only partially embodied in tools – that address learning gaps that aren’t directly tackled by the app in the video: Continue reading “Elk, wolves and dynamic system visualization”

Bad data, bad models

Baseline Scenario has a nice post on bad data:

To make a vast generalization, we live in a society where quantitative data are becoming more and more important. Some of this is because of the vast increase in the availability of data, which is itself largely due to computers. Some is because of the vast increase in the capacity to process data, which is also largely due to computers. …

But this comes with a problem. The problem is that we do not currently collect and scrub good enough data to support this recent fascination with numbers, and on top of that our brains are not wired to understand data. And if you have a lot riding on bad data that is poorly understood, then people will distort the data or find other ways to game the system to their advantage.

In spite of ubiquitous enterprise computing, bad data is the norm in my experience with corporate consulting. At one company, I had access to very extensive data on product pricing, promotion, advertising, placement, etc., but the information system archived everything inaccessibly on a rolling 3-year horizon. That made it impossible to see long term dynamics of brand equity, which was really the most fundamental driver of the firm’s success. Our experience with large projects includes instances where managers don’t want to know the true state of the system, and therefore refuse to collect or provide needed data – even when billions are at stake. And some firms jealously guard data within stovepipes – it’s hard to optimize the system when the finance group keeps the true product revenue stream secret in order to retain leverage over the marketing group.

People worry about garbage in, garbage out, but modeling can actually be the antidote to bad data. If you pay attention to quality, the process of building a model will reveal all kinds of gaps in data. We recently discovered that various sources of vehicle fleet data are in serious disagreement, because of double-counting of transactions and interstate sales, and undercounting of inspections. Once data issues are known, a model can be used to remove biases and filter noise (your GPS probably runs a Kalman Filter to combine a simple physical model of your trajectory with noisy satellite measurements).
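(For the curious, the textbook version of that idea in one dimension looks something like the following; the numbers are purely illustrative, and this is a generic filter, not any particular GPS implementation.)

import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])        # constant-velocity model: position, velocity
H = np.array([[1.0, 0.0]])             # we only measure position
Q = 0.01 * np.eye(2)                   # process noise covariance
R = np.array([[4.0]])                  # measurement noise covariance

x = np.array([[0.0], [1.0]])           # initial state estimate
P = np.eye(2)                          # initial estimate covariance

rng = np.random.default_rng(1)
truth = np.array([[0.0], [1.0]])
for _ in range(20):
    truth = F @ truth                             # true motion
    z = H @ truth + rng.normal(0, 2.0, (1, 1))    # noisy measurement
    x, P = F @ x, F @ P @ F.T + Q                 # predict from the model
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + K @ (z - H @ x)                       # blend in the measurement
    P = (np.eye(2) - K @ H) @ P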

Not just any model will do; causal models are important. It’s hard to discover that your data fails to observe physical laws or other reality checks with a model that permits negative cows and buries the acceleration of gravity in a regression coefficient.

The problem is, a lot of people have developed an immune response against models, because there are so many that don’t pay attention to quality and serve primarily propagandistic purposes. The only antidote for that, I think, is to teach modeling skills, or at least model consumption skills, so that they know the right questions to ask in order to separate the babies from the bathwater.

And so it begins…

A kerfuffle is brewing over Richard Tol’s FUND model (a recent installment). I think this may be one of the first instances of something we’ll see a lot more of: public critique of integrated assessment models.

Integrated Assessment Models (IAMs) are a broad class of tools that combine the physics of natural systems (climate, pollutants, etc.) with the dynamics of socioeconomic systems. Most of the time, this means coupling an economic model (usually dynamic general equilibrium or an optimization approach; sometimes bottom-up technical or a hybrid of the two) with a simple to moderately complex model of climate. The IPCC process has used such models extensively to generate emissions and mitigation scenarios.

Interestingly, the IAMs have attracted relatively little attention; most of the debate about climate change is focused on the science. Yet, if you compare the big IAMs to the big climate models, I’d argue that the uncertainties in the IAMs are much bigger. The processes in climate models are basically physics and many are even subject to experimental verification. We can measure quantities like temperature with considerable precision and spatial detail over long time horizons, for comparison with model output. Some of the economic equivalents, like real GDP, are much slipperier even in their definitions. We have poor data for many regions, and huge problems of “instrumental drift” from changing quality of goods and sectoral composition of activity, and many cultural factors are not even measured. Nearly all models represent human behavior – the ultimate wildcard – by assuming equilibrium, when in fact it’s not clear that equilibrium emerges faster than other dynamics change the landscape on which it arises. So, if climate skeptics get excited about the appropriate centering method for principal components analysis, they should be positively foaming at the mouth over the assumptions in IAMs, because there are far more of them, with far less direct empirical support.

Last summer at EMF Snowmass, I reflected on some of our learning from the C-ROADS experience (here’s my presentation). One of the key points, I think, is that there is a huge gulf between models and modelers, on the one hand, and the needs and understanding of decision makers and the general public on the other. If modelers don’t close that gap by deliberately translating their insights for lay audiences, focusing their tools on decision maker needs, and embracing a much higher level of transparency, someone else will do that translation for them. Most likely, that “someone else” will be much less informed, or have a bigger axe to grind, than the modelers would hope.

With respect to transparency, Tol’s FUND model is further along than many models: the code is available. So, informed tinkerers can peek under the hood if they wish. However, it comes with a warning:

It is the developer’s firm belief that most researchers should be locked away in an ivory tower. Models are often quite useless in unexperienced hands, and sometimes misleading. No one is smart enough to master in a short period what took someone else years to develop. Not-understood models are irrelevant, half-understood models treacherous, and mis-understood models dangerous.

Therefore, FUND does not have a pretty interface, and you will have to make to real effort to let it do something, let alone to let it do something new.

I understand the motivation for this warning. However, it leaves the modeler-consumer gulf gaping. The modelers have their insights into systems, the decision makers have their problems managing those systems, and ne’er the twain shall meet – there just aren’t enough modelers to go around. That leaves reports as the primary conduit of information from model to user, which is fine if your ivory tower is secure enough that you need not care whether your insights have any influence. It’s not even clear that reports are more likely to be understood than models: there have been a number of high-profile instances of ill-conceived institutional press releases and misinterpretation of conclusions and even raw data.

Also, there’s a hint of danger in the very idea of building dangerous models. Obviously all models, like analogies, are limited in their fidelity and generality. It’s important to understand those limitations, just as a pilot must understand the limitations of her instruments. However, if a model is a minefield for the uninitiated user, I have to question its utility. Robustness is an important aspect of model quality; a model given vaguely realistic inputs should yield realistic outputs most of the time, and a model given stupid inputs should generate realistic catastrophes. This is perhaps especially true for climate, where we are concerned about the tails of the distribution of possible outcomes. It’s hard to build a model that’s only robust to the kinds of experiments that one would like to perform, while ignoring other potential problems. To the extent that a model generates unrealistic outcomes, the causes should be traceable; if it’s not easy for the model user to see inside the black box, then I worry that the developer won’t have done enough inspection either. So, the discipline of building models for naive users imposes some useful quality incentives on the model developer.

IAM developers are busy adding spatial resolution, technical detail, and other useful features to models. There’s comparatively less work on consolidation of insights, with translation and construction of tools for wider consumption. That’s understandable, because there aren’t always strong rewards for doing so. However, I think modelers ignore this crucial task at their future peril.

How Many Pairs of Rabbits Are Created by One Pair in One Year?

The Fibonacci numbers are often illustrated geometrically, with spirals or square tilings, but the nautilus is not their origin. I recently learned that the sequence was first reported as the solution to a dynamic modeling thought experiment, posed by Leonardo Pisano (Fibonacci) in his 1202 masterpiece, Liber Abaci.

How Many Pairs of Rabbits Are Created by One Pair in One Year?

A certain man had one pair of rabbits together in a certain enclosed place, and one wishes to know how many are created from the pair in one year when it is the nature of them in a single month to bear another pair, and in the second month those born to bear also. Because the abovewritten pair in the first month bore, you will double it; there will be two pairs in one month. One of these, namely the first, bears in the second month, and thus there are in the second month 3 pairs; of these in one month two are pregnant, and in the third month 2 pairs of rabbits are born, and thus there are 5 pairs in the month; in this month 3 pairs are pregnant, and in the fourth month there are 8 pairs, of which 5 pairs bear another 5 pairs; these are added to the 8 pairs making 13 pairs in the fifth month; these 5 pairs that are born in this month do not mate in this month, but another 8 pairs are pregnant, and thus there are in the sixth month 21 pairs; [p284] to these are added the 13 pairs that are born in the seventh month; there will be 34 pairs in this month; to this are added the 21 pairs that are born in the eighth month; there will be 55 pairs in this month; to these are added the 34 pairs that are born in the ninth month; there will be 89 pairs in this month; to these are added again the 55 pairs that are born in the tenth month; there will be 144 pairs in this month; to these are added again the 89 pairs that are born in the eleventh month; there will be 233 pairs in this month.

Source: http://www.math.utah.edu/~beebe/software/java/fibonacci/liber-abaci.html

The solution is the famous Fibonacci sequence, which can be written as a recurrent series,

F(n) = F(n-1)+F(n-2), F(0)=F(1)=1

This can be directly implemented as a discrete time Vensim model:

Fibonacci Series

However, that representation is a little too abstract to immediately reveal the connection to rabbits. Instead, I prefer to revert to Fibonacci’s problem description to construct an operational representation:

Fibonacci Rabbits

Mature rabbit pairs are held in a stock (Fibonacci’s “certain enclosed space”), and they breed a new pair each month (i.e. the Reproduction Rate = 1/month). Modeling male-female pairs rather than individual rabbits neatly sidesteps concern over the gender mix. Importantly, there’s a one-month delay between birth and breeding (“in the second month those born to bear also”). That delay is captured by the Immature Pairs stock. Rabbits live forever in this thought experiment, so there’s no outflow from mature pairs.

You can see the relationship between the series and the stock-flow structure if you write down the discrete time representation of the model, ignoring units and assuming that the TIME STEP = Reproduction Rate = Maturation Time = 1:

Mature Pairs(t) = Mature Pairs(t-1) + Maturing
Immature Pairs(t) = Immature Pairs(t-1) + Reproducing - Maturing

Substituting Maturing = Immature Pairs and Reproducing = Mature Pairs,

Mature Pairs(t) = Mature Pairs(t-1) + Immature Pairs(t-1)
Immature Pairs(t) = Immature Pairs(t-1) + Mature Pairs(t-1) - Immature Pairs(t-1) = Mature Pairs(t-1)

So:

Mature Pairs(t) = Mature Pairs(t-1) + Mature Pairs(t-2)
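In code, with TIME STEP = 1 month, the whole structure is just a few lines (a direct translation of the equations above, not a replacement for the stock-flow diagram):

mature, immature = 1, 0        # start with one mature pair, per Fibonacci
for month in range(12):
    maturing    = immature     # every immature pair matures after one month
    reproducing = mature       # every mature pair bears one new pair per month
    mature     += maturing
    immature   += reproducing - maturing
    print(month + 1, mature + immature)   # total pairs: 2, 3, 5, 8, 13, ...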

The resulting model has two feedback loops: a minor negative loop governing the Maturing of Immature Pairs, and a positive loop of rabbits Reproducing. The rabbit population tends to explode, due to the positive loop:

Fibonacci Growth

In four years, there are about as many rabbits as there are humans on earth, so that “certain enclosed space” better be big. After an initial transient, the growth rate quickly settles down:

Fibonacci Growth Rate

Its steady-state value is .61803… (61.8%/month), which is the Golden Ratio conjugate. If you change the variable names, you can see the relationship to the tiling interpretation and the Golden Ratio:

Fibonacci Part Whole

Like anything that grows exponentially, the Fibonacci numbers get big fast. The hundredth is 354,224,848,179,261,915,075.

As before, we can play the eigenvector trick to suppress the growth mode. The system is described by the matrix:

-1 1
 1 0

which has eigenvalues {-1.618033988749895, 0.6180339887498949} – notice the appearance of the Golden Ratio. If we initialize the model with the eigenvector of the negative eigenvalue, {-0.8506508083520399, 0.5257311121191336}, we can get the bunny population under control, at least until numerical noise excites the growth mode, near time 25:
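You can verify that analysis with a few lines of numpy; here’s a sketch of the continuous-time version, with the state ordered (immature, mature) and an illustrative step size:

import numpy as np

A = np.array([[-1.0, 1.0],     # d(immature)/dt = mature - immature
              [ 1.0, 0.0]])    # d(mature)/dt   = immature

vals, vecs = np.linalg.eig(A)
print(vals)                    # eigenvalues of roughly 0.618 and -1.618 – the Golden Ratio again

i = np.argmin(vals)            # pick the negative, decaying eigenvalue
x = vecs[:, i].copy()          # initialize on its eigenvector
dt = 0.01
for step in range(6000):       # integrate out to time 60
    x = x + A @ x * dt
print(x)                       # the decaying mode is long gone; roundoff has seeded
                               # the 0.618 growth mode, which now dominates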

Fibonacci Stable

The problem is that we need negarabbits to do it, -.850653 immature rabbits initially, so this is not a physically realizable solution (which probably guarantees that it will soon be introduced in legislation).

I brought this up with my kids, and they immediately went to the physics of the problem: “Rabbits don’t live forever. How big is your cage? Do you have rabbit food? TONS of rabbit food? What if you have all males, or varying mixtures of males and females?”

It’s easy to generalize the structure to generate other sequences. For example, assuming that mature rabbits live for only two months yields the Padovan sequence. Its equivalent of the Golden Ratio is 1.3247…, i.e. the rabbit population grows more slowly at ~32%/month, as you’d expect since rabbit lives are shorter.

The model’s in my library.