By the rule of 72 for exponential growth, that means sales are doubling every 16 weeks, or about three times per year.
If sales are growing exponentially, the installed base is also growing exponentially (because the integral of e^x is e^x). Half of the accumulated sales occur in the most recent doubling (because the geometric series 1+2+4+8+…+n sums to 2n-1, where n is the latest term), so the integrated unit sales are roughly one doubling (16 weeks) ahead of the interval sales.
Extrapolating, there’s an Android for everyone on the planet in two years (6 doublings, or a factor of 64 increase).
Extrapolating a little further, sales equal the mass of the planet by about 2030 (ln(10^25/10^8)/ln(2)/3 = 19 years).
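The arithmetic behind these extrapolations is easy to check. A quick sketch, using the round numbers above (which are rough assumptions, not precise figures):

```python
import math

# Rule of 72: doubling time ~= 72 / (percent growth per period), so a
# 16-week doubling corresponds to roughly 72/16 = 4.5% growth per week.
growth_per_week = 72 / 16 / 100                  # ~0.045

# Doublings to go from ~1e8 units to ~6.4e9 (one per person):
d_people = math.log(6.4e9 / 1e8) / math.log(2)   # 6 doublings
years_people = d_people / 3                      # ~2 years at 3 doublings/yr

# Doublings to reach ~1e25 units (order of the planet's mass in phones):
d_planet = math.log(1e25 / 1e8) / math.log(2)    # ~56.5 doublings
years_planet = d_planet / 3                      # ~19 years
```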
Here’s a pretty array of pendulums of different lengths and therefore different natural frequencies:
This is a nice demonstration of how structure (length) causes behavior (period of oscillation). You can also see a variety of interesting behavior patterns, like beats, as the oscillations move in and out of phase with one another.
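The structure-behavior link here is just the small-angle pendulum formula; a quick sketch (the lengths chosen are illustrative):

```python
import math

g = 9.81  # m/s^2

def period(length_m):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Structure (length) determines behavior (period):
for L in (0.25, 0.5, 1.0):
    print(f"L = {L:.2f} m -> T = {period(L):.2f} s")

# Two slightly different lengths drift in and out of phase; the beat
# period is the time for them to slip one full cycle relative to each other:
T1, T2 = period(1.00), period(1.05)
beat_period = 1 / abs(1 / T1 - 1 / T2)   # roughly 80 s for this pair
```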
These metronomes move in and out of sync as they’re coupled and uncoupled. This is interesting because it’s a fundamentally nonlinear process. Sync provides a nice account of such things, and there’s a nifty interactive coupled pendulum demo here.
This is a physical analog of an infection model or the Bass diffusion model. It illustrates shifting loop dominance – initially, positive feedback dominates due to the chain reaction of balls tripping new traps, ejecting more balls. After a while, negative feedback takes over as the number of live traps is depleted, and the reaction slows.
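The same shifting loop dominance shows up in a minimal Bass-style diffusion model. The parameters below are illustrative, not calibrated to anything:

```python
# One stock (adopters, or sprung traps), two loops: word-of-mouth adoption
# (reinforcing) limited by the shrinking pool of potential adopters
# (balancing).
N = 1000.0          # total population (loaded traps)
p, q = 0.01, 0.4    # innovation and imitation coefficients (illustrative)
adopters = 1.0
flows = []
for t in range(40):
    potential = N - adopters
    flow = p * potential + q * adopters * potential / N
    adopters += flow
    flows.append(flow)

# The adoption rate rises while the reinforcing loop dominates, peaks,
# then falls as the pool of live traps is depleted.
peak = flows.index(max(flows))
```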
Eli Pariser argues that “filter bubbles” are bad for us and bad for democracy:
As web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a “filter bubble” and don’t get exposed to information that could challenge or broaden our worldview.
Filter bubbles are close cousins of confirmation bias, groupthink, polarization and other cognitive and social pathologies.
As confidence in an idea grows, the delay in recognition (or frequency of outright rejection) of anomalous information grows larger. As a result, confidence in the idea – flat earth, 100mpg carburetor – can grow far beyond the level that would be considered reasonable if contradictory information were recognized.
The dynamics resulting from this and other positive feedbacks play out in many spheres. Wittenberg & Sterman give an example:
The dynamics generated by the model resemble the life cycle of intellectual fads. Often a promising new idea rapidly becomes fashionable through excessive optimism, aggressive marketing, media hype, and popularization by gurus. Many times the rapid influx of poorly trained practitioners, or the lack of established protocols and methods, causes expectations to outrun achievements, leading to a backlash and disaffection. Such fads are commonplace, especially in (quack) medicine and most particularly in the world of business, where “new paradigms” are routinely touted in the pages of popular journals of management, only to be displaced in the next issue by what many business people have cynically come to call the next “flavor of the month.”
Typically, a guru proposes a new theory, tool, or process promising to address persistent problems facing businesses (that is, a new paradigm claiming to solve the anomalies that have undermined the old paradigm.) The early adopters of the guru’s method spread the word and initiate some projects. Even in cases where the ideas of the guru have little merit, the energy and enthusiasm a team can bring to bear on a problem, coupled with Hawthorne and placebo effects and the existence of “low hanging fruit” will often lead to some successes, both real and apparent. Proponents rapidly attribute these successes to the use of the guru’s ideas. Positive word of mouth then leads to additional adoption of the guru’s ideas. (Of course, failures are covered up and explained away; as in science there is the occasional fraud as well.) Media attention further spreads the word about the apparent successes, further boosting the credibility and prestige of the guru and stimulating additional adoption.
As people become increasingly convinced that the guru’s ideas work, they are less and less likely to seek or attend to disconfirming evidence. Management gurus and their followers, like many scientists, develop strong personal, professional, and financial stakes in the success of their theories, and are tempted to selectively present favorable and suppress unfavorable data, just as scientists grow increasingly unable to recognize anomalies as their familiarity with and confidence in their paradigm grows. Positive feedback processes dominate the dynamics, leading to rapid adoption of those new ideas lucky enough to gain a sufficient initial following. …
The wide range of positive feedbacks identified above can lead to the swift and broad diffusion of an idea with little intrinsic merit because the negative feedbacks that might reveal that the tools don’t work operate with very long delays compared to the positive loops generating the growth. …
For filter bubbles, I think the key positive loops are as follows:
Loops R1 are the user’s well-worn path. We preferentially visit sites presenting information (theory x or y) in which we have confidence. In doing so, we consider only a subset of all information, building our confidence in the visited theory. This is a built-in part of our psychology, and to some extent a necessary part of the process of winnowing the world’s information fire hose down to a usable stream.
Loops R2 involve the information providers. When we visit a site, advertisers and other observers (Nielsen) notice, and this provides the resources (ad revenue) and motivation to create more content supporting theory x or y. This has also been a part of the information marketplace for a long time.
R1 and R2 are stabilized by some balancing loops (not shown). Users get bored with an all-theory-y diet, and seek variety. Providers seek out controversy (real or imagined) and sensationalize x-vs-y battles. As Pariser points out, there’s less scope for the positive loops to play out in an environment with a few broad media outlets, like city newspapers. The front page of the Bozeman Daily Chronicle has to work for a wide variety of readers. If the paper let the positive loops run rampant, it would quickly lose half its readership. In the online world, with information customized at the individual level, there’s no such constraint.
Individual filtering introduces R3: the filter observes site visit patterns and preferentially serves up information consistent with past preferences. This introduces a third set of reinforcing feedback processes – as users begin to see what they prefer, they also learn to prefer what they see. In addition, on Facebook and other social networking sites every person is essentially a site, and people include one another in networks preferentially. This is another mechanism implementing loop R1 – birds of a feather flock together, share information consistent with their mutual preferences, and potentially follow one another down conceptual rabbit holes.
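A stylized way to see the lock-in these loops produce (everything here is an assumed toy formulation, not a calibrated model): let exposure track confidence and confidence track exposure, so any initial lean gets amplified.

```python
# conf = confidence in theory x (0..1). The filter serves what we prefer,
# so exposure share equals current confidence, and confidence drifts
# toward exposure. The net update pushes conf away from the unstable
# 50/50 point toward lock-in at 0 or 1.
conf = 0.55   # a slight initial lean toward theory x
for step in range(100):
    conf += (conf - 0.5) * conf * (1 - conf)

# Starting at 0.45 instead, the same loop locks in on theory y (conf -> 0).
```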
The result of the social web and algorithmic filtering is to upset the existing balance of positive and negative feedback. The question is, were things better before, or are they better now?
I’m not exactly sure how to tell. Presumably one could observe trends in political polarization and duration of fads for an indication of the direction of change, but that still leaves open the question of whether we have more or less than the “optimal” quantity of pet rocks, anti-vaccine campaigns and climate skepticism.
My suspicion is that we now have too much positive feedback. This is consistent with Wittenberg & Sterman’s insight from the modeling exercise, that the positive loops are fast, while the negative loops are weak or delayed. They offer a prescription for that,
The results of our model suggest that the long-term success of new theories can be enhanced by slowing the positive feedback processes, such as word of mouth, marketing, media hype, and extravagant claims of efficacy by which new theories can grow, and strengthening the processes of theory articulation and testing, which can enhance learning and puzzle-solving capability.
In the video, Pariser implores the content aggregators to carefully ponder the consequences of filtering. I think that also implies more negative feedback in algorithms. It’s not clear that providers have an incentive to do that, though. The positive loops tend to reward individuals for successful filtering, while the risks (e.g., catastrophic groupthink) accrue partly to society. At the same time, it’s hard to imagine a regulatory remedy that does not flirt with censorship.
Absent a global fix, I think it’s incumbent on individuals to practice good mental hygiene, by seeking diverse information that stands some chance of refuting their preconceptions once in a while. If enough individuals demand transparency in filtering, as Pariser suggests, it may even be possible to gain some local control over the positive loops we participate in.
I’m not sure that goes far enough though. We need tools that serve the social equivalent of “strengthening the processes of theory articulation and testing” to improve our ability to think and talk about complex systems. One such attempt is the “collective intelligence” behind Climate Colab. It’s not quite Facebook-scale yet, but it’s a start. Semantic web initiatives are starting to help by organizing detailed data, but we’re a long way from having a “behavioral dynamic web” that translates structure into predictions of behavior in a shareable way.
Wired covers a new article in Nature, investigating massive failures in linked networks.
The interesting thing is that feedback between the connected networks destabilizes the whole:
“When networks are interdependent, you might think they’re more stable. It might seem like we’re building in redundancy. But it can do the opposite,” said Eugene Stanley, a Boston University physicist and co-author of the study, published April 14 in Nature.
The interconnections fueled a cascading effect, with the failures coursing back and forth. A damaged node in the first network would pull down nodes in the second, which crashed nodes in the first, which brought down more in the second, and so on. And when they looked at data from a 2003 Italian power blackout, in which the electrical grid was linked to the computer network that controlled it, the patterns matched their models’ math.
The BBC today carries the headline, “US manufacturing output hits 6 year high.” That sounded like an April Fool’s joke. Sure enough, FRED shows manufacturing output 15% below its 2007 peak at the end of last year, a gap that would be almost impossible to make up in a quarter. The problem is that the ISM-PMI index reported by the BBC is a measure of growth, not absolute level. The BBC has confused the stock (output) with the flow (output growth). In reality, things are improving, but there’s still quite a bit of ground to cover to recover the peak.
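The stock/flow distinction is easy to illustrate with assumed round numbers: even a strong growth reading closes a 15% level gap only slowly.

```python
peak = 100.0
level = 85.0                # output level index, ~15% below the 2007 peak
quarterly_growth = 0.02     # a strong quarter (assumed), 2% per quarter
quarters = 0
while level < peak:
    level *= 1 + quarterly_growth
    quarters += 1
# quarters == 9: over two years of strong growth just to recover the peak
```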
In the 80s, my mom had an Audi 5000. Its value was destroyed by allegations of sudden, uncontrollable acceleration. No plausible physical mechanism was ever identified.
Today, Toyota’s suffering from the same fate. A more likely explanation? Operator error. Stepping on the gas instead of the brake transforms the normal negative feedback loop controlling velocity into a runaway positive feedback:
… A driver would step on the wrong pedal, panic when the car did not perform as expected, continue to mistake the accelerator for the brake, and press down on the accelerator even harder.
This had disastrous consequences in a 1992 Washington Square Park incident that killed five and a 2003 Santa Monica Farmers’ Market incident that killed ten …
Given time, the driver can model the situation, figure out what’s wrong, and correct. But, as my sister can attest, when you’re six feet in front of the garage with the 350 V8 Buick at full throttle, there isn’t a lot of time.
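The sign flip is easy to simulate. In this sketch (gains and speeds are assumed), the driver applies pedal pressure proportional to the speed error; with the right pedal that’s a balancing loop, while with the wrong pedal the same policy becomes reinforcing:

```python
def final_speed(wrong_pedal, steps=50, dt=0.1):
    speed, target, gain = 20.0, 5.0, 1.0   # m/s; driver wants to slow down
    for _ in range(steps):
        correction = gain * (target - speed)   # intended braking effort
        if wrong_pedal:
            correction = -correction           # gas instead of brake
        speed += correction * dt
    return speed

# final_speed(False) settles near the 5 m/s target;
# final_speed(True) grows without bound.
```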
Here’s the story: About 800,000 people in California who buy insurance on the individual market — as opposed to getting it through their employers — are covered by Anthem Blue Cross, a WellPoint subsidiary. These are the people who were recently told to expect dramatic rate increases, in some cases as high as 39 percent.
Why the huge increase? It’s not profiteering, says WellPoint, which claims instead (without using the term) that it’s facing a classic insurance death spiral.
Bear in mind that private health insurance only works if insurers can sell policies to both sick and healthy customers. If too many healthy people decide that they’d rather take their chances and remain uninsured, the risk pool deteriorates, forcing insurers to raise premiums. This, in turn, leads more healthy people to drop coverage, worsening the risk pool even further, and so on.
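That loop is easy to caricature in a few lines (all numbers assumed): premiums track the average cost of the current pool, and healthy members exit as premiums outrun their own expected costs.

```python
healthy, sick = 900.0, 100.0               # members in the pool
cost_healthy, cost_sick = 1000.0, 10000.0  # expected annual cost per member
premiums = []
for year in range(5):
    pool = healthy + sick
    premium = (healthy * cost_healthy + sick * cost_sick) / pool
    premiums.append(premium)
    # healthy members drop out as the premium exceeds their own cost
    drop = min(1.0, 0.3 * max(0.0, premium / cost_healthy - 1))
    healthy *= 1 - drop
# premiums rise every year as the pool deteriorates
```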
First, check out SEED’s recent article, which asks, When it comes to scientific publishing and fame, the rich get richer and the poor get poorer. How can we break this feedback loop?
For to all those who have, more will be given, and they will have an abundance; but from those who have nothing, even what they have will be taken away.
Author John Wilbanks proposes to use richer metrics to evaluate scientists, going beyond publications to consider data, code, etc. That’s a good idea per se, but it’s a static solution to a dynamic problem. It seems to me that it spreads around the effects of the positive feedback from publications → resources → publications a little more broadly, but doesn’t necessarily change the gain of the loop. A better solution, if meritocracy is the goal, might be greater use of blind evaluation and changes to allocation mechanisms themselves.
The reason we care about this is that we’d like science to progress as quickly as possible. That involves crafting a reward system with some positive feedback, but not so much that it easily locks in to suboptimal paths. That’s partly a matter of the individual researcher, but there’s a larger question: how to ensure that good theories out-compete bad ones?
At lunch today we were amazed by these near-perfect convection cells that formed in a pot of quinoa. You can DIY at NOAA. I think this is an instance of Bénard-Marangoni convection, because the surface is free, though the thinness assumptions are likely violated, and quinoa is not quite an ideal liquid. Anyway, it’s an interesting phenomenon because the dynamics involve a surface tension gradient, not just heat transfer. See this and this.
Other rodents also rebounded, turning to seabird chicks for food.
An expensive pan-rodent eradication plan is now underway.
But this time, administrators are prepared to make course corrections if things do not turn out according to plan. “This study clearly demonstrates that when you’re doing a removal effort, you don’t know exactly what the outcome will be,” said Barry Rice, an invasive species specialist at the Nature Conservancy. “You can’t just go in and make a single surgical strike. Every kind of management you do is going to cause some damage.”