The danger of path-dependent information flows on the web

Eli Pariser argues that “filter bubbles” are bad for us and bad for democracy:

As web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a “filter bubble” and don’t get exposed to information that could challenge or broaden our worldview.

Filter bubbles are close cousins of confirmation bias, groupthink, polarization and other cognitive and social pathologies.

A key feedback is this reinforcing loop, from Sterman & Wittenberg’s model of path dependence in Kuhnian scientific revolutions:

[Figure: the “Anomalies” reinforcing loop]

As confidence in an idea grows, the delay in recognition (or frequency of outright rejection) of anomalous information grows larger. As a result, confidence in the idea – flat earth, 100mpg carburetor – can grow far beyond the level that would be considered reasonable, if contradictory information were recognized.
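
To make the loop concrete, here’s a minimal toy stylization in Python (my own sketch with made-up parameters, not the Wittenberg & Sterman DYNAMO model): apparent puzzle-solving success builds confidence, while the delay in recognizing anomalies stretches as confidence grows, so contradictory evidence piles up largely unheeded.

```python
# Toy sketch of the anomaly-recognition loop (illustrative parameters, not the original model)
dt = 0.25                 # years per step
confidence = 0.1          # confidence in the idea, 0..1 (dimensionless)
anomaly_backlog = 0.0     # contradictory results not yet recognized

for step in range(int(40 / dt) + 1):
    recognition_delay = 1.0 + 50.0 * confidence       # years; stretches as confidence grows
    recognized = anomaly_backlog / recognition_delay   # anomalies finally acknowledged, per year
    growth = 0.3 * confidence * (1 - confidence)       # apparent puzzle-solving success breeds confidence
    if step % 40 == 0:
        print(f"t={step*dt:4.1f}  confidence={confidence:.2f}  unrecognized anomalies={anomaly_backlog:.1f}")
    confidence = max(0.0, confidence + dt * (growth - 0.05 * recognized))
    anomaly_backlog += dt * (0.5 - recognized)         # anomalies arrive at roughly 0.5 per year
```

With these arbitrary parameters, confidence climbs toward 1 while the stock of unrecognized anomalies keeps growing – the signature of the loop described above.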

The dynamics resulting from this and other positive feedbacks play out in many spheres. Wittenberg & Sterman give an example:

The dynamics generated by the model resemble the life cycle of intellectual fads. Often a promising new idea rapidly becomes fashionable through excessive optimism, aggressive marketing, media hype, and popularization by gurus. Many times the rapid influx of poorly trained practitioners, or the lack of established protocols and methods, causes expectations to outrun achievements, leading to a backlash and disaffection. Such fads are commonplace, especially in (quack) medicine and most particularly in the world of business, where “new paradigms” are routinely touted in the pages of popular journals of management, only to be displaced in the next issue by what many business people have cynically come to call the next “flavor of the month.”

Typically, a guru proposes a new theory, tool, or process promising to address persistent problems facing businesses (that is, a new paradigm claiming to solve the anomalies that have undermined the old paradigm.) The early adopters of the guru’s method spread the word and initiate some projects. Even in cases where the ideas of the guru have little merit, the energy and enthusiasm a team can bring to bear on a problem, coupled with Hawthorne and placebo effects and the existence of “low hanging fruit” will often lead to some successes, both real and apparent. Proponents rapidly attribute these successes to the use of the guru’s ideas. Positive word of mouth then leads to additional adoption of the guru’s ideas. (Of course, failures are covered up and explained away; as in science there is the occasional fraud as well.) Media attention further spreads the word about the apparent successes, further boosting the credibility and prestige of the guru and stimulating additional adoption.

As people become increasingly convinced that the guru’s ideas work, they are less and less likely to seek or attend to disconfirming evidence. Management gurus and their followers, like many scientists, develop strong personal, professional, and financial stakes in the success of their theories, and are tempted to selectively present favorable and suppress unfavorable data, just as scientists grow increasingly unable to recognize anomalies as their familiarity with and confidence in their paradigm grows. Positive feedback processes dominate the dynamics, leading to rapid adoption of those new ideas lucky enough to gain a sufficient initial following. …

The wide range of positive feedbacks identified above can lead to the swift and broad diffusion of an idea with little intrinsic merit because the negative feedbacks that might reveal that the tools don’t work operate with very long delays compared to the positive loops generating the growth. …

For filter bubbles, I think the key positive loops are as follows:

[Figure: filter bubble feedback loops]

Loops R1 are the user’s well-worn path. We preferentially visit sites presenting information (theory x or y) in which we already have confidence. In doing so, we consider only a subset of all available information, which builds our confidence in the theory we visit. This is a built-in part of our psychology, and to some extent a necessary part of winnowing the world’s information fire hose down to a usable stream.

Loops R2 involve the information providers. When we visit a site, advertisers and other observers (Nielsen) notice, and this provides the resources (ad revenue) and motivation to create more content supporting theory x or y. This has also been a part of the information marketplace for a long time.

R1 and R2 are stabilized by some balancing loops (not shown). Users get bored with an all-theory-y diet, and seek variety. Providers seek out controversy (real or imagined) and sensationalize x-vs-y battles. As Pariser points out, there’s less scope for the positive loops to play out in an environment with a few broad media outlets, like city newspapers. The front page of the Bozeman Daily Chronicle has to work for a wide variety of readers. If the paper let the positive loops run rampant, it would quickly lose half its readership. In the online world, with information customized at the individual level, there’s no such constraint.

Individual filtering introduces R3. The filter observes site visit patterns and preferentially serves up information consistent with past preferences. This creates a third set of reinforcing feedback processes: as users see more of what they prefer, they also learn to prefer what they see. In addition, on Facebook and other social networking sites every person is essentially a site, and people include one another in their networks preferentially. This is another mechanism implementing loop R1 – birds of a feather flock together, share information consistent with their mutual preferences, and potentially follow one another down conceptual rabbit holes.
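
Here’s a rough sketch of how R1–R3 compound, as a toy Python simulation (my own illustration with made-up coefficients, not a calibrated model): whichever theory gets an early run of visits attracts more confidence, more content, and a higher filter weight, and the user locks in.

```python
import random

random.seed(1)
confidence = {"x": 0.5, "y": 0.5}   # user's confidence in each theory (R1)
content    = {"x": 1.0, "y": 1.0}   # relative supply of supporting content (R2)
filter_wt  = {"x": 1.0, "y": 1.0}   # the personalization filter's learned weights (R3)

for day in range(200):
    # odds of visiting x vs. y combine preference, content supply, and filtering
    score = {k: confidence[k] * content[k] * filter_wt[k] for k in ("x", "y")}
    p_x = score["x"] / (score["x"] + score["y"])
    visited = "x" if random.random() < p_x else "y"
    confidence[visited] += 0.01 * (1 - confidence[visited])  # R1: seeing is believing
    content[visited] *= 1.01                                 # R2: ad revenue funds more content
    filter_wt[visited] *= 1.02                               # R3: the filter learns the preference

share_x = content["x"] / (content["x"] + content["y"])
print("confidence:", {k: round(v, 2) for k, v in confidence.items()},
      "| share of x content:", round(share_x, 2))
```

Re-running with different seeds flips which theory wins, which is the path-dependence point: the lock-in is driven by early noise, not by the merits of x or y.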

The result of the social web and algorithmic filtering is to upset the existing balance of positive and negative feedback. The question is, were things better before, or are they better now?

I’m not exactly sure how to tell. Presumably one could observe trends in political polarization and duration of fads for an indication of the direction of change, but that still leaves open the question of whether we have more or less than the “optimal” quantity of pet rocks, anti-vaccine campaigns and climate skepticism.

My suspicion is that we now have too much positive feedback. This is consistent with Wittenberg & Sterman’s insight from the modeling exercise, that the positive loops are fast, while the negative loops are weak or delayed. They offer a prescription for that,

The results of our model suggest that the long-term success of new theories can be enhanced by slowing the positive feedback processes, such as word of mouth, marketing, media hype, and extravagant claims of efficacy by which new theories can grow, and strengthening the processes of theory articulation and testing, which can enhance learning and puzzle-solving capability.

In the video, Pariser implores the content aggregators to carefully ponder the consequences of filtering. I think that also implies building more negative feedback into the algorithms. It’s not clear that providers have an incentive to do so, though. The positive loops tend to reward individuals for successful filtering, while the risks (e.g., catastrophic groupthink) accrue partly to society. At the same time, it’s hard to imagine a regulatory fix that doesn’t flirt with censorship.

Absent a global fix, I think it’s incumbent on individuals to practice good mental hygiene, by seeking diverse information that stands some chance of refuting their preconceptions once in a while. If enough individuals demand transparency in filtering, as Pariser suggests, it may even be possible to gain some local control over the positive loops we participate in.

I’m not sure that goes far enough though. We need tools that serve the social equivalent of “strengthening the processes of theory articulation and testing” to improve our ability to think and talk about complex systems. One such attempt is the “collective intelligence” behind Climate Colab. It’s not quite Facebook-scale yet, but it’s a start. Semantic web initiatives are starting to help by organizing detailed data, but we’re a long way from having a “behavioral dynamic web” that translates structure into predictions of behavior in a shareable way.

Update: From Tech Review, technology for breaking the bubble

Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution

This is a very interesting model, both because it tackles ‘soft’ dynamics of paradigm formation in ‘hard’ science, and because it is an aggregate approach to an agent problem. Unfortunately, until now, the model was only available in DYNAMO, which limited access severely. It turns out to be fairly easy to translate to Vensim using the dyn2ven utility, once you know how to map the DYNAMO array FOR loops to Vensim subscripts.

Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution

J. Wittenberg and J. D. Sterman, 1999

Abstract

What is the relative importance of structural versus contextual forces in the birth and death of scientific theories? We describe a dynamic model of the birth, evolution, and death of scientific paradigms based on Kuhn’s Structure of Scientific Revolutions. The model creates a simulated ecology of interacting paradigms in which the creation of new theories is stochastic and endogenous. The model captures the sociological dynamics of paradigms as they compete against one another for members. Puzzle solving and anomaly recognition are also endogenous. We specify various regression models to examine the role of intrinsic versus contextual factors in determining paradigm success. We find that situational factors attending the birth of a paradigm largely determine its probability of rising to dominance, while the intrinsic explanatory power of a paradigm is only weakly related to the likelihood of success. For those paradigms that do survive the emergence phase, greater explanatory power is significantly related to longevity. However, the relationship between a paradigm’s ‘strength’ and the duration of normal science is also contingent on the competitive environment during the emergence phase. Analysis of the model shows the dynamics of competition and succession among paradigms to be conditioned by many positive feedback loops. These self-reinforcing processes amplify intrinsically unobservable micro-level perturbations in the environment – the local conditions of science, society, and self faced by the creators of a new theory – until they reach macroscopic significance. Such dynamics are the hallmark of self-organizing evolutionary systems.

We consider the implications of these results for the rise and fall of new ideas in contexts outside the natural sciences such as management fads.

Cite as: J. Wittenberg and J. D. Sterman (1999) Path Dependence, Competition, and Succession in the Dynamics of Scientific Revolution. Organization Science, 10.

I believe that this version is faithful to the original, but it’s difficult to be sure because the model is stochastic, so the results differ due to differences in the random number streams. For the moment, this model should be regarded as a beta release.
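
One way to firm up that judgment would be to compare distributions of outcomes across many seeded runs of the two versions, rather than eyeballing individual trajectories. A sketch of that kind of check, assuming each version can be batch-run with a headline statistic per run exported to a text file (the file names and statistic here are hypothetical):

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical files: one summary statistic per seeded run of each translation,
# e.g. the lifetime of the dominant paradigm.
dynamo_runs = np.loadtxt("paradigm_lifetime_dynamo.txt")
vensim_runs = np.loadtxt("paradigm_lifetime_vensim.txt")

stat, p = ks_2samp(dynamo_runs, vensim_runs)
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
# A large p-value is consistent with the two versions sampling the same outcome
# distribution; a small one flags a structural difference worth chasing down.
```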

Better Lies

Hoisted from the comments, Miles Parker has a nice reflection on modeling in this video, Why Model Reality.

It might be subtitled “Better Lies,” a reference to modeling as the pursuit of better stories about the world that are never quite true (a variation on the famous Box quote, “All models are wrong but some are useful”). A few nice points that I picked out along the way:

  • All thinking, even about the future, is retrospective.
  • Big Data is Big Dumb, because we’re collecting more and more detail about a limited subset of reality, and thus suffer from sampling and “if your only tool is a hammer …” bias.
  • A crucial component of a modeling approach is a “bullshit detector” – reality checks that identify problems at various levels on the ladder of inference.
  • Model design is more than software engineering.
  • Often the modeling process is a source of key insights, and you don’t even need to run the model.
  • Modeling is a social process.

Coming back to the comment,

I think one of the greatest values of a model is that it can bring you to the point where you say “There isn’t any way to build a model within this methodology that is not self-contradicting. Therefore everyone in this room is contradicting themselves before they even open their mouths.”

I think that’s close to what Dana Meadows was talking about when she placed paradigms and transcendence of paradigms on the list of places to intervene in systems.

It reminds me of Gödel’s incompleteness theorems. With that as a model, I’d argue that one can construct fairly trivial models that aren’t self-contradictory. They might contradict a lot of things we think we know about the world, but by virtue of their limited expressiveness remain at least true to themselves.

Going back to the elasticity example, if I assert that oilConsumption = oilPrice^epsilon, there’s no internal contradiction as long as I use the same value of epsilon for each proposition I consider. I’m not even sure what an internal contradiction would look like in such a simple framework. However, I could come up with a long list of external consistency problems with the model: dimensional inconsistency, lack of dynamics, omission of unobserved structure, failure to conform to data ….

In the same way, I would tend to argue that general equilibrium is an internally consistent modeling paradigm that just happens to have relatively little to do with reality, yet is sometimes useful. I suppose that Frank Ackerman might disagree with me, on the grounds that equilibria are not necessarily unique or stable, which could raise an internal contradiction by violating the premise of the modeling exercise (welfare maximization).

Once you step beyond models with algorithmically simple decision making (like CGE), the plot thickens. There’s Condorcet’s paradox and Arrow’s impossibility theorem, the indeterminacy of Arthur’s El Farol bar problem, and paradoxes of zero discount rates on welfare.

It’s not clear to me that all interesting models of phenomena that give rise to self-contradictions must be self-contradicting though. For example, I suspect that Sterman & Wittenberg’s model of Kuhnian scientific paradigm succession is internally consistent.

Maybe the challenge is that the universe is self-referential and full of paradoxes and irreconcilable paradigms. Therefore as soon as we attempt to formalize our understanding of such a mess, either with nontrivial models, or trivial models assisting complex arguments, we are dragged into the quagmire of self-contradiction.

Personally, I’m not looking for the cellular automaton that runs the universe. I’m just hoping for a little feedback control on things that might make life on earth a little better. Maybe that’s a paradoxical quest in itself.

Elasticity contradictions

If a global oil shock reduces supply 10%, the price of crude will rise to $20,000/barrel, with fuel expenditures consuming more than the entire GDP of importing nations.

At least that’s what you’d predict if you think the price elasticity of oil demand is about -0.02. I saw that number in a Breakthrough post, citing Kevin Drum, citing Early Warning, citing the IMF. It’s puzzling that Breakthrough is plugging small price elasticities here, when their other arguments about the rebound effect require elasticities of large magnitude.
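
For the record, the arithmetic behind the opening claim is a one-liner. Assuming a constant-elasticity demand curve, Q = Q0*(P/P0)^ε, a baseline price of roughly $100/barrel, and world consumption of about 85 million barrels/day (my round numbers, not from the cited posts):

```python
# Back-of-envelope: price needed to ration a 10% supply cut with elasticity -0.02
eps = -0.02     # price elasticity of oil demand (the figure cited above)
p0 = 100.0      # $/barrel, assumed baseline price
q0 = 85e6       # barrels/day, rough world consumption
cut = 0.10      # 10% supply reduction

p1 = p0 * (1 - cut) ** (1 / eps)                 # price that clears the market
annual_spend = p1 * q0 * (1 - cut) * 365 / 1e12  # $ trillion per year
print(f"new price ≈ ${p1:,.0f}/barrel, annual crude spend ≈ ${annual_spend:,.0f} trillion")
# -> roughly $19,000/barrel and several hundred trillion dollars a year,
#    versus world GDP on the order of $70 trillion - hence the contradiction.
```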

The real constraint on nuclear power: war

A future where everything goes right for nuclear power, with advancing technology driving down costs, making reactors a safe and ubiquitous energy source, and providing a magic bullet for climate change, might bring other surprises.

For example, technology might also make supersonic cruise missiles cheap and ubiquitous.

[Image: BrahMos supersonic cruise missile]

The Fukushima operators appear to be hanging in there. But imagine how they’d be coping if someone fired a missile at them once in a while.

Fortunately, reactors today are mostly in places where peace and rule of law prevail.

[Image: world map]

But peace and good governance aren’t exactly the norm in places where emissions are rising rapidly, or the poor need energy.

[Image: governance indicators]

Building lots of nuclear power plants is ultimately a commitment to peace, or at least acceptance of rather dreadful consequences of war (not necessarily war with nuclear weapons, but war with conventional weapons turning nuclear reactors into big dirty bombs).

One would hope that abundant, clean energy would reduce the motivation to blow things up, but how much are we willing to gamble on that?

Lakoff on “The Country We Believe In”

George Lakoff has an interesting take on the president’s April 13 budget speech,

Last week, on April 13, 2011, President Obama gave all Democrats and all progressives a remarkable gift. Most of them barely noticed. They looked at the President’s speech as if it were only about budgetary details. But the speech went well beyond the budget. It went to the heart of progressive thought and the nature of American democracy, and it gave all progressives a model of how to think and talk about every issue.

I’m definitely in the “barely noticed” category. The interesting thing, George argues, is that the speech is really about systems. Part concerns a system of values:

The policy topic happened to be the budget, but he called it “The Country We Believe In” for a reason. The real topic was how the progressive moral system defines the democratic ideals America was founded on, and how those ideals apply to specific issues.

More interesting to me, another key theme is systems in the “systems thinking” sense:

Systems Thinking

President Obama, in the same speech, laid the groundwork for another crucial national discussion: systems thinking, which has shown up in public discourse mainly in the form of “systemic risk” of the sort that led to the global economic meltdown. The president brought up systems thinking implicitly, at the center of his budget proposal. He observed repeatedly that budget deficits and “spending” do not occur in isolation. The choice of what to cut and what to keep is a matter of factors external to the budget per se. Long-term prosperity, economic recovery, and job creation, he argued, depend upon maintaining “investments” — investments in infrastructure (roads, bridges, long-distance rail), education, scientific research, renewable energy, and so on. The maintenance of American values, he argued, is outside of the budget in itself, but is at the heart of the argument about what to cut. The fact is that the rich have gotten rich because of the government — direct corporate subsidies, access to publicly-owned resources, access to government research, favorable trade agreements, roads and other means of transportation, education that provides educated workers, tax loopholes, and innumerable government resources are taken advantage of by the rich, but paid for by all of us. What is called a “tax break” for the rich is actually a redistribution of wealth from the poor and middle class—whose incomes have gone down—to those who have considerably more money than they need, money they have made because of tax investments by the rest of America.

The President provided a beautiful example of systems thinking. Under the Republican budget plan, the President would get a $200,000 a year tax break, which would be paid for by cutting programs for seniors, with the result that 33 seniors would be paying $6,000 more a year for health care to pay for his tax break. To see this, you have to look outside of the federal budget to the economic system at large, in which you can see how budget cuts will be balanced by increases in costs to others. A cut here in the budget is balanced by an increase outside the federal budget for real human beings.

When a system has causal effects, as in the above cases, we speak of “systemic causation.” “Systemic risks” are the risks created when there is systemic causation. Systemic causation contrasts with direct causation, as when, say, someone lifts something, or throws something, or shoots someone.

Linguists have discovered that every language studied has direct causation in its grammar, but no language has systemic causation in its grammar. Systemic causation is a harder concept and has to be learned either through socialization or education.

This got me interested in the original speech (transcript, video).

From our first days as a nation, we have put our faith in free markets and free enterprise as the engine of America’s wealth and prosperity. More than citizens of any other country, we are rugged individualists, a self-reliant people with a healthy skepticism of too much government.

But there has always been another thread running throughout our history – a belief that we are all connected; and that there are some things we can only do together, as a nation. We believe, in the words of our first Republican president, Abraham Lincoln, that through government, we should do together what we cannot do as well for ourselves.

There’s some feedback:

Ultimately, all this rising debt will cost us jobs and damage our economy. It will prevent us from making the investments we need to win the future. We won’t be able to afford good schools, new research, or the repair of roads and bridges – all the things that will create new jobs and businesses here in America. Businesses will be less likely to invest and open up shop in a country that seems unwilling or unable to balance its books. And if our creditors start worrying that we may be unable to pay back our debts, it could drive up interest rates for everyone who borrows money – making it harder for businesses to expand and hire, or families to take out a mortgage.

And recognition of systemic pressures for deficits:

But that starts by being honest about what’s causing our deficit. You see, most Americans tend to dislike government spending in the abstract, but they like the stuff it buys. Most of us, regardless of party affiliation, believe that we should have a strong military and a strong defense. Most Americans believe we should invest in education and medical research. Most Americans think we should protect commitments like Social Security and Medicare. And without even looking at a poll, my finely honed political skills tell me that almost no one believes they should be paying higher taxes.

Because all this spending is popular with both Republicans and Democrats alike, and because nobody wants to pay higher taxes, politicians are often eager to feed the impression that solving the problem is just a matter of eliminating waste and abuse – that tackling the deficit issue won’t require tough choices. Or they suggest that we can somehow close our entire deficit by eliminating things like foreign aid, even though foreign aid makes up about 1% of our entire budget.

There’s a bit of dynamics implicit in the discussion (e.g., the role of debt accumulation), but I think one thing is missing: straightforward grappling with worse-before-better behavior. The president proposes to go after waste (a favorite of all politicians) and tax breaks for the rich (far more sensible than the Ryan proposal), but doesn’t quite come to grips with the underlying question of how we can continue to feel prosperous and secure, when fundamentally we can’t (or at least shouldn’t) return to a previous pattern of unsustainable consumption in excess of our income funded by budget, trade and environmental deficits. What we really need, per yesterday’s post, is a reframing of what is now perceived as austerity as an opportunity to live with better health, relationships and security.

I part ways with Lakoff a bit on one topic:

Progressives tend to think more readily in terms of systems than conservatives. We see this in the answers to a question like, “What causes crime?” Progressives tend to give answers like economic hardship, or lack of education, or crime-ridden neighborhoods. Conservatives tend more to give an answer like “bad people — lock ‘em up, punish ‘em.” This is a consequence of a lifetime of thinking in terms of social connection (for progressives) and individual responsibility (for conservatives). Thus conservatives did not see the President’s plan, which relied on systemic causation, as a plan at all for directly addressing the deficit.

Differences in systemic thinking between progressives and conservatives can be seen in issues like global warming and financial reform. Conservatives have not recognized human causes of global warming, partly because they are systemic, not direct. When a huge snowstorm occurred in Washington DC recently, many conservatives saw it as disproving the existence of global warming — “How could warming cause snow?” Similarly, conservatives, thinking in terms of individual responsibility and direct causation, blamed homeowners for foreclosures on their homes, while progressives looked to systemic explanations, seeking reform in the financial system.

Certainly it is true that self-interested denial of feedback (or externalities, as an economist might describe some feedbacks) has found its home in the conservative and libertarian movements. But that doesn’t mean all conservative thought is devoid of systems thinking, and one can easily look back at history and find progressive or liberal policies that have also ignored systemic effects. Indeed, the conservative critique of progressive policies addressing crime and poverty issues has often been evolutionary arguments about the effects of incentives – a very systemic view. The problem is, words don’t provide enough formalism or connection to data to determine whose favorite feedback loops might dominate, so philosophical arguments about the merits of turn-the-other-cheek or an-eye-for-an-eye can go on forever. Models can assist with resolving these philosophical debates. However, at present public discourse is almost devoid of thinking, and often anti-intellectual, which makes it tough to contemplate sophisticated solutions to our problems.

Thanks to James McFarland for the tip.

Tim Jackson on the horns of the growth dilemma

I just ran across a nice talk by Tim Jackson, author of Prosperity Without Growth, on BigIdeas. It’s hard to summarize such a wide-ranging talk, but I’d call it a synthesis of the physical (planetary boundaries and exponential growth) and the behavioral (what is the economy for, how does it influence our choices, and how can we change it?). The horns of the dilemma are that growth can’t go on forever, yet we don’t know how to run an economy that doesn’t grow. (This of course raises the question, “growth of what?” – where the what is a mix of material and non-material things – a distinction that lies at the heart of many communication failures around the Limits to Growth debate.)

There’s an article covering the talk at ABC.au, but it’s really worth a listen at http://mpegmedia.abc.net.au/rn/podcast/2010/07/bia_20100704_1705.mp3

Positive Feedback Pricing

Hat tip to John Sterman & Travis Franck for passing along this cool example of positive feedback, discovered on Amazon by evolutionary biologist Michael Eisen. Two sellers apparently used algorithmic pricing that led to exponential growth of the price of a book:

[Figure: book price over time]

This reminds me of a phenomenon that’s puzzled me for some time: “new economy” firms have at least as many opportunities for systemic problems as any others, yet modeling remains somewhat “old economy”, focused on physical products, supply chains, and more traditional services like health care. Perhaps this is just my own observational sampling bias; I’d be curious to know whether others see things the same way.
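
The mechanism, as Eisen described it, was two repricing bots chained together: one seller pricing just below the other, the other pricing well above the first, so each pass multiplies both prices. A toy Python version (the multipliers are illustrative assumptions, not the exact figures from the incident):

```python
# Two chained algorithmic repricers; multipliers are illustrative, not the actual values.
price_a, price_b = 50.0, 50.0   # starting prices, $
undercut = 0.998                # seller A prices just below seller B
markup = 1.27                   # seller B prices well above seller A

for day in range(1, 31):
    price_a = undercut * price_b
    price_b = markup * price_a
    if day % 10 == 0:
        print(f"day {day}: A = ${price_a:,.2f}   B = ${price_b:,.2f}")

# Each cycle multiplies both prices by undercut * markup (about 1.27), so they grow
# exponentially until a human notices - an unchecked positive feedback loop in the wild.
```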