Is social networking making us dumber?

Another great conversation at the Edge weaves together a number of themes I’ve been thinking about lately, like scientific revolutions, big data, learning from models, filter bubbles and the balance between content creation and consumption. I can’t embed, or do it full justice, so go watch the video or read the transcript (the latter is a nice rarity these days).

Pagel’s fundamental hypothesis is that humans, as social animals, are wired for imitation more than innovation, for the very good reason that imitation is easy, while innovation is hard, error-prone and sometimes dangerous. Better communication intensifies the advantage to imitators, as it has become incredibly cheap to observe our fellows in large networks like Facebook. There are a variety of implications of this, including the possibility that, more than ever, large companies have strong incentives to imitate through acquisition of small innovators rather than to risk innovating themselves. This resonates very much with Ventana colleague David Peterson’s work on evolutionary simulation of the origins of economic growth and creativity.

Continue reading “Is social networking making us dumber?”

Time to short some social network stocks?

I don’t want to wallow too long in metaphors, so here’s something with a few equations.

A recent arXiv paper by Peter Cauwels and Didier Sornette examines market projections for Facebook and Groupon, and concludes that they’re wildly overvalued.

We present a novel methodology to determine the fundamental value of firms in the social-networking sector based on two ingredients: (i) revenues and profits are inherently linked to its user basis through a direct channel that has no equivalent in other sectors; (ii) the growth of the number of users can be calibrated with standard logistic growth models and allows for reliable extrapolations of the size of the business at long time horizons. We illustrate the methodology with a detailed analysis of facebook, one of the biggest of the social-media giants. There is a clear signature of a change of regime that occurred in 2010 on the growth of the number of users, from a pure exponential behavior (a paradigm for unlimited growth) to a logistic function with asymptotic plateau (a paradigm for growth in competition). […] According to our methodology, this would imply that facebook would need to increase its profit per user before the IPO by a factor of 3 to 6 in the base case scenario, 2.5 to 5 in the high growth scenario and 1.5 to 3 in the extreme growth scenario in order to meet the current, widespread, high expectations. […]

I’d argue that the basic approach, fitting a logistic to the customer base growth trajectory and multiplying by expected revenue per customer, is actually pretty ancient by modeling standards. (Most system dynamicists will be familiar with corporate growth models based on the mathematically-equivalent Bass diffusion model, for example.) So the surprise for me here is not the method, but that forecasters aren’t using it.
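For what it’s worth, the equivalence is easy to see (my notation, not the paper’s): the Bass diffusion equation with the innovation (advertising) coefficient set to zero is exactly the logistic,

\[ \frac{dN}{dt} = \left(p + q\,\frac{N}{K}\right)(K - N) \;\;\xrightarrow{\;p\,=\,0\;}\;\; \frac{dN}{dt} = q\,N\left(1 - \frac{N}{K}\right), \]

so fitting a logistic to the user base is the same exercise as a Bass-style corporate growth model without the advertising term.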

Looking around at some forecasts, it’s hard to say what forecasters are actually doing. There’s lots of handwaving and blather about multipliers, and little revelation of actual assumptions (unlike the paper). It appears to me that a lot of forecasters are counting on big growth in revenue per user, and not really thinking deeply about the user population at all.

To satisfy my curiosity, I grabbed the data out of Cauwels & Sornette, updated it with the latest user count and revenue projection, and repeated the logistic model analysis. A few observations:

I used a generalized logistic, which has one more parameter, capturing possible nonlinearity in the decline of the growth rate of users with increasing saturation of the market. Here’s the core model:
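For the curious, here’s roughly what that structure looks like outside of Vensim: a minimal Python sketch of a generalized logistic fitted to made-up user counts. The parameter names, data, and scipy-based estimation are mine, purely for illustration; they are not the actual model or data.

```python
# Minimal sketch of a generalized logistic user-growth model (illustrative
# only; the real analysis was done in Vensim with the C&S data). The extra
# exponent nu captures nonlinearity in how growth slows as the market
# saturates; nu = 1 recovers the ordinary logistic.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def users(t, U0, r, K, nu):
    """Integrate dU/dt = r * U * (1 - (U/K)**nu) from an initial user base U0."""
    dUdt = lambda U, _t: r * U * (1.0 - (U / K) ** nu)
    return odeint(dUdt, U0, t)[:, 0]

# Made-up annual observations of users (billions), just to make this runnable.
t_obs = np.array([0., 1., 2., 3., 4., 5., 6., 7.])
u_obs = np.array([0.05, 0.10, 0.17, 0.30, 0.48, 0.62, 0.72, 0.80])

p0 = [0.05, 1.0, 1.0, 1.0]                                # U0, r, K, nu guesses
bounds = ([0.01, 0.1, 0.5, 0.2], [0.2, 3.0, 3.0, 3.0])
params, _ = curve_fit(users, t_obs, u_obs, p0=p0, bounds=bounds)
U0, r, K, nu = params
print(f"carrying capacity K = {K:.2f} billion, saturation exponent nu = {nu:.2f}")
```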

Continue reading “Time to short some social network stocks?”

Social network valuation with logistic models

This is a logistic growth model for Facebook’s user base, with a very simple financial projection attached. It’s inspired by:

Quis pendit ipsa pretia: facebook valuation and diagnostic of a bubble based on nonlinear demographic dynamics

Peter Cauwels, Didier Sornette

We present a novel methodology to determine the fundamental value of firms in the social-networking sector based on two ingredients: (i) revenues and profits are inherently linked to its user basis through a direct channel that has no equivalent in other sectors; (ii) the growth of the number of users can be calibrated with standard logistic growth models and allows for reliable extrapolations of the size of the business at long time horizons. We illustrate the methodology with a detailed analysis of facebook, one of the biggest of the social-media giants. There is a clear signature of a change of regime that occurred in 2010 on the growth of the number of users, from a pure exponential behavior (a paradigm for unlimited growth) to a logistic function with asymptotic plateau (a paradigm for growth in competition). We consider three different scenarios, a base case, a high growth and an extreme growth scenario. Using a discount factor of 5%, a profit margin of 29% and 3.5 USD of revenues per user per year yields a value of facebook of 15.3 billion USD in the base case scenario, 20.2 billion USD in the high growth scenario and 32.9 billion USD in the extreme growth scenario. According to our methodology, this would imply that facebook would need to increase its profit per user before the IPO by a factor of 3 to 6 in the base case scenario, 2.5 to 5 in the high growth scenario and 1.5 to 3 in the extreme growth scenario in order to meet the current, widespread, high expectations. …

(via the arXiv blog)

This is not an exact replication of the model (though you can plug in the parameters from C&S’ paper to replicate their results). I used slightly different estimation methods, a generalization of the logistic (for saturation exponent ≠ 1), and variable revenues and interest rates in the projections (also optional).
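As a rough sanity check on the valuation arithmetic, using only the round numbers quoted in the abstract (and collapsing the growth path to a steady state, so the implied user count is an equivalent plateau, not the paper’s fitted carrying capacity):

```python
# Back out the user base implied by the base case valuation, treating the
# firm as a simple perpetuity of profits (numbers from the C&S abstract).
discount_rate    = 0.05    # per year
profit_margin    = 0.29
revenue_per_user = 3.5     # USD per user per year
base_case_value  = 15.3e9  # USD

profit_per_user = revenue_per_user * profit_margin               # ~1.0 USD/user/yr
implied_users   = base_case_value * discount_rate / profit_per_user
print(f"implied steady-state user base = {implied_users/1e9:.2f} billion")
# ~0.75 billion users: with revenue per user roughly flat, the user ceiling
# does most of the work in the valuation.
```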

This is a good illustration of how calibration payoffs work. The payoff in this model is actually a policy payoff, because the weighted sum-squared-error is calculated explicitly in the model. That makes it possible to generate Monte Carlo samples and filter them by SSE, and also makes it easier to estimate the scale and variation in the standard error of user base reports.
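In the same illustrative Python terms (names mine, not the Vensim implementation), the explicit payoff and the Monte Carlo filter amount to something like this:

```python
import numpy as np

def weighted_sse(model_users, data_users, data_sd):
    """Policy-style calibration payoff: weighted sum of squared errors,
    computed explicitly so it's available as an ordinary model output."""
    err = (model_users - data_users) / data_sd
    return np.nansum(err ** 2)

def mc_filter(param_samples, simulate, data_users, data_sd, tol=2.0):
    """Keep only the Monte Carlo draws whose payoff is within tol of the best."""
    payoffs = np.array([weighted_sse(simulate(p), data_users, data_sd)
                        for p in param_samples])
    keep = payoffs <= payoffs.min() + tol
    return [p for p, k in zip(param_samples, keep) if k], payoffs
```

Here simulate(p) would be one run of the growth model sketched above for a single parameter draw p.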

The model is connected to input data in a spreadsheet. Most is drawn from the paper, but I updated users and revenues with the latest estimates I could find.

A command script replicates optimization runs that fit the model to data for various values of the user carrying capacity K.
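The Python analog of that command script, reusing the illustrative users() and weighted_sse() sketches above, is just an outer loop over fixed values of K, re-optimizing the remaining parameters at each step to trace out the payoff as a function of carrying capacity:

```python
from scipy.optimize import minimize

def profile_K(K_values, t_obs, u_obs, u_sd):
    """Re-fit U0, r, nu with K held fixed; return the best payoff for each K."""
    profile = {}
    for K in K_values:
        objective = lambda p: weighted_sse(users(t_obs, p[0], p[1], K, p[2]),
                                           u_obs, u_sd)
        fit = minimize(objective, x0=[u_obs[0], 1.0, 1.0], method="Nelder-Mead")
        profile[K] = fit.fun
    return profile
```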

Note that there are two views, one for users, and one for financial projections.

See my accompanying blog post for some reflections on the outcome.

This model requires Vensim DSS, Pro, or the Model Reader. facebook 3.vpm or facebook3.zip (The .zip is probably easier if you have DSS or Pro and want to work with the supplementary control files.)

Update: I’ve added another set of models for Groupon: groupon 1.vpm, groupon 2.vpm and groupon.zip, groupon3.zip

See my latest blog post for details.


Kill your iPad?

Are iPads the successor to the dark side of TV?

I love the iPad, but it seems rather limited as a content creation device. It’s good at some things (GarageBand), but even with a good app, I can’t imagine serious model building on it. Even some social media activities, like Twitter, seem a bit awkward, because it’s hard to multitask effectively to share web links and other nontrivial content.

It seems that there’s some danger of it becoming a channel for content consumption, insulating users in their filter bubbles and leaving aspiring content creators disempowered. The monolithic gatekeeper model for apps seems potentially problematic in the long term as well, as a distortion to the evolutionary landscape for software.

It would be a bit ironic if cars someday bore bumper stickers protesting a new vehicle for mindless media delivery:

“You watch television to turn your brain off and you work on your computer when you want to turn your brain on.”

— Steve Jobs, co-founder of Apple Computer and Pixar, in Macworld Magazine, February 2004

The danger of path-dependent information flows on the web

Eli Pariser argues that “filter bubbles” are bad for us and bad for democracy:

As web companies strive to tailor their services (including news and search results) to our personal tastes, there’s a dangerous unintended consequence: We get trapped in a “filter bubble” and don’t get exposed to information that could challenge or broaden our worldview.

Filter bubbles are close cousins of confirmation bias, groupthink, polarization and other cognitive and social pathologies.

A key feedback is this reinforcing loop, from Sterman & Wittenberg’s model of path dependence in Kuhnian scientific revolutions:

[Figure: the “Anomalies” reinforcing loop from Sterman & Wittenberg]

As confidence in an idea grows, the delay in recognition (or frequency of outright rejection) of anomalous information grows larger. As a result, confidence in the idea – flat earth, 100 mpg carburetor – can grow far beyond the level that would be considered reasonable if contradictory information were recognized.
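A toy stylization of that loop (mine, not the Sterman & Wittenberg model) makes the asymmetry visible: the reinforcing loop acts quickly, while the eroding effect of anomalies is delayed in proportion to confidence, so confidence overshoots before the backlog of contradictory evidence finally bites.

```python
# Toy stylization of the anomaly-rejection loop (not the actual S&W model).
# Confidence reinforces itself quickly; anomalies erode it only after a
# recognition delay that lengthens as confidence grows.
dt, steps = 0.1, 1000
confidence = 0.3          # confidence in the idea (0..1)
backlog = 0.0             # anomalous findings not yet recognized
history = []
for _ in range(steps):
    delay = 1.0 + 20.0 * confidence            # recognition delay grows with confidence
    recognized = backlog / delay               # first-order recognition of anomalies
    backlog += (0.2 - recognized) * dt         # steady inflow of new anomalies
    confidence += (0.3 * confidence * (1.0 - confidence)  # fast reinforcing loop
                   - 0.4 * recognized) * dt                # delayed erosion
    confidence = min(max(confidence, 0.0), 1.0)
    history.append(confidence)
print(f"peak confidence {max(history):.2f}, final confidence {history[-1]:.2f}")
```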

The dynamics resulting from this and other positive feedbacks play out in many spheres. Wittenberg & Sterman give an example:

The dynamics generated by the model resemble the life cycle of intellectual fads. Often a promising new idea rapidly becomes fashionable through excessive optimism, aggressive marketing, media hype, and popularization by gurus. Many times the rapid influx of poorly trained practitioners, or the lack of established protocols and methods, causes expectations to outrun achievements, leading to a backlash and disaffection. Such fads are commonplace, especially in (quack) medicine and most particularly in the world of business, where “new paradigms” are routinely touted in the pages of popular journals of management, only to be displaced in the next issue by what many business people have cynically come to call the next “flavor of the month.”

Typically, a guru proposes a new theory, tool, or process promising to address persistent problems facing businesses (that is, a new paradigm claiming to solve the anomalies that have undermined the old paradigm.) The early adopters of the guru’s method spread the word and initiate some projects. Even in cases where the ideas of the guru have little merit, the energy and enthusiasm a team can bring to bear on a problem, coupled with Hawthorne and placebo effects and the existence of “low hanging fruit” will often lead to some successes, both real and apparent. Proponents rapidly attribute these successes to the use of the guru’s ideas. Positive word of mouth then leads to additional adoption of the guru’s ideas. (Of course, failures are covered up and explained away; as in science there is the occasional fraud as well.) Media attention further spreads the word about the apparent successes, further boosting the credibility and prestige of the guru and stimulating additional adoption.

As people become increasingly convinced that the guru’s ideas work, they are less and less likely to seek or attend to disconfirming evidence. Management gurus and their followers, like many scientists, develop strong personal, professional, and financial stakes in the success of their theories, and are tempted to selectively present favorable and suppress unfavorable data, just as scientists grow increasingly unable to recognize anomalies as their familiarity with and confidence in their paradigm grows. Positive feedback processes dominate the dynamics, leading to rapid adoption of those new ideas lucky enough to gain a sufficient initial following. …

The wide range of positive feedbacks identified above can lead to the swift and broad diffusion of an idea with little intrinsic merit because the negative feedbacks that might reveal that the tools don’t work operate with very long delays compared to the positive loops generating the growth. …

For filter bubbles, I think the key positive loops are as follows:

[Figure: filter bubble feedback loops]

Loops R1 are the user’s well-worn path. We preferentially visit sites presenting information (theory x or y) in which we have confidence. In doing so, we consider only a subset of all information, building our confidence in the visited theory. This is a built-in part of our psychology, and to some extent a necessary part of the process of winnowing the world’s information fire hose down to a usable stream.

Loops R2 involve the information providers. When we visit a site, advertisers and other observers (Nielsen) notice, and this provides the resources (ad revenue) and motivation to create more content supporting theory x or y. This has also been a part of the information marketplace for a long time.

R1 and R2 are stabilized by some balancing loops (not shown). Users get bored with an all-theory-y diet, and seek variety. Providers seek out controversy (real or imagined) and sensationalize x-vs-y battles. As Pariser points out, there’s less scope for the positive loops to play out in an environment with a few broad media outlets, like city newspapers. The front page of the Bozeman Daily Chronicle has to work for a wide variety of readers. If the paper let the positive loops run rampant, it would quickly lose half its readership. In the online world, with information customized at the individual level, there’s no such constraint.

Individual filtering introduces R3. The filter observes site visit patterns and preferentially serves up information consistent with past preferences. This creates a third set of reinforcing feedback processes: as users begin to see what they prefer, they also learn to prefer what they see. In addition, on Facebook and other social networking sites every person is essentially a site, and people include one another in their networks preferentially. This is another mechanism implementing loop R1 – birds of a feather flock together, share information consistent with their mutual preferences, and potentially follow one another down conceptual rabbit holes.
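To see how much lock-in these loops can produce, here’s a minimal toy (mine, essentially a Pólya urn, not anything from Pariser): if the filter serves theory x or y in proportion to past clicks, and the user clicks what’s served, then early random differences harden into a durable bubble.

```python
import random

# Polya-urn-style sketch of filter lock-in (illustrative only).
def bubble_share(steps=1000, seed=None):
    """Fraction of the user's diet that ends up being theory x."""
    rng = random.Random(seed)
    clicks = {"x": 1, "y": 1}            # start indifferent between theories
    for _ in range(steps):
        total = clicks["x"] + clicks["y"]
        served = "x" if rng.random() < clicks["x"] / total else "y"
        clicks[served] += 1              # seeing it reinforces preferring it
    return clicks["x"] / (clicks["x"] + clicks["y"])

# Identical users, different histories: each run locks in to a different mix.
print([round(bubble_share(seed=s), 2) for s in range(5)])
```

Every simulated user starts out indifferent, yet each ends up with a very different, self-reinforced mix of x and y.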

The result of the social web and algorithmic filtering is to upset the existing balance of positive and negative feedback. The question is, were things better before, or are they better now?

I’m not exactly sure how to tell. Presumably one could observe trends in political polarization and duration of fads for an indication of the direction of change, but that still leaves open the question of whether we have more or less than the “optimal” quantity of pet rocks, anti-vaccine campaigns and climate skepticism.

My suspicion is that we now have too much positive feedback. This is consistent with Wittenberg & Sterman’s insight from the modeling exercise, that the positive loops are fast, while the negative loops are weak or delayed. They offer a prescription for that:

The results of our model suggest that the long-term success of new theories can be enhanced by slowing the positive feedback processes, such as word of mouth, marketing, media hype, and extravagant claims of efficacy by which new theories can grow, and strengthening the processes of theory articulation and testing, which can enhance learning and puzzle-solving capability.

In the video, Pariser implores the content aggregators to carefully ponder the consequences of filtering. I think that also implies building more negative feedback into the algorithms. It’s not clear that providers have an incentive to do that, though. The positive loops tend to reward individuals for successful filtering, while the risks (e.g., catastrophic groupthink) accrue partly to society. At the same time, it’s hard to imagine a regulatory fix that does not flirt with censorship.

Absent a global fix, I think it’s incumbent on individuals to practice good mental hygiene, by seeking diverse information that stands some chance of refuting their preconceptions once in a while. If enough individuals demand transparency in filtering, as Pariser suggests, it may even be possible to gain some local control over the positive loops we participate in.

I’m not sure that goes far enough though. We need tools that serve the social equivalent of “strengthening the processes of theory articulation and testing” to improve our ability to think and talk about complex systems. One such attempt is the “collective intelligence” behind Climate Colab. It’s not quite Facebook-scale yet, but it’s a start. Semantic web initiatives are starting to help by organizing detailed data, but we’re a long way from having a “behavioral dynamic web” that translates structure into predictions of behavior in a shareable way.

Update: From Tech Review, technology for breaking the bubble