Selection for deception?

Eric R. Weinstein on Edge’s 2011 question:

The sophisticated “scientific concept” with the greatest potential to enhance human understanding may be argued to come not from the halls of academe, but rather from the unlikely research environment of professional wrestling.

Evolutionary biologists Richard Alexander and Robert Trivers have recently emphasized that it is deception rather than information that often plays the decisive role in systems of selective pressures. Yet most of our thinking continues to treat deception as something of a perturbation on the exchange of pure information, leaving us unprepared to contemplate a world in which fakery may reliably crowd out the genuine. In particular, humanity’s future selective pressures appear likely to remain tied to economic theory which currently uses as its central construct a market model based on assumptions of perfect information.

If we are to take selection more seriously within humans, we may fairly ask what rigorous system would be capable of tying together an altered reality of layered falsehoods in which absolutely nothing can be assumed to be as it appears. Such a system, in continuous development for more than a century, is known to exist and now supports an intricate multi-billion dollar business empire of pure hokum. It is known to wrestling’s insiders as “Kayfabe”.

Were Kayfabe to become part of our toolkit for the twenty-first century, we would undoubtedly have an easier time understanding a world in which investigative journalism seems to have vanished and bitter corporate rivals cooperate on everything from joint ventures to lobbying efforts. Perhaps confusing battles between “freshwater” Chicago macroeconomists and Ivy League “saltwater” theorists could be best understood as happening within a single “orthodox promotion” given that both groups suffered no injury from failing (equally) to predict the recent financial crisis. …

Reasoning was not designed to pursue the truth

Uh oh:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found. – Mercier & Sperber, via a post that includes a video conversation with coauthor Mercier.

This makes sense to me, but I think it can’t be the whole story. There must be at least a little evolutionary advantage to an ability to predict the consequences of one’s actions. The fact that it appears to be dominated by confirmation bias and other pathologies may be indicative of how much we are social animals, and how long we’ve been that way.

It’s easy to see why this might occur by looking at the modern evolutionary landscape for ideas. There’s immediate punishment for touching a hot stove, but for any complex system, attribution is difficult. It’s easy to see how the immediate rewards from telling your fellow tribesmen crazy things might exceed the delayed and distant rewards of actually being right. In addition, wherever there are stocks of resources lying about, there are strong incentives to succeed by appropriation rather than creation. If you’re really clever with your argumentation, you can even make appropriation resemble creation.

The solution is to use our big brains to raise the bar, by making better use of models and other tools for analysis of and communication about complex systems.

Nothing that you will learn in the course of your studies will be of the slightest possible use to you in after life, save only this, that if you work hard and intelligently you should be able to detect when a man is talking rot, and that, in my view, is the main, if not the sole, purpose of education. – John Alexander Smith, Oxford, 1914

So far, though, models seem to be serving argumentation as much as reasoning. Are we stuck with that?

The myth of optimal depletion

Fifteen years ago, when I was working on my dissertation, I read a lot of the economic literature on resource management. I was looking for a behavioral model of the management of depletable resources like oil and gas. I never did find one (and still haven’t, though I haven’t been looking as hard in the last few years).

Instead, the literature focused on optimal depletion models. Essentially these characterize the extraction of resources that would occur in an idealized market – a single, infinitely-lived resource manager, perfect information about the resource base and about the future (!), no externalities, no lock-in effects.

It’s always useful to know the optimal trajectory for a managed resource – it identifies the upper bound for improvement and suggests strategic or policy changes to achieve the ideal. But many authors have transplanted these optimal depletion models into real-world policy frameworks directly, without determining whether the idealized assumptions hold in reality.

The problem is that they don’t. There are some obvious failings – for example, I’m pretty certain a priori that no resource manager actually knows the future. Unreal assumptions are reflected in unreal model behavior – I’ve seen dozens of papers that discuss results matching the classic Hotelling framework – prices rising smoothly at the interest rate, with the extraction rate falling to match – as if it had something to do with what we observe.
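For readers unfamiliar with the Hotelling result mentioned above, it can be reproduced in a few lines: with zero extraction cost, the price of the resource must rise at the interest rate, and the extraction path implied by a demand curve then falls over time. Here’s a minimal sketch; the interest rate, demand elasticity, and stock size are all assumed illustrative values, not calibrated to any real resource:

```python
import numpy as np

# Hotelling rule: with zero extraction cost, price grows at the interest rate,
# p(t) = p0 * exp(r*t). Extraction q(t) follows from isoelastic demand
# q = A * p^(-eps); the initial price p0 is chosen so that cumulative
# extraction just exhausts the resource stock.

r, eps, A = 0.05, 1.5, 100.0   # interest rate, demand elasticity, demand scale (assumed)
S0 = 2000.0                    # initial resource stock (assumed)
T = 200                        # horizon, years
t = np.arange(T)

def cumulative_extraction(p0):
    p = p0 * np.exp(r * t)
    q = A * p ** (-eps)
    return q.sum()

# Bisect (geometrically) on p0 so that total extraction matches the stock
lo, hi = 1e-6, 1e6
for _ in range(100):
    mid = np.sqrt(lo * hi)
    if cumulative_extraction(mid) > S0:
        lo = mid               # price too low: demand would over-extract
    else:
        hi = mid
p0 = np.sqrt(lo * hi)

p = p0 * np.exp(r * t)
q = A * p ** (-eps)
assert np.all(np.diff(p) > 0)  # price rises smoothly at rate r
assert np.all(np.diff(q) < 0)  # extraction falls to match
```

The smoothness is the point: nothing in observed oil and gas price histories looks remotely like this.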

The fundamental failure is valuing the normative knowledge about small, analytically tractable problems above the insight that arises from experiments with a model that describes actual decision making – complete with cognitive limitations, agency problems, and other foibles.

In typical optimal depletion models, an agent controls a resource, and extracts it to maximize discounted utility. Firms succeed in managing other assets reasonably well, so why not? Well, there’s a very fundamental problem: in most places, firms don’t control resources. They control reserves. Governments control resources. As a result, firms’ ownership of the long-term depletion challenge extends only as far as their asset exposure – a few decades at most. If there are principal-agent problems within firms, their effective horizon is even shorter – only as long as the tenure of a manager (worse things can happen, too).
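The effect of a truncated horizon is easy to demonstrate in a toy version of the same optimization. Suppose a manager maximizes discounted log utility of extraction, but only cares about the next H years (asset exposure, or tenure). The optimal plan then crams all extraction into the horizon, so a shorter H front-loads depletion. This is a sketch with assumed parameter values, not a claim about any particular firm:

```python
import numpy as np

# A manager maximizing sum_{t<H} beta^t * log(q_t) subject to sum q_t = S
# extracts q_t proportional to beta^t within the horizon H and nothing after.
# First-order condition: beta^t / q_t is equal across periods.

def extraction_path(S, beta, H, T):
    q = np.zeros(T)
    w = beta ** np.arange(H)
    q[:H] = S * w / w.sum()
    return q

S, beta, T = 1000.0, 0.95, 100
long_view  = extraction_path(S, beta, H=100, T=T)  # patient owner
short_view = extraction_path(S, beta, H=20,  T=T)  # 20-year asset exposure

# The short-horizon manager extracts far more in the first decade
assert short_view[:10].sum() > long_view[:10].sum()
```

Both paths are “optimal” from the agent’s own perspective; the difference is entirely in who owns the long-run consequences.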

Governments are no better; politicians and despots both have incentives to deplete resources to raise money to pacify the populace. This encourages a “sell low” strategy – when oil prices are low, governments have to sell more to meet fixed obligations (the other end of the backward-bending supply curve). And, of course, a government that wisely shepherds its resources can always lose them to a neighbor that extracts its resources quickly and invests the proceeds in military hardware.

The US is unusual in that many mineral rights are privately held, but still the government’s management of its share is instructive. I’ll just skip over the circus at the MMS and go to Montana’s trust lands. The mission of the trust is to provide a permanent endowment for public schools. But the way the trust is run could hardly be less likely to maximize or even sustain school revenue.

Fundamentally, the whole process is unmanaged – the trust makes no attempt to control the rate at which parcels are leased for extraction. Instead, trust procedures put the leasing of tracts in the hands of developers – parcels are auctioned whenever a prospective bidder requests. Once anyone gets a whiff of information about the prospects of a tract, they must act quickly to bid – if they’re early enough, they may get lucky and face little or no competition in the auction (easier than you’d think, because the trust doesn’t provide much notice of sales). Once buyers obtain a lease, they must drill within five years, or the lease expires. This land rush mentality leaves the trust with no control over price or the rate of extraction – they just take their paltry 16% cut (plus or minus), whenever developers choose to give it to them. When you read statements from the government resource managers, they’re unapologetically happy about it: they talk about the trust as if it were a jobs program, not an endowment.

This sort of structure is the norm, not the exception. It would be a strange world in which all of the competing biases in the process cancelled each other out, and yielded a globally optimal outcome in spite of local irrationality. The result, I think, is that policies in climate and energy models are biased, possibly in an unknown direction. On one hand, it seems likely that there’s a negative externality from extraction of public resources above the optimal rate, as in Montana. On the other hand, there might be harmful spillovers from climate or energy policies that increase the use of natural gas, if they exacerbate problems with a suboptimal extraction trajectory.

I’ve done a little sniffing around lately, and it seems that the state of the art in integrated assessment models isn’t too different from what it was in 1995 – most models still use exogenous depletion trajectories or some kind of optimization or equilibrium approach. The only real innovation I’ve seen is a stochastic model-within-a-model approach – essentially, agents know the structure of the system they’re in, but are uncertain about its state, so they make stochastically optimal decisions at each point in time. This is a step in the right direction, but still implies a very high cognitive load and degree of intended rationality that doesn’t square with real institutions. I’d be very interested to hear about anything new that moves toward a true behavioral model of resource management.
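To make the model-within-a-model idea concrete, here’s one way such an agent might be set up: it knows the structure (a stock being drawn down) but not the state (remaining reserves), gets a noisy survey each period, updates a scalar Kalman estimate, and extracts a certainty-equivalent fraction of its current estimate. All the numbers are illustrative assumptions, and this is my own minimal rendering of the approach, not any particular published model:

```python
import numpy as np

rng = np.random.default_rng(0)

S_true = 1000.0              # actual remaining stock (unknown to the agent)
S_hat, P = 800.0, 200.0**2   # prior mean and variance of the agent's belief
R = 100.0**2                 # survey noise variance
T, horizon = 50, 40

for t in range(T):
    # Noisy survey of the remaining stock
    z = S_true + rng.normal(0, np.sqrt(R))
    # Kalman update of the belief (no process noise in this toy)
    K = P / (P + R)
    S_hat = S_hat + K * (z - S_hat)
    P = (1 - K) * P
    # Certainty-equivalent extraction: spread the estimated stock
    # over the remaining planning horizon
    q = max(S_hat / max(horizon - t, 1), 0.0)
    q = min(q, S_true)       # can't extract more than actually exists
    S_true -= q
    S_hat = max(S_hat - q, 0.0)

# Belief uncertainty shrinks as surveys accumulate
assert P < R
```

Even this toy makes the cognitive-load objection visible: the agent has to carry a correct structural model, a belief distribution, and an update rule – a lot to ask of a legislature or a land office.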

Football physics & perception

SEED has a nice story on perception of curving shots in football (soccer).

The physics of the curving trajectory is interesting. In short, a light spinning ball can transition from a circular trajectory to a tighter spiral, surprising the goalkeeper.
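A rough sketch of why the trajectory tightens: the Magnus (lift) force on a spinning ball scales with spin times speed, so lateral acceleration falls only linearly as the ball slows, while the turn radius scales with speed squared over acceleration. As drag bleeds off speed, the radius shrinks and a gentle arc becomes a spiral. The coefficients below are rough assumptions, not measured football values:

```python
import numpy as np

m = 0.43            # ball mass, kg
Cd_area = 0.006     # lumped drag coefficient * area * air density / 2 (assumed)
Cm = 0.1            # lumped Magnus coefficient * spin rate (assumed constant)
dt, T = 0.001, 1.2  # time step and flight time, s

pos = np.zeros(2)
vel = np.array([30.0, 0.0])   # 30 m/s free kick
radii = []

for _ in range(int(T / dt)):
    v = np.linalg.norm(vel)
    drag = -Cd_area * v * vel / m              # quadratic drag, opposes motion
    perp = np.array([-vel[1], vel[0]]) / v     # unit vector perpendicular to velocity
    magnus = Cm * v * perp / m                 # lift ~ spin * speed, sideways
    vel = vel + (drag + magnus) * dt
    pos = pos + vel * dt
    radii.append(v**2 / np.linalg.norm(magnus))  # instantaneous turn radius m*v/Cm

# The turn radius shrinks as the ball slows: the arc tightens into a spiral
assert radii[-1] < radii[0]
```

The radius here is simply m·v/Cm, so the tightening follows directly from deceleration – no exotic aerodynamics needed for the qualitative effect, though the real transition reportedly involves drag-crisis effects as well.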

What I find really interesting, though, is that goalkeepers don’t anticipate this.

But goalkeepers see hundreds of free kicks in practice on a daily basis. Surely they’d eventually adapt to bending shots, wouldn’t they?

… Elite professionals from some of the top soccer clubs in the world were shown simulations of straight and bending free kicks, which disappeared from view 10 to 12.5 meters from the goal. They then had to predict the path of the ball. The players were accurate for straight kicks, but they made systematic errors on bending shots. Instead of taking the curve into account, players tended to assume the ball would continue straight along the path it was following when it disappeared. Even more surprisingly, goalkeepers were no better at predicting the path of bending balls than other players. …

I think the interesting question is, could they be trained to anticipate this? It’s fairly easy for the goalie to observe the early trajectory of a ball, but due to the nonlinear transition to a new curvature, that’s not helpful. To guess whether the ball might suddenly take a wicked turn, one would have to judge its spin, which has to be much harder. My guess is that prediction is difficult, so the only option is to take robust action. In the case of the famous Carlos shot, one might guess that the goalie should have moved to cover the near post, even if he judged that the ball would be wide. (But who am I to say? I’m a lousy soccer player – I find 9-year-olds to be stiff competition.)
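It’s worth checking how big the straight-line extrapolation error from the study actually is. For a ball on a circular arc of radius R that disappears d meters from the goal, extrapolating the tangent misses the true arrival point by roughly R·(1 − cos(d/R)) ≈ d²/(2R). With assumed but plausible numbers:

```python
import numpy as np

def extrapolation_error(R, d):
    """Lateral gap at the goal line between the true circular arc
    and the straight tangent line extrapolated from where the ball
    disappeared (the sagitta of the final arc segment)."""
    theta = d / R              # arc angle swept over the last d meters
    return R * (1 - np.cos(theta))

R, d = 50.0, 10.0              # 50 m turn radius, ball vanishes 10 m out (assumed)
err = extrapolation_error(R, d)
# Small-angle approximation: d**2 / (2*R) = 1.0 m
assert abs(err - d**2 / (2 * R)) < 0.05
```

A meter of error at the goal line is plenty to beat a keeper committed to the wrong spot – and if the curvature tightens late, the true error is even larger than this constant-radius estimate.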

SEED has another example:

I wrote about a similar problem on my blog earlier this year: How baseball fielders track fly balls. Researchers found that even when the ball is not spinning, outfielders don’t follow the optimum path to the ball—instead they constantly update their position in response to the ball’s motion.

At first this sounds like a classic lion-gazelle pursuit problem. But there’s one key difference: in pursuit problems I’ve seen, the opponent’s location is known, so the questions are all about physics and (maybe) strategic behavior. In soccer and baseball, at least part of the ball’s state (spin, for example) is at best poorly observed by the receiver. Therefore trajectories that appear to be suboptimal might actually be robust responses to imperfect measurement.
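The outfielder result has a well-known candidate explanation that fits this robust-response story: the “optical acceleration cancellation” heuristic. The fielder never computes a landing point; he just moves so that the tangent of the ball’s elevation angle keeps rising at a constant rate, and chasing that local cue delivers him to the landing spot. The underlying geometric fact is easy to verify for gravity-only flight (launch values below are assumed):

```python
import numpy as np

g = 9.81
vx, vz = 25.0, 20.0            # launch velocity components (assumed)
T_flight = 2 * vz / g          # time aloft for a gravity-only parabola
landing = vx * T_flight        # horizontal range

# Sample the flight, avoiding the endpoints
t = np.linspace(0.05, T_flight - 0.05, 200)
bx = vx * t                    # ball's horizontal position
bz = vz * t - 0.5 * g * t**2   # ball's height

def tan_elevation(viewpoint):
    """tan of the ball's elevation angle as seen from a fixed point."""
    return bz / np.abs(viewpoint - bx)

# From the landing point, tan(theta) rises exactly linearly in time
# (second difference ~ 0); from any other spot it accelerates.
curv_at_landing = np.diff(tan_elevation(landing), n=2)
curv_elsewhere  = np.diff(tan_elevation(landing + 15.0), n=2)
assert np.max(np.abs(curv_at_landing)) < np.max(np.abs(curv_elsewhere))
```

A fielder who continuously nulls that optical acceleration by moving therefore converges on the landing point without ever estimating spin, drag, or range – exactly the kind of robust feedback response one would expect when part of the ball’s state is unobservable.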

The problems faced by goalies and outfielders are in some ways much like those facing managers: what do you do, given imperfect information about a surprisingly nonlinear world?