Reasoning was not designed to pursue the truth

Uh oh:

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found. – Mercier & Sperber via Edge.org, which has a video conversation with coauthor Mercier.

This makes sense to me, but I think it can’t be the whole story. There must be at least a little evolutionary advantage to an ability to predict the consequences of one’s actions. The fact that it appears to be dominated by confirmation bias and other pathologies may be indicative of how much we are social animals, and how long we’ve been that way.

It’s easy to see why this might occur by looking at the modern evolutionary landscape for ideas. There’s immediate punishment for touching a hot stove, but for any complex system, attribution is difficult. The immediate rewards of telling your fellow tribesmen crazy things can easily exceed the delayed and distant rewards of actually being right. In addition, wherever there are stocks of resources lying about, there are strong incentives to succeed by appropriation rather than creation. If you’re really clever with your argumentation, you can even make appropriation resemble creation.

The solution is to use our big brains to raise the bar, by making better use of models and other tools for analysis of and communication about complex systems.

Nothing that you will learn in the course of your studies will be of the slightest possible use to you in after life, save only this, that if you work hard and intelligently you should be able to detect when a man is talking rot, and that, in my view, is the main, if not the sole, purpose of education. – John Alexander Smith, Oxford, 1914

So far, though, models seem to be serving argumentation as much as reasoning. Are we stuck with that?

2 thoughts on “Reasoning was not designed to pursue the truth”

  1. “So far, though, models seem to be serving argumentation as much as reasoning.”

    Yes, that is exactly the problem!

    “Are we stuck with that?”

    I think you’ve hit on exactly where the value of generative approaches lies. Here is my reasoning; please detect my rot!

    1. Most modeling approaches simply take what we think to be true and reify it.
    2. Generative modeling (here defined as fine-grained agents interacting autonomously) allows us to create models with little or no preconception of the outcome, because generative models are capable of producing higher-complexity behavior that we didn’t anticipate.
    3. Generative modeling has a selective advantage over other modeling approaches because it is relatively free of bias.

    1. Hmm …

      I completely agree that there’s value to generative approaches, but I’m not sure that’s enough.

      1. Certainly some models do merely reify assumptions, but that’s actually useful, because those assumptions can then be compared to data and, more importantly, subjected to a variety of quality and robustness checks. In many cases no one actually bothers with the checks, but that’s not the fault of the approach.

      2. I think there’s something to this, but I’m not sure that agents or knowing the outcome in advance are always the key dimensions. For example:
      – Does a linear regression (for example) say much more a priori about causality in a system than an agent model?
      – Isn’t the unanticipated outcome of an aggregate dynamic model (e.g. chaos in the Lorenz system; see the sketch after this list) just as surprising as emergent behavior (like gliders & glider guns) in an agent or spatial model?
      – Linear 2nd order models of climate are pretty useful, even though they’re also quite predictable.

      3. I’m not sure that agent models are any more bias-free than other approaches, except perhaps with some kind of extreme infinite-monkeys-at-infinite-typewriters approach. The advantage perhaps is that, because agent models are harder to build and calibrate in many cases, bending the model to one’s will is tough – but I’m not sure that makes it easier to be right. That difficulty might be an evolutionary disadvantage, because it raises the cost of modeling, while the perceived benefit (among non-modelers) remains low. Witness the failure of agent or even lumped behavioral dynamic models to beat out general equilibrium and input-output in economics, in spite of the obvious flaws of the competition.
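
      To make the Lorenz point above concrete, here’s a minimal sketch, in Python, my own illustration rather than anything from the post or the paper (function names and step sizes are just for this sketch): three aggregate equations with the classic fixed parameters, where a one-part-per-billion nudge to the initial state sends the trajectory somewhere completely different within a few dozen time units.

          # Lorenz system via plain forward Euler; a small step is adequate for a demo.
          # Classic parameters: sigma = 10, rho = 28, beta = 8/3.
          def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
              dx = sigma * (y - x)
              dy = x * (rho - z) - y
              dz = x * y - beta * z
              return x + dx * dt, y + dy * dt, z + dz * dt

          def run(x0, y0, z0, steps=40000):  # roughly 40 time units at dt = 0.001
              x, y, z = x0, y0, z0
              for _ in range(steps):
                  x, y, z = lorenz_step(x, y, z)
              return x, y, z

          print(run(1.0, 1.0, 1.0))         # end state of one trajectory
          print(run(1.0, 1.0, 1.0 + 1e-9))  # initial z nudged by a billionth:
                                            # the two end states bear no resemblance

      A stable linear model given the same treatment would simply shrug off the perturbation, which is the contrast with the 2nd-order climate case above.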

      I guess the real question is, what are the attributes of a modeling approach that has a chance of thriving? Possibly:
      – low cost (cognitive, data hunger)
      – structure appropriate to problems
      – closed loop
      – supports argumentation (easy to tell the story of the insight)
      – includes tracking/evaluation of predictions
      The problem may be that the first item is at odds with the others.
