Cynefin, Complexity and Attribution

This nice article on the human skills needed to deal with complexity reminded me of Cynefin.

Cynefin framework by Edwin Stoop

Generally, I find the framework useful – it’s a nice way of thinking about the nature of a problem domain and therefore how one might engage. (One caution: the meaning of the chaotic domain differs from that in nonlinear dynamics.)

However, I think the framework’s policy prescription in the complex domain falls short of appreciating the full implications of complexity – at least of dynamic complexity as we think of it in system dynamics (SD):

Complex Contexts: The Domain of Emergence

In a complicated context, at least one right answer exists. In a complex context, however, right answers can’t be ferreted out. It’s like the difference between, say, a Ferrari and the Brazilian rainforest. Ferraris are complicated machines, but an expert mechanic can take one apart and reassemble it without changing a thing. The car is static, and the whole is the sum of its parts. The rainforest, on the other hand, is in constant flux—a species becomes extinct, weather patterns change, an agricultural project reroutes a water source—and the whole is far more than the sum of its parts. This is the realm of “unknown unknowns,” and it is the domain to which much of contemporary business has shifted.

Most situations and decisions in organizations are complex because some major change—a bad quarter, a shift in management, a merger or acquisition—introduces unpredictability and flux. In this domain, we can understand why things happen only in retrospect. Instructive patterns, however, can emerge if the leader conducts experiments that are safe to fail. That is why, instead of attempting to impose a course of action, leaders must patiently allow the path forward to reveal itself. They need to probe first, then sense, and then respond.

HBR

Here’s the problem: in a system with delays, feedback and nonlinearity, dynamic complexity separates cause from effect. If things are sufficiently confounded, we can no more understand the system through retrospective pattern matching than we could a priori. I think this is the kind of situation Repenning & Sterman describe in Capability Traps and Self-Confirming Attribution Errors in the Dynamics of Process Improvement:

Our data suggest that the critical determinants of success in efforts to learn and improve are the interactions between managers’ attributions about the cause of poor organizational performance and the physical structure of the workplace, particularly delays between investing in improvement and recognizing the rewards. Building on this observation, we propose a dynamic model capturing the mutual evolution of those attributions, managers’ and workers’ actions, and the production technology. We use the model to show how managers’ beliefs about those who work for them, workers’ beliefs about those who manage them, and the physical structure of the environment can coevolve to yield an organization characterized by conflict, mistrust, and control structures that prevent useful change of any type.

ASQ 47

Here, managers observe performance incrementally and after the fact, as prescribed, but bad outcomes still occur because mental models fail to capture the structure of the system.
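To make the mechanism concrete, here’s a minimal sketch in Python. It’s my own toy, far simpler than Repenning & Sterman’s model, and every parameter value is an illustrative assumption: output depends on capability and work effort, capability grows only from improvement effort that pays off after a delay, and a “work harder” manager cuts improvement time whenever perceived performance falls short of the target.

```python
import numpy as np

# Toy capability trap (illustrative; not the Repenning & Sterman model).
# Output = capability * work effort. Capability erodes unless fed by
# improvement effort, which matures only after a delay.

def simulate(cut_improvement_under_pressure, T=200, delay=10):
    capability = 0.8                # start below the sustainable level
    pipeline = [0.2] * delay        # improvement effort still "in transit"
    perceived = 0.64                # manager's smoothed view of output
    target = 0.80
    perf_hist = []
    for _ in range(T):
        shortfall = max(0.0, target - perceived)
        if cut_improvement_under_pressure:
            # "Work harder": the worse things look, the less time
            # gets allotted to improvement.
            improve = max(0.0, 0.2 - 2.0 * shortfall)
        else:
            improve = 0.2           # protect improvement time
        perf = capability * (1.0 - improve)
        pipeline.append(improve)    # today's effort pays off `delay` later
        capability += 0.1 * pipeline.pop(0) - 0.02 * capability
        perceived += 0.2 * (perf - perceived)  # reporting/perception lag
        perf_hist.append(perf)
    return np.array(perf_hist)

work_harder = simulate(True)
work_smarter = simulate(False)
print("first 20 periods:", work_harder[:20].mean(), "vs", work_smarter[:20].mean())
print("last 50 periods: ", work_harder[-50:].mean(), "vs", work_smarter[-50:].mean())
```

In the early periods the “work harder” manager outperforms the one who protects improvement time, apparently confirming the attribution, while capability quietly erodes; over the longer horizon, output collapses. Retrospective pattern matching over a short window gets the causality exactly backwards.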

To some extent, you can get around this problem via evolutionary learning (that’s why eliminating diversity is deadly sin #2). Inside a company, that might look like “experiments that are safe to fail,” but the experimental design would have to be pretty clever to avoid the attribution confounding above. At the population level, it looks like natural selection, but of course that’s no help if you’d like your particular company to succeed, rather than companies in general. Evolutionary learning is an expensive, noisy solution, which I would hope to avoid whenever another approach will do. It’s also the default you get by doing nothing, but then you have no control over the goal of the selection process. And for our biggest problems, like climate, evolutionary learning is no help at all, because we have only one instance of the system, and one try.
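A toy illustration of that expense and noise (again my own construction, with made-up numbers): give a thousand “firms” random policies, let payoffs be observed only through heavy noise standing in for delays and confounding, and let selection keep the top few.

```python
import random

# Evolutionary learning as noisy selection (illustrative numbers only).
random.seed(1)

def noisy_payoff(policy):
    # True optimum at policy = 0.7; the noise stands in for the delays
    # and confounding that hide cause from effect.
    return 1.0 - (policy - 0.7) ** 2 + random.gauss(0, 0.2)

population = [random.random() for _ in range(1000)]
ranked = sorted(population, key=noisy_payoff, reverse=True)
survivors = ranked[:50]             # selection keeps the top 5%

print("mean surviving policy:", sum(survivors) / len(survivors))
print("failed experiments:   ", len(population) - len(survivors))
```

The surviving policies skew toward the optimum, so the population “learns” – but 95% of the experiments fail, and no individual firm can tell why it lived or died.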

Therefore it is imperative that we invest in the development of formal, operational understanding of system structures, which can then be used to construct and test policies in a virtual environment where catastrophe is cheap. Modeling is not optional. That’s not to say that we can’t use evolutionary learning in service of solutions; that’s exactly what a carbon tax would do, for example, by creating an incentive for lots of changes in markets and rules.
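Here’s a sketch of what “catastrophe is cheap” looks like in a model. The dynamics below are a deliberately trivial toy, not a real climate-economy model, and every number is an assumption; the point is only that sweeping a policy lever across simulated futures costs nothing, while sweeping it across the real world spends the one trial we have.

```python
# Toy policy sweep (illustrative; not a real climate-economy model).

def simulate_emissions(tax, T=50):
    capital_clean = 1.0             # stock of clean capital
    total = 0.0                     # cumulative emissions
    for _ in range(T):
        capital_clean += 0.1 * tax  # tax steadily redirects investment
        total += max(0.0, 10.0 - capital_clean)
    return total

for tax in [0.0, 0.5, 1.0, 2.0]:
    print(f"tax={tax:4.1f}: cumulative emissions = {simulate_emissions(tax):6.1f}")
```

In the model, a ruinous policy choice costs a print statement.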

The problem is not a shortage of climate and energy models – we arguably have sufficient models of climate and the energy system to design practical policies. What we lack are the corresponding process skills, widespread understanding, and models of human institutions needed to bootstrap implementation. So we still have a long way to go (which brings us back to the original article).

h/t @jdevoo
