Strategic Decision-Making: The Decision Itself

The first principle is that you must not fool yourself, and you are the easiest person to fool. – Richard Feynman

Background

In this series on strategic decision-making, I have explored the labyrinth of cognitive biases the human mind confronts us with when we attempt to make decisions. In short, each of us is switching between two parallel minds: an intuitive mind that works well for everyday uses, like opening doors and walking down the street while avoiding collisions with others; and a reflective mind, one that is always monitoring what’s going on, but which is engaged when intuitive reasoning is inadequate, as in the face of uncertainty, or the application of rules or numbers.

The intuitive mind is always creating a narrative — telling us stories — and trying hard to jump to conclusions. It’s good for being startled by a snake in the park, but does a bad job of applying systems of thinking that are not innate, like arithmetic.

It may be that in another million years humans will evolve so that the reflective mind has more control, and the cognitive biases that push us toward shortcuts are quieted, but that day is far off.

In essence, the core of strategic decision-making is to avoid the tendency of the intuitive mind to jump to conclusions through the tangled illogic of biases. The key to actually accomplishing that is to acknowledge, first, that we need to actively seek out bias in decision-making because otherwise, it will creep in.

As outlined in the earlier two parts of the series, Getting Strategic About Decision-Making and A Checklist For Strategic Decision-Making, the approach for serious business decisions generally follows a three-part process: fact-gathering and analysis; the insights and judgment of a defined group of people (stakeholders or advisors); and some process — ranging from very formal to very informal — for that group to make a decision reflecting that analysis and judgment. Two McKinsey researchers conducted a major study which found that the process used to make decisions was more important than the fact-finding and analysis stages, by a factor of six.

We have to rely on processes designed to uncover bias: in effect, we have to create a checklist and follow it.

In A Checklist For Strategic Decision-Making, we walked through the first two stages of a checklist created by those two researchers, Dan Lovallo and Olivier Sibony, with the guidance of Daniel Kahneman, the Nobel laureate who is one of the leading lights of behavioral economics.

The checklist is organized as a series of questions to be asked by those charged with making the final strategic decision.

The first questions are those the decision-makers should direct to themselves to determine if the context for the development of a proposed solution has been debiased. For example, looking into a lack of dissent among those involved in the proposal is critical, because a healthy investigation for a major decision should involve a broad spectrum of perspectives. Finding no dissent is a major warning sign of unexamined premises, or self-interest, or other negatives.

The second set of questions should be directed toward those making the proposal, with the purpose of examining specific well-known biases. Consider the sunk-cost fallacy: rationally, we should disregard past investment when considering additional investment, but people tend to value what they have already invested in more than they should, so companies often overlook the benefits of selling off a product line or a subsidiary instead of continuing to fund it. Kahneman and his colleagues put it this way:

History leads us astray when we evaluate options in reference to a past starting point instead of the future.
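To make that concrete, here is a minimal sketch in Python, with invented figures throughout, of the forward-looking comparison: keep funding a struggling product line, or sell it off. The amount already sunk into the line never enters the arithmetic, because only future cash flows should drive the choice.

```python
# Hypothetical decision: keep funding a struggling product line, or
# sell it. The $40M already spent (the sunk cost) appears nowhere below.

def npv(cash_flows, rate):
    """Net present value of a stream of year-end future cash flows ($M)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

discount_rate = 0.10

# Option A: continue funding (invented projections, $M per year)
continue_npv = npv([-5, -2, 3, 4, 4], discount_rate)

# Option B: sell the product line today for $8M (received now, undiscounted)
sell_npv = 8.0

print(f"Continue funding: NPV = {continue_npv:4.1f}M")  # about 1.3M
print(f"Sell today:       NPV = {sell_npv:4.1f}M")      # 8.0M
```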

Evaluating the Proposal Itself

Perhaps it comes as a letdown that there are only three major questions left to ask in the third phase. But that is a consequence of the degree of deliberation that has already been undertaken. Also, in many real-world situations, it’s possible that none of the preceding nine major questions have been asked explicitly, since many decisions — even obviously strategic ones — are only evaluated informally.

Is The Base Case Overly Optimistic? People — especially those who have invested time and energy in developing a proposal — have a natural leaning toward optimism. Those making the final decision need to adopt what Kahneman and his colleagues call an ‘outside view’. ‘Inside view’ thinking focuses on the specifics of the proposal at hand and may downplay the history of similar projects. We don’t want to be tied to the past, or constrained by it, but we need to learn from it.

Forecasting is an area where overconfidence is common, especially in teams that have been successful. Outside-view forecasting should be statistical, not narrative-based, and should avoid being driven by aspirations. ‘Desire’, as Roger Martin pointed out, ‘is not a strategy’. For example, estimates of the level of effort for a project are better when anchored in the track record of similar past projects than when built bottom-up by adding step-by-step estimates of effort.
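As a rough illustration of that outside-view adjustment, here is a minimal Python sketch with invented figures: instead of trusting the bottom-up sum, scale it by the overruns observed on a reference class of comparable past projects.

```python
import statistics

# Invented history: ratio of actual effort to estimated effort on
# eight comparable past projects (the reference class).
overrun_ratios = [1.4, 1.1, 2.0, 1.3, 1.6, 1.2, 1.8, 1.5]

bottom_up_estimate = 120  # person-days, from summing task-level estimates

median_ratio = statistics.median(overrun_ratios)
p90_ratio = sorted(overrun_ratios)[int(0.9 * len(overrun_ratios))]  # rough 90th pctile

print(f"Bottom-up (inside view):  {bottom_up_estimate} person-days")
print(f"Outside-view median:      {bottom_up_estimate * median_ratio:.0f} person-days")
print(f"Outside-view 90th pctile: {bottom_up_estimate * p90_ratio:.0f} person-days")
```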

Another aspect of the tension between outside and inside views is the degree to which the responses of competitors are evaluated. Most business decisions are more like chess than solitaire. Kahneman and company propose using ‘war games’ to encourage a systematic analysis of how competitors might respond to a proposed course of action.
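A stripped-down way to make competitor responses explicit is a payoff table plus a best-response lookup; this is only a toy sketch, in Python with invented payoffs, of what a fuller war game formalizes. The rosy ‘we cut price, they stand still’ outcome disappears once the competitor’s best reply is modeled:

```python
# Toy 'war game' for a proposed price cut. All payoffs ($M profit)
# are invented. Keys: (our_move, their_move) -> (our_profit, their_profit).
payoffs = {
    ("cut",  "cut"):  (4, 4),
    ("cut",  "hold"): (9, 2),
    ("hold", "cut"):  (2, 9),
    ("hold", "hold"): (6, 6),
}

def best_response(our_move):
    """The competitor's profit-maximizing reply to our move."""
    return max(("cut", "hold"), key=lambda their: payoffs[(our_move, their)][1])

for our_move in ("cut", "hold"):
    reply = best_response(our_move)
    ours, theirs = payoffs[(our_move, reply)]
    print(f"If we {our_move}, they {reply}: we earn {ours}M, they earn {theirs}M")
```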

Is The Worst Case Bad Enough? Even when the proposal includes a set of scenarios — a best, worst, and middle case — the range is often too narrow. As Kahneman and Co. state,

Unfortunately, the worst case is rarely bad enough.

This is like a law of the universe, and decision-makers should drill into the thinking behind the worst case, and widen the field of deliberation to ask, at a minimum, ‘what could happen that we have not thought of?’ I believe that the decision-making team — for truly major decisions — should designate a so-called ‘tenth man’, who should create a scenario where everything that might go wrong does go wrong, and hold that up against the proposed course of action as a sanity check.

The ‘tenth man’ is related to the ‘pre-mortem’ approach advocated by Gary Klein that I wrote about in Constructive Uncertainty, except that in a pre-mortem the group as a whole plays the ‘tenth man’:

A post-mortem is a technique for analyzing what happened after a failed project, such as a surgical operation in which the patient died (from which the term has been borrowed). Klein's insight is to have a post-mortem in advance of the project — hence pre-mortem — where the project members cast their thinking ahead to the project's hypothetical failure, and determine what went wrong and how to learn from it… before the project even starts.
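Why is the worst case rarely bad enough? One reason is that hand-built worst cases usually imagine a single thing going wrong, while several risks can fire at once. A minimal Monte Carlo sketch in Python, with invented probabilities and costs, makes the gap visible:

```python
import random

random.seed(42)  # reproducible illustration

def simulate_project_value():
    """One draw of project value ($M): the base value minus whichever
    independent risks happen to fire in this run (all figures invented)."""
    value = 50.0  # value if nothing goes wrong
    risks = [     # (probability, cost in $M)
        (0.3, 15),  # key supplier slips
        (0.2, 20),  # competitor undercuts on price
        (0.1, 30),  # regulatory delay
        (0.2, 10),  # key staff leave
    ]
    for probability, cost in risks:
        if random.random() < probability:
            value -= cost
    return value

outcomes = sorted(simulate_project_value() for _ in range(10_000))

planned_worst_case = 20.0  # value if only the single biggest risk fires
print(f"Planned worst case:   {planned_worst_case:.0f}M")
print(f"Simulated 5th pctile: {outcomes[500]:.0f}M")
print(f"Worst simulated draw: {outcomes[0]:.0f}M")
```

In this toy run, the 5th-percentile outcome lands well below the planner’s single-risk ‘worst case’, and the worst draws are outright losses.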

Is The Recommending Team Overly Cautious? Despite a strong bias toward action in general, the proposers might fall into a different trap: the plan may not be as creative, ambitious, or transformative as executives might desire. Here we return to loss aversion, as touched on in the first part of this series:

Operating unit managers are focused on short-term timeframes, and therefore tend to take on only small risks, instead of larger ones that may contribute more to long-term corporate growth. They are more concerned with the possibility of a decrease in status if things go wrong: the potential loss for the manager may be perceived as worse than any potential upside for the company and the manager.

The ‘cure’ for loss aversion is to derisk the decision for those proposing it. Those evaluating the proposal — upon finding an aspect of the course of action where higher-risk alternatives were ruled out — could designate the riskier version of the proposal as officially sanctioned risk, so that those involved are seen as setting audacious, potentially ‘big bang’ goals rather than courting personal failure. But the natural conservatism of corporate culture makes uprooting loss aversion difficult.


Quality Control For Decisions

In essence, the checklist approach outlined here acts as quality control for decision-making, analogous to quality control in manufacturing. Just as in manufacturing, the degree of effort expended on quality control should be commensurate with the value or importance of the goods being made: quality control of toothpicks is likely to get less funding than that of pacemakers or parachutes.

As a result, companies need a preliminary triage step to determine when the full checklist should be applied to a potentially strategic decision. This preceding step does not itself have to take on the full anti-bias checklist; it is probably sufficient to determine whether a proposed decision could have a catastrophic impact on the business if things go pear-shaped.

Jeff Bezos breaks decisions into two categories: decisions that can be reversed he calls ‘two-way doors’, like a door that swings both ways. Decisions that cannot be reversed — one-way doors — can potentially sink the product, project, or company. It’s those one-way decisions that warrant the application of the checklist explored in this series.

Cognitive biases are like a shadow thrown by human ingenuity. To attain the inventiveness and curiosity latent in the mind, we must accept the inherent duality of our sensemaking: the intuitive and reflective minds run in parallel, but not fully in sync. As a result, we must adapt by creating a work culture that does not pretend we are fully rational beings: one that accepts the shadow along with the light. Companies are therefore well-served to educate everyone in the business about biases, and about the techniques — like the checklist — that can counter them.

Perhaps the single most important takeaway is that recognizing and parrying bias requires both active inquiry and a culture that supports dissent. The processes laid out in the decision-making checklist are most effective when participants are encouraged to speak openly about how research and analysis were conducted, what alternatives were considered, how estimates and scenarios were generated, and what disagreements arose and how they were resolved. These activities require openness and good communication, and care must be taken to avoid personalizing the disagreements that will naturally arise.

Developing a culture in which such discussions are common — and individuals feel safe to ask such questions and answer them — is likely to be hard work, but the result is incredibly valuable, while the alternatives are potentially disastrous.
