Why smart managers do stupid things

Large firms hire some of the smartest, most capable people. Yet, when those people become managers, they do stupid things. For example, they cling to concepts like “pay for performance” and “talent management” without any critical thinking - without questioning whether the systems and processes in place actually work.

When it comes to management, there’s a clear absence of intellectual rigor. And I always wondered: “Why?”

Thinking, Fast and Slow

Recently, I came across an answer that’s both encouraging and dispiriting. It’s because of the way our brains work.

In the brilliant book “Thinking, Fast and Slow”, Nobel Prize winner Daniel Kahneman describes the brain as having two systems:

“System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.

System 2 allocates attention to the effortful mental activities that demand it, including complex computations.”

If you see two objects, for example, you’ll use System 1 to detect which one is further away. But to calculate the distance, you’ll use System 2. While System 1 is easier for us - largely automatic - System 2 can come up with a better answer.

But using System 2 requires effort and attention. And attention is a scarce resource. So we systematically take shortcuts to make sense of the world around us.

Kahneman’s book is full of incredible insights. Chapter after chapter, you’ll face uncomfortable truths about how you think, all grounded in excellent research and careful studies, yet written in a way that keeps it all accessible and engaging.

Of the many fascinating stories in the book, two struck me as particularly relevant to management practices at most firms.

The Illusion of Validity: Evaluating Leaders

Throughout the book, Kahneman refers to the WYSIATI principle - What You See Is All There Is. Over and over, he demonstrates how people systematically disregard basic probability and other facts in order to (quickly and easily) make up a story that fits with the things they see.

“The sense-making machinery of System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is.”

Kahneman recounts an experience from his time in the Israeli Army, where he participated in formal reviews to determine soldiers’ leadership abilities, complete with a numerical rating, detailed observations, and a recommendation for each soldier.

Months later, they learned how well the candidates had actually performed in officer-training school and, time after time, the correlation between the leadership evaluations and actual performance was negligible.

But that didn’t change anything.

“The dismal truth about the quality of our predictions had no effect whatsoever on how we evaluated candidates and very little effect on the confidence we felt in our judgments and predictions about individuals.

We knew as a general fact that our predictions were little better than random guesses, but we continued to feel and act as if each of our specific predictions was valid...Because our impressions of how well each soldier had performed were generally coherent and clear, our formal predictions were just as definite.”

It was a case of WYSIATI. The evidence in front of them (impressions of the soldiers) outweighed their knowledge about their flawed process, and gave them an illusion of validity.

Every firm with a “talent management” process, rigorously evaluating employees’ potential, is subject to that same illusion.

The Illusion of Validity: Rewarding Results

Decades after his experience in the Israeli Army, Kahneman was invited to speak to a group of investment advisers.

Before the talk, he analyzed the investment outcomes of the advisers over eight consecutive years, ranking them within each year. Then he computed the correlations between those year-to-year rankings to see whether there were persistent differences in skill. The results “resembled what you would expect from a dice-rolling contest.”
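The signature of pure luck is easy to demonstrate. Here’s a minimal simulation sketch - not Kahneman’s data or code; the adviser count and the normally distributed outcomes are my own assumptions - showing that when yearly outcomes are chance alone, rankings simply don’t persist from year to year:

```python
# A minimal sketch (not Kahneman's actual data or code): simulate advisers whose
# yearly outcomes are pure luck, then check whether their rankings persist
# across years the way genuine skill would.
import numpy as np

rng = np.random.default_rng(0)
n_advisers, n_years = 25, 8

# Each adviser's outcome in each year is an independent draw - no skill at all.
outcomes = rng.normal(size=(n_years, n_advisers))

# Rank the advisers within each year (0 = worst, n_advisers - 1 = best).
ranks = outcomes.argsort(axis=1).argsort(axis=1)

# Correlate the rankings for every pair of years: 8 years -> 28 pairs.
pair_correlations = [
    np.corrcoef(ranks[i], ranks[j])[0, 1]
    for i in range(n_years)
    for j in range(i + 1, n_years)
]

print(f"mean correlation across {len(pair_correlations)} year-pairs: "
      f"{np.mean(pair_correlations):+.3f}")  # hovers near zero
```

If genuine skill were in the mix, those same year-pair correlations would sit well above zero. Hovering around zero is the dice-rolling signature.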

“Our message to the executives was that, at least when it came to building portfolios, the firm was rewarding luck as if it were skill...But their own experience of exercising careful judgment on complex problems was far more compelling to them than an obscure statistical fact.”

Unfortunately, every performance review and every distribution of performance rankings suffers from these same biases. And yet we persist.

After his talk, Kahneman spoke with one of the investment advisers:

"He told me, with a trace of defensiveness: 'I have done very well for the firm and no one can take that away from me.'"

What successful manager would want to admit that their success is largely due to chance? That the system that promoted them was fundamentally flawed?

The Red Beads

But we’ve known all this for decades - even without the additional proof in Kahneman's work.

W. Edwards Deming wrote about the inherent errors in most evaluations of performance in the early 1980s, in “Out of the Crisis”:

“A common fallacy is the supposition that it is possible to rate people; to put them in rank order of performance...The performance of anybody is the result of many forces - the person himself, the people that he works with, the job...his customer, his management...

These forces will produce unbelievably large differences between people. In fact, as we shall see, apparent differences between people arise almost entirely from action of the system that they work in, not from the people themselves. A man not promoted is unable to understand why his performance is lower than someone else’s. No wonder; his rating was the result of a lottery.”

Deming then introduced the Red Bead experiment to prove his point. In that experiment, workers with the same skills, instructions, and tools scoop beads from a bin of mixed red and white beads using a special paddle, with instructions to produce only white beads (red beads are defects). Since the paddle picks up beads indiscriminately, the workers produce wildly different results through chance alone.

The process is “in control”: the variation comes from the system itself, so results like these are exactly what you’d expect.
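A few lines of simulation make the same point. This is an illustrative sketch, not Deming’s materials: the 50-bead paddle and 20% red fraction mirror his usual setup, and the worker names beyond Terry and Jack are invented:

```python
# A minimal sketch of the Red Bead experiment's statistics (parameters are
# illustrative): identical workers, identical process, yet the red-bead
# counts differ by chance alone.
import numpy as np

rng = np.random.default_rng(1)
paddle_size = 50          # beads scooped per draw
red_fraction = 0.20       # share of red beads in the bin
workers = ["Terry", "Jack", "Pat", "Chris", "Sam", "Lee"]

# Every worker follows the same procedure; the only "cause" of the
# differences below is sampling variation.
red_counts = rng.binomial(paddle_size, red_fraction, size=len(workers))

for name, count in zip(workers, red_counts):
    print(f"{name}: {count} red beads")
# The counts differ noticeably from worker to worker, and none of that
# variation says anything about the workers themselves.
```

Any ranking of these “workers” is a ranking of random draws.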

“It would obviously be a waste of time to try to find out why Terry made 15 red beads, or why Jack made only 4. What is worse, anybody that would seek a cause would come up with an answer, action on which could only make things worse henceforth.”

“Anybody that would seek a cause would come up with an answer.” Make up a story to fit the facts. Kahneman couldn’t agree more.

What to do?

So our brains’ ability to make sense of the world comes with some devastating trade-offs: biases and systematic errors. But if we’re wired to make substitutions, to overlook simple statistics, to make up causes and coherent stories where there’s only luck, then is there anything we can do?

Yes.

We can recognize that “the laziness of System 2 is an important fact of life.” And that “System 1 is radically insensitive to both the quality and the quantity of information that gives rise to impressions and intuitions.”

Knowing our natural tendencies and biases, we can ask more questions. We can insist on more evidence.

“The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2.”

We can think more. We can refuse to accept management practices that don’t work just because that’s the way things are.

We can do better.