
The Failure of Risk Management


Why It’s Broken and How to Fix It

Introduction

Narrator: On a July day in 1989, United Airlines Flight 232 was cruising at 37,000 feet when a catastrophic failure in its tail-mounted engine sent shrapnel flying, severing the lines to all three of its redundant hydraulic systems. The pilots lost all conventional flight controls. Through sheer ingenuity, they learned to steer the crippled jet by varying the thrust of the two remaining wing engines, guiding it toward an emergency landing in Sioux City, Iowa. While 185 people miraculously survived, 111 did not. The investigation revealed a "common mode failure"—a single event that defeated multiple layers of protection. This tragedy highlights a terrifying question: what happens when the very systems designed to protect us become the single point of failure?

In his book, The Failure of Risk Management: Why It’s Broken and How to Fix It, author Douglas W. Hubbard argues that for most organizations, the ultimate common mode failure is their risk management process itself. He contends that the popular methods used to assess and mitigate risk are not just ineffective; they are often worse than useless, creating a dangerous illusion of security that is no better than astrology.

The Illusion of Control: Why Popular Risk Methods Are Like Astrology

Key Insight 1

Narrator: Hubbard launches a direct assault on the most common risk assessment tools used in business today, particularly the ubiquitous risk matrix. These are the color-coded charts where managers plot the likelihood and impact of a risk on a simple scale, often from 1 to 5, resulting in a "heat map" of red, yellow, and green squares. While they appear structured and official, Hubbard argues they are fundamentally unscientific and dangerously misleading.

He illustrates this with a story from a consulting workshop. A manager had labeled a particular project risk as "very likely." When asked what that meant, he replied, "about a 20% chance." A colleague, surprised, noted that the company guidelines defined "very likely" as over 80%. The manager's defense was that because the impact was so high, a 20% chance felt "too likely" for comfort. In that moment, the entire room realized that despite countless meetings and detailed charts, they had been speaking completely different languages.

This is the core problem: these scoring methods create an "illusion of communication." They suffer from what Hubbard calls range compression, where a single number like '3' can represent a vast and undefined set of possibilities. They presume regular intervals, wrongly assuming the difference between a '2' and a '3' is the same as between a '4' and a '5'. Dr. Tony Cox, a risk analysis expert cited by Hubbard, has researched these methods extensively and concluded they are often "worse than useless" because they can actually lead to poorer decisions than no method at all.
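
To make range compression concrete, here is a minimal sketch in Python using hypothetical score thresholds (not Hubbard's, and not any particular company's): two risks whose likelihoods and expected losses differ enormously can land on exactly the same square of the matrix.

```python
# Range compression: a hypothetical 1-to-5 likelihood scale with illustrative cut-offs.
def likelihood_score(p):
    """Map a probability to a 1-5 'likelihood' score (thresholds are made up)."""
    if p < 0.01:
        return 1
    if p < 0.05:
        return 2
    if p < 0.25:
        return 3
    if p < 0.60:
        return 4
    return 5

risk_a = {"name": "Risk A", "probability": 0.06, "impact_usd": 50_000_000}
risk_b = {"name": "Risk B", "probability": 0.24, "impact_usd": 2_000_000}

for r in (risk_a, risk_b):
    score = likelihood_score(r["probability"])
    expected_loss = r["probability"] * r["impact_usd"]
    print(f"{r['name']}: likelihood score = {score}, expected loss = ${expected_loss:,.0f}")

# Both risks score a 3, yet Risk A's expected loss ($3,000,000) is more than
# six times Risk B's ($480,000). The single score hides that difference.
```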

The Expert Problem: We Don't Know What We Think We Know

Key Insight 2

Narrator: With risk matrices discredited, many organizations fall back on the judgment of their experts. Yet Hubbard argues this is another weak foundation, citing decades of psychological research from figures like Daniel Kahneman and Amos Tversky. Humans, experts included, are riddled with cognitive biases that distort their perception of risk, and we are systematically overconfident.

The most chilling example of this is the 1986 Space Shuttle Challenger disaster. The Rogers Commission, which investigated the explosion, included Nobel Prize-winning physicist Richard Feynman. He discovered a shocking disconnect in risk perception. NASA management had estimated the probability of a catastrophic failure at 1 in 100,000. The engineers working on the shuttle, however, estimated the odds were closer to 1 in 100. This wasn't a simple disagreement; it was a chasm between the optimistic faith of management and the grim reality of the engineers. Management's overconfidence, their belief in their own machinery, directly contributed to the decision to launch in unsafe conditions. This wasn't an isolated incident; the 2003 Columbia disaster was linked to a similar culture where near-misses, like foam striking the shuttle on previous flights, were normalized instead of being treated as dire warnings.

The Tower of Babel: Deconstructing the Conflicting Languages of Risk

Key Insight 3

Narrator: Part of the reason risk management is so broken is that its practitioners don't even agree on what "risk" means. Hubbard points out that the field is an "ivory tower of Babel," where different disciplines use the word in conflicting ways.

Economist Frank Knight famously defined risk as a measurable uncertainty (like the odds in a dice game) and "uncertainty" as an unmeasurable one. While influential, this definition is not how most people or fields operate. In finance, risk is often equated with volatility. But as Hubbard illustrates with a simple game, this is an oversimplification. If you could pay $100 to roll a die and win an amount between $100 and $600, the outcome is volatile, but there is no risk of loss. Risk requires the possibility of an undesirable outcome. Meanwhile, the Project Management Institute (PMI) defines risk as an uncertain event that can have a positive or negative effect, lumping opportunities in with threats. Hubbard argues this needlessly complicates the term, as "uncertainty" already covers all possible outcomes. This lack of a shared, precise vocabulary makes a unified, effective approach to risk management nearly impossible.
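
To put numbers on Hubbard's distinction between volatility and risk, here is a minimal sketch of the die game in Python, assuming the payout is $100 times the face rolled (an illustrative reading of the example): the net result swings widely from trial to trial, yet it is never negative.

```python
import random
import statistics

random.seed(0)

ENTRY_FEE = 100
# Assumed payout: $100 times the die face, so the gross prize runs from $100 to $600.
nets = [100 * random.randint(1, 6) - ENTRY_FEE for _ in range(100_000)]

print(f"mean net outcome:    ${statistics.mean(nets):,.2f}")
print(f"standard deviation:  ${statistics.pstdev(nets):,.2f}")  # plenty of volatility
print(f"probability of loss: {sum(n < 0 for n in nets) / len(nets):.0%}")  # 0%: no downside at all
```

High volatility with zero probability of loss is exactly the case a volatility-only definition of risk misclassifies.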

The Quant's Paradox: Where Sophisticated Models Go Wrong

Key Insight 4

Narrator: Even when organizations embrace quantitative methods, they often apply them incorrectly. Hubbard identifies two critical errors: the "Risk Paradox" and the "Measurement Inversion."

The Risk Paradox is the tendency for the most sophisticated risk analysis to be applied to the least important risks. Hubbard tells of a manager at Boise Cascade who used complex Monte Carlo simulations to analyze risks in paper production operations. When asked if these advanced methods were used for the company's major IT projects, the manager said no—even though he admitted the IT projects were far riskier. The sophisticated tools were used where the risks were small and understood, while the biggest, most uncertain risks were managed with intuition.

This is compounded by the Measurement Inversion: organizations measure what is easy to measure, not what is most valuable to measure. They focus on things with abundant data while ignoring the huge, strategically critical uncertainties that are harder to quantify. The result is a systematic focus on the least valuable measurements at the expense of the most valuable ones.

The Scientific Fix: Speaking the Language of Probability

Key Insight 5

Narrator: Hubbard’s solution is to tear down the tower of Babel and replace it with the language of science. The first step is to adopt calibrated probabilities and Monte Carlo simulations. Calibration is the process of training experts to assess odds accurately. Hubbard explains it with a simple betting exercise: if an expert provides a 90% confidence interval for a given question, would they rather bet on their range being correct or on drawing a winning marble from a bag with 9 out of 10 winning marbles? Most people, when uncalibrated, prefer the marble bag, revealing their own lack of true confidence in their "90% certain" estimate. Through training, experts can become calibrated, meaning their subjective probability statements align with reality.
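
One way to make calibration measurable after the fact is to score an expert's stated 90% intervals against the answers once they are known. The sketch below uses made-up numbers purely for illustration; a well-calibrated expert's 90% intervals should contain the true value roughly nine times out of ten.

```python
# Hypothetical calibration check: each tuple is (lower bound, upper bound, true value)
# for a question the expert answered with a 90% confidence interval.
answers = [
    (200, 400, 350),
    (10, 25, 31),          # true value falls outside the stated range
    (1.0, 3.5, 2.2),
    (5_000, 9_000, 7_400),
    (40, 80, 90),          # outside again, a hint of overconfidence
]

hits = sum(lo <= true_value <= hi for lo, hi, true_value in answers)
hit_rate = hits / len(answers)

print(f"intervals containing the true value: {hit_rate:.0%} (target for 90% intervals: about 90%)")
# A hit rate far below 90% across many questions suggests overconfidence
# and a need for calibration training.
```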

Once experts are calibrated, their estimates can be fed into a Monte Carlo model. This is a computer simulation that runs a scenario thousands of times, using the probabilistic ranges for each variable, to produce not a single-point answer, but a distribution of possible outcomes. It allows decision-makers to see the full spectrum of possibilities and understand the real probability of success or failure.
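
Here is a minimal sketch of that mechanism in Python, with invented variables and ranges standing in for calibrated estimates; it is not Hubbard's model, only the mechanics of running a scenario many times and reading off a distribution instead of a single number.

```python
import random
import statistics

random.seed(42)

def one_trial():
    """One scenario: draw each uncertain input from a range a calibrated expert might give (values are illustrative)."""
    revenue_gain = random.normalvariate(2_000_000, 600_000)      # 90% interval roughly $1M to $3M
    project_cost = random.uniform(800_000, 1_600_000)            # flat range between the stated bounds
    delay_months = random.triangular(0, 12, 3)                   # most likely about 3 months late
    return revenue_gain - project_cost - delay_months * 50_000   # assumed $50k penalty per month of delay

outcomes = [one_trial() for _ in range(10_000)]
ranked = sorted(outcomes)

print(f"median net benefit:  ${statistics.median(outcomes):,.0f}")
print(f"5th-95th percentile: ${ranked[500]:,.0f} to ${ranked[9_500]:,.0f}")
print(f"probability of loss: {sum(o < 0 for o in outcomes) / len(outcomes):.1%}")
```

The output a decision-maker reads is the distribution itself: the chance of loss, the spread, and the tail outcomes, not a single best guess.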

Building a Calibrated Culture: From Models to Organizational DNA

Key Insight 6

Narrator: Adopting better tools is only half the battle. The final step is to embed this scientific approach into the organization's culture. This requires breaking down silos and creating a central risk management function, perhaps led by a Chief Risk Officer, who oversees a "Global Probability Model" for the entire firm.

Crucially, the organization must incentivize a calibrated culture. Managers and experts should know that their predictions will be tracked against reality. Good forecasting should be rewarded. Hubbard also champions techniques like the "premortem," developed by psychologist Gary Klein. Instead of asking what might go wrong with a project, a premortem assumes the project has already failed spectacularly and asks the team to write down all the plausible reasons why it failed. This simple shift in perspective frees people to identify threats they might otherwise have kept to themselves, turning risk identification from a bureaucratic exercise into a powerful, creative process.

Conclusion

Narrator: The single most important takeaway from The Failure of Risk Management is that effective risk management is not about creating reassuring charts or trusting gut feelings; it is about the rigorous measurement of uncertainty. Douglas Hubbard's work is a clarion call to abandon the pseudoscience of scoring methods and replace it with the empirical, probabilistic approach of a scientist.

The book's ultimate challenge is a shift in mindset. It forces every leader to look at their own organization and ask a difficult question: Is our risk management process a genuine shield, built on evidence and objective measurement? Or is it merely a frosted lens, obscuring the true dangers we face and giving us a false sense of security as we walk, unknowingly, toward the edge of a cliff?
