
Corporate Horoscopes
Why It’s Broken and How to Fix It
14 min
Golden Hook & Introduction
Olivia: Jackson, you know that color-coded risk chart that hangs in basically every corporate office? The one with all the green, yellow, and red boxes that supposedly maps out all potential disasters?
Jackson: Oh, I know it well. The 'Risk Matrix.' It's the crown jewel of every project kickoff meeting. It makes everyone feel very serious and responsible.
Olivia: Well, what if I told you that chart might be doing more harm than good? That, in fact, it could be as scientifically valid as your daily horoscope.
Jackson: Whoa, shots fired at every project manager and consultant listening right now! That’s a bold claim. Are you saying my carefully plotted 'high impact, low probability' risk is just corporate astrology?
Olivia: That's precisely the argument made in the book we're diving into today: The Failure of Risk Management: Why It’s Broken and How to Fix It by Douglas W. Hubbard. And he doesn't make that claim lightly. Hubbard is a fascinating figure—he’s a management consultant, but also an expert in decision sciences and the inventor of a method called Applied Information Economics. His entire career is built on the belief that you can, and should, measure anything, especially the things we label 'intangible.'
Jackson: I like that. So he's not just a critic, he's a builder. He’s coming in with a new system. Okay, I'm intrigued. If our trusty red-yellow-green charts are just horoscopes, what’s the real problem? Where does it all go so wrong?
The Illusion of Control: Why Most Risk Management is 'Worse Than Useless'
Olivia: It goes wrong at the most fundamental level. Hubbard argues that our entire approach to thinking about risk is flawed. To understand why, he uses this incredibly powerful and, frankly, chilling story. It’s about United Airlines Flight 232 back in 1989.
Jackson: I think I’ve heard of this one. Wasn't that the crash in Iowa?
Olivia: That's the one. The DC-10 was flying from Denver to Chicago when the fan disk in its tail-mounted engine just disintegrated. It was a catastrophic failure. Shrapnel from the explosion flew out and severed the lines for all three of the plane's hydraulic systems.
Jackson: Hold on, all three? I thought the whole point of having three systems was for redundancy. If one fails, the others take over.
Olivia: Exactly. That's the design. But this was what engineers call a "common mode failure." A single event—the engine explosion—took out all the backups simultaneously. The pilots had no way to control the plane's flaps, ailerons, or rudder. They had, for all intents and purposes, a 250-ton glider. The fact that they managed to crash-land it in Sioux City and that 185 of the 296 people on board survived is considered one of the most incredible feats of airmanship in history.
Jackson: That's terrifying, and just an unbelievable story of heroism. But what does it have to do with a risk matrix in a business meeting?
Olivia: Hubbard's brilliant insight is this: the ultimate common mode failure in any organization is a failure of risk management itself. If your system for identifying and preparing for risks is broken, it doesn't matter how many 'redundant' safety checks or backup plans you have. A single, flawed way of thinking can blind you to every real threat, making all your preparations useless.
Jackson: Okay, that makes sense. Your one method for seeing risk is the single point of failure. So how did we end up with these flawed methods? Surely they didn't just appear out of nowhere.
Olivia: Well, Hubbard describes what he calls the "Four Horsemen" of risk management—four different fields that developed their own ways of handling risk, often in isolation. You have the Actuaries in insurance, who are very mathematical. You have the "War Quants," who developed things like game theory during World War II. You have the Economists with their financial models. And then you have the fourth horseman: the Management Consultants.
Jackson: Ah, the consultants. I have a feeling I know where this is going.
Olivia: They're the ones who brought us the simple, intuitive, and incredibly popular scoring methods, like the risk matrix. They are easy to explain in a workshop, they look official in a PowerPoint deck, and they give managers a feeling of control. The problem is, they are riddled with what Hubbard calls fundamental errors.
Jackson: What kind of errors?
Olivia: One of the biggest is "range compression." A 5-point scale, from 'very low' to 'very high' risk, seems simple. But what does 'high' even mean? Is a 20% chance of failure 'high'? Or is a 5% chance of a company-ending catastrophe 'high'? The book shows data where different managers in the same room have wildly different numbers in their heads for the same word. One manager's 'likely' is another's 'unlikely.' So you're not actually communicating; you're creating an illusion of agreement.
Jackson: It's a language problem. We're all using the same words but speaking different dialects.
Olivia: Precisely. And it gets worse. These methods assume the difference between a '2' and a '3' on the scale is the same as between a '4' and a '5'. But in reality, the jump from a risk that costs you a million dollars to one that costs you a hundred million isn't a simple, linear step. The scales are arbitrary.
Jackson: But wait, isn't doing something better than doing nothing? At least it gets people in a room talking about potential problems. It forces a conversation that might not happen otherwise.
Olivia: That's the most dangerous misconception of all. Hubbard is adamant on this point, quoting research that shows these methods can be "worse than useless." He uses a great analogy: it's like looking at a flawed bridge design through a frosted lens. The lens makes all the sharp, scary cracks look blurry and soft. It doesn't fix the cracks; it just makes you less aware of how bad they are. You feel reassured, but you're actually in more danger because you've been lulled into a false sense of security.
Jackson: Wow. So the very tool meant to make us safer is actually making us blind. That's a heavy thought. It's not just ineffective, it's actively counterproductive.
Olivia: Exactly. It replaces genuine, uncomfortable uncertainty with a comforting, but ultimately meaningless, illusion of control. And that illusion is what allows the real disasters to happen.
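The range-compression critique can be made concrete with a small sketch. The two risks and every number below are invented for illustration, not taken from the book: a 5x5 matrix multiplies ordinal 1-5 likelihood and impact scores, while expected loss multiplies a real probability by a real dollar figure, and the two methods can rank the same risks in opposite orders.

```python
# Hypothetical illustration (risks and numbers invented, not the book's
# data): a frequent-but-cheap risk vs a rare-but-catastrophic one.
risks = {
    # name: (annual probability, loss in dollars, matrix likelihood 1-5, matrix impact 1-5)
    "server outage": (0.40, 1_000_000, 4, 2),
    "data breach":   (0.05, 100_000_000, 1, 5),
}

# The matrix compresses everything into small-integer products...
matrix_score = {name: like * imp for name, (_, _, like, imp) in risks.items()}
# ...while expected loss keeps the real magnitudes.
expected_loss = {name: p * loss for name, (p, loss, _, _) in risks.items()}

# The matrix ranks the outage (4*2 = 8) above the breach (1*5 = 5),
# but the breach's expected loss ($5M) dwarfs the outage's ($400k).
print(matrix_score, expected_loss)
```

The ordinal scale has no room to express that one impact is a hundred times the other, which is exactly the arbitrariness Olivia describes.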
The Calibration Cure: How to Actually Measure the 'Unmeasurable'
Jackson: Okay, I'm sold. My risk matrix is officially going in the bin. But this leaves a huge void. If we can't trust our experts' gut feelings and we can't trust these simple charts, what are we supposed to do? How do you possibly measure something as fuzzy as 'project success' or 'cybersecurity risk'? It feels unmeasurable.
Olivia: This is where Hubbard's work gets really exciting and, honestly, empowering. He argues that the feeling of 'unmeasurability' is just an illusion caused by a lack of a specific skill. And that skill is called calibration.
Jackson: Calibration. That sounds like something you do to a TV screen. What does it mean for a human brain?
Olivia: It's about training yourself to be better at assessing probabilities. Most of us are wildly overconfident. If you ask a group of people to provide answers they are "90% certain" are correct, studies show they are typically right only about 60-70% of the time. We don't intuitively understand what 90% confidence feels like.
Jackson: I can definitely relate to that. My "90% sure I'll be on time" is probably closer to 50% on a good day.
Olivia: (laughs) Exactly. So Hubbard uses a simple but brilliant thought experiment to make this real. Imagine I ask you for a 90% confidence interval for, say, the year the telephone was invented. You give me a range, maybe 1870 to 1890. Now, I offer you a bet. Option A: You win $1,000 if the correct answer is inside your range. Option B: You can pull one marble from a bag containing 9 white marbles and 1 black one. If you pull a white marble, you win $1,000. Which do you choose?
Jackson: Oh, that's easy. I'm taking the marbles. No question.
Olivia: And almost everyone says that! By choosing the marbles, you've just admitted that you believe the 90% chance from the bag is better than the "90% confidence" of your own estimate. Your brain knows you're overconfident, even if you don't consciously admit it. Calibration is the process of training, through feedback, to get your internal sense of probability to match reality, so you'd be indifferent between your estimate and the bag of marbles.
Jackson: That's a fantastic way to put it. It makes the abstract idea of overconfidence feel completely tangible. So, once you have these calibrated experts who can give you more realistic probabilities, what do you do with them?
Olivia: You use them as inputs for a much more powerful tool: a Monte Carlo simulation.
Jackson: Okay, now that definitely sounds like something from a James Bond movie. Is it super complex?
Olivia: The name is intimidating, but the concept is surprisingly simple. It was developed by physicists on the Manhattan Project who needed to model incredibly complex systems. A Monte Carlo simulation is basically a way of playing out the future thousands, or even millions, of times on a computer. Instead of putting one number into your model—like 'sales will be $10 million'—you put in a calibrated range—'sales will be between $8 million and $15 million, with 90% confidence.' The simulation then randomly picks a value from that range, and from all the other uncertain ranges in your model, and calculates the outcome. It does this over and over again.
Jackson: So it's not trying to predict one single future, it's mapping out all the possible futures.
Olivia: Exactly! And when it's done, you don't get a single number. You get a curve showing you the probability of every possible outcome. You can see, for instance, that there's a 10% chance you'll lose money, a 60% chance you'll make at least your target profit, and a 5% chance you'll hit a grand slam. It gives you a true picture of your uncertainty.
Jackson: That sounds incredibly useful. Why isn't everyone doing this?
Olivia: This brings us to another of Hubbard's key ideas: the "Measurement Inversion." He argues that in most organizations, there's an inverse relationship between what we measure and what's most valuable to measure. We spend enormous effort measuring things that are easy to count—like server uptime or office supply costs—because we have the data.
Jackson: Right, because they're tangible and we can put them in a spreadsheet.
Olivia: But we completely ignore the huge, uncertain, high-value questions—like 'What is the risk of a catastrophic brand reputation failure?' or 'What is the value of improved employee morale?'—because they feel 'unmeasurable.' We measure what we can, not what we should. The Measurement Inversion means we focus our resources on the least valuable information.
Jackson: So you're saying we spend all our time measuring the easy, low-impact stuff, and ignore the big, scary, important things because they feel fuzzy.
Olivia: And Hubbard's whole point is that those 'fuzzy' things are not only measurable through calibration and simulation, but that measuring them provides the biggest bang for your buck. The very act of starting to measure something highly uncertain reduces that uncertainty dramatically. You don't need perfect data to start making vastly better decisions.
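The calibration test Olivia describes can be sketched as a scoring function. The trivia questions and intervals below are invented for illustration; the point is simply that a calibrated estimator's 90% ranges should contain the true answer about 90% of the time, and an overconfident one scores far lower.

```python
# Sketch of a calibration check (invented example, not Hubbard's exact test).
def hit_rate(intervals, truths):
    """Fraction of true values that fall inside the stated (low, high) ranges."""
    hits = sum(low <= t <= high for (low, high), t in zip(intervals, truths))
    return hits / len(truths)

# Three "90% confident" ranges an overconfident estimator might give,
# scored against the answers: telephone patented 1876, first powered
# flight 1903, Moon landing 1969.
intervals = [(1870, 1890), (1900, 1920), (1950, 1960)]
truths = [1876, 1903, 1969]
print(hit_rate(intervals, truths))  # 2 of 3 inside: about 0.67, not 0.90
```

Run over enough questions with feedback after each round, this kind of score is what lets someone train their stated 90% toward a real 90%.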
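The Monte Carlo simulation itself fits in a few lines. The $8M-$15M sales range is the episode's example; the cost range and the choice of a normal distribution are invented for this sketch (a normal's 90% interval spans about 3.29 standard deviations, which is how the ranges below are converted to a sigma).

```python
import random

def draw(low, high):
    """Sample from a normal whose 90% confidence interval is (low, high).
    Assumes normality for simplicity; other shapes work the same way."""
    mid = (low + high) / 2
    sigma = (high - low) / 3.29
    return random.gauss(mid, sigma)

random.seed(0)  # reproducible runs
trials = 100_000
profits = []
for _ in range(trials):
    sales = draw(8e6, 15e6)   # calibrated range from the episode: $8M-$15M
    costs = draw(7e6, 11e6)   # invented calibrated range: $7M-$11M
    profits.append(sales - costs)

# Instead of a single forecast, you get a distribution of outcomes:
loss_prob = sum(p < 0 for p in profits) / trials
print(f"Chance of losing money: {loss_prob:.0%}")
```

Sorting `profits` and reading off percentiles gives exactly the kind of curve Olivia describes: the chance of losing money, of hitting the target, or of a grand slam.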
Synthesis & Takeaways
Jackson: You know, as we've been talking, the big takeaway for me isn't just that we're using the wrong tools, like a risk matrix. It feels deeper than that. It’s like we have a fundamentally flawed relationship with uncertainty itself. We crave simple, definite answers—a red box, a single number—when what we should be doing is embracing and quantifying the full, messy range of what we don't know.
Olivia: That's the heart of it. We've been taught to see uncertainty as a failure of knowledge, something to be eliminated. Hubbard reframes it. Uncertainty isn't the problem; it's the reality. The problem is our refusal to describe it honestly. The book is really a call to change our language, to move from the false certainty of 'high risk' to the honest uncertainty of 'a 15% chance of a loss greater than $5 million.' That shift in language forces a shift in thinking.
Jackson: It's about being intellectually honest about what we don't know. And the book's message feels more relevant than ever; after all, the 2008 financial crisis was a massive failure of risk management.
Olivia: Absolutely. The book was born from that crisis, but its lessons apply everywhere, from engineering disasters to public health crises to launching a new product. And Hubbard's challenge to us is simple. The next time you or someone on your team says a risk is 'unmeasurable,' pause and ask: 'What do you mean by measure?' Because as he shows, the definition of measurement is simply 'a reduction in uncertainty based on observation.' Even a small observation, a tiny bit of data, can start to reduce that uncertainty. That first step is often the most valuable one you can take.
Jackson: That's a powerful thought to leave our listeners with. So, maybe a good question for everyone listening is: what's the one 'unmeasurable' risk in your life or your work that you've been avoiding? And what's one small observation you could make, one tiny piece of data you could gather, to start reducing that uncertainty, even just a little?
Olivia: I love that. It’s a perfect first step out of the world of astrology and into the world of real measurement.
Jackson: Thanks, Olivia. This has been eye-opening. I'll never look at a color-coded chart the same way again.
Olivia: This is Aibrary, signing off.