
The Many-Model Mind
Golden Hook & Introduction
Christopher: Most of what you've been told about expertise is wrong. The smartest people in the room aren't the ones with the single best answer. They're the ones with the most different answers. And that subtle shift is the key to avoiding catastrophe.

Lucas: Okay, that's a bold claim. Avoiding catastrophe? Where are you getting this from? That sounds like a pretty high-stakes game for just having a few extra ideas.

Christopher: It's the central idea in a fantastic, though admittedly dense, book called The Model Thinker by Scott E. Page. And Page is the perfect person to write this—he's a professor at the University of Michigan, and his background spans math, economics, and complexity science. He's an expert in seeing how different fields connect, which is exactly what this book is about.

Lucas: Huh. I like that he comes from different fields. But when you say "dense," my brain immediately pictures a 500-page textbook I'm supposed to read on the beach. Is this something a normal human can actually use?

Christopher: Absolutely. That’s the beauty of it. You don't need to be a mathematician. You just need to be curious about why we get things so wrong, so often. And to see why having many models is crucial, we have to look at a time when having only one led to absolute disaster.
The Danger of a Single Story: Why One Model Is Never Enough
Lucas: I have a feeling I know where this is going. Are we talking about the big one?

Christopher: We are. The 2008 financial crisis. For years leading up to it, the vast majority of economists and financial experts were using a specific set of models to understand the economy. These models were elegant, they were mathematically sound, and they were all built on a few core assumptions.

Lucas: Let me guess: that people are rational and markets are efficient?

Christopher: Exactly. The models assumed that if housing prices got too high, rational people would stop buying, and the market would correct itself. They assumed that risk was being efficiently distributed through these new, complex financial products like mortgage-backed securities. The dashboard looked clean. The numbers looked good.

Lucas: But the engine was on fire.

Christopher: The engine was melting down. These models, by their very design, had a massive blind spot. They couldn't "see" the possibility of a housing bubble, or what would happen if millions of subprime mortgages all failed at once. They couldn't account for the panic and irrationality that would spread through the system. They were all looking at the same beautifully polished, but deeply flawed, map of reality.

Lucas: Hold on. These are the highest-paid, supposedly brilliant minds in finance. Are you saying they were all just using the same bad map? How is that possible?

Christopher: That's the core of Page's argument. It wasn't necessarily a "bad" map. It was just one map. And any single map, by definition, has to leave things out. A map that showed every single tree and rock would be as big as the territory itself, and completely useless. The problem wasn't the model; it was the monoculture of the model. Everyone was so confident in that one way of seeing the world that they couldn't imagine another.

Lucas: It’s like a doctor only using a thermometer to diagnose every patient. They'd get fevers right, but they'd miss a broken bone, a heart attack, everything else. They'd be an expert in one thing and dangerously ignorant about the rest.

Christopher: That's a perfect analogy. Page argues that wisdom comes from having a whole toolkit. You need the thermometer, but you also need the X-ray machine, the stethoscope, and the blood test. Each model reveals a different aspect of the truth. The economists of 2007 had a world-class thermometer, but the patient was bleeding out from a wound they had no tool to see.

Lucas: Wow. Okay, so one model is dangerous. I get it. It creates blind spots. But what does having many models actually let you do? Is it just about avoiding mistakes, or is there a more proactive side to this?
The Swiss Army Knife for Your Brain: How to Use Models to REDCAPE the World
Christopher: That's the next crucial step. It's not just defensive. Page gives us this great acronym, REDCAPE, to show the superpowers that a many-model toolkit gives you. It stands for Reason, Explain, Design, Communicate, Act, Predict, and Explore.

Lucas: REDCAPE. It sounds a little like a corporate retreat buzzword, Christopher. You're going to have to sell me on this.

Christopher: I will, with two stories that are almost unbelievable. Let's start with 'Predict'. In the mid-19th century, astronomers were puzzled. The orbit of Uranus wasn't quite right. It wobbled. It didn't follow the path that Newton's laws of gravity predicted.

Lucas: So the model was wrong? Newton was wrong?

Christopher: That was one possibility! But a French mathematician named Urbain Le Verrier had a different idea. He thought, what if the model isn't wrong? What if there's something else out there, something we can't see, that's pulling on Uranus?

Lucas: An unknown planet.

Christopher: Exactly. So he used the Newtonian model not to describe what he saw, but to predict what he couldn't see. He sat down with pen and paper and calculated the mass and location of a hypothetical eighth planet that would account for Uranus's wobble. He sent his prediction to the Berlin Observatory.

Lucas: And? Don't leave me hanging.

Christopher: On September 23, 1846, the very first night they looked, astronomers pointed their telescope to the spot Le Verrier had calculated. And there it was. They discovered Neptune.

Lucas: That's insane. He found a planet with a pen and paper. He used a model to predict something into existence.

Christopher: He used a model to see the invisible. Now, let's fast forward 160 years. Air France Flight 447 crashes in the middle of the Atlantic Ocean in 2009. The search area is vast, thousands of square miles of deep, turbulent water. They find some floating debris, but the fuselage and the black boxes are lost.

Lucas: A needle in a haystack. An impossible task.
Christopher: Seemed like it. The initial searches failed. But then a team of analysts did something similar to Le Verrier. They didn't just look for the plane. They built a model. They created a set of probabilistic models, factoring in ocean currents, wind patterns, the last known location, and the physics of a crash. They ran thousands of simulations to create a probability map of where the wreckage was most likely to be.

Lucas: So they weren't looking for the plane, they were looking for the area with the highest probability score.

Christopher: Precisely. The model pointed to a small, specific rectangular region. They sent a search team there, and within a week, they found it. The wreckage, the black boxes, everything. Two miles beneath the surface.

Lucas: Wow. And we're still using that same fundamental idea today, just with supercomputers, to find a needle in a haystack at the bottom of the ocean. From Neptune to a lost airliner.

Christopher: That's the power of having the right model. In one case, it was Newtonian physics. In the other, it was Bayesian probability. Different problems, different models. That's the REDCAPE framework in action. It's a mental Swiss Army knife.
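The "thousands of simulations into a probability map" idea can be sketched in a few lines. This is a toy Monte Carlo drift model, not the actual Bayesian analysis used in the AF447 search; the last known position, current, and noise levels below are invented purely for illustration.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the sketch is reproducible

# Toy drift model: repeatedly simulate where a "wreck" ends up when pushed
# from its last known position by an uncertain current, then histogram the
# outcomes into grid cells to get a probability map.
LAST_KNOWN = (0.0, 0.0)   # last known position (arbitrary units)
CURRENT = (0.3, -0.1)     # assumed mean drift per hour
HOURS = 10
N = 10_000                # number of simulated drift paths

counts = Counter()
for _ in range(N):
    x, y = LAST_KNOWN
    for _ in range(HOURS):
        x += CURRENT[0] + random.gauss(0, 0.2)  # drift plus uncertainty
        y += CURRENT[1] + random.gauss(0, 0.2)
    counts[(round(x), round(y))] += 1           # bin into a 1x1 grid cell

# The highest-probability cell is where you send the search team first.
cell, hits = counts.most_common(1)[0]
print(f"search cell {cell} first ({hits} of {N} simulated paths end there)")
```

With these made-up numbers the mode of the map sits around the mean drift, ten hours downstream of the last known position; the real analysis layered in wind, crash physics, and the failed earlier searches, but the shape of the reasoning is the same.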
The One-to-Many Principle: Finding the Universal in the Specific
Christopher: And here's where it gets really elegant. After all this talk about needing many models, Page shows us the power of using one model in many different, creative ways.

Lucas: Okay, now you're confusing me. First you say one model is bad, now you're saying one model is good?

Christopher: It's about the flexibility of the modeler, not just the model. Page calls it 'one-to-many' thinking. Let me pose a puzzle. What could possibly connect the profitability of a supertanker, the flaws in the Body Mass Index, and the reason there are so few female CEOs?

Lucas: Okay, that sounds like a riddle from a sphinx. A ship, a health metric, and a corporate diversity problem. I have no idea. Lay it on me.

Christopher: The answer is a single, incredibly simple mathematical principle. It's the relationship between an object's surface area and its volume, or more generally, the formula X to the power of N.

Lucas: X to the N. I remember that from high school math, but I have no idea how it connects to any of those things.

Christopher: Let's break it down. First, supertankers. After World War II, a shipping magnate realized that the cost of building a tanker is mostly based on its surface area—how much steel you need. But its revenue is based on its volume—how much oil it can carry. As you make a ship bigger, its volume (which grows as the cube of its size) increases much, much faster than its surface area (which grows as the square). So bigger ships become disproportionately more profitable. A simple model, X^3 versus X^2, unlocked billions.

Lucas: Okay, that makes sense. More carrying capacity for not that much more steel. What about BMI?

Christopher: The BMI formula is your weight divided by your height squared. But your weight is a rough proxy for your volume, which is three-dimensional, while height squared is only two-dimensional. The model is flawed because it compares a 3D property to a 2D one. It's why tall, muscular athletes like LeBron James are often classified as 'overweight'. Their muscle mass, their volume, grows faster than their height squared. The model's math is just wrong for their body type.

Lucas: I see! The model's dimensions are mismatched. So what about the CEO pipeline? That can't be about volume and surface area.

Christopher: Here, the X^N model is about probability. Let's say that to become a CEO you need to get promoted six times. That's our 'N'. Now, let's imagine there's a tiny, almost imperceptible bias at each step. Let's say men have a 50% chance of promotion, but women, due to this slight bias, have a 48% chance. Just a two-percentage-point difference. That's our 'X'.

Lucas: That seems like a really small difference. It shouldn't matter that much, right?

Christopher: That's what our intuition says. But the model shows us the brutal truth of compounding. After one promotion, the gap is small. But after six promotions—0.48 to the power of 6 versus 0.50 to the power of 6—women reach the top at only about 78% of the rate men do. The model shows how a tiny bias at each step results in a large disparity at the top, because the small disadvantage gets multiplied by itself over and over again.

Lucas: Wow. So it's the same logic, the same simple math, just applied to completely different worlds. A ship, a body, a career. That's the real 'model thinking,' isn't it? Not just knowing the models, but knowing how to see them in the wild.
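All three readings of X to the power of N can be checked with a few lines of arithmetic. The numbers are the illustrative ones from the discussion (and a made-up density constant for the BMI toy), not real tanker, body, or workforce data.

```python
# One simple model, x**n, read three ways (illustrative numbers only).

# 1) Supertankers: steel cost tracks surface area (scale**2), oil revenue
#    tracks volume (scale**3), so revenue per unit of steel grows linearly
#    with the ship's linear size.
def revenue_per_steel(scale):
    volume = scale ** 3       # revenue proxy, grows cubed
    surface = scale ** 2      # cost proxy, grows squared
    return volume / surface   # = scale: doubling the size doubles it

# 2) BMI: weight is roughly volume (height**3 at a fixed body shape), but
#    BMI divides by height**2, so a taller person of identical build scores
#    higher. The density constant is invented to make the output BMI-like.
def toy_bmi(height_m, density=13.0):
    weight = density * height_m ** 3   # weight scales with volume
    return weight / height_m ** 2      # = density * height_m

# 3) Promotion pipeline: a 2-point bias per step, compounded over 6 steps.
p_men, p_women, steps = 0.50, 0.48, 6
ratio = (p_women ** steps) / (p_men ** steps)   # = 0.96**6, about 0.78

print(revenue_per_steel(2) / revenue_per_steel(1))   # bigger ship wins
print(toy_bmi(2.06), toy_bmi(1.70))                  # same build, taller scores higher
print(f"women reach the top at about {ratio:.0%} of the male rate")
```

The point of the sketch is that none of the three computations is more than one line of math; what changes is the interpretation of x and n.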
Synthesis & Takeaways
Christopher: Exactly. The book isn't just a list of models. It's an argument for a new kind of literacy. In a complex world, wisdom isn't about having the one right answer. It's about having a latticework of mental models, as the investor Charlie Munger would say, to see the problem from multiple angles. Each model is a different lens, and when you layer them, you start to see the world in three dimensions.

Lucas: That’s a great way to put it. So the takeaway for our listeners isn't to go memorize 50 equations from a textbook. It's to start asking a simple question when faced with a problem: 'What's another way to look at this? What's a different model I could use?' Even if it's a simple one.

Christopher: It's about cultivating intellectual humility. It's admitting that your favorite model, your default way of seeing things, is probably incomplete. The 2008 crisis happened because of intellectual arrogance, a belief in one true model. A model thinker is humble enough to be a pluralist.

Lucas: I love that. It’s not about being the smartest person in the room, but the most flexible thinker. You’re building a mental toolkit, not a single golden hammer.

Christopher: And that leads us to the final, reflective question for everyone listening. What's one problem in your life right now—at work, in a relationship, with a personal goal—that you've only been looking at from a single angle?

Lucas: And what would happen if you tried to see it through a different lens? What new solutions might appear?

Christopher: This is Aibrary, signing off.