
The Glitch in Your Brain
Golden Hook & Introduction
Michelle: Okay, Mark, quick challenge for you. I want you to guess the answer to a math problem. In your head, just a rough estimate. Ready?

Mark: Ready. Lay it on me.

Michelle: What’s 1 times 2 times 3 times 4 times 5 times 6 times 7 times 8?

Mark: Uh… okay, let’s see. 1 times 2 is 2, times 3 is 6, 24, 120… it gets big fast. I don’t know, maybe… 500? 600?

Michelle: Okay, hold that thought. Now, new problem. What’s 8 times 7 times 6 times 5 times 4 times 3 times 2 times 1?

Mark: Huh. Well, it’s the same numbers. But starting with 8 times 7 is 56… that feels bigger right away. My gut says the answer is much larger. Maybe a few thousand? Let’s say… 2,500.

Michelle: Fascinating. Your first guess was around 500, your second was 2,500. The correct answer is 40,320.

Mark: Whoa. Not even close. But why was my second guess so much higher than my first, for the exact same problem?

Michelle: That fascinating glitch in our thinking is exactly what we're exploring today through Michael Lewis's incredible book, The Undoing Project.

Mark: Ah, Michael Lewis, the guy who wrote Moneyball and The Big Short. He's a master at making super complex stuff feel like a thriller.

Michelle: Exactly. And what's amazing is that this book is basically the psychological prequel to Moneyball. Lewis himself admitted he didn't fully grasp why the market inefficiencies he wrote about existed until he discovered the work of two Israeli psychologists, Daniel Kahneman and Amos Tversky. Their work didn't just win a Nobel Prize; it fundamentally changed how we understand the human mind.

Mark: So this is the origin story for why we're all so bad at guessing math problems… and probably a lot more.

Michelle: A lot more. It’s the story of why experts make disastrously wrong predictions, why we trust our gut even when it’s leading us off a cliff, and how the friendship between two geniuses uncovered the invisible rules that govern our thoughts.
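Mark's two guesses illustrate anchoring: the first few partial products set the anchor, even though reordering the factors can never change the result. A two-line sanity check in Python, purely for illustration:

```python
from math import prod

# The same eight factors, in the two orders Michelle read them out.
ascending = prod(range(1, 9))       # 1 × 2 × ... × 8
descending = prod(range(8, 0, -1))  # 8 × 7 × ... × 1

print(ascending, descending)  # 40320 40320 — order never changes a product
```

Both orderings are simply 8! = 40,320; only the human estimates differed, not the arithmetic.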
The Glitch in the Matrix: Why Even Experts Get It Wrong
Mark: Okay, so connect the dots for me. What does a math glitch have to do with baseball or the stock market?

Michelle: It has everything to do with it. Lewis opens the book not with the psychologists, but with the real-world consequences of these mental glitches. He revisits the world of Moneyball, where data-driven strategies were taking over sports. But he noticed a strange pattern. Teams would adopt these data-heavy approaches, have success, and then, after a few bad seasons, they’d panic and run right back to old-school gut instinct.

Mark: Wait, so you're telling me a team like the Boston Red Sox, who broke an 86-year curse and won multiple World Series using data, would just ditch a winning formula because of a few bad seasons? That sounds crazy.

Michelle: It is! Their owner, John Henry, literally said, "We have perhaps overly relied on numbers." They went back to relying on the instincts of baseball "experts." And this is the core problem the book tackles: our minds are fundamentally uncomfortable with probability and statistics. We crave certainty and simple stories, and we trust the confident expert in front of us far more than the cold, hard data on a spreadsheet.

Mark: That feels so true. We want a human story, not a printout.

Michelle: Precisely. And no one embodies this struggle better than Daryl Morey, the longtime general manager of the Houston Rockets basketball team. Morey is a data disciple. He built a sophisticated statistical model to predict which college players would succeed in the NBA. He famously said, "Your mind needs to be in a constant state of defense against all this crap that is trying to mislead you."

Mark: The "crap" being… our own eyes and ears?

Michelle: Exactly. Our own intuition. Morey knew that watching a charming, athletic player in an interview could create a "halo effect"—where one positive trait makes you assume everything else about them is positive. He knew his scouts, and even he himself, could be fooled.
Mark: So what happened? Did the model work?

Michelle: It was incredibly successful, but the book highlights the moments of failure, because that's where the psychology gets interesting. In the 2008 draft, his model screamed that he should pick a player named Joey Dorsey. The model saw his college stats—rebounds, blocks—and flagged him as a can't-miss prospect. At the same time, the model completely dismissed another player, DeAndre Jordan, as unworthy.

Mark: And let me guess what happened…

Michelle: You got it. The Rockets drafted Dorsey, who was a complete bust in the NBA. Meanwhile, DeAndre Jordan, whom they passed on, became an All-Star, one of the most dominant centers in the league. The model made a huge mistake.

Mark: So the gut-instinct scouts were right that time!

Michelle: Well, this is the twist. Morey didn't abandon the model. He dug in to figure out why it failed. He realized the model hadn't properly weighted the age of the players or the quality of their college competition. He adjusted the algorithm. He learned from the error. But the more profound error he discovered wasn't in the machine, but in the humans.

Michelle: He tells the story of Jeremy Lin. In 2010, the model was screaming at the Rockets to draft Lin. His stats were off the charts. But the human experts in the room, the scouts, just saw a "not terribly athletic Asian kid" from Harvard. Their brains couldn't compute it. It didn't fit their stereotype of an NBA player.

Mark: So the experts looked at a genuinely athletic Asian kid and their brains just short-circuited? That's wild. They let their bias override the data.

Michelle: They chickened out. They didn't draft him. A year later, during the "Linsanity" craze, they went back and measured the speed of players' first two steps. They discovered Jeremy Lin had the quickest first move of any player they had ever measured. The data was right all along. The human eye, clouded by prejudice, was wrong. Morey later said, "I can’t think of any reason for it other than he was Asian."

Mark: Wow. That's a powerful, and pretty damning, example of a mental glitch. It’s not just about getting a guess wrong, it’s about our brains actively filtering reality to fit our expectations.

Michelle: And that is the perfect bridge to the two men who spent their lives mapping out exactly those kinds of mental filters. The men who explained the "why" behind the Daryl Moreys and the Jeremy Lins of the world.
The Odd Couple Who Rewrote the Rules of the Mind
Michelle: The men who first mapped out these mental glitches had one of the most fascinating partnerships in scientific history. They were complete opposites, working together at Hebrew University in Jerusalem.

Mark: An intellectual odd couple. I love it.

Michelle: On one side, you have Daniel "Danny" Kahneman. Lewis paints him as a man defined by doubt. He was a Holocaust survivor who, as a young Jewish boy in Nazi-occupied Paris, had a surreal encounter that shaped his entire life.

Mark: Oh, I remember reading about this. It's an incredible story.

Michelle: He was out past curfew, wearing his mandatory yellow Star of David, when he saw an SS soldier approaching. He was terrified. But the soldier, instead of arresting him, beckoned him over, hugged him, showed him a picture of his own son, and gave him some money before sending him on his way.

Mark: That's unbelievable. To find that flicker of humanity inside a monster…

Michelle: It left Kahneman with a lifelong conviction. As he put it, "people were endlessly complicated and interesting." He was always looking for the contradictions, the nuance, the uncertainty. A former student described him as being like "Woody Allen, without the humor"—a brilliant, anxious, self-doubting intellectual.

Mark: Okay, so that's one half of the equation. Who was his partner?

Michelle: His partner was Amos Tversky. And if Kahneman was doubt, Tversky was certainty. He was everything Kahneman wasn't: a charismatic, supremely confident Israeli paratrooper, a war hero, a man who seemed to excel at everything effortlessly. Everyone who knew him described him as the most intelligent person they had ever met. He was the life of every party, a brilliant storyteller, and utterly fearless.

Mark: You've got Woody Allen without the humor and the most interesting man in the world. How did they even start working together? Their personalities sound like they would repel each other.

Michelle: It started with an argument, of course. In 1969, Amos gave a guest lecture in Danny's seminar. He was presenting the mainstream view in psychology at the time, which was that people are generally rational. The theory was that humans are "conservative Bayesians."

Mark: Hold on, 'conservative Bayesians'? What on earth does that mean in plain English?

Michelle: It's a fancy way of saying that people are intuitive statisticians. That when we get new information, we update our beliefs in a logical, rational way, just maybe a bit more slowly or cautiously than a perfect computer would. Think of the poker chip experiment from the book: you have two bags, one mostly red chips, one mostly white. You start drawing red chips. Your brain logically, if conservatively, increases the odds that you're holding the mostly-red bag.

Mark: That seems reasonable. Our brains adjust to new evidence.

Michelle: Amos thought so. But Danny, sitting in the audience, thought it was the stupidest thing he'd ever heard. He stood up and argued passionately that people are not intuitive statisticians. He knew from his own experience that people, including trained scientists, leap to massive conclusions from tiny amounts of data. He called this the "Belief in the Law of Small Numbers."

Mark: The Law of Small Numbers? As opposed to the Law of Large Numbers?

Michelle: Exactly. The Law of Large Numbers is a real statistical principle: with a big enough sample size, results will converge on the expected average. A coin flipped a million times will be very close to 50/50 heads and tails. But the "Law of Small Numbers" is a fiction that lives in our heads. It's the mistaken belief that a small sample should also perfectly reflect the average.

Michelle: They used a great example. They asked people to consider two hospitals. A large one where 45 babies are born each day, and a small one where 15 are born. Over a year, which hospital would record more days where 60% or more of the babies born were boys?

Mark: My gut says it would be about the same for both. The odds are 50/50 for a boy, so it should even out.

Michelle: And that's what most people say! But it's completely wrong. The smaller hospital is far more likely to have those extreme days. A small sample is much more likely to deviate from the average. A day with 9 or more boys out of 15 is much more likely than a day with 27 or more out of 45, even though both thresholds are 60%. We intuitively ignore the power of sample size.

Mark: That's a fantastic example. It feels so counter-intuitive. But I have to ask, some critics, like the psychologist Gerd Gigerenzer, have argued that these kinds of experiments are a bit misleading. Is it possible they were just tricking people with clever wording rather than revealing a true cognitive flaw?

Michelle: That's a really important point, and it's a legitimate debate within the field. Gigerenzer's argument is that the human mind is not built for abstract probabilities, but for real-world frequencies, and that some of these questions exploit conversational norms. However, what makes Kahneman and Tversky's work so powerful is the sheer volume and variety of the biases they uncovered. It wasn't just one trick. It was a whole system of predictable errors.

Michelle: After that initial argument, Amos, instead of getting defensive, was fascinated. He saw that Danny had a point. He saw a flaw in his own thinking. And that was the beginning of their collaboration. They would spend hours locked in a room, just the two of them, laughing and arguing and crafting these brilliant experiments to map the "glitches" in the human mind. They described it as "sharing a mind."
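The hospital claim can be checked directly. Here is a minimal sketch (plain Python, assuming each birth is an independent 50/50 event) comparing the chance of a day where at least 60% of births are boys — 9 or more of 15 versus 27 or more of 45, the numbers used in the episode:

```python
from math import comb

def p_at_least(n: int, k: int, p: float = 0.5) -> float:
    """Probability of at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A "boy-heavy" day: at least 60% of the day's births are boys.
small = p_at_least(15, 9)    # small hospital: 9 or more boys of 15
large = p_at_least(45, 27)   # large hospital: 27 or more boys of 45

print(f"{small:.2f} vs {large:.2f}")  # 0.30 vs 0.12
```

The small hospital sees these extreme days roughly two and a half times as often — exactly the effect of sample size that the "Law of Small Numbers" tricks us into ignoring.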
Synthesis & Takeaways
Mark: It’s an incredible story. You have the real-world chaos of bad decisions in sports and markets, and then you have these two brilliant minds in a quiet room in Jerusalem, figuring out the universal code behind that chaos. So what's the big takeaway here? Are we all just hopelessly irrational?

Michelle: I think that's the common misinterpretation of their work. But it's not about being hopelessly irrational. It's about recognizing that our minds evolved to use shortcuts—what they called heuristics—that are incredibly useful for surviving on the savanna but have predictable blind spots in our complex, modern world.

Mark: So they're features, not bugs?

Michelle: In a way, yes! The "availability heuristic," for example—judging something as more likely if it comes to mind easily—is why we're more afraid of a shark attack, which is incredibly rare but vivid, than we are of texting while driving, which is common and deadly. The shortcut is useful for quick threat assessment, but it's terrible at statistical risk analysis. Kahneman and Tversky gave us the map to those blind spots.

Mark: So their work isn't a story of human stupidity; it's a story of human complexity. It’s an owner’s manual for a brain that didn’t come with one.

Michelle: That's a perfect way to put it. It's about understanding the machine. The book is ultimately a tribute to that idea, and to the friendship that made the discovery possible. When Amos Tversky was dying of cancer, he told Kahneman, "We were a good team." It's a story about how collaboration and challenging each other's thinking can lead to something truly revolutionary.

Mark: It makes you rethink every major decision you've ever made. I'm already looking back at job offers I've taken or apartments I've rented and seeing all the biases at play.

Michelle: The real question their work leaves us with is: now that we know about these glitches in our thinking, what do we do about it? It forces a certain kind of humility.
Mark: It really does. We'd love to hear from our listeners. What's a decision you look back on and now see a cognitive bias at play? Was it the halo effect, anchoring, or just the law of small numbers? Let us know on our social channels. It’s a conversation worth having.

Michelle: This is Aibrary, signing off.