
Your Gut is a Liar
Why Thinking-by-Numbers Is the New Way to Be Smart
Golden Hook & Introduction
Joe: Your gut feeling is probably wrong. In fact, a simple equation scribbled on a napkin might be smarter than a lifetime of expert intuition. Today, we're exploring why thinking-by-numbers is the new way to be smart.

Lewis: Hold on, you're telling me my gut is a liar? I've built my entire life on gut feelings, Joe. From ordering takeout to my questionable fashion choices in the early 2000s.

Joe: (Laughs) Well, your fashion sense might be beyond saving, but for almost everything else, there's a compelling argument that data just does it better. That's the provocative idea at the heart of Super Crunchers by Ian Ayres.

Lewis: Right, and Ayres is a Yale law professor and economist, not some Silicon Valley tech bro. What's wild is that he wrote this back in 2007, basically predicting the 'big data' explosion before most of us even knew what it was. He was spotting the trend while the rest of us were still trying to figure out Facebook.

Joe: Exactly. He saw this quiet revolution taking shape. And it didn't start with giant computers or social media. It started with something far more... refined. It started with wine.
The Uprising of the Algorithm: When Numbers Outsmart Experts
Lewis: Wine? You mean, like, fancy, expensive wine that people sniff and swirl and talk about in hushed tones? That seems like the last place a spreadsheet would be welcome.

Joe: Precisely. For decades, the world of fine wine, especially French Bordeaux, was ruled by the palates of a few god-like critics. The most famous was Robert Parker. His ratings could make or break a vineyard. His expertise, his intuition, was everything.

Lewis: I can picture him. A man who can taste the difference between a grape that grew on the sunny side of the hill and one from the shady side.

Joe: That's the image. Then along comes Orley Ashenfelter, a Princeton economist. He loves wine, but he's an economist. He thinks in numbers. And he has a simple, almost insulting idea: "Wine is an agricultural product dramatically affected by the weather."

Lewis: Okay, that sounds like a no-brainer. Good weather, good grapes.

Joe: It is a no-brainer! But no one in the wine world was using it to predict quality before the wine was even made. Ashenfelter created a simple formula. He looked at just a few variables: the amount of winter rainfall, the average temperature during the growing season, and the rainfall during the harvest. That's it.

Lewis: So a weather report beat a master's palate? How is that even possible?

Joe: He started publishing his predictions in a newsletter. In the late '80s, the experts, including Parker, were raving about the 1986 vintage. But Ashenfelter's weather data told him the '86 was just okay. His model pointed to the 1989 vintage. He declared it would be "the wine of the century." The wine establishment laughed at him. Robert Parker famously quipped, "I'd hate to be invited to his house to drink wine."

Lewis: Ouch. That's the academic equivalent of a diss track. So who was right?

Joe: Ashenfelter. The 1989 Bordeaux vintage turned out to be legendary. Then his model predicted 1990 would be even better. He was right again. Years later, auction prices proved it. The '89 and '90 vintages were selling for double or triple the price of the '86. His simple equation had humiliated decades of refined, intuitive expertise.

Lewis: That is incredible. It's the ultimate "I told you so." This sounds a lot like the 'Moneyball' story, right? Finding value where the so-called 'experts' refuse to look.

Joe: It's the exact same pattern. While baseball scouts were judging players on their looks, their confidence, or the "sound of the ball off the bat," analysts like Bill James were looking at the numbers. They found that boring stats like on-base percentage were far better predictors of success than a scout's gut feeling. The Oakland A's, led by Billy Beane, used this insight to build a winning team on a shoestring budget.

Lewis: They were drafting players the experts thought were flawed... too short, too fat, weird throwing motion...

Joe: Exactly. Billy Beane's mantra was, "We're not selling jeans." He meant he didn't care what a player looked like; he only cared what the data said about his ability to get on base and score runs. In both wine and baseball, the Super Crunchers proved that a good algorithm could see things the human eye, no matter how experienced, simply missed.
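[Editor's note: Ashenfelter's approach is easy to sketch in code. This is an illustrative toy, not his actual equation; the coefficient values and the weather numbers below are invented, though the signs match the relationships described above (warm growing seasons and wet winters help, rain at harvest hurts).]

```python
# Toy version of a weather-based vintage-quality model.
# The real model was fit by regression on decades of Bordeaux
# auction prices; these coefficients are made up for illustration.

def predict_quality(winter_rain_mm, growing_temp_c, harvest_rain_mm):
    """Return a relative quality score for a vintage."""
    return (
        0.001 * winter_rain_mm    # wet winter: mildly positive
        + 0.60 * growing_temp_c   # warm growing season: strongly positive
        - 0.004 * harvest_rain_mm # rain at harvest: negative
    )

# A warm year with a dry harvest outscores a cool year with a wet one.
great_year = predict_quality(winter_rain_mm=600, growing_temp_c=18.0, harvest_rain_mm=50)
poor_year = predict_quality(winter_rain_mm=450, growing_temp_c=15.5, harvest_rain_mm=180)
print(great_year > poor_year)  # True
```

The point of the sketch is the one Ashenfelter made: once the weather is known, the prediction exists before a single bottle is tasted.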
The Hidden Architects of Your Choices
Lewis: Okay, so that's fascinating for wine snobs and baseball nerds. But how does this 'Super Crunching' actually affect me, a regular person who just wants to watch a movie and not think too hard?

Joe: Ah, that's where the revolution went from a niche curiosity to the engine of the modern world. It's already affecting you every single day. Think about Netflix. When you finish a show, what happens?

Lewis: The algorithm serves up my next binge. It knows my secret love for historical dramas and my not-so-secret love for terrible reality TV.

Joe: And it's incredibly effective. Ayres points out that nearly two-thirds of all movies rented on Netflix, back when they were mailing DVDs, came from the recommendation engine. And people rated those recommended movies half a star higher on average. The algorithm knows your taste better than you do.

Lewis: So basically, my Netflix queue isn't just a friendly suggestion, it's a finely tuned machine designed to keep me glued to the couch?

Joe: Precisely. But it gets much more intense. Let's talk about Harrah's Casino. They wanted to solve a simple problem: how to keep gamblers playing as long as possible without them getting so upset about their losses that they never come back.

Lewis: The gambler's "breaking point."

Joe: They called it the "pain point." And they built a Super Crunching model to predict it for every single customer. They tracked every bet you made, what time of day you played, how much you were winning or losing, your age, your gender, where you lived—all of it. They fed this into a regression model that calculated the exact dollar amount of loss you could tolerate before you'd get angry and leave.

Lewis: Wow, that's both brilliant and deeply cynical. They're literally calculating how much pain they can inflict before you walk away.

Joe: It gets better. Let's say the algorithm predicts your pain point is $900. You're at a slot machine, and the system sees you've just lost $850. A silent alarm goes off in the casino's command center. A "luck ambassador"—I'm not making that title up—is dispatched to your location.

Lewis: A luck ambassador? What do they do, sprinkle you with magic good-luck dust?

Joe: They walk up to you with a big smile and say, "Hi, we've noticed you're one of our valued customers! How would you like a complimentary dinner for two at our prime steakhouse, on the house?"

Lewis: Whoa. So right before you hit your breaking point, they distract you with a free steak. You forget your losses, you feel special and valued, and you're much more likely to come back tomorrow and lose another $850.

Joe: That's the system. It's not about luck; it's about calculated, personalized intervention designed to manage your emotions and maximize their profit. It's a perfect, if unsettling, example of Super Crunching in action. And it's not just casinos. Wal-Mart famously analyzed sales data from hurricanes and discovered that before a storm, sales of strawberry Pop-Tarts went through the roof.

Lewis: Pop-Tarts? Not water or batteries?

Joe: Those too, but Pop-Tarts were the surprise. So now, when a hurricane is forecast, Wal-Mart preemptively ships truckloads of strawberry Pop-Tarts to stores in the storm's path. They're predicting and meeting a need you didn't even know was a statistical certainty.
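[Editor's note: the "luck ambassador" trigger boils down to a threshold check on a predicted number. This is a hypothetical sketch of that last step; Harrah's actual system was a proprietary regression over betting histories, and the 90% buffer here is an invented parameter.]

```python
# Hypothetical pain-point intervention trigger. The predicted
# threshold (e.g. $900) would come from a regression model over a
# customer's betting history; here it's simply passed in.

def should_intervene(predicted_pain_point, current_loss, buffer=0.9):
    """Fire the 'luck ambassador' alert when a player's running
    loss reaches 90% of their predicted loss tolerance."""
    return current_loss >= buffer * predicted_pain_point

print(should_intervene(900, 850))  # True: $850 is past 90% of $900
print(should_intervene(900, 400))  # False: well below the threshold
```

The crunching is in estimating the threshold; the intervention itself is just a comparison running in real time against every active player.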
The Double-Edged Sword: Progress, Privacy, and the Perils of Prediction
Joe: Exactly. And that brings us to the biggest question in the book: is all this crunching a force for good or for... something else? It's a true double-edged sword.

Lewis: I can see the "something else" part pretty clearly from the casino story. But what's the argument for it being a force for good?

Joe: The most powerful tool the Super Crunchers have is randomized testing. It's a simple idea: you want to know if something works? You split people into two groups at random, as if by coin flip, give the treatment to one group but not the other, and see what happens. Randomization takes the guesswork and bias out of the comparison.

Lewis: Like a clinical trial for a new drug.

Joe: Exactly. And governments are starting to use this to make policy. Instead of just throwing money at a problem, they test it. For example, several states tested job-search assistance programs for the unemployed. They used an algorithm to predict who was most likely to be unemployed for a long time, and then randomly gave half of those people extra help—resume workshops, interview coaching, and so on.

Lewis: And what happened?

Joe: The people who got the help found jobs weeks earlier than the control group. The program more than paid for itself in saved unemployment benefits; for every dollar spent, the government saved two. It was proof, based on hard data, that the program worked. This is evidence-based policy, and it has the potential to make government smarter and more effective. The same goes for medicine, with campaigns that use data to identify simple, life-saving interventions in hospitals.

Lewis: Okay, so this can actually make government more effective and save lives. That's a powerful argument. It's hard to be against that.

Joe: It is. But here's the other edge of the sword. What happens when the data is wrong, or the person crunching it is biased? We put all our faith in these objective numbers, but they're still collected and analyzed by flawed human beings.

Lewis: That's the catch, isn't it? We're told to trust the numbers, but what if the numbers are wrong? Or what if the person crunching them has an agenda?

Joe: This is where the story of economist John Lott comes in. In the late '90s, he published a hugely influential study called "More Guns, Less Crime." He had a massive dataset, and his regressions seemed to show that when states passed laws making it easier to carry concealed weapons, crime rates dropped significantly.

Lewis: I've definitely heard that argument. It became a cornerstone for gun-rights advocates.

Joe: A massive one. But other researchers, including the author Ian Ayres himself, decided to check the math. They got Lott's data and tried to replicate his findings. And they couldn't. They dug deeper and found that Lott had made several coding errors when setting up his data. Simple mistakes that completely changed the results. When the errors were fixed, the effect vanished. Some of the models even suggested the laws might slightly increase crime.

Lewis: So one of the most influential pieces of data-driven policy research in a generation was based on... a typo?

Joe: Essentially. The National Academy of Sciences later reviewed all the evidence and concluded there was no credible proof either way. The Lott saga is a cautionary tale. It shows that even the Super Crunchers can be blinded by their own beliefs or make simple mistakes. It proves that we can't just blindly accept a statistical result. We need what Ayres calls an "empirical devil's advocate"—an independent, formalized system for challenging and verifying data-driven claims.
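[Editor's note: the coin-flip design described above can be simulated in a few lines. This is a minimal sketch, not the actual state experiments; the 3-week effect size, the spell-length distribution, and the sample size are all invented for illustration.]

```python
import random
import statistics

# Minimal simulated randomized trial: assign each person to
# treatment or control by coin flip, then compare average outcomes.
random.seed(42)

def weeks_unemployed(got_help):
    base = random.gauss(20, 4)            # baseline spell length (invented)
    return base - (3 if got_help else 0)  # assume help shaves ~3 weeks (invented)

treatment, control = [], []
for _ in range(1000):
    if random.random() < 0.5:             # the coin flip is the whole design
        treatment.append(weeks_unemployed(got_help=True))
    else:
        control.append(weeks_unemployed(got_help=False))

diff = statistics.mean(control) - statistics.mean(treatment)
print(f"help group found jobs {diff:.1f} weeks earlier on average")
```

Because assignment is random, the two groups are alike on everything except the help itself, so the measured difference recovers the true effect without any modeling assumptions. That is exactly why randomized tests cut through the guesswork.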
Synthesis & Takeaways
Joe: And that really brings us to the core takeaway of the book. The Super Crunching revolution isn't really about algorithms replacing humans, or intuition becoming obsolete. It's about the evolution of expertise.

Lewis: It seems like the old expertise was based on a "black box" of personal experience. You just had to trust the expert because they'd "seen it all." But the new expertise is more transparent and testable.

Joe: Exactly. The future isn't just about being a brilliant number cruncher. It's about having the wisdom and creativity to ask the right questions of the data. Your intuition's new job isn't to provide the answer, but to form the hypothesis that the data can then test. Statistical results that seem wildly counter-intuitive should be interrogated, not just accepted.

Lewis: So the real skill isn't being a math genius, but being a smart skeptic. Knowing when to trust the algorithm, and when to trust your... well, your newly data-informed gut. It's about learning to have a conversation with the data, not just taking orders from it.

Joe: That's the new way to be smart. It's a partnership. Intuition points the flashlight, and the data tells you what's really hiding in the dark.

Lewis: That makes me wonder: what's one decision you make based on pure intuition that a simple dataset could probably improve? For me, it's probably deciding what to make for dinner. My gut says pizza, but the data on my waistline would probably suggest a salad.

Joe: (Laughs) A classic conflict. We'd love to hear what our listeners think. What's your personal "Ashenfelter vs. Parker" moment? Let us know. We're always curious to see where data and daily life collide.

Lewis: This is Aibrary, signing off.