
The 37% Rule for Life

13 min

Golden Hook & Introduction


Michelle: Alright Mark, I've got the book here: Algorithms to Live By. My first thought? This sounds like the least romantic, most robotic guide to life ever written. 'How to find a spouse using binary search.' Is that what we're in for today?

Mark: You know, it’s funny you say that, because that’s the exact trap everyone falls into with a title like this. It sounds like it’s going to turn you into a soulless automaton, optimizing every second of your life. But it’s actually the opposite.

Michelle: Oh, really? You’re going to have to convince me on that one.

Mark: I will. Today we’re diving into Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian and Tom Griffiths. And what's fascinating is the team-up behind it. You have Brian Christian, who is a poet and a programmer, paired with Tom Griffiths, a top cognitive scientist at UC Berkeley. So it's this perfect blend of humanistic inquiry and hard science.

Michelle: Okay, a poet and a scientist. That's an interesting combo. I’m intrigued. So where do they even start? How do you apply cold, hard algorithms to the beautiful mess of human life?

Mark: They start with one of the most universal and agonizing decisions we all face: knowing when to stop looking.

Optimal Stopping: The Surprising Math of When to Say 'Yes'


Mark: Think about apartment hunting in a city with a crazy rental market. You see a place, it’s pretty good. The rent is decent, the location is okay. But you just started looking. What if the perfect place is just around the corner? But if you pass on this one, it could be gone in an hour. What do you do?

Michelle: This is my nightmare. I just fall into total analysis paralysis. I’d probably make a spreadsheet, lose sleep for a week, and then end up picking the first one out of sheer exhaustion. There's no way there's a right answer to this, is there? It feels like pure luck.

Mark: That's what you'd think! But mathematically, there is a provably optimal strategy. It's called the 37% Rule, and it's one of the most famous ideas from the book.

Michelle: The 37% Rule? Okay, you have my full attention. That sounds way too specific to be true.

Mark: It’s shockingly specific. The rule goes like this: if you have a set period of time to make a decision—say, 30 days to find an apartment—you should spend the first 37% of that time, so about 11 days, just exploring. You don't commit to anything. You just look. This phase is purely for gathering data and setting a baseline for what a "good" apartment looks like in this market.

Michelle: Okay, so for 11 days, I'm just a tourist in my own apartment search. I'm building my mental database. Then what?

Mark: After that 37% mark—after day 11—the rule changes. You are now in the "leap" phase. The algorithm is simple: you commit to the very next apartment you see that is better than any of the apartments you saw in that initial 37% exploration period.

Michelle: Whoa, hold on. The very next one? What if an even better one comes along the day after? And why 37%? Why not 40% or 50%? It feels so arbitrary.

Mark: It’s not arbitrary at all! It’s the mathematical sweet spot. 1 divided by e, to be precise. This number, 37%, gives you the highest possible probability of selecting the single best option from the whole pool. If you look for too short a time, your standard will be too low. If you look for too long, the best option has likely already passed you by. 37% is the perfect balance between exploration and commitment.

Michelle: My mind is a little blown right now. It takes this incredibly messy, emotional human problem and just… solves it with a number.

Mark: And people have been wrestling with this for centuries! The book tells the amazing story of the astronomer Johannes Kepler, back in the 17th century. After his first wife died, he decided to remarry and meticulously "interviewed" eleven different women over two years.

Michelle: He made a shortlist? That’s cold, Kepler.

Mark: He was a scientist! He was trying to be rational. He really liked the fourth candidate, but he thought, "I'm not even halfway through my list, I should keep looking." So he passed on her. He then went through the rest, found them all lacking in some way, and was filled with regret. He ended up having to go back to the fifth candidate, who thankfully still accepted his proposal.

Michelle: So Kepler was basically beta-testing the 37% rule and almost messed it up! Four is roughly 37% of eleven, so the first four should have been his exploration phase, with candidate four setting the bar for whoever came next. That's incredible. But does it really work for dating? It feels so… unromantic to apply a formula.

Mark: The authors would say it's not about being unromantic; it's about understanding the structure of the problem. And this problem of balancing the new versus the familiar, the known versus the unknown, applies to almost everything in our lives.

The Explore/Exploit Trade-off: Your Life's Internal Algorithm


Mark: And that leads perfectly to the next big idea, which is less about a single decision and more about how we live our entire lives. It’s the trade-off between Exploration and Exploitation.

Michelle: Okay, explain that.

Mark: Exploration is trying new things—a new restaurant, a new genre of music, a new city. Exploitation is enjoying the things you already know you love—going to your favorite restaurant, listening to your favorite band, vacationing in your favorite spot. We face this choice constantly.

Michelle: Oh, I know this feeling. This is my social life in a nutshell. My friends and I have the same debate every Friday night. And I've noticed as I get older, I... exploit more. I just want my favorite pizza place. I don't have the energy for a new place that might be terrible. Am I getting old and boring?

Mark: According to the book, you're getting rational! The authors argue this is a 'lifespan algorithm' that’s hardwired into us. When you're young, you have a long time horizon. The potential payoff for exploration is massive. You might discover a new favorite band that you can enjoy for the next 50 years. The investment is worth it.

Michelle: Right, the risk is low and the potential reward is huge.

Mark: Exactly. But when you're older, your time horizon is shorter. The math changes. It becomes more rational to exploit the tried-and-true favorites you've already discovered through a lifetime of exploration. The book cites fascinating research showing that as people age, their social circles naturally shrink. They don't stop being social; they just focus their energy on the most meaningful, high-value relationships they've cultivated over the years.

Michelle: That's a much kinder way of looking at it than 'you're getting boring.' So it's not a flaw, it's a feature of our internal programming. It's an adaptive strategy. What about on a bigger scale?

Mark: The book has a brilliant example: Hollywood's obsession with sequels and reboots. Think about it. Why are there so many? Because the film industry, facing declining profits and a shrinking time horizon for theatrical releases, is in full-on "exploit" mode.

Michelle: Wow. They don't want to risk exploring a new, original idea that might flop. They want to exploit the guaranteed audience of a known franchise.

Mark: Precisely. A new superhero movie is a low-risk, high-reward exploit. An original, independent drama is a high-risk exploration. The industry's behavior is a perfect reflection of this algorithm. They're acting like someone with not much time left, prioritizing sure things over new discoveries.

Michelle: That's a little depressing, but it makes perfect sense. So being too focused on exploiting can lead to stagnation. But being too focused on exploring means you never get to enjoy anything. It's a constant balancing act.

Mark: It is. And being too analytical, trying to find the perfect balance, can lead to its own set of problems.
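Michelle's Friday-night dilemma can be put in numbers. This is a toy model, not from the book, and every number in it is made up: assume your known favorite restaurant is worth 0.7 on a 0-to-1 scale, a new place is a uniform draw on [0, 1], and after one exploratory visit you stick with whichever of the two turned out better.

```python
def value_of_exploring(known=0.7, horizon=10):
    """Expected total payoff: try one unknown option tonight (quality
    uniform on [0, 1]), then exploit the better of the two for the
    remaining horizon - 1 nights."""
    # E[max(known, U)] for U ~ Uniform(0, 1): with probability `known`
    # the draw loses; otherwise it averages over the winning region.
    e_best = known * known + (1 - known ** 2) / 2
    return 0.5 + (horizon - 1) * e_best  # 0.5 is the expected first-night payoff

def value_of_exploiting(known=0.7, horizon=10):
    """Expected total payoff from visiting the known favorite every night."""
    return known * horizon
```

With these made-up numbers, exploring only pays off when six or more nights remain; with fewer, the old favorite wins. That is the lifespan argument in miniature: the same gamble is rational on a long horizon and irrational on a short one.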

Overfitting and Computational Kindness: The Wisdom of Thinking Less


Mark: And this idea of being 'too optimal' or 'too analytical' brings us to the final, and maybe most profound, concept: the danger of Overfitting.

Michelle: Overfitting. That sounds like a term from statistics or machine learning.

Mark: It is. In simple terms, overfitting is when you create a model or a plan that is so perfectly tailored to the specific data you have that it fails spectacularly when it encounters anything new. It’s a model that has memorized the noise, not learned the signal.

Michelle: Like when you plan a vacation down to the minute, based on perfect weather and traffic, and then one flight delay or a sudden rainstorm ruins the entire, brittle thing?

Mark: That is a perfect analogy! The plan was overfitted to ideal conditions. The book uses the amazing story of Charles Darwin creating a pros-and-cons list for whether he should get married.

Michelle: No way. The Charles Darwin?

Mark: The one and only. He had a column for "Marry" and "Not Marry." Under "Marry," he listed things like "children," "constant companion," and my personal favorite, "charms of music & female chit-chat."

Michelle: (laughing) Female chit-chat! Oh, Darwin. What was in the "Not Marry" column?

Mark: "Terrible loss of time," "less money for books," and "anxiety and responsibility." He was trying to turn this deeply emotional, complex life decision into a simple optimization problem. He was trying to find the perfect answer, but in doing so, he was overfitting to tiny, irrelevant details.

Michelle: That's hilarious. So the advice is... think less? That feels so counter-intuitive.

Mark: In a way, yes. It's about knowing when to stop analyzing. And this leads to their beautiful closing idea, which I think is the most important takeaway from the whole book: Computational Kindness.

Michelle: Computational Kindness. I have never heard that phrase before. What does it mean?

Mark: It's the idea that we should design our interactions and our systems to reduce the cognitive load on other people. We are all constantly solving computational problems, and we can either make those problems harder or easier for each other.

Michelle: Give me an example.

Mark: Don't ask a friend, "Where should we eat tonight?" That's a terrible question. You've just handed them a huge, open-ended search problem. They have to consider cuisine, location, price, your preferences, their preferences... it's exhausting.

Michelle: I am guilty of doing this at least twice a week.

Mark: We all are! A computationally kind approach would be to say, "I'm in the mood for Italian or Thai. What do you think?" You've constrained the problem. You've turned a difficult search into a simple choice. You've been computationally kind. It's a form of empathy.

Michelle: Wow. That reframes so much about social etiquette. It’s not just about being polite; it’s about being considerate of other people's mental energy.

Mark: Exactly. The book argues that this principle applies everywhere. A well-designed parking garage with a single, spiraling path is computationally kind because you just take the first spot you see. A chaotic, multi-lane parking lot is computationally cruel. A restaurant that takes your name and texts you when your table is ready is kind; one with a "hover and pounce" policy is cruel.
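Mark's line about "memorizing the noise, not learning the signal" can be shown in a few lines. This is a hand-rolled toy, not from the book, and all the data is invented: the true signal is the constant 3.0, each observation adds noise, and we compare a model that memorizes every point (nearest neighbor) against one that just averages them.

```python
def mse(preds, targets):
    """Mean squared error between predictions and targets."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def nn_predict(train_x, train_y, x):
    """'Memorizing' model: echo the y-value of the nearest training point."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

# Invented data: true signal is 3.0 everywhere, plus observation noise.
train_x = [0, 1, 2, 3, 4]
train_y = [3.8, 2.4, 3.4, 2.1, 3.5]   # 3.0 + noise
test_x  = [0.4, 1.6, 2.5, 3.3]
test_y  = [2.7, 3.7, 2.5, 3.2]        # 3.0 + fresh noise

# Simple model: a single number, the average of what was seen.
mean_model = sum(train_y) / len(train_y)

memorizer_test = mse([nn_predict(train_x, train_y, x) for x in test_x], test_y)
simple_test    = mse([mean_model] * len(test_x), test_y)
```

The memorizer scores a perfect zero error on the data it has already seen, yet does worse on fresh data than the humble average: it has faithfully reproduced the noise, which is exactly the overfitting trap Darwin fell into with his list.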

Synthesis & Takeaways


Michelle: Computational kindness. The more I sit with that phrase, the more sense it makes. It's not just about efficiency; it’s a form of active empathy. You're making life easier for someone else's brain.

Mark: Exactly. And that's the big takeaway from Algorithms to Live By. It’s not about turning humans into robots. It’s about using the logic of computation to become more, not less, human. To understand our own limits, to be wiser in our choices, and ultimately, to be kinder in our interactions. The authors argue that the best algorithms are often the ones that embrace imperfection, uncertainty, and the messiness of it all.

Michelle: It reminds me of their final point about framing problems. Asking someone 'When are you free?' is a burden. It forces them into a complex search of their entire calendar. But asking 'Are you free Tuesday at 2 PM?' is a gift. It's a simple verification task, not a complex search.

Mark: A perfect example. It’s a small shift that makes a world of difference. The book is full of these little revelations that, once you see them, you can't unsee them. It gives you a new language for thinking about your own life and your interactions with others.

Michelle: It really does. It’s less of a "how-to" guide and more of a "how-to-think" guide. It’s about recognizing the shape of the problems we face every day.

Mark: And that’s a powerful thing. So, for everyone listening, what's one small way you can be computationally kind this week? Maybe it’s in how you schedule a meeting, or how you ask your partner about dinner plans. Let us know your thoughts. We'd love to hear them.

Michelle: This is Aibrary, signing off.
