
Algorithms to Live By

11 min

The Computer Science of Human Decisions

Introduction

Narrator: How do you know when to stop looking? Whether searching for the perfect apartment, the ideal job, or even a life partner, we all face the same paralyzing dilemma: if you commit too early, you might miss out on something better, but if you wait too long, the best option might already be gone. This isn't just a personal anxiety; it's a classic computational problem. What if the same logic that powers computers could provide a clear, rational answer to life's most complex and uncertain questions?

In their groundbreaking book, Algorithms to Live By, authors Brian Christian and Tom Griffiths reveal that the challenges of human life—managing limited time, making decisions with incomplete information, and interacting with others—are mirrored in the problems computer scientists have been solving for decades. They argue that by understanding the logic of algorithms, we can unlock a new framework for navigating our own lives with greater wisdom and efficiency.

The 37% Rule for Life's Big Decisions

Key Insight 1

Narrator: Many of life's most significant choices are what computer scientists call "optimal stopping" problems. These are situations where you must review a series of options and decide when to commit, knowing you can't go back. The book introduces a surprisingly simple and effective solution: the 37% Rule. This rule states that the optimal strategy is to spend the first 37% of your search time—or options—exploring without any intention of committing. This initial phase is purely for gathering data and establishing a baseline for what "good" looks like. After that 37% threshold, you should commit to the very next option that is better than anything you saw in the initial exploration phase.
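The look-then-leap strategy is simple enough to sketch in a few lines. This is an illustrative simulation, not code from the book; the candidate scores are random numbers, and the pool size of eleven echoes the Kepler anecdote that follows.

```python
import random

def secretary_search(candidates):
    """Optimal-stopping sketch: skip the first ~37% of candidates to set
    a baseline, then commit to the first one that beats everything seen."""
    n = len(candidates)
    cutoff = round(n * 0.37)                      # size of the look-only phase
    best_seen = max(candidates[:cutoff], default=float("-inf"))
    for score in candidates[cutoff:]:
        if score > best_seen:
            return score                          # leap at the first improvement
    return candidates[-1]                         # forced to take the last option

# Simulation: how often does the rule land on the single best candidate?
random.seed(0)
trials = 10_000
hits = sum(
    secretary_search(pool) == max(pool)
    for pool in ([random.random() for _ in range(11)] for _ in range(trials))
)
print(f"best-candidate hit rate: {hits / trials:.2f}")
```

With eleven candidates the rule looks at the first four and leaps at the next improvement, and across many trials it picks the single best candidate far more often than the roughly one-in-eleven chance of guessing blindly.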

This isn't just a theoretical concept. The authors share the historical anecdote of the astronomer Johannes Kepler, who, after his first wife died, embarked on a methodical search for a new one. He courted eleven women over two years, meticulously evaluating each one. He was tempted by the fourth candidate but decided to keep looking, only to find the subsequent options less suitable. He eventually returned to the fifth woman, Susanna Reuttinger, and married her. While Kepler didn't know the 37% rule, his process of looking and then leaping mirrors its core logic. Applying the rule to his eleven candidates would have meant exploring the first four (approximately 37%) and then choosing the next best. That would have led him directly to Susanna, his fifth and ultimately happy choice.

The Explore/Exploit Trade-Off

Key Insight 2

Narrator: Life is a constant negotiation between trying something new (exploration) and sticking with what you know and love (exploitation). Should you visit your favorite restaurant again or try the new place that just opened? Should you listen to your favorite album or explore a new artist? This is the explore/exploit trade-off, a fundamental problem in computer science, often modeled as the "multi-armed bandit problem." The book explains that the optimal balance depends entirely on your time horizon.

For instance, data scientist Chris Stucchio noticed his own behavior changed depending on his time in a city. When he first moved to a new place, he would explore dozens of restaurants, knowing he had a long time to enjoy any new favorites he discovered. But in his final weeks before leaving, he would exclusively exploit his established favorites, ensuring his last meals were guaranteed to be great. This behavior is perfectly rational. When you have a long time left, the potential payoff of discovering a new favorite is high, making exploration worthwhile. When time is short, the risk of a bad experience outweighs the potential reward, making exploitation the wiser choice. The authors suggest this even explains societal trends, like Hollywood's increasing reliance on sequels—a sign of a risk-averse, short-termist industry exploiting known successes rather than exploring new ideas.
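The horizon logic behind Stucchio's restaurant habit can be made concrete. This is a toy sketch with made-up numbers (the probability of a new place being better, the size of the payoff, and the quality scores are all assumptions for illustration), not the book's mathematics.

```python
def explore_or_exploit(known_value, new_mean, meals_left):
    """Horizon-aware sketch: trying a new restaurant risks one mediocre
    meal tonight, but a discovery pays off on every remaining meal.
    Explore when the expected future gain outweighs tonight's risk."""
    # One-shot cost of exploring: expected quality gap on tonight's meal.
    tonight_cost = known_value - new_mean
    # Upside: if the new place beats the favorite, all later meals improve.
    p_better = 0.3                    # assumed chance the new spot wins
    gain_if_better = 2.0              # assumed quality gain when it does
    future_gain = p_better * gain_if_better * (meals_left - 1)
    return future_gain > tonight_cost

print(explore_or_exploit(8.0, 6.0, 100))  # long horizon: explore
print(explore_or_exploit(8.0, 6.0, 2))    # leaving town soon: exploit
```

The same numbers flip the decision purely because the horizon changed, which is exactly the rational pattern the authors describe: explore early, exploit late.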

The Hidden Costs of Order and Clutter

Key Insight 3

Narrator: We spend a remarkable amount of time and energy organizing our lives, from sorting emails into folders to arranging books on a shelf. But is all this effort worth it? The book delves into the computer science of sorting and caching to reveal the trade-offs. Sorting makes searching faster, but the act of sorting itself takes time. The authors highlight research showing that people who meticulously file their emails are no faster at finding a specific message than those who just use the search bar on a messy, unsorted inbox.

This leads to the concept of caching, where computers—and our brains—keep frequently used items in a small, easily accessible place. The most effective caching policy is often "Least Recently Used" (LRU), where the item you haven't touched in the longest time is the first to be discarded or moved to deep storage. This explains why we often leave recently used documents on our desk rather than filing them immediately. The book tells the story of Yukio Noguchi, a Japanese economist who developed a filing system based on this principle. He simply put every new document on the far left of a single box. When he used a file, he returned it to the far left. This self-organizing system, which keeps the most relevant files at the front, proved to be incredibly efficient, demonstrating that a little bit of managed chaos can be far more effective than rigid order.
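Noguchi's move-to-front filing is, in effect, a self-organizing LRU cache, and it can be sketched directly. The class name and the sample documents below are illustrative, not from the book.

```python
from collections import deque

class NoguchiBox:
    """Move-to-front filing sketch: new or just-used files go to the
    left end, so the least recently used file drifts to the right."""
    def __init__(self):
        self.files = deque()

    def use(self, name):
        if name in self.files:
            self.files.remove(name)    # pull the file out of the box...
        self.files.appendleft(name)    # ...and return it to the far left

    def least_recently_used(self):
        return self.files[-1]          # rightmost file = first to archive

box = NoguchiBox()
for doc in ["taxes", "draft", "invoice", "draft", "memo"]:
    box.use(doc)
print(list(box.files))                 # most recently used first
print(box.least_recently_used())       # the file to move to deep storage
```

No deliberate sorting ever happens, yet the files you touch most stay at the front, which is why the system feels like managed chaos rather than rigid order.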

The Perils of Overthinking

Key Insight 4

Narrator: In a world of big data, it’s easy to assume that more information and more analysis always lead to better decisions. The book powerfully refutes this idea with the concept of "overfitting." In machine learning, a model overfits when it becomes too complex and learns the noise and random fluctuations in the data, rather than the underlying pattern. As a result, it becomes excellent at explaining the past but terrible at predicting the future.

This human tendency is illustrated by the story of Nobel Prize-winning economist Harry Markowitz. He developed a sophisticated, complex model for optimizing investment portfolios. Yet, when it came to investing his own retirement savings, he didn't use his own model. Instead, he used a simple heuristic: he split his money 50/50 between stocks and bonds. He did this to avoid overfitting to historical market data, which is notoriously noisy and an unreliable predictor of the future. He understood that in a highly uncertain environment, a simple, robust strategy is often superior to a complex one that creates a false sense of precision. Sometimes, the most rational thing to do is to think less.

Solving the Unsolvable by Letting Go

Key Insight 5

Narrator: Not all problems have a perfect, efficient solution. Computer scientists classify certain challenges, like the famous "Traveling Salesman Problem," as intractable—the number of possible solutions is so vast that even the world's fastest computers couldn't check them all in a lifetime. When faced with such problems, the best approach is often "relaxation," which involves strategically simplifying the problem by letting some of the constraints slide.

A perfect real-world example is the wedding seating chart. For her wedding, PhD student Meghan Bellows tried to create the "perfect" seating arrangement for 107 guests, turning it into a massive optimization problem. Even after running it on a lab computer for 36 hours, no perfect solution emerged. The problem was simply too hard. This is where relaxation comes in. Instead of demanding perfection, we can aim for a "good enough" solution. Sports scheduler Michael Trick uses a technique called Lagrangian Relaxation to create schedules for Major League Baseball. He can't satisfy every single constraint from every team, stadium, and TV network. Instead, he turns hard constraints into soft penalties. A team might have to play an undesirable Sunday night game, but the algorithm minimizes how often that happens. By relaxing the demand for perfection, he creates a workable, near-optimal schedule that keeps the league running.
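The core move of relaxation, turning a hard constraint into a soft penalty, fits in a few lines. This is a toy sketch with invented travel costs and penalty weights, not Trick's actual scheduling system.

```python
def relaxed_cost(schedule, penalty_per_violation=10):
    """Relaxation sketch: the hard constraint "no Sunday-night games"
    becomes a penalty charged per violation, so imperfect schedules
    can still be scored and the least-bad one chosen."""
    base = sum(game["travel"] for game in schedule)
    violations = sum(game["slot"] == "sun_night" for game in schedule)
    return base + penalty_per_violation * violations

# Two imperfect candidates: one breaks the Sunday rule, one travels far.
a = [{"travel": 3, "slot": "sat"}, {"travel": 2, "slot": "sun_night"}]
b = [{"travel": 9, "slot": "sat"}, {"travel": 8, "slot": "sat"}]
best = min([a, b], key=relaxed_cost)
print(relaxed_cost(a), relaxed_cost(b))   # 15 17
```

Under a hard constraint, schedule `a` would be rejected outright; with the penalty, it wins because one undesirable Sunday game costs less than the extra travel, which is the "good enough beats impossible" trade relaxation makes.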

The Protocols of Connection and Kindness

Key Insight 6

Narrator: The final part of the book extends these computational ideas to our interactions with each other, exploring the domains of networking and game theory. The internet functions because of protocols—shared rules that manage how information is sent, acknowledged, and rerouted during congestion. One such protocol is "Exponential Backoff," where a computer waits progressively longer before retrying a failed transmission. This prevents the network from collapsing under a flood of simultaneous retries. The authors suggest this is a model for human forgiveness: be willing to forgive a mistake the first time, tolerate it a second, but become increasingly reluctant to engage after repeated failures.
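The doubling-wait idea is easy to sketch. The base delay, cap, and "full jitter" randomization below are illustrative choices, not specified by the book.

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff sketch: the maximum wait roughly doubles after
    each consecutive failure, capped, with random jitter so simultaneous
    senders don't all retry in lockstep."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

# Upper bound on the wait grows 1s, 2s, 4s, 8s, 16s, ...
delays = [backoff_delay(n) for n in range(5)]
print([round(d, 2) for d in delays])
```

The jitter matters as much as the doubling: if every machine waited exactly the same time, they would all collide again at the same instant.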

This leads to the book's most profound idea: "computational kindness." Since many social interactions are themselves complex computational problems—like deciding where to eat with a group—we can make things easier for others. Instead of asking an open-ended question like "Where do you want to eat?", which forces the other person to search through all possibilities, it's computationally kinder to propose a specific option: "I'm thinking of Italian, what do you think?" This turns a hard search problem into a simple verification problem. By understanding the cognitive load our requests place on others, we can design our interactions to be more efficient, effective, and ultimately, kinder.

Conclusion

Narrator: The single most important takeaway from Algorithms to Live By is that rationality is not about having infinite brainpower or perfect information. It's about having the best strategy for dealing with the messy, complex, and constrained reality of being human. Computer science, far from being a cold and technical discipline, provides a language for understanding the very trade-offs we face every day: between exploration and exploitation, order and flexibility, thinking more and thinking less.

The book challenges us to move beyond simply seeking the right answers and instead focus on finding the right processes. By embracing these algorithms, we can make better decisions, but more importantly, we can develop a new kind of empathy. What would change if you started thinking about the cognitive burden you place on your friends, family, and colleagues? How can you design your own life and interactions to be more "computationally kind," making the hard problems of life just a little bit easier for everyone?
