
The Math of Misbehavior
...And 131 More Warped Suggestions and Well-Intended Rants from the Freakonomics Guys
Golden Hook & Introduction
Joe: Most of us think of cheating as a moral failure. A sign of bad character. But what if it's just... math? A simple calculation of risk versus reward that we all do, all the time?

Lewis: That feels a little cynical, Joe. Are you saying my guilt over sneaking an extra cookie is just a bad calculation?

Joe: According to the authors we're talking about today, it might just be. That's the kind of beautifully uncomfortable question at the heart of the book we're diving into: When to Rob a Bank by Steven D. Levitt and Stephen J. Dubner.

Lewis: Ah, the Freakonomics guys. I feel like their whole brand is built on making us squirm with questions like that. Their work always gets a mixed but passionate reception, and I can see why.

Joe: Exactly. And this book is unique. It's not a single, polished narrative like their others. It's a curated collection of their best, and often most outlandish, blog posts from more than a decade of writing online. So it has this raw, unfiltered, experimental feel. They even admit a lot of their blog posts were 'rubbish,' but these are the hand-picked gems.

Lewis: So we're getting the highlight reel of their most warped ideas. I'm in. And they dive right into the deep end with some truly 'bad' ideas that, when you look closer, have a terrifying logic to them.
The Hidden Logic of 'Bad' Ideas
Joe: They really do. The book is full of suggestions that sound terrible on the surface. One of the most jarring examples comes from a chapter called "We Were Only Trying to Help." Levitt poses a question inspired by his own father after the D.C. sniper attacks of 2002: what's the simplest, most effective, lowest-cost way for a terrorist to cause maximum chaos and fear?

Lewis: Whoa, hold on. Are they really publishing a 'how-to' guide for terrorists? That seems incredibly irresponsible.

Joe: That was the immediate reaction from many readers. They got emails saying things like, "You are an idiot." But the authors' point wasn't to help terrorists. It was to demonstrate how badly we, and our governments, assess risk. The answer his father came up with was chillingly simple.

Lewis: Okay, I'm almost afraid to ask. What was it?

Joe: No bombs, no complex plots. Just twenty people with twenty cheap rifles and twenty cars. At a pre-set time, they start shooting randomly across the country: in big cities, small towns, suburbs. They keep moving, making them almost impossible to track. The actual body count might be relatively low, but the psychological damage and the nationwide panic would be immense. The country would grind to a halt.

Lewis: That's... horrifyingly plausible. And the resources required are minimal. It's the kind of thing you can't really build a multi-billion-dollar defense system against.

Joe: And that is the entire point. Levitt argues that there's a virtually infinite array of simple, low-tech strategies available to people who want to cause harm. But our governments and media are incentivized to focus on big, spectacular, expensive threats: the kind that require massive budgets and create a visible show of security.

Lewis: You mean 'security theater.' Like taking off our shoes at the airport. It makes us feel safer, but does it actually stop a simple, low-tech threat?

Joe: Precisely. Levitt says that for most officials, there's more pressure to look like you're stopping terrorism than to actually stop it in the most efficient way. The incentives are misaligned. It's better for a politician's career to fund a flashy, expensive program that might not work than to address the simple, uncomfortable truths about our real vulnerabilities.

Lewis: That's a deeply cynical take on public service, but it rings true. The incentive is to perform safety, not necessarily to create it. Do they apply this 'crazy idea' logic to anything less... terrifying?

Joe: Absolutely. They do it with public policy all the time. For instance, they float an idea from an off-duty pilot stuck on the tarmac in New York. The pilot's solution to the city's crippling air traffic congestion?

Lewis: Let me guess. Something completely bonkers?

Joe: Just close LaGuardia Airport. Permanently.

Lewis: Come on. That's one of the busiest airports in the country. How does closing it help?

Joe: The pilot's logic was that the three New York airports (JFK, Newark, and LaGuardia) have overlapping airspace. They're too close together, creating a permanent bottleneck in the sky. Removing LaGuardia, the smallest and most geographically constrained of the three, would dramatically simplify the airspace, allowing the other two airports to operate far more efficiently and likely reducing overall delays for the entire region.

Lewis: Okay, I can see the cold logic. It's a systems-thinking approach. But politically, it's impossible. The powerful people who live in Manhattan and use LaGuardia for its convenience would never let it happen.

Joe: And that's the Freakonomics signature. They present a brutally logical, data-driven solution, and in doing so they reveal the hidden incentives (politics, convenience, public perception) that actually drive our decisions, often leading to worse outcomes for everyone. The 'bad' idea isn't the point; it's a tool to show us what's really going on.
The Universal Calculus of Cheating
Lewis: Okay, so it's all about exposing the hidden incentives. The incentive to look like you're fighting terror, or the incentive to keep a convenient airport open. Which brings us back to our opening idea... the incentive to cheat.

Joe: Exactly. This is probably the most fundamental theme in all of their work, and it's all over When to Rob a Bank. They have a chapter titled "If You're Not Cheating, You're Not Trying," which is a famous sports adage. Their core argument is that cheating isn't some deep moral failing. It's a primordial economic act.

Lewis: "Getting more for less."

Joe: You got it. And they illustrate this with a fantastic, low-stakes story. A Washington, D.C. media blog called FishbowlDC ran an online contest to find the "hottest media folks" in the city.

Lewis: This already sounds ridiculous. What are the stakes here? Bragging rights and maybe a free coffee?

Joe: Pretty much. The prize was essentially nothing. But the contest was based on online voting. Two of the contestants had tech-savvy friends who built software 'bots' that could cast thousands of votes for them automatically.

Lewis: So for meaningless bragging rights, people built software to cheat an online poll? That's both pathetic and brilliant.

Joe: Isn't it? They won in a landslide, of course. And the authors use this to make a crucial point: people's behavior is determined by incentives. In this case, the reward was small, but the cost of cheating was near zero and the risk of punishment was literally zero. With that equation, cheating becomes an almost irresistible choice. It's a perfect microcosm of their entire theory.

Lewis: It makes the big, abstract idea of 'incentives' feel very concrete and funny. We see this everywhere: people sharing Netflix passwords, lying about their kid's age to get a cheaper ticket. We're all making these little calculations.

Joe: We are. And it gets more serious. They bring up a study of a Mexican welfare program called Oportunidades. To qualify for aid, applicants had to self-report their household assets.

Lewis: I can see where this is going. They lied.

Joe: They did, but in a fascinatingly complex way. As you'd expect, people underreported things that would disqualify them. About 80 percent of applicants who owned a car or a truck didn't mention it. No surprise there. The incentive is clear: lie to get money.

Lewis: Right, that's the straightforward 'getting more for less.'

Joe: But here's the twist. The researchers found that people also overreported certain items. A large number of applicants who didn't have a toilet, or tap water, or a concrete floor in their home claimed that they did.

Lewis: Wait, why would they do that? Lying in a way that could potentially disqualify them from aid seems completely irrational.

Joe: It does, until you consider a different kind of incentive: reputation. The shame of admitting to a government official that your home doesn't have a toilet was a more powerful force than the potential financial gain. They were willing to risk the aid to avoid the embarrassment.

Lewis: Wow. So it's not just about greed. People also cheat, or lie, to manage how they're perceived. The incentive isn't always money; it can be dignity, or status, or avoiding shame. That adds a whole other layer to it.

Joe: It does. It shows that human motivation is complex, but it's still a response to a set of incentives. The Freakonomics approach is about figuring out what those incentives (financial, social, moral, or otherwise) truly are. And once you see them, behavior that looked random or irrational suddenly makes a whole lot of sense.
Synthesis & Takeaways
Joe: And that really ties it all together. Whether it's proposing 'bad' ideas about terrorism or analyzing why people cheat in a welfare program, the Freakonomics method is about stripping away our initial moral judgment to see the raw, underlying incentives.

Lewis: It's like they're putting on a special pair of glasses that filters out emotion and just shows the math of human behavior.

Joe: That's a perfect way to put it. Levitt has a famous quote that sums it all up: "Morality represents the way that people would like the world to work, whereas economics represents the way it actually does work." This book is a collection of dispatches from the way the world actually works.

Lewis: And it's unsettling, but it feels true. Reading this, you start to realize we're not all inherently good or bad; we're just incredibly responsive to the systems we're placed in. And if a system makes it easy and rewarding to cheat, or to ignore simple solutions in favor of complicated ones, a lot of us will take that path.

Joe: It's a powerful lens to apply to the world. It forces you to stop judging the person and start questioning the system. The book's title, When to Rob a Bank, is a joke, of course. Their analysis concludes it's a terrible career choice: the average take is only a few thousand dollars, and the risk of getting caught is surprisingly high.

Lewis: The incentives are all wrong!

Joe: The incentives are all wrong. But the title itself is a perfect example of their method: ask a provocative, seemingly immoral question to uncover a practical, data-driven truth.

Lewis: It definitely makes you look at the world differently. It's less about finding blame and more about finding the hidden rules of the game everyone is playing.

Joe: And it leaves you with a really powerful question to ask yourself, one that I think gets to the heart of their entire project.

Lewis: What's that?

Joe: It makes you look around and ask: where in my own life am I just responding to a hidden incentive I haven't even noticed?

Lewis: That's a great question for our listeners. Let us know what you think. What's a 'Freakonomics' observation you've made in your own world? We'd love to hear about it.

Joe: This is Aibrary, signing off.