
The Most Important Century
Golden Hook & Introduction (13 min)
Michael: If humanity survives for just a fraction of its potential lifespan, you and I, living right now, are the ancients. We're the cave-dwellers of history. Our choices today are like the first-ever cave paintings—they could set the course for everything that follows.

Kevin: Whoa. That gives me vertigo. The idea that we're at the very beginning of history, not somewhere in the middle or near the end... it's a heavy thought. It reframes everything.

Michael: It completely does. And that's the core idea behind the book we're diving into today, What We Owe the Future by William MacAskill. What's fascinating is that MacAskill isn't some detached, armchair philosopher; he's a co-founder of the effective altruism movement and became one of Oxford's youngest-ever associate professors of philosophy. He lives this stuff, donating a huge chunk of his income to charity.

Kevin: Okay, so he's putting his money where his mouth is. That adds some weight. But "the future" is so huge. It feels abstract. Why should I, sitting here today, care more about someone in the year 3000 than my neighbor next door who needs help now? That's a real tension people feel with this idea.
The Moral Case for the Future: Why Should We Even Care?
Michael: That is the perfect question, and it's exactly where MacAskill starts. He grounds this enormous idea in a very simple, visceral thought experiment. Imagine you're hiking on a remote trail, deep in the woods. You accidentally drop a glass bottle, and it shatters on the path. You're tired, it's getting dark, and you know nobody will be on this trail for months, maybe even years. Do you clean it up?

Kevin: Of course. You have to.

Michael: Why?

Kevin: Because someone could come along later and get seriously hurt. A kid, an animal, anyone. It's a hazard.

Michael: Okay, but what if you knew for a fact that no one would walk that path for another hundred years? Does that change your moral obligation to clean up the glass?

Kevin: Huh. No, I don't think it does. A person getting a deep gash in their leg is a bad thing, whether it happens tomorrow or in 2124. The suffering is the same.

Michael: Exactly. And that's the anchor for his entire argument. Harm is harm, regardless of when it occurs. Distance in time doesn't diminish moral responsibility, any more than distance in space does. We wouldn't say it's okay to dump toxic waste in a country just because it's far away. So why would we say it's okay to leave a broken world for future generations just because they're far away in time?

Kevin: Okay, that clicks. That's a powerful way to put it. No one would say it's okay to leave the glass just because the kid who gets cut hasn't been born yet. But a bottle is simple. The world is complex. How can we possibly know what people in the future will need or want? Aren't we just guessing and potentially imposing our own flawed values on them?

Michael: That's the next layer. He uses another great analogy for this: the Atlantis analogy. Imagine we discover a vast, thriving civilization living at the bottom of the ocean—Atlantis. We can't communicate with them, but we realize our actions, like dumping pollution into the sea, are directly affecting their health and happiness. Do we have a responsibility to stop, even if we can't ask them what they want?

Kevin: Yeah, absolutely. You can't just knowingly poison a whole civilization because you can't get their direct feedback. You have to assume they want to live, be healthy, and flourish.

Michael: And MacAskill's point is that future generations are our Atlantis. They are a vast, undiscovered country whose well-being depends on what we do today. We don't need to know the specifics of their fashion or music to know they will likely value clean air, a stable climate, and the freedom to build their own lives. We can work on those robustly good things.

Kevin: I see. It's about creating the conditions for them to flourish, whatever that might look like for them. It's not about picking their future for them, but making sure they have one to begin with.

Michael: Precisely. And the scale is just staggering. He does this thought experiment where you live every single human life that has ever existed, one after another. You'd spend most of your time as a farmer living in poverty. The modern era would be a tiny, bizarre flash of incredible change and wealth at the very end. But if humanity continues, the number of lives in the future could be trillions upon trillions. The lives already lived would be a rounding error.

Kevin: That's a mind-bending perspective. It makes our current moment feel both incredibly small and unbelievably important. It feels like we're standing on a knife's edge.
History's Hinge: Are We Living in the Most Important Century?
Michael: You've just hit on the second major idea. MacAskill argues that we're not just another generation in a long line. We might be living in a rare "moment of plasticity"—a hinge of history where our actions have unusually large and lasting consequences.

Kevin: A moment of plasticity? What does that mean, exactly?

Michael: Think of it like hot glass. For a brief period, it's malleable and can be shaped into anything. But once it cools, it becomes rigid and fixed. MacAskill suggests civilizations have these moments too. To figure out if we're in one, he gives us a framework: look for actions that are high in significance, persistence, and contingency.

Kevin: Okay, break those down for me. Significance, persistence, contingency.

Michael: Significance is the scale of the impact—how much good or bad it does. Persistence is how long that effect lasts. And contingency is the most interesting one: how dependent the outcome was on a specific, small choice. In other words, could it have easily gone another way?

Kevin: That sounds a bit academic. Can you give me an example?

Michael: The best one in the book is the division of Korea. After World War II, the US and the Soviet Union needed to divide up the peninsula. The decision fell to two junior American officers, Dean Rusk and Charles Bonesteel. They had thirty minutes. They had no expertise on Korea. They just grabbed a National Geographic map and saw the thirty-eighth parallel, which looked like it split the country roughly in half.

Kevin: Wait, seriously? A National Geographic map?

Michael: A National Geographic map. They proposed it, and to their surprise, the Soviets accepted. That single, contingent decision—made in half an hour by two guys who were just trying to solve an immediate problem—created two entirely different worlds. One, a prosperous, democratic society. The other, a totalitarian, impoverished state. The lives of millions of people for generations were locked into a path determined by a line drawn on a map almost at random.

Kevin: That's terrifying. A decision that monumental was basically an accident of history. It's a perfect example of contingency. It so easily could have been a different line, or a different decision altogether.

Michael: Exactly. And it makes you think. What lines are we drawing on the map right now, without even realizing it? When we're developing artificial general intelligence, or editing genes with CRISPR, or setting precedents for how we handle global pandemics—are those our thirty-eighth parallel moments?

Kevin: That's a chilling question. It feels like we're constantly making these high-stakes decisions with incomplete information, just like those officers. We're shaping a future that's still like hot glass, and we don't even know what we're making.

Michael: And that's why this century could be the most important. We have unprecedented technological power to shape the future, for good or for ill, in ways that could persist for thousands, or even millions, of years.

Kevin: So if we're at this hinge point, this moment of plasticity, what are the biggest ways we could screw it up? What are the biggest risks we face?
The Perils of Progress: Extinction, Stagnation, and Value Lock-in
Michael: MacAskill argues we tend to focus on one big risk—extinction—but there are other, more subtle dangers. I like to think of it as facing three different doors to a bad future.

Kevin: Okay, lay them out for me. What's behind Door Number One?

Michael: Door Number One is the one we all know: extinction. A permanent end. And he argues the risk isn't just from asteroids or supervolcanoes. The more pressing threat might come from our own technology, especially engineered pathogens. He tells the story of the Sverdlovsk anthrax leak in the Soviet Union in 1979.

Kevin: I think I've heard of this. This was a bioweapons lab, right?

Michael: A covert one. A technician removed a clogged air filter for cleaning and left a note for the next shift. The next supervisor didn't see the note and turned the machinery on. For a few hours, a plume of weapons-grade anthrax dust was vented over the city. It was a simple, human error—a forgotten filter. And it killed over a hundred people.

Kevin: Oh man. And that was with 1970s technology. The thought of what could be created and accidentally released today is... deeply unsettling.

Michael: It is. And that's just one risk. So that's Door Number One: we wipe ourselves out. But Door Number Two is different. It's not a bang, but a long, slow whimper: stagnation.

Kevin: What do you mean by stagnation? Like, the economy stops growing?

Michael: More than that. A civilizational plateau. Imagine if technological and moral progress just... stops. For centuries, or even millennia. He points to historical examples, like the Islamic Golden Age or the Roman Empire. These were periods of incredible innovation and progress, "efflorescences" he calls them, that eventually hit a wall and declined. What if our current era of rapid growth is just another efflorescence, and we're headed for a long, dark period of stagnation where we're stuck with dangerous technology we can't control and no ability to innovate our way out?

Kevin: That's a bleak thought. To be stuck in a holding pattern forever. But you said there were three doors. What could possibly be worse than extinction or eternal stagnation?

Michael: Door Number Three. And in many ways, it's the most horrifying. MacAskill calls it "Value Lock-In." This is where humanity survives, and technology might even advance, but we lock in a single, flawed, or outright evil set of values forever.

Kevin: Like a global dystopia that can never be overthrown.

Michael: Exactly. He uses the example of China's first emperor, Qin Shi Huang. He unified the country, but he was a follower of a brutal philosophy called Legalism. To ensure it would be the only ideology, he ordered the burning of all books from other schools of thought—Confucianism, Daoism, all of it. He had hundreds of scholars buried alive. His goal was to create a philosophical monoculture that would last for ten thousand generations.

Kevin: Wow. So the risk isn't just that the world ends, but that it becomes a permanent prison of bad ideas. That's somehow even scarier.

Michael: It is. Imagine a misaligned superintelligent AI, created with one flawed goal—say, maximizing the production of paperclips. It could convert the entire planet, and then the galaxy, into paperclip factories, eliminating all life and all value in the process. That's a value lock-in. A future that is permanently, irrevocably bad.

Kevin: This is where the critics of longtermism get really nervous, right? This focus on huge, speculative, sci-fi-sounding risks. They argue that by obsessing over potential AI dystopias or trillions of future lives, we risk ignoring the very real, very immediate suffering of people alive today. Does MacAskill address that?
Synthesis & Takeaways
Michael: He does, and it's one of the most important parts of the book. He's very clear that longtermism should supplement, not replace, our commonsense moral duties. He's not saying we should stop fighting poverty or disease today. In fact, he argues that many of the best longtermist actions are things we should be doing anyway.

Kevin: Like what? What's an example of something that's good for both now and the far future?

Michael: Pandemic preparedness is a perfect one. Preventing the next COVID-19 saves millions of lives and trillions of dollars in the present. It also reduces the risk of an engineered pathogen causing an extinction-level event in the future. It's a robustly good action. The same goes for promoting stable, liberal-democratic institutions, or developing cheap, clean energy. These things make life better for us now and they make a good future more likely.

Kevin: So it's not always a zero-sum game between the present and the future. The goal is to find the overlaps.

Michael: Exactly. The profound insight here isn't really about choosing between helping someone today versus someone in a thousand years. It's about fundamentally expanding our moral circle. For most of history, we cared about our family, then our tribe, then our nation. The effective altruism movement pushed us to care about everyone on Earth, regardless of where they live. Longtermism is the next step: caring about everyone, regardless of when they live.

Kevin: That's a powerful reframing. It's not about being a sci-fi prophet trying to predict the future. It's about being a good ancestor. It's about cleaning up the broken glass on the trail.

Michael: That's it perfectly. It's about recognizing that history has placed this immense responsibility in our hands. We are the ones standing at the hinge, with the power to open the door to a future of unimaginable flourishing, or to slam it shut forever.

Kevin: It's a lot to take in. But it also feels incredibly motivating. It gives a sense of meaning and purpose to our actions that's hard to find elsewhere. It leaves you with one big question, really.

Michael: What's that?

Kevin: What's one small thing you can do today that might still matter in a hundred years? It could be anything—donating to a cause that works on these long-term problems, spreading these ideas, or even just how you raise your kids. We'd love to hear what our listeners think. Let us know on our social channels what your answer is.

Michael: A fantastic question to end on. It brings this whole cosmic perspective right back down to a personal choice.

Kevin: This is Aibrary, signing off.