
Future-Proof: Act Now, Thrive Later!

Podcast by Wired In with Josh and Drew

A Guide to Ethical Living for the Fate of Our Future

Introduction

Part 1

Josh: Hey everyone, welcome to the show! Today we're tackling a question that's way bigger than just us: what do we actually owe to the future?

Drew: Exactly, Josh. Think about it: every choice we make now, from what kind of car we drive to who we vote for, could have repercussions for centuries. It's a huge concept, and honestly, a little daunting, right?

Josh: Totally. Daunting, but also incredibly motivating. That brings us to “What We Owe the Future” by William MacAskill, where he lays out an idea called longtermism. It's basically a moral framework that challenges us to consider the long-term impact of our actions on future generations and to strive for a thriving future for humanity. MacAskill points out that history shows we're capable of moral progress, like abolishing slavery, which tells us we can steer society toward a better future. But we need to start now.

Drew: Right, it's not all sunshine and roses, though. MacAskill also throws some pretty heavy stuff at us, like the potential for human extinction, out-of-control AI, or even just getting stuck in a rut where humanity never reaches its full potential. Cheerful stuff, huh?

Josh: I know, it's a bit heavy. But we have to face these possibilities. That's why we're going to unpack three key ideas today. First, we'll look at the moral arguments behind longtermism: how do we balance the needs of billions of future people against our current concerns? Then we'll dive into the existential threats looming over us – nuclear war, pandemics, and more futuristic risks like AI that doesn't align with our values.

Drew: And finally, the big question: what can we actually do about it? We're talking strategies, from reducing existential threats to encouraging ethical tech development, and even rethinking our moral frameworks. You could say civilization might need a serious upgrade.

Josh: So whether you're a philosopher, a policy expert, or just someone curious about your place in the grand scheme of things, you're in for a fascinating discussion about one of the most important ideas of our time.

Drew: Alright, let's jump in.

Introduction to Longtermism

Part 2

Josh: Okay, let's start with the basics—longtermism. It's essentially the idea that future generations are just as important as we are today, and we have a responsibility to consider how our current choices will impact them. What do you think, Drew? Sounds simple enough, right?

Drew: Simple in theory, perhaps. But practically speaking, it raises a lot of questions. How far into the future are we talking about? Ten years? A hundred? Or are we supposed to be worrying about people in the year 3000 with, who knows, maybe evolved webbed feet?

Josh: Good point! That's where the depth of longtermism comes in. We're looking as far into the future as we realistically can, to make sure our actions today don't trap future generations in bad situations. A great historical example is the Iroquois' Seventh Generation Principle: their leaders had to think about how every decision would affect people seven generations down the line.

Drew: Right, the Iroquois – clearly the original futurists! It reminds us that long-term thinking isn't some new academic invention. Still, Josh, it's a big leap from considering the natural world, as the Iroquois did, to facing existential threats like AI or nuclear weapons. Are we really equipped to make decisions with such long-range implications?

Josh: That's the crux of the matter, isn't it? But longtermism argues that we must equip ourselves, because the stakes are incredibly high. The future isn't some far-off dream; it's a direct result of the decisions we make today. Regarding those existential risks: yes, we've created technologies that improve lives, like better healthcare and longer lifespans. But we've also created unprecedented dangers. Poorly managed technology could give future generations tools for progress, or it could inadvertently lead to their downfall.

Drew: It's the double-edged sword of progress, isn't it? Every good thing comes with a potential downside. Antibiotics wiped out diseases but also led to superbugs. And all this talk of existential risks reminds me of MacAskill's emphasis on nuclear war and pandemics. We're talking about catastrophes with global consequences, yet the future people who would bear them have no say in whether we manage these risks well.

Josh: Exactly. Which leads to another key longtermist idea: moral stewardship. MacAskill emphasizes that adopting a longtermist perspective broadens our moral responsibilities to include people who aren't even born yet. So when we ask whether we're acting responsibly – with environmental policies, AI, or economic systems – we're not just asking how those choices affect us, but how they impact the countless lives yet to come.

Drew: That sounds noble, but playing devil's advocate here... how do we balance that moral responsibility against the urgent crises we face today? With climate change and poverty right in front of us, shouldn't we focus on the problems affecting current lives instead of worrying about hypothetical future people?

Josh: That's a valid question, but longtermism doesn't dismiss today's problems – they're interconnected. Think about climate change: tackling it benefits people right now and ensures a livable planet for future generations. It's not about choosing between now and the future; it's about finding solutions that work for both.

Drew: And that's where I start to come around. It shifts from an abstract philosophical argument to something real. Speaking of real, MacAskill also shares data showing why long-term thinking matters. Two centuries ago, global life expectancy was around 30 years; now it's over 70. That transformation wasn't accidental; it was the result of deliberate effort.

Josh: Exactly, Drew. Those improvements didn't just happen; they came from people investing in infrastructure, science, and innovation that paid off decades later. It shows that even small actions now – like funding clean energy or preventing AI misuse – can have huge benefits in the long run.

Drew: But let's not forget the other side. That same technological progress also gave us nuclear weapons. MacAskill argues that innovation is a double-edged sword, depending on how responsibly we handle it. We're essentially gambling with outcomes that could either create a thriving future or destroy it completely.

Josh: That's why frameworks like the one MacAskill proposes are so important. He outlines the SPC framework – Significance, Persistence, and Contingency – for assessing the long-term impact of our actions.

Drew: Okay, break that down for us. Sounds like one of those deceptively simple philosophy acronyms that turn out to be mind-bending.

Josh: It's more practical than mind-bending. Significance asks how much value an outcome adds to society. Persistence asks how long those positive effects could last. And Contingency asks whether the outcome actually depends on our actions, or whether it would have happened anyway. Take renewable energy: its significance lies in reducing emissions, its persistence in benefits that last for generations, and its contingency in the fact that the transition only happens if we actively push for it.

Drew: So basically: "Is it important? Can it last? Is it in our hands?" Seems doable. And honestly, when you run something like fossil fuels through that framework, it becomes clear our current path scores terribly on all fronts. If only we applied this kind of thinking to more policy decisions.

Josh: Exactly! The SPC framework helps us make choices more thoughtfully and prioritize sustainable, high-impact actions. That's the core of longtermism—a toolbox for guiding humanity toward thriving, rather than settling for temporary fixes.

Drew: But tell me this—why should people actually adopt this mindset? It's a big shift, and I bet most people don't see themselves as decision-makers for the next thousand years.

Josh: I'd say it's not about seeing ourselves as all-powerful decision-makers; it's about recognizing that we all have some agency. Even small actions can have huge consequences when they add up. And beyond responsibility, there's inspiration here. Humanity has achieved incredible things before, like abolishing slavery and eradicating smallpox. Those victories prove that long-term thinking can lead to major progress.

Drew: True, but every time I hear those examples, I can't help wondering—what if we'd failed? What if those efforts had stalled? It's sobering to think that these moments of moral progress aren't guaranteed.

Josh: Sobering, but also motivating. History shows us that moral advances aren't inevitable; they're a choice. We have to decide to build systems today – ethical AI, fair governance, sustainable economics – that support long-term wellbeing. Longtermism encourages us to broaden our ethical thinking and take that responsibility seriously.

Drew: So here we are, discussing webbed feet and future generations. Kidding aside, I have to admit the stakes are incredibly high.
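The SPC framework Josh describes lends itself to a back-of-the-envelope comparison. Below is a minimal sketch of that idea in code. To be clear, the multiplicative scoring rule and every number here are illustrative assumptions made up for this episode, not anything MacAskill specifies in the book:

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A candidate action scored on MacAskill's three SPC criteria."""
    name: str
    significance: float  # how much value it adds per year (arbitrary units)
    persistence: float   # how many years the benefits plausibly last
    contingency: float   # probability the outcome happens only if we act (0..1)

    def spc_score(self) -> float:
        # One simple (assumed, not canonical) way to combine the criteria:
        # treat the score as the long-run value our action is responsible for.
        return self.significance * self.persistence * self.contingency


# Made-up inputs: a durable intervention versus a short-lived quick fix.
renewables = Action("scale renewable energy", significance=8, persistence=100, contingency=0.6)
quick_fix = Action("one-off relief effort", significance=9, persistence=2, contingency=0.9)

for action in sorted([renewables, quick_fix], key=Action.spc_score, reverse=True):
    print(f"{action.name}: {action.spc_score():.0f}")
```

The point is not the numbers but the shape of the reasoning: an action with slightly lower immediate significance can dominate once persistence and contingency are taken into account, which is exactly the "Is it important? Can it last? Is it in our hands?" test Drew summarizes.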

Existential Risks and Safeguarding Civilization

Part 3

Josh: So, with that baseline understanding in place, let's explore what longtermism actually means in practice. We're talking about existential risks and how to protect civilization, remember? This isn't just an abstract concept; it's about real, tangible dangers that threaten our survival and what we can actively do about them.

Drew: Right, Josh, these aren't just minor setbacks – we're talking about game over. Extinction, societal collapse... you can't just rewind and try again later.

Josh: Exactly! Existential risks, like environmental disasters, nuclear war, and pandemics, are uniquely critical because they jeopardize humanity's long-term potential. Failing to protect against them is basically gambling with the entire future.

Drew: Okay, so if we're thinking of this as a "doomsday clock," what's ticking the loudest right now?

Josh: Environmental collapse is definitely a contender. Think about greenhouse gas emissions, deforestation, pollution – it's a domino effect of ecological damage. Rising temperatures fuel stronger hurricanes, melt the ice caps, damage ecosystems, and destabilize communities.

Drew: Downer alert! But this isn't exactly breaking news, right? Ecosystems have collapsed before. Wasn't there that whole cautionary tale about Easter Island?

Josh: Precisely! Easter Island shows what happens when you mismanage resources. The islanders thrived, relying on their forests. But unchecked logging, overpopulation, and competition led to complete deforestation, followed by famine and collapse.

Drew: So we're globalizing Easter Island's mistake?

Josh: In a nutshell, yes – prioritizing short-term gain over sustainability, on a planetary scale. Climate models predict biodiversity loss, crop failures, and refugee crises. Easter Island proves that ignoring the signs is a fatal mistake.

Drew: But here's the thing: Easter Island was isolated. Today we have global collaboration, tech, and science on our side. Can't we innovate our way out?

Josh: Innovation is part of the solution. Renewable energy, carbon capture, sustainable agriculture – they all offer possibilities. But the real issue is urgency. It's not just about slowing climate change; there are tipping points, like the 1.5-degree Celsius rise, beyond which feedback loops spiral out of control. Innovation alone isn't enough without immediate, collective action.

Drew: Okay, so environmental collapse is a creeping threat. Nuclear war, though, brings it all crashing down in a matter of hours.

Josh: Absolutely. The destructive power is immense. A single exchange could wipe out millions instantly. And the secondary effects, like nuclear winter, could disrupt agriculture for years, leading to mass famine.

Drew: Nuclear winter sounds like a sci-fi trope... but scientists say it's legit. All that soot blocking sunlight, plummeting temperatures, destroyed crops.

Josh: Exactly. The risk isn't just the bombs; it's the cascading effects on everything we depend on. The Cuban Missile Crisis in '62 serves as a chilling warning. For thirteen days, we were on the edge of nuclear war.

Drew: Thirteen days of diplomacy, thinking, "This could actually be it." One wrong move, one miscommunication... gone.

Josh: Thankfully, diplomacy prevailed. But it shows how vulnerable we are to geopolitical tensions and miscalculations.

Drew: And the more nukes out there, the higher the odds something goes wrong – by accident, sabotage, or ego.

Josh: That's why international cooperation, disarmament agreements, and safeguards are so critical. It's a political problem with huge consequences, one we can't afford to screw up.

Drew: Moving on from geopolitics to biology, let's talk pandemics. COVID-19 was a wake-up call, but MacAskill argues even that understated the true risk?

Josh: Precisely. COVID-19 showed how disruptive a virus can be, even with a relatively modest fatality rate. Now think about what could happen with advanced biotechnology, where pathogens can be engineered to be even more deadly and contagious.

Drew: The CRISPR dilemma—curing blindness one day, creating a super-virus the next.

Josh: Exactly. Biotechnology is a double-edged sword, with potential for accidental or intentional misuse. And we're vulnerable: urbanization, global travel, and inadequate healthcare all amplify the risks.

Drew: So the interconnectedness that's great for Zoom calls is terrible for containing outbreaks. The faster we evolve, the faster our weaknesses evolve with us.

Josh: Which is why experts want stronger biosecurity, early detection, and pandemic preparedness. These aren't luxuries; they're necessities.

Drew: Prevention is key, rather than scrambling for solutions after the crisis hits.

Josh: Exactly. History shows that foresight matters. Look at the Black Death: it killed roughly a third of Europe's population, yet societies adapted, rebuilt, and paved the way for the Renaissance.

Drew: Or post-WWII recovery. Hiroshima and Nagasaki were obliterated, yet Japan became an economic powerhouse. It's comforting to think we can rebuild, but terrifying to see how close we've come to losing everything.

Josh: And that's why proactive measures are crucial. The Svalbard Global Seed Vault and the Internet Archive are great examples of safeguarding knowledge and resources to help future generations recover.

Drew: Seed vaults, digital archives—our modern Noah's Ark. Which goes back to your earlier point about how fragile progress really is. Lose the right knowledge, and we're back to square one.

Josh: Exactly. Preserving climate data, agricultural techniques, and ethical frameworks allows humanity to rebuild and thrive, no matter the crisis.

Drew: So, Josh, these existential risks are largely human-created, and preventing them is therefore a human responsibility. No divine intervention, no Planet B. Just us, our tools, and our choices.

Josh: Precisely! Mitigating existential risks and building resilience isn't just about avoiding disaster – it's about shaping a future worth living in.

Practical Recommendations and Collective Action

Part 4

Josh: Understanding these risks naturally leads us to the question of how we can actively shape a better future. And that's where things get really interesting—what can we actually do about all of this? Which brings us to today's core topic: practical recommendations and collective action.

Drew: Exactly. We've been unpacking existential risks and long-term thinking, and now we get down to solutions. What concrete steps can we take—as individuals and as societies—to make sure we're not just circling the drain but actually steering toward real progress?

Josh: Precisely. This is where principles become plans, ideas turn into action, and inspiration meets practicality. We'll cover strategies like choosing impactful careers, embracing effective altruism, amplifying political activism, and building movements for change – and, ultimately, how all of these efforts work together to secure our collective future.

Drew: Alright, Josh, let's dive in. Where do we start?

Josh: Let's start with advocacy and education, because awareness is often the foundation for action. MacAskill highlights how educating people about long-term impacts can inspire both individual commitment and collective responses. Think about community workshops or forums. Imagine gathering a group to discuss the long-term ramifications of climate change—not abstract numbers, but relatable examples, like how disrupted supply chains or rising sea levels could affect their lives directly. Once you bring these issues closer to home, people are more likely to feel both responsibility and agency.

Drew: So it's like hosting an intervention—but instead of getting Uncle Jerry to quit smoking, you're getting people to rethink their carbon footprint and voting habits.

Josh: In a way, yes! It's about translating huge, often intimidating challenges into local, actionable insights. People need to see how their choices ripple outward. Even small shifts—like choosing renewable energy or adopting a plant-based diet—can model broader, societal solutions.

Drew: You know, this reminds me of something you said earlier: history isn't just something that "happens," it's made. You're suggesting these small individual efforts act as tiny cogs in a much larger machine of change. But – playing devil's advocate here – can personal changes really create meaningful progress? Aren't we up against systemic forces way bigger than any one person can influence?

Josh: Great point. Personal actions alone aren't the whole answer—they need to scale. And that's where collective campaigns come in. Take the renewable energy movement in Germany: local advocacy groups pushed for government support and subsidies for solar and wind technologies. That grassroots pressure created systemic shifts, turning once-expensive solutions into national policy. So yes, personal actions matter, but pairing them with collective movements amplifies the impact exponentially.

Drew: That's encouraging—real examples where change took root. But changing behaviors is one thing; lifelong commitments, like choosing a high-impact career, seem like an even heavier lift. What's MacAskill's take on this?

Josh: He argues that career choice is one of the most potent tools for effecting systemic change. Think of it this way: your career represents decades of effort and innovation. Investing that time in fields like biosecurity, renewable energy research, or ethical AI development contributes directly to the foundational advances that shape our collective future.

Drew: Biosecurity is a good example. We've seen how pandemics disrupt everything—economies, healthcare, even international relations. People who invest their careers in early warning systems or pandemic-prevention research could quite literally save millions of lives.

Josh: Absolutely. Biosecurity is uniquely critical because it tackles risks that could be catastrophic but are potentially preventable. Similarly, renewable energy technology could revolutionize our fight against climate change. Imagine a talented engineer pivoting their skills toward more efficient, scalable solar panels—contributions like that could accelerate humanity's transition to clean energy.

Drew: And it's not just about the direct contributions, is it? It's also about encouraging a cultural shift. You don't need an army of altruistic engineers if you can get even a subset of individuals to lead the charge.

Josh: Exactly. And that's where effective altruism comes into play. The EA movement serves as both a guide and a motivator for people who want to align their talents with the causes that produce the greatest good.

Drew: Yeah, but the EA mindset can trip people up, right? It's one thing to hear, "Okay, become a renewable energy innovator." It's another to tell someone, "Actually, staying in a high-paying corporate gig and donating most of your salary to charity might have more impact than a nonprofit career." It's counterintuitive.

Josh: It is, and I think that's why the EA framework is so useful. It challenges us to look past surface-level assumptions about impact and take a data-driven approach. A finance professional, for example, could redirect a portion of their earnings to malaria prevention programs or biosecurity research—two areas with massive, real-world consequences. It's not one-size-fits-all; it's about thoughtful trade-offs and leveraging your position for maximum good.

Drew: Which makes it less about big heroics and more about game theory. If everyone optimizes their role, the collective benefit skyrockets. I can get behind that, Josh. But how does this translate into something even broader, like political activism?

Josh: Politics is where the biggest levers for change usually reside. Voting, lobbying, and organizing help shape the very policies that govern our collective direction. Grassroots climate groups, for example, have fought successfully for renewable energy standards in various U.S. states. The systemic shifts we're talking about—clean energy infrastructure, ethical AI oversight, biosecurity safeguards—happen most efficiently through policymaking.

Drew: Does MacAskill touch on how to avoid the cynicism trap, though? Political activism often feels like shouting into the void.

Josh: He does, by citing historical successes that underscore its importance. The abolitionist movement endured centuries of opposition before achieving tangible victories through legislation. Contemporary movements like Fridays for Future have likewise shown how collective action within politics can push reluctant legislators to act. These examples remind us that change is incremental, sustained, and built on coalitions.

Drew: So it's about persistence and scaling, which is probably why MacAskill places such a huge emphasis on movement-building. He's not just saying, "Do good," but also, "Join forces."

Josh: Exactly. Collaboration amplifies efficacy and prevents burnout. Take the effective altruism movement itself: by connecting researchers, activists, policymakers, and philanthropists, it has created a network that identifies neglected areas—like asteroid deflection research or geoengineering—and channels resources into those gaps.

Drew: Asteroid deflection programs? That sounds like straight-up science fiction!

Josh: It does, but it's reality. Movements like EA address plausible, underfunded challenges with strategies grounded in evidence, leveraging their collective focus to tackle less visible but critical risks.

Drew: So the crux of movement-building is mutual reinforcement. Resilience isn't just about surviving; it's about supporting the people who push forward in these demanding, long-term arenas.

Josh: Exactly. It's about fortified collaboration—think of historic examples like the abolition and suffrage movements, which required generational persistence. Collective action isn't the cherry on top; it's the scaffolding that holds up longtermist ambitions.

Drew: And all of this—whether personal commitments or collaborative networks—feeds into the ultimate vision MacAskill outlines, doesn't it? Creating a future worth living.

Josh: Yes, that's the overarching thread. Every action, every movement, every breakthrough connects back to the same goal: intentional progress toward a flourishing future, for generations yet unseen.
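The "thoughtful trade-offs" Josh describes ultimately come down to arithmetic: comparing options by expected good done per dollar. Here is a minimal sketch of that kind of back-of-the-envelope comparison. Every figure below (the salary, the donated share, the cost per bed net, the donation-equivalent of direct work) is a hypothetical placeholder for illustration, not a real cost-effectiveness estimate from the book or any charity evaluator:

```python
def nets_funded(donation_usd: float, cost_per_net_usd: float = 5.0) -> float:
    """Bed nets a donation could buy, at an assumed (hypothetical) unit cost."""
    return donation_usd / cost_per_net_usd


# Option A: earning to give. Keep a high-paying job and donate a share of salary.
salary = 200_000            # hypothetical annual salary
donated_share = 0.30        # hypothetical fraction donated
option_a = nets_funded(salary * donated_share)

# Option B: direct nonprofit work, modeled here (purely for illustration)
# as being worth a fixed yearly donation-equivalent.
direct_work_equivalent = 40_000
option_b = nets_funded(direct_work_equivalent)

print(f"earning to give: {option_a:,.0f} nets/year")
print(f"direct work:     {option_b:,.0f} nets/year")
```

The specific numbers are beside the point. What matters is that once the assumptions are explicit, the counterintuitive claim Drew raises, that a corporate salary plus donations might beat a nonprofit career, becomes something you can interrogate and revise rather than accept or reject on instinct.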

Conclusion

Part 5

Josh: Okay, so, to recap: today we really dug into the main ideas of “What We Owe the Future.” We started with longtermism, the moral idea that we should care about the well-being of future generations just as much as we care about our own.

Drew: Right. Then we jumped into those huge existential risks—climate change spiraling out of control, nuclear war, AI going haywire, engineered pandemics—all the stuff that could completely wipe out humanity's potential.

Josh: And then we talked about solutions: the real ways we can, both as individuals and as a society, face these risks and build a better, sustainable future. From choosing careers that make a difference to getting involved in activism and pushing for big, systemic changes, we saw that progress doesn't just happen on its own – it's something we have to intentionally create.

Drew: Which brings us to a key point from MacAskill's book: humanity is at a critical turning point. The decisions we make now could either set future generations up to really thrive or leave them with some seriously impossible problems. A bit like choosing the right path in a "choose your own adventure" novel, eh?

Josh: Exactly! And I think the big message here is that we actually have some power. Longtermism isn't just a way of thinking; it's a call to action, a real challenge to do something. Whether it's protecting important knowledge, pushing for systemic change, or just thinking about the impact of our choices, we all have a part to play.

Drew: So here's the million-dollar question for our listeners: what's one thing you're going to do today that could actually make a difference centuries from now? Because, like MacAskill says, the future isn't just something that happens to us—it's something we're actively building, right now.

Josh: And that's a responsibility that's both scary and incredibly exciting. So let's try and make it count, shall we?
