
Ethical Toolkit: Build Your Best Self

Podcast by The Mindful Minute with Autumn and Rachel

The Correct Answer to Every Moral Question

Introduction

Part 1

Autumn: Hey everyone, welcome! Today, we're jumping into the fascinating, and sometimes totally bewildering, world of moral philosophy. Our guide? None other than Michael Schur, the comedic mind behind "The Good Place". Yes, he of trolley problems and soul-mate points now brings us "How to Be Perfect".

Rachel: Philosophy, huh? That's the universe's way of saying, "Good luck trying to figure out right from wrong!" But somehow, Schur manages to take these ancient, mind-bending debates and connect them to... oh, I don't know... whether or not you should return your shopping cart. Seriously?

Autumn: Exactly! The book is full of relatable, real-life dilemmas that make moral philosophy feel like it's drawn straight from our daily lives instead of some stuffy academic lecture. And, even better, he gives us practical tools for moral growth: things like empathy, self-reflection, and a focus on progress, not some impossible ideal of perfection.

Rachel: Now, don't let Autumn make you think this is just feel-good fluff. Schur actually puts some pretty serious ethical frameworks to the test. We're talking virtue ethics, Aristotle's guide to building a good character; utilitarianism, where happiness is king; and deontology, or as I call it, "Kant's no-fun rule book for life."

Autumn: Right, and here's the interesting part. None of these frameworks is a one-size-fits-all solution. Each one offers a unique way to approach moral challenges, whether we're talking about global issues or those smaller, everyday dilemmas, like, "Is it okay to buy Chick-fil-A if I disagree with their views?"

Rachel: And that's exactly what we're diving into today, one moral framework at a time. First, what does it really mean to build a good character? Second, how do we balance individual happiness against the greater good? And finally, can rules and duty really hold up against the messiness of real life? By the end of our conversation, you might not be perfect, but you'll definitely think twice about that shopping cart...

Autumn: Exactly! It's philosophy packed with humor, a lot of heart, and maybe just a tiny sprinkle of existential dread. So, let's jump right in!

Virtue Ethics

Part 2

Autumn: Alright, let's jump into virtue ethics, the bedrock of moral philosophy that Michael Schur champions. We'll start with the philosophical underpinnings, explore the core concepts, and then see how we can actually use this stuff. Basically, virtue ethics, rooted in Aristotle, is all about character. It's the question: "What kind of person do I aspire to be?" So Rachel, ready to explore?

Rachel: "Explore" sounds less strenuous than "flex," so I'm in. The idea that ethics is about who you are, not just what rules you follow or what results you get, is fascinating. It's like getting to the root of the problem rather than cutting off branches. So, fill me in, Autumn: according to Aristotle, what makes someone a "good person"?

Autumn: Well, Aristotle would say a good person actively cultivates virtues, things like courage, honesty, and kindness. But here's the thing: these aren't just abstract concepts. They become real through habit. He famously compared moral growth to physical training. You don't just wake up brave one day; you become brave by consistently acting courageously. Essentially, it's like developing muscle memory for your moral compass: practice makes virtue.

Rachel: Habits, huh? So, ethics becomes less about grand gestures and more about the daily grind, right? Like, am I racking up moral points every time I return my shopping cart?

Autumn: Precisely! And Aristotle would totally be on board with your focus on the small stuff. He emphasized that moral excellence isn't something you're born with; it's a gradual development. Now, here's where it gets interesting: it's not just about mindless repetition. It's about doing the right thing, for the right reasons, finding what Aristotle called the "golden mean." Every virtue exists between two extremes, or vices.

Rachel: Golden mean... so, are virtues Goldilocks? You know, not too much, not too little, but juuuust right?

Autumn: Exactly! Think about courage. On one end, you have cowardice, a lack of courage. On the other, you have recklessness, too much courage without considering consequences. True courage sits in the middle: acknowledging fear but using reason to guide your actions. And that's brilliant because you're always striving for balance, not following rigid rules.

Rachel: Alright, but let's be honest. Finding that "just right" sounds harder than it looks. I mean, what if I mess up and end up being a total chicken while trying to be brave? Is there a philosophical "do-over" button?

Autumn: There is! Virtue ethics is all about progress. Michael Schur makes this point really well: Aristotle wasn't expecting perfection from the start. It's a journey, right? You look back at your actions, adjust your course, and learn. That's why things like self-reflection are so important.

Rachel: Right, but what does practice look like in real life? It's not like we're walking around asking, "Am I hitting the golden mean of courage today?"

Autumn: Think about this: Imagine there's an unfair policy at your job, and you have a chance to speak up to your boss. On one extreme, cowardice could be staying silent, even though you know it's wrong. On the other, recklessness is barging in, yelling, and quitting without thinking. Courage, in that situation, might be respectfully voicing your concerns, maybe even trying to rally others to take action together.

Rachel: Ah, I get it: it's balancing integrity with strategy, right? That balance point isn't something you find overnight, which ties back to Aristotle's point about habits. It's those consistent choices that make you courageous over the long haul.

Autumn: Exactly. And here's an important part of the equation: community. Aristotle believed we don't develop virtues in isolation. Communities act as moral support systems, offering role models, encouragement, and accountability.

Rachel: Hold on, so, like, some ancient Greek mentor program? I'm picturing people in togas patting each other on the back for being better humans.

Autumn: Something like that! Aristotle was convinced that a good community, whether it's family, friends, or even your workplace, helps grow moral character. Take mentorship: a mentor embodies virtues like patience and honesty, pushing their mentees to practice them too. Mentorship doesn't have to be formal. Think of characters like Chidi from The Good Place. He's a philosophy professor guiding people through their choices, but he's also improving his own virtues.

Rachel: Okay, but let's raise the stakes here. If communities shape our morality, does that mean my group chat is impacting my ethical growth? Because there's this one guy who refuses to tip because he thinks it's "a broken system." Does Aristotle have anything to say about that?

Autumn: He'd probably suggest finding some better role models. But more importantly, Aristotle would say that virtuous communities keep each other in check. That's where having a "virtue buddy" is helpful: someone who helps you stay on track and pushes you to strive for more. It's like having a gym partner for your moral workouts.

Rachel: I'm guessing that partner doesn't let you skip "ethical leg day." The appealing part is that trying to grow morally on your own can feel impossible. But having someone to support and challenge you changes the game.

Autumn: Precisely. And it ties back to what we were saying about the community aspect of virtue ethics: we're molded by the people we surround ourselves with and, in turn, we shape them. It's a very human approach to ethics: complex, reciprocal, and, at the end of the day, rooted in our connections.

Rachel: I have to admit, this is making more sense. Virtue ethics feels like a practical roadmap, less like abstract philosophy: one step at a time, messy but meaningful.

Autumn: That's what makes it so great, Rachel. Virtue ethics doesn't promise you perfection; it gives you a path to grow: a collaborative, evolving quest to become better, both for yourself and for your community. That's a pursuit worth fighting for.

Utilitarianism

Part 3

Autumn: So, Rachel, after laying down the foundation of moral character, we naturally progress to applying these virtues in everyday situations. This brings us to utilitarianism, or as I like to call it, "How much happiness can I buy with this ethically dubious dollar?"

Rachel: <chuckles> Perfect segue, Autumn. Utilitarianism... it sounds simple, right? Maximize happiness, minimize suffering for the most people. But trust me, the deeper you go, the more complicated it gets. So, what's the plan here? Let's start with the basics: What is utilitarianism? Then, we can wander through some of those famous thought experiments and more modern examples. Finally, we can pick apart some of the weaknesses, the limitations of the whole thing.

Autumn: Sounds great! So, let's dive in.

Rachel: Broad strokes here: Jeremy Bentham, right? Was he just telling everybody to, like, go out and live their best lives?

Autumn: Well, in a way, yes! Jeremy Bentham, along with John Stuart Mill, developed utilitarianism as a consequentialist framework. What that means is that it's outcome-based. With utilitarianism, you judge the morality of an action by its results: not by its intentions, not by whether it follows some moral code, but just by its actual impact. Bentham famously coined the "Greatest Happiness Principle," and he insisted that our main moral goal should always be to maximize happiness and minimize pain for the majority of people.

Rachel: Right, because nothing screams moral clarity like mathematically trying to quantify human suffering and joy. So, is this where the infamous Trolley Problem comes into play?

Autumn: Spot on. Philippa Foot's Trolley Problem is the quintessential utilitarian thought experiment. Picture this: A runaway trolley is speeding down the tracks, heading straight for five oblivious workers. You're standing next to a lever. If you pull it, the trolley switches to another track, but there's one worker on that track. Utilitarian ethics says you have to pull the lever, sacrificing one life to save five. Why? Less death, less suffering, net positive.

Rachel: The math sounds easy enough, but deliberately steering a train into someone doesn't exactly feel like a recipe for a moral victory to me. You're making an active choice to kill somebody, right?

Autumn: That's exactly what makes utilitarianism so tricky. It forces you to make these very stark, unsettling choices because it puts outcomes way above our moral intuitions. Yes, saving five lives is the logical choice, but you are also accepting an ethical cost: deliberately sacrificing one life. And for a lot of people, that tension creates a lot of discomfort.

Rachel: Well, it doesn't stop there, does it? There are, like, a million twisted versions of the Trolley Problem. What if the person on the tracks is your best friend? What if you have to shove someone onto the tracks instead of pulling a lever? Why is philosophy always testing how okay we are with hypothetical manslaughter?

Autumn: Because these variations reveal how our moral instincts clash with utilitarian logic. In the original Trolley Problem, pulling a lever feels emotionally distanced. But imagine now that you are standing next to somebody on a bridge, and you have to physically push them onto the tracks to stop the trolley and save those five workers. The outcome is the same: one life sacrificed, five lives saved. But suddenly, it feels far more personal, even horrifying.

Rachel: Exactly! And it's not just the grim mechanics of pushing someone. There's this gut-level reaction that something changes when you are so directly involved in the harm.

Autumn: Well, that reaction goes to the heart of what utilitarianism's critics say. Human morality isn't just about arithmetic. It's also shaped by empathy, by our emotions, and by that sense of personal responsibility. Really, philosophers would argue that experiments like these reveal the limits of utilitarianism. Sure, it gives us a clean answer, but that answer often conflicts with how we actually experience moral decisions.

Rachel: Okay, so let's bring this train wreck into daily life. Schur mentions another utilitarian dilemma: how we decide where to donate money. Say I've got a crisp hundred-dollar bill. Should I fund, let's say, lifesaving malaria treatments overseas, or should I support my buddy's struggling community art project?

Autumn: This is a perfect example of how utilitarianism plays out in real-world scenarios. On paper, the obvious utilitarian move is the malaria treatment. It saves more lives and prevents suffering on a scale that, honestly, a local art project could never achieve. And that's what Peter Singer, a leading advocate of effective altruism, champions. He argues that our moral obligation is to allocate resources where they have the biggest, measurable impact. Anything less, he says, is ethically indefensible.

Rachel: So, Singer's saying no coffee shops, no AirPods, no concert tickets until we've maxed out our global giving, huh?

Autumn: <Laughs> Well, more or less. He challenges us to really rethink our priorities, suggesting that every dollar we spend on luxuries could save lives if it were directed toward high-impact charities. It's a stark and demanding framework, but it is undeniably logical.

Rachel: Sure, but let's play devil's advocate for a moment. What about the value of... local art, for example? It might not save lives, but it inspires people. It creates community. Can you really put a dollar-for-dollar comparison on that? Or would the utilitarian just crunch the numbers and say, "Sorry, art lovers, malaria wins"?

Autumn: Well, that's the key tension, really. Critics of utilitarianism argue that it oversimplifies value. Yes, malaria treatments deliver quantifiable impact, but how on earth do you measure the joy, the cultural enrichment, or the connection that people derive from art? These are real goods that defy the calculus of utility. Schur points out that morality gets messy because humans are more than cold calculators. We care about what moves us, what fulfills us personally.

Rachel: Right, and if you strip away things like beauty or emotional depth, you're left with a kind of utilitarian dystopia: a world entirely obsessed with the math of saving lives, but completely sterile otherwise.

Autumn: Exactly. And this is where Schur reminds us that frameworks like utilitarianism are tools. They're not the whole picture. They give us clarity, but they're blind to a lot of what makes life meaningful. So, while it's valuable to embrace utilitarian principles, like reducing harm, we also have to leave room for complexity. Not everything fits neatly into a spreadsheet.

Rachel: Fair enough. But look, utilitarianism does offer a certain brutal honesty, doesn't it? It forces you to ask hard questions, whether you like the answers or not. Maybe we need that now and then: to reflect on the real impact of our choices, not just our intentions.

Autumn: Absolutely. At its heart, utilitarianism invites us to look past our immediate interests and consider the bigger effects of our actions. But even these very logical conclusions have limits, and as we'll explore further, philosopher Bernard Williams takes that to task with sharp critiques that introduce a whole new dimension of ethical complexity.

Deontology

Part 4

Autumn: Okay, so utilitarianism focuses on outcomes, which can feel a little... cold, right? That's where deontology comes in.

Rachel: Deontology... Autumn, are you about to explain why Kant's both an ethical genius "and" a total killjoy? Prepare me.

Autumn: Pretty much! So, deontology, mainly shaped by Immanuel Kant, basically flips utilitarianism on its head. It says, "Forget trying to maximize happiness! Focus on the rules." Kant believed morality is all about sticking to universal principles, what he called the Categorical Imperative. Think of it as rules so rational and universal that "everyone" could, and should, follow them.

Rachel: "Universal laws"... Sounds good in theory, I guess, but where does one even begin? What does this "Categorical Imperative" actually entail?

Autumn: Okay, so at its heart, the Categorical Imperative is this: "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." Basically, before you do anything, ask yourself: "What if everyone did this?" If the world would fall apart, then you're probably on ethically shaky ground. Take lying, for example. Kant is against it, period. If everyone lied, trust would break down, and communication would be impossible. So lying fails the "universal law" test.

Rachel: So, by that logic, lying is always wrong? Like, no exceptions? What about when your friend asks if you like their new baby's name, and it's like Mildred or something?

Autumn: Nope. Not even then! Kant's incredibly strict. Morality isn't situational, according to him. Even if a lie seems harmless, it still undermines the principle of universal honesty. It gets really contentious, though. Imagine someone's hiding in your house, and a murderer shows up asking if they're inside. Kant would say you still can't lie.

Rachel: Okay, hang on. So, in that scenario, the ethical move is... what? Just spill the beans and hope for the best? Seriously?

Autumn: Yeah, essentially. Kant believed you can't predict outcomes, and morality shouldn't hinge on them. What if you lied, but the murderer found the person anyway? You'd still be guilty of undermining honesty. For Kant, it's all about sticking to the moral law, no matter what.

Rachel: Right, I get it... sort of. But isn't that a bit... detached? If ethics is supposed to make the world a better place, how useful is a rulebook that doesn't consider human complexity?

Autumn: That's a major critique of Kantian ethics. It values consistency over emotional nuance, and life rarely fits into neat categories. Critics say Kant's framework can feel cold and impractical. But Kant would argue that morality shouldn't be based on emotions or biases. It's about being fair and consistent for everyone.

Rachel: Okay, fair enough. But let's get real-world for a second. Say I catch a coworker padding their expense reports. Deontology says I have to report them, right? Dishonesty, universal law, the whole thing?

Autumn: Exactly! You have a moral obligation to uphold honesty, even if it leads to awkwardness or your coworker getting fired. Integrity is non-negotiable. But that's where it gets thorny. Real life is complex, and Kant would say that these complexities can easily cloud judgment, which is exactly why we need strict rules.

Rachel: Okay, valid point. But what about conflicting duties? Say that same coworker confided in me about some personal stuff. Now I've got two obligations: honoring their trust versus exposing the truth. What does Kant say when duties clash like that?

Autumn: That's one of the hardest parts of deontology! Kant doesn't give us a clear answer when duties conflict. In your example, the duty of confidentiality might contradict the duty of honesty. Kantian ethics struggles in these gray areas because it's rule-based. Some philosophers have adapted deontology to address this, but it's a definite limitation.

Rachel: So, morality isn't just a checklist. Shocker, I know. Let's switch gears. If I wanted to apply these ideas "without" turning into a robot, where would I even start?

Autumn: There are some tools that can help! One is "maxim testing." You take the principle behind your action, your "maxim," and ask, "What if everyone did this?" Say you're tempted to break a promise. Would a world where everyone breaks promises function? Probably not, because promises would be meaningless. This helps you think about whether your actions could serve as a universal law.

Rachel: Alright, maxim testing. Got it. What else can I do?

Autumn: Another good one is role reversal. When facing a moral problem, try to see it from the other person's perspective. Would you still think the decision was fair and consistent with universal principles? It can give you a new understanding of the situation. And finally, think about your motives. Kant cares about intention over outcomes. So, ask yourself whether you're acting out of duty or, say, convenience.

Rachel: Okay, I see how these methods help you calibrate your ethical compass. But the cynic in me still wonders... doesn't all this overlook the fact that life is, well, messy?

Autumn: You're right! Even deontology's biggest fans admit it has limits. Its strict nature can't "really" handle emotional complexity. Take privacy versus public safety in the age of tech, a modern ethical problem. Deontology says privacy is important, but critics say it doesn't offer the flexibility to weigh competing values.

Rachel: And that can make it feel disconnected, right? Like, we're always making trade-offs in practice, trying to balance principles and outcomes.

Autumn: Exactly! That's why frameworks like deontology work best when you combine them with others, like utilitarianism. Kant gives us a foundation: a moral baseline rooted in fairness, respect for rationality, and duty.

Conclusion

Part 5

Autumn: Okay, so to recap, we've looked at the three big ethical frameworks from How to Be Perfect: virtue ethics, which is all about character; utilitarianism, about maximizing happiness; and deontology, focusing on moral principles. Each has its strengths. Virtue ethics helps us grow morally, utilitarianism makes us think about the bigger picture, and deontology keeps us grounded in fairness and duty. But, you know, none of them is a perfect, one-size-fits-all solution.

Rachel: Which, I think, is Schur's main point, isn't it? Being a good person isn't about rigidly sticking to one set of rules or passing some ethics exam. It's about asking the right questions, staying curious, and being open to growth, even when we make mistakes.

Autumn: Precisely, Rachel. It's about progress, not perfection. See these frameworks as tools, not rigid rules. Use virtue ethics to build good habits and a strong character, utilitarianism to consider the wider impact of your choices, and deontology to stay grounded in principles. Together, they make for a well-rounded, thoughtful way to handle life's ethical dilemmas.

Rachel: And Schur reminds us that even small, conscious choices, like putting the shopping cart back, help us become better people. So, the elevator pitch for ethics might be something like: try a bit harder, mess up a bit less, and keep asking yourself, "What kind of person do I aspire to be?"

Autumn: Exactly, Rachel! Beautifully put. And on that note, we'll leave you to consider your next ethical challenge. Because whether it involves finding the golden mean, facing a trolley problem, or upholding a universal law, striving to be better "really" does make all the difference.
