
Evolved: Why Your Brain Isn't a Robot
Podcast by The Mindful Minute with Autumn and Rachel
How Evolution Gave Us Free Will
Evolved: Why Your Brain Isn't a Robot
Part 1
Autumn: Hey everyone, welcome back! Today we're tackling some of the biggest questions out there: life, choice, and whether we're basically just fancy robots on a pre-programmed path.
Rachel: Yeah, you know, just another Tuesday asking myself, "Am I actually making decisions, or am I just a sophisticated Roomba bumping into furniture?"
Autumn: Right! But here's where it gets interesting. What if evolution itself is why you can even question all this? That's where Kevin J. Mitchell's book, Free Agents: How Evolution Gave Us Free Will, comes in.
Rachel: Mitchell walks us through how life evolved from simple, single-celled organisms – think amoebas chilling in a pond – to something truly amazing: agency. I like to call it "organized chaos with opposable thumbs."
Autumn: You're actually not far off! Mitchell basically argues that free will isn't some magical illusion or a byproduct of our brains. It's actually a direct result of evolution, shaped by randomness in our neural pathways, how we process sensory information, and a bit of higher-level thinking.
Rachel: So it's randomness and reasoning working together? Kind of like jazz, but with neurons instead of instruments?
Autumn: Exactly! A perfect way to put it. So today, we're going to unpack this whole journey step by step. First, we'll look at how the basic building blocks of life—simple chemistry—led to something we recognize as agency.
Rachel: Picture pond scum suddenly developing ambitions!
Autumn: Then, we'll break down what Mitchell calls the "dance" between randomness and reason. It's how your brain makes decisions, mixing that unpredictable neural activity with deliberate thought.
Rachel: Hmm... so is this dance a graceful waltz, or is my brain throwing a rave?
Autumn: Let's just call it... improvisational. And finally, we'll look at why all this matters. What are the implications for things like justice, artificial intelligence, and even just how we understand ourselves as humans?
Rachel: From evolution to ethics to AI... that's a huge leap for a concept that starts with, you know, bacteria.
Autumn: Precisely! But that's exactly what makes it so fascinating. So stick with us, because we're about to explore how evolution made us not just survivors, but free agents in charge of our own lives.
The Evolutionary Origins of Agency
Part 2
Autumn: Okay, let's dive in. So, agency starts way back, right at the very beginning of life on Earth. Think billions of years ago – volcanic landscapes, everything's chaotic, tons of chemical reactions bubbling away in hydrothermal vents. This is where Mitchell kicks off the whole story.
Rachel: Right, the primordial soup that eventually cooked up… well, everything. But Autumn, why aren’t we still just a bunch of aimless molecules bumping into each other? What exactly sparked agency out of all that mess?
Autumn: That's the million-dollar question! It's all about organization, Rachel. Amid all that molecular randomness, certain chemical systems started to self-organize. They managed to maintain themselves and even replicate – kind of like molecules with a mission, you know? And the real game-changer was the development of “membranes”. These lipid barriers created tiny compartments, trapping those self-sustaining systems inside.
Rachel: So, membranes are like the velvet ropes outside the hottest club of early life, keeping out all the riffraff?
Autumn: Exactly! They shielded those internal processes while still allowing for some selective exchange with the environment. That control meant these early cells weren’t just passive bystanders. They could actually act in specific ways to keep themselves going.
Rachel: Okay, but "acting in specific ways"? Sounds like a bit of a leap, doesn't it? I mean, we're still talking about basic chemical reactions, right? No little brains in these blobs.
Autumn: True, but even back then, survival wasn't just about luck. These primitive cells needed to respond to their surroundings – like moving towards energy sources or dodging harmful conditions. You could see it as the earliest form of goal-oriented behavior. They weren't exactly “thinking”, but they were “doing” – guided by mechanisms shaped by the pressure of evolution.
Rachel: So, these cells laid the groundwork for what we now call agency—but it's still pretty rudimentary, right? Just chemical survival instincts?
Autumn: Precisely, but that's what makes it so foundational! This shift from simply enduring the world to actively engaging with it is a massive leap. And from there, life could evolve progressively more sophisticated ways of making decisions.
Rachel: Which brings us to our next evolutionary milestone—multicellularity. Now we’re not just talking about lone-wolf cells, but teams of cells working together.
Autumn: Exactly! When single-celled organisms started clumping together, it unlocked a whole new world of potential for specialization. Suddenly, you had cells working as muscle, as nerve, as skin… allowing organisms to respond to their environment with more complexity and coordination.
Rachel: Alright, I get it. But here’s what I’m wondering—why would a single-celled organism suddenly want to join the collective? Why not just stick with its solo act?
Autumn: It's a question of trade-offs. As a single cell, there's a limit to what you can achieve on your own. Teaming up might mean giving up some autonomy, sure, but it also meant protection, shared resources, and better odds of survival. Plus, multicellularity opened the door for division of labor—a total game-changer in evolutionary terms.
Rachel: Okay, fair enough. But hold on a sec. Isn’t this also where the whole control thing gets tricky? Instead of a single cell calling the shots, you have a whole crowd cooperating—what happens when the cells disagree?
Autumn: That's a brilliant question, Rachel. And that’s where evolution had to step in with better ways to coordinate – which is precisely how and why nerves and neurons first appeared.
Rachel: Ah, neurons—the VIPs at the agency evolution party.
Autumn: You got it. Even simple multicellular organisms, like jellyfish, evolved nerve nets to coordinate their movements. These primitive nervous systems allowed them to sense their surroundings and respond in a purposeful way—hunting for food, for instance, or avoiding danger.
Rachel: Okay, so now we’ve got multicellularity and nervous systems enabling more deliberate actions. But at this stage, is there any conscious “decision-making” going on, or is it all still automated reflexes?
Autumn: Initially, it’s mostly reflex-based—that's spot on. But then evolution came up with something amazing: the ability to “interpret” sensory information. Instead of just reacting instinctively, organisms could process inputs and adapt their behavior based on context or past experiences.
Rachel: So, a jellyfish can decide, “Hmm, maybe I should avoid bumping into that thing again”?
Autumn: Essentially, yes! And that gradually paved the way for predictive modeling—the ability to anticipate outcomes and plan accordingly.
Rachel: And this is where things start getting cerebral—literally. This is when we start moving towards animals with central nervous systems that can handle more complex decision-making.
Autumn: Exactly! A great example Mitchell uses to show early predictive behavior is, believe it or not, “E. coli”—that humble bacterium that swims towards nutrients and away from toxins.
Rachel: Wait a minute, are you saying that “E. coli”, of all things, embodies agency?
Autumn: Surprisingly, yes. They use a mechanism called chemotaxis to “sense” chemical gradients in their environment. What’s fascinating is their ability to adjust their course based on past movement. If conditions improve, they keep going; if things get worse, they change direction. It’s rudimentary, yes, but it’s an example of behavior clearly guided by more than just random chance.
Rachel: So basically, “E. coli” is out there optimizing its life choices while I can’t even decide which brand of cereal to buy.
Autumn: It’s certainly humbling, isn’t it? But that’s the beauty of Mitchell’s argument: even the simplest life forms demonstrate a basic form of agency. And as evolution chugs along, these capabilities become more refined and deliberate.
Rachel: Okay, so we’ve outlined the origins—from chemical chaos to early cells, from single cells to multicellularity, and now primitive nervous systems. What's the next big hit on this evolutionary playlist?
Autumn: Next, we get into the major leagues—how organisms began not just to react to the environment, but actually to shape their actions with purposeful intent, even factoring future outcomes into their decisions. But let's take a quick break, and we'll dive into that in just a moment.
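The run-and-tumble strategy Autumn describes for E. coli (keep going while readings improve, reorient at random when they worsen) can be captured in a toy simulation. This is an illustrative sketch, not code from the book, and every name in it is invented for the example:

```python
import random

def chemotaxis_step(direction, conc, last_conc, rng):
    """Run-and-tumble rule: keep moving the same way while the nutrient
    reading improves; otherwise 'tumble' to a randomly chosen direction."""
    if conc > last_conc:
        return direction              # "run": conditions are improving
    return rng.choice([-1, 1])        # "tumble": reorient at random

def simulate(source=50.0, steps=500, seed=0):
    """1-D toy gradient climb: nutrient concentration peaks at `source`,
    and the cell remembers only its single previous reading."""
    rng = random.Random(seed)
    x, direction = 0.0, 1
    last = float("-inf")
    for _ in range(steps):
        conc = -abs(source - x)       # higher is better; maximum at source
        direction = chemotaxis_step(direction, conc, last, rng)
        last = conc
        x += direction
    return x
```

Even with no map of the gradient and a one-step memory, the walker ends up hovering near the source, which is the point of the example: behavior guided by more than pure chance, built from almost no machinery.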
Neural Mechanisms and Free Will
Part 3
Autumn: And this fundamental role of biological agency really sets us up to understand how higher-level thinking developed—it's not just about reacting anymore, but actually weighing different choices and making decisions that feel, well, intentional. That neatly leads us to the heart of our discussion today: the brain's mechanisms and this thing we call free will.
Rachel: Okay, so we're moving from bacteria with a purpose to the wonderfully complex human brain. Now, Autumn, I know we're about to get into the nitty-gritty brain stuff here, but could you lay it all out for those of us who didn't spend years studying neuroscience?
Autumn: Definitely. So, we're basically going from the biological hardware—things like the basal ganglia, how dopamine works, and even just random neural firing—to bigger ideas like unpredictability and how we make decisions. Mitchell's work really aims to show how these biological and philosophical elements connect.
Rachel: Let's start with the basal ganglia, then. Autumn, what exactly is this mysterious part, and why is it so important when it comes to making decisions?
Autumn: Well, you can think of the basal ganglia as the brain's action selector. It's a network of interconnected areas responsible for sorting through all the different options we have and deciding which action to prioritize. Imagine a buffet with tons of choices—each dish is a possible action—and the basal ganglia is the filter deciding what goes on your plate.
Rachel: Okay, but doesn't that just make the system a glorified traffic controller? It's filtering, sure, but does that really equal free will?
Autumn: Good point. It's not free will yet, but it's a crucial building block. The basal ganglia takes in information from our senses, our surroundings, and our own internal desires, and it weighs all of that to make the best choice. A hungry animal, for example, finds food but also stays alert to danger. The system balances competing needs—like staying alive versus eating—based on signals often controlled by dopamine.
Rachel: Ah, dopamine—the brain's reward chemical. So, in our buffet analogy, is dopamine like the friend who's hyping up a certain dish, saying, "You have to try this one!"?
Autumn: Exactly! Dopamine boosts the motivational value of certain choices. So, if an animal smells food but hears a predator, dopamine helps the basal ganglia prioritize the "avoid danger" signal over "get food"—unless, perhaps, the hunger is really intense.
Rachel: Okay, I understand how it works for survival. But what about when things go wrong, like with Parkinson's disease? Does that mean decision-making collapses entirely?
Autumn: Parkinson's really highlights just how critical dopamine and the basal ganglia are for action and choice. When the dopamine-producing neurons in this area start to die off, patients lose the ability to initiate actions and to smoothly choose between options. That's why movement becomes so hard—it's not just physical function, but also the brain's ability to prioritize and act.
Rachel: So, it's like an overworked bouncer at a club, letting no one in – or maybe letting everyone in at once! But here's the thing, Autumn: all of this still sounds like a machine. Could randomness—this so-called neural noise—be the ingredient that changes everything and makes free will possible?
Autumn: I'm glad you asked, because neural noise is where things get really interesting. Mitchell shows how this variability isn't just background static; it's actually built in to allow for creativity and flexibility in decision-making.
Rachel: How does that work, though? When I hear “neural noise,” I think of static or chaos, not some “genius plan to solve my problems.”
Autumn: Well, that randomness introduces unpredictability into our neural systems, which breaks us out of fixed patterns. Take cockroaches, for example. The way they dart around randomly when they're escaping a predator? That's not just random—it's influenced by neural noise that makes their movements less predictable, giving them a better chance of survival.
Rachel: So, the roaches' chaos is actually... strategic chaos?
Autumn: Exactly. And in humans, this dynamic goes even further. Little random changes in neural signals can help us break a decision-making stalemate. Say you can't decide whether to cook or order in. A bit of neural noise might amplify one option just enough to tip you in that direction.
Rachel: But isn't that just randomness forcing my hand? Where does the "thinking it through" part come in?
Autumn: That's where the two-stage decision model comes in: first, you generate possibilities, and then you evaluate and select based on context and goals. Think of noise as the initial brainstorming session, throwing out all kinds of ideas. The prefrontal cortex—your brain's executive control center—then narrows things down by considering your priorities, past experiences, and potential results.
Rachel: So, it's like a brainstorming session in my brain where crazy ideas are welcome, but then my prefrontal cortex steps in as the manager saying, "Okay, let's focus on what we can actually do."
Autumn: Exactly! Mitchell suggests that free will arises from this interplay of randomness and rational thought. Neural noise fuels creativity and flexibility, while structured, goal-oriented processing turns that into purposeful action.
Rachel: What about Libet's experiments, though? You know, the famous ones that show the brain seems to make decisions before we're consciously aware of them. Doesn't that imply that free will is just something we tell ourselves after the fact?
Autumn: Libet's studies are a really key piece of this puzzle, actually. His data showed that what he called a "readiness potential"—basically, brain activity that signals an upcoming action—starts before participants consciously decide to act. But Mitchell interprets this differently. He sees that subconscious activity as the first stage of the process—where potential actions are being generated.
Rachel: So, the readiness potential isn't overriding free will; it's just setting the stage for it?
Autumn: Exactly. Conscious awareness comes in as a higher-level evaluation—deciding whether to commit to or reject what's already brewing. So, rather than disproving free will, Libet's work highlights just how complex it is.
Rachel: Alright, Autumn, let's take a breath here. We've gone from the basal ganglia's action selection to the chaotic creativity of neural noise and wrapped it all up with decision-making models. If I had to sum it up, I'd say our choices are a team effort between randomness and reason.
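The two-stage model Autumn outlines (noisy generation of candidates, then deliberate evaluation against goals) can be sketched in a few lines of code. This is a minimal illustration of the idea, not Mitchell's own formalism, and every name in it is invented for the example:

```python
import random

def decide(options, goals, noise=0.5, seed=None):
    """Toy two-stage choice. Stage 1 jitters each option's baseline value
    with Gaussian 'neural noise', so a deadlock between equally attractive
    options can be broken. Stage 2 re-scores the noisy proposals against
    explicit goals (the deliberate, prefrontal-style evaluation) and
    commits to the winner."""
    rng = random.Random(seed)
    # Stage 1: noisy generation of candidate values.
    proposals = {name: value + rng.gauss(0.0, noise)
                 for name, value in options.items()}
    # Stage 2: goal-directed evaluation and selection.
    scored = {name: value + goals.get(name, 0.0)
              for name, value in proposals.items()}
    return max(scored, key=scored.get)

# A perfect stalemate that the noise alone can tip either way...
tie = decide({"cook": 1.0, "order in": 1.0}, goals={}, seed=7)
# ...and a case where deliberation dominates the noise:
prudent = decide({"cook": 1.0, "order in": 1.0},
                 goals={"cook": 5.0}, noise=0.2, seed=7)
```

The same noisy first stage that breaks ties is easily overridden once the second stage holds a strong enough preference, which mirrors the division of labor described above: noise proposes, deliberation disposes.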
Moral Responsibility and Future Frontiers
Part 4
Autumn: So, with this neural framework laid out, we naturally arrive at the philosophical and ethical questions around agency. Which brings us to the core of our topic today: moral responsibility and where agency is headed in the future.
Rachel: You mean where science bumps into, well, everything we argue about – justice, ethics, and whether Skynet really wants to end us.
Autumn: Exactly, Rachel. Mitchell doesn’t just explain agency through evolution and neuroscience; he gets into what’s really at stake: how our understanding of agency affects moral and legal responsibility, and how we deal with AI in a world where these systems are playing a bigger and bigger role.
Rachel: So, we’re going from neurons in the courtroom to algorithms in your inbox, huh? Okay, let’s start with the basics. What does Mitchell say about moral and legal responsibility?
Autumn: He argues that moral responsibility is closely linked to our evolved capacity for agency. In other words, the same processes that allow us to think and act—balancing randomness with rationality—also make us responsible for what we do.
Rachel: Right, but what if biology throws a wrench in the works? Like, what if someone’s brain is fundamentally impaired – does that erase their responsibility?
Autumn: That’s where it gets tricky. Mitchell talks about real cases, like the famous one where someone committed a serious crime, and a brain tumor affecting his orbitofrontal cortex was later discovered.
Rachel: The orbitofrontal cortex – remind me, that's where impulse control and decision-making happen, right?
Autumn: Exactly. The tumor interfered with his ability to control his actions, affecting his judgment. Neuroscientists testified, showing how this biological factor directly affected the defendant's behavior. The result? A lighter sentence, but not a complete pass.
Rachel: So, the biology is taken into account, but it doesn’t let them off the hook completely.
Autumn: Precisely. And that’s Mitchell's point. Understanding the brain doesn’t excuse responsibility; it gives us a better way to understand culpability. It’s about balancing biological constraints with the agency we do have.
Rachel: Okay, but how do we stop this from becoming an excuse-fest? Like, can't I just say, "Sorry about that bad decision – blame my dopamine levels today"?
Autumn: And that's the trap Mitchell warns against. He critiques the oversimplified “my brain made me do it” argument. Science can explain influences on behavior, but it doesn’t erase intent or reflection. For example, even when biology sets the stage, the interplay of influences still enables higher reasoning to guide actions—or, in some cases, to veto them.
Rachel: Okay, so science helps us fine-tune accountability rather than eliminate it. Let’s switch gears a bit—if we’re talking about accountability, things get even messier when AI enters the picture, don’t they?
Autumn: Oh, absolutely. And Mitchell dives into this growing concern. AI systems, especially machine learning, make decisions that have real-world consequences—loan approvals, medical diagnoses, even criminal sentencing. Yet these systems lack the kind of evolved agency we’ve been talking about.
Rachel: Which means no randomness, no deliberation, and, crucially, no moral reasoning. Just a bunch of statistical predictions powered by code.
Autumn: Exactly. Mitchell points to the facial recognition fiasco where AI falsely identified a Black man as a criminal suspect. This wasn't just a technical glitch; it showed how historical biases in the data led to systemic errors.
Rachel: Right, because AI doesn’t “know” what fairness is—it just amplifies patterns from its input.
Autumn: And Mitchell emphasizes that this lack of context and ethical judgment means AI could never truly replicate human agency. That said, he does argue for embedding better ethical frameworks into these systems to minimize harm and bias.
Rachel: Sure, but can you really "teach" an algorithm fairness? It feels like patching a leaky boat—you’re addressing symptoms, not the core problem.
Autumn: True, which is why Mitchell calls for a broader societal agreement on the values we want AI to reflect. Think of it less as “teaching” ethics and more as carefully designing systems that align with human priorities.
Rachel: Okay, but something’s been bugging me – are we trying too hard to make AI mimic human decision-making? Even if we crack general intelligence, how will machine agency ever compare to the messy, evolved thing we call free will?
Autumn: That’s where Mitchell draws a clear line. Machines follow pre-programmed rules or optimize tasks based on data – they don’t have intrinsic motivations, experiential learning, or the ability to assign meaning to their actions. Even if you introduce randomness or complexity, that doesn’t magically create free will.
Rachel: So, no matter how much randomness we code in, a robot won’t suddenly sit down and ponder the meaning of life?
Autumn: Exactly. Biological agency evolved over millions of years in dynamic environments, cultivating the neural flexibility and moral frameworks we rely on today. Machines, on the other hand, are bound by rigid architectures. It’s apples and oranges—one reflects the chaos of evolution; the other is human design.
Rachel: And yet we're building these systems to make life-and-death decisions, like autonomous cars facing ethical dilemmas. You can’t exactly preprogram a car for every possible scenario.
Autumn: Which is why those famous "trolley problem" thought experiments come into play. But even here, Mitchell cautions against overconfidence in AI’s ability to replicate human decision-making. Humans bring intuition, cultural norms, and a sense of moral reflection—AI systems have none of that.
Rachel: Right, so putting AI in charge of ethical decisions without a moral compass is… let’s just say, a bold move.
Autumn: Bold indeed. And Mitchell leaves us with a critical warning: as we dive deeper into technology, we can’t lose sight of what makes human agency unique. That understanding shapes not only how we govern ourselves but also how we design systems that won’t undermine justice, fairness, and accountability.
Rachel: Seems like the future of agency has a lot more questions than answers – whether it’s about brain tumors or self-driving cars swerving into sketchy moral territory. But hey, at least I know my neurons are holding some chaotic election every time I hesitate at the donut counter.
Autumn: And that's the beauty of human agency, Rachel—messy, complex, but ultimately capable of reflection, growth, and even choosing restraint at the donut counter. Sometimes.
Conclusion
Part 5
Autumn: Okay, let’s wrap things up. Today, we talked about how Kevin J. Mitchell connects the dots between the beginning of life's chemistry and how our brains make decisions. We really got into how randomness and thinking things through play a role, with random brain activity sparking creativity and the basal ganglia helping filter our choices. We also saw that this is the foundation of free will, allowing us not just to react but actually to think, plan, and make choices on purpose.
Rachel: And we didn’t stop at just science, did we? We jumped into the philosophical and ethical side of agency, like how brain science affects responsibility and whether AI can actually have agency. If you're now wondering whether AI can truly be in control, or if your brain is just throwing out random ideas until one works… well, you’re in good company.
Autumn: The key thing to remember is that agency isn't about having total control or being untouched by outside influences. It’s more about how randomness and reason work together—that’s what helps us change, learn, and make choices that matter.
Rachel: So, whether you're trying to decide what to eat or wrestling with a tricky moral question, remember: your agency is a bit of a mess, wonderfully complicated, and completely your own.
Autumn: And that’s something no machine can “really” copy. Thanks for joining us as we explored evolution, free will, and what makes us human. Let's keep questioning, keep exploring, and most importantly, keep choosing.
Rachel: Until next time, everyone—choose wisely. Or… chaotically. Either way, you’ve got agency.