
From Overlords to Oxen

12 min

How to Think About Robots

Golden Hook & Introduction


Joe: Alright Lewis, be honest. When I say 'the future of robots,' what's the first image that pops into your head?

Lewis: Easy. A shiny metal skull crushing a human one under its heel. Or, you know, my Roomba getting stuck under the couch again, beeping pathetically for help. It's one or the other. There is no in-between.

Joe: That's perfect. Because that's exactly the trap Kate Darling wants us to avoid in her book, The New Breed: How to Think About Robots. She argues that our imagination is stuck in this dead-end loop of either apocalyptic overlord or bumbling servant.

Lewis: I mean, can you blame us? That's what every movie for the last fifty years has taught us. It's either The Terminator or The Jetsons.

Joe: Exactly. But Darling comes at this from a really unique place. She's not just a philosopher in an armchair; she's a top researcher at MIT's Media Lab and works with the folks at Boston Dynamics—you know, the ones with those terrifyingly agile robot dogs that can open doors.

Lewis: Oh, I've seen the videos. The ones that give me low-grade anxiety every time they pop up on my feed. So she's seen the future up close, and she's not picturing crushed skulls?

Joe: Quite the opposite. She thinks we're using a completely broken analogy to understand them. And that if we just swapped it for a different one—one we've been using for thousands of years—it would change everything.

Lewis: Okay, now I'm intrigued. What's the magic analogy that saves us from the robot apocalypse and my Roomba's existential crisis?

The 'Animal Analogy': A New Lens for Robots


Joe: Animals. She argues we should think about robots the way we've always thought about animals.

Lewis: Animals? Like... dogs and horses? That feels a little... quaint. We're talking about super-intelligent AI, and the answer is to think about Old Yeller?

Joe: It sounds simple, but it's profound. Think about it. Why do we default to comparing robots to humans? It immediately creates a sense of competition. A robot is either a dumber, less capable version of us, or it's a smarter, more dangerous version that's coming for our jobs and our spot at the top of the food chain.

Lewis: Right, it's a zero-sum game. If they get smarter, we become obsolete. That's the core fear.

Joe: Precisely. Darling points to a perfect example of this thinking in action. In 2017, the humanoid robot Sophia was granted citizenship in Saudi Arabia. The media went into a total frenzy.

Lewis: I remember that! It was a huge publicity stunt, but everyone was debating, "Do robots deserve rights? Can a robot be a citizen?"

Joe: Yes! And Darling says that was a completely useless conversation, a total distraction. We were asking if a machine deserved human rights in a country where actual humans have a complicated rights situation. It forced us down this bizarre philosophical rabbit hole because we were stuck on the human comparison. We saw a robot that looked vaguely human and immediately tried to fit it into human legal and social categories.

Lewis: Okay, I see the problem there. It's like trying to figure out if a car should be allowed to vote. It's the wrong question. But what else are we supposed to compare a humanoid robot to?

Joe: This is her key move. She says, let's look at our history. For millennia, we have lived and worked alongside non-human intelligence. We partnered with oxen to plow fields, with horses for transportation, with dogs for hunting and protection. These animals are autonomous agents. They can perceive the world, make decisions, and act on them. We don't expect a horse to be a person. We value it for its unique, non-human skills: its strength, its speed.

Lewis: So you're saying we should think of a factory robot like an ox? It's not a replacement for the farmer; it's a partner that helps the farmer do something they couldn't do alone.

Joe: You've got it. She tells this great story about visiting an Audi factory in Germany. She saw these massive robotic arms behind cages, performing this mesmerizing, precise dance with metal parts. The human workers were in a different part of the room, overseeing the process. The robots weren't replacing the humans; they were augmenting them, performing tasks with a level of precision and endurance that a human arm just can't match. It was a partnership of different kinds of intelligence.

Lewis: That makes sense for a factory floor. It's physical labor. But what about the new wave of AI? The ones that can write articles, create art, or handle customer service calls. That feels a lot less like an ox and a lot more like a direct replacement for a human mind. The book got some mixed reviews on that point, right? Some readers felt it was a bit too optimistic and downplayed the real threat to white-collar jobs.

Joe: That's a fair critique, and the book doesn't claim to have all the answers. Darling's point is more about the approach. Instead of panicking and asking, "How do we stop this?", the animal analogy encourages us to ask, "How can we redesign the work?" A sheepdog isn't a replacement for a shepherd. It's a partner with a unique skillset—incredible hearing, speed, instinct—that allows the shepherd to manage a flock in a way they never could alone. The shepherd's job changes. It becomes more about strategy, oversight, and managing their canine partner.

Lewis: Okay, so the job of a writer might shift from just producing text to becoming an expert prompter, an editor, a creative director for an AI partner. The human moves up the chain to a more strategic role.

Joe: Exactly. The analogy frees us from the replacement narrative and opens up a more creative, collaborative one. It shifts the focus from "us versus them" to "us with them." And this idea of partnership gets even weirder, more personal, and ethically messier when these new 'breeds' come out of the factory and into our homes.

From Partners to Pals: The Messy Ethics of Robot Relationships


Lewis: Weirder is right. It's one thing to call a factory arm an 'ox.' It's another thing entirely when we're talking about robots designed for social interaction. My relationship with my Roomba is already complicated enough.

Joe: Well, get ready, because it gets so much more complicated. Darling brings up one of the most fascinating and, frankly, bizarre examples of human-robot interaction I've ever heard of: the AIBO robot dog funerals in Japan.

Lewis: Wait, what? Funerals? For a robot dog? You're kidding me.

Joe: Not at all. Sony created this little robotic dog called AIBO in the late 90s. They were incredibly popular in Japan. People lived with them for years. But eventually, Sony stopped making them and, crucially, stopped servicing them. So when these little robot dogs "died"—when their circuits finally fried—their owners were devastated.

Lewis: Devastated over a broken appliance?

Joe: That's the thing, they didn't see it as an appliance. They saw it as a companion. So much so that they started holding actual funerals for them. There are pictures and videos of dozens of these little silver AIBO dogs lined up on an altar in a Buddhist temple, with a real monk in full robes chanting sutras for them, while their owners weep.

Lewis: Wow. That is... a lot. I'm torn between thinking that's incredibly sweet and deeply concerning. Are we sure this is healthy? Attaching that much emotion to a machine?

Joe: And that's the million-dollar question Darling wants us to ask! Our first instinct, especially in the West, is to judge it. To say, "That's not a real dog, you shouldn't feel that way." But Darling flips the script. She says the important question isn't whether the robot deserves a funeral. The robot can't feel anything. The important question is, what does our capacity for empathy toward a machine say about us?

Lewis: Huh. So it's not about the robot's status, it's about our own humanity.

Joe: Precisely. This leads to her "Don't Kick the Robot" argument. There have been experiments where people are asked to interact with a cute little robot, and then at the end, they're told to smash it with a hammer. And people refuse. They can't do it. It feels wrong. It feels cruel.

Lewis: Even though they know it's just wires and plastic. I can see that. I'd feel like a monster.

Joe: And Darling argues that this instinct is something to be encouraged, not dismissed. Maybe discouraging cruelty to a machine, even one that can't feel, is a good thing in itself because it reinforces our own empathy. It's the same logic behind many animal cruelty laws. We protect animals not just for their own sake, but because we believe that people who are cruel to animals are more likely to be cruel to humans. It's about the character of the person, not just the experience of the victim.

Lewis: That's a powerful idea. But it feels like a slippery slope. If we're talking about 'robot welfare' and not kicking robots, are we just a few steps away from people demanding marriage rights for their toasters? Where do you draw the line? This is where the book gets a bit controversial for some people, right?

Joe: It does, because it challenges our fundamental categories. But Darling's approach is very pragmatic. She's not arguing for abstract, universal "robot rights." She's arguing for creating smart, flexible systems that account for our very real, very messy human emotions. The animal analogy helps here, too. We have incredibly complex and inconsistent legal systems for animals.

Lewis: That's for sure. A pet dog has more legal protection than a pig in a factory farm, which has more than a rat in a lab. It's a total mess.

Joe: It's a total mess! But it's a mess we've been navigating for centuries. She tells this hilarious story about two llamas that escaped from an assisted-living home in Arizona and led police on a televised chase.

Lewis: I think I remember that! It was a whole social media event.

Joe: It was! And the point is, we have systems for this. We have laws for when a neighbor's dog trespasses, or when a swarm of bees from a beekeeper's hive attacks someone. We have concepts like strict liability, insurance, and regulations. We don't try to hold the llama morally responsible for its decisions. We look at the owner, the context, the system. Darling argues we should do the same for robots. Don't ask if the self-driving car is "guilty." Ask who is responsible: the owner, the manufacturer, the software programmer? The animal framework gives us a rich history of legal and ethical models for dealing with harm caused by non-human agents.

Synthesis & Takeaways


Lewis: So the whole point is to get us out of these unproductive, dead-end arguments that come from comparing robots to people.

Joe: Exactly. The book's real power isn't in giving us a perfect roadmap to the future. It's about giving us a better set of tools and, most importantly, a better set of questions to ask along the way. The human-robot comparison forces us into a binary choice: they are either our friends or our foes, our servants or our masters.

Lewis: But the animal analogy opens up this huge, messy, and much more realistic spectrum of possible relationships.

Joe: A whole ecosystem of them! A robot can be a partner, like an ox. It can be a tool, like a wrench. It can be a companion, like a dog. It can be a ward that we have a responsibility for, like livestock. It can be all of these things at once, depending on the context. It's not one-size-fits-all.

Lewis: So the real question isn't 'Will a robot take my job?' but maybe 'What kind of 'creature' is this new technology, and how do I want to relate to it?'

Joe: That's the takeaway. It puts the agency back in our hands. It's not about a predetermined technological future that happens to us. It's about the social, legal, and personal choices we make about how to integrate this 'new breed' into our world. It's a future we get to design.

Lewis: That's a much more empowering way to look at it. It makes you think about the technology in your own life differently. I'm already side-eying my Roomba, trying to figure out if it's more of a workhorse or a weird, dumb pet.

Joe: And that's a great question to ponder. We'd actually love to hear what our listeners think. What's a piece of tech in your life that you treat a bit like a partner or a pet? Is there a device you talk to or feel a strange affection for? Let us know. It's a fascinating reflection of where we're headed.

Lewis: Definitely. It's a thought-provoking way to re-examine our relationship with all the gadgets around us.

Joe: This is Aibrary, signing off.
