
Elephants & The Algorithm

13 min

A History, A Philosophy, A Warning

Golden Hook & Introduction


Joe: The internet wasn't invented in the 20th century. It wasn't even invented by humans. The original internet was run by elephants, fungi, and maybe even lovesick snails. And understanding that is the key to seeing why our version is so broken.

Lewis: Whoa, hold on. Elephants? Snails? What are you talking about? Did you fall into a weird Wikipedia hole again? That sounds like the plot of a bizarre children's book, not a serious take on technology.

Joe: It sounds wild, but it's the central argument of the book we're diving into today: The Internet Is Not What You Think It Is: A History, a Philosophy, a Warning by Justin E. H. Smith. And it's written by this fascinating guy, a professor of the history and philosophy of science. He even has an asteroid named after him, which tells you he thinks on a completely different scale.

Lewis: An asteroid? Okay, that's a serious flex. I'm listening. So this philosopher with his own personal space rock is telling us that elephants had Wi-Fi before we did? You have to explain this.

The Ancient Echo: Why the Internet is Older Than You Think


Joe: Exactly. Smith's whole point is that we're obsessed with the novelty of the internet, but we're missing the big picture. Telecommunication—action at a distance—is everywhere in nature. He starts with elephants. They communicate over vast distances by stomping their feet, creating seismic vibrations that other herds can feel kilometers away. It's a network, a way of sending signals through a shared medium.

Lewis: That's incredible. So the ground is their fiber-optic cable. But is that really like the internet? It feels more like a metaphor.

Joe: It's both. Smith argues we need to see these as real, functioning networks. Take the "wood wide web." It's a real thing. Forests are connected by vast underground networks of mycorrhizal fungi. Trees use this network to send chemical signals to each other, warning of pests or sharing nutrients. A mother tree can send resources to its saplings through this fungal network. It's a complex, collaborative, information-sharing system.

Lewis: Okay, the tree thing is blowing my mind. That's a literal biological internet. So our digital version is just a clunky, man-made imitation of what nature has been doing for millions of years?

Joe: In a way, yes. And the human desire for it is just as old. Smith tells this hilarious story from 19th-century France about a con man named Jules Allix who claimed to have invented a "snail telegraph."

Lewis: A snail telegraph? You can't be serious. How would that even work?

Joe: Allix claimed that when two snails mate, they form a permanent, invisible magnetic bond. So, he'd put one snail on a device with letters of the alphabet in Paris, and its partner on a matching device in, say, America. When he zapped the Paris snail, its partner in America would supposedly feel the "escargotic commotion" and move to the same letter.

Lewis: Come on. That's completely absurd. Did anyone actually fall for that?

Joe: Oh, people loved it! He was a fraud, of course. But Smith's point is profound: even when he was lying, Allix was doing the important work of imagining future possibilities. The dream of instant, wireless communication existed long before the technology did. It shows this deep, ancient human impulse to collapse space and connect.

Lewis: That's a great way to put it. We had the software—the desire—long before we had the hardware. But the book also connects this to something much more industrial, right? Like weaving?

Joe: Exactly. This is where it gets really interesting. The history of the computer is directly tied to the history of the loom. In the early 1800s, Joseph Marie Jacquard invented a loom that could weave incredibly complex patterns, like flowers and leaves, automatically.

Lewis: How?

Joe: With punched cards. A series of cards with holes was fed into the loom, and the pattern of holes told the machine which threads to lift. It was a binary system: hole or no-hole. It was, in essence, a program. The loom was 'reading' information and translating it into a physical creation.

Lewis: Wow. So the first computer program wasn't for math, it was for making fancy silk fabrics.

Joe: Precisely. And this directly inspired Charles Babbage's Analytical Engine, the first design for a general-purpose computer. His collaborator, Ada Lovelace—often called the first programmer—saw the connection immediately. She famously wrote that the Analytical Engine "weaves algebraical patterns just as the Jacquard-loom weaves flowers and leaves." She understood that this machine wasn't just for crunching numbers; it could manipulate any symbol. It could compose music, create art. It was a machine for processing information, an idea born from a weaving machine.

Lewis: So the internet's DNA is part elephant, part mushroom, and part silk loom. That completely changes how I see the device in my pocket. But if the idea is so ancient and natural, where did our version go so horribly wrong? Why does my internet feel less like a collaborative forest and more like a toxic, anxiety-inducing casino?
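[Editor's aside: the punched-card mechanism Joe describes is concrete enough to sketch. This is a toy illustration only, with invented card patterns and thread indices; it is not from the book and not a real loom specification. It just shows the hole/no-hole idea: each card is a row of bits, and the "loom" lifts exactly the threads where a hole appears.]

```python
# Toy sketch of a Jacquard-style punched card (illustrative, not from
# the book): each card is a row of hole (1) / no-hole (0) positions,
# and the loom lifts exactly the warp threads where a hole appears.
cards = [
    [1, 0, 1, 0],  # card 1: lift threads 0 and 2
    [0, 1, 0, 1],  # card 2: lift threads 1 and 3
    [1, 1, 0, 0],  # card 3: lift threads 0 and 1
]

def weave(cards):
    """Read each card in sequence; return which threads it lifts."""
    rows = []
    for card in cards:
        lifted = [i for i, hole in enumerate(card) if hole]
        rows.append(lifted)
    return rows

print(weave(cards))
```

The card deck is, in essence, the program: change the holes and the same machine produces a different fabric, which is exactly the sense in which Lovelace saw a symbol-manipulator in a weaving machine.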

The Attention Casino: How We Became the Product and Life Became a Game


Joe: That's the tragic turn in the book. Smith argues that our internet has been hijacked by a single, destructive economic model: the attention-seeking industry. He uses that famous line, "if it's free, you are the product." But he takes it a step further. He says we're not just the product; we're "data-cows," constantly being milked for information about our lives, our fears, our desires.

Lewis: Data-cows. That's a grim but very accurate image. We're just grazing on content while these companies extract value from us.

Joe: And the way they keep us grazing is by turning everything into a game. Smith calls it "existential gamification." Social media isn't a town square; it's a video game. You're constantly trying to accumulate points—likes, followers, retweets. You develop strategies to get rewards, to level up your influence.

Lewis: I feel that in my bones. That little dopamine hit when you get a notification. It's a compulsion loop, just like a slot machine. You pull the lever by refreshing the feed, hoping for a little reward.

Joe: Exactly. And Smith points out this has bled into the real world in terrifying ways. He makes the case that the QAnon conspiracy theory is best understood as an "alternate reality game," or ARG.

Lewis: What do you mean, a game? People's lives were ruined by that.

Joe: He means it's structured like one. It wasn't a set of beliefs you just accepted. It was a game where players, or "bakers," had to "go down rabbit holes" to find clues—"crumbs"—spread across the internet. They were rewarded for finding connections, for piecing together a hidden narrative. It rewarded the act of searching and connecting, just like a video game, creating this powerful, self-reinforcing loop of engagement.

Lewis: That is a chillingly brilliant analysis. It's not about truth; it's about the thrill of the hunt. The game mechanics are the message. And this gamification is everywhere, isn't it? Even in how we talk about ourselves.

Joe: Absolutely. The book points to how podcasts and brands will send automated messages saying things like, "Make your Brand look and sound its best." It assumes you're a brand, not a person, or at best an individual presenting as a brand. We've internalized the logic of the system. We're all just data points trying to be attention-grabbing.

Lewis: It's like that old Nabokov novel, Pnin, where the professor says he can't distinguish between the advertisements and the real articles in the newspaper. That was seen as a quirky comment in the 1950s, but now it's just… reality. Everything is an ad for something, even if it's just an ad for our own personal brand.

Joe: And this constant pressure, this gamified reality, creates what Smith calls a "crisis of attention." But it's not that there are too many things to pay attention to. The crisis is that our attention is being channeled down these very narrow pathways that are designed to prevent personal transformation. They're designed for addiction, not growth.

Lewis: So the system is designed to keep us busy, but not to make us better. It's like running on a treadmill. Lots of motion, but you're not actually going anywhere. This all feels very intentional, very designed. Which brings up the machines themselves. What does the book say about the AI running this whole casino?

The Myth of the Thinking Machine: Artificial Stupidity vs. True Judgment


Joe: This is where Smith, as a philosopher, really lands his most powerful punch. He says we're asking the wrong question about AI. We're all terrified of a superintelligent AI becoming conscious and deciding to wipe us out, like Skynet in The Terminator.

Lewis: Right. That's the standard fear. The tech billionaires are all building bunkers because they're worried about the AI uprising.

Joe: But Smith argues the real danger isn't artificial intelligence. It's artificial stupidity. The problem isn't that the machines will become like us. The problem is that they are, and always will be, fundamentally unlike us. He brings up the 17th-century philosopher Leibniz and his famous "mill argument."

Lewis: Okay, break that down for me.

Joe: Leibniz asks you to imagine a machine that can think. Now, imagine you could enlarge that machine to the size of a mill and walk inside it. What would you see? You'd just see parts pushing one another—gears, levers. You would never find a single part that is "thinking" or "perceiving." You'd just find mechanical processes.

Lewis: So, you can't explain consciousness by looking at the physical parts.

Joe: Exactly. And that's the core difference. A computer reckons. It calculates. It follows a program. But a human mind judges. And judgment, Smith argues, requires something machines will never have: a committed, ethical engagement with the world. He uses a fantastic quote from the philosopher John Haugeland to describe what computers lack. They "don't give a damn."

Lewis: They don't give a damn. That's perfect. A computer doesn't care if its output is true, or good, or beautiful. It just cares if it followed the rules of its program. It has no skin in the game.

Joe: None. And that's the source of artificial stupidity. Smith gives a simple, brilliant example. Imagine you're trying to go through a subway turnstile with a suitcase. The motion sensor reads the suitcase as a second person and the gate won't open. It resists you. Is the turnstile trying to stop you? Is it being malicious?

Lewis: No, of course not. It's just a dumb machine following its programming. It can't tell the difference between a person and a suitcase.

Joe: Precisely. It's resisting you out of a kind of mechanical necessity. It's not a conscious rebellion; it's a failure of judgment. And Smith's warning is that as we hand over more of our world to these systems—systems that drive our cars, approve our loans, and run our social discourse—we are creating a world full of these "turnstiles" on a massive scale. The danger isn't a machine that thinks, "I will destroy the humans." The danger is a machine that, when faced with a complex ethical dilemma, simply follows its programming and, like the turnstile, "doesn't give a damn" about the consequences.

Lewis: So the real dystopia isn't a world run by evil robots, but a world run by profoundly indifferent, unthinking, and inflexible bureaucracies made of code. That's actually much scarier because it feels so much more plausible. It's already happening.

Joe: It is. The book's reception was a bit polarizing because it doesn't offer easy solutions. It's a warning, a philosophical diagnosis. It challenges us to stop thinking about the internet as this new, magical thing and to see it for what it is: an ancient human dream that we've implemented with dangerously stupid tools, all powered by an economic model that preys on our deepest psychological needs.
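[Editor's aside: the turnstile example can be made painfully literal. This is a toy sketch of my own, not code from the book: a gate that counts sensor "blobs" and applies one rigid rule, with no judgment about what a blob actually is.]

```python
# Toy sketch of "artificial stupidity" (illustrative, not from the
# book): the turnstile's whole worldview is a blob count and one rule.
def gate_opens(detected_blobs):
    """Rigid rule: exactly one detected blob may pass per fare."""
    return len(detected_blobs) == 1

# One person: the rule is satisfied and the gate opens.
print(gate_opens(["person"]))              # True
# A person plus a suitcase reads as two blobs: the gate "resists".
print(gate_opens(["person", "suitcase"]))  # False
```

Nothing in the function is malicious; it reckons (counts and compares) but cannot judge (ask whether the second blob is a fare-dodger or a suitcase), which is exactly the distinction Smith draws.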

Synthesis & Takeaways


Lewis: Wow. Okay. So, if the internet is this ancient echo we've twisted into an attention casino run by stupid machines, what's the big takeaway? Where do we go from here?

Joe: The book's warning is that we've taken a natural, ancient impulse for connection and outsourced it to a system that doesn't care about us. It's a system that calculates but can't judge, that connects but doesn't commune. The problem isn't the technology itself—the loom, the network. It's that we've forgotten the human part of the equation. We've let the logic of the machine, the logic of the market, overwrite the logic of human flourishing.

Lewis: We've built a world-spanning communication network that's fundamentally bad at the most important parts of communication: understanding, empathy, judgment.

Joe: Exactly. And the book leaves us with this very stark choice. We can either continue to be passive "data-cows" in this system, letting our attention and our social fabric be shredded for profit. Or we can start to consciously push back. We can try to reclaim our attention, to demand systems that are built for human well-being, not just for user retention.

Lewis: It's about remembering that we are not gadgets, as another tech critic, Jaron Lanier, famously said. We are not programs.

Joe: That's the heart of it. So the real question Smith leaves us with is: can we reshape this tool to serve human flourishing, or are we destined to be just another glitch in its program?

Lewis: That's a heavy thought. We'd love to know what you all think. Find us on our socials and tell us one way the internet feels more like a tool for you, and one way it feels more like a trap. We're genuinely curious to hear your experiences.

Joe: This is Aibrary, signing off.
