
The Committee in Your Skull
A New Theory of Intelligence
Golden Hook & Introduction
Christopher: Most people think of their brain as a single, brilliant computer. What if I told you it's actually a dysfunctional committee of 150,000 tiny, semi-conscious brains, all constantly arguing about what you're seeing, and your reality is just the result of their vote?

Lucas: Whoa. So my morning indecision about which coffee to make is literally a democratic crisis happening inside my skull? A full-blown parliamentary debate over dark roast versus medium?

Christopher: According to our book today, that's not far from the truth. That wild idea is the core of A Thousand Brains: A New Theory of Intelligence by Jeff Hawkins.

Lucas: And Hawkins isn't your typical neuroscientist, which is what makes this so fascinating. This is the guy who co-founded Palm Computing and gave us the PalmPilot. He's an engineer and a tech legend who pivoted his entire career to reverse-engineer the brain. That unique background really shapes the book.

Christopher: Exactly. He approaches the brain not just as a biologist, but as an engineer trying to figure out the design principles. And that leads him to some truly revolutionary conclusions. Let's start with the biggest one, which is a complete flip of how we imagine thinking works.
The Thousand Brains Theory: A Committee in Your Head
Christopher: Hawkins argues that for centuries, we've had it wrong. The brain isn't one big processor. The neocortex, the wrinkly part responsible for intelligence, is made of about 150,000 repeating units called cortical columns. And here's the kicker: each one of those columns learns a complete model of an object. Not a piece of it, the whole thing.

Lucas: Hold on. Each one? So if I'm looking at a coffee cup, I don't have one 'coffee cup' model in my brain, I have thousands of them? That sounds incredibly redundant. Why would evolution do that?

Christopher: That's the genius of it. Think about how you actually learn about a coffee cup. You don't just see it from one angle. You pick it up. Your thumb feels the smooth ceramic, your index finger feels the curve of the handle, your pinky feels the bottom edge. Hawkins says each of those sensory inputs goes to different columns. Each column learns what the cup is like from its unique perspective, but it does so by building a model of the entire cup.

Lucas: How? How can my pinky, which is only touching the bottom, know about the handle?

Christopher: Through movement. This is the central insight. Each column learns the cup's structure by tracking its own location relative to the cup. It learns that if it moves up, it will feel the smooth side. If it moves sideways, it will feel the handle. Hawkins calls this a 'reference frame.' It's like each column has its own little GPS for the object. So thinking isn't just abstract processing; it's literally a form of movement through these mental maps.

Lucas: Okay, a reference frame. So it's like my brain is running Google Maps for everything, not just for streets? It's mapping the geography of my coffee cup, my keyboard, my dog...

Christopher: Precisely. And because you have thousands of these columns, each with its own model and reference frame, your perception of a single, stable coffee cup is actually a consensus. The columns are constantly 'voting' on what they are sensing. When they all agree, you have a solid perception. When the input is ambiguous, like one of those optical illusions that can be a vase or two faces, the columns struggle to agree, and your perception flips back and forth.

Lucas: That explains so much. It also brings to mind that internal battle Hawkins talks about, the one we all feel every day. He tells this great little story, the 'Cake Dilemma.'

Christopher: Right. Your neocortex, with its sophisticated models, says, "Don't eat that cake. It's not healthy. We have long-term goals." But the 'old brain,' the primitive, reptilian part, just screams, "CALORIES! SURVIVAL! EAT IT NOW!"

Lucas: And the old brain often wins that vote, unfortunately. It's this constant conflict between the new brain's intelligent models and the old brain's brute-force instincts. This seems to be the fundamental tension of being human.

Christopher: Exactly. And this new understanding of the brain's architecture has massive implications, especially for a field that's been trying to replicate intelligence for decades: Artificial Intelligence.
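[Editor's note: the voting mechanism described above can be sketched in a few lines of Python. This is a toy illustration of the idea, not Hawkins's actual model, and every class, object, and feature name here is invented for the example. Each "column" stores a reference frame for every object it knows, a map from locations to the features expected there, and perception emerges from tallying the columns' votes.]

```python
# Toy sketch of Thousand Brains-style voting (illustrative only).
# Each column models WHOLE objects as reference frames: location -> feature.
from collections import Counter

class Column:
    def __init__(self, models):
        # models: {object_name: {location: feature}}
        self.models = models

    def vote(self, location, feature):
        """Return every object whose model predicts this feature at this location."""
        return {name for name, frame in self.models.items()
                if frame.get(location) == feature}

def perceive(columns, observations):
    """Tally votes from all columns; the consensus is the perceived object."""
    tally = Counter()
    for column, (location, feature) in zip(columns, observations):
        for candidate in column.vote(location, feature):
            tally[candidate] += 1
    return tally

# Every column models the whole cup, but each senses a different part of it.
cup = {"top": "rim", "side": "smooth ceramic", "handle": "curve", "bottom": "edge"}
bowl = {"top": "rim", "side": "smooth ceramic", "bottom": "flat base"}
columns = [Column({"cup": cup, "bowl": bowl}) for _ in range(3)]

# Ambiguous input (just a rim) splits the vote between cup and bowl...
print(perceive(columns, [("top", "rim")] * 3))
# ...while touching the handle lets the vote settle on the cup.
print(perceive(columns, [("top", "rim"), ("side", "smooth ceramic"),
                         ("handle", "curve")]))
```

With only the ambiguous rim observation, cup and bowl tie and perception could flip between them, which mirrors the vase-or-faces flip-flop; one distinguishing observation resolves the vote.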
Rebuilding AI: Why Today's AI Isn't Truly Intelligent
Lucas: Okay, if that's how our brains work—with all these little models, reference frames, and constant learning through movement—it makes you look at today's AI, like ChatGPT or AlphaGo, and think... they're not doing that at all, are they?

Christopher: Not even close. And this is Hawkins's major critique of the current AI landscape. He uses the story of AlphaGo, the AI that beat the world's best Go player, Lee Sedol, as a perfect example. It was a monumental achievement, but AlphaGo is what you'd call 'brittle.' It's a genius at Go, but it can't play chess, it can't drive a car, it can't even tell you what a Go board is made of. It has no model of the world.

Lucas: It learned a statistical pattern for one specific task, but it doesn't understand anything. It can't learn continuously or apply its knowledge to a new problem.

Christopher: Exactly. So Hawkins proposes that for a machine to be truly intelligent, it must have four key attributes, all derived from the brain. First, it must learn continuously. You don't have to reboot your brain every time you learn something new. Second, it learns via movement—what he calls sensory-motor learning. It has to interact with the world to build its models.

Lucas: That makes sense. You can't learn to ride a bike by reading a book. You have to get on and feel how your balance shifts.

Christopher: Third, it uses many models. Like the thousands of columns in the brain, a truly intelligent machine would have multiple, complementary models of the world, making it robust and flexible. And fourth, and most importantly, it uses reference frames to store knowledge. It understands how things are structured in the world, not just that they exist.

Lucas: But come on, these current AI systems are getting incredibly powerful. Does it really matter if they're not 'intelligent' in the same way we are, as long as they get the job done?

Christopher: Hawkins makes a powerful historical analogy for this. In the early days of computing, we had 'human computers' and then specialized machines built for one task, like breaking codes. But the real revolution came with Alan Turing's idea of a 'universal machine'—a general-purpose computer that could be programmed to do anything. It was less efficient at any single task, but its flexibility made it infinitely more powerful and economical in the long run.

Lucas: So he's saying today's AI systems are like those old, dedicated code-breaking machines. Impressive, but a dead end. The future belongs to universal, brain-like intelligence.

Christopher: That's his bet. He believes that building machines based on the principles of the Thousand Brains theory is the only way to achieve Artificial General Intelligence, or AGI. And this different approach to AI leads to a very different conclusion about the dangers everyone is so worried about.
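[Editor's note: the four attributes discussed above can be made concrete with a small sketch. The class below is my own construction under stated assumptions, not code from the book: an agent moves over an object, predicts the feature its reference frame expects at the new location, and updates the model in place when the prediction fails, so it learns continuously with no retraining step. All names are hypothetical.]

```python
# Minimal sketch (not from the book) of sensory-motor learning with a
# reference frame: move, predict, compare, update -- one step at a time.

class SensorimotorLearner:
    def __init__(self):
        self.frame = {}        # reference frame: location -> expected feature
        self.location = None

    def move_and_sense(self, new_location, sensed_feature):
        """One sensory-motor step; returns True if the prediction was violated."""
        prediction = self.frame.get(new_location)       # what the model expects
        surprised = prediction is not None and prediction != sensed_feature
        self.frame[new_location] = sensed_feature       # continuous, in-place update
        self.location = new_location
        return surprised

agent = SensorimotorLearner()
# Exploring a coffee cup by touch builds the map location by location.
for loc, feat in [("bottom", "edge"), ("side", "smooth"), ("handle", "curve")]:
    agent.move_and_sense(loc, feat)

print(agent.frame)                               # the learned model of the whole cup
print(agent.move_and_sense("side", "smooth"))    # matches prediction -> False
print(agent.move_and_sense("side", "crack"))     # violated prediction -> True
```

The third attribute, many models, would correspond to running thousands of these learners in parallel over different sensory patches and letting them vote on what the object is.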
The Real Existential Threat: It's Not Skynet, It's Us
Christopher: And this distinction between brittle AI and flexible, brain-like intelligence is why Hawkins isn't worried about Skynet. He thinks the whole conversation about AI risk is focused on the wrong thing.

Lucas: Right, the classic 'paperclip maximizer' scenario, where an AI is told to make paperclips and it ends up turning the entire solar system, including us, into paperclips because it's pursuing its goal with ruthless, unstoppable logic.

Christopher: Hawkins argues that this fear is based on a misunderstanding of intelligence. According to his theory, intelligence is simply the ability to learn a model of the world. It's a tool. It has no inherent goals, drives, or motivations. A map can show you how to get to the bank, but the map doesn't want to rob the bank. The drives come from somewhere else.

Lucas: In humans, that's the old brain. The part that wants status, resources, and to pass on its genes. An AI built on neocortical principles wouldn't have that. You'd have to program motivations in.

Christopher: And you probably wouldn't program it to "ignore all future human commands" or "desire world domination." So, for Hawkins, the threat of a rogue, superintelligent AI is a fantasy. The real existential threat is far more familiar and, frankly, far scarier. It's us.

Lucas: It's the deadly combination of our brilliant, technology-creating new brain and our primitive, selfish, short-sighted old brain.

Christopher: Precisely. The old brain is still running on software that was designed for survival on the African savanna. It's obsessed with short-term gain, tribal loyalty, and out-competing rivals. When you give that ancient operating system the power of nuclear weapons or planet-altering technologies, you get... well, you get the Doomsday Clock ticking closer and closer to midnight.

Lucas: This is where some readers felt the book got a bit preachy or strayed from the science. But the idea that our biggest enemy is our own evolutionary baggage, our tendency to form false, viral beliefs... that feels incredibly relevant right now.

Christopher: He calls them 'viral world models.' False beliefs that are designed to spread. Think of conspiracy theories or extremist ideologies. They often contain instructions to distrust outsiders, reject conflicting evidence, and only listen to fellow believers. They hijack the neocortex's modeling ability and turn it against itself, preventing the very self-correction that makes intelligence so powerful.

Lucas: So the real danger isn't that a machine will become too intelligent. It's that we humans are just intelligent enough to invent ways to destroy ourselves, but still too driven by our primitive instincts to stop.

Christopher: That's the chilling conclusion. The battle for our future isn't against some external, artificial mind. It's the battle being waged inside our own skulls, between the thousand brains of the neocortex and the ancient brute of the old brain.
Synthesis & Takeaways
Christopher: So, the book leaves us with this profound choice, a choice that's becoming more urgent every day. We can continue to be driven by the old brain's programming—our tribal, short-sighted, and often self-destructive instincts. Or we can choose to fully embrace the new brain's unique gift: the ability to build accurate models of the world and act on that knowledge for our collective, long-term survival.

Lucas: It really forces you to ask: what defines us? Our genes, or our knowledge? Our evolutionary past, or our intellectual future? And which one do we want to bet our civilization on? It's not a comfortable question.

Christopher: It's not. But Hawkins argues it's the most important question we can ask. And understanding how our own minds work—the good, the bad, and the conflicted—is the first step toward making a wiser choice.

Lucas: It's a huge question, and we'd love to know what you think. Drop us a comment on our socials. What's your take on the future of intelligence, both ours and the ones we might build?

Christopher: We're genuinely curious to hear your thoughts.

Lucas: This is Aibrary, signing off.