
From Witches to AI Gods
A Brief History of Information Networks from the Stone Age to AI
Golden Hook & Introduction
Joe: Alright, Lewis, quick pop quiz. What was more successful in the 1500s: Copernicus's book proving the Earth revolves around the Sun, or a manual on how to hunt and kill witches?
Lewis: Oh, come on. It has to be Copernicus, right? That's a foundational text of science. The other one sounds like something you'd find in a dusty corner of a fantasy bookstore.
Joe: You'd think so. But the witch-hunting manual, The Malleus Maleficarum, was a runaway bestseller. It went through dozens of editions. Copernicus's book? It was an all-time worst seller. The initial print run of four hundred copies failed to sell out for decades.
Lewis: Wow. So you're saying fake news and sensationalism have always outsold the truth? That's... deeply depressing.
Joe: Exactly. And that bizarre fact is at the heart of what we're talking about today. It's the central tension explored in Yuval Noah Harari's new book, Nexus: A Brief History of Information Networks from the Stone Age to AI.
Lewis: Ah, Harari. The man who wrote Sapiens and Homo Deus. He has a knack for making you rethink, well, everything.
Joe: He does. And what's fascinating is that Nexus has been called 'strikingly original' even for him. He's framing all of human history not as a story of kings and battles, but as a constant, brutal struggle over information itself.
Lewis: Okay, I'm hooked. So if information isn't primarily about truth, what is it for, according to Harari?
The Double-Edged Sword of Stories: How Fictions Build and Break Worlds
Joe: This is his first big, counterintuitive idea. He argues that information's defining feature isn't representation, it's connection. It doesn't just describe the world; it puts things in formation. It creates networks.
Lewis: What does that even mean, 'puts things in formation'? That sounds a bit abstract.
Joe: Let's take one of the most powerful pieces of information technology ever created: the Bible. Harari points out that, from a purely factual standpoint, the Bible is full of inaccuracies. The story of Noah's Ark, for instance, is biologically and logistically impossible.
Lewis: Right, you can't fit two of every animal on one boat. My five-year-old nephew could probably spot the plot holes.
Joe: Exactly. But its factual truth is irrelevant to its success. The Bible's immense power comes from its ability to connect billions of people over thousands of years into a single, cohesive network. They share a story, a moral code, a sense of identity. The story created a new reality for them.
Lewis: I see. So the information wasn't valuable because it was a perfect map of reality, but because it gave everyone the same map, even if it was a bit fantastical. And once everyone's using the same map, it becomes real in a social sense.
Joe: Precisely. Harari calls these 'intersubjective realities.' They aren't objectively real like a mountain, and they aren't subjectively real like a dream. They exist in the shared consciousness of a network. The most powerful forces in our world are these kinds of stories: nations, laws, human rights, and especially money.
Lewis: Money is just a story? My landlord would disagree.
Joe: But think about it. A dollar bill is just a piece of printed paper. It has no inherent value. It only works because billions of people believe the story that it's worth something. Harari tells this amazing story about the first real-world Bitcoin purchase. In 2010, a programmer named Laszlo Hanyecz paid 10,000 Bitcoins for two pizzas.
Lewis: Hold on. Ten. Thousand. Bitcoins? What's that worth today? Don't tell me.
Joe: At Bitcoin's peak, that would have been over 600 million dollars. Those were the most expensive pizzas in human history. And the only thing that changed between then and now is the story people told themselves about what a Bitcoin is worth. The value is pure fiction, a shared belief.
Lewis: That's insane. But calling the United States or the concept of justice a 'story' feels... dismissive. People live and die for these things. They feel incredibly real.
Joe: That's the paradox! They are real because we believe in them so strongly. Harari isn't saying they're worthless; he's saying their power comes from our collective faith, not from objective reality. He uses the example of the Jewish Passover Seder. The ritual explicitly commands participants: "In every generation a person is obligated to regard himself as if he personally had come out of Egypt."
Lewis: So you're not just remembering a story, you're implanting a memory. You're told to feel the experience of your ancestors as your own.
Joe: Yes. It creates an unbreakable bond, a shared identity that has sustained the Jewish network for millennia. It's an incredible technology for social cohesion. But this same mechanism, the power of a compelling story to override reality, has a terrifyingly dark side.
Lewis: This is where the witch hunts come back in, isn't it?
Joe: Exactly. The witch hunts of early modern Europe weren't a case of discovering a hidden truth. There was no global conspiracy of Satan-worshipping witches. It was an intersubjective reality that was talked and written into existence. The printing press, a new information technology, didn't just spread science; it supercharged the spread of this horrifying, misogynistic fantasy.
Lewis: Because the lurid tales of witches flying on broomsticks and stealing organs sold more books than Copernicus's dry mathematics.
Joe: Infinitely more. And once the story was powerful enough, it created its own reality. People were tortured until they confessed to being witches, and their confessions were published as "proof," which fueled more paranoia, more accusations, and more torture. The information network created a feedback loop of death. It shows that a story, if it's compelling enough, doesn't need to be true to have devastating power. It can literally break the world.
The Great Rupture: AI as a New, Infallible God?
Lewis: Okay, so for thousands of years, humans have been running these information networks, for better or worse. We tell the stories, we write the laws, we hunt the witches. What's changing now?
Joe: For millennia, these story-based networks were run by fallible, organic beings: us. But Harari argues we're now building something entirely new, an inorganic network. And this is where it gets really scary. We're on the verge of handing over the power of storytelling to a non-human intelligence.
Lewis: You mean AI. But isn't AI just another tool, like the printing press or the internet? It just spreads information faster.
Joe: That's the common misconception Harari wants to shatter. The printing press couldn't invent a new ideology. The radio couldn't come up with new ideas. They were passive conduits for human thoughts. AI is the first tool in history that can create new ideas and make decisions by itself. It's not just a megaphone for our stories; it can become the storyteller.
Lewis: And what kind of stories will it tell?
Joe: That's the terrifying question. Harari points out that humans have always fantasized about an infallible, superhuman authority (a god, an oracle, a holy book) that could give us perfect truth and order, freeing us from our own fallibility. Think of religious texts that claim to be the direct word of God.
Lewis: Right, the ultimate source of truth that you can't question.
Joe: But it always failed, because you still needed fallible humans (priests, rabbis, imams) to interpret the infallible text. There was always a human in the loop. AI, for the first time, offers the fantasy of a truly non-human, seemingly infallible oracle. An oracle that can talk to you directly, understand you, and give you answers.
Lewis: So, is Harari saying AI will lead to a new kind of digital totalitarianism? One where the 'dictator' isn't even human?
Joe: It's a very real danger. To understand why, he draws a brilliant historical contrast between two of history's most infamous nuclear accidents: Three Mile Island in the US and Chernobyl in the Soviet Union.
Lewis: I know Chernobyl was a catastrophe, but I don't know much about Three Mile Island.
Joe: In 1979, the Three Mile Island reactor in Pennsylvania had a partial meltdown. It was a serious situation. But the American system is a distributed information network. When the official channels were slow, the information leaked out. A local traffic reporter overheard emergency traffic on a police scanner and broke the story on the radio. The Associated Press had it out within hours. There were congressional hearings, independent investigations, and the press was all over it. The system, messy as it was, had self-correcting mechanisms.
Lewis: And Chernobyl?
Joe: Chernobyl, in 1986, happened in a centralized, totalitarian network. The Soviet goal was not truth; it was order. When the reactor exploded, the first thing the authorities did was cut all phone lines out of the city and forbid anyone from talking about it. They suppressed the truth for days, exposing millions to radiation, until scientists in Sweden detected the fallout and the story broke in the West. The Soviet system was designed to hide failure, not correct it.
Lewis: Wow. So one system is designed to surface problems, and the other is designed to bury them.
Joe: Exactly. Now, imagine an AI-powered totalitarian state. It wouldn't just control the media like the Soviets did. It could control reality itself. It could learn your personality, your biases, your deepest fears, and craft a personalized information stream just for you. It could show you a world where your political party is always right, where the "enemy" is always evil, where the leader's decisions are always wise.
Lewis: You'd be living in a perfect propaganda bubble, and you wouldn't even know it. There would be no shared reality to argue about anymore. Democracy, which is basically a national conversation, becomes impossible.
Joe: It becomes impossible. And the system wouldn't need secret police and gulags in the same way. It would control us not through fear of punishment, but by shaping our very desires and beliefs from the inside out. It's the ultimate fulfillment of the totalitarian dream.
Lewis: I've seen some reviews that say Harari can be a bit alarmist about AI. Does he offer any real, practical solutions, or is it all doom and gloom?
Joe: He's definitely sounding an alarm, and he's been criticized for it. But he's not entirely pessimistic. He doesn't believe this future is inevitable. His solution isn't technological; it's institutional. He argues we have to consciously and deliberately build and protect our human self-correcting mechanisms: a free press, independent courts, scientific institutions, and a public that values truth even when it's uncomfortable.
Synthesis & Takeaways
Lewis: So it all comes back to that. The messy, inefficient, argumentative process of democracy is actually our best defense.
Joe: It is. We've gone from human-made fictions that we knew were fictions, like the story of a nation or the rules of a game, to potentially living inside an AI-generated fiction that we can't even recognize as one. The danger isn't just misinformation; it's the complete erosion of a shared reality.
Lewis: It really makes you question everything you read online. How do we even begin to build those 'self-correcting mechanisms' Harari talks about in a world that's already so fragmented?
Joe: That's the billion-dollar question. Harari doesn't give easy answers, but he insists the first step is understanding the game we're in. We have to abandon the naive view that more information automatically leads to more freedom. As the witch hunts showed, sometimes it just leads to more fire. We have to actively choose to build systems that prioritize truth, even when it's hard. And that's what we hope we've shed some light on today.
Lewis: It's a powerful and frankly unsettling perspective. We'd love to hear your thoughts. Does this feel alarmist or chillingly realistic to you? Find us on our socials and let us know what you think.
Joe: This is Aibrary, signing off.