
The Coming Tech-pocalypse
Golden Hook & Introduction (15 min)
Joe: A top AI insider, a guy who co-founded one of the most famous AI labs in the world, wrote a book that's been widely acclaimed. His core message? We're building technology that could allow someone to kill a billion people from their garage, and the people in charge are actively looking the other way.
Lewis: Whoa, hold on. A billion people from a garage? That sounds like a sci-fi movie plot, not a serious warning from an industry leader. That's a bit much, isn't it?
Joe: You'd think so, but that's the chilling reality at the heart of the book we're diving into today: The Coming Wave: Technology, Power, and the World's Biggest Dilemma by Mustafa Suleyman and Michael Bhaskar. And what makes this so compelling is that Suleyman isn't some outsider critic throwing stones.
Lewis: Right, who is this guy exactly?
Joe: He's the ultimate insider. He co-founded DeepMind, the legendary AI lab that Google bought for a fortune. He was a VP at Google, and now he's the CEO of Microsoft AI. He has seen the monster from inside the cage, and he's sounding the alarm.
Lewis: Okay, so this isn't just speculation. This is a warning from the engine room. That changes things.
Joe: It changes everything. And his book lays out this incredible, terrifying, and fascinating argument that we're going to unpack today from three perspectives. First, we'll explore why this tech insider believes the coming technological wave is fundamentally unstoppable. Then, we'll break down the four unique and dangerous features that make this wave so different from anything we've seen before.
Lewis: And the third? I have a feeling it's not going to be a happy one.
Joe: Not exactly. Finally, we'll confront the ultimate, chilling dilemma the book presents: are we headed for a global catastrophe, or a global surveillance dystopia?
Lewis: Fantastic. Catastrophe or dystopia. I'll make some coffee.
The Unstoppable Wave & The Containment Problem
Joe: So, Suleyman kicks off with a statement that's designed to shock you. He says, flat out, that containing this new wave of technology—meaning AI and synthetic biology—is not possible.
Lewis: Okay, my skeptical alarms are ringing immediately. That sounds incredibly fatalistic. We've developed dangerous technologies before. We contained nuclear weapons, more or less. What makes this different?
Joe: That's the exact question everyone asks, and the book tackles it head-on. He argues that technology, by its very nature, proliferates. It's the default setting. He uses this fantastic historical analogy of the internal combustion engine. When Carl Benz first invented his Motorwagen, it was a clunky, unreliable curiosity. No one wanted one.
Lewis: I can imagine. It probably broke down every five minutes and smelled awful.
Joe: Precisely. But then his wife, Bertha Benz, in 1888, did something amazing. Without telling him, she took the car on a 65-mile road trip to her mother's house. She had to refuel at pharmacies, buying cleaning solvents. She used her hatpin to clean a fuel line. She basically invented the road trip and, in doing so, demonstrated that this machine actually worked. It was a brilliant marketing stunt.
Lewis: A 65-mile road trip with a hatpin for repairs. That's legendary.
Joe: It is! And that one act helped kickstart the wave. A few decades later, you have Henry Ford, who famously said, "Every time I reduce the charge for our car by one dollar, I get a thousand new buyers." He wasn't just building a car; he was building a system of mass production that made the technology cheap, accessible, and irresistible. It proliferated. That's what technology does. It gets cheaper, easier, and spreads everywhere.
Lewis: Okay, I get the proliferation argument. But with AI and biotech, the stakes are so much higher. Surely we can just regulate it, put some rules in place? Why is containment impossible?
Joe: This is the absolute core of the book. Suleyman says the biggest barrier isn't technical or legal; it's psychological. He calls it the "pessimism-aversion trap." It's our deep-seated, emotional refusal to confront potentially dark realities.
Lewis: It's basically like ignoring that weird rattling noise your car is making, hoping it'll just go away until the engine explodes on the highway.
Joe: That is the perfect analogy. And he gives these two chilling, real-life examples. In the early 2010s, when he was at the top of the tech world, he put together a presentation for a boardroom full of the most powerful tech CEOs and founders. He warned them about all the things we see today: misinformation, privacy invasion, job displacement. His final slide was a picture from The Simpsons of the townspeople of Springfield marching with pitchforks and torches.
Lewis: (Laughs) No way. He showed them the angry mob? How did that go over?
Joe: They dismissed him completely. They said, "Oh, technology will create new jobs," "We'll find solutions," "You're being too pessimistic." They just looked away. They were deep in the pessimism-aversion trap.
Lewis: Wow. But that's about public opinion. What about actual, physical danger?
Joe: That's the second story, and it's even scarier. Shortly before the pandemic, he attended a seminar at a university where a professor laid out the facts on DNA synthesizers. These are machines that can literally print DNA. The professor showed a graph of the price plummeting. They're now cheap enough to fit on a bench in your garage.
Lewis: Okay, that sounds… problematic.
Joe: The professor then argued that with this technology, a single person with some graduate-level training—or even just determined self-study online—could soon have the capacity to create a novel pathogen. A synthetic virus, more transmissible and lethal than anything in nature. He said a single person could have the power to kill a billion people.
Lewis: A billion. From a garage. That's the line from the intro. I honestly thought that was hyperbole.
Joe: It's not. And the reaction in that room was exactly the same as in the tech boardroom. Unease, followed by griping, hedging, and then… dismissal. The attendees just couldn't stomach the reality of it. They looked away. That's the pessimism-aversion trap. We are biologically and socially wired to reject the worst-case scenario, even when the evidence is staring us in the face. And that, he argues, is why containment is failing before it even begins.
The Four Horsemen of the Tech-pocalypse
Lewis: Okay, that is genuinely terrifying. So if we're all in denial and the wave is unstoppable, what specifically makes this wave—AI and biotech—so much more dangerous than the internet or the printing press?
Joe: Suleyman breaks it down into four distinct features that previous technologies just didn't have, at least not all at once. I like to think of them as the Four Horsemen of the Tech-pocalypse. They are Asymmetry, Hyper-evolution, Omni-use, and Autonomy.
Lewis: The Four Horsemen. That's dramatic. Let's start with Asymmetry. What does that mean?
Joe: Asymmetry means that these new technologies give a huge amount of power to small actors. They dramatically lower the cost of inflicting massive damage. And the book has the most incredible, cinematic story to illustrate this: the battle for Kyiv in 2022.
Lewis: I remember this. The 40-kilometer-long Russian convoy stuck outside the city. Everyone thought Kyiv was going to fall in days.
Joe: Exactly. The Ukrainian military was massively outgunned. But a small, elite drone unit called Aerorozvidka—made up of a wild mix of drone hobbyists, software engineers, and soldiers—had a different idea. They were working out of a village, using jerry-rigged consumer drones you could buy online. They strapped explosives to them and flew them at night.
Lewis: So this is a real-life David versus Goliath situation.
Joe: It's better than that. They flew these little drones under the cover of darkness and targeted the lead vehicles of that massive Russian column. They created a traffic jam from hell. Then, they targeted the supply depots at the back of the column. Suddenly, this giant, powerful army had no fuel, no food, and couldn't move forward or backward. They were sitting ducks.
Lewis: All because of a few guys with souped-up hobby drones? That's unbelievable.
Joe: It's the perfect example of asymmetry. A small, nimble, technologically savvy group completely neutralized the brute force of a superpower. That same principle applies to a terrorist group with a DNA printer or a hacker with an AI cyberweapon. The power to cause chaos is being democratized.
Lewis: Okay, that makes sense. What's the next horseman? Hyper-evolution?
Joe: Hyper-evolution is about the sheer speed of development. Moore's Law, which predicted that computing power would double roughly every two years, looks quaint now. AI models are improving at a pace we can't even properly track. An AI can learn more about the game of Go in a single day than humanity has in 3,000 years. Our laws, our ethics, our social norms—they evolve over decades or centuries. They can't possibly keep up with a technology that reinvents itself every few months.
Lewis: We're trying to build a fence around a creature that's shape-shifting every second.
Joe: Exactly. And that leads to the third horseman: Omni-use. This means the technology can be used for almost anything, good or bad. The same AI that can scan millions of molecules to discover a new antibiotic can be tweaked to search for a new, undetectable poison. The line between a tool for healing and a weapon of mass destruction is just a few lines of code.
Lewis: And the last one, Autonomy, that one feels the most like a movie.
Joe: It does, but it's real. This is about technology that can act and make decisions without human intervention. We saw hints of it with DeepMind's AlphaGo, which made a move—Move 37—that every human expert thought was a mistake. A terrible, amateurish move. Lee Sedol, the world champion, literally had to leave the room to compose himself.
Lewis: But it wasn't a mistake, was it?
Joe: It was genius. A hundred moves later, that "mistake" won AlphaGo the game. The AI had discovered a strategy that no human had conceived of in three millennia of playing the game. It was operating on a level of intelligence we couldn't comprehend. Now, imagine that level of autonomous, alien intelligence controlling a power grid, a financial market, or a weapons system. That's the fourth horseman.
The Dilemma: Catastrophe or Dystopia?
Lewis: Okay, so let's put this all together. We have an unstoppable wave of technology, defined by these four horsemen that make it incredibly powerful and dangerous, and we're all psychologically programmed to look away from the danger. Where does that leave us? What's the endgame here?
Joe: This is where the book presents its grand, central dilemma. Suleyman argues that we are being pushed towards one of two terrible futures: Catastrophe or Dystopia.
Lewis: So our choice is basically Mad Max or 1984? There's no door number three?
Joe: That's the stark choice he lays out. Let's look at the first path: Catastrophe. This is what happens if we embrace openness and let the technology proliferate without real containment. It's the world of the mail-ordered apocalypse, of AI-driven warfare. And he gives a terrifying example that shows this isn't a future problem; it's already here.
Lewis: What happened?
Joe: The assassination of Iran's top nuclear scientist, Mohsen Fakhrizadeh, in 2020. He was in a convoy, heavily guarded. Suddenly, a machine gun mounted on an empty pickup truck on the side of the road opened fire, killing him with surgical precision. Then the truck exploded, destroying all the evidence.
Lewis: A robot assassin?
Joe: A robot assassin. Operated via satellite, guided by AI to account for the car's movement and a slight delay in the satellite feed. A human authorized the strike, but the AI did the killing. That is the world of catastrophe—cheap, deniable, AI-powered violence available to anyone with enough resources.
Lewis: That's chilling. So what's the alternative? The Dystopia option?
Joe: The dystopian path is what happens when states get so terrified of the catastrophe option that they decide the only solution is total, top-down control. A lockdown on technology and society itself. And again, this isn't science fiction. We have a real-world model for this: China.
Lewis: The social credit system.
Joe: And so much more. The book talks about China's "Sharp Eyes" program, a name inspired by a saying from Chairman Mao: "The people have sharp eyes." It's a nationwide surveillance network with hundreds of millions of cameras, all linked to facial recognition and AI. It tracks where you go, who you talk to, what you buy. It's used to predict and prevent protests, to monitor ethnic minorities like the Uighurs, and to enforce political loyalty.
Lewis: So the state uses the very technology that threatens it to create a digital prison.
Joe: Precisely. It's the ultimate trade-off. To prevent the chaos of decentralized power, you create a centralized system of absolute power. That's the dilemma. Do we risk being blown up by a rogue actor with a bioweapon, or do we accept living in a world where our every move is monitored and controlled by the state to prevent that from happening?
Lewis: That is a genuinely awful choice. It feels like there's no way out.
Joe: And that feeling of being trapped is exactly the point Suleyman wants us to feel. Because he argues that until we truly, deeply understand the horror of those two default options, we'll never have the motivation to build a third one.
Synthesis & Takeaways
Joe: So, at the end of all this, it's easy to feel a bit hopeless. The book paints a pretty bleak picture.
Lewis: A bit? Joe, the options are global chaos or a high-tech dictatorship. It's profoundly unsettling. Is the book just a 300-page warning with no solution?
Joe: No, and that's the crucial final turn. The book's purpose isn't to spread despair. It's to shatter our "pessimism-aversion trap." It's a call to action. Suleyman's argument is that containment isn't possible in our current world, with our current institutions and our current mindset. But that doesn't mean we can't change the world.
Lewis: So the first step is to stop looking away and actually stare into the abyss.
Joe: Exactly. To acknowledge the stark reality of the two default paths we're on. He has this powerful line where he says, "For most of history, the challenge of technology lay in creating and unleashing its power. That has now flipped: the challenge of technology today is about containing its unleashed power."
Lewis: I like that. It's a fundamental shift in our relationship with our own creations. We've spent centuries trying to build bigger, faster, stronger things. Now the job is to build the brakes.
Joe: And to build them on a global scale, with new kinds of alliances, new business models, and a culture that values safety as much as it values innovation. The book offers ten steps towards this, from creating an "Apollo Program for AI Safety" to leveraging choke points in the supply chain. But none of it can happen until we, as a society, accept the scale of the problem.
Lewis: It really makes you wonder, which is scarier—the technology itself, or our collective, almost willful, refusal to look at it straight on? It feels like that's the real dilemma. Are we brave enough to face the wave that's coming? It's a question for everyone, not just the people in boardrooms. We'd love to hear what you think.
Joe: This is Aibrary, signing off.