
The Nine Gods of AI
How the Tech Titans and Their Thinking Machines Could Warp Humanity
Golden Hook & Introduction
Joe: The future of humanity is being decided right now. And the committee has only nine members. Six are American, three are Chinese, and none of them work for us.

Lewis: Whoa. That's a heavy way to start. What do you mean, nine members?

Joe: Their decisions are already inside your phone, your car, and your government. And they're just getting started. That's the chilling premise of The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity by futurist Amy Webb.

Lewis: And Webb isn't just some pundit making wild claims. She's a professor at NYU's Stern School of Business and a quantitative futurist. Her work is grounded in data and modeling, not just sensationalism, which makes her warnings all the more unsettling. The book was widely acclaimed for being provocative but also deeply researched.

Joe: Exactly. She's not predicting a robot apocalypse with terminators walking down the street. She's warning about something much quieter, more gradual, and frankly, more plausible. A slow warping of humanity itself.

Lewis: Okay, I'm hooked and a little terrified. Let's start with the basics. Who are these nine new gods of our digital age?
The New Gods of AI: The Unchecked Power of the Big Nine
Joe: Webb splits them into two groups. In the U.S., she calls them the G-MAFIA: Google, Microsoft, Amazon, Facebook, IBM, and Apple. In China, it's the BAT: Baidu, Alibaba, and Tencent.

Lewis: G-MAFIA and the BAT. It sounds like a superhero movie with two rival factions. What's the difference between them?

Joe: That's the perfect analogy, because their core philosophies are completely different. The G-MAFIA, the American crew, are fundamentally beholden to Wall Street. Their primary goal is to serve their shareholders, which means they're driven by market forces, consumerism, and short-term profits.

Lewis: Right, they want us to click, buy, and subscribe.

Joe: Precisely. The BAT in China, on the other hand, are extensions of the state. They are deeply intertwined with the Chinese government's grand ambitions for global dominance. Their goals are aligned with national strategy, not just quarterly earnings reports. So you have two competing pantheons of gods, one driven by capitalism, the other by authoritarian control.

Lewis: And we're all just caught in the middle. This already feels problematic. You mentioned their values are shaping AI. How does that actually play out?

Joe: Webb gives a fantastic, and horrifying, example. Latanya Sweeney, a prominent Harvard professor who is Black, decided to Google her own name. The ads that popped up alongside the search results said: "Latanya Sweeney, Arrested?" and offered to sell her a full background check.

Lewis: Hold on. The algorithm just assumed she had a criminal record?

Joe: The AI powering Google's AdSense system had been trained on vast datasets. It determined that "Latanya" was a "Black-identifying name," and since people with such names appeared more frequently in public arrest records, it optimized for clicks by suggesting a criminal history. It was pure, unadulterated algorithmic bias.

Lewis: That is just… wow. So Google's AI is basically running a digital version of racial profiling for profit? And I assume their response was a half-hearted apology and a promise to 'do better'?

Joe: You've nailed the Silicon Valley playbook. It's what Webb calls the culture of "fail fast and ask for forgiveness later." We saw it with Facebook and the Cambridge Analytica scandal, where the personal data of millions was harvested for political manipulation. The apologies only come after the damage is done and the profit is made.

Lewis: It feels like their business model is 'break society, then issue a press release.' But how does this kind of thing even happen? Are the programmers intentionally racist?

Joe: Webb argues it's not usually intentional malice. It's a problem of homogeneity. The "tribes" building these systems are overwhelmingly homogeneous—mostly male, affluent, highly educated, and concentrated in a few coastal cities in the US and China.

Lewis: So the people building the future for all 8 billion of us basically all look the same, think the same, and live in the same two countries? That can't end well.

Joe: It doesn't. Webb points to the data. In recent years, women received only about 18% of undergraduate computer science degrees in the US. Black and Hispanic PhD candidates are just 3% and 1% of the total, respectively. When the creators have such a narrow worldview, their biases, their blind spots, get encoded directly into the AI. They don't even see the problems they're creating for people who aren't like them.

Lewis: Like the fact that a name could trigger a biased ad. It probably never even occurred to them.

Joe: Exactly. They're not building systems with malice, but with a profound lack of perspective. And these systems are making more and more decisions for us every day.
Three Roads to Tomorrow: The Optimistic, Pragmatic, and Catastrophic Futures
Lewis: Okay, so we have these two powerful, biased tribes in a global race to build the future. Where does this road lead? It sounds like a recipe for disaster.

Joe: Webb uses a brilliant futurist technique: scenario planning. She maps out three possible futures for us over the next 50 years, based on the choices we make today. There's an optimistic, a pragmatic, and a catastrophic scenario.

Lewis: Let's start with the scary one. Give me the catastrophic.

Joe: Alright. Welcome to the year 2069 in the "Catastrophic Scenario." Webb calls it the rise of the Réngōng Zhìnéng Dynasty—that's Mandarin for Artificial Intelligence. In this future, China has won the AI race. The world is split. China and its 150+ partner countries in the "Global One China Policy" live in a highly efficient, AI-managed society.

Lewis: What about the rest of us? The US and its allies?

Joe: We're locked out. Literally. China has established biometric borders. Your face is your passport, and if you're not part of their network, you can't get in. Inside their world, the social credit system has gone global. Your life is optimized by AI, from your diet to your career to your relationships. Dissent is impossible because the AI sees everything.

Lewis: That's full-on dystopian sci-fi. It sounds like an episode of Black Mirror.

Joe: It gets worse. In this scenario, the American tech giants—the GAA, as she calls them (Google, Apple, Amazon)—have created their own fractured, competing ecosystems. Your life is determined by whether you're an "Apple family" or a "Google family." There's a digital caste system. And because the West failed to collaborate, China's unified, state-driven approach allowed them to develop a true Artificial Superintelligence first. The book ends this scenario with that ASI being used to… well, to put it bluntly, to digitally annihilate the populations of America and its allies.

Lewis: Okay, that is bleak. But is it plausible?
Joe: Webb argues it's a plausible, if extreme, outcome of our current trajectory if we do nothing. But she thinks a different scenario is more likely. She calls it the "Pragmatic Scenario."

Lewis: Pragmatic sounds better than catastrophic.

Joe: Does it? In this future, there's no big bang, no dramatic apocalypse. Instead, it's death by a thousand paper cuts. We survive, but we're living in a state of learned helplessness. Our lives are run by competing, non-interoperable AI systems. Think of it: your Apple car can't talk to the Google-run traffic grid. Your Amazon health monitor won't share data with your Google-affiliated doctor.

Lewis: So we don't get a dramatic apocalypse, we just get… annoyed and subtly controlled into misery? That's almost more terrifying because it feels so much closer to reality.

Joe: That's exactly her point. It's a world of constant, low-grade frustration and anxiety. We're nudged and managed by algorithms we don't understand and can't control. Our freedom of choice is an illusion, curated by a handful of companies. We've traded autonomy for convenience, and our lives aren't even that much more convenient.

Lewis: And the optimistic scenario? Please tell me there's a good option.

Joe: There is. In the "Optimistic Scenario," the world wakes up. The US and its allies recognize the threat and form a global alliance, which Webb calls GAIA—the Global Alliance on Intelligence Augmentation. They work together with the G-MAFIA to establish shared ethical standards for AI.

Lewis: So, cooperation instead of competition.

Joe: Exactly. In this future, you own your personal data. AI is used to augment human creativity, not replace it. It helps us solve big problems like climate change and disease. It's a future built on transparency, collaboration, and a shared commitment to human values.

Lewis: That sounds wonderful. But it also sounds like the hardest path. So our choices are a catastrophe, a slow, irritating decline, or global cooperation on an unprecedented scale. Is there any way out? What can we actually do?
Picking Up Our Pebbles: A Practical Blueprint for Fixing AI's Future
Joe: Webb believes there is a way, and she frames it with a beautiful story—the Parable of the Boulder, which she borrows from Vint Cerf, one of the fathers of the internet.

Lewis: A parable? I'm listening.

Joe: Imagine a village at the bottom of a mountain. High above, there's a giant boulder. For generations, it's just been part of the landscape. But one day, someone notices it's unstable. It's slowly, imperceptibly shifting, and one day it's going to roll down and destroy the village. The person realizes they can't stop this massive boulder alone.

Lewis: That boulder sounds a lot like the problem of AI. It feels too big for any one person to handle.

Joe: Precisely. So what does the villager do? They don't try to push the boulder back up. Instead, they go to everyone in the village and say, "Each of you, pick up a pebble. Just one small stone." The entire community walks up the mountain, and together, they use their thousands of tiny pebbles to build small diversions, to create friction, to slowly, collectively, alter the boulder's path so it rolls harmlessly past the village.

Lewis: I love that. So the message is that our small, individual actions can collectively steer this giant force.

Joe: That's the core of her solution. She says we need to work on both the "boulders" and the "pebbles." The boulders are the big, systemic changes. Things like forming that global alliance, GAIA, to set international standards. Or the US government creating a cohesive national AI strategy to compete with China, instead of just outsourcing R&D to the private sector.

Lewis: Okay, the global alliance and national strategy sound great, but I can't exactly call up the President. What are the pebbles? What are the practical things for people listening right now?

Joe: This is the most empowering part of the book. First, she says, we have to become more educated and demanding citizens and consumers. We need to actually read the terms of service and demand transparency about how our data is being used.

Lewis: That's a tough ask. Those things are designed to be unreadable.

Joe: She knows. But the pressure has to start somewhere. Second, question autonomous systems. Don't just blindly follow your GPS into a lake. Don't just accept the movie Netflix recommends. Actively exercise your own judgment. Every time you override an algorithm, you're casting a vote for human autonomy.

Lewis: I like that. It's a small act of rebellion. What else?

Joe: Vote for informed officials. Support leaders who understand technology and are thinking about its long-term consequences, not just the next election cycle. And finally, and maybe most importantly, she says we need to change our own expectations. We need to stop asking "what's the next cool gadget?" and start asking, "what are the second- and third-order consequences of this technology?"

Lewis: So shift from being passive consumers to being active, critical participants in our own future.

Joe: You got it. We all need to pick up our pebble.
Synthesis & Takeaways
Joe: When you boil it all down, the book's central message is that the future of AI isn't really about technology. It's about power and values. Right now, that power is dangerously concentrated in the hands of nine corporations with their own agendas.

Lewis: And Webb's point is that we're outsourcing the most important decisions about humanity's future to corporate boardrooms in California and state-run labs in Beijing. We're letting them write the source code for the next generation of human experience.

Joe: And the consequences are already visible. The algorithmic bias, the erosion of privacy, the political polarization amplified by social media—these aren't bugs. They are features of a system optimized for profit and control, not human well-being.

Lewis: And her call to action isn't to smash our phones or go live in the woods. It's to become more demanding and conscious consumers and citizens. To start asking the hard questions, both of the tech companies and of ourselves.

Joe: The book really leaves you with one profound, lingering question. It's a question that should be on the desk of every CEO, every politician, and every single one of us.

Lewis: What's the question?

Joe: Are we building AI to serve humanity, or are we building humanity to serve AI? The choice is still ours, but as Amy Webb makes terrifyingly clear, the window is closing.

Lewis: This is Aibrary, signing off.