
The $44 Million Joke
13 min
The Mavericks Who Brought AI to Google, Facebook, and the World
Golden Hook & Introduction
Joe: Most people think genius ideas are recognized instantly. The truth is, the single most important idea in modern tech—the one powering your phone, your searches, your entire digital life—was considered a joke for thirty years. A scientific dead end.

Lewis: A joke? That sounds harsh. What idea was so bad it got laughed out of the room by the smartest people in the world?

Joe: It’s the idea of a neural network, a machine that learns like a brain. And its story is the heart of the book we’re diving into today: Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World by Cade Metz.

Lewis: Ah, Cade Metz. He’s a tech correspondent for The New York Times, right? I can imagine this isn't your typical dry textbook on AI.

Joe: Exactly. You can feel his journalistic background on every page. The book reads less like a technical manual and more like a character-driven thriller about the personalities, rivalries, and epic gambles behind artificial intelligence. It’s been described as a "six degrees of Geoffrey Hinton" story, and for good reason. He’s the central sun this entire universe orbits around.

Lewis: I like that. So we're not just talking about algorithms, we're talking about the ambitions and feuds of the people who wrote them. Let's start at the beginning then. How does an idea become a scientific joke?
The Long Winter and the True Believers
Joe: It starts back in the 1950s with a machine called the Perceptron. Its creator, a charismatic Cornell professor named Frank Rosenblatt, built this two-million-dollar machine for the US Navy. He demonstrated that it could learn to tell the difference between a card marked on the left and a card marked on the right. It wasn't much, but the principle was revolutionary: the machine learned on its own.

Lewis: Okay, that sounds promising. A little basic, but the seed of something huge is there. Where did it go wrong?

Joe: The media went wild with it. One newspaper headline screamed, "Frankenstein Monster Designed by Navy That Thinks." Rosenblatt himself made grand predictions about it recognizing faces and understanding speech. But the hype got ahead of the reality. And then came the villain of our first act: Marvin Minsky.

Lewis: A villain? In a story about AI research?

Joe: In this narrative, absolutely. Minsky was a brilliant, influential professor at MIT and a giant in the competing field of "symbolic AI," which believed intelligence came from hand-coding rules, not from learning like a brain. In 1966, at a conference, a young researcher presented his work on a neural network. Minsky stood up and, in front of everyone, asked, "How can an intelligent young man like you waste your time with something like this?" He then declared, "This is an idea with no future."

Lewis: Whoa. That's brutal. It’s like the academic version of a diss track. But could one guy's takedown really kill an entire field?

Joe: It pretty much did. Minsky co-authored a book called Perceptrons that mathematically proved the limitations of Rosenblatt's simple, single-layer network. He was right about that specific model, but his critique was so devastating that it cast a shadow over the entire concept of neural networks. Government funding dried up. The best minds went elsewhere. The "AI winter" began, and it lasted for decades.

Lewis: That's incredible.
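Rosenblatt's learning rule is simple enough to sketch in a few lines. Here is a minimal, hypothetical illustration in plain Python, in the spirit of the card demo above; the "card" encoding (amount of ink on the left half vs. the right half) and all the numbers are invented for the example, not taken from the book:

```python
# A minimal sketch of the classic perceptron learning rule.
# Each "card" is two invented features: (ink on left half, ink on right half).
# Label -1 means "marked on the left", +1 means "marked on the right".

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn a weight per feature plus a bias with the classic update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict with a hard threshold on the weighted sum.
            activation = w[0] * x[0] + w[1] * x[1] + b
            pred = 1 if activation >= 0 else -1
            # Only a wrong guess nudges the weights toward the correct answer.
            if pred != y:
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
                b += lr * y
    return w, b

cards = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
labels = [-1, -1, 1, 1]

w, b = train_perceptron(cards, labels)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1
print(predict((0.95, 0.05)))  # a clearly left-marked card: prints -1
```

The property Rosenblatt demonstrated is visible here: nothing about "left" or "right" is hand-coded. The weights only change when the machine guesses wrong, so it learns the distinction on its own.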
So the world just moved on, convinced this whole "learning machine" thing was a fad. But obviously, someone kept the idea alive.

Joe: This is where our main character, Geoffrey Hinton, enters the story. He's a British-born, Cambridge-educated psychologist turned computer scientist. And he was a true believer. He was fascinated by the idea that "neurons that fire together, wire together," the basic principle of how our brains learn. He just couldn't shake the feeling that Minsky and the establishment were wrong.

Lewis: So he’s the prophet in the wilderness. What was he doing while everyone else was working on other things?

Joe: He was plugging away at the core problem. He knew single-layer networks were limited, but what about multi-layer networks? The problem was, how do you train them? And in the early 80s, he and his colleagues rediscovered and popularized a crucial algorithm called "backpropagation."

Lewis: Okay, "backpropagation" sounds intimidating. Break that down for me.

Joe: Think of it like this: you have a team of workers trying to assemble a car, and each one only does one small task. At the end, the car comes out with a crooked wheel. A bad manager would just yell at everyone. A good manager—that's backpropagation—goes back down the line, from the last worker to the first, and tells each person exactly how to adjust their specific action just a tiny bit to fix the final outcome. It’s a way of distributing the blame for an error and correcting it layer by layer.

Lewis: That's a great analogy. So this was the key to making more complex networks actually learn?

Joe: It was the key. But even with this breakthrough, the world wasn't convinced. The computers weren't powerful enough, they didn't have enough data, and the stigma remained. Hinton even moved to Canada because he was uncomfortable with the fact that most AI funding in the US came from the military. He was an outsider in every sense of the word.

Lewis: Wow.
So he's working in relative obscurity in Canada, holding onto this idea that the entire world has rejected. When does the world finally catch on? When does this 'joke' become valuable?
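Joe's "good manager" analogy maps directly onto how backpropagation is usually written down: the error at the output is passed backward, and every weight receives its own small, targeted correction. Here is a minimal sketch in plain Python; the network shape (1 input, 2 sigmoid hidden units, 1 linear output), the toy task (fitting y = x squared), and every number are invented for illustration:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy dataset: fit y = x**2 on a handful of points in [0, 1].
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(11)]

# A tiny network: 1 input -> 2 sigmoid hidden units -> 1 linear output.
w1 = [random.uniform(-1, 1) for _ in range(2)]  # input-to-hidden weights
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]  # hidden-to-output weights
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j] * x + b1[j]) for j in range(2)]
    out = sum(w2[j] * h[j] for j in range(2)) + b2
    return h, out

def mean_squared_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

lr = 0.1
error_before = mean_squared_error()
for _ in range(2000):
    for x, y in data:
        h, out = forward(x)
        err = out - y  # how "crooked" the final wheel came out
        # Backward pass: walk the blame from the output back through the line,
        # nudging each parameter a tiny bit in the direction that shrinks err.
        for j in range(2):
            blame_j = err * w2[j] * h[j] * (1 - h[j])  # hidden unit j's share
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * blame_j * x
            b1[j] -= lr * blame_j
        b2 -= lr * err

print(mean_squared_error() < error_before)  # the error shrank: prints True
```

The inner loop is the whole trick: no worker is yelled at collectively. The output error is decomposed, layer by layer, into a per-weight share of the blame, which is exactly what made multi-layer networks trainable.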
The $44 Million Email: The AI Gold Rush Begins
Joe: The tipping point comes in 2012. There’s an annual competition called ImageNet, a massive challenge to see which computer program can best recognize objects in millions of photos. For years, the error rates were stuck around 25%. Then, in 2012, Hinton and two of his graduate students, Alex Krizhevsky and Ilya Sutskever, enter a system they built called AlexNet.

Lewis: And let me guess, it did pretty well.

Joe: It didn't just do well. It obliterated the competition. It dropped the error rate from 25% to 15%. It was a jaw-dropping moment for the entire field. The AI world, which had ignored Hinton for years, suddenly snapped to attention. They realized this "joke" was now the most powerful tool in computer vision. And the tech giants smelled money.

Lewis: So the gold rush begins. How does that play out?

Joe: It culminates in one of the most incredible stories in the book. A few months later, at the annual NIPS conference—the big gathering for AI researchers—held at Harrah's casino in Lake Tahoe, Hinton decides to sell the three-person startup he formed with his students.

Lewis: A three-person startup? What did they even have to sell? The algorithm?

Joe: Essentially, they were selling themselves. Their brains. The know-how to build these world-changing systems. Four companies were in the hunt: Google, Microsoft, the Chinese tech giant Baidu, and a secretive UK startup called DeepMind.

Lewis: This is happening in a Harrah's casino? This is wild.

Joe: It gets wilder. Hinton has a chronic back injury that prevents him from sitting down, so he's managing this whole affair while lying on the floor of his hotel room. He's also worried about the dry Tahoe air making him sick, so he's created a makeshift humidifier by draping wet towels over an ironing board stretched between the beds. Every time a bidder comes to the room, he has his students hide the whole contraption.

Lewis: You can't make this up.
He's running a multi-million dollar auction from a hotel room floor, next to a secret, janky humidifier. It feels like a scene from The Social Network.

Joe: Hinton himself said, "It feels like we’re in a movie." The auction was conducted entirely over email. He'd get a bid from Google, then email Baidu to see if they'd top it, all while keeping the bidders anonymous from each other. The price just kept climbing. $12 million from Baidu. Google counters. It goes up and up, past $20 million, then $30 million.

Lewis: What must that have felt like for him? After decades of being ignored, suddenly the biggest companies in the world are in a frantic bidding war for his ideas.

Joe: It must have been the ultimate vindication. The price eventually hit $44 million. And at that point, Hinton just stopped the auction. He decided to sell to Google.

Lewis: Hold on. Why stop at $44 million? He had them in a frenzy. Couldn't he have pushed for way more?

Joe: This is a key insight into his character. He wasn't trying to squeeze every last dollar out of the deal. He felt that Google, with engineers like Jeff Dean and a massive amount of data and computing power, was the best place for his students and his ideas to flourish. He prioritized the future of the research over maximizing his personal payday.

Lewis: That’s a level of integrity you don't often hear about in Silicon Valley stories. So Google gets the holy grail of AI for $44 million. And almost immediately, the dream starts to show some cracks.
The Ghost in the Machine: Bias, Weapons, and Who Owns Intelligence
Joe: That's right. The explosion of deep learning was so fast and so powerful that the ethical considerations couldn't keep up. The technology was a black box—it worked, but even its creators didn't always know how it worked. And that led to some very public, very ugly problems.

Lewis: I think I know where this is going. Are we talking about the gorilla?

Joe: We're talking about the gorilla. In 2015, Google Photos, which used this new deep learning tech to automatically tag images, identified a photo of two Black software engineers as "gorillas."

Lewis: Ugh. That's just awful. How does a system that smart make a mistake that horrifying?

Joe: It’s the classic "garbage in, garbage out" problem, but with a more sinister twist. The AI wasn't "racist." It was just a pattern-matching machine. It had been trained on a massive dataset of photos from the internet, a dataset that was overwhelmingly white. It hadn't seen enough pictures of Black faces to learn to distinguish them properly. The machine inherited the biases of the data it was fed, which were a reflection of the biases in our society.

Lewis: And the book points out that the teams building this stuff were also not very diverse. Metz's work has been criticized for focusing on a small, mostly male group of 'mavericks,' but maybe that's the point. Is it any surprise their creations had these massive blind spots?

Joe: It's a central tension in the book. These brilliant minds created something revolutionary, but they did it from within a bubble. They didn't foresee how their technology would interact with the messy, unequal real world. And this issue of bias was just the beginning. The next crisis was about what this technology would be used for.

Lewis: The military. It's always the military.

Joe: Exactly. The Pentagon came to Google with a program called Project Maven. They wanted to use Google's AI to analyze drone footage, to automatically identify objects like cars and people.
The goal was to make drone surveillance more efficient.

Lewis: Which is a very short step from making drone strikes more efficient.

Joe: That's precisely what thousands of Google employees thought. When word of the project leaked internally, there was a rebellion. Employees signed petitions, engineers refused to work on it, and some even resigned. They argued that "Google should not be in the business of war."

Lewis: Wow. So the very people who built the technology were the first to raise the alarm about its use. What did Google do?

Joe: The leadership was torn. On one hand, it was a lucrative government contract. On the other, they were facing a full-blown internal crisis. Ultimately, the pressure worked. Google announced they would not renew the Project Maven contract and published a set of AI principles, vowing not to create AI for weapons.

Lewis: That's a huge moment. It shows that the "geniuses" don't have the final say. The people implementing the code have power, too. It feels like this is the stuff we're still grappling with today—biased algorithms, AI in warfare. The seeds of all these modern problems were planted right at that moment of triumph.

Joe: They were. The book makes it clear that the AI revolution wasn't just a technical story. It was a human one, and it forced a moral and ethical reckoning that is far from over.
Synthesis & Takeaways
Lewis: So what's the big takeaway here? After hearing all this, is Genius Makers a story of genius triumphing against the odds, or is it more of a cautionary tale?

Joe: I think the power of Metz's book is that it's definitively both. It shows that progress isn't a clean, straight line. It's messy. It's driven by stubborn, brilliant, and deeply flawed people. They succeeded in building machines that could "see" and "learn" in ways that were once science fiction. But those machines inevitably inherited our own human blindness.

Lewis: That's a powerful way to put it. The AI is a mirror.

Joe: A very powerful, very strange mirror. The "genius" in the title wasn't just in the code; it was in the human persistence of someone like Hinton, who believed in an idea for thirty years when no one else did. But the "turmoil" in the book's later chapters comes from the fact that these machines also reflect our worst failings—our biases, our conflicts, our rush to weaponize new tools.

Lewis: It makes you wonder about the future. If this is what happened in the first decade of the revolution, what happens in the next?

Joe: Exactly. As AI gets more powerful, the real question Cade Metz leaves us with is this: are we, as humans, smart enough to build an intelligence that is actually better, and wiser, than we are?

Lewis: That's a heavy thought to end on. It's both exciting and terrifying. We'd love to know what you all think. What's the most exciting, or the most unnerving, part of this AI story for you? Let us know on our socials, we're always curious to hear your take.

Joe: This is Aibrary, signing off.