
Genius and Fool
Amplifying Our Humanity Through AI
Golden Hook & Introduction
Joe: A recent study showed a new AI model passed a three-part U.S. medical licensing exam. The same model, when asked for the fifth sentence of the Gettysburg Address, can get it wrong. How can a machine be both a genius and a fool? That paradox is our topic today.

Lewis: A genius and a fool. That's a perfect description for my attempts at DIY home repair. But we're talking about AI, right? That's a massive contradiction. How is that even possible?

Joe: Exactly. And it's the central question in Reid Hoffman's new book, Impromptu: Amplifying Our Humanity Through AI. What's wild is that Hoffman, the co-founder of LinkedIn, didn't just write about AI; he wrote the book with GPT-4 as his co-author.

Lewis: Wait, the AI co-wrote the book we're about to discuss? That's... meta. And a little weird. It's like having a lion co-author a book on vegetarianism.

Joe: (Laughs) It's a "travelog of the future," as he calls it. He's taking the AI for a test drive to see what it can really do. And our first stop on this tour is a surprisingly funny one: a lightbulb joke.

Lewis: A lightbulb joke? We're starting a deep dive on the future of humanity with a joke? I'm in.
The AI Co-Pilot: A Tool, Not an Oracle
Joe: So, Hoffman had been playing with earlier versions of these AI models, like GPT-3. To test its creativity, he'd ask it a simple prompt: "How many restaurant inspectors does it take to change a lightbulb?"

Lewis: Okay, that's a pretty specific setup. What did it say?

Joe: The old version, GPT-3, gave a nonsensical, almost philosophical answer: "Only one, but the light bulb has to want to change." It sounds vaguely clever, but it has nothing to do with restaurant inspectors. It's just remixing old joke formats.

Lewis: Right, it's a classic non-answer. It's like a politician trying to be funny. So what happened with the new version, GPT-4?

Joe: This is the moment that blew Hoffman away. He gives GPT-4 the exact same prompt. The AI first gives a straight, factual answer, but then it says, "For a humorous answer..." and proceeds to tell a joke that shows it actually understands the context of restaurant inspections.

Lewis: You can't just leave me hanging. What was the joke?

Joe: It said: "It takes three. One to screw in the new bulb, one to document the change in triplicate, and one to issue a citation because the new bulb isn't the 'regulation' brand."

Lewis: (Laughs) Okay, that's actually good. That's a real joke! It gets the whole bureaucratic, nit-picky nature of inspections. That's a huge leap from the lightbulb having an existential crisis.

Joe: It's a massive leap! Hoffman then pushes it further. He asks it to tell the joke in the style of Jerry Seinfeld, then in the style of the philosopher Ludwig Wittgenstein. And it nails both! It went from being a clever toy to a genuinely useful creative partner. This is the book's first big idea: we need to see this technology as a co-pilot.

Lewis: Hold on, a co-pilot. I get that the joke is clever, but is it really thinking? Or is it just an incredibly sophisticated parrot that's read every joke book on the internet? A lot of people are worried this co-pilot is just mimicking intelligence.

Joe: That's the perfect question, and the book has a fantastic analogy for it. The author Ted Chiang describes it as a "blurry JPEG of all the text on the Web." It's not reasoning from first principles. It's a prediction machine of incredible power: it analyzes a prompt and predicts, based on trillions of data points, the most plausible and satisfying sequence of words to come next.

Lewis: A blurry JPEG. I like that. So it's not creating a perfect photograph of an idea from scratch. It's compressing and regenerating an image based on all the other images it's seen. Which means it can be impressive, but also... blurry. It can have weird artifacts and mistakes.

Joe: Exactly. It can "hallucinate," which is a huge topic we'll get to. But for now, the key is what Hoffman says: you don't treat it like an oracle that gives you truth. You treat it like a brilliant, lightning-fast, but sometimes-wrong undergraduate research assistant. You are the director, it is the actor. You have to guide it, check its work, and provide the critical judgment.

Lewis: Okay, that framing makes a lot more sense. It's not a magic box, it's a power tool. And like any power tool, you can build a house with it, or you can accidentally cut off your own thumb if you're not paying attention.

Joe: Precisely. And that brings us to the next big question the book tackles. If we have this incredibly powerful new tool, what should we be building with it?
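To ground Joe's "prediction machine" description, here is a minimal sketch in Python. It is only an assumption-laden toy: a bigram word counter over an invented corpus, standing in for the neural next-token prediction a real model like GPT-4 performs, not anything from the book itself.

    # A toy "prediction machine": like an LLM at a vastly smaller scale, it
    # picks the next word purely by how often each word followed the previous
    # one in its training text. The tiny corpus is invented for illustration.
    from collections import Counter, defaultdict

    corpus = ("the inspector checked the bulb the inspector issued a citation "
              "because the bulb failed the inspection").split()

    # Build a bigram table: how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the statistically most plausible next word from training."""
        options = follows[word]
        return options.most_common(1)[0][0] if options else "<unknown>"

    print(predict_next("citation"))  # -> "because" (its only observed follower)
    print(predict_next("the"))       # -> "inspector" (tie broken by first occurrence)

A real model replaces these raw counts with a neural network trained on trillions of tokens, but the director-and-actor framing still holds: the machine proposes plausible continuations, and the human supplies the judgment.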
Amplifying Humanity: The Rise of 'Homo Techne'
Lewis: Well, if you listen to the headlines, we're building our own replacements. The narrative is that AI is coming for white-collar jobs, for artists, for writers. The book is famously optimistic, a stance some critics have questioned. How does Hoffman square that optimism with the very real fear people have?

Joe: He squares it by flipping the entire narrative on its head. The book argues that technology doesn't diminish our humanity; it amplifies it. He introduces this idea of "Homo Techne," Man the Tool-Maker. The argument is that what makes us human is our ability to invent tools that extend our capabilities, from the first stone axe to the printing press to AI.

Lewis: That's a powerful idea. It's not man versus machine, but man with machine. But I need a concrete example. How does an AI chatbot amplify, say, a student's humanity instead of just helping them cheat?

Joe: This is my favorite story in the book. It's about a history professor at the University of Texas named Steven Mintz. When ChatGPT came out, schools everywhere started banning it. New York City public schools blocked it entirely.

Lewis: Of course they did. It's the ultimate plagiarism machine. I would have loved to have this in high school. My essays on The Great Gatsby would have been legendary.

Joe: (Laughs) But Professor Mintz did the opposite. He required his students to use ChatGPT to write their essays. The catch was that they had to submit not only the final paper but also their prompts and the AI's original drafts, along with a log of all the changes they made.

Lewis: Whoa, that's a brilliant move. He's not grading the final product; he's grading their process, their ability to direct the AI.

Joe: Exactly! He forced them to become better thinkers, editors, and critics. They couldn't just accept the first draft. They had to argue with the AI, refine its points, and add their own unique insights. As Mintz put it, "If [ChatGPT] can do a job as well as a person, then humans shouldn't duplicate those abilities; they must surpass them." He used the tool to push his students up the value chain, from regurgitating information to true critical thinking.

Lewis: That's incredible. It reminds me of when calculators became common in math class. The fear was that kids would stop learning how to do arithmetic. But what actually happened was that it freed them up to tackle much more complex problems in calculus and physics. The tool amplified their reach.

Joe: That's the perfect analogy. And it's not just in education. Hoffman tells another story, about a Grammy-winning musician who was initially terrified of AI, thinking it would write songs that would put him out of business.

Lewis: A very reasonable fear.

Joe: But then Hoffman reframed it. He said, "Imagine you could ask an AI to generate ten different bass lines in the style of John Lennon. Nine might be terrible, but one might be a spark, a starting point you would have never thought of." The musician's eyes lit up. He went from fear to excitement, saying, "I can create so much better now, so much faster... When do I get this thing?" It's a tool for overcoming the blank page.

Lewis: Okay, this is the optimistic vision I was looking for. It's not about replacing human creativity, but about augmenting it. But this all sounds great for writing essays or songs in a controlled environment. What about the real world, where the same technology can be used for truly dangerous things? I'm talking about disinformation, deepfakes, propaganda. How does this optimistic co-pilot model handle a world where AI can lie better and faster than any human?
Flooding the Zone with Truth: Navigating a World of AI-Generated Reality
Joe: You've hit on the most critical challenge, and the book confronts it directly. Hoffman doesn't shy away from the dark side. In fact, he uses GPT-4 to demonstrate the threat. He asks it to generate a fake news article.

Lewis: What was the topic?

Joe: He prompted it to write a fake story, in the style of a reputable news agency, in which Vladimir Putin declares that AI disinformation tools are "weapons of mass destruction."

Lewis: Oh man. And what did it produce?

Joe: In seconds, it spat out a completely plausible-sounding article. It had fake but realistic quotes from Putin. It even invented a spokesperson from the Estonian Ministry of Foreign Affairs to provide a counter-quote. It was structured perfectly, with a dateline and a professional tone. It was terrifyingly convincing.

Lewis: That is genuinely chilling. The idea that anyone can generate high-quality propaganda in seconds... it feels like we're doomed. How do you even begin to fight that?

Joe: This is where the book offers its most provocative and hopeful idea. The old strategy was to play defense: debunking fake news after it spreads. But that's like trying to catch raindrops in a hurricane. The book argues for a new strategy, borrowing a phrase from political strategist Steve Bannon but flipping its intent. The strategy is to "flood the zone with truth."

Lewis: Flood the zone with truth? What does that mean? You fight fire with fire?

Joe: You fight a fire hose of lies with a fire hose of truth. The same AI that can generate fake articles can be used by journalists and fact-checkers to work at superhuman speed. Imagine a journalist covering a city council meeting. The AI can provide a real-time transcript, summarize the key points, cross-reference the budget numbers being discussed with public records, and draft three different articles for different audiences, all before the meeting is even over.

Lewis: So you're saying the speed of truth-telling can finally catch up to the speed of lying.

Joe: It can potentially surpass it. The book also envisions a future of personalized news. You could have an AI news anchor that knows your interests and your level of knowledge on a topic. It could explain complex issues like climate science or economic policy in a way that's tailored specifically for you, with sources and fact-checks built right in. The idea is to make the truth not only faster but also more engaging and accessible than the lies.

Lewis: That's a powerful reframe. Instead of just being on the defensive, trying to stamp out every lie, you go on the offensive by making the truth more compelling, more personalized, and more abundant. You use the AI co-pilot not just to create, but to verify and to clarify.

Joe: Exactly. It's about using this amplification engine for good. It applies to the justice system, too, helping public defenders sift through mountains of evidence to find the truth, or in medicine, helping doctors diagnose rare diseases. The potential is enormous, but it always comes back to the human director.
Synthesis & Takeaways
Lewis: You know, the paradox we started with, the AI being both a genius and a fool, really is the whole point, isn't it? The AI is the genius engine. It has the processing power, the data, the speed. But it needs a human fool, in the best sense of the word (the curious, questioning, skeptical, value-driven person), to steer it.

Joe: That's the perfect summary. The power isn't in the AI alone; it's in the partnership. It's the amplification of our own judgment and creativity. In fact, the most profound piece of advice in the book comes from GPT-4 itself. When Hoffman asked it how humans should interact with it, the AI said, and I'm quoting here: "Human beings should view a powerful large language model as a tool, not as a source of truth, authority, or intelligence."

Lewis: Wow. The machine itself is telling us not to trust it blindly. That should be the warning label on the box.

Joe: It really should. So, if there's one action people can take away from this, it's a simple shift in how we ask for help. The next time you're stuck on a problem at work or on a creative project, don't ask an AI for the answer.

Lewis: What should you ask it for instead?

Joe: Ask it for ten different ways to think about the problem. Ask it for a list of common mistakes people make. Ask it to explain the issue from the perspective of three different experts. Use it to broaden your thinking, not to end it. That's how you stay the director, not the audience.

Lewis: I love that. It leaves me with one final question, for myself and for everyone listening. The book is called Amplifying Our Humanity. So, what part of your own humanity do you most want to amplify? Your creativity? Your curiosity? Your compassion?

Joe: That's the question we all get to answer now.

Lewis: This is Aibrary, signing off.