
Artificial Intelligence: A Guide for Thinking Humans


Introduction

Nova: Welcome to the show. Today we are diving into a book that really strips away the Hollywood gloss from one of the most misunderstood topics of our time. We are talking about Jerry Kaplan and his definitive guide to Artificial Intelligence. Now, if you have ever felt a bit of existential dread watching a Boston Dynamics robot do a backflip, or if you have worried that a computer is going to take your job and then your house, Kaplan is the voice of reason you have been looking for.

Nova: He is definitely the guy talking us off the ledge, but not by downplaying the technology. Instead, he does something much more interesting. He argues that we have been looking at AI through the wrong lens entirely. He thinks the very name Artificial Intelligence is a bit of a marketing trick that has led us into a philosophical trap. He wants us to stop thinking about AI as a digital person and start seeing it for what it actually is: a powerful, transformative form of automation.

Nova: And even if it is just automation, the scale and speed of it are unprecedented. Kaplan is a Silicon Valley veteran: he co-founded GO Corporation, he has been at the forefront of this field for decades, and he is a fellow at the Stanford Center for Legal Informatics. He is not saying AI isn't a big deal. He is saying it is a huge deal, but for economic and social reasons, not because machines are suddenly becoming conscious. Today, we are going to break down his core philosophy, from why he thinks AI doesn't actually think, to his radical ideas for how we can survive the economic earthquake it is causing.

Key Insight 1

The Intelligence Illusion

Nova: Let's start with Kaplan's biggest bone to pick: the word intelligence. He argues that we have this deep-seated tendency to anthropomorphize anything that acts in a goal-oriented way. If a machine solves a complex puzzle, we assume it must be thinking the way we do. But Kaplan points out that very little AI research is actually modeled on the human mind.

Nova: The comparison to a human brain is a common metaphor, but Kaplan clarifies that it is just that: a metaphor. Modern AI is mostly advanced statistics and pattern matching. When an AI identifies a cat in a photo, it isn't looking at the cat and thinking, Oh, how cute, a feline. It is calculating the probability that a certain arrangement of pixels matches a pattern it has seen millions of times before. Kaplan calls this synthetic intellect.

Nova: He uses the example of a calculator. A calculator can do long division a million times faster than you can, but we don't say the calculator is smarter than you. We just say it is a better tool for arithmetic. Kaplan argues that AI is a broader version of that: a tool for tasks that previously required human intelligence, even though the machine doesn't need to be intelligent to perform them.

Nova: Take a modern language model: Kaplan would say it is a master of prediction. It is predicting the next most likely word in a sequence based on a massive dataset of human language. It is a mirror of human thought, not a source of it. He makes the great point that intelligence is multi-dimensional. We have emotional intelligence, spatial awareness, social intuition. AI usually excels at one very narrow slice of that, like playing chess or translating text, and it has zero awareness of the world outside its data.
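To make "predicting the next most likely word" concrete, here is a minimal sketch of the idea using a toy bigram model. The tiny corpus and all names are illustrative inventions, not anything from Kaplan's book; real language models are vastly larger, but the principle of counting patterns and emitting the most probable continuation is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "massive dataset of human language".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, with its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # "cat" follows "the" in 2 of 4 occurrences
```

Note that nothing here "understands" cats or mats: the model only tallies which pixel-like patterns of words co-occur, which is exactly the point the episode is making.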

Nova: And because it has no consciousness, it has no desires. It doesn't want to take over the world. It doesn't want a promotion. It doesn't even want to stay turned on. Kaplan says that worrying about AI becoming our overlords is like worrying that your spreadsheet will decide to embezzle your money. It simply doesn't have the agency to want anything.

Key Insight 2

Synthetic Labor and the Great Decoupling

Nova: Now, just because AI isn't going to turn into Skynet doesn't mean it isn't dangerous. Kaplan shifts the focus from the sci-fi horror of consciousness to the very real-world horror of economic displacement. He introduces two terms: synthetic intellect, which we just talked about, and synthetic labor.

Nova: Kaplan's deeper concern is that we are entering a period he calls the Great Decoupling. Historically, as productivity went up, wages went up. If a factory got a new machine that made it twice as efficient, the workers became more valuable and their pay eventually rose. But Kaplan argues that with AI, that link is breaking.

Nova: The usual objection is that technology has always created new jobs, and that has been true in the past, but Kaplan points out a crucial difference. Past technology mostly augmented human labor: a tractor made a farmer more productive. But AI and robotics are increasingly replacing human labor entirely. If you have a self-driving truck, you don't have a more productive driver; you have no driver. The wealth generated by that truck goes entirely to the owner of the capital, not to a worker.

Nova: Kaplan notes that the benefits of AI are currently being captured by a very small group of people: the ones who own the data and the processing power. He isn't anti-technology, but he is very pro-policy. He argues that our current economic systems are built for a world where labor is the primary way people earn a living. If labor becomes less valuable because machines can do it more cheaply, we need a new way to distribute wealth.

Nova: You might expect him to land on a universal basic income, but he is actually more creative than that. He is skeptical of UBI because he thinks people derive a lot of meaning and social structure from work. Instead, he proposes something called a job mortgage, one of the most unique ideas in the book.

Nova: The name makes it sound like a new kind of debt, but it is actually the opposite. Think of it like a student loan in reverse: instead of you paying for it, a future employer or the government finances your retraining based on the projected value of your new skills. It is a way to fund the constant re-skilling that Kaplan thinks will be necessary in an AI-driven economy. He wants investing in human capital to be as easy as a company investing in a new server farm.
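The financing logic can be sketched with some back-of-the-envelope arithmetic. Every number below is hypothetical, chosen only to make the mechanism concrete; Kaplan's book does not specify amounts or repayment terms.

```python
# Hypothetical illustration of the "job mortgage" idea: a sponsor fronts the
# retraining cost and is repaid out of the salary uplift the new skill creates.
training_cost = 15_000    # up-front retraining cost, paid by the sponsor
salary_before = 40_000    # worker's current annual salary
salary_after = 55_000     # projected annual salary with the new skill
repayment_share = 0.25    # share of the uplift repaid to the sponsor each year

annual_uplift = salary_after - salary_before        # 15,000 per year
annual_repayment = repayment_share * annual_uplift  # 3,750 per year
years_to_repay = training_cost / annual_repayment   # 4 years

print(f"Sponsor repaid in {years_to_repay:.0f} years; "
      f"worker keeps {annual_uplift - annual_repayment:,.0f} of the uplift per year.")
```

The design point is that the sponsor's return depends on the projected value of the skill, which is what distinguishes this from a conventional loan secured against the worker.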

Key Insight 3

The Legal Personhood of Machines

Nova: One of the most fascinating parts of Kaplan's work is how he looks at the legal system. If an AI causes harm—say, an autonomous trading algorithm crashes the stock market or a medical AI gives a wrong diagnosis—who is responsible? Is it the programmer? The owner? The machine itself?

Nova: You can't put it in jail, but Kaplan suggests we might need to treat AI more like we treat corporations. Corporations are legal persons. They can own property, they can be sued, and they can be held liable for damages, even though they don't have a physical body or a soul.

Nova: Kaplan proposes that advanced AI systems could be required to hold assets or insurance. If the AI messes up, the victims are compensated from those assets. This solves the problem of the judgment-proof defendant, where a small company creates a massive AI disaster and then simply goes bankrupt, leaving the victims with nothing.
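A toy sketch makes the mechanism clear. The class, names, and amounts below are all hypothetical illustrations of the reserve idea, not a legal or actuarial model from the book.

```python
# Hypothetical sketch of an AI system required to hold a liability reserve:
# claims are paid from the reserve, so victims are not left empty-handed
# when the operator is judgment-proof.
class LiabilityPool:
    def __init__(self, required_reserve):
        self.reserve = required_reserve

    def pay_claim(self, damages):
        """Compensate a victim from the pool, up to what is reserved."""
        payout = min(damages, self.reserve)
        self.reserve -= payout
        return payout

pool = LiabilityPool(required_reserve=1_000_000)
print(pool.pay_claim(250_000))    # small claim: victim is made whole
print(pool.pay_claim(2_000_000))  # huge claim: pool pays out what remains
```

The second claim shows the limit of the idea: the pool is a floor for victims, not a bottomless guarantee, which is why the required reserve would matter as a policy choice.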

Nova: Kaplan is very firm on this: AI should not have rights. It doesn't feel pain, it doesn't have feelings, so giving it rights is nonsensical. But giving it legal responsibilities? That is just good engineering for society. He also dives into the ethics of bias. Since AI learns from human data, it often inherits our worst prejudices. Kaplan argues that we can't just blame the algorithm; we have to build legal frameworks that hold the creators accountable for outcomes, not just intent.

Nova: To make that accountability possible, Kaplan advocates for transparency and what he calls algorithmic accountability. We need to be able to audit these systems just as we audit a company's books. He wants us to move away from the idea that AI is a mysterious, magical force and treat it like any other high-risk industrial tool, like a nuclear power plant or a commercial airliner.

Key Insight 4

The Future of Human Agency

Nova: As we look toward the future, Kaplan isn't just worried about jobs and laws. He is worried about what happens to human agency. If we let AI make all our decisions—what we eat, who we date, how we vote—do we lose something essential about being human?

Nova: Kaplan calls this the danger of delegation. We delegate these small tasks because they are convenient, but over time, we might lose the skills to do them ourselves. More importantly, we might lose the ability to think critically about the choices being made for us. He warns that AI can be used to manipulate us on a massive scale because it can find the exact psychological triggers that work on each individual.

Nova: Kaplan's solution isn't to ban the technology but to educate the thinking humans the book's title refers to. He wants us to understand the limitations of AI so we don't over-rely on it. His point is that AI is great at optimization, finding the most efficient way to reach a goal, but terrible at picking the goal. Only humans can decide what is worth doing.

Nova: An AI can optimize a company's profits, but it can't decide whether that company should prioritize environmental sustainability or worker well-being. Kaplan's message is that we need to stay in the driver's seat of our own values. He envisions a future where AI handles the drudgery, the synthetic labor and the rote calculations, leaving humans free to focus on things that require true empathy, creativity, and moral judgment.
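The "great at optimization, terrible at picking the goal" point can be shown in a few lines. This is a minimal illustration of my own, not an example from the book: the same generic optimizer happily maximizes whichever objective a human hands it, with no opinion about which objective is right.

```python
# A greedy 1-D hill climber: nudges x whichever way improves the objective.
def hill_climb(objective, x=0.0, step=0.1, iters=1000):
    for _ in range(iters):
        if objective(x + step) > objective(x):
            x += step
        elif objective(x - step) > objective(x):
            x -= step
    return x

# Two competing goals; the optimizer treats them identically.
profit = lambda x: -(x - 3) ** 2      # best policy for profit is x = 3
wellbeing = lambda x: -(x + 2) ** 2   # best policy for well-being is x = -2

print(round(hill_climb(profit), 1))     # -> 3.0
print(round(hill_climb(wellbeing), 1))  # -> -2.0
```

The machinery is indifferent: choosing between `profit` and `wellbeing` is the step the algorithm cannot take, which is exactly where Kaplan says humans must stay in the loop.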

Nova: Kaplan is cautiously optimistic. He believes that if we update our laws and our economic models, we can navigate this transition. But he is very clear that the technology won't fix itself. It is up to us to ensure that AI serves humanity, rather than the other way around. He wants us to stop being afraid of the robots and start being active participants in shaping the rules they live by.

Conclusion

Nova: We have covered a lot of ground today, from Jerry Kaplan's debunking of the intelligence myth to his radical ideas for job mortgages and AI legal personhood. The core takeaway from his work is that AI is a tool, not a creature. It is a reflection of our own data and our own priorities. If we don't like what the AI is doing, we have to look at the systems we have built around it.

Nova: Exactly. Kaplan's guide for thinking humans is ultimately a call to action. He wants us to move past the hype and the fear so we can focus on the real work: building a society where the incredible benefits of automation are shared by everyone, not just a few. It is about making sure that as our machines get smarter, our society gets fairer.

Nova: It all comes back to the image of a hammer: a tool that can build a house or break a window, depending on who is holding it. If you want to dive deeper into these ideas, I highly recommend picking up Jerry Kaplan's books. They are some of the most grounded and insightful perspectives you will find in the field.

Nova: This is Aibrary. Congratulations on your growth!
