
AI's Real Superpower

13 min

Golden Hook & Introduction


Joe: Everyone's worried AI is getting too smart. What if the real revolution isn't about intelligence at all? What if it's about something much simpler, and far more powerful: AI is just making one thing incredibly cheap. And that changes everything.
Lewis: Whoa, that's a bold claim. You're saying the whole narrative around AI, the sentient robots and the super-brains, is a distraction from the real story? That sounds almost too simple.
Joe: It is simple, and that's the genius of it. That's the central, brilliant argument in Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.
Lewis: Right, and these aren't computer scientists, which is key. They're three highly respected economists. They came at this whole AI explosion not from a place of "how does the code work?" but "what is the economic effect?"
Joe: Exactly. Their work at the Creative Destruction Lab in Toronto gave them a front-row seat to hundreds of AI startups. They saw a pattern everyone else was missing and decided to cut through the hype. They argue that this whole sea change, which feels like magic, can be understood through basic economics.
Lewis: Okay, I'm intrigued. So if it's not intelligence, what is this one cheap thing AI is producing that's supposedly changing the world?

The Big Reframe: AI Isn't Magic, It's Cheap Prediction


Joe: Prediction. That's it. The authors argue that the current wave of AI is, at its core, a technology that dramatically lowers the cost of prediction.
Lewis: Prediction? Like, forecasting the weather? That doesn't sound as revolutionary as a machine that can think.
Joe: Ah, but think about other things that became cheap. The book uses this fantastic analogy with the price of light. For most of human history, light was incredibly expensive. You had to burn oil or wax. When the cost of artificial light plummeted thanks to electricity, we didn't just get more, cheaper candles.
Lewis: No, we lit up entire cities. We built massive windowless buildings, we started working at night... it completely reshaped society.
Joe: Precisely! Or think about arithmetic. When computers made arithmetic dirt cheap, we didn't just get faster at doing our taxes. We got video games, music composition, complex scientific modeling, the internet. Ada Lovelace herself, way back in the 1800s, saw this. She said a machine that can manipulate numbers could one day manipulate music. A drop in the cost of a fundamental input changes everything. The authors say AI is doing that for prediction.
Lewis: Okay, but I'm still stuck on the word "prediction." When I ask Alexa the capital of Delaware, that feels like it's giving me a fact, a piece of knowledge. How is that a prediction?
Joe: That's the perfect example. The book tells a great little story about a parent whose kid asks, "What's the capital of Delaware?" Before the parent can even start to recall the answer, Alexa chimes in: "The capital of Delaware is Dover." Alexa doesn't "know" that Dover is the capital in the way a human does.
Lewis: Then what is it doing?
Joe: It's predicting. Based on the input, the words you spoke, it runs through a massive dataset of text and information from the internet and predicts the most statistically likely sequence of words that should follow your question. It's a prediction about information. It's filling in the missing piece.
Lewis: Huh. So it's pattern matching on a planetary scale. It's predicting the correct answer, not "knowing" it.
Joe: Exactly. And this reframing is powerful. Take a more complex example: autonomous vehicles. For decades, engineers tried to program cars with rules for every possible situation. "If a ball rolls into the street, then brake." But you can't code for every "if."
Lewis: There are infinite "ifs." A plastic bag blowing across the road, a deer, a pedestrian suddenly jumping out.
Joe: Right. The breakthrough came when engineers reframed the problem. They stopped asking "What are the rules for driving?" and started asking "What would a good human driver predict is the right action in this exact moment?" The AI is trained on millions of hours of human driving, and its job is to constantly predict the correct next move. Driving is no longer a logic problem; it's a prediction problem.
Lewis: That's a huge mental shift. It's not about creating a machine that thinks like a human, but a machine that can predict the outcome of a situation, or predict the right information, or predict the right action.
Joe: And once you see it that way, you start seeing prediction problems everywhere. Is this credit card transaction fraudulent? That's a prediction. Does this medical scan show a tumor? Prediction. Which parts of this legal document need to be redacted? That's a prediction too. All these tasks are being transformed because the core input, prediction, is getting cheaper and better every day.
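To make the reframing concrete, here is a minimal sketch, not from the book, of treating fraud detection as a pure prediction problem. The features, the toy data, and the logistic-regression model are all hypothetical stand-ins for a real system:

```python
# Illustrative only: fraud detection recast as prediction. The model never
# "knows" fraud; it predicts how much a new transaction resembles past fraud.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per transaction: [amount, hour of day, is_foreign]
past_transactions = [
    [12.50, 14, 0],
    [980.00, 3, 1],
    [45.00, 10, 0],
    [1500.00, 2, 1],
    [30.00, 18, 0],
    [720.00, 4, 1],
]
was_fraud = [0, 1, 0, 1, 0, 1]  # labels observed after the fact

model = LogisticRegression()
model.fit(past_transactions, was_fraud)

# The output is a probability: the "missing piece" Joe describes above.
new_transaction = [[850.00, 3, 1]]
p_fraud = model.predict_proba(new_transaction)[0][1]
print(f"Predicted probability of fraud: {p_fraud:.2f}")
```

The same shape, past examples in, probability out, covers the tumor and redaction examples too; only the features and labels change.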
Lewis: Okay, so if prediction is getting cheap and automated, what does that mean for us humans? Are our brains, which are basically prediction machines themselves, becoming obsolete?

The New Partnership: The Rising Value of Human Judgment


Joe: That's the million-dollar question, and it leads to the book's second, and maybe even more profound, insight. Basic economics tells us that when the price of something falls, its substitutes become less valuable, but its complements become more valuable.
Lewis: Okay, break that down. Substitutes and complements.
Joe: A substitute for coffee is tea. If coffee gets super cheap, you'll probably buy less tea. A complement to coffee is sugar or a coffee cup. If coffee gets super cheap, you'll drink more of it, so you'll need more sugar and more cups. The demand for complements goes up.
Lewis: So, in this AI world, what's the substitute and what are the complements?
Joe: The substitute for machine prediction is human prediction. For routine forecasting, like inventory management or identifying a standard tumor on a scan, the value of a human doing that prediction is dropping. But the complements to prediction are skyrocketing in value. And the book identifies three key ones: data, which feeds the prediction; action, which is what you do with the prediction; and, most importantly, judgment.
Lewis: Judgment. What do they mean by that, specifically? Isn't that just another word for making a decision?
Joe: It's more specific. Judgment is the process of determining the payoffs of a decision. It's about weighing the consequences and assigning value to different outcomes. The machine can give you a prediction, but it can't tell you what you should care about.
Lewis: I need a concrete case. How does this actually play out in the real world?
Joe: The book gives so many great ones. Let's take the famous Moneyball story. The general manager of the Oakland A's, Billy Beane, used statistical prediction to find undervalued baseball players. In this case, the old-school human predictors were the scouts, who relied on gut feelings. Their value went down.
Lewis: They were the substitute for the machine's prediction.
Joe: Exactly. But what became more valuable? The judgment of Billy Beane and his team to trust the data over the scouts' intuition, and the action of actually signing those weird-looking, unconventional players the stats said were good. The prediction was just one input. The real value came from the judgment to use it and the action to follow through.
Lewis: Okay, that makes sense. The stats tell you a player has a high on-base percentage, but the judgment is deciding that on-base percentage is more important than how a player looks when he swings the bat.
Joe: Let's take a higher-stakes example from the book. A study compared the bail decisions made by human judges to a machine learning algorithm. The AI was significantly better at predicting which defendants were likely to reoffend if released. It was a better predictor.
Lewis: So, we should just replace the judges with the algorithm?
Joe: Not so fast. The algorithm can predict risk, but it can't exercise judgment. It can say, "This person has an 80% probability of committing another crime." But it can't answer the crucial judgment questions: What is the cost to society of that crime? What is the cost to that individual and their family of being wrongly incarcerated? How do we weigh those two potential errors against each other?
Lewis: Ah, I see. The machine gives you the odds, but the human has to decide what stakes are on the table. The human has to define the reward function, to decide what a "win" even looks like. That's a much more complex, value-laden task.
Joe: And that, the authors argue, is the future of high-value human work. As machines handle more of the "what will happen?" questions, we humans will have to get much, much better at answering the "so what?" and "what should we do about it?" questions. Our judgment becomes the scarce, valuable resource.
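The split Joe describes, machine prediction versus human judgment, can be written down directly. Here is a minimal sketch of the bail decision as expected-cost arithmetic; the probability stands in for the machine's output, the two cost weights are pure human judgment, and every number is invented for illustration:

```python
# Hypothetical sketch: the machine predicts, the human assigns the payoffs.

p_reoffend = 0.80  # the prediction machine's output for this defendant

# Judgment: the relative harm of each kind of error. These weights encode
# values, not statistics; no algorithm can choose them for us.
COST_CRIME_IF_RELEASED = 100.0   # harm if released and a crime follows
COST_WRONGFUL_DETENTION = 60.0   # harm of jailing someone who would not reoffend

expected_cost_release = p_reoffend * COST_CRIME_IF_RELEASED
expected_cost_detain = (1 - p_reoffend) * COST_WRONGFUL_DETENTION

decision = "detain" if expected_cost_detain < expected_cost_release else "release"
print(f"release: {expected_cost_release:.0f}, detain: {expected_cost_detain:.0f}"
      f" -> {decision}")
```

Change the two cost weights and the same 80% prediction can yield the opposite decision, which is exactly Lewis's point about deciding what a "win" looks like.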

The Strategic Dilemma: How Cheap Prediction Rewrites the Rules


Lewis: This all sounds like it creates some massive new decisions for leaders. If you can predict things you couldn't before, it must open up strategies that were impossible before.
Joe: Absolutely. This is where it gets to the C-suite. The book uses the mind-bending example of Amazon. Right now, their model is "shop-then-ship." You go online, you click "buy," and they ship it to you.
Lewis: Standard e-commerce.
Joe: But what if their prediction AI got so good that it could predict what you want to buy with, say, 95% accuracy before you even know you want it? The only thing stopping them from moving to a "ship-then-shop" model, literally sending you a box of things they predict you'll keep, is the uncertainty. The cost of you returning the items they got wrong is too high.
Lewis: But if they could slash that uncertainty with a powerful prediction machine...
Joe: The entire business model could flip. They might just send you the box, and you keep what you want and send the rest back for free. The increase in sales from that convenience could outweigh the cost of returns. That's a fundamental strategic dilemma that only cheap, powerful prediction can unlock. It's a decision that redefines the entire company. [A back-of-the-envelope sketch of this shipping threshold appears after this exchange.]
Lewis: Wow. But this has a dark side, right? The book talks about bias. If the AI is just predicting based on past data, and that data reflects our own societal biases, doesn't the AI just become a tool for automating and scaling up our worst prejudices?
Joe: That is one of the biggest risks and most important societal dilemmas the book raises. And the answer is a resounding yes. They tell the story of Latanya Sweeney, a prominent Black professor at Harvard. She Googled her own name and was shocked to see ads pop up with text like "Latanya Sweeney, Arrested?"
Lewis: Oh no. And she had no arrest record, I assume.
Joe: None whatsoever. She investigated and found that searches for names typically associated with the Black community were 25% more likely to trigger these kinds of ads suggesting a criminal record. The crucial point is that no one at Google programmed the AI to be racist.
Lewis: Then why did it happen?
Joe: The AI's goal was simple: predict which ads are most likely to be clicked. The data, the behavior of millions of users, showed that people, for whatever biased reasons, were more likely to click on an arrest-related ad when it appeared next to a Black-sounding name. The AI simply learned that correlation and optimized for it. It didn't create the bias, but it amplified it and automated it at a massive scale.
Lewis: That's terrifying. The machine is holding up a mirror to the ugliest parts of our collective behavior and turning it into a business model.
Joe: And that's the societal dilemma. We have to decide how to manage these risks. Do we regulate the data? Do we demand transparency in the algorithms? Do we hold companies liable for the discriminatory outcomes of their prediction machines, even when the discrimination is unintentional? These are no longer technical questions; they are profound social and ethical ones.
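Returning to the ship-then-shop dilemma above: the strategic flip Joe describes reduces to a simple expected-value threshold. A minimal sketch, with entirely made-up margins and return costs, since Amazon's real economics are not public:

```python
# Hypothetical: ship a box unprompted only when the expected profit from kept
# items beats the expected cost of handling returns.

def worth_shipping(p_keep: float, margin: float, return_cost: float) -> bool:
    expected_profit = p_keep * margin
    expected_return_loss = (1 - p_keep) * return_cost
    return expected_profit > expected_return_loss

# With a weak predictor (60% keep rate), returns eat the margin...
print(worth_shipping(p_keep=0.60, margin=5.0, return_cost=8.0))   # False
# ...but the 95% accuracy Joe mentions flips the business model.
print(worth_shipping(p_keep=0.95, margin=5.0, return_cost=8.0))   # True
```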

Synthesis & Takeaways


Lewis: So, when you strip it all away, this isn't a book about killer robots. It's a book about economics and trade-offs. It feels like the core message is that we're at a crossroads, and we've been so dazzled by the technology that we haven't been asking the right questions.
Joe: Exactly. The authors force us to see AI not as a magical black box, but as a tool. And like any powerful tool, it presents us with choices. It makes prediction cheap, which forces us, as individuals and as a society, to be much, much better at judgment. The most important question is no longer "what will the AI do?"
Lewis: It's "what will we decide to do with the AI's predictions?"
Joe: That's the heart of it. The machine can tell you the probability of rain, but you still have to decide whether to carry the umbrella. The AI can tell you which business strategy has a higher chance of success, but you have to decide if that success aligns with your company's values. The AI can predict, but we must judge.
Lewis: It makes you look at your own job differently. You start to dissect it. What parts are just routine prediction that a machine could eventually do? And what parts are pure judgment, or empathy, or creativity?
Joe: That's the question for everyone listening. Think about your own work, your own decisions. Where could cheap prediction change the game? And more importantly, where does your human judgment become not just valuable, but absolutely irreplaceable? That's where the future lies.
Joe: This is Aibrary, signing off.
