
AI Breaks Dumb Systems
12 min
The Disruptive Economics of Artificial Intelligence
Golden Hook & Introduction
Joe: Here’s a wild thought: What if the smartest thing your company could do with AI is... nothing? At least, not in the way you think. Because plugging a brilliant AI into a dumb system doesn't make the system smart. It just breaks it faster.

Lewis: Whoa, that's a bold claim. So all this AI hype, the billions being invested, it's all misplaced? Are we all just racing to break our businesses more efficiently?

Joe: In a way, yes, if we don't change our thinking. This is the central idea in Power and Prediction: The Disruptive Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.

Lewis: Right, these are the same economists from the University of Toronto who wrote the bestseller Prediction Machines. And what's fascinating is they published this right before the ChatGPT explosion. So they were already seeing that the real game wasn't just about fancy tech, but about how it completely rewires decision-making and, as the title says, power.

Joe: Exactly. They argue that we're all looking for AI's impact in the wrong places. And their first big clue came from a place nobody, and I mean nobody, expected: a small city on the very edge of North America.
The Surprise of Verafin & System vs. Point Solutions
Lewis: Okay, I'm hooked. Don't leave me hanging. Where are we talking about? Silicon Valley North?

Joe: Not even close. We're talking about St. John's, Newfoundland. In the early 2000s, the authors were confident that Canada's first billion-dollar AI company, its first "unicorn," would come from a major tech hub like Montreal or Toronto. That’s where the pioneers like Geoffrey Hinton were, where the government funding was, where the big corporate labs were.

Lewis: That makes perfect sense. You'd expect the breakthrough to happen where the biggest brains and the deepest pockets are.

Joe: And yet, they were completely wrong. On November 19, 2020, Nasdaq acquired a company called Verafin for 2.75 billion dollars. Verafin was based in St. John's, a place more known for fishing and fog than for deep learning.

Lewis: Hold on. So it wasn't about having the world's best AI researchers, but about something else entirely? How did a company from Newfoundland beat everyone to the punch?

Joe: That's the million-dollar—or rather, 2.75-billion-dollar—question. Verafin didn't invent some revolutionary new form of AI. They built AI-powered tools for a very specific, very painful problem: fraud detection for banks. They realized the key wasn't building the most complex AI, but embedding a prediction tool into an existing system that was desperate for better prediction.

Lewis: Ah, so they found a system that was already built for prediction, but was just bad at it. The banks were already trying to predict fraud, just with clumsy, manual processes. Verafin just gave them a better crystal ball.

Joe: Precisely. And this leads to the book's first huge insight: the difference between a "point solution" and a "system solution." A point solution is when you use AI to do one specific task better. Think of an AI that can read an X-ray. A system solution is when you redesign the entire process around that new capability.
Lewis: Okay, so this is like that famous story about AI pioneer Geoffrey Hinton. He quipped that we should stop training radiologists because AI would be better at reading scans within five years. He was seeing AI as a point solution to replace the human.

Joe: Exactly. And he was right about the technology—AI did get incredibly good at reading scans. But five years later, the number of people training to be radiologists hadn't dropped at all.

Lewis: Why not?

Joe: Because Hinton, brilliant as he is, was focused on the point solution. He missed the system. A radiologist doesn't just look at a scan in a vacuum. Their work is integrated into a massive, complex system of patient consultations, treatment planning, legal liability, and human communication. You can't just unplug the human and plug in an AI without redesigning that entire workflow. The AI was a powerful new gear, but it didn't fit the old machine.

Lewis: I see. So Verafin succeeded because they found a place where the new gear fit perfectly. The banking system was already designed to take a prediction—'is this transaction fraudulent?'—and act on it. They didn't have to change the whole bank.

Joe: You've got it. The authors compare it to the adoption of electricity. At first, factory owners just used electric motors as a point solution to replace their big, central steam engines. They saw some minor cost savings, but nothing revolutionary.

Lewis: Right, they just swapped one power source for another in the same old factory layout.

Joe: The real revolution, the system solution, came when people like Henry Ford realized that with electricity, you didn't need one giant engine. You could put small, individual motors on every machine. This "fractionalized power" allowed them to completely redesign the factory into the assembly line. The factory itself was the new system. AI is the same. The big wins aren't in replacing one task; they're in enabling a whole new assembly line for decisions.
The Great Decoupling of Prediction & Judgment
Lewis: That assembly line for decisions is a powerful image. It feels like that's where the real disruption happens.

Joe: It is. And that failure to see the whole system leads to the book's most profound idea: what they call "the great decoupling."

Lewis: Okay, 'decoupling prediction from judgment' sounds very academic. What does that actually look like in the real world? Give me an example.

Joe: The book has the perfect one. It’s about Michael Jordan. In his second season with the Chicago Bulls, he broke his foot. The doctors, the prediction machines of their day, gave him their assessment: if he played on it, there was a 10 percent chance of a career-ending re-injury.

Lewis: A 10 percent chance of losing Michael Jordan forever. That's a terrifying prediction. So the decision should be easy, right? Don't play.

Joe: That's what the team owner, Jerry Reinsdorf, thought. He presented the prediction to Jordan. But Jordan wanted to play. To make his point, Reinsdorf asked him a hypothetical: "If you had a terrible headache, and I gave you a bottle of pills, and nine out of ten would cure you, but one would kill you, would you take a pill?"

Lewis: That's a heavy question. What did Jordan say?

Joe: He gave the most perfect, insightful answer imaginable. He looked at the owner and said, "It depends how f**king bad the headache is."

Lewis: Wow. That's it, right there. That's the whole concept in one sentence. It's not about the odds, it's about the stakes.

Joe: Exactly! The doctors provided the prediction—the 10 percent probability. That's the part the machine does. But Jordan provided the judgment—the value of the outcome. How much is playing worth to me? How bad is the headache of sitting on the sidelines? AI decouples these two things. It gives us the prediction, but the power shifts to whoever provides the judgment.

Lewis: That’s a huge shift. So in my daily life, this is like my smart thermostat predicting the optimal temperature to save energy. That’s the prediction. But I apply the judgment of whether I'm willing to pay the higher electricity bill for the comfort of a warmer room. The power is in my wallet and my preference, not the thermostat's brain.

Joe: You've nailed it. The machine predicts, the human judges. And this is where power gets reallocated. In the past, the person with the best intuition—the best internal prediction and judgment bundled together—had the power. Think of a veteran loan officer. Now, the power might shift to the person who designs the software that takes the AI's credit score prediction and applies the bank's judgment: 'at what score do we approve the loan?'

Lewis: So the power moves from the person on the front lines to the person designing the system's rules. That's a massive, and kind of invisible, change.

Joe: It's the biggest change of all. And it's why simply plugging a new prediction tool into an old system can cause absolute chaos.
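The decoupling the hosts describe is really a tiny expected-value calculation: the machine supplies the probability, and the human supplies the payoffs. A minimal sketch of the Jordan story in those terms (the payoff numbers are invented for illustration and are not from the book):

```python
# Prediction vs. judgment: the machine supplies the probability,
# the human supplies the payoffs (how bad the "headache" is).

def best_action(p_bad: float, payoffs: dict) -> str:
    """Pick the action with the higher expected payoff.

    p_bad   -- the machine's prediction (chance the risky action goes wrong)
    payoffs -- the human's judgment: the value placed on each outcome
    """
    ev_play = (1 - p_bad) * payoffs["play_ok"] + p_bad * payoffs["play_bad"]
    ev_sit = payoffs["sit"]
    return "play" if ev_play > ev_sit else "sit"

P_BAD = 0.10  # the doctors' prediction: 10% chance of career-ending re-injury

# Same prediction, different judgment -> different decision.
owner = {"play_ok": 10, "play_bad": -100, "sit": 5}      # cautious: little upside to playing
jordan = {"play_ok": 100, "play_bad": -100, "sit": -50}  # "depends how bad the headache is"

print(best_action(P_BAD, owner))   # -> sit
print(best_action(P_BAD, jordan))  # -> play
```

The 10 percent never changes; only the payoff dictionary does. That is the whole "great decoupling" in one function signature: the prediction is an argument, the judgment is the other argument, and power sits with whoever writes the payoffs.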
The AI Bullwhip & The System Mindset
Joe: Precisely. When you plug that smart thermostat into a dumb grid, or a smart AI into a dumb supply chain, you get chaos. The authors have a brilliant term for this: the 'AI Bullwhip'.

Lewis: The AI Bullwhip. I love that. It sounds dangerous. What is it?

Joe: Imagine a simple restaurant. For years, the manager has used a simple rule: order 100 steaks every week. The supplier knows this, the butcher knows this, the farmer knows this. The whole system is stable because it's built around a predictable, if inefficient, rule.

Lewis: Okay, makes sense. A bit dumb, but reliable.

Joe: Now, the restaurant gets a fancy new AI that perfectly predicts customer demand. This week, it says you'll only sell 70 steaks. Next week, 130. The restaurant's decisions are now much smarter. They reduce waste and maximize profit. It's a great point solution for them.

Lewis: But I'm guessing their supplier is not so happy. Their orders just became completely unpredictable.

Joe: Exactly. The supplier is now getting whipsawed by these fluctuating orders. So what do they do? They get their own AI to try and predict the restaurant's unpredictable orders. This amplifies the volatility. By the time you get to the farmer at the very end of the supply chain, the demand signal is a wildly swinging bullwhip. A small, smart change at one end created massive, costly chaos for the entire system.

Lewis: That's a fantastic analogy. It's like when I decide to go on a keto diet, and suddenly my partner, the grocery store, and the local avocado farm are all thrown into chaos because my demand just went from zero to a hundred.

Joe: It's the exact same principle. This is why the book argues for a "system mindset." You can't just optimize one piece. You have to redesign the whole system to handle the new, smarter, but more variable information.

Lewis: Okay, but this 'system mindset' sounds great in a book, but how does a real company actually do this without going bankrupt? It sounds incredibly risky to tear down your whole factory to try a new idea.

Joe: This is where the most futuristic part of the book comes in. The solution is simulation. Specifically, creating what are called "digital twins." You build a perfect virtual replica of your system—your factory, your supply chain, your city—and you let the AI loose in there first.

Lewis: A risk-free playground for innovation.

Joe: Exactly. The book gives this amazing example of Team New Zealand preparing for the America's Cup sailing race. They built a digital twin of their boat and taught an AI to sail it in the simulator. The AI sailor could run millions of races, trying out tiny design changes and radical new tactics that would be too expensive or dangerous to test in the real world.

Lewis: So the humans learned from the AI in the simulation?

Joe: Yes! The human sailors would watch the AI and say, "Wait, I never thought of turning the sail that way," and then they'd copy it. They used the simulation to coordinate the design of the boat with the design of their sailing strategy. It was a total system solution. And, by the way, they won the America's Cup.
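The restaurant story's bullwhip can be seen in a few lines of simulation: the old fixed rule has zero variance, the AI-driven orders are swingier, and a supplier who extrapolates the trend in those orders amplifies the swings further still. A toy model with invented numbers, not the book's:

```python
from statistics import pvariance

# Toy bullwhip model: the restaurant's orders start swinging once an AI
# tracks real demand; the supplier, extrapolating the trend in those
# orders, swings even harder. Variance grows as you move upstream.

old_orders = [100] * 8                             # old rule: 100 steaks a week
ai_orders = [70, 130, 70, 130, 70, 130, 70, 130]   # AI tracks real demand

def supplier_orders(downstream: list) -> list:
    """Supplier forecasts next week by naively extrapolating the recent trend."""
    out = [downstream[0]]
    for prev, cur in zip(downstream, downstream[1:]):
        out.append(cur + (cur - prev))             # trend extrapolation overshoots
    return out

amplified = supplier_orders(ai_orders)
print(pvariance(old_orders))  # 0: a dumb rule, but a stable chain
print(pvariance(ai_orders))   # 900: smarter for the restaurant, swingier upstream
print(pvariance(amplified))   # larger still: the whip cracks harder at each link
```

Each party in the model is locally rational, which is the point of the analogy: the chaos comes from optimizing one link of the system instead of redesigning the whole chain around the more variable signal.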
Synthesis & Takeaways
Lewis: That's incredible. So the big takeaway here isn't 'go out and buy more AI.' It's 'go back to the drawing board and redesign your decision-making system.' The power isn't in the prediction machine itself, but in the hands of the person who redesigns the factory, the hospital, or the supply chain around it.

Joe: That is the heart of it. The book is really a guide for system architects. The authors even provide a tool for this, which they call the AI Systems Discovery Canvas. It's a simple framework that forces you to take a blank slate approach.

Lewis: A blank slate? How does that work?

Joe: It forces you to ask fundamental questions. First: what is the ultimate mission of this organization? For an insurance company, it might be 'to provide peace of mind.' Then, the killer question: given a perfect prediction machine, what is the absolute minimum number of decisions we need to make to achieve that mission?

Lewis: I love that. It strips away all the legacy rules and bureaucratic scaffolding that's built up over the years to manage uncertainty.

Joe: It does. It forces you to see the core decisions. For insurance, it might be just three: Who do we market to? What price do we charge? And do we pay this claim? Then you can ask how AI prediction changes each of those, and what new systems that enables.

Lewis: It makes you wonder, in our own jobs or lives, where are we just using a 'point solution'—a new app, a new diet, a new productivity hack—when we should be rethinking the whole system? What are we trying to optimize, and what are we accidentally breaking in the process?

Joe: A question to ponder. This is Aibrary, signing off.