
The Alien Logic of Code
Computational Thinking for the Rest of Us (13 min)
Golden Hook & Introduction
Joe: A single line of code, repeated, can make something 38 times more powerful in just one year. But a single, tiny flaw, repeated with that same relentless logic, can shrink it to just 3% of its original value.
Lewis: Whoa. That is a massive gap. And that exponential difference is exactly what makes the way machines "think" so powerful, but also so completely alien to how our own brains work. It’s like we’re trying to communicate with a life form that operates on a totally different plane of existence.
Joe: And that alien thinking is exactly what we're decoding today from John Maeda's book, How to Speak Machine. He’s trying to give us a translation guide for this new world.
Lewis: Maeda is such a fascinating figure for this. He's not your typical coder. He was the president of the Rhode Island School of Design, one of the most prestigious art schools in the world. He has a PhD in design. He's an artist who ended up in Silicon Valley, which gives him this incredible outsider-insider perspective on how tech is built.
Joe: Exactly. He's been called the "Steve Jobs of academia" for that reason. He’s trying to translate the logic of machines for the rest of us—the designers, the leaders, the artists. And his own journey into understanding this language began not in a fancy lab, but with a very humble, and frankly, very tedious, task at his family's tofu shop.
The Unseen Engine: How Machines Think in Loops and Dimensions
Lewis: The tofu shop? I love that. It doesn't get more analog and hands-on than that. How does tofu lead to understanding supercomputers?
Joe: Well, back in the late 70s, his family ran the shop, and his mother was doing all the monthly billing by hand. It was a huge, repetitive chore. Young Maeda, having just gotten access to a school computer—a Commodore PET—decided he was going to be the hero and automate it.
Lewis: A noble son! I'm picturing him saving his mom hours of work.
Joe: That was the plan. His approach was, let's say, direct. He decided to create input routines for every single day of the year. So, he sat down and started writing code for January 1st, then January 2nd, January 3rd, and so on. He manually coded all 365 days.
Lewis: Hold on. He wrote the same block of code 365 times?
Joe: Essentially, yes. He ended up with a program that was over 14,600 lines of code. It was a monster. But it worked! He was incredibly proud of this brute-force solution.
Lewis: 14,600 lines! My hand cramps just thinking about typing that. It’s like digging a tunnel with a teaspoon when there’s a bulldozer sitting right next to you.
Joe: That is the perfect analogy. Because a little while later, his math teacher, a Mr. Moyer, saw what he was doing and gently introduced him to a fundamental concept in programming: the loop. He showed Maeda that instead of writing the code 365 times, he could write it once and just tell the computer, "Do this 365 times."
Lewis: So a loop is basically the computer's version of 'lather, rinse, repeat,' but it never gets tired or bored.
Joe: Precisely. Maeda went back and rewrote his massive program. The new version, using loops, was only 50 lines long. It did the exact same thing, but with elegance instead of brute force. And that was his 'aha' moment. He realized that computers aren't just calculators; they are masters of repetition. Their core strength is doing the same simple thing over and over again, tirelessly and perfectly.
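[Producer's note: Maeda's before-and-after can be sketched in a few lines. This is modern Python rather than the Commodore PET BASIC he actually used, and `bill_for_day` is a hypothetical stand-in for his billing routine, but the shape of the lesson is the same: write it once, tell the machine to repeat.]

```python
# A hypothetical stand-in for Maeda's daily billing routine.
def bill_for_day(day):
    return f"Bill prepared for day {day}"

# What young Maeda first wrote, in spirit -- the same block copied 365 times:
#   bill_for_day(1)
#   bill_for_day(2)
#   ... 363 more near-identical lines ...

# What Mr. Moyer showed him -- one loop does the repeating:
for day in range(1, 366):  # days 1 through 365
    bill_for_day(day)
```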
Lewis: That’s incredible. It’s a completely different way of thinking. We humans hate mindless repetition. We get sloppy, we lose focus. But for a machine, that's its happy place.
Joe: It is. And this is the first "law" of speaking machine. Machines excel at repeating themselves. But then Maeda pushes this idea further. What happens when you take that simple concept and layer it?
Lewis: You mean like a loop inside another loop? What does that even do?
Joe: It creates a new dimension. Think about it. A single loop is like a line. You go from point A to point B, one step at a time. But if you put a loop inside that loop—say, for every day of the year, you loop through every hour of the day—you've just created a plane, a two-dimensional space. You've gone from 365 points to over 8,000.
Lewis: Okay, I see. And if you add another loop for minutes, you get a cube. A three-dimensional block of time.
Joe: Exactly. And computers can keep going. A fourth loop creates what mathematicians call a hypercube, a four-dimensional object that we can't even visualize. This is how computation gets so big, so fast. It's not just adding; it's multiplying. It’s exponential growth. Maeda uses the classic lily pad riddle to explain this. A pond has a lily pad that doubles every day. On day 30, the pond is full. On what day was it half-full?
Lewis: Oh, I know this one. It feels like it should be day 15, but it's day 29. Our brains think linearly, but the growth is exponential. It sneaks up on you.
Joe: It sneaks up on you. And that exponential, dimensional power, born from simple loops, is how we get from a 50-line tofu billing program to the unfathomable complexity of the modern internet. It’s an engine built on repetition, scaling into universes of data.
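[Producer's note: both ideas in this exchange, loops multiplying into dimensions and the lily pad riddle, can be checked in a few lines of Python. This is our own sketch, not code from the book.]

```python
# Each nested loop multiplies the number of points the machine visits.
days = 365
hours_per_day = 24
minutes_per_hour = 60

points = 0
for day in range(days):                      # a line: 365 points
    for hour in range(hours_per_day):        # a plane: 365 * 24 = 8,760 points
        for minute in range(minutes_per_hour):  # a cube: 525,600 points
            points += 1
print(points)  # growth by multiplication, not addition

# The lily pad riddle: coverage doubles every day and fills the pond on day 30.
coverage = 1.0  # fraction of the pond covered on day 30
day = 30
while coverage > 0.5:  # walk backwards until the pond is only half-full
    coverage /= 2
    day -= 1
print(day)  # half-full just one day before full
```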
The Double-Edged Sword of Incompleteness and Instrumentation
Lewis: Okay, so this incredible power to build huge, complex things with simple rules explains how tech is built. But Maeda argues that it's also completely changed the philosophy of how we build things. This idea that 'timely' is better than 'timeless' design... that feels radical, and honestly, a little unsettling.
Joe: It is a massive shift. The old way of making things, what he calls the "waterfall" method, was about perfection. You design it, you build it, you ship it. It's done. He tells this hilarious story about a car company in the pre-digital age. They designed a car with a special, built-in compartment for a fax machine, because they predicted everyone would have fax machines in their cars.
Lewis: Oh no. I can see where this is going.
Joe: By the time the car actually went into production years later, the fax machine was already on its way out. So these brand-new cars rolled off the assembly line with a big, useless, empty hole in the dashboard. There was no 'undo' button.
Lewis: Right, you can't just issue a software update for a car's dashboard. That’s a perfect example of timeless design failing in a timely world.
Joe: Exactly. Now compare that to the computational model. Products, especially software, are never really finished. They are designed to be incomplete. The mantra is to launch a "Minimum Viable Product" and then iterate. This is only possible because of another key machine capability: instrumentation.
Lewis: Instrumentation. That sounds technical. What does he mean by that?
Joe: At its heart, it just means the machine can talk back to you. It can be instrumented with sensors that report on how it's being used. Maeda’s low-tech analogy is from his parents' tofu shop again. They couldn't hear customers arrive because of the noisy machinery in the back. So, they hung a set of bells on the front door. The jingle of the bells was a simple sensor, a form of telemetry, that told them, "A user has arrived."
Lewis: I like that. It’s a simple signal.
But in the digital world, that signal isn't just a bell. It's every click, every pause, every 'like' we make.
Joe: It's everything. And this is where the idea gets its double edge. On one hand, this data allows companies to serve you better. It’s like the Japanese concept of omotenashi, or anticipatory hospitality—knowing what a customer wants before they even ask. It’s why your streaming service can recommend a movie you'll probably love.
Lewis: Hold on, though. That sounds nice, but isn't 'telemetry' just a friendly-sounding word for surveillance? Maeda himself brings up the 'Scary Pizza' story, a hypothetical where you call to order a pizza and the person on the other end says, "Our records show your cholesterol is high, so we're denying your request for extra cheese." That's the other side of omotenashi, right?
Joe: It is. And it’s not hypothetical anymore. The most chilling example he gives is the Facebook emotion experiment from 2014. Researchers secretly tweaked the news feeds of nearly 700,000 users, showing some more positive posts and others more negative posts, just to see if they could manipulate their emotions.
Lewis: And could they?
Joe: They could. It worked. They proved it was possible to make a statistically significant number of people happier or sadder without their knowledge or consent. That’s the dark side of shipping an 'incomplete' product and 'testing' on your users. You're not just testing a button color; you're potentially experimenting on human psychology.
Lewis: Wow. So the same logic that lets a developer fix a bug in an app also lets a massive corporation run an emotional experiment on its users. The book has been praised for its accessibility, but some critics have pointed out that Maeda can be a bit too optimistic, a bit deterministic, and this is a perfect example. The potential for misuse is enormous.
Joe: It is. And that ethical line is exactly where Maeda's argument gets most urgent.
Because if these systems are learning from our data, and testing on us constantly, what happens when the data—and the creators themselves—are deeply imbalanced?
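[Producer's note: the door-bell analogy maps directly onto how software telemetry works. Here is a minimal sketch in Python, with every name our own invention rather than anything from the book: each recorded event is a bell jingle the maker can hear.]

```python
import time

# A toy event log. In a real product these signals would be sent over the
# network to the company's analytics service, not kept in a local list.
events = []

def record_event(name, **details):
    """Instrument the product: note what happened, and when."""
    events.append({"event": name, "time": time.time(), **details})

# The digital equivalents of the bells on the tofu shop door:
record_event("user_arrived")
record_event("clicked", target="recommend_button")
record_event("paused", seconds=12)

print(len(events))  # three small signals the maker can now learn from
```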
Automating Imbalance: The Human Cost of Code
Lewis: Right. This is where it all comes home to roost. It’s not just about how machines think, it’s about what they’re thinking about. And what they’re thinking about is us, with all our messy, human biases.
Joe: And they are very, very good students. Maeda tells the story of Amazon building an internal AI tool to help them screen résumés for engineering jobs. They trained it on a decade's worth of their own hiring data, thinking the AI would learn to spot the best candidates.
Lewis: Let me guess. Their past hiring data wasn't exactly a model of diversity and inclusion.
Joe: Not at all. The tech industry, as Maeda points out with stark statistics, is overwhelmingly male and white. So the AI learned the existing biases perfectly. It taught itself that résumés that included the word "women's," as in "captain of the women's chess club," were bad. It actively penalized female candidates.
Lewis: That is horrifying. So the machine isn't biased on its own. It's just a very efficient student of our own bias. It holds up a mirror to the company's culture, and the reflection isn't pretty.
Joe: And Maeda's crucial point is that Amazon initially classified this as a "computer program error." He says, no, that's a "culture error." The code just did what it was told. The problem wasn't the machine; it was the imbalanced data and the imbalanced team that fed it. As the comedian D.L. Hughley once joked, "You can't teach machines racism." The tragic punchline is, we already have. They learned it from us.
Lewis: It reminds me of his other story about the soup company. They spent a fortune on an AI to replicate their master soup-makers, encoding all their actions. And the AI-made soup was terrible. An old human operator came in, sniffed it, and said, "It smells bad." There are things we know, human, intuitive things, that can't be captured in data.
Joe: Exactly. And when we rely only on the data, especially biased data, we automate imbalance at the speed of Moore's Law.
We create systems that exclude people on a massive, industrial scale.
Lewis: So what’s the solution? If the problem is us, how do we fix the code?
Joe: Maeda’s conclusion is surprisingly simple and profound. He tells a very personal story about a time he went jogging at 4 a.m. in Palo Alto. He tripped, fell hard, broke his arm, and smashed his face. He was alone, without a phone or ID. His first instinct, as an MIT-trained engineer, was to think of himself as a damaged Mars rover, calculating the most efficient path back to his Airbnb.
Lewis: He tried to debug himself.
Joe: He did. But what actually got him through the next ten months of recovery wasn't logic or code. It was the compassion of the doctors, nurses, cleaners, and flight attendants—all the humans who cared for him. His final law of speaking machine isn't about code at all. It's just: "Mind the humans."
Synthesis & Takeaways
Joe: When you look at the whole arc of the book, it's incredibly powerful. We start with a simple, innocent concept—the loop. That loop gives us exponential power to build things on a scale we can barely imagine.
Lewis: And that power allows us to build products that are never finished, that are constantly listening to us through instrumentation and telemetry.
Joe: And because they're always listening, they learn from us. They learn our best intentions, but they also learn our worst biases. And they automate those imbalances at a scale and speed that is terrifying.
Lewis: So 'speaking machine' isn't really about learning to code, is it? It's about learning to be more intentional, more inclusive, more thoughtful, more human. Because the machine is always on, and it's always listening. The real question Maeda leaves us with is: What do we want it to hear?
Joe: That's a perfect question for everyone listening. The next time a piece of technology makes a decision for you—a job application gets filtered, a loan is denied, an ad follows you around—ask yourself, what did it hear about me? What did it hear about my world? Let us know your thoughts. We're always curious to hear from our community.
Lewis: It’s a call for us to be better teachers, because we have some very powerful students watching our every move.
Joe: This is Aibrary, signing off.