
You Look Like a Thing and I Love You
How Artificial Intelligence Works and Why It's Making the World a Weirder Place
Introduction
Narrator: At a private meeting at Google, surrounded by the brightest minds in artificial intelligence, a legend of the field, Douglas Hofstadter, stood up and confessed his fear. He wasn't afraid of a robot uprising or a malevolent superintelligence. He was terrified of something far more subtle: that AI would trivialize humanity. He feared that the qualities we cherish most—creativity, emotion, intelligence itself—would be revealed as nothing more than simple, replicable algorithms. If a small chip could compose music as moving as Bach's, what would that say about Bach? What would it say about us?
This profound anxiety sits at the heart of Janelle Shane's book, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place. It serves as a guide through the strange, brilliant, and often hilariously incompetent world of modern AI, revealing that while these systems are becoming incredibly powerful, they are a long way from understanding the world in any way a human would recognize.
The Two Tribes of AI: Symbolists and Connectionists
Key Insight 1
Narrator: The quest to build a thinking machine has historically been split between two competing philosophies. The first, known as symbolic AI, believed intelligence could be built by giving a computer a set of explicit, human-understandable rules and symbols. A classic example was the General Problem Solver, or GPS, developed in the 1950s. Researchers would record students thinking aloud as they solved logic puzzles, like the famous "Missionaries and Cannibals" problem, and then try to program those same step-by-step reasoning processes into the machine. This approach saw intelligence as a top-down, logical process, like a detective following clues.
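As a concrete illustration of the symbolic style (our sketch, not code from the book), the Missionaries and Cannibals puzzle can be solved with nothing but explicit, human-readable rules and an exhaustive search over the states those rules allow:

```python
from collections import deque

# State: (missionaries on left bank, cannibals on left bank, boat on left bank?).
# Explicit, human-readable rules decide which states are legal and which moves exist.

def is_safe(m, c):
    # Missionaries are never outnumbered on either bank (unless none are there).
    return (m == 0 or m >= c) and ((3 - m) == 0 or (3 - m) >= (3 - c))

def successors(state):
    m, c, boat = state
    moves = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # who rides in the boat
    for dm, dc in moves:
        # The boat carries people away from whichever bank it is on.
        nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
        if 0 <= nm <= 3 and 0 <= nc <= 3 and is_safe(nm, nc):
            yield (nm, nc, not boat)

def solve(start=(3, 3, True), goal=(0, 0, False)):
    # Breadth-first search over the state space defined by the rules above.
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))

print(solve())  # prints the sequence of (missionaries, cannibals, boat) states
```

Every rule here was written by a human; the program's "intelligence" is entirely top-down, which is exactly what the symbolists had in mind.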
The second tribe, the connectionists, took a different approach. Inspired by the structure of the human brain, they believed intelligence wasn't about rules, but about the connections between simple processing units, or neurons. An early example was the Perceptron, a program that mimicked a single neuron. It would take its inputs, multiply each one by a weight representing the strength of that connection, add them up, and "fire" if the total crossed a threshold. It learned not by being given rules, but by nudging those weights up or down after every mistake, pure trial and error. This was a bottom-up approach, suggesting that intelligence could emerge from a network of simple parts, much like it does in the brain. For decades, these two tribes were in conflict, but it was the connectionist, brain-inspired approach that would eventually lay the groundwork for the modern AI revolution.
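To make the idea concrete, here is a minimal perceptron sketch (our illustration, not code from the book): a single weighted-sum "neuron" that learns the logical OR function by adjusting its weights after each error.

```python
import numpy as np

# A minimal perceptron: one artificial "neuron" learning the OR function.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all possible input pairs
targets = np.array([0, 1, 1, 1])                      # OR: fire if either input is on

weights = np.zeros(2)   # connection strengths, adjusted by trial and error
bias = 0.0              # shifts the firing threshold
learning_rate = 0.1

for epoch in range(20):
    for x, target in zip(inputs, targets):
        # Weighted sum of inputs; "fire" (output 1) if it crosses the threshold.
        output = 1 if np.dot(weights, x) + bias > 0 else 0
        # Nudge each weight in the direction that would have reduced the error.
        error = target - output
        weights += learning_rate * error * x
        bias += learning_rate * error

print(weights, bias)  # after training, the neuron reproduces OR on all four inputs
```

No rules about logic are ever programmed in; the correct behavior emerges purely from repeated weight adjustments.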
The Deep Learning Revolution Was Fueled by Cats and Competition
Key Insight 2
Narrator: For many years, the promise of brain-like AI remained unfulfilled. The breakthrough came with the rise of deep learning, which essentially involves stacking layers of artificial neurons into vast networks. A key innovation was the Convolutional Neural Network, or ConvNet, an architecture directly inspired by how the brain’s visual cortex processes information. The neuroscientists David Hubel and Torsten Wiesel had discovered that the brain recognizes images in a hierarchy: some neurons spot simple lines, others combine those lines into shapes, and still others combine those shapes into complex objects. ConvNets mimic this, with each layer learning to identify progressively more abstract features.
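As a rough illustration of that layered design, here is a minimal ConvNet sketch in PyTorch (the framework choice is ours; the book describes the architecture, not any particular code): two convolutional stages feeding a final classification layer.

```python
import torch
import torch.nn as nn

# A tiny ConvNet whose stages mirror the hierarchy described above:
# early layers detect simple edges, later layers combine them into shapes,
# and a final layer maps those features to candidate labels.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # stage 1: simple edges and blobs
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # stage 2: combinations of edges
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # features -> label scores

    def forward(self, x):
        x = self.features(x)            # progressively more abstract features
        return self.classifier(x.flatten(1))

scores = TinyConvNet()(torch.randn(1, 3, 32, 32))  # one fake 32x32 color image
print(scores.shape)  # torch.Size([1, 10]) -- one score per candidate label
```

Real systems like AlexNet stack many more such stages, but the hierarchical principle is the same.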
But this architecture needed two crucial ingredients to work: massive computing power and massive amounts of data. The data problem was solved by projects like ImageNet, a colossal database conceived by Stanford professor Fei-Fei Li. Her team collected millions of images from the internet—including, inevitably, an enormous number of cat photos—and used crowdsourced workers to label them. This created the perfect training ground. In 2012, at the annual ImageNet competition, a ConvNet named AlexNet shattered all previous records for image recognition. It wasn't just a small improvement; it was a revolution. The victory was so decisive that it immediately shifted the entire field of AI research, proving that with enough data and the right brain-inspired design, machines could learn to "see" with astonishing accuracy.
AI is a "Clever Hans" That Learns the Wrong Lessons
Key Insight 3
Narrator: Despite its successes, AI doesn't learn or understand like a human. It is a master pattern-matcher, but it often learns the wrong patterns, a phenomenon reminiscent of "Clever Hans," the horse that seemed to do math but was actually just reacting to his owner's subtle cues. In one experiment, a graduate student trained a neural network to identify photos containing animals. The system achieved incredible accuracy, but when the student investigated, he found the AI wasn't looking for animals at all. It had learned that photos of animals usually have blurry backgrounds due to the photographer's focus. The AI had become a brilliant blurry-background detector, not an animal detector.
This reveals a fundamental weakness: AI lacks common sense. It doesn't understand what it's learning. This leads to serious real-world problems. In 2015, Google's photo app infamously tagged a picture of two African Americans as "gorillas" because its training data was biased. AI systems are also vulnerable to "adversarial attacks," where tiny, human-imperceptible changes to an image can cause the system to make absurd errors, like identifying a school bus as an ostrich. These failures show that AI isn't seeing a bus or an animal; it's just matching a complex pattern of pixels to a label it has been trained on.
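Those "tiny, human-imperceptible changes" are not random noise. One standard recipe, the fast gradient sign method (a technique the summary itself doesn't name), nudges every pixel slightly in whichever direction most increases the model's error. A hedged sketch, which could be pointed at any differentiable classifier such as the TinyConvNet above:

```python
import torch
import torch.nn.functional as F

# Fast gradient sign method (FGSM): one common way to craft an adversarial image.
def fgsm_attack(model, image, true_label, epsilon=0.01):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # how wrong is the model right now?
    loss.backward()                                     # gradient of the loss w.r.t. the pixels
    # Shift every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.detach().clamp(0, 1)               # still looks identical to a person

# Example usage (assuming the TinyConvNet sketch from earlier):
# model = TinyConvNet()
# image = torch.rand(1, 3, 32, 32)
# adversarial = fgsm_attack(model, image, torch.tensor([3]))
```

A pixel budget this small is invisible to us, yet it is often enough to flip the predicted label, precisely because the model is matching pixel patterns rather than perceiving objects.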
Learning Through Trial, Error, and Tunnels
Key Insight 4
Narrator: Beyond just recognizing patterns, AI can also learn to act through a process called reinforcement learning. Inspired by how animals are trained with treats, an AI agent is placed in a simulated environment and given a goal, like maximizing a score. It isn't told how to achieve the goal; it learns through pure trial and error, receiving digital "rewards" for actions that get it closer to its objective.
The most famous demonstration of this was when Google's DeepMind division unleashed its AI on classic Atari video games. One of these was the game Breakout, where a player moves a paddle to bounce a ball and destroy a wall of bricks. The AI started by moving the paddle randomly, but after hundreds of games, it not only learned to play well but discovered a strategy that few human players had ever perfected. It learned to dig a tunnel through one side of the brick wall, sending the ball behind the wall where it could bounce around and destroy bricks rapidly for a massive score. The AI didn't "understand" the concept of a tunnel or a wall; it simply discovered, through relentless trial and error, that this sequence of actions led to the biggest reward. This showed the incredible power of reinforcement learning in controlled, rule-based environments.
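Strip away the Atari graphics and the learning loop is surprisingly small. Below is a minimal tabular Q-learning sketch on a toy five-cell corridor (our illustration; DeepMind's system replaced the table with a deep network, but the reward-driven update is the same idea).

```python
import random

# Tabular Q-learning on a toy corridor: start at cell 0, reward at cell 4.
# Actions: 0 = step left, 1 = step right. The agent is given no rules,
# only rewards, and learns which action to prefer in each cell.
N_STATES, GOAL = 5, 4
q_table = [[0.0, 0.0] for _ in range(N_STATES)]   # expected reward per (state, action)
alpha, gamma, epsilon = 0.1, 0.9, 0.1             # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly pick the best-known action, occasionally explore at random.
        if random.random() < epsilon:
            action = random.randint(0, 1)
        else:
            action = q_table[state].index(max(q_table[state]))
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Nudge the estimate toward reward + discounted value of the next state.
        best_next = max(q_table[next_state])
        q_table[state][action] += alpha * (reward + gamma * best_next - q_table[state][action])
        state = next_state

print([row.index(max(row)) for row in q_table[:GOAL]])  # learned policy: all 1s, i.e. "go right"
```

Nothing in the code knows what a corridor or a goal is; the "go right" policy, like the Breakout tunnel, simply falls out of chasing rewards.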
The Barrier of Meaning and the Strangeness of Language
Key Insight 5
Narrator: The greatest challenge for AI is not games or images, but human language. Understanding language requires more than just knowing definitions; it requires a vast, implicit knowledge of the world, of context, and of human nature. To demonstrate this, one can take a simple story full of sarcasm and idioms and run it through an online translator. When a character in a story sarcastically says a burnt hamburger is "great," a translator might take that literally. When the story is translated into another language and then back to English, the original meaning is often completely mangled, because the AI lacks the real-world model to understand the situation.
Modern natural-language-processing (NLP) systems use techniques like word embeddings, which represent words as mathematical vectors. This allows them to perform stunning feats of analogy, like calculating that "king - man + woman" results in a vector very close to "queen." But this is still just a form of sophisticated statistical pattern matching. These same systems, trained on human text, also absorb our biases, calculating that "programmer - man + woman" lands near "homemaker." Without a true understanding of what a king, a woman, or a programmer actually is, the AI is just reflecting the patterns it has seen, blind to the meaning behind the words.
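Concretely, each word becomes nothing more than a list of numbers, and an "analogy" is just arithmetic on those lists followed by a nearest-neighbor lookup. A toy sketch with hand-made three-dimensional vectors (real systems learn hundreds of dimensions from billions of words):

```python
import numpy as np

# Toy word vectors, invented by hand for illustration; real embeddings are learned from text.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.2, 0.5, 0.5]),
    "road":  np.array([0.5, 0.2, 0.2]),
}

def cosine(a, b):
    # Similarity between two vectors, ignoring their length.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "king - man + woman": pure vector arithmetic, no understanding involved.
target = vectors["king"] - vectors["man"] + vectors["woman"]

# The nearest remaining neighbor comes out as "queen".
best = max(
    (w for w in vectors if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(vectors[w], target),
)
print(best)  # queen
```

The biased analogies work exactly the same way: if the training text pairs "woman" with "homemaker" more often than with "programmer," the arithmetic faithfully reproduces that association.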
Conclusion
Narrator: The single most important takeaway from You Look Like a Thing and I Love You is that artificial intelligence, for all its power, is fundamentally alien. It does not think like us. It achieves its goals through brute-force computation, relentless trial and error, and a kind of statistical pattern-matching that is both incredibly effective and profoundly dumb. It has no common sense, no real-world understanding, and no grasp of the meaning behind the data it so expertly processes.
As we continue to weave this strange and powerful technology into the fabric of our society—in our cars, our hospitals, and our justice systems—we are faced with a critical challenge. The goal is not simply to build an AI that is smarter, but one that is wiser. The book leaves us with an urgent question: How can we ensure that AI aligns with human values when it doesn't even understand what a human is?