What Computers Still Can't Do

A Critique of Artificial Reason

Introduction

Nova: Imagine it is 1965. The world is buzzing with the promise of the space age, and in the basement labs of MIT and Stanford, a group of brilliant scientists is making a bold prediction. They claim that within twenty years, machines will be capable of doing any work a man can do. They are building the first artificial intelligence, and they are convinced they have found the secret to the human mind.

Nova: Exactly. Enter Hubert Dreyfus. He was not a computer scientist or a mathematician. He was a philosopher at Berkeley who specialized in existentialism and phenomenology. He looked at what the AI pioneers were doing and basically told them they were chasing a ghost. He wrote a report for the RAND Corporation with the provocative title Alchemy and Artificial Intelligence. He argued that trying to create human-level intelligence with digital computers was like trying to reach the moon by climbing a taller and taller tree.

Nova: That is the heart of the story we are diving into today. Dreyfus eventually turned that report into a legendary book called What Computers Can't Do, and later updated it to What Computers Still Can't Do. It is one of the most controversial and influential critiques in the history of technology. He did not just say the technology was not ready; he said the very philosophy behind it was fundamentally broken. Today, we are going to explore why Dreyfus thought computers would always fail to be human, and whether his warnings still hold up in the age of ChatGPT and deep learning.

Key Insight 1

The Four Flawed Assumptions

Nova: To understand Dreyfus, you have to understand what he was fighting against. In the sixties and seventies, the dominant approach was something we now call GOFAI, or Good Old Fashioned AI. The idea was that the mind is essentially a device that processes symbols according to formal rules. Like a giant game of chess.
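
To make that symbolic picture concrete, here is a minimal, purely illustrative Python sketch of the kind of rule-following system GOFAI imagined the mind to be. The facts, predicates, and rules are invented for this example, not drawn from any historical program.

```python
# A minimal sketch (invented facts, predicates, and rules) of the GOFAI picture:
# intelligence as discrete symbols transformed by explicit if-then rules.

facts = {("hungry", "agent"), ("in_kitchen", "agent"), ("edible", "apple")}

rules = [
    # (preconditions, conclusion): if every precondition is a known fact, add the conclusion.
    ({("hungry", "agent"), ("edible", "apple")}, ("goal", "eat_apple")),
    ({("goal", "eat_apple"), ("in_kitchen", "agent")}, ("action", "grasp_apple")),
]

def forward_chain(facts, rules):
    """Keep firing rules whose preconditions are all present until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for preconditions, conclusion in rules:
            if preconditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# Everything the system "knows" has to be spelled out in advance as a discrete symbolic fact.
```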

Nova: He argued that this entire project was built on four shaky pillars, which he called the four assumptions of AI. The first was the biological assumption. AI researchers assumed the brain was just a collection of on-off switches, like a digital circuit. Dreyfus pointed out that we had no evidence the brain actually works that way at a fundamental level.

Nova: That leads to his second point, the psychological assumption. This is the belief that the mind can be viewed as a device that operates on bits of information according to formal rules. Dreyfus argued that human psychology isn't just a series of calculations. We don't experience the world as a stream of data points; we experience it as a meaningful whole.

Nova: The third is the epistemological assumption. This is the idea that all knowledge can be formalized. Basically, if you know something, you must be able to write it down as a set of rules or instructions. Dreyfus, following philosophers like Heidegger and Wittgenstein, argued that most of what we know is actually tacit. It is stuff we can't put into words.

Nova: Exactly! And the final one is the big one: the ontological assumption. This is the belief that the world itself consists of a set of independent, discrete facts. AI researchers thought they could just build a massive database of every fact in the world and the computer would be smart. Dreyfus said the world doesn't work like that. Facts only make sense within a context, and context is infinite.

Key Insight 2

The Body and the World

Nova: That is a great question, and it is where Dreyfus gets really deep into phenomenology. He argued that our intelligence is embodied. We don't just observe the world from a distance; we are 'in' the world. Think about a hammer. When you are an expert carpenter, you don't think about the hammer as an object with a certain weight and a certain handle length. The hammer becomes an extension of your arm. It becomes transparent.

Nova: Precisely. Dreyfus argued that computers can never have this experience because they don't have needs, desires, or a physical presence that 'cares' about the world. For a computer, everything is an object to be processed. It can never have that 'ready-to-hand' relationship with tools or the environment. It is always stuck in the 'present-at-hand' mode, looking at things as detached data points.

Nova: Dreyfus would say no, because the robot is still just processing sensor data according to a program. It doesn't have a 'situation.' For humans, our situation determines what is relevant to us. If you walk into a room, you don't see ten thousand different objects and then filter them. You see a place to sit if you're tired, or a glass of water if you're thirsty. Your body and your needs pre-sort the world for you.

Nova: Exactly. This is often called the Frame Problem in AI. How do you tell a computer what is relevant in a changing world? If I tell a robot to go get a glass of water, how does it know that it shouldn't stop to count the number of tiles on the floor or check the temperature of the lightbulbs? To a computer, those are all just facts. To a human, they are obviously irrelevant to the task. Dreyfus argued that relevance isn't something you can program with rules.
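
As a toy illustration of that relevance problem (hypothetical code, not anything Dreyfus wrote), here is a sketch of a rule-based agent whose notion of what matters is nothing more than a hand-enumerated list supplied by its programmer.

```python
# A hypothetical, toy illustration of the frame/relevance problem: the agent only
# "knows" which facts matter because a programmer enumerated them by hand.

world_state = {
    "glass_location": "kitchen counter",
    "tap_working": True,
    "floor_tile_count": 412,     # irrelevant to the task, but just another fact to the program
    "lightbulb_temp_c": 41.5,    # likewise irrelevant, yet nothing marks it as such
    "dog_is_barking": False,
}

# Relevance is not discovered; it is hard-coded by whoever wrote the program.
RELEVANT_TO_FETCH_WATER = {"glass_location", "tap_working"}

def fetch_water_plan(state):
    """Plan the task while ignoring everything outside the hand-written relevance list."""
    considered = {k: v for k, v in state.items() if k in RELEVANT_TO_FETCH_WATER}
    if considered["tap_working"]:
        return [f"go to {considered['glass_location']}", "fill glass at tap", "return with glass"]
    return ["report: no water available"]

print(fetch_water_plan(world_state))
# Any contingency the programmer did not anticipate (frozen pipe, broken glass)
# falls outside the list, which is exactly the brittleness Dreyfus predicted.
```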

Key Insight 3

The Five Stages of Skill

Nova: One of the most practical parts of Dreyfus's work is the model of skill acquisition he developed with his brother, Stuart. They wanted to show how humans actually learn, and why computers are basically stuck at the beginner level.

Nova: Yes. It starts with the Novice. A novice follows strict, context-free rules. Think of a student pilot using a checklist. They don't have a 'feel' for the plane yet; they just follow the steps. This is where computers excel. They are the ultimate novices.

Nova: The next stage is the Advanced Beginner. You start to recognize 'situational' elements. You've seen enough cases to notice patterns that aren't in the rulebook. By the time you reach Competence, you are making choices and feeling a sense of responsibility for the outcome. But you're still thinking analytically.

Nova: After Competence comes Proficiency, and finally Expertise, and that is the crucial jump. An expert doesn't follow rules. An expert doesn't even 'decide' in the traditional sense. They just see what needs to be done and they do it. A grandmaster in chess doesn't look at the board and calculate every possible move like a computer does. They look at the board and see a 'weakness' or a 'strong position.' The right move just occurs to them.

Nova: That was his core critique of symbolic AI. He argued that expertise is non-representational. It's a physical and intuitive 'know-how' rather than a 'know-that.' When a pro basketball player takes a shot, they aren't calculating the arc and the velocity. Their body has 'absorbed' the skill through thousands of hours of practice. Dreyfus believed that because computers lack this ability to absorb skills into a physical being, they would always be brittle. They would always fail when they hit a situation the programmer didn't anticipate.

Key Insight 4

Did Dreyfus Win?

Nova: It's a fascinating debate. When IBM's Deep Blue beat Garry Kasparov in 1997, Dreyfus pointed out that the computer didn't play chess like a human. It used brute-force calculation. It didn't 'understand' the game; it just searched a massive tree of possibilities. He argued that this proved his point: the computer was still just a high-speed novice following rules, even if it could win.
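
For listeners who want to see what 'searching a massive tree of possibilities' means in practice, here is a bare minimax sketch in Python. It is only the textbook idea, not Deep Blue's actual engine, which added alpha-beta pruning, a hand-tuned evaluation function, and custom hardware.

```python
# Bare minimax lookahead: "playing" reduces to exhaustively scoring future positions.
# This is only the textbook idea, not Deep Blue's actual engine.

def minimax(state, depth, maximizing, moves_fn, apply_fn, evaluate_fn):
    """Return the best score reachable from `state`, searching `depth` plies ahead."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return evaluate_fn(state)  # a number standing in for how "good" the position is
    child_scores = (
        minimax(apply_fn(state, move), depth - 1, not maximizing,
                moves_fn, apply_fn, evaluate_fn)
        for move in moves
    )
    return max(child_scores) if maximizing else min(child_scores)

# Nowhere in this procedure is there a perceived "weakness" or "strong position";
# there are only numbers attached to millions of hypothetical leaf positions.
```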

Nova: You've hit on the big shift. In the nineties, Dreyfus actually became more interested in connectionism, the approach behind today's neural networks. He thought it was a much better approach than symbolic AI because it didn't rely on formal rules. However, he still had a major reservation. He argued that even a neural network is just a mapping of inputs to outputs. It still lacks a body, it still lacks a world, and it still doesn't 'care' about anything.
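
Dreyfus's reservation can be put in code as well: a trained network, however large, is a fixed mapping from input numbers to output numbers. Below is a minimal NumPy forward pass with invented, untrained weights, offered only as an illustration of that point.

```python
# Even a trained neural network is, in the end, a fixed mapping from input numbers
# to output numbers. A minimal NumPy forward pass with invented (untrained) weights.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # hidden layer -> output layer

def forward(x):
    hidden = np.maximum(0.0, x @ W1 + b1)       # ReLU nonlinearity
    return hidden @ W2 + b2                     # raw output scores

situation = rng.normal(size=4)                  # a "situation" reduced to four numbers
print(forward(situation))
# The mapping transforms vectors; it does not need, want, or care about anything.
```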

Nova: Exactly. He would say that today's large language models are still 'disembodied.' They have access to the entire library of human expression, but they don't have the human experience that produced that expression. They can simulate the 'what' of human intelligence, but they lack the 'being' of it. He called this the difference between 'simulated' and 'real' intelligence. A simulation of a fire doesn't burn anything.

Nova: That is the ultimate question of modern AI. Dreyfus would argue it matters because the lack of a real 'world' makes the AI fundamentally unstable. It will always have 'hallucinations' or 'edge cases' where it does something completely nonsensical because it doesn't have the grounding of common sense that comes from living in a physical body. We see this today with LLMs—they can be brilliant one second and then fail at basic logic the next.

Conclusion

Nova: Hubert Dreyfus passed away in 2017, just as the current AI revolution was really taking off. He spent fifty years being the thorn in the side of the AI community, and while many researchers hated him at first, many eventually admitted he was right about the failures of the early symbolic approach.

Nova: Well said. What Computers Still Can't Do challenges us to think about what makes us unique. It suggests that our weaknesses—our needs, our emotions, our physical limitations—are actually the very things that give us our intelligence. We don't just solve problems; we inhabit a world. And that is something a machine, no matter how fast its processor, may never be able to do.

Nova: And that sense of inhabiting a world is something no algorithm can simulate. Thank you for joining us on this deep dive into the philosophy of AI. If you're interested in the intersection of technology and the human spirit, Dreyfus's work is still essential reading.

Nova: This is Aibrary. Congratulations on your growth!
