
Race After Technology
Introduction
Nova: Imagine you are standing in a public restroom. You reach your hands out under an automatic soap dispenser, waiting for that little dollop of soap. But nothing happens. You wave your hands, you move them closer, you pull them back. Still nothing. Then, a person with lighter skin walks up next to you, puts their hand under the sensor, and instantly, the soap flows. This isn't a hypothetical glitch. It is a real, documented phenomenon, and it is the perfect entry point into the world of Ruha Benjamin's groundbreaking book, Race After Technology.
Atlas: That soap dispenser story is wild because it feels so small, right? It is just soap. But when you realize the sensor literally could not see darker skin because it was only tested on lighter skin, you start to wonder what else the technology in our lives is failing to see. Is it just an accident, or is it something deeper?
Nova: That is exactly what Ruha Benjamin, a sociologist at Princeton, explores. She argues that these aren't just accidents or bugs. She calls it the New Jim Code. It is the idea that our modern, shiny, high-tech systems are often just digitizing the same old biases and inequalities we have been dealing with for centuries, but hiding them behind a veneer of objective math and science.
Atlas: So, instead of a sign on a door saying who can or cannot enter, it is an algorithm quietly making that decision in the background where no one can see it. That is a heavy thought to start with, but I am ready to dive in. Why does this matter so much right now?
Nova: It matters because we are outsourcing more and more of our lives to these systems, from who gets a loan, to who gets medical care, to how long someone stays in prison. If the code is biased, the consequences aren't just a lack of soap. They are life-altering. Today, we are breaking down how this happens and, more importantly, what we can do to stop it.
Key Insight 1
The New Jim Code
Nova: To understand Benjamin's work, we have to start with her central concept: the New Jim Code. It is a play on Michelle Alexander's The New Jim Crow, and through it on the Jim Crow laws that enforced racial segregation in the United States. But while Jim Crow was explicit and visible, the New Jim Code is often invisible and marketed as progress.
Atlas: Right, because we tend to think of computers as neutral. A computer doesn't have feelings, it doesn't have prejudices, so we assume the output must be fair. How does Benjamin challenge that?
Nova: She says that neutrality is a myth. Technology is made by people, and people live in a society shaped by history. If you train an AI on data from a biased world, the AI will learn those biases. Benjamin describes four dimensions of the New Jim Code. The first is engineered inequity, where bias is actually a feature, not a bug. The second is default discrimination, where the system simply assumes a certain type of user is the norm.
Atlas: Like the soap dispenser! The default user was someone with light skin, so the technology was built around that default. Everyone else was an afterthought.
Nova: Exactly. Then there is coded exposure, where being seen by technology can actually be a form of surveillance or harm. And finally, technological benevolence, which is when tech is designed to help people but ends up reinforcing stereotypes or causing harm anyway.
Atlas: That last one, technological benevolence, sounds almost counterintuitive. How can trying to help make things worse?
Nova: Think about an app designed to help police identify high-crime neighborhoods. On the surface, it sounds like a great way to allocate resources. But if the data it uses is based on historical over-policing of Black and Brown communities, the app will just tell the police to keep going back to those same neighborhoods. It creates a feedback loop that looks like objective data but is actually just reinforcing old patterns.
Atlas: So the tech isn't fixing the problem; it is just putting a high-tech stamp of approval on it. It makes the bias harder to challenge because people say, well, the algorithm said so.
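To make that feedback loop concrete, here is a minimal Python sketch of the dynamic the hosts are describing. Everything in it is invented for illustration: the neighborhood names, the starting arrest counts, and the patrol-allocation rule are assumptions, not data or code from any real policing system.

```python
import random

# Invented starting arrest counts per neighborhood. The gap here stands in
# for historical over-policing, not for any difference in actual crime.
arrests = {"Northside": 40, "Southside": 120}

TRUE_INCIDENT_RATE = 0.05   # assume the underlying rate is identical everywhere
PATROLS_PER_ROUND = 100

for _ in range(10):
    total = sum(arrests.values())
    for hood in list(arrests):
        # Allocate patrols in proportion to past arrests ("the data says so").
        patrols = int(PATROLS_PER_ROUND * arrests[hood] / total)
        # More patrols means more incidents observed and recorded there,
        # even though the underlying rate is the same in both places.
        arrests[hood] += sum(random.random() < TRUE_INCIDENT_RATE
                             for _ in range(patrols))

print(arrests)  # the recorded disparity persists and compounds, round after round
```

Note that nothing in the loop ever measures crime itself; the system only ever measures its own past attention, which is exactly why the output looks like objective data while reproducing the starting disparity.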
Case Study
The Beauty and the Beast of Data
Nova: Let's look at a specific example Benjamin highlights: the Beauty.AI contest. This was billed as the first international beauty pageant judged entirely by artificial intelligence. The creators thought that by using AI, they would eliminate human bias and find a truly objective standard of beauty.
Atlas: I can already see where this is going. If you ask a machine what is beautiful, it has to have a definition of beauty to start with, right?
Nova: Precisely. The AI was trained on a massive dataset of photos. But when the results came in, nearly all of the 44 winners were white. There were a handful of Asian winners and just one person with dark skin. The AI hadn't discovered an objective truth about beauty; it had simply learned the narrow, Eurocentric beauty standards present in the data it was fed.
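A tiny Python sketch of how this happens, with made-up numbers. If past human judgments are skewed, a model that simply learns the historical win rate per group reproduces the skew and reports it back as an objective score. The categories and counts below are assumptions for illustration only, not the contest's actual data.

```python
from collections import Counter

# Invented training labels: past pageant photos with judge-assigned outcomes.
# The imbalance in these labels is the standard the model will absorb.
training = ([("lighter", "winner")] * 90 + [("lighter", "non-winner")] * 310 +
            [("darker", "winner")] * 5 + [("darker", "non-winner")] * 395)

counts = Counter(training)
totals = Counter(group for group, _ in training)

for group in ("lighter", "darker"):
    # The model's "objective" beauty score is just the historical win rate.
    score = counts[(group, "winner")] / totals[group]
    print(f"{group}: predicted win probability {score:.1%}")
# 22.5% vs about 1%: the model has not discovered beauty,
# it has memorized the judges.
```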
Atlas: That is such a clear example of how the data we feed these systems acts like a mirror. If the mirror is tilted, the reflection is going to be distorted. But what about something more serious, like healthcare? Surely we are more careful there?
Nova: You would hope so, but Benjamin points to a massive study on an algorithm used by hospitals to identify which patients needed extra care. The algorithm was used for millions of people. It consistently ranked Black patients as healthier than white patients, even when the Black patients were actually much sicker.
Atlas: Wait, how does that even happen? Was the algorithm programmed to be racist?
Nova: No, and that is the scary part. The programmers didn't use race as a variable. They used health costs as a proxy for health needs. They assumed that if someone spends more on healthcare, they must be sicker. But because of systemic inequalities, less money is historically spent on the healthcare of Black patients, even when they have the same conditions. So the AI saw the lower spending and concluded they didn't need as much help.
Atlas: Wow. So by trying to be colorblind and just looking at the money, the algorithm actually ended up being more discriminatory. It took a social problem—unequal access to healthcare—and turned it into a mathematical rule that denied care to the people who needed it most.
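Here is a toy Python illustration of that proxy problem. The patients, condition counts, and dollar figures are invented; the point is only that a score built on past spending inherits every inequity baked into that spending, without ever touching race as a variable.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # a crude stand-in for actual sickness
    past_spending: float      # dollars billed in the prior year

def risk_score(p: Patient) -> float:
    # The algorithm never sees race or even health status; it predicts
    # future cost from past cost, assuming spending tracks need.
    return p.past_spending

# Invented figures: patient B is sicker but, due to unequal access,
# less money was historically spent on their care.
a = Patient("A", chronic_conditions=2, past_spending=8000.0)
b = Patient("B", chronic_conditions=4, past_spending=5000.0)

for p in sorted([a, b], key=risk_score, reverse=True):
    print(p.name, p.chronic_conditions, risk_score(p))
# Patient A is flagged for extra care first, despite B being sicker:
# a social inequity in spending has become a mathematical rule.
```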
Deep Dive
The Architecture of Inequity
Nova: This brings us to a really important point Benjamin makes: technology is not just a tool we use; it is an environment we inhabit. She talks about how these systems create a digital architecture that sorts and categorizes us. Think about how Amazon or Netflix suggests things to you. It feels convenient, but it is also boxing you in based on who it thinks you are.
Atlas: It is like a digital version of redlining. In the physical world, redlining was used to deny mortgages to people in certain neighborhoods. Now, it is happening through data points we don't even know are being collected.
Nova: Exactly. She calls this the architecture of inequity. And its logic usually stays hidden until the system fails. When that happens, like with the soap dispenser or the beauty contest, we call it a glitch. But Benjamin argues that these glitches are actually windows. They reveal the underlying logic of the system.
Atlas: So instead of just fixing the glitch and moving on, we should be asking why the glitch happened in the first place. What does the glitch tell us about who the system was built for?
Nova: Right. And she challenges the tech industry's obsession with speed and disruption. The motto "move fast and break things" often means breaking people's lives. When you prioritize speed over safety or equity, the people who are already marginalized are the ones who get broken first.
Atlas: It feels like there is this massive power imbalance. The people building the tech have all the control, and the people being affected by it often don't even know it is happening. How do we even begin to fight back against something so pervasive and invisible?
Nova: That is where her concept of abolitionist tools comes in. She doesn't just want us to be aware of the problem; she wants us to dismantle the systems that create it. It is not about making the New Jim Code a little bit nicer; it is about building something entirely different.
Key Insight 2
Abolitionist Tools
Nova: So, what does an abolitionist tool look like? Benjamin says it starts with something she calls informed refusal. This is the idea that we don't have to accept every new technology just because it is shiny and new. We can say no to systems that are harmful.
Atlas: That sounds powerful, but also difficult. How does an individual say no to a massive facial recognition system or a hiring algorithm?
Nova: It is about collective action. She points to movements like Data for Black Lives, which is a group of scientists and activists working to use data as a tool for liberation rather than oppression. They have successfully pressured cities to stop using certain types of predictive policing software. It is about taking the power of data back from the corporations and the state.
Atlas: I love that. It is moving from being a passive consumer of tech to being an active participant in how it is used. She also mentions the importance of imagination, right?
Nova: Yes! This is one of my favorite parts of the book. She argues that we are currently living inside someone else's imagination—the imagination of the tech moguls in Silicon Valley. To change the future, we have to expand our own imaginations. We have to dream of technologies that prioritize care, community, and justice instead of just efficiency and profit.
Atlas: It is like we need a new set of blueprints. If the current architecture is built on inequity, we need to design a new architecture from the ground up. She talks about the Ida B. Wells Just Data Lab at Princeton, which she founded. What do they do there?
Nova: The Just Data Lab brings together students, educators, and community organizations to develop data-based projects that address social challenges. They are literally building those abolitionist tools. For example, they might create a map that shows where environmental hazards are located in relation to marginalized communities, using data to advocate for policy changes. It is about making the invisible visible in a way that empowers people.
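To give a flavor of what such a project might look like in code, here is a hedged Python sketch. The file names, column names, and the two-kilometer threshold are all assumptions invented for this example; nothing here is the lab's actual code or data.

```python
import math
import pandas as pd

# Hypothetical inputs; the file and column names are assumptions.
hazards = pd.read_csv("hazard_sites.csv")    # columns: site, lat, lon
tracts = pd.read_csv("census_tracts.csv")    # columns: tract, lat, lon, pct_marginalized

def within_km(lat1, lon1, lat2, lon2, km=2.0):
    # Rough equirectangular distance; adequate at neighborhood scale.
    dlat = (lat2 - lat1) * 111.0
    dlon = (lon2 - lon1) * 111.0 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon) <= km

# Count hazard sites near each tract, then check whether exposure
# concentrates in marginalized communities.
tracts["nearby_hazards"] = [
    sum(within_km(t.lat, t.lon, h.lat, h.lon) for h in hazards.itertuples())
    for t in tracts.itertuples()
]
print(tracts[["tract", "pct_marginalized", "nearby_hazards"]]
      .sort_values("nearby_hazards", ascending=False))
```

The design choice is the point: the same data practices used to sort and surveil people can be turned around to document harm and support a policy argument.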
Conclusion
Nova: We have covered a lot of ground today, from racist soap dispensers to the deep-seated biases in healthcare algorithms. The core takeaway from Ruha Benjamin's Race After Technology is that technology is never neutral. It is a reflection of the society that creates it, and if that society has deep-seated inequities, the technology will too.
Atlas: It is a sobering realization, but also an empowering one. Because if technology is made by people, it can be changed by people. We aren't stuck with the New Jim Code. We have the power to demand better, to refuse harmful systems, and to imagine a world where technology actually serves everyone.
Nova: Exactly. The next time you encounter a glitch, don't just shrug it off. Ask yourself what that glitch is trying to tell you. Look behind the screen and question the logic of the systems you interact with every day. The future isn't something that just happens to us; it is something we build together.
Atlas: This book really changed how I look at my phone, my apps, and even the sensors in the world around me. It is a call to action for all of us to be more critical and more imaginative.
Nova: If you want to dive deeper, I highly recommend picking up a copy of Race After Technology. It is a dense but incredibly rewarding read that will give you the tools to navigate our high-tech world with your eyes wide open. Thank you for joining us on this journey through the New Jim Code.
Atlas: This is Aibrary. Congratulations on your growth!