
Exposing the Coded Gaze

10 min

My Mission to Protect What Is Human in a World of Machines

Introduction

Narrator: Imagine you’re a graduate student at MIT, working on a futuristic art project. The goal is simple: use a camera and software to track your face, allowing you to paint on a digital canvas with your smile. But the software can’t see you. You try adjusting the lights, tilting your head, but the system only responds when your lighter-skinned project partner steps in. Frustrated, you grab a plain white mask, hold it up to your face, and suddenly, the machine sees. The mask is recognized, but your own face, the face of a dark-skinned Black woman, remains invisible.

This isn't a hypothetical scenario; it was the jarring reality for Joy Buolamwini. This experience, which she termed the "coded gaze," sparked a journey from an idealistic computer scientist to a world-renowned activist. In her book, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini exposes how artificial intelligence, often presented as objective and neutral, can inherit and amplify our worst societal biases, leading to discrimination, exclusion, and real-world harm.

The Personal Discovery of the "Coded Gaze"

Key Insight 1

Narrator: Buolamwini’s journey began not with a grand theory, but with a series of frustrating personal encounters with technology that refused to see her. As an undergraduate at Georgia Tech, she worked on a social robot named Simon. Her project was to program Simon to play a game of peekaboo, which required the robot to first detect a human face. But Simon consistently failed to detect her face. After trying everything from turning on all the lights to tilting her head, she asked her fair-skinned roommate to try. The software worked flawlessly on her roommate’s face. At the time, Buolamwini dismissed it, recalling how childhood cameras often failed to properly expose her dark skin, leaving only her eyes and teeth visible.

The problem resurfaced years later in Hong Kong, where she encountered another social robot, Autom, which also failed to detect her face. She soon discovered it was using the same biased software library. The final straw came at MIT with her "Upbeat Walls" art project. Once again, the face-tracking software worked for her lighter-skinned classmates but failed on her. It was only when she put on a white mask that the system could see a "face." This wasn't just a technical glitch; it was a pattern. Buolamwini realized that the "default" human coded into these systems was not universal. This recurring, personal experience of being rendered invisible by technology was the catalyst for her life's work, leading her to coin the term "coded gaze" to describe how the prejudices of the people who create technology become embedded in the systems they build.
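To make concrete what "the software can't see you" means in practice, here is a minimal sketch of the kind of off-the-shelf face-detection call that student projects like these typically rely on. This is not Buolamwini's actual code; it uses OpenCV's bundled Haar-cascade model, and the input filename is a hypothetical placeholder. The point is that the detector's behavior is fixed by whatever data its pre-trained model was built from: if that data skews toward lighter-skinned faces, some faces simply return no detections.

```python
# Minimal sketch of an off-the-shelf face-detection call (illustrative only,
# not Buolamwini's code). Detection quality depends entirely on the data the
# bundled model was trained on.
import cv2  # OpenCV, a common source of pre-trained face detectors

# Load OpenCV's bundled Haar-cascade frontal-face model
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("selfie.jpg")  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns a list of (x, y, w, h) boxes; an empty list means "no face seen"
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Faces detected: {len(faces)}")
```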

Defaults Are Not Neutral: Uncovering the Historical Roots of Bias

Key Insight 2

Narrator: To understand why AI systems were biased, Buolamwini looked to the past and discovered that defaults are never neutral. They reflect the priorities, preferences, and prejudices of their creators. A powerful historical example of this is the "Shirley card." In the mid-20th century, photo labs used an image of a white woman named Shirley as the standard to calibrate skin tones, color, and light in photographs. Because the "normal" reference point was a fair-skinned woman, the chemical composition of film was optimized for her skin tone. As a result, photos of Black people were often underexposed, their features lost in shadow.

It wasn't until furniture and chocolate companies complained that their dark-hued products looked muddy in advertisements that Kodak finally created a film stock that could capture a wider range of brown tones. Better representation for people of color was merely a side effect of commercial interests. This history of the "coded gaze" in analog technology was directly inherited by the digital world. Early digital cameras and the massive photo datasets used to train modern AI were built on this same biased foundation. Buolamwini argues that this shows how AI bias is not a new problem, but a continuation of historical exclusion embedded in the very tools we use to build the future.

From Data to Destiny: How Flawed Benchmarks Create "Power Shadows"

Key Insight 3

Narrator: Buolamwini knew she needed to prove that the "coded gaze" was a systemic issue in commercial AI. To do this, she had to audit the systems built by tech giants like IBM, Microsoft, and Amazon. However, she quickly discovered that the "gold standard" datasets used to train and benchmark these systems were deeply flawed. Datasets like "Labeled Faces in the Wild" were overwhelmingly composed of lighter-skinned male faces. She called these "pale male datasets."

Recognizing that you can't test for bias with a biased ruler, she meticulously created her own, more balanced dataset called the Pilot Parliaments Benchmark. It featured images of parliament members from countries with more gender-balanced leadership and a wider range of skin tones. With this new benchmark, she conducted her landmark "Gender Shades" study. The results were staggering. While the companies boasted high overall accuracy, her intersectional analysis revealed a different story. For IBM’s system, the error rate for classifying the gender of light-skinned men was less than 1%. For dark-skinned women, it was a shocking 34.7%. All the systems she tested performed worst on dark-skinned women. This proved that aggregate accuracy metrics were hiding dramatic failures for specific groups. The data used to train AI is its destiny, and when that data reflects societal "power shadows"—the overrepresentation of whiteness and maleness—the resulting AI will inevitably perpetuate and amplify those same biases.
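The statistical point here, that an impressive aggregate accuracy can coexist with severe failures for a small subgroup, is worth seeing worked out. The sketch below is not the Gender Shades pipeline; the counts are illustrative assumptions chosen to echo the skew of a "pale male dataset" and the under-1% versus 34.7% gap described above. It simply shows why error rates must be reported per intersectional group rather than in aggregate.

```python
# Illustrative sketch of disaggregated evaluation (not the Gender Shades code).
# The record counts are assumptions: a large, well-served group and a small,
# poorly served one. The aggregate error rate looks modest; the per-group
# breakdown reveals the gap.
from collections import Counter

# Hypothetical audit records: (intersectional group, classifier was correct?)
records = (
    [("lighter-skinned men", True)] * 1695
    + [("lighter-skinned men", False)] * 5      # ~0.3% error
    + [("darker-skinned women", True)] * 196
    + [("darker-skinned women", False)] * 104   # ~34.7% error
)

totals, errors = Counter(), Counter()
for group, correct in records:
    totals[group] += 1
    if not correct:
        errors[group] += 1

overall_error = sum(errors.values()) / len(records)
print(f"Aggregate error rate: {overall_error:.1%}")          # ~5.5%, looks fine
for group in totals:
    print(f"  {group}: {errors[group] / totals[group]:.1%}")  # exposes the gap
```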

The Power of the Poet: Using Art and Counter-Demos to Humanize Harm

Key Insight 4

Narrator: Academic papers, while crucial for scientific validation, rarely reach the public or policymakers. To make the harms of AI tangible, Buolamwini embraced her identity as a "Poet of Code." She turned to art and storytelling to humanize her findings. Her most powerful creation was the video poem, "AI, Ain't I a Woman?" In it, she tested commercial AI systems on the faces of iconic Black women, including Michelle Obama, Serena Williams, Oprah Winfrey, and the abolitionist Sojourner Truth. The results were offensive and absurd: Serena Williams was labeled "male," Sojourner Truth was labeled a "gentleman," and a young Oprah wasn't detected at all.

By pairing these dehumanizing labels with a powerful spoken-word performance, Buolamwini created what she calls a "counter-demo." Unlike a typical tech demo that celebrates a product's capabilities, a counter-demo exposes its failures and harms. She drew inspiration from historical figures like Sojourner Truth and Frederick Douglass, who used the then-new technology of photography to create dignified portraits of Black people, countering the racist caricatures of their time. This artistic approach moved the conversation from abstract metrics to visceral, emotional impact, allowing a global audience to bear witness to the coded gaze and feel the sting of being misidentified and erased.

From Protest to Policy: The Fight for Algorithmic Justice

Key Insight 5

Narrator: Armed with rigorous data and powerful stories, Buolamwini took her fight from the lab to the halls of power. When she and her research partner, Deborah Raji, published a follow-up study, "Actionable Auditing," that showed Amazon's Rekognition system had significant biases, the company aggressively tried to discredit their work. But a coalition of over 75 academics, civil society groups like the ACLU, and even a Turing Award winner rallied to defend their research, which was later vindicated by a landmark government study.

This experience highlighted the importance of collective action. Buolamwini then applied her expertise to support a group of tenants in Brooklyn, mostly elderly women of color, who were fighting their landlord's plan to install a facial recognition system in their building. She provided an amicus letter that helped them win their case. The culmination of this advocacy came when she testified before the U.S. Congress. In a powerful exchange with Representative Alexandria Ocasio-Cortez, she explained the "pale male dataset" problem and recommended a moratorium on government use of facial recognition. This work, alongside that of countless other advocates, contributed to the momentum that led to the White House releasing its Blueprint for an AI Bill of Rights, signaling a major shift from questioning AI bias to actively addressing it through policy.

Conclusion

Narrator: The single most important takeaway from Unmasking AI is that artificial intelligence is not an objective, god-like force; it is a mirror reflecting the values, priorities, and biases of the society that creates it. Joy Buolamwini’s journey reveals that the "coded gaze" is not a series of isolated glitches but a systemic feature of technology built on flawed data and historical inequality. The harms are not hypothetical—they are happening now, resulting in wrongful arrests, discriminatory hiring, and exclusion from essential services.

The book leaves us with a profound challenge. The future of AI is not yet written. Will we allow this powerful technology to be wielded by a privileged few, amplifying injustice and deepening societal divides? Or will we, as Buolamwini urges, join the fight for algorithmic justice, demand accountability, and build a future where technology serves all of humanity? The choice, she makes clear, is ours.
