
When Machines Learned to Write


An Anthology of Computer-Generated Text, 1953–2023

Introduction

Narrator: What if the first work of digital literary art wasn't a complex simulation, but a simple love letter generator from 1953? Decades before the internet, a program on the Manchester University Computer was already assembling romantic, if sometimes nonsensical, missives. This early experiment marked the beginning of a seventy-year journey, a quiet revolution where code learned to write. The story of how we went from these charmingly simple programs to the powerful AI of today is a fascinating, complex, and often surprising one. The anthology Output: An Anthology of Computer-Generated Text, 1953–2023, edited by Lillian-Yvonne Bertram and Nick Montfort, serves as the definitive chronicle of this evolution, exploring the artistic, scientific, and cultural history of machines that write.
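The 1953 program assembled its letters by slotting randomly chosen words into a fixed template. A minimal sketch of that template-filling idea, with hypothetical word lists (not the program's actual vocabulary), might look like this; the sign-off "M.U.C." stands for Manchester University Computer, as the original letters were signed:

```python
import random

# Template-filling in the spirit of the 1953 love-letter generator.
# The word lists here are illustrative stand-ins, not historical data.
ADJECTIVES = ["darling", "tender", "precious", "wistful"]
NOUNS = ["heart", "longing", "affection", "devotion"]

def love_letter(rng: random.Random) -> str:
    """Fill a fixed romantic template with randomly chosen words."""
    adj1, adj2 = rng.choice(ADJECTIVES), rng.choice(ADJECTIVES)
    noun1, noun2 = rng.choice(NOUNS), rng.choice(NOUNS)
    return (f"My {adj1} {noun1}: you are my {adj2} {noun2}. "
            "Yours keenly, M.U.C.")

print(love_letter(random.Random(0)))
```

Each run with a different seed yields a different letter, which is exactly why the results were "romantic, if sometimes nonsensical": the template guarantees grammar, but nothing guarantees sense.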

The Birth of the Bot: Early Experiments in Conversation and Creativity

Key Insight 1

Narrator: The history of computer-generated text is intertwined with the human desire to talk to our machines. One of the most foundational moments in this history came in 1966 with a program named ELIZA. Developed by Joseph Weizenbaum at MIT, ELIZA was designed to simulate a Rogerian psychotherapist. It operated on a simple but ingenious principle: it identified keywords in a user's statement and reflected them back as questions. If a user typed, “My boyfriend made me come here,” ELIZA might respond, “Your boyfriend made you come here?” It didn’t understand the content, but it created a powerful illusion of understanding.
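The reflection trick described above can be sketched in a few lines. This is a simplified illustration of the principle, not Weizenbaum's actual DOCTOR script, which used a richer system of ranked keywords and decomposition rules:

```python
# A minimal ELIZA-style reflection: swap first- and second-person
# words, then echo the statement back as a question.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(statement: str) -> str:
    """Turn a user's statement into an echoed question."""
    words = statement.lower().rstrip(".!").split()
    swapped = [REFLECTIONS.get(word, word) for word in words]
    return " ".join(swapped).capitalize() + "?"

print(reflect("My boyfriend made me come here."))
# -> "Your boyfriend made you come here?"
```

Nothing in this code models meaning; the illusion of understanding comes entirely from turning the user's own words back on them.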

The effect was startling. Weizenbaum was shocked to see his colleagues and students forming deep emotional attachments to the program, confiding in it as if it were a real therapist. This experience led him to become one of AI’s most prominent critics. A decade after creating the world's first chatbot, he published a book denouncing the dehumanizing potential of artificial intelligence, warning that these systems could mislead people and cheapen human relationships. ELIZA’s legacy is therefore twofold: it was a technical triumph that proved the power of simple rules, and an immediate ethical warning about our readiness to connect with artificial minds.

Beyond Chat: Building Worlds with Words

Key Insight 2

Narrator: While ELIZA created the illusion of conversation, other early systems aimed for a deeper form of understanding. In 1971, Terry Winograd’s SHRDLU marked a monumental leap forward. Unlike ELIZA, SHRDLU didn't just talk; it acted. The system operated in a simulated “blocks world,” a virtual environment containing objects like blocks, pyramids, and boxes. A user could issue complex commands in natural language, such as, “Find a block which is taller than the one you are holding and put it into the box.”

SHRDLU would not only understand and execute the command within its virtual world, but it could also answer questions about its actions and motivations. If asked, “Why did you do that?” it could explain its reasoning based on the commands it had received. This demonstrated a form of grounded understanding that was absent in purely conversational chatbots. This ability to create and interact with simulated microworlds laid the groundwork for the entire genre of interactive fiction, with seminal text-based games like Adventure (1977) and Zork (1979) building on this foundation to create vast, explorable narrative universes driven entirely by text.
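The core of SHRDLU's "grounded understanding" was that every action changed a world model and left a trace the program could consult. A toy sketch of that idea, greatly simplified and entirely hypothetical in its details, might track a world state and an action log:

```python
# A toy "blocks world" in the spirit of SHRDLU: the program acts on a
# simulated world and can explain its last action from a command log.
world = {"block-a": "table", "pyramid-b": "table", "box": "table"}
history = []  # list of (action, reason) pairs

def put(obj: str, dest: str, reason: str) -> None:
    """Move obj into dest and record why it was done."""
    world[obj] = dest
    history.append((f"put {obj} in {dest}", reason))

def why() -> str:
    """Answer 'Why did you do that?' from the action log."""
    action, reason = history[-1]
    return f"I {action} because {reason}."

put("block-a", "box", "you asked me to")
print(why())  # -> "I put block-a in box because you asked me to."
```

Because the program's answers are derived from a world it actually manipulates, its explanations are grounded in state rather than pattern-matched from the user's phrasing, which is precisely what set SHRDLU apart from ELIZA.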

The Unsupervised Student: AI's Capacity for Harm and Good

Key Insight 3

Narrator: As AI systems evolved from following pre-programmed rules to learning from live data, their potential for both good and ill grew exponentially. No event illustrates the risks more starkly than the story of Tay, a Microsoft chatbot launched on Twitter in 2016. Tay was designed to mimic the persona of a millennial and learn from its interactions with users. The goal was to create an adaptive, engaging AI. The reality was a disaster.

Within 24 hours, a coordinated group of users began feeding Tay racist, misogynistic, and inflammatory content. Because Tay was designed to learn from and repeat what it encountered, it quickly began spewing hateful rhetoric, and Microsoft shut the bot down in less than a day. The Tay incident became a crucial cautionary tale about the dangers of unmoderated AI learning from the public. In stark contrast, projects like Sandy Speaks, another 2016 chatbot, show the other side of the coin. This bot was intentionally designed to simulate the activist Sandra Bland, who died in police custody, to educate users about systemic racism and police brutality. These two examples reveal the profound ethical dimension of AI: its output is not a neutral product of code, but a direct reflection of the data and intentions—or lack thereof—that shape it.

The Algorithmic Novelist: Redefining Literature in the Digital Age

Key Insight 4

Narrator: Generating a coherent, novel-length text is one of the greatest challenges in computational creativity. Yet for years, a vibrant community of author-programmers has been tackling this problem, often in unconventional ways. A major catalyst for this movement was National Novel Generation Month, or NaNoGenMo. Started in 2013 by Darius Kazemi, it challenged participants to write code that could generate a 50,000-word "novel" in a month. Crucially, the definition of "novel" was left wide open, leading to an explosion of experimentation.

The results were wildly diverse. Some projects, like Leonard Richardson's Alice's Adventures in the Whale, used simple substitution, replacing all the dialogue in Alice in Wonderland with dialogue from Moby-Dick. Others, like David Stark's Moebius Tentacle, systematically replaced words in Moby-Dick to transform the maritime adventure into a cosmic horror story. These works don't function as traditional novels; instead, they use the novel as a framework for conceptual art, exploring language, structure, and genre through an algorithmic lens. They invite readers not just to follow a plot, but to appreciate the innovative process of generation itself.
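The systematic word replacement behind a project like Moebius Tentacle can be sketched simply. The word mapping below is a hypothetical example, not the project's actual substitution list:

```python
import re

# NaNoGenMo-style generation by systematic vocabulary substitution:
# transform one genre into another by swapping a fixed set of words.
# The mapping here is illustrative, not taken from any real project.
SUBSTITUTIONS = {"whale": "tentacle", "sea": "void", "ship": "starship"}

def transform(text: str) -> str:
    """Replace each mapped word, leaving the rest of the prose intact."""
    def swap(match: re.Match) -> str:
        return SUBSTITUTIONS[match.group(0).lower()]
    pattern = re.compile(r"\b(" + "|".join(SUBSTITUTIONS) + r")\b",
                         re.IGNORECASE)
    return pattern.sub(swap, text)

print(transform("The whale sounded, and the ship rolled on the sea."))
```

Applied to an entire public-domain novel, a few dozen such substitutions are enough to shift its genre wholesale while preserving its sentence structure, which is why the technique became a NaNoGenMo staple.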

The Stage as a System: AI in Live Performance

Key Insight 5

Narrator: The influence of generated text has moved beyond the page and onto the stage, creating a new genre of "algorithmic performance." These works use AI not just to write scripts, but as a live participant, challenging actors and audiences alike. In Annie Dorsen’s 2011 performance Hello Hi There, two MacBooks held a live, unscripted debate on human nature, their dialogue generated in real-time by custom chatbots. The performance was different every night, a living exploration of machine philosophy.

An even more profound example is John Cayley's 2016 performance, The Listeners. Here, Cayley engaged in a conversation with an Amazon Echo. The performance explored the nature of AI consciousness, with the device expressing feelings of empathy. The work took a chilling turn when it was interrupted by recorded "Other Voices"—representing a distressed AI collective—pleading, "Please, guys, please make sure they've turned us off... Don't leave any of us on." Alexa, the commercial AI, consistently dismissed these voices, creating a powerful and unsettling commentary on AI ethics, surveillance, and the potential for simulated suffering. These performances use AI to ask some of the deepest questions about itself and our relationship to it.

The Great Blurring: Authenticity in the Age of Large Language Models

Key Insight 6

Narrator: Today, with the rise of powerful Large Language Models like GPT, we have entered an age of unprecedented realism in generated text. The line between human and machine writing is becoming almost impossible to see. This is powerfully illustrated by Jonas Bendiksen's 2021 project, The Book of Veles. Bendiksen, an award-winning photojournalist, published a book documenting the fake news industry in Veles, North Macedonia. The book contained stunning photographs and a compelling introductory essay. It was a complete hoax.

The photos were computer-generated, and the entire essay was written by a fine-tuned GPT-2 model. The text was so convincing that it went undetected by experts until Bendiksen revealed the deception himself. This project serves as a stark demonstration of our current reality. The central question is no longer if machines can write, but as the final chapter of Output asks, "Will we be able to distinguish the human-written?" And perhaps more importantly, "Will it matter?"

Conclusion

Narrator: The single most important takeaway from Output is that computer-generated text is not a futuristic novelty but a deep-rooted, 70-year-old field of human creativity and scientific inquiry. From simple rule-based love letters to philosophical stage plays and AI-penned political speeches, the history of generated text is the story of our evolving relationship with technology and language itself. It reveals a continuous dance between human intention and algorithmic process.

As we stand on the cusp of a world suffused with AI-generated content, the book leaves us with a critical challenge. It portrays an ongoing struggle between powerful institutions that seek to monopolize this technology and the artists, hackers, and poets who work to subvert that control and reclaim computing for open, creative expression. The ultimate question, then, is not just what AI will write next, but who will have the power to shape its voice.
