
Code, Lies, and Literature

14 min

An Anthology of Computer-Generated Text, 1953–2023

Golden Hook & Introduction


Michael: Alright Kevin, I'm going to say a phrase, and you tell me the first thing that comes to mind: 'AI-generated writing'.
Kevin: Okay... ChatGPT, writing my work emails for me, and maybe a vague sense of impending doom for all creative professions?
Michael: Exactly. That's what everyone thinks. But what if I told you the first computer-generated love poems were written in 1953, before the term 'AI' even existed?
Kevin: Hold on, 1953? That's the same year the first color TVs were sold. You're telling me a computer the size of a room was writing love letters?
Michael: It was, and that's the hidden history we're exploring today, thanks to this incredible new anthology, Output: An Anthology of Computer-Generated Text, 1953–2023, edited by Lillian-Yvonne Bertram and Nick Montfort.
Kevin: Bertram and Montfort... I feel like I've heard those names. They're big in this world, right?
Michael: Huge. And they're the perfect guides for this journey. Bertram is an award-winning poet who actually uses AI in her own art, and Montfort is a professor at MIT who's both a computer scientist and a creative writer. They're not just academics; they're creators. Their whole argument is that we've forgotten the 70-year history of this field, and this book is their attempt to fix that historical amnesia.
Kevin: I love that. So we're not just talking about the last five years of AI hype. We're going back to the very beginning.
Michael: To the very beginning. Today we'll dive deep into this from three perspectives. First, we'll explore the uncanny birth of conversational AI and the psychologist-like chatbot that terrified its own creator. Then, we'll discuss the rise of the 'algorithmic muse,' looking at how computers became poets and novelists. And finally, we'll focus on the ghost in the machine: AI as a performer, a journalist, and even a politician, and what that means for our future.

The Uncanny Valley of Conversation: ELIZA and the Dawn of Chatbots


Kevin: Okay, so if the first computer-generated texts weren't just love poems, what were these early programs like? What was the goal?
Michael: Well, one of the most famous early examples wasn't trying to be artistic at all. It was trying to prove a point. In the mid-1960s, a German-American computer scientist at MIT named Joseph Weizenbaum created a program called ELIZA.
Kevin: ELIZA. I think I've heard of that. Wasn't it some kind of early chatbot?
Michael: Exactly. It's widely considered the first chatbot. And Weizenbaum designed it to be a parody of a Rogerian psychotherapist, the kind that just reflects your own statements back at you as questions. You'd say, "My boyfriend made me come here," and ELIZA would respond, "Your boyfriend made you come here?" or "Can you think of a specific example?"
Kevin: That sounds incredibly simple. It's just a script with some pattern-matching rules, right? It's not actually thinking.
Michael: Not at all. It had no understanding whatsoever. Weizenbaum's whole point was to demonstrate the superficiality of communication between humans and machines. He wanted to show how easily we project meaning onto simple algorithms. But then something happened that he never expected.
Kevin: What happened?
Michael: People fell in love with it. Not literally, but they formed deep, genuine emotional attachments to ELIZA. The book tells the story of Weizenbaum's own secretary, who, after a few minutes of chatting with the program, asked him to leave the room so she could have a private conversation.
Kevin: Whoa. She knew it was a computer program, right?
Michael: She knew! And she still wanted privacy. She started confiding in it, sharing her deepest secrets. Weizenbaum was horrified. He saw people interacting with ELIZA as if it were a real, empathetic being. He had created a tool to expose a flaw in human-computer interaction, and instead, he saw that flaw manifest in a way he found deeply disturbing.
Kevin: But why was he so upset? I mean, isn't that a sign of success? He created something that felt real.
Michael: That's the fascinating paradox. For him, it was a profound failure. A decade after creating ELIZA, he published a book called Computer Power and Human Reason, where he completely denounced AI. He argued that systems like ELIZA would end up dehumanizing people. He saw it fostering these shallow, simulated relationships and tricking us into overestimating machine intelligence. He built the first chatbot and then became one of AI's most prominent critics.
Kevin: That's incredible. It's like Dr. Frankenstein being horrified by his own monster. And it feels so relevant today. I mean, you see people forming relationships with AI companions like Replika or getting lost in algorithmically driven social media feeds. This problem from the 60s is basically the central tension of our modern digital lives.
Michael: Precisely. The book Output positions ELIZA as this foundational myth for the entire field. It's the origin story that contains all the promise, all the peril, and all the philosophical questions we're still wrestling with today. It proved that you don't need real intelligence to create the illusion of it, and that illusion can be dangerously powerful.
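For readers who want to see just how mechanical that "reflection" trick is, here is a minimal Python sketch of ELIZA-style pattern matching. The rules, pronoun swaps, and fallback lines are illustrative stand-ins, not Weizenbaum's original script (which used a keyword-ranking scheme and was written in MAD-SLIP).

```python
import random
import re

# Swap first- and second-person words so a statement can be mirrored back.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "mine": "yours",
}

# A few illustrative decomposition rules: regex pattern -> reply templates.
RULES = [
    (r".*\bmy (.+)", ["Your {0}?", "Tell me more about your {0}.", "Why do you say your {0}?"]),
    (r".*\bi feel (.+)", ["Do you often feel {0}?", "Why do you feel {0}?"]),
    (r".*\bi am (.+)", ["How long have you been {0}?", "Do you believe you are {0}?"]),
]

# Canned fallbacks for anything no rule matches.
FALLBACKS = ["Please go on.", "Can you think of a specific example?", "How does that make you feel?"]


def reflect(fragment: str) -> str:
    """Rewrite 'my boyfriend made me...' as 'your boyfriend made you...'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())


def eliza_reply(statement: str) -> str:
    """Match the statement against each rule and echo a reflected fragment back."""
    cleaned = statement.lower().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)


print(eliza_reply("My boyfriend made me come here"))
# Possible output: "Your boyfriend made you come here?"
```

Even this toy version can reproduce the exchange quoted above, with no understanding anywhere in sight, which is exactly the illusion Weizenbaum found so troubling.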

The Algorithmic Muse: When Computers Started Writing Art


Michael: But not every creator was horrified by what they'd made. Some, even earlier than Weizenbaum, saw a different potential in these machines: the potential for art. Let's go back to those 1953 love letters.
Kevin: I'm still stuck on that date. 1953. How is that even possible?
Michael: It was a program written by Christopher Strachey at the University of Manchester, where Alan Turing was also working. And Strachey himself described the program as "childishly simple." It just had two sentence templates and a list of adjectives, nouns, and verbs. It would randomly slot words into the templates.
Kevin: So it's basically a high-tech game of Mad Libs.
Michael: A perfect analogy. But the results were surprisingly poetic and often quite funny. The book gives examples like, "My affection curiously clings to your passionate wish," or "You are my loving adoration: my breathless adoration." The computer, named M.U.C. for Manchester University Computer, was the "author."
Kevin: Okay, that's charming. But is it art? Or is it just a lucky accident of random combinations?
Michael: That's the billion-dollar question the book keeps asking! Is it the output that's the art, or the system that creates it? Or the human act of curating and presenting it? The editors, Bertram and Montfort, don't give a simple answer. Instead, they show us the whole spectrum. And that spectrum gets really wild when you jump forward a few decades to an event called NaNoGenMo.
Kevin: NaNo-what-now?
Michael: NaNoGenMo. It stands for National Novel Generation Month. It's a spin-off of NaNoWriMo, where people try to write a 50,000-word novel in November. In NaNoGenMo, the challenge is to write code that generates a 50,000-word novel.
Kevin: A computer-generated novel. That sounds... difficult. How do you maintain a coherent plot for that long?
Michael: Well, here's the brilliant part. The rules, created by artist and programmer Darius Kazemi, explicitly state that the definition of "novel" is completely open. It could be 50,000 repetitions of the word 'meow.' It could be a random book from Project Gutenberg. It doesn't matter.
Kevin: So it's more about the conceptual art of the generation process itself.
Michael: Exactly! And this led to some of the most creative and bizarre projects in the anthology. One classic example is a program that takes the entire text of Moby-Dick and replaces every single word with a "meow" of the same length, while preserving all the punctuation. The opening line "Call me Ishmael" becomes something like "Meow me Meeeeow."
Kevin: That's hilarious and completely absurd. I love it.
Michael: But it's not all jokes. Some projects are incredibly ambitious. The book features a work called 1 the Road by Ross Goodwin. He was inspired by Jack Kerouac's On the Road and decided to create an AI-driven equivalent.
Kevin: Hold on, how does an AI 'write a novel on a road trip'?
Michael: Goodwin put an AI in a car and drove from New York to New Orleans. The AI was connected to a camera, a microphone, and a GPS. It was 'seeing' the world, 'hearing' conversations, and 'knowing' its location. It then turned all those real-time inputs into words, based on the massive library of literature it had been trained on. The output was printed live on a long scroll of receipt paper, just like Kerouac's original manuscript.
Kevin: Wow. So it's like a poetic, real-time travel diary written by a machine. The text is a direct reflection of a physical journey. That's a huge leap from a simple template.
Michael: It's a massive leap. It shows the evolution from simple, rule-based generation to complex, sensor-driven, experiential art. The machine isn't just combining words anymore; it's interpreting the world.
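To see how little machinery the "Mad Libs" trick Michael describes actually needs, here is a minimal Python sketch of a Strachey-style letter generator. The word lists, templates, and framing lines below are illustrative guesses, not Strachey's original vocabulary or code.

```python
import random

# Illustrative word lists; Strachey's program drew from its own, larger vocabulary.
ADJECTIVES = ["passionate", "breathless", "loving", "curious", "beautiful"]
NOUNS = ["affection", "adoration", "wish", "desire", "heart"]
VERBS = ["clings to", "longs for", "treasures", "yearns for"]

# Two sentence templates, in the spirit of the "childishly simple" scheme described above.
TEMPLATES = [
    "My {adj1} {noun1} {verb} your {adj2} {noun2}.",
    "You are my {adj1} {noun1}: my {adj2} {noun2}.",
]


def love_sentence() -> str:
    """Randomly slot words into one of the two templates."""
    return random.choice(TEMPLATES).format(
        adj1=random.choice(ADJECTIVES),
        adj2=random.choice(ADJECTIVES),
        noun1=random.choice(NOUNS),
        noun2=random.choice(NOUNS),
        verb=random.choice(VERBS),
    )


if __name__ == "__main__":
    # The salutation and sign-off are illustrative framing, not the original output.
    print("DARLING SWEETHEART,")
    for _ in range(5):
        print(love_sentence())
    print("Yours, M.U.C.")
```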
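The Moby-Dick "meow" project lends itself to a similar sketch. The regex substitution below is a guess at the technique rather than the original NaNoGenMo entry's code, but it reproduces the behavior described above: every word becomes a "meow" of the same length, while punctuation and spacing pass through untouched.

```python
import re


def meowify_word(word: str) -> str:
    """Replace a word with a 'meow' of the same length, e.g. 'Ishmael' -> 'Meeeeow'."""
    n = len(word)
    if n <= 4:
        return ("Meow" if word[0].isupper() else "meow")[:n]
    # Stretch the middle 'e' so the meow matches the original word's length.
    first = "M" if word[0].isupper() else "m"
    return first + "e" * (n - 3) + "ow"


def meowify(text: str) -> str:
    """Meowify every run of letters; punctuation, digits, and spaces survive."""
    return re.sub(r"[A-Za-z]+", lambda m: meowify_word(m.group(0)), text)


print(meowify("Call me Ishmael. Some years ago..."))
# -> "Meow me Meeeeow. Meow meeow meo..."
```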

The Ghost in the Machine: AI on Stage, in the News, and in Politics


Kevin: Okay, so we've gone from simple therapy bots to AI road trip novels. That's a huge leap. But it still feels... experimental. It's art, it's commentary. Where does this cross over into the real world in a way that feels a little... dangerous?
Michael: This is where the book gets really interesting, and frankly, a little scary. It moves into what I'd call the "danger zone," where the line between simulation and reality starts to completely dissolve. A great example is a 2016 short film called Sunspring. The screenplay was written entirely by an AI.
Kevin: An unedited AI script?
Michael: Completely unedited. The director gave it to the actors, one of whom was the Emmy-nominated Thomas Middleditch, and said, "Make this work." The dialogue is surreal and nonsensical. Characters say things like, "In a future with mass unemployment, young people are forced to sell blood. That's the first thing I can do." The stage directions are even weirder, like one that reads, "He takes his eyes from his mouth."
Kevin: He... takes his eyes from his mouth? How do you even act that out?
Michael: You can't! And that's the point. The actors are forced to try and find human motivation in non-human logic. It's a fascinating and hilarious failure, but it shows what happens when human performers have to channel the ghost in the machine. But it gets more serious than that.
Kevin: How so?
Michael: The book details a project from 2021 by an award-winning photojournalist named Jonas Bendiksen. He published a book called The Book of Veles, supposedly documenting the fake news industry in a small town in North Macedonia. It had stunning photos and a compelling introductory essay.
Kevin: Okay, sounds like important journalism.
Michael: Except it was all fake. The photos were 3D models, and the entire introductory essay was generated by a fine-tuned GPT-2 model.
Kevin: Whoa. So a professional photojournalist created a completely fake book with AI text, and no one noticed?
Michael: No one. It was published and reviewed. Bendiksen eventually had to create a fake Twitter account to expose his own hoax because it was so convincing. The AI, trained on articles about the fake news industry, had learned to perfectly mimic the style of investigative journalism.
Kevin: That is terrifying. It brings us to the central question from the book's final chapter, "Code/a." The text keeps asking, "Will we be able to distinguish the human-written?" And maybe more importantly, "Will it matter?"
Michael: And the answer is getting more complicated every day. The book's timeline ends in 2023 with what might be the most direct example yet. A US Congressman, Jake Auchincloss, delivered a speech on the floor of the House of Representatives.
Kevin: Let me guess...
Michael: It was written by ChatGPT. He read the AI-generated text verbatim into the congressional record to advocate for a US-Israel AI partnership.
Kevin: An AI is literally writing laws now? Or at least, the speeches about them. That's... a lot. We've gone from a chatbot parodying a therapist to an AI co-writing legislation in the halls of power.
Michael: In the span of a single human lifetime. It's a trajectory that is both breathtaking and, as you said, a little terrifying.
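For a sense of how accessible this kind of text generation has become, here is a minimal sketch using the Hugging Face transformers library with the stock gpt2 checkpoint and an invented prompt. Bendiksen's essay came from a model fine-tuned on reporting about the fake-news industry, so treat this as an illustration of the mechanism, not a reconstruction of his pipeline.

```python
# pip install transformers torch
from transformers import pipeline, set_seed

# The stock "gpt2" checkpoint stands in here; a fine-tuned model would mimic
# the target style far more convincingly.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation reproducible

# Illustrative prompt, not text from The Book of Veles.
prompt = (
    "In the small North Macedonian town of Veles, a cottage industry of "
    "fake-news websites took root during the 2016 election, and"
)

result = generator(
    prompt,
    max_new_tokens=120,   # length of the generated continuation
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.9,      # higher temperature -> more varied prose
    top_p=0.95,
)

print(result[0]["generated_text"])
```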

Synthesis & Takeaways


Michael: So in 70 years, we've gone from a simple script that mimics a therapist to AI co-writing legislation. The book Output shows this isn't a sudden revolution; it's a long, slow, and fascinating evolution. It's a history filled with artists, pranksters, poets, and scientists who were all asking the same fundamental questions about language, creativity, and what it means to be human.
Kevin: It really makes you question what 'human' writing even is. If an AI trained on all of Shakespeare writes a sonnet, is it less of a sonnet? If an AI can write a news report that's factually accurate, does the author's identity matter? The book doesn't give easy answers, but it forces you to ask the right questions.
Michael: And it shows that this technology has always been a mirror. ELIZA reflected our own loneliness back at us. The NaNoGenMo bots reflect our obsession with rules and systems. And modern AIs reflect the entire chaotic, beautiful, and often biased library of human language that we've fed them.
Kevin: I think what I'm taking away is that this isn't a story about machines replacing humans. It's a story about humans using machines to explore the outer limits of their own creativity and, sometimes, their own folly. It's a collaboration, whether we admit it or not.
Michael: And that's the real power of this anthology. It's a mirror showing us our own reflection in the machine. We'd love to hear your thoughts. What's the most surprising piece of AI-generated text you've ever seen? Let us know.
Michael: This is Aibrary, signing off.
