
Hacking Reality
How the Next Wave of Technology Will Break the Truth
Golden Hook & Introduction
Joe: Most people think 'fake news' is the biggest threat to truth online. They're wrong. The real danger isn't fake articles; it's a future where your own eyes and ears become the most unreliable narrators of your life. And that future is already starting.

Lewis: Whoa, hold on. My eyes and ears are unreliable? That sounds... dramatic. Are we talking about some kind of mass delusion, or is this something else? What are you getting at?

Joe: It’s the central warning in a book that’s been on my mind for weeks. We're diving into The Reality Game: How the Next Wave of Technology Will Break the Truth by Samuel Woolley. And this isn't just some journalist's take. Woolley is a serious academic who founded propaganda research labs at places like the University of Texas and even Oxford. He's been on the front lines of this digital war for years.

Lewis: Okay, so he's not just an observer, he's basically a general in this information war. He’s seen the battle plans. That definitely changes things. It’s not just speculation; it's a report from the field.

Joe: Exactly. And some of his reports are chilling. He tells one story that perfectly captures how this isn't some abstract, far-off problem. It's happening right now, in person, in pinstriped suits.
The Present Reality Game: Hacking Public Opinion with Bots and Lies
Joe: Woolley was at the South by Southwest conference, a huge tech and culture festival. He had just given a presentation warning about how automated accounts, or bots, could be used to manipulate elections. After his talk, a man in a sharp suit approached him.

Lewis: Let me guess, he wants to offer him a job doing exactly what he just warned against.

Joe: You nailed it. The man says he works for a government in the "Indian Ocean region" and that he's been tasked with taking over its social media operations. He then flat-out asks Woolley to help him build an army of bots to boost his government's image online.

Lewis: That is brazen! It’s like a firefighter giving a talk on arson prevention, and someone from the audience comes up and asks for tips on which accelerant to use. Did Woolley take the job? Please tell me he didn't.

Joe: Of course not! He emphatically refused. But the story is so powerful because it shows the human intent behind the machine. This isn't just code running wild; it's people, governments, and political groups actively seeking to build tools of manipulation.

Lewis: Okay, that's a wild story, but does this stuff actually work? I mean, can a few bots on Twitter really change things? It feels like we give them too much credit sometimes.

Joe: That's the million-dollar question, and Woolley provides one of the earliest, clearest examples. We have to go back to 2010, to a special Senate election in Massachusetts between Scott Brown and Martha Coakley.

Lewis: 2010? That feels like the Stone Age of social media.

Joe: It was. And that’s what makes it so important. Researchers at Wellesley College noticed something strange. A cluster of brand-new Twitter accounts, all with no profile pictures and few followers, were suddenly tweeting relentlessly, all with the same message: that Martha Coakley was anti-Catholic. In a state like Massachusetts, that’s a serious accusation.

Lewis: And these were bots?

Joe: They were. The researchers traced them back to Tea Party activists in Iowa. They had built this small army of automated accounts to look like real Massachusetts residents who were angry at Coakley. They created a fake, digital grassroots movement—what the book calls an "astroturf" campaign. It gave the illusion that there was this huge, organic wave of opposition to her.

Lewis: And did it work?

Joe: Well, Scott Brown won that election in an upset. It's impossible to say the bots were the only reason, but they absolutely gave the attacks against Coakley the appearance of legitimacy and popular support. They hacked public perception.

Lewis: But that was over a decade ago. Surely platforms like Twitter and Facebook are better at catching this now, right? They're always talking about taking down bot networks.

Joe: They are better, but the game has just evolved. The propagandists have gotten more sophisticated, and the political leaders who benefit from it have gotten more aggressive. Woolley details his own team's research at Oxford, where they uncovered how the regime of Rodrigo Duterte in the Philippines was spending hundreds of thousands of dollars on a social media army to attack critics and spread disinformation.

Lewis: And what happened when they published that?

Joe: This is the scary part. Duterte didn't hide or deny it. He went on television and attacked the source. He publicly called Oxford University "a school for stupid people."

Lewis: Wow. So the strategy isn't to hide the manipulation anymore. It's to discredit the truth-tellers. You don't just create a fake reality; you attack the real one.

Joe: Precisely. The game has moved from subtle manipulation to open warfare on facts and the institutions that produce them. And as unsettling as all that is, Woolley argues it's just the opening act. The tools they're using now are like muskets compared to the weapons of mass deception that are coming.
The Future Reality Game: When Seeing is No Longer Believing
Lewis: Okay, so bots and fake articles are bad. I get it. But you said in the intro that's not even the real danger. What's next? What could possibly be worse than what you've just described?

Joe: The next stage is a fundamental shift from manipulating information to fabricating reality. It's about making it impossible for you to trust what you see and hear. And it starts with something the book calls "shallow fakes."

Lewis: Shallow fakes? As opposed to deepfakes?

Joe: Exactly. A shallow fake doesn't require sophisticated AI. It's simple video or audio manipulation. The most famous example Woolley uses is the incident with CNN reporter Jim Acosta at the White House. After a heated press conference, a White House intern tried to take the microphone from him.

Lewis: I remember that. It was all over the news.

Joe: Right. But then, a video clip started circulating, pushed by the conspiracy site Infowars. It appeared to show Acosta aggressively chopping his arm down on the intern's arm. The White House, including the Press Secretary, shared this video on Twitter as justification for revoking Acosta's press pass.

Lewis: That's unbelievable. The White House shared a doctored video from a conspiracy site? That's a whole new level.

Joe: It was. And here’s the thing: it wasn't a deepfake. Analysts quickly figured out that the video had just been sped up slightly at that key moment. That tiny change in speed made Acosta's motion look like a hostile karate chop instead of a defensive reflex to hold onto the mic. It was a shallow, simple manipulation, but it was enough to fuel a massive political firestorm.

Lewis: So a shallow fake is like putting a fake mustache on a photo, but a deepfake is like creating a perfect CGI clone that can walk and talk on its own?

Joe: That's a perfect analogy. And that's the next frontier. Woolley points to the PSA that filmmaker Jordan Peele made with BuzzFeed. They used AI to create a video of Barack Obama saying things he never said, like calling President Trump a "total and complete dipshit." It was so convincing, and they released it as a public warning: this technology is here, and it can be used to put any words in any person's mouth.

Lewis: That is terrifying. The political implications are staggering. You could start a war with a fake video of a world leader declaring one.

Joe: You could. But Woolley takes it one step further, into a realm that feels like pure science fiction, except the technology is already being built. He asks us to imagine a sixteen-year-old girl in the near future. She puts on a VR headset and logs into a fully immersive social media world.

Lewis: Like the metaverse everyone's talking about.

Joe: Exactly. In this virtual world, she can learn, play, and socialize. But she can also be targeted. Extremist groups can build entire worlds within this system, designed for indoctrination. They can create a virtual "safe space" that feels welcoming, and then slowly barrage her with subtle, personalized propaganda, fake stories, and AI-driven characters who build a rapport with her before feeding her lies.

Lewis: So they're not just sending her a fake article. They're building a fake world for her to live in. A world where the lies feel like her own discovery.

Joe: A world where reality is whatever the propagandist wants it to be. The book makes the point that the cost of this tech is plummeting. A high-end VR system cost over $70,000 in the 90s. Today, you can get a basic one for fifty bucks. The accessibility is exploding, and with it, the potential for mass-scale, immersive manipulation.

Lewis: My head is spinning. It feels like we're standing on a beach, watching a tsunami of falsehoods gathering on the horizon, and we're all just holding little paper umbrellas. This is all incredibly bleak, Joe. Is there any hope? What's the takeaway here?
Synthesis & Takeaways
Joe: It does feel bleak, and Woolley doesn't sugarcoat the threat. But he's surprisingly optimistic, because he argues we're looking for the solution in the wrong place. We think the answer is better AI to detect fake videos, or better algorithms to filter out bots. We're trying to fight tech with more tech.

Lewis: And that's not the answer?

Joe: It's part of it, but it's not the core solution. Woolley argues this is fundamentally a human problem, not a technological one. The technology is just a tool that amplifies pre-existing social divisions, distrust, and polarization. The real problem is the fertile ground of anger and alienation that allows this propaganda to take root.

Lewis: So fixing the tech is like treating the symptoms, not the disease.

Joe: Precisely. He brings up a fantastic quote from former Secretary of Defense James Mattis, who once told Congress, "If you don’t fund the State Department fully, then I need to buy more ammunition ultimately."

Lewis: Wow. What a line. He’s saying diplomacy is cheaper than war.

Joe: And more effective. Woolley applies that logic here. Investing in education, in media literacy, in programs that bridge political divides, in rebuilding trust in institutions—that's the "diplomacy." It's the long, hard work of fixing our social fabric. Without that, we'll just be in a perpetual technological arms race against propagandists, constantly buying more "ammunition" to fight the last battle while they invent new weapons.

Lewis: That actually makes a lot of sense. The problem isn't the gun; it's the person who wants to shoot it. We have to understand why they want to shoot it.

Joe: Exactly. And that brings the power back to us. We can't all build AI detection models, but we can all contribute to that social solution. Woolley's ultimate call to action is about designing and demanding technology that has human rights and democratic values baked in from the start, not bolted on as an afterthought.

Lewis: So what can a regular person, listening to this right now, actually do? It feels so big.

Joe: He suggests something very simple. The next time you're scrolling and you see something online that makes you intensely angry, or that perfectly confirms your deepest bias, just pause for three seconds. Ask yourself: who benefits from me feeling this way right now? Who benefits from my outrage? That small moment of critical thinking, of stepping outside the emotional current, is the first and most powerful line of defense.

Lewis: That's a powerful thought. It's about reclaiming our own minds from the game. We'd love to hear what you all think. What's the most convincing fake you've ever seen online? Or have you ever caught yourself getting swept up in something that turned out to be false? Let us know on our socials.

Joe: It's a conversation we all need to be having. This is Aibrary, signing off.