The Extremist Playbook

11 min

The Challenge of Moderating Online Extremism

Golden Hook & Introduction

Michael: Get this: in just a few years, one major tech company, Meta, removed over 64 billion posts for misinformation, hate, and extremism. 64 billion.

Kevin: Whoa. With numbers like that, you'd think the internet would be squeaky clean by now. A digital paradise. But it feels like the exact opposite is happening. The problem seems to be getting worse, not better. What's going on?

Michael: That's the exact question at the heart of our book today: Safe Havens for Hate by Tamar Mitts. And what's fascinating is that the author isn't just an academic. She's an Associate Professor at Columbia, but she also served as a counterterrorism research officer in the Israel Defense Forces.

Kevin: Oh, wow. So she's seen this from both sides.

Michael: Exactly. From both the theoretical and the front-lines perspectives. Her book argues the problem isn't just the bad content we're trying to remove; it's the very structure of the online world that allows hate to flourish. The rules of the game are actually helping the bad guys win.

Kevin: Okay, that's a bold claim. So you're saying the whole 'whack-a-mole' idea, banning an account just for it to pop up somewhere else, isn't the full story?

Michael: It's not even close. The book shows their movements aren't random at all. They're strategic. It's less like whack-a-mole and more like a chess game.

The Whack-a-Mole Myth: Why Banning Extremists Isn't Random

Kevin: A chess game? How so? I always pictured them just scrambling to the next available site.

Michael: That's the myth. Mitts argues that these groups face a critical trade-off, just like any organization. It's a balance between 'authenticity' and 'impact.'

Kevin: Authenticity and impact. What does that mean for a hate group?

Michael: 'Authenticity' is the freedom to post their raw, unfiltered, and often violent ideology without getting banned. Think of a small, fringe forum where anything goes. 'Impact' is the ability to reach a massive, mainstream audience. Think of a huge platform like Facebook or YouTube.

Kevin: Right, so they can be their true, awful selves in some dark corner of the internet, but nobody will see it. Or they can be on a huge platform, but they have to watch what they say.

Michael: Precisely. And the book shows they don't just pick one or the other. They strategically hunt for platforms that offer the best possible mix of both. The perfect example is the Islamic State, or IS.

Kevin: Okay, let's get into it. How did they play this game?

Michael: In the early 2010s, IS was mostly on fringe websites. Their content was extreme, but their reach was tiny. Then, they made a strategic leap to mainstream platforms like Twitter and YouTube. And their growth was explosive.

Kevin: How explosive?

Michael: By 2015, they had over 1.6 million followers on Twitter alone. They even developed their own app called 'Dawn of Glad Tidings.' It was deviously clever. Followers would download it, and the app would use their accounts to semi-automatically blast out IS propaganda.

Kevin: That's terrifying. It's like they weaponized their own followers' accounts.

Michael: Completely. And it had devastating real-world consequences. The book notes that by 2016, over 30,000 foreign fighters had traveled to join IS in Syria and Iraq, many of them radicalized entirely online. This, of course, led to immense government pressure on the big tech companies.

Kevin: So the platforms had to act. The whack-a-mole hammer comes down.

Michael: It does. Twitter, Facebook, they all start mass-suspending IS accounts. But this is where the strategy comes in. IS didn't just scatter randomly. They migrated. And they migrated decisively to one platform in particular: Telegram.

Kevin: Why Telegram?

Michael: Because it hit the sweet spot. It had a large and growing user base, so it offered impact. But it also had much more lenient moderation policies and features like encrypted channels, offering them authenticity. It was the perfect new home base.

Kevin: So it's like they were a startup that found product-market fit on Twitter, got too big, and then pivoted to a new market, Telegram, that was a better long-term fit for their 'business model.'

Michael: That's a perfect analogy. It wasn't a panicked scramble; it was a calculated business decision. And this strategic migration is just the first part of their survival playbook.
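To make that trade-off concrete, here is a minimal Python sketch that scores platforms on the authenticity-versus-impact balance described above. The platform labels, the scores, and the multiplicative scoring rule are illustrative assumptions, not data or a model from the book.

```python
# Toy model of the authenticity/impact trade-off discussed above.
# All names and scores are hypothetical illustrations, not data from the book.

platforms = {
    # name: (authenticity, impact), both on a 0-to-1 scale
    "mainstream_giant": (0.2, 0.9),  # strict moderation, huge reach
    "fringe_forum":     (0.9, 0.1),  # anything goes, tiny audience
    "telegram_like":    (0.7, 0.6),  # lenient rules, large user base
}

def appeal(authenticity: float, impact: float) -> float:
    """Score a platform as the product of both factors: being weak on
    either dimension drags the overall score down."""
    return authenticity * impact

for name, (a, i) in platforms.items():
    print(f"{name:16s} authenticity={a:.1f} impact={i:.1f} appeal={appeal(a, i):.2f}")

best = max(platforms, key=lambda p: appeal(*platforms[p]))
print(f"best mix of both: {best}")
```

Under a multiplicative score, the middle-ground platform beats both extremes, which mirrors the book's account of why IS converged decisively on Telegram instead of scattering at random.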

The Resilience Playbook: Migration, Mobilization, and Messaging

Kevin: Okay, so 'Migration' is part one. It's about finding that perfect blend of freedom and audience. What's part two?

Michael: Part two is 'Mobilization.' And this is where it gets even more disturbing. The book argues that extremist groups use the act of moderation itself as a recruitment tool.

Kevin: Wait, how? How does getting banned help you recruit people?

Michael: By creating a grievance narrative. The book tells the story of a man named Alvin Smallberg. He was a conservative Trump supporter, very active on Twitter. As the 2020 election approached, he got increasingly angry about what he saw as 'Big Tech censorship' of right-leaning content.

Kevin: I can imagine. That was a huge talking point.

Michael: Exactly. Then, in November 2020, his own Twitter account got banned. He was furious. He felt silenced, targeted. So, what does he do? He migrates to Gab, a platform known for its very loose moderation.

Kevin: And on Gab, he finds a community of people who feel the same way.

Michael: He finds more than that. He finds the Oath Keepers, the far-right militant group. They're all over Gab, and their message is essentially: 'See? We told you so. The mainstream platforms are censoring patriots like you. They're part of the conspiracy. Join us. We're the ones fighting back.'

Kevin: Oh, man. So Twitter's ban, which was meant to stop harmful content, actually pushed him directly into the arms of an extremist group that validated his anger.

Michael: It made him psychologically receptive to their message. His personal experience of being 'censored' became the proof that their conspiracy was real. He went from a frustrated Twitter user to an active supporter of a militant group, sharing their calls for an 'insurrection' and promoting the January 6th protest.

Kevin: That's chilling. The cure became part of the disease. So that's Mobilization. What's the third part of the playbook?

Michael: 'Messaging.' This is about how they adapt their language to survive even on the strictest platforms. The book gives two brilliant examples: the Taliban and QAnon.

Kevin: Two very different groups.

Michael: Very. But they use similar tactics. On Twitter, the Taliban realized they couldn't just post videos glorifying violence anymore. So, they toned it down. They started posting more about governance, public services, and projecting an image of being a legitimate state actor. They 'softened' their message to stay on the platform and reach a global audience.

Kevin: They're doing PR.

Michael: Exactly. Meanwhile, QAnon, after getting banned from using their direct hashtags, did something even more insidious. They co-opted the #SaveTheChildren hashtag.

Kevin: No way. The one from the actual charity?

Michael: The very same. They started flooding this innocuous hashtag with their core conspiracy theory about a global child-trafficking cabal. It was a Trojan horse. They smuggled their ideology into mainstream conversations, hiding in plain sight.

Kevin: That is diabolically brilliant. They're hiding their poison in a bottle labeled 'medicine.'

Michael: It's a perfect example of 'threshold evasion.' They stay just below the line of what will get them automatically banned, all while spreading their narrative. So you have Migration, Mobilization, and Messaging. That's the three-part strategy for digital resilience.
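The 'threshold evasion' idea lends itself to a small sketch. Assume, purely hypothetically, that a platform auto-removes any post whose classifier risk score crosses a fixed cutoff; the cutoff, the scores, and the example posts below are invented for illustration, not taken from the book.

```python
# Toy illustration of threshold evasion: a fixed, automated cutoff creates
# a bright line that adapted messaging can sit just beneath.
# The threshold and scores are hypothetical, for illustration only.

BAN_THRESHOLD = 0.80  # posts scoring at or above this are auto-removed

def moderate(posts: list[tuple[str, float]]) -> None:
    """Print the action a fixed-threshold system would take on each post."""
    for text, risk_score in posts:
        action = "REMOVED" if risk_score >= BAN_THRESHOLD else "allowed"
        print(f"{action:8s} score={risk_score:.2f}  {text}")

moderate([
    ("explicit call to violence", 0.97),                 # clearly over the line
    ("same ideology in 'softened' PR framing", 0.62),    # Taliban-style toning down
    ("conspiracy smuggled into a benign hashtag", 0.35), # QAnon-style Trojan horse
])
```

The exact numbers are beside the point; the structure is what matters. Any fixed, automated line gives adaptive groups a visible target to aim just beneath.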

The Moderator's Dilemma: When 'Fixing' the Problem Makes It Worse

Kevin: This is all terrifyingly effective. It feels like an unsolvable problem. If banning them can backfire, and they're this clever with their messaging, what's the answer? Do we just force every platform, from Facebook to Gab, to follow the exact same strict rules?

Michael: That's the big question, and it's what the book calls the 'Moderator's Dilemma.' There is a huge push for what's called 'policy convergence.' After the horrific Christchurch attack was livestreamed, governments and tech companies came together to create the 'Christchurch Call to Action,' an agreement to align their policies to stop the spread of terrorist content.

Kevin: That sounds like a good thing. A united front.

Michael: In theory, yes. If every platform has the same high wall, it's much harder for groups to migrate or mobilize. But the book raises two massive red flags, two major costs. The first is 'collateral damage.'

Kevin: What do you mean?

Michael: The author points to a case where Facebook, in a large sweep, mistakenly took down the accounts of 87 Syrian and Palestinian journalists and activists. Their content, which was documenting real-world human rights abuses, was algorithmically flagged as extremist propaganda.

Kevin: Oh, that's awful. So the very people trying to expose atrocities get silenced by the tool meant to stop extremists.

Michael: Precisely. Now, imagine that error wasn't just on Facebook. Imagine a centralized system where that one mistake automatically gets replicated across Twitter, YouTube, and dozens of other platforms simultaneously. The 'collateral damage' becomes catastrophic for free expression.

Kevin: I see the problem. But isn't some collateral damage acceptable to prevent another Christchurch? It's a tough trade-off.

Michael: It is. But it gets even worse with the second cost: 'censorship creep.' The book highlights how authoritarian governments are already exploiting these systems. Countries like Saudi Arabia, China, and Russia have passed broad 'anti-terrorism' laws that define 'disturbing public order' or 'undermining state reputation' as illegal content.

Kevin: So they could use a global, converged moderation system to demand the takedown of any content that's critical of their regime, all under the guise of fighting 'harmful content.'

Michael: You got it. You'd be handing the world's dictators a global censorship button. That's the dilemma. A perfectly coordinated system to stop hate could become a perfectly coordinated system for repression.
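The collateral-damage worry is, at bottom, about error rates at scale, and a back-of-the-envelope calculation shows why. Only the 64 billion removals figure comes from the episode's opening; the false-positive rate and the platform count below are hypothetical assumptions.

```python
# Back-of-the-envelope: even a tiny false-positive rate is enormous at scale,
# and a converged system would replicate every mistake across platforms.
# Only the 64 billion removals figure comes from the episode; the error rate
# and platform count are hypothetical assumptions.

removals = 64_000_000_000    # posts removed (figure from the episode)
false_positive_rate = 0.001  # assume just 0.1% of removals are mistaken
converged_platforms = 20     # hypothetical count of policy-aligned platforms

wrongful = removals * false_positive_rate
replicated = wrongful * converged_platforms

print(f"wrongful removals on one platform:    {wrongful:,.0f}")
print(f"replicated across a converged system: {replicated:,.0f}")
```

Even at a one-in-a-thousand error rate, that is 64 million wrongful takedowns, and a converged system would propagate each one everywhere at once.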

Synthesis & Takeaways

Kevin: Wow. So we're stuck. A fragmented system allows hate to thrive, but a unified system risks mass censorship and authoritarian abuse. There's no easy answer here.

Michael: There isn't. And that's the most powerful takeaway from Safe Havens for Hate. The core insight is that we can't solve this problem one platform at a time. The issue isn't just the 'moles,' it's the entire field they're playing on. The problem is the ecosystem. The inconsistent, fragmented landscape of rules is what gives these groups their power.

Kevin: So any real solution has to be systemic. It has to address the whole environment, not just one corner of it.

Michael: Exactly. The book suggests we need to invest in mechanisms that encourage policy convergence, but with powerful safeguards to protect non-harmful speech and prevent abuse. And just as importantly, we need to make moderation itself feel more legitimate to users, so people like Alvin Smallberg don't immediately feel like they're victims of a conspiracy.

Kevin: That's a monumental task. It really leaves you wondering... is a truly 'safe' internet even possible without it becoming a totally 'censored' one?

Michael: That's the billion-dollar question, isn't it? It's a tension we're all going to be living with for a long time. It's something worth thinking about, and we'd love to hear what you all think as well.

Kevin: Absolutely. Let us know your thoughts on this impossible trade-off.

Michael: This is Aibrary, signing off.
