
Tech Tsunami: Can We Survive?
Podcast by Wired In with Josh and Drew
Technology, Power, and the Twenty-first Century's Greatest Dilemma
Tech Tsunami: Can We Survive?
Part 1
Josh: Hey everyone, welcome! Today we're jumping into a topic that sounds like sci-fi, but trust me, it's very much a reality. We're talking about the race to harness these incredibly powerful new technologies—AI, biotech, quantum computing, the whole shebang.

Drew: Okay, Josh, let me guess. It's not all sunshine and robot butlers, right?

Josh: Not exactly, Drew. I mean, these technologies could revolutionize everything—curing diseases, solving climate change. But, and this is a big but, they also bring risks that could really shake up societies, even threaten our existence.

Drew: Oh, lovely. So a double-edged sword, then. We've got potentially world-changing tools, but we also might accidentally destroy the world.

Josh: Exactly! That's where The Coming Wave by Mustafa Suleyman comes in. Think of it as a guide for navigating this crazy moment in history. Suleyman doesn't just point out the dangers; he digs into how these technologies can spread uncontrollably, which exposes some serious weaknesses in how we govern and stay stable. But he also offers ways to manage this, from smart regulations to working together globally and, of course, ethical development.

Drew: Global cooperation, you say? Always easy to achieve. So, what's on the agenda today, Josh?

Josh: We're going to unpack three key ideas. First, the double-edged nature of these technologies—how they can both transform and endanger society. Second, the fragilities, societal and governmental, that they expose. And finally, Suleyman's plan for containment and adaptation. Think of it as a compass in a storm – pretty essential.

Drew: A storm? More like a potential apocalypse. Let's see if this Suleyman guy has figured out how to build a lifeboat for us.
The Dual-Edged Impact of Emerging Technologies
Part 2
Josh: Okay, let's dive into the potential of these technologies. Mustafa Suleyman paints a vivid picture of what AI and biotech could bring. I mean, imagine an AI scanning millions of medical records in seconds, spotting patterns that save lives. Or gene-editing tools like CRISPR, fighting diseases at their root. He mentions golden rice as an example—a biotech breakthrough designed to tackle vitamin A deficiency, which could prevent blindness in malnourished populations. It's pretty incredible.

Drew: Golden rice is fascinating, sure, but it does raise the question: at what cost? You've got this genetically modified crop transforming health, but what about ecological impacts? Cross-contamination with other crops? Or biotech companies monopolizing everything? It's a real ethical maze.

Josh: Exactly, and Suleyman acknowledges that. Golden rice shows that duality—a life-saving invention, but also a glimpse into how good intentions can go sideways. He uses that tension to explore the chaotic way these technologies evolve. The relentless competition, the speed of innovation—it's like a high-speed race where no one is paying attention to the sharp turns.

Drew: Let's talk about that chaos, though. Suleyman mentions AI, and specifically the "black box" problem. We've created machines so advanced that even their creators can't fully understand how they work. I mean, that's insane, right? And he brings up AI systems that can mimic human interaction or make decisions we can't explain. That's not just a black box; it's Pandora's box.

Josh: Absolutely. And think about this: Suleyman describes AI chatbots spreading misinformation faster than fact-checkers can keep up, or algorithms making biased decisions, like favoring certain demographics in hiring, without any accountability. The scary part is how globally these technologies are spreading. They're not just in regulated tech hubs; they're everywhere, including places with little oversight. The self-propagation of these systems means we slowly lose control.

Drew: "Losing control" feels like an understatement. One thing that stuck with me was the anecdote about amateur labs tinkering with gene editing using CRISPR. Suleyman talks about attending a demo where a modestly equipped team performed genetic modifications. All it takes is a bright mind, access to tools, and the... willingness, or recklessness, to push boundaries. Remember when you compared this to toddlers playing with scissors? Well, now it's toddlers with laser scalpels operating out of basements.

Josh: Which leads us back to this idea of democratization. CRISPR is revolutionary, but that ease of access is its weakness. Without safeguards, anyone—well-meaning or not—could misuse these tools. And, as Suleyman points out, it's not just accidental misuse by hobbyists; it's also malicious actors who could weaponize the tech.

Drew: So it's like a dystopian movie: the villains aren't corporations, they're bio-hackers in hoodies! But seriously, Suleyman goes even further, diving into how quantum computing or robotics could destabilize economies or militaries. Autonomous drones or quantum hackers could disrupt global power dynamics. If encryption gets broken or robots start battling, entire governments could collapse.

Josh: Exactly, and that's existential risk. He connects these technological shifts to global vulnerabilities. Take automation, for example. Robots don't need breaks, don't get tired, and don't make human errors. Sure, that might boost productivity, but it also leads to massive job displacement. Historically, humans have adapted to industrial revolutions, but at this pace and scope? Entire sectors could vanish overnight without a safety net.

Drew: So we've got a jobless population, an economic system in chaos, and... what, we just hope corporations suddenly develop a social conscience? Call me a cynic, but I've never seen self-interest regulate itself. This is about governance as much as it is about technology.

Josh: Suleyman addresses that in his proposed solutions. He argues governance needs to be global, proactive, and adaptable. Yuval Noah Harari said humanity is reaching godlike powers without godlike wisdom. So governance isn't just a roadblock; it's the infrastructure for managing the chaos.

Drew: "Chaos management"—that's a job title I don't want. And global governance? I mean, have you ever seen international trade negotiations? Those make planning Thanksgiving look easy. But I guess without some sort of cooperation and containment, like Suleyman says, we're playing chicken with the survival of civilization.

Josh: And Suleyman sees it as a race against time. We need to implement ethical frameworks, incentivize safer innovation, and create global agreements like we did with nuclear disarmament. He knows it's challenging, but the stakes couldn't be higher.

Drew: So his roadmap is basically: don't panic, act fast, and hope the world agrees on safeguards before it's too late. Isn't that always the balancing act with technology—progress versus preservation?
Societal and Governance Challenges
Part 3
Josh: Exactly, and that naturally leads us to our next big topic: the societal and governance challenges these technologies create. Mustafa Suleyman pulls back the curtain here. It's not just about cool gadgets or breakthroughs, is it? He shows us how unchecked tech growth fundamentally reshapes society, often in ways that can feel destabilizing. We're talking about the societal impacts of that reshaping—things like economic and labor disruptions—and then the critical need for governance and ethical frameworks to handle all of this.

Drew: Right, it's the ripple effect, isn't it? Tech doesn't just exist in a vacuum. It sends shockwaves through absolutely everything. So, Josh, where do we even start? Do we dive into misinformation, job market chaos, or everyone's favorite impending doom: authoritarian overreach?

Josh: Let's tackle misinformation first, because it's a prime example of how advanced tech can erode societal trust. Suleyman highlights how generative AI models are frighteningly good at creating content that's indistinguishable from reality—deepfakes, fabricated news articles, even AI-generated voices that perfectly mimic real people. It doesn't just blur the line between truth and fiction; it wipes it out completely.

Drew: Exactly. It's one thing to have a bot churning out nonsense tweets, but it's a whole different ball game when it can impersonate a political leader stirring up fake wars or spread incredibly dangerous medical "advice." The potential for absolute chaos is immense. Does Suleyman have any case studies that drive this point home?

Josh: He draws a parallel between today's risks and a chilling historical example: Aum Shinrikyo, the doomsday cult in Japan. Believe it or not, they started as a yoga group. But they gained access to advanced tech—chemical engineering, in their case—and paired it with some seriously heavy propaganda. They used that terrifying combination to carry out the Tokyo subway sarin gas attack in 1995, which killed more than a dozen people and injured thousands. And what's fascinating is how they used misinformation to recruit followers and justify their deranged agenda, while at the same time leveraging technology to actually implement their plans.

Drew: That example just sent a shiver down my spine, Josh. And Suleyman's point here, I assume, is that today's bad actors don't need to rely on 90s tech. Armed with AI, their ability to mislead, manipulate, and cause serious harm is exponentially higher. The tools for spreading mass fear or disruption aren't just more accessible; they're terrifying in their potency.

Josh: Precisely. And AI-driven misinformation works at hyperspeed. By the time fact-checkers identify and counter one lie, five new ones are already circulating. Suleyman emphasizes that we need systems of verification—like an AI version of fact-checking squads integrated into our content distribution platforms—and he calls for public education campaigns to help people learn how to spot disinformation.

Drew: "Public education," though... when most people can't even agree on the most basic facts? That feels like an uphill battle. And that brings us neatly to economic disruptions—another hit these technologies are dealing out, especially to those already in vulnerable positions.

Josh: Oh, the economic disruption is massive. Suleyman delves into the effects of automation and AI integration, especially in industries like agriculture and manufacturing. The scale of potential job displacement is honestly staggering. Just imagine agricultural robots not just weeding or picking fruit, but performing complex, data-driven tasks like soil analysis or pest detection. These systems optimize farming, but they also push millions of farmworkers out of their jobs.

Drew: Farming without farmers. It's efficient, sure, but it's also completely devastating if your town's main street depends on those workers spending money there. Entire communities can collapse if their foundation disappears—like agriculture in a rural area. And are we just talking about farmers here? Is any industry actually safe from this?

Josh: Not really. Suleyman mentions manufacturing, logistics, and even knowledge work—lawyers, accountants, and writers—all facing disruption. The overarching concern is wealth concentration. Who actually benefits from this revolution? Primarily corporations and those already at the top. Workers lose jobs, towns lose their tax bases, and that just deepens inequality. Suleyman suggests some pretty radical solutions here, like taxing automation or introducing universal basic income, to redistribute the economic gains from this technological efficiency.

Drew: Taxing automation, huh? I can already hear lobbyists sharpening their knives somewhere. But I see the logic: if robots are replacing workers, shouldn't they "pay their way" in taxes to stabilize the systems we all rely on? Even that feels like a band-aid, though. What about the existential disruptions—AI creating military or authoritarian risks?

Josh: Suleyman explores all of that, too—particularly how these tools are being exploited by authoritarian regimes. China is the most obvious example, with its incredibly sophisticated surveillance network. AI-driven analytics linked to facial recognition track people in real time, monitor dissent, and enforce compliance. It's horrifyingly effective.

Drew: It's like Orwell's Big Brother got a major upgrade... and a major funding boost. But Suleyman doesn't let democracies off the hook either, does he? Because even in the West, we've seen so-called emergency powers enacted after 9/11 or during COVID ramp up surveillance in ways that haven't really rolled back. Once governments get a taste of that level of control, it's very hard for them to let it go.

Josh: Exactly. Whether regimes are authoritarian or democratic, Suleyman points out the risk of a creeping normalization of high-tech surveillance. These systems initially look like responses to crises, but they quietly and consistently erode civil liberties. The danger, then, isn't just the misuse of tech, but the erosion of trust in governance itself.

Drew: It's all pretty bleak, Josh. How does Suleyman propose we fix any of this? Does he offer some magic formula, or are we just... winging it?

Josh: He proposes governance as the anchor solution—strong, adaptive, and global governance frameworks. He calls for mechanisms similar to the nuclear disarmament treaties, but tailored to today's tech. It would take massive international cooperation, transparent decision-making, and participation not just from states but from private enterprises and technologists themselves.

Drew: Big ideas... but also hugely complicated, right? Reaching binding agreements on something as fast-moving as tech development? That's like asking rival siblings to split an inheritance peacefully.

Josh: True, but Suleyman argues that the stakes are so high that we can't afford to let this devolve into a free-for-all. And governance isn't just about laws and treaties; it's also about fostering a culture of ethical responsibility within the tech sector itself. Developers have to weigh the risks of their innovations, not just the rewards.

Drew: So he's not just handing us regulation roadmaps. He's also asking for a cultural awakening on ethics and accountability in the tech world. It's definitely ambitious, maybe idealistic, but I can see why it's crucial. Without it, this wave could crash so much harder than any tsunami we've ever seen.
Paths to Containment and Adaptation
Part 4
Josh: Building on these challenges, Suleyman dedicates a section to potential solutions and future pathways. He lays out some really practical strategies for containment, covering everything from technical safety measures to the necessity of global cooperation. And he wraps it up with a call for cultural and ethical changes to guide innovation more responsibly.

Drew: Alright, solutions! My favorite part. So, does Suleyman actually manage to tame this "wave," or are we just hopelessly trying to hold back the tide with sandbags? Where does he even suggest we begin?

Josh: He starts with safety initiatives. He stresses a proactive approach, embedding ethical priorities into technological systems from the get-go. And he uses the Apollo Program—you know, the moon landing mission—as a brilliant example of how ambitious, interdisciplinary collaboration can overcome huge risks while sparking innovation. It's a pretty compelling analogy.

Drew: It is. The Apollo Program wasn't just about shooting someone into space, was it? It was meticulous—tons of simulations, layers of safety protocols, backup plans for backup plans. So is he suggesting that AI and biotech need their own version of the Apollo Program?

Josh: Precisely. Take AI systems, for instance. Suleyman sees them as fundamentally flawed right now because they're essentially "black boxes." Developers often struggle to fully understand how these systems arrive at their decisions. And that has resulted in tangible problems, like biased hiring algorithms that perpetuate systemic discrimination. Suleyman argues that embedding ethics at the design stage is crucial. We need safeguards in place to prevent harmful outcomes before these technologies are even deployed.

Drew: Okay, but realistically, how feasible is that? It's all well and good to say, "Bake safety into the design," but what's to stop a profit-driven company from rushing a system to market, flaws and all?

Josh: That's where the next layer of solutions comes in: audits. Suleyman proposes mandatory audits for emerging technologies, similar to how financial audits ensure a company's books are in order.

Drew: Ah, technological "bean counters." So the idea is to have an independent body thoroughly examine the code and processes before the tech is released into the wild?

Josh: Exactly! And it's more than just code. These audits would review datasets to identify biases, verify algorithmic fairness, and ensure that outcomes align with regulatory and ethical standards. He uses AI in mortgage lending as a cautionary example: automated systems denied loans to underbanked communities based on historical patterns of discrimination. An effective auditing process could have caught and mitigated those biases before they were ever put into practice.

Drew: Okay, but let's be real. Audits aren't foolproof either. There's always the risk of companies gaming the system, hiring compliant auditors who turn a blind eye to obvious issues. And won't there be complaints about added bureaucracy slowing down innovation?

Josh: Suleyman isn't saying it will be easy, of course. It's about striking a balance: harnessing the speed of innovation without sacrificing accountability and transparency. He frames audits not just as technical reviews but as a way to rebuild public trust.

Drew: Public trust? After years of Big Tech hoarding data and weathering scandal after scandal? That's a tough challenge.

Josh: Absolutely. But Suleyman points out that the alternative—unchecked systems—carries far more catastrophic risks than the occasional PR blunder. And that's where his proposal to target choke points comes in.

Drew: Choke points? Sounds ominous. What does he mean by that?

Josh: It involves identifying the critical "nodes" in global supply chains where regulation can halt or slow the spread of dangerous technology. Think semiconductors, for example. Advanced AI depends on powerful chips largely manufactured in regions like Taiwan. By controlling the export of these key components, governments can create some breathing room to put regulatory frameworks in place before potentially harmful tools spread uncontrollably.

Drew: So, essentially limiting who gets to play with the most powerful toys, at least for a while. But I imagine this ties into geopolitics as well, doesn't it? If one region restricts semiconductors, won't others retaliate or ramp up their own production?

Josh: That's a valid concern, and Suleyman acknowledges that these strategies could strengthen monopolies or even trigger global trade conflicts. But he emphasizes that choke points are a temporary measure, intended to buy time for solutions like international treaties to take effect.

Drew: And treaties are the next piece of his plan, right? I'm guessing these aren't your run-of-the-mill trade agreements either.

Josh: Precisely. He envisions treaties similar to the Non-Proliferation Treaty for nuclear weapons, which helped prevent a wider global arms race. For emerging tech, international agreements would establish clear standards for safety, information sharing, and mutual oversight.

Drew: But if history teaches us anything, it's that getting nations to agree on enforcement is a major challenge. What's his approach to these inevitable disagreements?

Josh: He advocates building trust through cooperative platforms, involving not just governments but also corporations, universities, and even grassroots organizations. A great example he highlights is pandemic preparedness. During COVID-19, nations shared genomic data on the virus to accelerate vaccine development. It's a collaborative model that could be scaled up to tackle emerging fields like AI and synthetic biology.

Drew: Okay, but treaties and choke points only address systems. What about the people driving them? You can't regulate human greed or ambition.

Josh: That's where Suleyman shifts gears to the cultural aspect. He argues that we need a significant ethical shift within corporations and the tech sector as a whole. Aligning profit incentives with shared societal goals is one way to do this, creating business models that prioritize sustainability and public welfare.

Drew: He's referring to the B Corporation model, isn't he?

Josh: Exactly. B Corps are businesses legally committed to balancing profit and purpose. He sees them as proof that ethics and profitability don't have to be mutually exclusive. Patagonia, for instance, is profitable but maintains a supply chain that prioritizes environmental responsibility. Suleyman suggests incentivizing more companies to adopt this model through subsidies or tax breaks, creating an ecosystem where "good tech" thrives.

Drew: I like the idea, but corporate culture doesn't change overnight. And governments still need to agree on which ethics matter most before offering incentives. Doesn't "responsibility" vary widely between, say, Silicon Valley and Shenzhen?

Josh: That's exactly Suleyman's point. The technical fixes—AI safety, audits, choke points—can only achieve so much if we don't simultaneously cultivate a culture of shared accountability. Ethical innovation needs to become a core tenet of public and private policy globally, not just a nice-to-have.

Drew: So, the bottom line? Suleyman is essentially building a three-part solution: regulating the tools, redesigning incentives, and educating the people behind it all.

Josh: Exactly. He's calling for nothing short of a paradigm shift: an Apollo Program for safety, treaties for containment, and a collective commitment to use technology for the long-term good of society.

Drew: It's ambitious, no doubt. I'm still a little skeptical whether we can actually pull it off before the wave crashes over us. But I'll admit, this roadmap is about as comprehensive as it gets.
Conclusion
Part 5
Josh: So, to sum up, Mustafa Suleyman's “The Coming Wave” really paints a picture of these emerging technologies—AI, biotech, quantum computing—as a double-edged sword. They could revolutionize healthcare, tackle climate change, and generally make life better.

Drew: But, you know, there's always a “but.” It's not all sunshine and rainbows, right? Suleyman also points out that these technologies carry some pretty serious risks. We're talking about societal stability, the very foundations of democracy, maybe even the survival of humanity. It sounds like a sci-fi film.

Josh: Exactly. And he doesn't shy away from the uncomfortable truths. He talks about how these rapid advancements exploit our weaknesses—misinformation, job losses, even authoritarianism. It's a reality check on how powerful these technologies are and how easily they could disrupt the systems we depend on.

Drew: So, it's not just doom and gloom, is it? What solutions does Suleyman propose? Throwing our phones in the river is clearly not a real solution.

Josh: No, definitely not! What's great about the book is that it offers actionable solutions. Think embedding ethics directly into the technology itself, mandatory safety audits, using supply chains to slow the spread of dangerous tools, and of course, international treaties for containment.

Drew: Right. And then there's the big one: he's calling for a complete overhaul of corporate and cultural ethics—putting societal good before profits. It sounds like a long shot, but Suleyman seems to think it's the only way we can ride this wave instead of being completely wiped out by it. Do you think that's possible?

Josh: Absolutely, and it all starts with recognizing what's at stake and acting together. Governments, corporations, individuals—everyone has a role. The wave is coming, whether we're ready or not. The real question is, will we rise to the challenge or let it overwhelm us?

Drew: Well said, Josh. I have to admit, I was skeptical going into this conversation, but Suleyman makes a compelling case. It's hard to argue with the fact that we need to treat this with urgency, accountability, and a whole lot of ingenuity. But practically speaking, how would the average person apply something like this to their own life?

Josh: Great question! So, here's our takeaway: as individuals, we might not be able to control the wave itself, but we can choose how we interact with it. That means staying informed, championing responsible innovation, and demanding accountability from our leaders.

Drew: Because ultimately, whether this wave becomes a force for good or a complete disaster depends on how we navigate it. Food for thought, everyone.

Josh: Definitely. Thanks for joining us for this insightful discussion. Until next time, let's keep our eyes on the horizon and our hands on the wheel!