
Tech Mess: Navigating Chaos or Building a Future?
Podcast by Wired In with Josh and Drew
Technology and the End of the Future
Introduction
Part 1
Josh: Hey everyone, and welcome! Today we're tackling something that's totally interwoven with our lives: technology. I mean, from the phones glued to our hands to the algorithms feeding us news or even running the global economy, tech is everywhere. But the big question is, is all this tech leading us to a brighter future, or, well, a more chaotic one? Drew: Exactly, Josh. It feels like every time we pat ourselves on the back for some new tech, bam! Another headline hits us about misinformation gone wild, or algorithms making hiring decisions. It's like progress always brings along its own special brand of mess. Josh: Precisely! And that's why we're digging into New Dark Age: Technology and the End of the Future by James Bridle today. It's a pretty sobering, but super important, look at our obsession with tech. Bridle basically says that instead of solving problems, tech often makes them bigger. Think climate change getting worse because of massive server farms, or fake news spreading faster than facts, or AI making inequality even worse. Drew: Oh, and let's not forget the whole surveillance thing! It feels like we just handed over our privacy for a little bit of convenience, like, "What's the worst that could happen?" Well… Josh: So, here's what we're going to do today. First, we'll look at how technology itself creates chaos – through misinformation, the environmental impact and just its sheer complexity. Drew: Then we'll dive into algorithms – those supposed "truth machines" – and ask if they’re actually bending reality into something… well, less than truthful. Josh: And finally, we'll get into the ethical choices in front of us. Bridle doesn't just point out the problems; he calls for a new kind of "systemic literacy" so we can understand – and deal with – the consequences of our tech-driven world. Drew: No big deal, right? But seriously, this book isn’t just about blaming tech; it’s a challenge. 
It’s asking us to rethink how we use technology in every part of our lives, from our personal habits to global policy. Josh: Exactly! So, let's jump into this tangled mess of why “more tech” doesn't automatically equal “more progress,” shall we?
Technological Dependency and Societal Consequences
Part 2
Josh: So, picking up from that, let’s dive into what James Bridle really nails: our reliance on technology, and how it often creates chaos instead of clarity. The 2013 Boston Marathon bombing is a perfect example. Remember all those cameras that were supposedly there for security? Drew: Right, the panopticon in action. You’d think with all that surveillance tech on every corner, they could've stitched everything together in real-time. But nope—tech gave them too much. Or maybe too much unfiltered junk? Josh: Exactly. Bridle, with support from former NSA official William Binney, argues that despite having tons of data, authorities were paralyzed. Binney called it “data overload.” Imagine rooms full of CCTV footage and metadata, but no way to find what's relevant. It's like being in the ocean but you can't find one specific drop of water. Drew: So, instead of some cool detective show where they zoom in on a blurry frame and shout "Enhance!", we got a digital junk drawer overflowing with useless stuff. Josh: Pretty much. And it wasn't just a Boston thing. Remember the attempted attack by Umar Farouk Abdulmutallab in 2009? Authorities had all the information but couldn't connect it. Obama said the intelligence community had multiple red flags on Abdulmutallab—not a lack of info, but a problem with prioritizing and analyzing. Drew: Isn't that what technology promised us? To process data better and faster? Instead, we get... techno-noise. Great for movies, useless when lives are at stake. Josh: That's Bridle's point: we blindly believe that more data automatically leads to better results. Tools like CCTV only work with human judgment and systems. Otherwise, it's just noise hiding what's important. Drew: Which leads us to solutionism, this idea that every problem can be solved by throwing tech at it. Bridle really goes after that, doesn’t he? Josh: He does. His plumbing analogy really stuck with me.
He says teaching everyone to code is like teaching them to fix a leaky faucet—useful, sure, but it doesn’t mean they understand water systems. Yet we treat coding as the magic skill for the digital age. Drew: Okay, extending the plumbing idea, teaching coding is like handing out wrenches while the city's water grid is falling apart. What about the bigger problems, like data governance, or how algorithms exploit inequalities? Josh: Right! And high-frequency trading shows that perfectly—a prime example of technological solutionism gone wrong. These algorithms trade faster than humans can even think... to make markets "more efficient." But efficient for whom? Drew: The one percent, obviously. HFT algorithms game the system at lightning speed, exploiting small gaps and amplifying inequalities. It’s like Formula 1 cars racing on a track paid for by taxpayers, but only billionaires get to drive. Josh: And they leave everyone stuck in traffic, right? Bridle's point is that tech doesn't redistribute resources; it consolidates them. But we still believe it's neutral. Drew: Neutral! Josh, calling algorithms neutral is like calling the referee at a rigged boxing match fair. If the data is biased, so is the result. Yet we keep pushing algorithmic solutions like they're apolitical. Josh: Right. Bridle calls these so-called solutions dangerous. They don't fix problems; they worsen them. And if you want another example of our tech dependence, look at what happens when airport systems collapse. Drew: Ah, the great equalizer, delayed flights! Nothing shows tech's fragility like a crowded airport with a broken system. One glitch, and thousands are stranded, creating chaos across the transportation network. Josh: Exactly! Airports should be showcases of human technological triumph—handling logistics in real-time. But when the systems fail, we see how fragile and dependent on automation we are. No one has a backup plan. We've built all this on a single point of failure. Drew: That makes you realize the irony.
We created these systems to feel in control, but we panic when they mess up. Josh: Yep. Airports become a snapshot of our tech dependency. The illusion of control vanishes, reminding us how powerless we are without these crutches. That powerlessness isn't just about systems failing; it affects how we think and act. Drew: You're talking about systemic literacy, right? Josh: Yes. It's Bridle's call to action. He’s not saying everyone needs to be a tech whiz, but we need to understand how systems work—finance, climate change, or even supply chains. Without systemic literacy, how can we deal with complexity? Drew: Instead of throwing around buzzwords, we need to step back and ask, "Is this really doing what we need it to do?" Josh: Exactly. We're trapped in an illusion of empowerment. We have new tools that promise us agency, but they often inflate inequities and vulnerabilities. Drew: That forces us to face how much we've given up in exchange for the convenience of tech, doesn’t it?
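The latency advantage behind the high-frequency trading critique earlier can be made concrete with a toy sketch. Everything here is invented for illustration (the trader names, latencies, and the idea of a fixed "window" during which a mispricing persists); real market microstructure is far more complicated:

```python
# Toy sketch of latency arbitrage. All names and numbers are hypothetical;
# this is not a model of any real exchange or trading system.

def capture_profit(traders, window_ms, profit):
    """A fleeting price gap stays open for window_ms; the fastest trader
    whose latency beats the window takes the entire profit."""
    fast_enough = [t for t in traders if t["latency_ms"] < window_ms]
    if not fast_enough:
        return None  # the gap closes before anyone can act
    winner = min(fast_enough, key=lambda t: t["latency_ms"])
    return winner["name"], profit

traders = [
    {"name": "retail_human", "latency_ms": 300.0},  # hypothetical human reaction
    {"name": "hft_colocated", "latency_ms": 0.05},  # hypothetical co-located algo
]

# A mispricing worth $100 that persists for only 5 milliseconds:
print(capture_profit(traders, window_ms=5.0, profit=100.0))
```

The point of the sketch is that the human never even sees the opportunity: any gap that closes faster than human reaction time is, by construction, reserved for whoever owns the fastest machine.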
Misinformation and Algorithmic Amplification
Part 3
Josh: So, this all raises the question: how is our dependence on technology affecting how we understand the world and make decisions? Let's dive into one of the most common ways this plays out: misinformation and how algorithms amplify it—how exactly it spreads, with some examples, and then the ethical implications. Drew: Misinformation, the gift that keeps on giving... or more accurately, the curse that keeps dividing us. It’s pretty wild how platforms that were supposed to democratize information have become fertile ground for conspiracy theories and, you know, digital tribalism. Josh: And what’s at the core of this mess? Algorithms. Take YouTube’s recommendation system, for example. Bridle points out that it's not neutral at all; it's actively designed to keep us hooked. Imagine someone watches a pretty innocent video about, say, vaccine safety. The algorithm, in its infinite wisdom, sees that fear-mongering, anti-vaccine content tends to generate a lot of engagement. So, the viewer is fed increasingly extreme videos, each one pushing them further down the misinformation rabbit hole. Drew: Right. So, you start with a simple search about flu shots and wind up spending your weekend convinced that Big Pharma is running the world from some secret underground lair. It’s like a theme park ride that starts with bumper cars and somehow ends in a haunted house you can’t escape. Josh: Exactly, and the stakes are disturbingly real. This kind of funneling normalizes fringe ideas. Once those anti-vaccine opinions take root, questioning them becomes almost impossible in these isolated online echo chambers. Each video, and all the comments, reinforces the misinformation. Drew: It’s like people are getting sucked into these bubbles of confirmation bias, where any dissenting voice is silenced by the algorithm and every toxic idea is amplified.
But, Josh, can we really blame the algorithms entirely? Aren’t they just reflecting back what we as humans prioritize? I mean, if fear and anger get clicks, then the algorithms are just giving the people what they seem to want. Josh: Yeah, algorithms exploit human tendencies, but that doesn’t let the platforms or the algorithms off the hook. Bridle argues that these systems don’t just cater to our biases. They magnify them. They don’t show us reality; they distort it into whatever keeps people scrolling. And that’s what causes real harm. It’s not just about individual beliefs; it’s about how it shapes public discourse. Drew: Look at conspiracy theories, like the chemtrails one. Sure, some people might laugh it off, but in these algorithm-fueled ecosystems, those ideas go from fringe to, well, formidable. People share articles, join online groups, and soon they're convinced that the contrails from planes are actually chemical weapons, even though there’s zero scientific evidence! Josh: Right, it’s not just about the misinformation itself; it’s about the insularity. These communities aren’t just misinformed; they actively resist being challenged. Once people are entrenched, they’re ready to dismiss outside voices as part of "the conspiracy." Algorithms thrive on that dynamic, locking users into feedback loops. Drew: That insularity is where it starts to get scary, right? It's not just that they undermine science; they actually delegitimize institutions. And that plays right into the hands of, you know, bad actors, like these Russian troll farms. Josh: Exactly. One of Bridle’s most compelling examples is the Internet Research Agency, or the IRA, based in St. Petersburg. This group weaponized algorithms during the 2016 U.S. election to, like, deliberately deepen divisions. And they weren’t just spamming fake news. They were intentionally targeting both ends of the political spectrum with content designed to stoke outrage and chaos. 
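The funneling dynamic Josh describes, where a greedy engagement-maximizer ratchets viewers toward more extreme content, can be sketched as a toy simulation. The catalog, the "extremeness" scores, and the engagement model are all invented for illustration; no real platform works this simply:

```python
import random

# Toy sketch of an engagement feedback loop. All data and parameters here
# are invented; this is an illustration of the dynamic, not a real system.

random.seed(0)

# Hypothetical catalog: each video is just an "extremeness" score in [0, 1].
catalog = [i / 99 for i in range(100)]

def engagement(extremeness):
    """Invented stand-in: more provocative content draws more engagement."""
    return extremeness + random.gauss(0, 0.05)

def recommend_next(current, catalog):
    """Greedy engagement-maximizer: among videos close to the current one,
    pick whichever is predicted to get the most engagement."""
    neighbors = [v for v in catalog if abs(v - current) <= 0.1]
    return max(neighbors, key=engagement)

video = 0.05          # viewer starts on fairly innocuous content
history = [video]
for _ in range(30):   # each step, the recommender picks the next video
    video = recommend_next(video, catalog)
    history.append(video)

print(f"started at {history[0]:.2f}, ended at {history[-1]:.2f}")
```

Because engagement correlates with extremeness, each locally "best" recommendation drifts upward, and thirty small steps are enough to carry the viewer from the mild end of the catalog to the extreme end, with no single step looking dramatic.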
Drew: Playing both sides against each other. It's genius, in a really horrifying kind of way. Create a fake Facebook page for Black Lives Matter activists, amplify their discontent, and then, on the other hand, set up another page praising law enforcement and stirring up conservative anger. Toss in some inflammatory memes, and, bam! Democracy à la chaos theory. Josh: Yeah, it wasn’t just about posting content; they actually orchestrated events. Imagine a protest organized by trolls on one side of the street and a counter-protest, also arranged by trolls, on the other side. They weren’t encouraging dialogue; they were engineering conflict. Drew: And honestly, it worked. People couldn't tell which narratives were real and which were straight-up fabricated. Essentially, the trolls exploited exactly what algorithms optimize for: engagement. And by hiding where this all came from, they made that distrust stick. Josh: Bridle stresses that this erosion of trust in democracy is one of the most serious outcomes of algorithmic amplification. It wasn't just about electing one candidate over another; it was about undermining faith in the whole system. Drew: And then there's Cambridge Analytica, another example straight out of a dystopian algorithmic handbook. You harvest millions of Facebook profiles, distill them into psychological profiles, and then target users with ads tailored to their most sensitive traits. Are you afraid of crime? Well, here’s an apocalyptic message about immigrants. Do you love progressive ideals? Here's some feel-good utopian fluff designed specifically for you. Josh: It’s the precision that’s so chilling. With psychographic data, they weren’t just targeting generic demographics; they were targeting individual personalities. I mean, they could predict how you’d react before you even did. Drew: It’s like a digital horoscope. You might not believe it, but somehow, it knows you better than your own mother.
And most people had no clue their feeds were being curated like this. They thought their reactions were genuine, not primed by these algorithmic puppet masters pulling invisible strings. Josh: Right, and that’s where the illusion of autonomy comes in. People didn’t realize how manipulated they were because the processes were completely opaque. The ethical questions here are huge: how do we hold these platforms accountable for amplifying manipulation? And how do we even demand transparency when we’re this deep into the algorithmic age? Drew: Exactly, and that’s the thing, isn't it? Algorithms don't care about ethics; they care about optimizing for attention and profits. The rest is just collateral damage. So, Josh, what can we actually do? Where is this systemic literacy Bridle talks about, the thing that’s supposed to be our ticket out of this mess? Josh: Well, it’s about moving beyond those reactive fixes, like tweaking recommendation systems. Systemic literacy means understanding the mechanisms behind these platforms, both their algorithms and the profit structures that drive them. Only then can we start creating systems where technology serves the collective good, not just private interests. Drew: Okay, that’s a bold vision. But are we even close to systemic literacy when most of us can barely figure out how our privacy settings even work? Josh: It is hard, no doubt about it, but Bridle emphasizes that awareness is the first step. Understanding how data shapes power dynamics, that’s vital. And it starts, not with, you know, “more tech,” but with a more critical look at the systems we've already built. Drew: So, it’s not that we just need clearer terms of service; we need clearer terms of engagement with reality itself?
Ethical and Philosophical Ramifications of AI
Part 4
Josh: So, understanding these systemic flaws naturally leads us to discuss the ethical and philosophical dilemmas technology throws our way. And that's where AI really steps into the spotlight, Drew. It’s not just about “what” AI can do, but the deeper questions of “how” and “why” we're using it. Bridle really digs into that, calling for ethical frameworks and a better understanding of these systems to avoid potential harm. Drew: "AI ethics," huh? That almost sounds funny when we're still trying to stop AI from mistaking a cloud for a military tank, not to mention writing old prejudices into new code. Seriously, are we ready to give a moral compass to something that doesn’t even know it exists? Josh: <Laughs> Valid point. Bridle uses mythology to illustrate this. He talks about Prometheus and Epimetheus—foresight versus hindsight. Prometheus's fire is progress, innovation, right? While Epimetheus, acting without thinking, represents our maybe reckless adoption of tech. It's like a warning, encouraging us to combine ingenuity with the wisdom to gauge its impact. Drew: Exactly. Because, let's be honest, our current approach feels way too much like Epimetheus: innovate first, then panic about the fallout. Okay, so what's the modern-day "fire" in that story? Is it AI itself? Or is it our blind trust in just... automation? Josh: I'd say it’s both, really. Take neural networks, for instance. They're meant to mimic human thought, but they’ve led to some embarrassing mistakes—remember the U.S. Army’s tank identification failure? The AI was trained on flawed data—sunny skies in the tank photos, cloudy skies in the empty-field photos—so it ended up identifying weather patterns, not tanks! Drew: <Laughs> So, instead of finding tanks, it's predicting someone's barbecue is going to get rained on. That's absurd, yes, but it kind of proves the point. It shows us the weakness of AI: it doesn’t actually "think." It just sees patterns and spits out data, even if that data is complete garbage or incredibly biased.
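The tank anecdote is a classic example of a spurious correlation, and the mechanism can be reproduced with a toy sketch. The "images," features, and learner below are all invented for illustration: when the training data confounds the label (tank) with the background (weather), the real cue and the spurious cue fit the data equally well, so nothing forces the learner to pick the right one.

```python
import random

# Toy reconstruction of the tank anecdote: a trivial "classifier" trained on
# confounded data latches onto the background instead of the object.
# All data here is synthetic and invented for illustration.

random.seed(1)

def make_example(has_tank, sunny):
    """An 'image' reduced to two features: background brightness and a tank cue."""
    return {
        "brightness": (0.8 if sunny else 0.2) + random.uniform(-0.1, 0.1),
        "tank_cue": (0.9 if has_tank else 0.1) + random.uniform(-0.1, 0.1),
        "label": has_tank,
    }

# Confounded training set: every tank photo is sunny, every empty field cloudy.
train = [make_example(True, True) for _ in range(50)] + \
        [make_example(False, False) for _ in range(50)]

def best_single_feature(data, features):
    """Pick the feature whose threshold-at-0.5 rule best fits the data."""
    def accuracy(f):
        return sum((ex[f] > 0.5) == ex["label"] for ex in data) / len(data)
    return max(features, key=accuracy)

# Both features fit the confounded training set perfectly, so the learner
# has no reason to prefer the tank cue over the weather.
chosen = best_single_feature(train, ["brightness", "tank_cue"])
print("learned feature:", chosen)

# Break the confound at test time: tanks photographed on cloudy days.
test = [make_example(True, False) for _ in range(50)]
acc = sum((ex["brightness"] > 0.5) == ex["label"] for ex in test) / len(test)
print("brightness rule on cloudy-day tanks:", acc)  # it learned the weather
```

On the held-out cloudy-day tanks, the brightness rule gets every example wrong, which is exactly the "it detected the weather, not the tanks" failure mode from the story.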
Josh: Exactly the problem. These systems are only as good—or bad—as the information they get. Take predictive policing, which Bridle examines really closely. The algorithms are trained on flawed or biased crime data, and so they inherit and amplify those biases. For example, data from over-policed minority neighborhoods doesn’t reflect actual crime; it reflects over-surveillance. Drew: So, the AI is basically saying "the numbers don't lie," when the numbers are already messed up. And instead of questioning why, we just double down on the flawed conclusion. It's like a dystopian twist on "garbage in, garbage out." Josh: Right. One awful example Bridle brings up is AI used to identify "potential criminals" based on facial features—an echo of phrenology. The ethical implications are staggering. But the even scarier thing is that it gives those old, discredited ideas a kind of false legitimacy, simply because it's "technology." Drew: Wait a minute—AI doing phrenology? That's not just unethical; it's cartoonishly evil! Are we really letting machines inherit the biases of some discredited 19th-century pseudoscience? Josh: Sadly, yes. And people's trust in these systems makes it even worse. Think about the "black box" problem Bridle talks about: AI decisions are often opaque. Even the creators can't always explain why an algorithm came to a certain conclusion. So who's accountable when things go wrong? The engineers? The politicians? The police using these tools? Drew: Yeah, and by the time you figure out who's responsible, the whole mess is completely out of hand. It sounds less like progress and more like Pandora's algorithmic box—once it's open, good luck getting the demons back inside.
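The self-confirming loop in predictive policing (patrols go where recorded crime is highest, but recording depends on where patrols go) can be sketched with a toy simulation. The districts, rates, and update rule are all invented for illustration; this is not any real deployed system:

```python
# Toy sketch of a self-confirming policing loop. Both districts have
# identical true crime rates; only the historical records differ.
# All numbers here are invented for illustration.

TRUE_RATE = {"district_a": 10.0, "district_b": 10.0}  # identical true crime
recorded = {"district_a": 12.0, "district_b": 8.0}    # seed disparity from
                                                      # historical over-policing

def allocate_patrols(recorded, total=10.0):
    """Send patrols in proportion to historically recorded incidents."""
    whole = sum(recorded.values())
    return {d: total * recorded[d] / whole for d in recorded}

def observe(true_rate, patrols):
    """Recorded incidents scale with patrol presence, not with true crime:
    crime that no one patrols past never enters the data."""
    return {d: true_rate[d] * min(1.0, patrols[d] / 10.0) for d in true_rate}

for _ in range(20):
    patrols = allocate_patrols(recorded)
    new = observe(TRUE_RATE, patrols)
    # Recorded history is a running blend of past records and new observations.
    recorded = {d: 0.5 * recorded[d] + 0.5 * new[d] for d in recorded}

print(recorded)  # district_a still "has" 1.5x the crime of district_b
```

Even after twenty rounds of fresh data, the system still reports district_a as 1.5x more criminal than district_b, despite identical true rates: the loop can never correct the seed bias, because it only measures what it patrols.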
Josh: Exactly. Bridle warns that without understanding these systems, we’re setting ourselves up for a fall like Epimetheus. To avoid that, we need to understand not just the mechanics of AI but also its social and ethical impacts. That’s the only way we can hold these systems, and the people using them, accountable. Drew: Systemic literacy, okay, sounds great in theory. But honestly, do most people even “want” to understand how these things work? Or are we all too willing to just let the tech run our lives? Josh: Well, I think some of it is complacency, but it's also the complexity. These systems are deliberately complicated. Bridle argues that understanding them also means recognizing power structures, like how predictive policing's biases help uphold the problems that already exist in society. Drew: So, algorithmic transparency isn’t just about showing all the math; it's about showing who benefits from the math. And surprise, surprise, it's usually not the people living under the algorithm's rule, is it? Josh: You nailed it. So, without that conscious foresight—our Promethean side—technology becomes a tool for making inequality worse instead of better. That’s why we need both critical awareness and ethical accountability moving forward. Drew: Otherwise, we're just handing our tech-fueled future over to Epimetheus and hoping he's learned his lesson this time. Spoiler alert: he hasn't.
Conclusion
Part 5
Josh: So, today we dove deep into James Bridle’s “New Dark Age”, and honestly, it’s a bit of a wake-up call. He really breaks down how our faith in technology, especially with all this data we're drowning in, can actually make things less clear, not more. And those automated systems we rely on? Super fragile. Drew: Yeah, he’s not shy about pointing out how algorithms can amplify misinformation, or even worse, perpetuate existing biases. It's like, we think we're making progress, but are we just speeding down the wrong road? Josh: Right. And then there's the whole "solutionism" thing - the idea that tech can fix everything. Bridle argues that a lot of these so-called solutions, like high-frequency trading, or even predictive policing, can actually make the problems worse. They seem like fixes on the surface, but they can really entrench inequalities. Drew: Which leads to the big question, I think. Convenience comes at a price, right? The more we depend on these systems we don't understand, the less control we have. It's not just about the tech itself, but who it benefits and who gets left in the dust. Josh: Exactly! That’s why he emphasizes "systemic literacy." It's not just about learning to code, it's about understanding the underlying structures, the power dynamics, everything that makes these technologies tick. Without that, we're just blindly trusting systems—that might not have our best interests at heart. Drew: So, what's the takeaway here? Are we just doomed? Josh: Not at all! Bridle's point is: we need to stop treating technology like it's some unstoppable force of nature. These are human-made choices. We have the power, and the responsibility, to question them, to demand transparency. We can shape these tools to prioritize fairness and wisdom, not just profit. Drew: Hmm, so, if we don’t…what happens? Are we just spiraling further into this "new dark age" he talks about? It almost sounds like he’s saying we’re at a crossroads. Josh: Exactly! 
It all comes down to making a really important choice: are we going to be like Prometheus, thinking ahead and planning carefully, or Epimetheus, just reacting after the fact? Drew: Heavy stuff. Josh: Absolutely. Thanks for joining us as we worked through Bridle’s “New Dark Age”. Keep asking questions, keep digging, and start really looking behind the curtain of the technologies that are shaping our lives. Catch you next time! Drew: Later, folks. Remember – question everything, especially the algorithms… and always read the fine print.