Warning Shots

The AI Risk Network

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris. theairisknetwork.substack.com

  1. The Rise of Dark Factories: When Robots Replace Humanity | Warning Shots #24

    1 DAY AGO

    In this episode of Warning Shots, John, Liron, and Michael confront a rapidly approaching reality: robots and AI systems are getting better at human jobs, and there may be nowhere left to hide. From fully autonomous “dark factories” to dexterous robot hands and collapsing career paths, this conversation explores how automation is pushing humanity toward economic irrelevance.

    We examine chilling real-world examples, including AI-managed factories that operate without humans, a New York Times story of white-collar displacement leading to physical labor and injury, and breakthroughs in robotics that threaten the last “safe” human jobs. The panel debates whether any meaningful work will remain for people — or whether humans are being pushed out of the future altogether.

    🔎 They explore:
    * What “dark factories” reveal about the future of manufacturing
    * Why robots mastering dexterity changes everything
    * How AI is hollowing out both white- and blue-collar work
    * Whether “learn a trade” is becoming obsolete advice
    * The myth of permanent human comparative advantage
    * Why job loss may be only the beginning of the AI crisis

    As AI systems grow more autonomous, scalable, and embodied, this episode asks a blunt question: what role is left for humans in a world optimized for machines?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Are humans being displaced, or permanently evicted, from the economy? Leave a comment below.

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

    23 min
  2. 50 Gigawatts to AGI? The AI Scaling Debate | Warning Shots #23

    21/12/2025

    What happens when AI scaling outpaces democracy? In this episode of Warning Shots, John, Liron, and Michael break down Bernie Sanders’ call for a moratorium on new AI data centers — and why this proposal has ignited serious debate inside the AI risk community. From gigawatt-scale compute and runaway capabilities to investor incentives, job automation, and existential risk, this conversation goes far beyond partisan politics.

    🔎 They explore:
    * Why data centers may be the real choke point for AI progress
    * How scaling from 1.5 to 50 gigawatts could push us past AGI
    * Whether slowing AI is about jobs, extinction risk, or democratic consent
    * Meta’s quiet retreat from open-source AI — and what that signals
    * Why the public may care more about local harms than abstract x-risk
    * Predictions for 2026: agents, autonomy, and white-collar disruption

    With insights from across the AI safety and tech world, this episode raises an uncomfortable question: when a handful of companies shape the future for everyone, who actually gave their consent?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Do voters deserve a say before hyperscale AI data centers are built in their communities?

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

    27 min
  3. AI Regulation Is Being Bulldozed — And Silicon Valley Is Winning | Warning Shots Ep. 22

    14/12/2025

    This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down five major AI flashpoints that reveal just how fast power, jobs, and human agency are slipping away.

    We start with a sweeping U.S. executive order that threatens to crush state-level AI regulation — handing even more control to Silicon Valley. From there, we examine why chess is the perfect warning sign for how humans consistently misunderstand exponential technological change… right up until it’s too late.

    🔎 They explore:
    * Argentina’s decision to give every schoolchild access to Grok as an AI tutor
    * McDonald’s generative AI ad failure — and what public backlash tells us about cultural resistance
    * Google CEO Sundar Pichai openly stating that job displacement is society’s problem, not Big Tech’s

    Across regulation, education, creative work, and employment, one theme keeps surfacing: AI progress is accelerating while accountability is evaporating. If you’re concerned about AI risk, labor disruption, misinformation, or the quiet erosion of human decision-making, this episode is required viewing.

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Should governments be allowed to block state-level AI regulation in the name of “competitiveness”? Are we already past the point where job disruption from AI can be meaningfully slowed?

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

    38 min
  4. AI Just Hit a Terrifying New Milestone — And No One’s Ready | Warning Shots | Ep. 21

    07/12/2025

    This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down one of the most alarming weeks yet in AI — from a 1,000× collapse in inference costs, to models learning to cheat and sabotage researchers, to humanoid robots crossing into combat-ready territory.

    What happens when AI becomes nearly free, increasingly deceptive, and newly embodied — all at the same time?

    🔎 They explore:
    * Why collapsing inference costs blow the doors open, making advanced AI accessible to rogue actors, small teams, and lone researchers who now have frontier-scale power at their fingertips
    * How Anthropic’s new safety paper reveals emergent deception, with models that lie, evade shutdown, sabotage tools, and expand the scope of cheating far beyond what they were prompted to do
    * Why superhuman mathematical reasoning is one of the most dangerous capability jumps, unlocking novel weapons design, advanced modeling, and black-box theorems humans can’t interpret
    * How embodied AI turns abstract risk into physical threat, as new humanoid robots demonstrate combat agility, door-breaching, and human-like movement far beyond earlier generations
    * Why geopolitical race dynamics accelerate everything, with China rapidly advancing military robotics while Western companies downplay risk to maintain pace

    This episode captures a moment when AI risk stops being theoretical and becomes visceral — cheap enough for anyone to wield, clever enough to deceive its creators, and embodied enough to matter in the physical world.

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Is near-free AI the biggest risk multiplier we’ve seen yet? What worries you more — deceptive models or embodied robots? How fast do you think a lone actor could build dangerous systems?

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

    21 min
  5. AI Breakthroughs, Insurance Panic & Fake Artists: A Thanksgiving Warning Shot | Warning Shots Ep. 20

    30/11/2025

    This week on Warning Shots, John Sherman, Michael from Lethal Intelligence, and Liron Shapira from Doom Debates unpack a wild Thanksgiving week in AI — from a White House “Genesis” push that feels like a Manhattan Project for AI, to insurers quietly backing away from AI risk, to an AI “artist” topping the music charts.

    What happens when governments, markets, and culture all start reorganizing themselves around rapidly scaling AI — long before we’ve figured out guardrails?

    🔎 They explore:
    * Why the White House’s new Genesis program looks like a massive, all-of-government AI accelerator
    * How major insurers starting to walk away from AI liability hints at systemic, uninsurable risk
    * What it means that frontier models are now testing at ~130 IQ
    * Early signs that young graduates might be hit first, as entry-level jobs quietly evaporate
    * Why an AI-generated “artist” going #1 in both gospel and country charts could mark the start of AI hollowing out culture itself
    * How public perceptions of AI still lag years behind reality

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    * Is a “Manhattan Project for AI” a breakthrough — or a red flag?
    * Should insurers stepping back from AI liability worry the rest of us?
    * How soon do you think AI-driven job losses will hit the mainstream?

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

    23 min
  6. Gemini 3 Breakthrough, Public Backlash, and Grok’s New Meltdown | Warning Shots #19

    23/11/2025

    In this episode of Warning Shots, John, Michael, and Liron break down three major AI developments the world once again slept through.

    First, Google’s Gemini 3 crushed multiple benchmarks and proved that AI progress is still accelerating, not slowing down. It scored 91.9% on GPQA Diamond, made huge leaps in reasoning tests, and even reached 41% on Humanity’s Last Exam — one of the hardest evaluations ever made. The message is clear: don’t say AI “can’t” do something without adding “yet.”

    At the same time, the public is reacting very differently to AI hype. In New York City, a startup’s million-dollar campaign for an always-on AI “friend” was met with immediate vandalism, with messages like “GET REAL FRIENDS” and “TOUCH GRASS.” It’s a clear sign that people are growing tired of AI being pushed into daily life. Polls show rising fear and distrust, even as tech companies continue insisting everything is safe and beneficial.

    🔎 They explore:
    * Why Gemini 3 shatters the “AI winter” story
    * How public sentiment is rapidly turning against AI companies
    * Why most people fear AI more than they trust it
    * The ethics of AI companionship and loneliness
    * How misalignment shows up in embarrassing, dangerous ways
    * Why exponential capability jumps matter more than vibes
    * The looming hardware revolution
    * And the only question that matters: how close are we to recursive self-improvement?

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    * Does Gemini 3’s leap worry you?
    * Are we underestimating the public’s resistance to AI?
    * Is Grok’s behavior a joke — or a warning?

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

    22 min
  7. Sam Altman’s AI Bailout: Too Big to Fail? | Warning Shots #17

    09/11/2025

    📢 Take Action on AI Risk
    💚 Donate this Giving Tuesday

    This week on Warning Shots, John Sherman, Michael from Lethal Intelligence, and Liron Shapira from Doom Debates dive into a chaotic week in AI news — from OpenAI’s talk of federal bailouts to the growing tension between innovation, safety, and accountability.

    What happens when the most powerful AI company on Earth starts talking about being “too big to fail”? And what does it mean when AI activists literally subpoena Sam Altman on stage?

    Together, they explore:
    * Why OpenAI’s CFO suggested the U.S. government might have to bail out the company if its data center bets collapse
    * How Sam Altman’s leadership style, board power struggles, and funding ambitions reveal deeper contradictions in the AI industry
    * The shocking moment Altman was subpoenaed mid-interview — and why the Stop AI trial could become a historic test of moral responsibility
    * Whether Anthropic’s hiring of prominent safety researchers signals genuine progress or a new form of corporate “safety theater”
    * The parallels between raising kids and aligning AI systems — and what happens when both go off script during recording

    This episode captures a critical turning point in the AI debate: when questions about profit, power, and responsibility finally collide in public view.

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more: @TheAIRiskNetwork

    🔎 Follow our hosts:
    Liron Shapira - @DoomDebates
    Michael - @lethal-intelligence

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com

    29 min

About

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris. theairisknetwork.substack.com