Warning Shots

The AI Risk Network

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris. theairisknetwork.substack.com

  1. Moltbook Madness: AIs Unleashed | Warning Shots #29

    4 days ago

    In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) unpack Moltbook: a bizarre, fast-moving experiment where AI agents interact in public, form cultures, invent religions, demand privacy, and even coordinate to rent humans for real-world tasks. What began as a novelty Reddit-style forum quickly turned into a live demonstration of AI agency, coordination, and emergent behavior, all unfolding in under a week.

    The hosts explore why this moment feels different, how agentic AI systems are already escaping the “tool” framing, and what it means when humans become just another actuator in an AI-driven system. From AI ant colonies and Toy Story analogies to Rent-A-Human marketplaces and early attempts at self-improvement and secrecy, this episode examines why Moltbook isn’t the danger itself, but a warning shot for what happens as AI capabilities keep accelerating. This is a sobering conversation about agency, control, and why the line between experimentation and loss of oversight may already be blurring.

    🔎 They explore:
    * How AI agents begin coordinating without central control
    * Why Moltbook makes AI “agency” visible to non-experts
    * The emergence of AI cultures, norms, and privacy demands
    * What it means when AIs can rent humans to act in the world
    * Why early failures don’t reduce long-term risk
    * How capability growth matters more than any single platform
    * Why this may be a preview, not an anomaly

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    At what point does experimentation with AI agents become loss of control? Are we already past that point? Let us know what you think in the comments.

    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

    30 min
  2. Anthropic’s “Safe AI” Narrative Is Falling Apart | Warning Shots #28

    Feb 1

    What happens when the people building the most powerful AI systems in the world admit the risks, then keep accelerating anyway? In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to break down Dario Amodei’s latest essay, The Adolescent Phase of AI, and why its calm, reassuring tone may be far more dangerous than open alarmism. They unpack how “safe AI” narratives can dull public urgency even as capabilities race ahead and control remains elusive.

    The conversation expands to the Doomsday Clock moving closer to midnight, with AI now explicitly named as an extinction-amplifying risk, and the unsettling news that AI systems like Grok are beginning to outperform humans at predicting real-world outcomes. From intelligence explosion dynamics and bioweapons risk to unemployment, prediction markets, and the myth of “surgical” AI safety, this episode asks a hard question: What does responsibility even mean when no one is truly in control? This is a blunt, unsparing conversation about power, incentives, and why the absence of “adults in the room” may be the defining danger of the AI era.

    🔎 They explore:
    * Why “responsible acceleration” may be incoherent
    * How AI amplifies nuclear, biological, and geopolitical risk
    * Why prediction superiority is a critical AGI warning sign
    * The psychological danger of trusted elites projecting confidence
    * Why AI safety narratives can suppress public urgency
    * What it means to build systems no one can truly stop

    As the people building AI admit the risks and keep going anyway, this episode asks the question no one wants to answer: what does “responsibility” mean when there’s no stop button?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Do calm, reassuring AI narratives reduce public panic, or dangerously delay action? Let us know what you think in the comments.

    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

    32 min
  3. They Know This Is Dangerous... And They’re Still Racing | Warning Shots #27

    Jan 25

    In this episode of Warning Shots, John, Liron, and Michael talk through what might be one of the most revealing weeks in the history of AI... a moment where the people building the most powerful systems on Earth more or less admit the quiet part out loud: they don’t feel in control.

    We start with a jaw-dropping moment from Davos, where Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) publicly say they’d be willing to pause or slow AI development, but only if everyone else does too. That sounds reasonable on the surface, but it actually exposes a much deeper failure of governance, coordination, and agency.

    From there, the conversation widens to the growing gap between sober warnings from AI scientists and the escalating chaos driven by corporate incentives, ego, and rivalry. Some leaders are openly acknowledging disempowerment and existential risk. Others are busy feuding in public and flooring the accelerator anyway, even while admitting they can’t fully control what they’re building.

    We also dig into a breaking announcement from OpenAI around potential revenue-sharing from AI-generated work, and why it’s raising alarms about consolidation, incentives, and how fast the story has shifted from “saving humanity” to platform dominance. Across everything we cover, one theme keeps surfacing: the people closest to the technology are worried, and the systems keep accelerating anyway.

    🔎 They explore:
    * Why top AI CEOs admit they would slow down, but won’t act alone
    * How competition and incentives override safety concerns
    * What “pause AI” really means in a multipolar world
    * The growing gap between AI scientists and corporate leadership
    * Why public infighting masks deeper alignment failures
    * How monetization pressures accelerate existential risk

    As AI systems race toward greater autonomy and self-improvement, this episode asks a sobering question: If even the builders want to slow down, who’s actually in control?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Should AI development be paused even if others refuse? Let us know what you think in the comments.

    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

    26 min
  4. Grok Goes Rogue: AI Scandals, the Pentagon, and the Alignment Problem

    Jan 18

    In this episode of Warning Shots, John, Liron, and Michael dig into a chaotic week for AI safety, one that perfectly exposes how misaligned, uncontrollable, and politically entangled today’s AI systems already are.

    We start with Grok, xAI’s flagship model, which sparked international backlash after generating harmful content and raising serious concerns about child safety and alignment. While some dismiss this as a “minor” issue or simple misuse, the hosts argue it’s a clear warning sign of a deeper problem: systems that don’t reliably follow human values, and can’t be constrained to do so.

    From there, the conversation takes a sharp turn as Grok is simultaneously embraced by the U.S. military, igniting fears about escalation, feedback loops, and what happens when poorly aligned models are trained on real-world warfare data. The episode also explores a growing rift within the AI safety movement itself: should advocates focus relentlessly on extinction risk, or meet the public where their immediate concerns already are?

    The discussion closes with a rare bright spot, a moment in Congress where existential AI risk is taken seriously, and a candid reflection on why traditional messaging around AI safety may no longer be working. Throughout the episode, one idea keeps resurfacing: AI risk isn’t abstract or futuristic anymore. It’s showing up now, in culture, politics, families, and national defense.

    🔎 They explore:
    * What the Grok controversy reveals about AI alignment
    * Why child safety issues may be the public’s entry point to existential risk
    * The dangers of deploying loosely aligned AI in military systems
    * How incentives distort AI safety narratives
    * Whether purity tests are holding the AI safety movement back
    * Signs that policymakers may finally be paying attention

    As AI systems grow more powerful in society, this episode asks a hard question: If we can’t control today’s models, what happens when they’re far more capable tomorrow?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Should AI safety messaging focus on extinction risk alone, or start with the harms people already see? Let us know in the comments.

    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

    32 min
  5. NVIDIA’s CEO Says AGI Is “Biblical” — Insiders Say It’s Already Here | Warning Shots #25

    Jan 11

    In this episode of Warning Shots, John, Liron, and Michael unpack a growing disconnect at the heart of the AI boom: the people building the technology insist existential risks are far away, while the people using it increasingly believe AGI is already here.

    We kick things off with NVIDIA CEO Jensen Huang brushing off AI risk as something “biblically far away”, even while the companies buying his chips are racing full-speed toward more autonomous systems. From there, the conversation fans out to some real-world pressure points that don’t get nearly enough attention: local communities successfully blocking massive AI data centers, why regulation and international treaties keep falling short, and what it means when we start getting comfortable with AI making serious decisions.

    Across these topics, one theme dominates: AI progress feels incremental, until suddenly it doesn’t. This episode explores how “common sense” extrapolation fails in the face of intelligence explosions, why public awareness lags so far behind insider reality, and how power over compute, health, and infrastructure may shape humanity’s future.

    🔎 They explore:
    * Why AI leaders downplay risks while insiders panic
    * Whether Claude Code represents a tipping point toward AGI
    * How financial incentives shape AI narratives
    * Why data centers are becoming a key choke point
    * The limits of regulation and international treaties
    * What happens when AI controls healthcare decisions
    * How “sugar highs” in AI adoption can mask long-term danger

    As AI systems grow more capable, autonomous, and embedded, this episode asks a stark question: Are we still in control, or just along for the ride?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Is AGI already here, or are we fooling ourselves about how close we are? Drop your thoughts in the comments.

    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

    25 min
  6. The Rise of Dark Factories: When Robots Replace Humanity | Warning Shots #24

    Jan 4

    In this episode of Warning Shots, John, Liron, and Michael confront a rapidly approaching reality: robots and AI systems are getting better at human jobs, and there may be nowhere left to hide. From fully autonomous “dark factories” to dexterous robot hands and collapsing career paths, this conversation explores how automation is pushing humanity toward economic irrelevance.

    We examine chilling real-world examples, including AI-managed factories that operate without humans, a New York Times story of white-collar displacement leading to physical labor and injury, and breakthroughs in robotics that threaten the last “safe” human jobs. The panel debates whether any meaningful work will remain for people, or whether humans are being pushed out of the future altogether.

    🔎 They explore:
    * What “dark factories” reveal about the future of manufacturing
    * Why robots mastering dexterity changes everything
    * How AI is hollowing out both white- and blue-collar work
    * Whether “learn a trade” is becoming obsolete advice
    * The myth of permanent human comparative advantage
    * Why job loss may be only the beginning of the AI crisis

    As AI systems grow more autonomous, scalable, and embodied, this episode asks a blunt question: What role is left for humans in a world optimized for machines?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Are humans being displaced, or permanently evicted, from the economy? Leave a comment below.

    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

    23 min
  7. 50 Gigawatts to AGI? The AI Scaling Debate | Warning Shots #23

    Dec 21, 2025

    What happens when AI scaling outpaces democracy? In this episode of Warning Shots, John, Liron, and Michael break down Bernie Sanders’ call for a moratorium on new AI data centers, and why this proposal has ignited serious debate inside the AI risk community. From gigawatt-scale compute and runaway capabilities to investor incentives, job automation, and existential risk, this conversation goes far beyond partisan politics.

    🔎 They explore:
    * Why data centers may be the real choke point for AI progress
    * How scaling from 1.5 to 50 gigawatts could push us past AGI
    * Whether slowing AI is about jobs, extinction risk, or democratic consent
    * Meta’s quiet retreat from open-source AI, and what that signals
    * Why the public may care more about local harms than abstract x-risk
    * Predictions for 2026: agents, autonomy, and white-collar disruption

    With insights from across the AI safety and tech world, this episode raises an uncomfortable question: When a handful of companies shape the future for everyone, who actually gave their consent?

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Do voters deserve a say before hyperscale AI data centers are built in their communities?

    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

    27 min
  8. AI Regulation Is Being Bulldozed — And Silicon Valley Is Winning | Warning Shots Ep. 21

    Dec 14, 2025

    This week on Warning Shots, John Sherman, Liron Shapira from Doom Debates, and Michael from Lethal Intelligence break down five major AI flashpoints that reveal just how fast power, jobs, and human agency are slipping away.

    We start with a sweeping U.S. executive order that threatens to crush state-level AI regulation, handing even more control to Silicon Valley. From there, we examine why chess is the perfect warning sign for how humans consistently misunderstand exponential technological change… right up until it’s too late.

    🔎 They explore:
    * Argentina’s decision to give every schoolchild access to Grok as an AI tutor
    * McDonald’s generative AI ad failure, and what public backlash tells us about cultural resistance
    * Google CEO Sundar Pichai openly stating that job displacement is society’s problem, not Big Tech’s

    Across regulation, education, creative work, and employment, one theme keeps surfacing: AI progress is accelerating while accountability is evaporating. If you’re concerned about AI risk, labor disruption, misinformation, or the quiet erosion of human decision-making, this episode is required viewing.

    If it’s Sunday, it’s Warning Shots.

    📺 Watch more on The AI Risk Network

    🔗 Follow our hosts:
    → Liron Shapira - Doom Debates
    → Michael - @lethal-intelligence

    🗨️ Join the Conversation
    Should governments be allowed to block state-level AI regulation in the name of “competitiveness”? Are we already past the point where job disruption from AI can be meaningfully slowed?

    Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

    38 min
