Doom Debates

Liron Shapira

It's time to talk about the end of the world! lironshapira.substack.com

  1. 1 day ago

    His P(Doom) Is Only 2.6% — AI Doom Debate with Bentham's Bulldog, a.k.a. Matthew Adelstein

    Get ready for a rematch with the one & only Bentham’s Bulldog, a.k.a. Matthew Adelstein! Our first debate covered a wide range of philosophical topics. Today’s Debate #2 is all about Matthew’s new argument against the inevitability of AI doom. He comes out swinging with a calculated P(Doom) of just 2.6%, based on a multi-step probability chain that I challenge as potentially falling into a “Type 2 Conjunction Fallacy” (a.k.a. Multiple Stage Fallacy). We clash on whether to expect “alignment by default” and the nature of future AI architectures. While Matthew sees current RLHF success as evidence that AIs will likely remain compliant, I argue that we’re building “Goal Engines” — superhuman optimization modules that act like nuclear cores wrapped in friendly personalities. We debate whether these engines can be safely contained, or if the capability to map goals to actions is inherently dangerous and prone to exfiltration. Despite our different forecasts (my 50% vs. his sub-10%), we actually land in the “sane zone” together on some key policy ideas, like the potential necessity of a global pause. While Matthew’s case for a low P(Doom) hasn’t convinced me, I consider his post and his engagement with me to be super high quality and good faith. We’re not here to score points; we just want to better predict how the intelligence explosion will play out.

    Timestamps
    00:00:00 — Teaser
    00:00:35 — Bentham’s Bulldog Returns to Doom Debates
    00:05:43 — Higher-Order Evidence: Why Skepticism is Warranted
    00:11:06 — What’s Your P(Doom)™
    00:14:38 — The “Multiple Stage Fallacy” Objection
    00:21:48 — The Risk of Warring AIs vs. Misalignment
    00:27:29 — Historical Pessimism: The “Boy Who Cried Wolf”
    00:33:02 — Comparing AI Risk to Climate Change & Nuclear War
    00:38:59 — Alignment by Default via Reinforcement Learning
    00:46:02 — The “Goal Engine” Hypothesis
    00:53:13 — Is Psychoanalyzing Current AI Valid for Future Systems?
    01:00:17 — Winograd Schemas & The Fragility of Value
    01:09:15 — The Nuclear Core Analogy: Dangerous Engines in Friendly Wrappers
    01:16:16 — The Discontinuity of Unstoppable AI
    01:23:53 — Exfiltration: Running Superintelligence on a Laptop
    01:31:37 — Evolution Analogy: Selection Pressures for Alignment
    01:39:08 — Commercial Utility as a Force for Constraints
    01:46:34 — Can You Isolate the “Goal-to-Action” Module?
    01:54:15 — Will Friendly Wrappers Successfully Control Superhuman Cores?
    02:04:01 — Moral Realism and Missing Out on Cosmic Value
    02:11:44 — The Paradox of AI Solving the Alignment Problem
    02:19:11 — Policy Agreements: Global Pauses and China
    02:26:11 — Outro: PauseCon DC 2026 Promo

    Links
    Bentham’s Bulldog Official Substack — https://benthams.substack.com
    The post we debated — https://benthams.substack.com/p/against-if-anyone-builds-it-everyone
    Apply to PauseCon DC 2026 here or via https://pauseai-us.org
    Forethought Institute’s paper: Preparing for the Intelligence Explosion
    Tom Davidson (Forethought Institute)’s post: How quick and big would a software intelligence explosion be?
    Scott Alexander on the Coffeepocalypse Argument

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    2 h 28 min
  2. Feb 4

    What Dario Amodei Misses In "The Adolescence of Technology" — Reaction With MIRI's Harlan Stewart

    Harlan Stewart works in communications for the Machine Intelligence Research Institute (MIRI). In this episode, Harlan and I give our honest opinions on Dario Amodei's new essay "The Adolescence of Technology".

    Timestamps
    0:00:00 — Cold Open
    0:00:47 — How Harlan Stewart Got Into AI Safety
    0:02:30 — What’s Your P(Doom)?™
    0:04:09 — The “Doomer” Label
    0:06:13 — Overall Reaction to Dario’s Essay: The Missing Mood
    0:09:15 — The Rosy Take on Dario’s Essay
    0:10:42 — Character Assassination & Low Blows
    0:13:39 — Dario Amodei is Shifting the Overton Window in The Wrong Direction
    0:15:04 — Object-Level vs. Meta-Level Criticisms
    0:17:07 — The “Inevitability” Strawman Used by Dario
    0:19:03 — Dario Refers to Doom as a Self-Fulfilling Prophecy
    0:22:38 — Dismissing Critics as “Too Theoretical”
    0:43:18 — The Problem with Psychoanalyzing AI
    0:56:12 — “Intellidynamics” & Reflective Stability
    1:07:12 — Why Is Dario Dismissing an AI Pause?
    1:11:45 — Final Takeaways

    Links
    Harlan’s X — https://x.com/HumanHarlan
    “The Adolescence of Technology” by Dario Amodei — https://www.darioamodei.com/essay/the-adolescence-of-technology

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1 h 16 min
  3. Jan 27

    Q&A: Is Liron too DISMISSIVE of AI Harms? + New Studio, Demis Would #PauseAI, AI Water Use Debate

    Check out the new Doom Debates studio in this Q&A with special guest Producer Ori! Liron gets into a heated discussion about whether doomers must validate short-term risks, like data center water usage, in order to build a successful political coalition. Originally streamed on Saturday, January 24.

    Timestamps
    00:00:00 — Cold Open
    00:00:26 — Introduction and Studio Tour
    00:08:17 — Q&A: Alignment, Accelerationism, and Short-Term Risks
    00:18:15 — Dario Amodei, Davos, and AI Pause
    00:27:42 — Producer Ori Joins: Locations and Vibes
    00:35:31 — Legislative Strategy vs. Social Movements (The Tobacco Playbook)
    00:45:01 — Ethics of Investing in or Working for AI Labs
    00:54:23 — Defining Superintelligence and Human Limitations
    01:02:58 — Technical Risks: Self-Replication and Cyber Warfare
    01:19:08 — Live Debate with Zane: Short-Term vs. Long-Term Strategy
    01:53:15 — Marketing Doom Debates and Guest Outreach
    01:56:45 — Live Call with Jonas: Scenarios for Survival
    02:05:52 — Conclusion and Mission Statement

    Links
    Liron’s X Post about Destiny — https://x.com/liron/status/2015144778652905671?s=20
    Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77 — https://www.youtube.com/watch?v=IUX00c5x2UM

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    2 h 9 min
  4. Jan 20

    Taiwan's Cyber Ambassador-At-Large Says Humans & AI Can FOOM Together

    Audrey Tang was the youngest minister in Taiwanese history. Now she's working to align AI with democratic principles as Taiwan's Cyber Ambassador. In this debate, I probe her P(Doom) and stress-test her vision for safe AI development.

    Timestamps
    00:00:00 — Episode Preview
    00:01:43 — Introducing Audrey Tang, Cyber Ambassador of Taiwan
    00:07:20 — Being Taiwan’s First Digital Minister
    00:17:19 — What's Your P(Doom)?™
    00:21:10 — Comparing AI Risk to Nuclear Risk
    00:22:53 — The Statement on AI Extinction Risk
    00:27:29 — Doomerism as a Hyperstition
    00:30:51 — Audrey Explains Her Vision of "Plurality"
    00:37:17 — Audrey Explains Her Principles of Civic Ethics, The "6-Pack of Care"
    00:45:58 — AGI Timelines: "It's Already Here"
    00:54:41 — The Apple Analogy
    01:03:09 — What If AI FOOMs?
    01:11:19 — What AI Can vs What AI Will Do
    01:15:20 — Lessons from COVID-19
    01:19:59 — Is Society Ready? Audrey Reflects on a Personal Experience with Mortality
    01:23:50 — AI Alignment Cannot Be Top-Down
    01:34:04 — AI-as-Mother vs AI-as-Gardener
    01:37:26 — China and the Geopolitics of AI Chip Manufacturing in Taiwan
    01:40:47 — Red Lines, International Treaties, and the Off Button
    01:48:26 — Debate Wrap-Up

    Links
    Plurality: The Future of Collaborative Technology and Democracy by Glen Weyl and Audrey Tang — https://www.amazon.com/Plurality-Future-Collaborative-Technology-Democracy/dp/B0D98RPKCK
    Audrey’s X — https://x.com/audreyt
    Audrey’s Wikipedia — https://en.wikipedia.org/wiki/Audrey_Tang

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1 h 52 min
  5. Jan 13

    Liron Enters Bannon's War Room to Explain Why AI Could End Humanity

    I joined Steve Bannon’s War Room Battleground to talk about AI doom. With host Joe Allen, we cover AGI timelines, raising kids with a high P(Doom), and why improving our survival odds requires a global wake-up call.

    Timestamps
    00:00:00 — Episode Preview
    00:01:17 — Joe Allen opens the show and introduces Liron Shapira
    00:04:06 — Liron: What’s Your P(Doom)?
    00:05:37 — How Would an AI Take Over?
    00:07:20 — The Timeline to AGI
    00:08:17 — Benchmarks & AI Passing the Turing Test
    00:14:43 — Liron Is Typically a Techno-Optimist
    00:18:00 — Raising a Family with a High P(Doom)
    00:23:48 — Mobilizing a Grassroots AI Survival Campaign
    00:26:45 — Final Message: A Wake-Up Call
    00:29:23 — Joe Allen’s Closing Message to the War Room Posse

    Links:
    Joe’s Substack — https://substack.com/@joebot
    Joe’s Twitter — https://x.com/JOEBOTxyz
    Bannon’s War Room Twitter — https://x.com/Bannons_WarRoom
    WarRoom Battleground EP 922: AI Doom Debates with Liron Shapira on Rumble — https://rumble.com/v742oo4-warroom-battleground-ep-922-ai-doom-debates-with-liron-shapira.html

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    31 min
  6. Jan 5

    Noah Smith vs. Liron Shapira — Will AI spare our lives AND our jobs?

    Economist Noah Smith is the author of Noahpinion, one of the most popular Substacks in the world. Far from worrying about human extinction from superintelligent AI, Noah is optimistic that AI will create a world where humans still have plentiful, high-paying jobs! In this debate, I stress-test his rosy outlook. Let’s see if Noah can instill in us more confidence about humanity’s rapidly approaching AI future.

    Timestamps
    00:00:00 - Episode Preview
    00:01:41 - Introducing Noah Smith
    00:03:19 - What’s Your P(Doom)™
    00:04:40 - Good vs. Bad Transhumanist Outcomes
    00:15:17 - Catastrophe vs. Total Extinction
    00:17:15 - Mechanisms of Doom
    00:27:16 - The AI Persuasion Risk
    00:36:20 - Instrumental Convergence vs. Peace
    00:53:08 - The “One AI” Breakout Scenario
    01:01:18 - The “Stoner AI” Theory
    01:08:49 - Importance of Reflective Stability
    01:14:50 - Orthogonality & The Waymo Argument
    01:21:18 - Comparative Advantage & Jobs
    01:27:43 - Wealth Distribution & Robot Lords
    01:34:34 - Supply Curves & Resource Constraints
    01:43:38 - Policy of Reserving Human Resources
    01:48:28 - Closing: The Case for Optimism

    Links
    Noah’s Substack — https://noahpinion.blog
    “Plentiful, high-paying jobs in the age of AI” — https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the
    “My thoughts on AI safety” — https://www.noahpinion.blog/p/my-thoughts-on-ai-safety
    Noah’s Twitter — https://x.com/noahpinion

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1 h 55 min
  7. Dec 30, 2025

    I Debated Beff Jezos and His "e/acc" Army

    In September of 2023, when OpenAI’s GPT-4 was still a fresh innovation and people were just beginning to wrap their heads around large language models, I was invited to debate Beff Jezos, Bayeslord, and other prominent “effective accelerationists,” a.k.a. “e/acc” folks, on an X Space. E/accs think building artificial superintelligence is unlikely to disempower humanity and doom the future, because that’d be an illegal exception to the rule that accelerating new technology is always the highest-expected-value choice for humanity. As you know, I disagree — I think doom is an extremely likely and imminent possibility. This debate took place 9 months before I started Doom Debates, and it was one of the experiences that made me realize debating AI doom was my calling. It’s also the only time Beff Jezos has ever not been too chicken to debate me.

    Timestamps
    00:00:00 — Liron’s New Intro
    00:04:15 — Debate Starts Here: Litigating FOOM
    00:06:18 — Defining the Recursive Feedback Loop
    00:15:05 — The Two-Part Doomer Thesis
    00:26:00 — When Does a Tool Become an Agent?
    00:44:02 — The Argument for Convergent Architecture
    00:46:20 — Mathematical Objections: Ergodicity and Eigenvalues
    01:03:46 — Bayeslord Enters: Why Speed Doesn’t Matter
    01:12:40 — Beff Jezos Enters: Physical Priors vs. Internet Data
    01:13:49 — The 5% Probability of Doom by GPT-5
    01:20:09 — Chaos Theory and Prediction Limits
    01:27:56 — Algorithms vs. Hardware Constraints
    01:35:20 — Galactic Resources vs. Human Extermination
    01:54:13 — The Intelligence Bootstrapping Script Scenario
    02:02:13 — The 10-Megabyte AI Virus Debate
    02:11:54 — The Nuclear Analogy: Noise Canceling vs. Rubble
    02:37:39 — Controlling Intelligence: The Roman Empire Analogy
    02:44:53 — Real-World Latency and API Rate Limits
    03:03:11 — The Difficulty of the Off Button
    03:24:47 — Why Liron is “e/acc at Heart”

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    3 h 53 min
  8. Dec 24, 2025

    Doom Debates LIVE Call-In Show! Listener Q&A about AGI, evolution vs. engineering, shoggoths & more

    AGI timelines, offense/defense balance, evolution vs. engineering, how to lower P(Doom), Eliezer Yudkowsky, and much more!

    Timestamps:
    00:00 Trailer
    03:10 Is My P(Doom) Lowering?
    11:29 First Caller: AI Offense vs Defense Balance
    16:50 Superintelligence Skepticism
    25:05 Agency and AI Goals
    29:06 Communicating AI Risk
    36:35 Attack vs Defense Equilibrium
    38:22 Can We Solve Outer Alignment?
    54:47 What is Your P(Pocket Nukes)?
    1:00:05 The “Shoggoth” Metaphor Is Outdated
    1:06:23 Should I Reframe the P(Doom) Question?
    1:12:22 How YOU Can Make a Difference
    1:24:43 Can AGI Beat Biology?
    1:39:22 Agency and Convergent Goals
    1:59:56 Viewer Poll: What Content Should I Make?
    2:26:15 AI Warning Shots
    2:32:12 More Listener Questions: Debate Tactics, Getting a PhD, Specificity
    2:53:53 Closing Thoughts

    Links:
    Support PauseAI — https://pauseai.info/
    Support PauseAI US — https://www.pauseai-us.org/
    Support LessWrong / Lightcone Infrastructure — LessWrong is fundraising!
    Support MIRI — MIRI’s 2025 Fundraiser

    About the show: Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    2 h 55 min

About

It's time to talk about the end of the world! lironshapira.substack.com
