Doom Debates

Liron Shapira

It's time to talk about the end of the world! lironshapira.substack.com

  1. 5D AGO

    Doom Debates LIVE Call-In Show! Listener Q&A about AGI, evolution vs. engineering, shoggoths & more

    AGI timelines, offense/defense balance, evolution vs engineering, how to lower P(Doom), Eliezer Yudkowsky, and much more!

    Timestamps:
    * 00:00 Trailer
    * 03:10 Is My P(Doom) Lowering?
    * 11:29 First Caller: AI Offense vs Defense Balance
    * 16:50 Superintelligence Skepticism
    * 25:05 Agency and AI Goals
    * 29:06 Communicating AI Risk
    * 36:35 Attack vs Defense Equilibrium
    * 38:22 Can We Solve Outer Alignment?
    * 54:47 What is Your P(Pocket Nukes)?
    * 1:00:05 The “Shoggoth” Metaphor Is Outdated
    * 1:06:23 Should I Reframe the P(Doom) Question?
    * 1:12:22 How YOU Can Make a Difference
    * 1:24:43 Can AGI Beat Biology?
    * 1:39:22 Agency and Convergent Goals
    * 1:59:56 Viewer Poll: What Content Should I Make?
    * 2:26:15 AI Warning Shots
    * 2:32:12 More Listener Questions: Debate Tactics, Getting a PhD, Specificity
    * 2:53:53 Closing Thoughts

    Links:
    * Support PauseAI — https://pauseai.info/
    * Support PauseAI US — https://www.pauseai-us.org/
    * Support LessWrong / Lightcone Infrastructure — LessWrong is fundraising!
    * Support MIRI — MIRI’s 2025 Fundraiser

    About the show: Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    2h 55m
  2. DEC 17

    DOOMER vs. BUILDER — AI Doom Debate with Devin Elliot, Software Engineer & Retired Pro Snowboarder

    Devin Elliot is a former pro snowboarder turned software engineer who has logged thousands of hours building AI systems. His P(Doom) is a flat ⚫. He argues that worrying about an AI takeover is as irrational as fearing your car will sprout wings and fly away. We spar over the hard limits of current models: Devin insists LLMs are hitting a wall, relying entirely on external software “wrappers” to feign intelligence. I push back, arguing that raw models are already demonstrating native reasoning and algorithmic capabilities. Devin also argues for decentralization by claiming that nuclear proliferation is safer than centralized control. We end on a massive timeline split: I see superintelligence in a decade, while he believes we’re a thousand years away from being able to “grow” computers that are truly intelligent.

    Timestamps:
    * 00:00:00 Episode Preview
    * 00:01:03 Intro: Snowboarder to Coder
    * 00:03:30 "I Do Not Have a P(Doom)"
    * 00:06:47 Nuclear Proliferation & Centralized Control
    * 00:10:11 The "Spotify Quality" House Analogy
    * 00:17:15 Ideal Geopolitics: Decentralized Power
    * 00:25:22 Why AI Can't "Fly Away"
    * 00:28:20 The Long Addition Test: Native or Tool?
    * 00:38:26 Is Non-Determinism a Feature or a Bug?
    * 00:52:01 The Impossibility of Mind Uploading
    * 00:57:46 "Growing" Computers from Cells
    * 01:02:52 Timelines: 10 Years vs. 1,000 Years
    * 01:11:40 "Plastic Bag Ghosts" & Builder Intuition
    * 01:13:17 Summary of the Debate
    * 01:15:30 Closing Thoughts

    Links:
    * Devin’s Twitter — https://x.com/devinjelliot

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1h 17m
  3. DEC 11

    PhD AI Researcher Says P(Doom) is TINY — Debate with Michael Timothy Bennett

    Dr. Michael Timothy Bennett is an award-winning young researcher who has developed a new formal framework for understanding intelligence. He has a TINY P(Doom) because he claims superintelligence will be resource-constrained and tend toward cooperation. In this lively debate, I stress-test Michael’s framework and debate whether its theorized constraints will actually hold back superintelligent AI.

    Timestamps:
    * 00:00 Trailer
    * 01:41 Introducing Michael Timothy Bennett
    * 04:33 What’s Your P(Doom)?™
    * 10:51 Michael’s Thesis on Intelligence: “Abstraction Layers”, “Adaptation”, “Resource Efficiency”
    * 25:36 Debate: Is Einstein Smarter Than a Rock?
    * 39:07 “Embodiment”: Michael’s Unconventional Computation Theory vs Standard Computation
    * 48:28 “W-Maxing”: Michael’s Intelligence Framework vs. a Goal-Oriented Framework
    * 59:47 Debating AI Doom
    * 1:09:49 Debating Instrumental Convergence
    * 1:24:00 Where Do You Get Off The Doom Train™ — Identifying The Cruxes of Disagreement
    * 1:44:13 Debating AGI Timelines
    * 1:49:10 Final Recap

    Links:
    * Michael’s website — https://michaeltimothybennett.com
    * Michael’s Twitter — https://x.com/MiTiBennett
    * Michael’s latest paper, “How To Build Conscious Machines” — https://osf.io/preprints/thesiscommons/wehmg_v1?view_only

    Doom Debates' Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1h 52m
  4. NOV 29

    Facing AI Doom, Lessons from Daniel Ellsberg (Pentagon Papers) — Michael Ellsberg

    Michael Ellsberg, son of the legendary Pentagon Papers leaker Daniel Ellsberg, joins me to discuss the chilling parallels between his father’s nuclear war warnings and today’s race to AGI. We discuss Michael’s 99% probability of doom, his personal experience being “obsoleted” by AI, and the urgent moral duty for insiders to blow the whistle on AI’s outsize risks.

    Timestamps:
    * 0:00 Intro
    * 1:29 Introducing Michael Ellsberg, His Father Daniel Ellsberg, and The Pentagon Papers
    * 5:49 Vietnam War Parallels to AI: Lies and Escalation
    * 25:23 The Doomsday Machine & Nuclear Insanity
    * 48:49 Mutually Assured Destruction vs. Superintelligence Risk
    * 55:10 Evolutionary Dynamics: Replicators and the End of the “Dream Time”
    * 1:10:17 What’s Your P(doom)?™
    * 1:14:49 Debating P(Doom) Disagreements
    * 1:26:18 AI Unemployment Doom
    * 1:39:14 Doom Psychology: How to Cope with Existential Risk
    * 1:50:56 The “Joyless Singularity”: Aligned AI Might Still Freeze Humanity
    * 2:09:00 A Call to Action for AI Insiders

    Show Notes:
    * Michael Ellsberg’s website — https://www.ellsberg.com/
    * Michael’s Twitter — https://x.com/MichaelEllsberg
    * Daniel Ellsberg’s website — https://www.ellsberg.net/
    * The upcoming book, “Truth and Consequence” — https://geni.us/truthandconsequence
    * Michael’s AI-related substack “Mammalian Wetware” — https://mammalianwetware.substack.com/
    * Daniel’s debate with Bill Kristol in the run-up to the Iraq war — https://www.youtube.com/watch?v=HyvsDR3xnAg

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    2h 16m
  5. NOV 21

    Max Tegmark vs. Dean Ball: Should We BAN Superintelligence?

    Today's Debate: Should we ban the development of artificial superintelligence until scientists agree it is safe and controllable?

    Arguing FOR banning superintelligence until there’s a scientific consensus that it’ll be done safely and controllably and with strong public buy-in: Max Tegmark. He is an MIT professor, bestselling author, and co-founder of the Future of Life Institute whose research has focused on artificial intelligence for the past 8 years.

    Arguing AGAINST banning superintelligent AI development: Dean Ball. He is a Senior Fellow at the Foundation for American Innovation who served as a Senior Policy Advisor at the White House Office of Science and Technology Policy under President Trump, where he helped craft America’s AI Action Plan.

    Two of the leading voices on AI policy engaged in high-quality, high-stakes debate for the benefit of the public! This is why I got into the podcast game — because I believe debate is an essential tool for humanity to reckon with the creation of superhuman thinking machines.

    Timestamps:
    * 0:00 - Episode Preview
    * 1:41 - Introducing The Debate
    * 3:38 - Max Tegmark’s Opening Statement
    * 5:20 - Dean Ball’s Opening Statement
    * 9:01 - Designing an “FDA for AI” and Safety Standards
    * 21:10 - Liability, Tail Risk, and Biosecurity
    * 29:11 - Incremental Regulation, Timelines, and AI Capabilities
    * 54:01 - Max’s Nightmare Scenario
    * 57:36 - The Risks of Recursive Self‑Improvement
    * 1:08:24 - What’s Your P(Doom)?™
    * 1:13:42 - National Security, China, and the AI Race
    * 1:32:35 - Closing Statements
    * 1:44:00 - Post‑Debate Recap and Call to Action

    Show Notes:
    * Statement on Superintelligence released by Max’s organization, the Future of Life Institute — https://superintelligence-statement.org/
    * Dean’s reaction to the Statement on Superintelligence — https://x.com/deanwball/status/1980975802570174831
    * America’s AI Action Plan — https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/
    * “A Definition of AGI” by Dan Hendrycks, Max Tegmark, et al. — https://www.agidefinition.ai/
    * Max Tegmark’s Twitter — https://x.com/tegmark
    * Dean Ball’s Twitter — https://x.com/deanwball

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1h 51m
  6. NOV 14

    The AI Corrigibility Debate: MIRI Researchers Max Harms vs. Jeremy Gillen

    Max Harms and Jeremy Gillen are current and former MIRI researchers who both see superintelligent AI as an imminent extinction threat. But they disagree on whether it’s worthwhile to aim for obedient, “corrigible” AI as a singular target for current alignment efforts. Max thinks corrigibility is the most plausible path to build ASI without losing control and dying, while Jeremy is skeptical that this research path will yield better superintelligent AI behavior on a sufficiently early try. By listening to this debate, you’ll find out whether AI corrigibility is a relatively promising effort that might prevent imminent human extinction, or an over-optimistic pipe dream.

    Timestamps:
    * 0:00 — Episode Preview
    * 1:18 — Debate Kickoff
    * 3:22 — What is Corrigibility?
    * 9:57 — Why Corrigibility Matters
    * 11:41 — What’s Your P(Doom)?™
    * 16:10 — Max’s Case for Corrigibility
    * 19:28 — Jeremy’s Case Against Corrigibility
    * 21:57 — Max’s Mainline AI Scenario
    * 28:51 — 4 Strategies: Alignment, Control, Corrigibility, Don’t Build It
    * 37:00 — Corrigibility vs HHH (“Helpful, Harmless, Honest”)
    * 44:43 — Asimov’s 3 Laws of Robotics
    * 52:05 — Is Corrigibility a Coherent Concept?
    * 1:03:32 — Corrigibility vs Shutdown-ability
    * 1:09:59 — CAST: Corrigibility as Singular Target, Near Misses, Iterations
    * 1:20:18 — Debating if Max is Over-Optimistic
    * 1:34:06 — Debating if Corrigibility is the Best Target
    * 1:38:57 — Would Max Work for Anthropic?
    * 1:41:26 — Max’s Modest Hopes
    * 1:58:00 — Max’s New Book: Red Heart
    * 2:16:08 — Outro

    Show Notes:
    * Max’s book Red Heart — https://www.amazon.com/Red-Heart-Max-Harms/dp/108822119X
    * Learn more about CAST: Corrigibility as Singular Target — https://www.lesswrong.com/s/KfCjeconYRdFbMxsy/p/NQK8KHSrZRF5erTba
    * Max’s Twitter — https://x.com/raelifin
    * Jeremy’s Twitter — https://x.com/jeremygillen1

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    2h 18m

Ratings & Reviews

4.3 out of 5 (14 Ratings)

