Doom Debates

Liron Shapira

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com

  1. 15 HR AGO

    Doomsday Clock Physicist Warns AI Is Major Threat to Humanity!

    Renowned scientists just set The Doomsday Clock closer than ever to midnight. I agree our collective doom seems imminent, but why do the clocksetters point to climate change—not rogue AI—as an existential threat? UChicago Professor Daniel Holz, a top physicist who sets the Doomsday Clock, joins me to debate society’s top existential risks.

    Timestamps

    00:00:00 — Cold Open
    00:00:51 — Introducing Professor Holz
    00:02:08 — The Doomsday Clock is at 85 Seconds to Midnight!
    00:04:37 — What's Your P(Doom)?™
    00:08:09 — Making A Probability of Doomsday, or P(Doom), Equation
    00:12:07 — How We All Die: Nuclear vs Climate vs AI
    00:21:08 — Nuclear Close Calls from The Cold War
    00:28:38 — History of The Doomsday Clock
    00:30:18 — The Threat of Biological Risks Like Mirror Life
    00:33:40 — Professor Holz’s Position on AI Misalignment Risk
    00:44:49 — Does The Doomsday Clock Give Short Shrift to AI Misalignment Risk?
    00:59:09 — Why Professor Holz Founded UChicago’s Existential Risk Laboratory (XLab)
    01:06:22 — The State of Academic Research on AI Safety & Existential Risks
    01:12:32 — The Case for Pausing AI Development
    01:17:11 — Debate: Is Climate Change an Existential Threat?
    01:28:48 — Call to Action: How to Reduce Our Collective Threat

    Links

    Professor Daniel Holz’s Wikipedia — https://en.wikipedia.org/wiki/Daniel_Holz
    XLab (Existential Risk Laboratory) — https://xrisk.uchicago.edu/
    2026 Doomsday Clock Statement — https://thebulletin.org/doomsday-clock/2026-statement/
    The Bulletin of the Atomic Scientists (nonprofit that produces the clock) — https://thebulletin.org/
    UChicago Magazine features Prof. Holz’s class, “Are We Doomed?” — https://mag.uchicago.edu/science-medicine/are-we-doomed
    The Doomsday Machine by Daniel Ellsberg — https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704
    If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares — https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640
    Learn more about pausing frontier AI development from PauseAI — https://pauseai.info

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1h 36m
  2. 17 FEB

    Destiny Raises His P(Doom) At The End

    Destiny has racked up millions of views for his sharp takes on political and cultural news. Now, I finally get to ask him about a topic he’s been agnostic on: Will AI end humanity? Come ride with us on the Doom Train™ 🚂

    Timestamps

    00:00:00 — Teaser
    00:01:16 — Welcoming Destiny
    00:02:54 — What’s Your P(Doom)?™
    00:04:46 — 2017 vs 2026: Destiny’s views on AI
    00:11:04 — AI could vastly surpass human intelligence
    00:16:02 — Can AI doom us?
    00:18:42 — Intelligence doesn’t guarantee morality
    00:22:18 — The vibes-based case against doom
    00:29:58 — The human brain is inefficient
    00:35:17 — Does every intelligence in the universe self-destruct via AI?
    00:37:28 — Destiny turns the tables: Where does Liron get off The Doom Train™
    00:46:07 — Will a warning shot cause society to develop AI safely?
    00:54:10 — Roko’s Basilisk, the AI box problem
    00:59:37 — Will Destiny update his P(Doom)?™
    01:04:19 — Closing thoughts

    Links

    Destiny's YouTube — https://www.youtube.com/@destiny
    Destiny's X account — https://x.com/TheOmniLiberal
    Marc Andreessen saying AI isn’t dangerous because “it is math” — https://a16z.com/ai-will-save-the-world/
    Will Smith eating spaghetti AI video — https://knowyourmeme.com/memes/ai-will-smith-eating-spaghetti
    Roko’s Basilisk on LessWrong — https://www.lesswrong.com/tag/rokos-basilisk
    Eliezer Yudkowsky’s AI Box Experiment — https://www.yudkowsky.net/singularity/aibox

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1h 7m
  3. 13 FEB

    The Only Politician Thinking Clearly About Superintelligence — California Governor Candidate Zoltan Istvan

    California gubernatorial candidate Zoltan Istvan reveals his P(Doom) and makes the case for universal basic income and radical life extension.

    Timestamps

    00:00:00 — Teaser
    00:00:50 — Meet Zoltan, Democratic Candidate for California Governor
    00:08:30 — The 2026 California Governor's Race
    00:12:50 — Zoltan's Platform Is Automated Abundance
    00:19:45 — What's Your P(Doom)™
    00:28:26 — Campaigning on Existential Risk
    00:32:36 — Does Zoltan Support a Global AI Pause?
    00:48:39 — Exploring His Platform: Education, Crime, and Affordability
    01:08:55 — Exploring His Platform: Super Cities, Space, and Longevity
    01:13:00 — Closing Thoughts

    Links

    Zoltan Istvan’s Campaign for California Governor – zoltanistvan2026.com
    The Transhumanist Wager by Zoltan Istvan – https://www.amazon.com/Transhumanist-Wager-Zoltan-Istvan/dp/0988616114
    Wired Article on the “Bunker” Party – https://www.wired.com/story/ai-risk-party-san-francisco/
    PauseAI – pauseai.info
    SL4 Mailing List Archive – sl4.org/archive

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1h 19m
  4. 10 FEB

    His P(Doom) Is Only 2.6% — AI Doom Debate with Bentham's Bulldog, a.k.a. Matthew Adelstein

    Get ready for a rematch with the one & only Bentham’s Bulldog, a.k.a. Matthew Adelstein! Our first debate covered a wide range of philosophical topics. Today’s Debate #2 is all about Matthew’s new argument against the inevitability of AI doom.

    He comes out swinging with a calculated P(Doom) of just 2.6%, based on a multi-step probability chain that I challenge as potentially falling into a “Type 2 Conjunction Fallacy” (a.k.a. Multiple Stage Fallacy). We clash on whether to expect “alignment by default” and the nature of future AI architectures. While Matthew sees current RLHF success as evidence that AIs will likely remain compliant, I argue that we’re building “Goal Engines” — superhuman optimization modules that act like nuclear cores wrapped in friendly personalities. We debate whether these engines can be safely contained, or if the capability to map goals to actions is inherently dangerous and prone to exfiltration.

    Despite our different forecasts (my 50% vs his sub-10%), we actually land in the “sane zone” together on some key policy ideas, like the potential necessity of a global pause. While Matthew’s case for a low P(Doom) hasn’t convinced me, I consider his post and his engagement with me to be super high quality and good faith. We’re not here to score points; we just want to better predict how the intelligence explosion will play out.

    Timestamps

    00:00:00 — Teaser
    00:00:35 — Bentham’s Bulldog Returns to Doom Debates
    00:05:43 — Higher-Order Evidence: Why Skepticism is Warranted
    00:11:06 — What’s Your P(Doom)™
    00:14:38 — The “Multiple Stage Fallacy” Objection
    00:21:48 — The Risk of Warring AIs vs. Misalignment
    00:27:29 — Historical Pessimism: The “Boy Who Cried Wolf”
    00:33:02 — Comparing AI Risk to Climate Change & Nuclear War
    00:38:59 — Alignment by Default via Reinforcement Learning
    00:46:02 — The “Goal Engine” Hypothesis
    00:53:13 — Is Psychoanalyzing Current AI Valid for Future Systems?
    01:00:17 — Winograd Schemas & The Fragility of Value
    01:09:15 — The Nuclear Core Analogy: Dangerous Engines in Friendly Wrappers
    01:16:16 — The Discontinuity of Unstoppable AI
    01:23:53 — Exfiltration: Running Superintelligence on a Laptop
    01:31:37 — Evolution Analogy: Selection Pressures for Alignment
    01:39:08 — Commercial Utility as a Force for Constraints
    01:46:34 — Can You Isolate the “Goal-to-Action” Module?
    01:54:15 — Will Friendly Wrappers Successfully Control Superhuman Cores?
    02:04:01 — Moral Realism and Missing Out on Cosmic Value
    02:11:44 — The Paradox of AI Solving the Alignment Problem
    02:19:11 — Policy Agreements: Global Pauses and China
    02:26:11 — Outro: PauseCon DC 2026 Promo

    Links

    Bentham’s Bulldog Official Substack — https://benthams.substack.com
    The post we debated — https://benthams.substack.com/p/against-if-anyone-builds-it-everyone
    Apply to PauseCon DC 2026 via https://pauseai-us.org
    Forethought Institute’s paper: Preparing for the Intelligence Explosion
    Tom Davidson (Forethought Institute)’s post: How quick and big would a software intelligence explosion be?
    Scott Alexander on the Coffeepocalypse Argument

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    2h 28m
  5. 4 FEB

    What Dario Amodei Misses In "The Adolescence of Technology" — Reaction With MIRI's Harlan Stewart

    Harlan Stewart works in communications for the Machine Intelligence Research Institute (MIRI). In this episode, Harlan and I give our honest opinions on Dario Amodei's new essay "The Adolescence of Technology".

    Timestamps

    0:00:00 — Cold Open
    0:00:47 — How Harlan Stewart Got Into AI Safety
    0:02:30 — What’s Your P(Doom)?™
    0:04:09 — The “Doomer” Label
    0:06:13 — Overall Reaction to Dario’s Essay: The Missing Mood
    0:09:15 — The Rosy Take on Dario’s Essay
    0:10:42 — Character Assassination & Low Blows
    0:13:39 — Dario Amodei is Shifting the Overton Window in The Wrong Direction
    0:15:04 — Object-Level vs. Meta-Level Criticisms
    0:17:07 — The “Inevitability” Strawman Used by Dario
    0:19:03 — Dario Refers to Doom as a Self-Fulfilling Prophecy
    0:22:38 — Dismissing Critics as “Too Theoretical”
    0:43:18 — The Problem with Psychoanalyzing AI
    0:56:12 — “Intellidynamics” & Reflective Stability
    1:07:12 — Why Is Dario Dismissing an AI Pause?
    1:11:45 — Final Takeaways

    Links

    Harlan’s X — https://x.com/HumanHarlan
    “The Adolescence of Technology” by Dario Amodei — https://www.darioamodei.com/essay/the-adolescence-of-technology

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1h 16m
