Doom Debates!

Liron Shapira

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com

  1. 1 DAY AGO

    I'm Watching AI Take Everyone's Job | Liron on Robert Wright's NonZero Podcast

    My new interview on Robert Wright's Nonzero Podcast where we dive into the agentic AI explosion. Bob is an exceptionally sharp interviewer who connects the dots between job displacement and AI doom. I highly recommend becoming a premium subscriber to Bob’s Nonzero Newsletter so you can watch the Overtime segment in every interview he does — https://nonzero.org Our discussion continues in Overtime for premium subscribers — https://www.nonzero.org/p/early-access-the-allure-and-danger Links Nonzero Podcast on YouTube — https://www.youtube.com/@nonzero Robert Wright, The God Test (book, Amazon) — https://www.amazon.com/God-Test-Artificial-Intelligence-Reckoning/dp/1668061651 Timestamps 00:00:00 — Introduction and Today's Topics 00:03:22 — Vibe Coding and the Agentic Revolution 00:08:57 — The Future of Employment 00:17:57 — Agents and What They Can Do 00:27:59 — The "Can It" and "Will It" Framework for AI Doom 00:30:27 — OpenClaw and Liron's Experience with AI Agents 00:36:45 — The Case for Slowing Down AI Development 00:43:28 — Anthropic, the Pentagon, and AI Politics 00:48:37 — AI Safety Leadership Concerns 00:52:06 — Closing and Overtime Tease Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏 Get full access to Doom Debates at lironshapira.substack.com/subscribe

    55 min
  2. 4 DAYS AGO

    This Top Economist's P(Doom) Just Shot Up 10x! Noah Smith Returns To Explain His Update

    Noah Smith is an economist and author of Noahpinion, one of the most popular Substacks in the world. He returns to Doom Debates to share a massive update to his P(Doom), and things get a little heated.

    Timestamps

    00:00:00 — Cold Open
    00:00:41 — Welcome Back Noah Smith!
    00:01:40 — Noah's P(Doom) Update
    00:03:57 — The Chatbot-Genie-God Framework
    00:05:14 — What's Your P(Doom)™
    00:09:59 — Unpacking Noah's Update
    00:16:56 — Why Incidents of Rogue AI Lower P(Doom)
    00:20:04 — Noah's Mainline Doom Scenario: Much Worse Than COVID-19
    00:23:29 — Society Responds After Growing Pains
    00:29:25 — Agentic AI Contributed to Noah's Position
    00:31:35 — Should Yudkowsky Get Bayesian Credit?
    00:33:59 — Are We Communicating the Right Way with Policymakers?
    00:40:16 — Finding Common Ground on AI Policy
    00:47:07 — Wrap-Up: People Need to Be More Scared

    Links

    Doom Debate with Noah Smith, Part 1 — https://www.youtube.com/watch?v=AwmJ-OnK2I4
    Noah's Twitter — https://x.com/noahpinion
    Noah's Substack — https://noahpinion.blog

    48 min
  3. MARCH 12

    Talking AI Doom with Dr. Claire Berlinski & Friends

    Dr. Claire Berlinski is a journalist, Oxford PhD, and author of The Cosmopolitan Globalist. She invited me to her weekly symposium to make the case for AI as an existential risk. Can we convince her sharp, skeptical audience that P(Doom) is high?

    Links

    Subscribe to The Cosmopolitan Globalist — https://claireberlinski.substack.com/
    Follow Claire on X — https://x.com/ClaireBerlinski
    "If Anyone Builds It, Everyone Dies" by Eliezer Yudkowsky & Nate Soares — https://ifanyonebuildsit.com

    Timestamps

    00:00:00 — Introduction
    00:02:10 — Welcome and Setting the Stage
    00:06:16 — Outcome Steering: The Magic of Intelligence
    00:10:40 — Collective Intelligence and the Path to ASI
    00:12:53 — The Five-Point Argument
    00:14:56 — The Alignment Problem and Control
    00:17:56 — The Genie Problem and Recursive Self-Improvement
    00:20:38 — Timeline: Five Years or Fifty?
    00:26:14 — Social Revolution and Pausing AI
    00:28:54 — Energy Constraints and Resource Limits
    00:31:23 — Morality, Empathy, and Superintelligence
    00:37:45 — How AI Is Actually Built
    00:38:31 — Computational Irreducibility and Co-Evolution
    00:44:57 — Foom and the Discontinuity Question
    00:46:44 — US-China Rivalry and the Arms Race
    00:49:36 — The Co-Evolution Argument
    00:55:36 — Alignment as Psychoanalysis
    00:57:24 — Anthropic's "Harmless Slop" Paper
    01:00:33 — Policy Solutions: The Pause Button
    01:04:47 — Military AI and the Singularity
    01:07:10 — Cognitive Obstacles and Doom Fatigue
    01:09:07 — Why People Don't Act
    01:13:00 — Reaching Representatives and Building a Platform
    01:17:12 — Sam Altman and the Manhattan Project Parallel
    01:19:14 — Community Building and Pause AI
    01:22:07 — Call to Action and Closing

    1 hr 27 min
  4. MARCH 10

    How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!

    Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon. Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.

    Timestamps

    00:00:00 — Cold Open
    00:00:48 — Welcoming Back the Returning Champion
    00:02:38 — Research Update: What's New in the Last 6 Months
    00:04:31 — The Rise of AI Agents
    00:07:49 — What's Your P(Doom)?™
    00:13:42 — "Brain-Like AGI": The Next Generation of AI
    00:17:01 — Can LLMs Ever Match the Human Brain?
    00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
    00:36:12 — Country of Geniuses in a Data Center
    00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
    00:54:15 — Post-Training & RLVR: A "Thin Layer" of Real Intelligence
    01:02:32 — Consequentialism and the Path to Superintelligence
    01:17:02 — Airplanes vs. Rockets: An Analogy for AI
    01:24:33 — FOOM and Recursive Self-Improvement

    Links

    Steven Byrnes' Website & Research — https://sjbyrnes.com/
    Steve's X — https://x.com/steve47285
    Astera Institute — https://astera.org/
    "Why We Should Expect Ruthless Sociopath ASI" — https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
    Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
    Steve on LessWrong — https://www.lesswrong.com/users/steve2152
    AI 2027 Scenario Timeline — https://ai-2027.com/
    Part 1: "The Man Who Might SOLVE AI Alignment" — https://www.youtube.com/watch?v=_ZRUq3VEAc0

    1 hr 29 min
  5. MARCH 5

    Q&A — Claude Code's Impact, Anthropic vs The Pentagon, Roko('s Basilisk) Returns + Liron Updates His Views!

    Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession, the Anthropic/Pentagon showdown, and debate the finer details of wireheading. I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.

    Timestamps

    00:00:00 — Cold Open
    00:00:56 — Welcome to the Livestream & Taking Questions from Chat
    00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
    00:18:30 — The Good Case Scenario
    00:26:00 — Hugh Chungus Joins the Stream
    00:30:54 — Producer Ori, Liron's Recent Alignment Updates
    00:43:47 — We're in an Era of Centaurs
    00:47:40 — Noah Smith's Updates on AGI and Alignment
    00:48:44 — Co Co Chats Cybersecurity
    00:57:32 — The Attacker's Advantage in Offense/Defense Balance
    01:02:55 — Anthropic vs The Pentagon
    01:06:20 — "We're Getting Frog Boiled"
    01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
    01:25:00 — A Caller Backs the Penrose Argument
    01:34:01 — Greyson Dials In
    01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
    02:05:15 — More Q&A with Chat
    02:14:26 — Closing Thoughts

    Links

    * Liron on X — https://x.com/liron
    * AI 2027 — https://ai-2027.com/
    * “Good Luck, Have Fun, Don’t Die” (film) — https://www.imdb.com/title/tt38301748/
    * “The AI Doc” (film) — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist

    2 hr 20 min
  6. MARCH 3

    AI Will Take Our Jobs But SPARE Our Lives — Top AI Professor Moshe Vardi (Rice University)

    Professor Moshe Vardi thinks AI will kill us with kindness by automating away our jobs. I think it'll just kill us for real. Who's right? Tune into this episode and decide where you get off the Doom Train™.

    Some highlights of Professor Vardi's impressive CV:

    * University Professor at Rice — a rare distinction that lets him teach in any department.
    * 65,000+ citations, an H-index above 100, and nearly 50 years spent mechanizing reasoning, making him one of the most decorated computer scientists alive.
    * He ran the ACM's flagship publication for a decade, and now bridges CS and policy at Rice's Baker Institute.
    * He has been sounding the alarm on AI-driven job automation for over ten years.
    * He signed the 2023 AI extinction risk statement, and calls himself "part of the resistance."

    Links

    * Moshe Vardi's Wikipedia — https://en.wikipedia.org/wiki/Moshe_Vardi
    * Moshe Vardi, Rice University — https://profiles.rice.edu/faculty/moshe-y-vardi
    * Baker Institute for Public Policy — https://www.bakerinstitute.org/
    * Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World — https://www.amazon.com/Deep-Utopia-Life-Meaning-Solved/dp/1646871642
    * Joseph Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence — https://www.amazon.com/Robot-Proof-Higher-Education-Artificial-Intelligence/dp/0262535971

    Timestamps

    00:00:00 — Cold Open
    00:00:54 — Introducing Professor Vardi
    00:02:01 — Professor Vardi's Academic Focus: CS, AI, & Public Policy
    00:07:18 — What's Your P(Doom)™?
    00:12:28 — We're Not Doomed, "We're Screwed"
    00:16:44 — AI's Impact on Meaning & Purpose
    00:27:47 — Let's Ride the Doom Train™
    00:35:43 — The Future of Jobs
    00:39:24 — A Country of Geniuses in a Data Center
    00:41:04 — Corporations as Superintelligence
    00:45:49 — Agency, Consciousness, and the Limits of AI
    00:50:07 — The Mad Scientist Scenario
    00:54:02 — Could a Data Center of Geniuses Destroy Humanity?
    01:03:13 — The WALL-E Meme and Fun Theory
    01:04:01 — Why Professor Vardi Signed the AI Extinction Risk Statement
    01:06:02 — Wrap-Up + 1 Way Ticket to Doom

    1 hr 7 min
  7. FEBRUARY 24

    Doomsday Clock Physicist Warns AI Is Major Threat to Humanity!

    Renowned scientists just set The Doomsday Clock closer than ever to midnight. I agree our collective doom seems imminent, but why do the clocksetters point to climate change—not rogue AI—as an existential threat? UChicago Professor Daniel Holz, a top physicist who sets the Doomsday Clock, joins me to debate society's top existential risks.

    Timestamps

    00:00:00 — Cold Open
    00:00:51 — Introducing Professor Holz
    00:02:08 — The Doomsday Clock Is at 85 Seconds to Midnight!
    00:04:37 — What's Your P(Doom)?™
    00:08:09 — Making a Probability of Doomsday, or P(Doom), Equation
    00:12:07 — How We All Die: Nuclear vs Climate vs AI
    00:21:08 — Nuclear Close Calls from the Cold War
    00:28:38 — History of the Doomsday Clock
    00:30:18 — The Threat of Biological Risks Like Mirror Life
    00:33:40 — Professor Holz's Position on AI Misalignment Risk
    00:44:49 — Does the Doomsday Clock Give Short Shrift to AI Misalignment Risk?
    00:59:09 — Why Professor Holz Founded UChicago's Existential Risk Laboratory (XLab)
    01:06:22 — The State of Academic Research on AI Safety & Existential Risks
    01:12:32 — The Case for Pausing AI Development
    01:17:11 — Debate: Is Climate Change an Existential Threat?
    01:28:48 — Call to Action: How to Reduce Our Collective Threat

    Links

    Professor Daniel Holz's Wikipedia — https://en.wikipedia.org/wiki/Daniel_Holz
    XLab (Existential Risk Laboratory) — https://xrisk.uchicago.edu/
    2026 Doomsday Clock Statement — https://thebulletin.org/doomsday-clock/2026-statement/
    The Bulletin of the Atomic Scientists (nonprofit that produces the clock) — https://thebulletin.org/
    UChicago Magazine features Prof. Holz's class, "Are We Doomed?" — https://mag.uchicago.edu/science-medicine/are-we-doomed
    The Doomsday Machine by Daniel Ellsberg — https://www.amazon.com/Doomsday-Machine-Confessions-Nuclear-Planner/dp/1608196704
    If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares — https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640
    Learn more about pausing frontier AI development from PauseAI — https://pauseai.info

    1 hr 36 min
