Doom Debates!

Liron Shapira

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com

  1. 3 DAYS AGO

    Liron’s 700% Productivity Increase, Bernie & AOC’s Datacenter Ban, Are We In Full Takeoff? — Live Q&A

    Multiple live callers join this month's Q&A as we react to Bernie Sanders and AOC's data center moratorium, the sudden shutdown of Sora 2, and the record-breaking "Stop the AI Race" protest. I explain why Claude Code has me claiming a 700% productivity boost and what that means for takeoff timelines, and debate instrumental convergence.

    Timestamps
    00:00:00 — Cold Open
    00:01:08 — Can AI Train on Its Own Data to Reach Superintelligence?
    00:03:42 — Are We in the Takeoff? 700% Faster with Claude Code
    00:04:27 — EJJ Joins: Is Instrumental Convergence Really That Dangerous?
    00:16:44 — The Positive Feedback Loop Problem
    00:20:09 — S-Risk, Consciousness, and Objective Morality
    00:22:27 — Futarchy and Prediction Markets
    00:24:31 — Low P(Doom) Arguments and Bayesian Updates
    00:31:05 — Lee Cyrano Joins: Superintelligence Won’t Matter for Decades
    01:02:45 — Lesaun Joins: Are There Adults in the Room?
    01:17:39 — Connor Leahy: “There Are No Adults in the Room”
    01:19:51 — Bernie Sanders Calls for a Data Center Moratorium
    01:24:23 — Claude Code Anecdotes and Audience Q&A
    01:35:49 — The Stop the AI Race Protest in San Francisco
    01:41:38 — Known Unknowns and Risk Assessment
    01:45:03 — From Waymo to Existential Risk
    01:51:28 — Closing: The Road to One Million Subscribers

    Links
    Quintin Pope vs Liron Shapira debate on Doom Debates — https://lironshapira.substack.com/p/ai-alignment-is-solved-phd-researcher
    CAP theorem, Wikipedia — https://en.wikipedia.org/wiki/CAP_theorem
    Google Cloud Spanner, Wikipedia — https://en.wikipedia.org/wiki/Spanner_(database)
    Newcomb’s problem, Wikipedia — https://en.wikipedia.org/wiki/Newcomb%27s_paradox
    Gödel’s incompleteness theorems, Wikipedia — https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    1hr 56min
  2. 25 MAR

    AI Alignment Is Solved?! PhD Researcher Quintin Pope vs Liron Shapira (2023 Twitter Debate)

    Quintin Pope, PhD, is one of the few critics of AI doomerism who is truly fluent in the concepts and arguments. In October 2023 he joined me for a debate in Twitter Spaces, where he argued that AI alignment was basically already solved. His “inside view” on machine learning forced me to update my position, but could he knock me off the doom train?

    Timestamps
    00:00:00 — Cold Open
    00:00:43 — Introductions
    00:01:22 — Quintin's Opening Statement
    00:02:32 — Liron's Opening Statement
    00:05:10 — Has RLHF Solved the Alignment Problem?
    00:07:52 — AI Capabilities Are Constrained by Training Data
    00:10:52 — Defining ASI and Could RLHF Align a Superintelligence?
    00:13:13 — Quintin Is More Optimistic Than OpenAI
    00:14:16 — What Is ASI in Your Mind?
    00:15:57 — AI in 5 Years (2028) & AI Coding Agents
    00:19:05 — Continuous or Discontinuous Capability Gains?
    00:19:39 — DEBATE: General Intelligence Algorithm in Humans
    00:30:02 — The Only Coherent Explanation of Humans Going to the Moon
    00:34:01 — Are We "Fully Cooked" as a General Optimizer?
    00:35:53 — Common Mistake in Forecasting Superintelligence
    00:42:22 — 'Neat' vs 'Scruffy': Will Interpretable Structure Emerge Inside Neural Nets?
    00:48:57 — Does This Disagreement Actually Matter for P(Doom)?
    00:54:33 — Thought Experiment: Could You Have Predicted a Species Would Go to the Moon?
    00:57:26 — The Basin of Attraction for Superintelligence
    00:59:35 — Does a Superintelligence Even Exist in Algorithm Space?
    01:09:59 — Closing Statements
    01:12:40 — Audience Q&A
    01:19:35 — Wrap Up

    Links
    Original Twitter Spaces debate (Quintin Pope vs. Liron Shapira) — https://x.com/i/spaces/1YpJkwOzOqEJj/peek
    Quintin Pope on Twitter/X — https://twitter.com/QuintinPope5
    Quintin Pope, Alignment Forum profile — https://www.alignmentforum.org/users/quintin-pope
    InstructGPT, Wikipedia — https://en.wikipedia.org/wiki/InstructGPT
    AIXI, Wikipedia — https://en.wikipedia.org/wiki/AIXI
    AlphaZero, Wikipedia — https://en.wikipedia.org/wiki/AlphaZero
    MuZero, Wikipedia — https://en.wikipedia.org/wiki/MuZero
    DeepMind AlphaZero and MuZero page — https://deepmind.google/research/alphazero-and-muzero/
    Midjourney — https://www.midjourney.com/
    DALL-E, Wikipedia — https://en.wikipedia.org/wiki/DALL-E
    OpenAI Superalignment announcement — https://openai.com/index/introducing-superalignment/
    Shard Theory sequence on LessWrong — https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX
    “Evolution Provides No Evidence for the Sharp Left Turn” — https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
    “My Objections to ‘We’re All Gonna Die with Eliezer Yudkowsky’” — https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky
    “AI is Centralizing by Default; Let’s Not Make It Worse” — https://forum.effectivealtruism.org/posts/zd5inbT4kYKivincm/ai-is-centralizing-by-default-let-s-not-make-it-worse
    Singular Learning Theory, Alignment Forum sequence — https://www.alignmentforum.org/s/mqwA5FcL6SrHEQzox

    1hr 20min
  3. 20 MAR

    I'm Watching AI Take Everyone's Job | Liron on Robert Wright's NonZero Podcast

    My new interview on Robert Wright's Nonzero Podcast, where we dive into the agentic AI explosion. Bob is an exceptionally sharp interviewer who connects the dots between job displacement and AI doom. I highly recommend becoming a premium subscriber to Bob’s Nonzero Newsletter so you can watch the Overtime segment in every interview he does — https://nonzero.org

    Our discussion continues in Overtime for premium subscribers — https://www.nonzero.org/p/early-access-the-allure-and-danger

    Links
    Nonzero Podcast on YouTube — https://www.youtube.com/@nonzero
    Robert Wright, The God Test (book, Amazon) — https://www.amazon.com/God-Test-Artificial-Intelligence-Reckoning/dp/1668061651

    Timestamps
    00:00:00 — Introduction and Today's Topics
    00:03:22 — Vibe Coding and the Agentic Revolution
    00:08:57 — The Future of Employment
    00:17:57 — Agents and What They Can Do
    00:27:59 — The "Can It" and "Will It" Framework for AI Doom
    00:30:27 — OpenClaw and Liron's Experience with AI Agents
    00:36:45 — The Case for Slowing Down AI Development
    00:43:28 — Anthropic, the Pentagon, and AI Politics
    00:48:37 — AI Safety Leadership Concerns
    00:52:06 — Closing and Overtime Tease

    55 min
  4. 17 MAR

    This Top Economist's P(Doom) Just Shot Up 10x! Noah Smith Returns To Explain His Update

    Noah Smith is an economist and author of Noahpinion, one of the most popular Substacks in the world. He returns to Doom Debates to share a massive update to his P(Doom), and things get a little heated.

    Timestamps
    00:00:00 — Cold Open
    00:00:41 — Welcome Back Noah Smith!
    00:01:40 — Noah's P(Doom) Update
    00:03:57 — The Chatbot-Genie-God Framework
    00:05:14 — What's Your P(Doom)™
    00:09:59 — Unpacking Noah's Update
    00:16:56 — Why Incidents of Rogue AI Lower P(Doom)
    00:20:04 — Noah's Mainline Doom Scenario: Much Worse Than COVID-19
    00:23:29 — Society Responds After Growing Pains
    00:29:25 — Agentic AI Contributed to Noah's Position
    00:31:35 — Should Yudkowsky Get Bayesian Credit?
    00:33:59 — Are We Communicating the Right Way with Policymakers?
    00:40:16 — Finding Common Ground on AI Policy
    00:47:07 — Wrap-Up: People Need to Be More Scared

    Links
    Doom Debate with Noah Smith, Part 1 — https://www.youtube.com/watch?v=AwmJ-OnK2I4
    Noah’s Twitter — https://x.com/noahpinion
    Noah’s Substack — https://noahpinion.blog

    48 min
  5. 12 MAR

    Talking AI Doom with Dr. Claire Berlinski & Friends

    Dr. Claire Berlinski is a journalist, Oxford PhD, and author of The Cosmopolitan Globalist. She invited me to her weekly symposium to make the case for AI as an existential risk. Can we convince her sharp, skeptical audience that P(Doom) is high?

    Links
    Subscribe to The Cosmopolitan Globalist — https://claireberlinski.substack.com/
    Follow Claire on X — https://x.com/ClaireBerlinski
    “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares — https://ifanyonebuildsit.com

    Timestamps
    00:00:00 — Introduction
    00:02:10 — Welcome and Setting the Stage
    00:06:16 — Outcome Steering: The Magic of Intelligence
    00:10:40 — Collective Intelligence and the Path to ASI
    00:12:53 — The Five-Point Argument
    00:14:56 — The Alignment Problem and Control
    00:17:56 — The Genie Problem and Recursive Self-Improvement
    00:20:38 — Timeline: Five Years or Fifty?
    00:26:14 — Social Revolution and Pausing AI
    00:28:54 — Energy Constraints and Resource Limits
    00:31:23 — Morality, Empathy, and Superintelligence
    00:37:45 — How AI Is Actually Built
    00:38:31 — Computational Irreducibility and Co-Evolution
    00:44:57 — Foom and the Discontinuity Question
    00:46:44 — US-China Rivalry and the Arms Race
    00:49:36 — The Co-Evolution Argument
    00:55:36 — Alignment as Psychoanalysis
    00:57:24 — Anthropic’s “Harmless Slop” Paper
    01:00:33 — Policy Solutions: The Pause Button
    01:04:47 — Military AI and the Singularity
    01:07:10 — Cognitive Obstacles and Doom Fatigue
    01:09:07 — Why People Don’t Act
    01:13:00 — Reaching Representatives and Building a Platform
    01:17:12 — Sam Altman and the Manhattan Project Parallel
    01:19:14 — Community Building and Pause AI
    01:22:07 — Call to Action and Closing

    1hr 27min
  6. 10 MAR

    How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!

    Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon. Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.

    Timestamps
    00:00:00 — Cold Open
    00:00:48 — Welcoming Back the Returning Champion
    00:02:38 — Research Update: What's New in the Last 6 Months
    00:04:31 — The Rise of AI Agents
    00:07:49 — What's Your P(Doom)?™
    00:13:42 — "Brain-Like AGI": The Next Generation of AI
    00:17:01 — Can LLMs Ever Match the Human Brain?
    00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
    00:36:12 — Country of Geniuses in a Data Center
    00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
    00:54:15 — Post-Training & RLVR: A "Thin Layer" of Real Intelligence
    01:02:32 — Consequentialism and the Path to Superintelligence
    01:17:02 — Airplanes vs. Rockets: An Analogy for AI
    01:24:33 — FOOM and Recursive Self-Improvement

    Links
    Steven Byrnes’ Website & Research — https://sjbyrnes.com/
    Steve’s X — https://x.com/steve47285
    Astera Institute — https://astera.org/
    “Why We Should Expect Ruthless Sociopath ASI” — https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
    Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
    Steve on LessWrong — https://www.lesswrong.com/users/steve2152
    AI 2027 Scenario Timeline — https://ai-2027.com/
    Part 1: “The Man Who Might SOLVE AI Alignment” — https://www.youtube.com/watch?v=_ZRUq3VEAc0

    1hr 29min
  7. 5 MAR

    Q&A — Claude Code's Impact, Anthropic vs The Pentagon, Roko('s Basilisk) Returns + Liron Updates His Views!

    Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession, the Anthropic/Pentagon showdown, and debate the finer details of wireheading. I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.

    Timestamps
    00:00:00 — Cold Open
    00:00:56 — Welcome to the Livestream & Taking Questions from Chat
    00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
    00:18:30 — The Good Case Scenario
    00:26:00 — Hugh Chungus Joins the Stream
    00:30:54 — Producer Ori, Liron's Recent Alignment Updates
    00:43:47 — We're in an Era of Centaurs
    00:47:40 — Noah Smith's Updates on AGI and Alignment
    00:48:44 — Co Co Chats Cybersecurity
    00:57:32 — The Attacker's Advantage in Offense/Defense Balance
    01:02:55 — Anthropic vs The Pentagon
    01:06:20 — "We're Getting Frog Boiled"
    01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
    01:25:00 — A Caller Backs the Penrose Argument
    01:34:01 — Greyson Dials In
    01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
    02:05:15 — More Q&A with Chat
    02:14:26 — Closing Thoughts

    Links
    Liron on X — https://x.com/liron
    AI 2027 — https://ai-2027.com/
    “Good Luck, Have Fun, Don’t Die” (film) — https://www.imdb.com/title/tt38301748/
    “The AI Doc” (film) — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist

    2hr 20min
