Doom Debates!

Liron Shapira

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com

  1. 19 HR AGO

    Are AI Doomers “Calling for Violence”? Debate with Steven Balik

    Are AI safety advocates like Eliezer Yudkowsky at fault for the recent attacks on Sam Altman because they are “calling for violence”? I invited Steven Balik to join me on this emergency episode to hash it out. Steven is an activist short seller and data engineering professional whose Substack is popular among Silicon Valley VCs and hedge funds.

    Links
    Steven Balik on X (Twitter) — https://x.com/laurenbalik
    Steven Balik, “The Talmudic Stock Bubble, AI Psychosis, & Esoterrorism” (Substack, October 2025) — https://laurenbaliksalmanacandrevue.substack.com/p/the-talmudic-stock-bubble-ai-psychosis
    Eliezer Yudkowsky, “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down” (Time, March 2023) — https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

    Timestamps
    00:00:00 — Cold Open
    00:00:52 — Introducing Steven Balik
    00:01:24 — Setting the Stage: Molotov Cocktail Incident
    00:03:31 — Steven’s Opening Position
    00:06:10 — Is Eliezer Yudkowsky “Calling for Violence”?
    00:07:25 — Steven on AI, Yudkowsky, the Zizians & Escalating Rhetoric
    00:12:16 — Focusing on the Time Article
    00:18:51 — Who’s Responsible for the Violence?
    00:25:33 — Debating the Key Quote in Yudkowsky’s Time Article
    00:31:07 — Liron Passes the Ideological Turing Test
    00:45:42 — Liron & Steven Find Common Ground
    00:46:57 — Why Does Steven Call Eliezer Yudkowsky an “Esoterrorist”?
    00:48:51 — Wrapping Up: Deescalating the Situation

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏

    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    52 min
  2. 2 DAYS AGO

    Tristan Harris and Ted Tremper are WAKING UP Humanity to AI Extinction!

    The AI Doc: Or How I Became an Apocaloptimist could become the most important movie of our generation. It’s a new film from the producers of the Oscar-winning Everything Everywhere All at Once. Tristan Harris is a subject in the film who is well known for his role in Netflix’s The Social Dilemma. He is the co-founder of the Center for Humane Technology. Ted Tremper is a producer on the film who is the interim Executive Director of the Creators’ Coalition on AI.

    Timestamps
    00:00:00 — Cold Open
    00:01:20 — Introducing Tristan Harris and Ted Tremper
    00:04:31 — The Genesis of The AI Doc
    00:12:48 — Tristan’s Journey From Social Media to AI
    00:14:30 — Updating From AI Skeptic to AI Risk Aware
    00:20:31 — How They Convinced the AI CEOs to Agree to Be Interviewed
    00:28:58 — Ted’s Journalism Advice, Working on Borat Subsequent Moviefilm
    00:30:37 — Tristan, What’s Your P(Doom)™?
    00:34:30 — The Resource Curse: What AI Revenue Does to a Society
    00:44:10 — Ted, What’s Your P(Doom)™?
    00:46:34 — Reacting to Demis Hassabis’s Statement That AGI Development Is Inevitable
    00:49:52 — Liron Sharpens the Criticism Towards AGI Builders
    00:55:35 — AGI Developers Claim to Want International Cooperation, But Have They Really Tried?
    01:04:30 — What Should Be the Single Takeaway for Concerned Viewers?
    01:11:40 — Building a Coalition Against Superintelligence Development: From Bernie to Bannon
    01:19:52 — Take Action at TheAIDocGetInvolved.com
    01:24:40 — Tristan’s Closing Message: We’ve Done This Before

    Links
    Watch The AI Doc — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist
    Get Involved with The AI Doc Community — https://theaidocgetinvolved.com/
    Tristan Harris, Wikipedia — https://en.wikipedia.org/wiki/Tristan_Harris
    Ted Tremper, IMDb — https://www.imdb.com/name/nm3998229/
    Center for Humane Technology — https://www.humanetech.com/
    “The Intelligence Curse” by Luke Drago and Rudolf Laine — https://intelligence-curse.ai/

    1 hr 27 min
  3. 7 APR

    I Challenged DON’T LOOK UP’s Screenwriter to Look Up At AGI

    David Sirota helped create “Don’t Look Up,” and sometimes it feels like we’re living inside his movie. Does he share my belief that the looming planetary threat is rogue AI? Sirota is an award-winning investigative journalist, bestselling author, and former speechwriter for Bernie Sanders. He was nominated for an Oscar for co-writing the story of Don’t Look Up. Find out more about David’s work at The Lever: https://www.levernews.com/

    Timestamps
    00:00:00 — Cold Open
    00:01:20 — Introducing David Sirota
    00:04:34 — Why David Fights Against Power and the Concentration of Power
    00:13:46 — From NAFTA to AI: The Warnings We Ignored
    00:22:05 — How Big Will the AI “Jobpocalypse” Be?
    00:25:28 — Superintelligence & the Parallel to Don’t Look Up
    00:28:37 — What’s Your P(Doom)™?
    00:31:44 — The Speed of the AI Threat
    00:36:26 — Society Is Losing a Collective Capacity to Focus
    00:38:34 — Is Climate Change David’s Biggest Existential Concern?
    00:45:01 — David Reacts to Bernie Sanders’ Data Center Moratorium Proposal
    00:49:11 — Can We Build The “Off Button”?
    00:52:08 — “Don’t Look Up” x AGI Mashup
    00:54:35 — Why There’s Still Hope
    00:58:14 — Living in “Don’t Look Up”
    00:59:46 — Wrap-Up: Where to Follow Major AI News

    Links
    Watch Don’t Look Up — https://www.netflix.com/gb/title/81252357
    The Lever, investigative news outlet — https://www.levernews.com/
    David Sirota on X — https://x.com/davidsirota
    David Sirota, Wikipedia — https://en.wikipedia.org/wiki/David_Sirota
    Master Plan podcast — https://the.levernews.com/master-plan/
    David Sirota, “Hostile Takeover” on Amazon — https://www.amazon.com/Hostile-Takeover-Corruption-Conquered-Government/dp/0307237354
    The Three-Body Problem (novel), Wikipedia — https://en.wikipedia.org/wiki/The_Three-Body_Problem_(novel)
    WarGames (1983 film), Wikipedia — https://en.wikipedia.org/wiki/WarGames
    Adam McKay, Wikipedia — https://en.wikipedia.org/wiki/Adam_McKay
    AI 2027 scenario — https://ai-2027.com/

    1 hr 2 min
  4. 31 MAR

    Liron’s 700% Productivity Increase, Bernie & AOC’s Datacenter Ban, Are We In Full Takeoff? — Live Q&A

    Multiple live callers join this month’s Q&A as we react to Bernie Sanders and AOC’s data center moratorium, the sudden shutdown of Sora 2, and the record-breaking “Stop the AI Race” protest. I explain why Claude Code has me claiming a 700% productivity boost, unpack what that means for takeoff timelines, and debate instrumental convergence.

    Timestamps
    00:00:00 — Cold Open
    00:01:08 — Can AI Train on Its Own Data to Reach Superintelligence?
    00:03:42 — Are We in the Takeoff? 700% Faster with Claude Code
    00:04:27 — EJJ Joins: Is Instrumental Convergence Really That Dangerous?
    00:16:44 — The Positive Feedback Loop Problem
    00:20:09 — S-Risk, Consciousness, and Objective Morality
    00:22:27 — Futarchy and Prediction Markets
    00:24:31 — Low P(Doom) Arguments and Bayesian Updates
    00:31:05 — Lee Cyrano Joins: Superintelligence Won’t Matter for Decades
    01:02:45 — Lesaun Joins: Are There Adults in the Room?
    01:17:39 — Connor Leahy: “There Are No Adults in the Room”
    01:19:51 — Bernie Sanders Calls for a Data Center Moratorium
    01:24:23 — Claude Code Anecdotes and Audience Q&A
    01:35:49 — The Stop the AI Race Protest in San Francisco
    01:41:38 — Known Unknowns and Risk Assessment
    01:45:03 — From Waymo to Existential Risk
    01:51:28 — Closing: The Road to One Million Subscribers

    Links
    Quintin Pope vs Liron Shapira debate on Doom Debates — https://lironshapira.substack.com/p/ai-alignment-is-solved-phd-researcher
    CAP theorem, Wikipedia — https://en.wikipedia.org/wiki/CAP_theorem
    Google Cloud Spanner, Wikipedia — https://en.wikipedia.org/wiki/Spanner_(database)
    Newcomb’s problem, Wikipedia — https://en.wikipedia.org/wiki/Newcomb%27s_paradox
    Gödel’s incompleteness theorems, Wikipedia — https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems

    1 ч 56 min
  5. 25 MAR

    AI Alignment Is Solved?! PhD Researcher Quintin Pope vs Liron Shapira (2023 Twitter Debate)

    Quintin Pope, PhD, is one of the few critics of AI doomerism who is truly fluent in the concepts and arguments. In October 2023 he joined me for a debate on Twitter Spaces where he argued that AI alignment was basically already solved. His “inside view” on machine learning forced me to update my position, but could he knock me off the doom train?

    Timestamps
    00:00:00 — Cold Open
    00:00:43 — Introductions
    00:01:22 — Quintin's Opening Statement
    00:02:32 — Liron's Opening Statement
    00:05:10 — Has RLHF Solved the Alignment Problem?
    00:07:52 — AI Capabilities Are Constrained by Training Data
    00:10:52 — Defining ASI and Could RLHF Align a Superintelligence?
    00:13:13 — Quintin Is More Optimistic Than OpenAI
    00:14:16 — What Is ASI in Your Mind?
    00:15:57 — AI in 5 Years (2028) & AI Coding Agents
    00:19:05 — Continuous or Discontinuous Capability Gains?
    00:19:39 — DEBATE: General Intelligence Algorithm in Humans
    00:30:02 — The Only Coherent Explanation of Humans Going to the Moon
    00:34:01 — Are We "Fully Cooked" as a General Optimizer?
    00:35:53 — Common Mistake in Forecasting Superintelligence
    00:42:22 — 'Neat' vs 'Scruffy': Will Interpretable Structure Emerge Inside Neural Nets?
    00:48:57 — Does This Disagreement Actually Matter for P(Doom)?
    00:54:33 — Thought Experiment: Could You Have Predicted a Species Would Go to the Moon?
    00:57:26 — The Basin of Attraction for Superintelligence
    00:59:35 — Does a Superintelligence Even Exist in Algorithm Space?
    01:09:59 — Closing Statements
    01:12:40 — Audience Q&A
    01:19:35 — Wrap Up

    Links
    Original Twitter Spaces debate (Quintin Pope vs. Liron Shapira) — https://x.com/i/spaces/1YpJkwOzOqEJj/peek
    Quintin Pope on Twitter/X — https://twitter.com/QuintinPope5
    Quintin Pope, Alignment Forum profile — https://www.alignmentforum.org/users/quintin-pope
    InstructGPT, Wikipedia — https://en.wikipedia.org/wiki/InstructGPT
    AIXI, Wikipedia — https://en.wikipedia.org/wiki/AIXI
    AlphaZero, Wikipedia — https://en.wikipedia.org/wiki/AlphaZero
    MuZero, Wikipedia — https://en.wikipedia.org/wiki/MuZero
    DeepMind AlphaZero and MuZero page — https://deepmind.google/research/alphazero-and-muzero/
    Midjourney — https://www.midjourney.com/
    DALL-E, Wikipedia — https://en.wikipedia.org/wiki/DALL-E
    OpenAI Superalignment announcement — https://openai.com/index/introducing-superalignment/
    Shard Theory sequence on LessWrong — https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX
    “Evolution Provides No Evidence for the Sharp Left Turn” — https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn
    “My Objections to ‘We’re All Gonna Die with Eliezer Yudkowsky’” — https://www.lesswrong.com/posts/wAczufCpMdaamF9fy/my-objections-to-we-re-all-gonna-die-with-eliezer-yudkowsky
    “AI is Centralizing by Default; Let’s Not Make It Worse” — https://forum.effectivealtruism.org/posts/zd5inbT4kYKivincm/ai-is-centralizing-by-default-let-s-not-make-it-worse
    Singular Learning Theory, Alignment Forum sequence — https://www.alignmentforum.org/s/mqwA5FcL6SrHEQzox

    1 hr 20 min
  6. 20 MAR

    I'm Watching AI Take Everyone's Job | Liron on Robert Wright's NonZero Podcast

    My new interview on Robert Wright's Nonzero Podcast where we dive into the agentic AI explosion. Bob is an exceptionally sharp interviewer who connects the dots between job displacement and AI doom. I highly recommend becoming a premium subscriber to Bob’s Nonzero Newsletter so you can watch the Overtime segment in every interview he does — https://nonzero.org

    Our discussion continues in Overtime for premium subscribers — https://www.nonzero.org/p/early-access-the-allure-and-danger

    Links
    Nonzero Podcast on YouTube — https://www.youtube.com/@nonzero
    Robert Wright, The God Test (book, Amazon) — https://www.amazon.com/God-Test-Artificial-Intelligence-Reckoning/dp/1668061651

    Timestamps
    00:00:00 — Introduction and Today's Topics
    00:03:22 — Vibe Coding and the Agentic Revolution
    00:08:57 — The Future of Employment
    00:17:57 — Agents and What They Can Do
    00:27:59 — The "Can It" and "Will It" Framework for AI Doom
    00:30:27 — OpenClaw and Liron's Experience with AI Agents
    00:36:45 — The Case for Slowing Down AI Development
    00:43:28 — Anthropic, the Pentagon, and AI Politics
    00:48:37 — AI Safety Leadership Concerns
    00:52:06 — Closing and Overtime Tease

    55 min
  7. 17 MAR

    This Top Economist's P(Doom) Just Shot Up 10x! Noah Smith Returns To Explain His Update

    Noah Smith is an economist and author of Noahpinion, one of the most popular Substacks in the world. He returns to Doom Debates to share a massive update to his P(Doom), and things get a little heated.

    Timestamps
    00:00:00 — Cold Open
    00:00:41 — Welcome Back Noah Smith!
    00:01:40 — Noah's P(Doom) Update
    00:03:57 — The Chatbot-Genie-God Framework
    00:05:14 — What's Your P(Doom)™?
    00:09:59 — Unpacking Noah's Update
    00:16:56 — Why Incidents of Rogue AI Lower P(Doom)
    00:20:04 — Noah's Mainline Doom Scenario: Much Worse Than COVID-19
    00:23:29 — Society Responds After Growing Pains
    00:29:25 — Agentic AI Contributed to Noah's Position
    00:31:35 — Should Yudkowsky Get Bayesian Credit?
    00:33:59 — Are We Communicating the Right Way with Policymakers?
    00:40:16 — Finding Common Ground on AI Policy
    00:47:07 — Wrap-Up: People Need to Be More Scared

    Links
    Doom Debate with Noah Smith, Part 1 — https://www.youtube.com/watch?v=AwmJ-OnK2I4
    Noah’s Twitter — https://x.com/noahpinion
    Noah’s Substack — https://noahpinion.blog

    48 min

Ratings & Reviews

4.2 out of 5 (18 ratings)

About This Podcast

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
