52 episodes

Dwarkesh Podcast (formerly The Lunar Society), by Dwarkesh Patel

    • Society & Culture
    • 4.8 • 69 Ratings

I interview the most interesting people and I ask the most interesting questions.

YouTube: https://www.youtube.com/DwarkeshPatel
Apple Podcasts: https://apple.co/3oBack9
Spotify: https://spoti.fi/3S5g2YK

www.dwarkeshpatel.com

    Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress

    Here is my conversation with Dario Amodei, CEO of Anthropic.
    Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.
    ---
    I’m running an experiment on this episode.
    I’m not doing an ad.
    Instead, I’m just going to ask you to pay for whatever value you feel you personally got out of this conversation.
    Pay here: https://bit.ly/3ONINtp
    ---
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    Timestamps
    (00:00:00) - Introduction
    (00:01:00) - Scaling
    (00:15:46) - Language
    (00:22:58) - Economic Usefulness
    (00:38:05) - Bioterrorism
    (00:43:35) - Cybersecurity
    (00:47:19) - Alignment & mechanistic interpretability
    (00:57:43) - Does alignment research require scale?
    (01:05:30) - Misuse vs misalignment
    (01:09:06) - What if AI goes well?
    (01:11:05) - China
    (01:15:11) - How to think about alignment
    (01:31:31) - Is modern security good enough?
    (01:36:09) - Inefficiencies in training
    (01:45:53) - Anthropic’s Long Term Benefit Trust
    (01:51:18) - Is Claude conscious?
    (01:56:14) - Keeping a low profile


    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

    • 1 hr 58 min
    Andy Matuschak - Self-Teaching, Spaced Repetition, & Why Books Don’t Work

    A few weeks ago, I sat beside Andy Matuschak to record how he reads a textbook.
    Even though my own job is to learn things, I was shocked by how much more intense, painstaking, and effective his learning process was.
    So I asked if we could record a conversation about how he learns and a bunch of other topics:
    * How he identifies and interrogates his confusion (much harder than it seems, and requires an extremely effortful and slow pace)
    * Why memorization is essential to understanding and decision-making
    * How some people (like Tyler Cowen) integrate so much information without an explicit note-taking or spaced repetition system
    * How LLMs and video games will change education
    * How independent researchers and writers can make money
    * The balance of freedom and discipline in education
    * Why we produce fewer von Neumann-like prodigies nowadays
    * How multi-trillion-dollar companies like Apple (where he was previously responsible for bedrock iOS features) manage to coordinate millions of different considerations (from the cost of different components to the needs of users) into new products designed by tens of thousands of people.
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    To see Andy’s process in action, check out the video where we record him studying a quantum physics textbook, talking aloud about his thought process, and using his memory system prototype to internalize the material.
    You can check out his website and personal notes, and follow him on Twitter.
    Cometeer
    Visit cometeer.com/lunar for $20 off your first order on the best coffee of your life!
    If you want to sponsor an episode, contact me at dwarkesh.sanjay.patel@gmail.com.
    Timestamps
    (00:02:32) - Skillful reading
    (00:04:10) - Do people care about understanding?
    (00:08:32) - Structuring effective self-teaching
    (00:18:17) - Memory and forgetting
    (00:34:50) - Andy’s memory practice
    (00:41:47) - Intellectual stamina
    (00:46:07) - New media for learning (video, games, streaming)
    (01:00:31) - Schools are designed for the median student
    (01:06:52) - Is learning inherently miserable?
    (01:13:37) - How Andy would structure his kids’ education
    (01:31:40) - The usefulness of hypertext
    (01:43:02) - How computer tools enable iteration
    (01:52:24) - Monetizing public work
    (02:10:16) - Spaced repetition
    (02:11:56) - Andy’s personal website and notes
    (02:14:24) - Working at Apple
    (02:21:05) - Spaced repetition 2


    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

    • 2 hr 22 min
    Carl Shulman (Pt 2) - AI Takeover, Bio & Cyber Attacks, Detecting Deception, & Humanity's Far Future

    The second half of my 7-hour conversation with Carl Shulman is out!
    My favorite part! And the one that had the biggest impact on my worldview.
    Here, Carl lays out how an AI takeover might happen:
    * AI can threaten mutually assured destruction from bioweapons,
    * use cyber attacks to take over physical infrastructure,
    * build mechanical armies,
    * spread seed AIs we can never exterminate,
    * offer tech and other advantages to collaborating countries, etc
    Plus we talk about a whole bunch of weird and interesting topics which Carl has thought about:
    * what is the far future best case scenario for humanity
    * what it would look like to have AI make thousands of years of intellectual progress in a month
    * how do we detect deception in superhuman models
    * does space warfare favor defense or offense
    * is a Malthusian state inevitable in the long run
    * why markets haven't priced in explosive economic growth
    * & much more
    Carl also explains how he developed such a rigorous, thoughtful, and interdisciplinary model of the biggest problems in the world.
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    Catch part 1 here
    80,000 Hours
    This episode is sponsored by 80,000 Hours. To get their free career guide (and to help out this podcast), please visit 80000hours.org/lunar.
    80,000 Hours is, without any close second, the best resource to learn about the world’s most pressing problems and how you can solve them.
    If this conversation has got you concerned, and you want to get involved, then check out the excellent 80,000 hours guide on how to help with AI risk.
    To advertise on The Lunar Society, contact me at dwarkesh.sanjay.patel@gmail.com.
    Timestamps
    (00:02:50) - AI takeover via cyber or bio
    (00:34:30) - Can we coordinate against AI?
    (00:55:52) - Human vs AI colonizers
    (01:06:58) - Probability of AI takeover
    (01:23:59) - Can we detect deception?
    (01:49:28) - Using AI to solve coordination problems
    (01:58:04) - Partial alignment
    (02:13:44) - AI far future
    (02:25:07) - Markets & other evidence
    (02:35:29) - Day in the life of Carl Shulman
    (02:49:08) - Space warfare, Malthusian long run, & other rapid fire


    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

    • 3 hr 9 min
    Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution, Robot Doublings, & Alignment

    In terms of the depth and range of topics, this episode is the best I’ve done.
    No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.
    We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.
    This part is about Carl’s model of an intelligence explosion, which integrates everything from:
    * how fast algorithmic progress & hardware improvements in AI are happening,
    * what primate evolution suggests about the scaling hypothesis,
    * how soon before AIs could do large parts of AI research themselves, and whether there would be faster and faster doublings of AI researchers,
    * how quickly robots produced from existing factories could take over the economy.
    We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.
    The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.
    Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    Timestamps
    (00:00:00) - Intro
    (00:01:32) - Intelligence Explosion
    (00:18:03) - Can AIs do AI research?
    (00:39:00) - Primate evolution
    (01:03:30) - Forecasting AI progress
    (01:34:20) - After human-level AGI
    (02:08:39) - AI takeover scenarios


    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

    • 2 hr 44 min
    Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes

    It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb.
    We discuss
    - similarities between AI progress & Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation)
    - visiting starving former Soviet scientists during the fall of the Soviet Union
    - whether Oppenheimer was a spy, & consulting on the Nolan movie
    - living through WW2 as a child
    - odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea
    - how the US pulled off such a massive secret wartime scientific & industrial project
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    Timestamps
    (0:00:00) - Oppenheimer movie
    (0:06:22) - Was the bomb inevitable?
    (0:29:10) - Firebombing vs nuclear vs hydrogen bombs
    (0:49:44) - Stalin & the Soviet program
    (1:08:24) - Deterrence, disarmament, North Korea, Taiwan
    (1:33:12) - Oppenheimer as lab director
    (1:53:40) - AI progress vs Manhattan Project
    (1:59:50) - Living through WW2
    (2:16:45) - Secrecy
    (2:26:34) - Wisdom & war


    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

    • 2 hr 37 min
    Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

    For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.
    We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.
    If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    Timestamps
    (0:00:00) - TIME article
    (0:09:06) - Are humans aligned?
    (0:37:35) - Large language models
    (1:07:15) - Can AIs help with alignment?
    (1:30:17) - Society’s response to AI
    (1:44:42) - Predictions (or lack thereof)
    (1:56:55) - Being Eliezer
    (2:13:06) - Orthogonality
    (2:35:00) - Could alignment be easier than we think?
    (3:02:15) - What will AIs want?
    (3:43:54) - Writing fiction & whether rationality helps you win


    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.dwarkeshpatel.com

    • 4 hr 3 min

Customer Reviews

4.8 out of 5
69 Ratings

joe 18 pack,

Context is scarce

I like the podcast a lot. Great guests. Good conversations. But the host refuses to contextualize or explain background information.

smallhorse,

All great things must come to an end

The Lunar Society Podcast was the greatest. Just replacing it with a boring <author’s name> podcast is a bummer.

I’ll be back to check it out in a year, but gotta bail on the show for now. 😢

wiiasdlji,

Great podcast

Interesting topics and a very well-prepared host
