52 episodes

I interview scientists, historians, economists, & intellectuals. I ask really good questions.

YouTube: https://www.youtube.com/DwarkeshPatel
Apple Podcasts: https://apple.co/3oBack9
Spotify: https://spoti.fi/3S5g2YK

www.dwarkeshpatel.com

The Lunar Society Dwarkesh Patel

    • Society & Culture
    • 5.0 • 3 Ratings


    Richard Rhodes - Making of Atomic Bomb, AI, WW2, Oppenheimer, & Abolishing Nukes

    It was a tremendous honor & pleasure to interview Richard Rhodes, Pulitzer Prize-winning author of The Making of the Atomic Bomb.
    We discuss:
    * similarities between AI progress & Manhattan Project (developing a powerful, unprecedented, & potentially apocalyptic technology within an uncertain arms-race situation)
    * visiting starving former Soviet scientists during fall of Soviet Union
    * whether Oppenheimer was a spy, & consulting on the Nolan movie
    * living through WW2 as a child
    * odds of nuclear war in Ukraine, Taiwan, Pakistan, & North Korea
    * how the US pulled off such a massive secret wartime scientific & industrial project
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    Timestamps
    (0:00:00) - Oppenheimer movie
    (0:06:22) - Was the bomb inevitable?
    (0:29:10) - Firebombing vs nuclear vs hydrogen bombs
    (0:49:44) - Stalin & the Soviet program
    (1:08:24) - Deterrence, disarmament, North Korea, Taiwan
    (1:33:12) - Oppenheimer as lab director
    (1:53:40) - AI progress vs Manhattan Project
    (1:59:50) - Living through WW2
    (2:16:45) - Secrecy
    (2:26:34) - Wisdom & war
    Transcript
    (0:00:00) - Oppenheimer movie
    Dwarkesh Patel 0:00:51
    Today I have the great honor of interviewing Richard Rhodes, who is the Pulitzer Prize-winning author of The Making of the Atomic Bomb, and most recently, the author of Energy, A Human History. I'm really excited about this one. Let's jump in at a current event, which is the fact that there's a new movie about Oppenheimer coming out, which I understand you've been consulted about. What did you think of the trailer? What are your impressions? 
    Richard Rhodes 0:01:22
    They've really done a good job of things like the Trinity test device, which was the sphere covered with cables of various kinds. I had watched Peaky Blinders, where the actor who's playing Oppenheimer also appeared, and he looked so much like Oppenheimer to start with. Oppenheimer was about six feet tall, he was rail thin, not simply in terms of weight, but in terms of structure. Someone said he could sit in a children's high chair comfortably. But he never weighed more than about 140 pounds and that quality is there in the actor. So who knows? It all depends on how the director decided to tell the story. There are so many aspects of the story that you could never possibly squeeze them into one 2-hour movie. I think that we're waiting for the multi-part series that would really tell a lot more of the story, if not the whole story. But it looks exciting. We'll see. There have been some terrible depictions of Oppenheimer, there've been some terrible depictions of the bomb program. And maybe they'll get this one right. 
    Dwarkesh Patel 0:02:42
    Yeah, hopefully. It is always great when you get an actor who resembles their role so well. For example, Bryan Cranston who played LBJ, and they have the same physical characteristics of the beady eyes, the big ears. Since we're talking about Oppenheimer, I had one question about him. I understand that there's evidence that's come out that he wasn't directly a communist spy. But is there any possibility that he was leaking information to the Soviets or in some way helping the Soviet program? He was a communist sympathizer, right? 
    Richard Rhodes 0:03:15
    He had been during the 1930s. But less for the theory than for the practical business of helping Jews escape from Nazi Germany. One of the loves of his life, Jean Tatlock, was also busy working on extracting Jews from Europe during the 1930s. She was a member of the Communist Party and she, I think, encouraged him to come to meetings. But I don't think there's any possibility whatsoever that he shared information. In fact, he said he read Marx on a train trip between Berkeley and Washington one time and thought it was a bunch of hooey, just ridiculous. He was a very smart man, and he read the book with an eye to its logic, and he didn't think t

    • 2 hrs 37 min
    Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

    For 4 hours, I tried to come up with reasons for why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.
    We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.
    If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate.
    If you have the means and have enjoyed my podcast, I would appreciate your support via a paid subscription on Substack.
    Timestamps
    (0:00:00) - TIME article
    (0:09:06) - Are humans aligned?
    (0:37:35) - Large language models
    (1:07:15) - Can AIs help with alignment?
    (1:30:17) - Society’s response to AI
    (1:44:42) - Predictions (or lack thereof)
    (1:56:55) - Being Eliezer
    (2:13:06) - Orthogonality
    (2:35:00) - Could alignment be easier than we think?
    (3:02:15) - What will AIs want?
    (3:43:54) - Writing fiction & whether rationality helps you win
    Transcript
    TIME article
    Dwarkesh Patel 0:00:51
    Today I have the pleasure of speaking with Eliezer Yudkowsky. Eliezer, thank you so much for coming out to the Lunar Society.
    Eliezer Yudkowsky 0:01:00
    You’re welcome.
    Dwarkesh Patel 0:01:01
    Yesterday, when we’re recording this, you had an article in Time calling for a moratorium on further AI training runs. My first question is — It’s probably not likely that governments are going to adopt some sort of treaty that restricts AI right now. So what was the goal with writing it?
    Eliezer Yudkowsky 0:01:25
    I thought that this was something very unlikely for governments to adopt and then all of my friends kept on telling me — “No, no, actually, if you talk to anyone outside of the tech industry, they think maybe we shouldn’t do that.” And I was like — All right, then. I assumed that this concept had no popular support. Maybe I assumed incorrectly. It seems foolish and to lack dignity to not even try to say what ought to be done. There wasn’t a galaxy-brained purpose behind it. I think that over the last 22 years or so, we’ve seen a great lack of galaxy brained ideas playing out successfully.
    Dwarkesh Patel 0:02:05
    Has anybody in the government reached out to you, not necessarily after the article but just in general, in a way that makes you think that they have the broad contours of the problem correct?
    Eliezer Yudkowsky 0:02:15
    No. I’m going on reports that normal people are more willing than the people I’ve been previously talking to, to entertain calls that this is a bad idea and maybe you should just not do that.
    Dwarkesh Patel 0:02:30
    That’s surprising to hear, because I would have assumed that the people in Silicon Valley who are weirdos would be more likely to find this sort of message. They could kind of rocket the whole idea that AI will make nanomachines that take over. It’s surprising to hear that normal people got the message first.
    Eliezer Yudkowsky 0:02:47
    Well, I hesitate to use the term midwit but maybe this was all just a midwit thing.
    Dwarkesh Patel 0:02:54
    All right. So my concern with either the 6 month moratorium or forever moratorium until we solve alignment is that at this point, it could make it seem to people like we’re crying wolf. And it would be like crying wolf because these systems aren’t yet at a point at which they’re dangerous. 
    Eliezer Yudkowsky 0:03:13
    And nobody is saying they are. I’m not saying they are. The open letter signatories aren’t saying they are.
    Dwarkesh Patel 0:03:20
    So if there is a point at which we can get the public momentum to do some sort of stop, wouldn’t

    • 4 hrs 3 min
    Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment

    I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:
    * time to AGI
    * leaks and spies
    * what's after generative models
    * post AGI futures
    * working with Microsoft and competing with Google
    * difficulty of aligning superhuman AI
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate.
    If you have the means and have enjoyed my podcast, I would appreciate your support via a paid subscription on Substack.
    Timestamps
    (00:00) - Time to AGI
    (05:57) - What’s after generative models?
    (10:57) - Data, models, and research
    (15:27) - Alignment
    (20:53) - Post AGI Future
    (26:56) - New ideas are overrated
    (36:22) - Is progress inevitable?
    (41:27) - Future Breakthroughs
    Transcript
    Time to AGI
    Dwarkesh Patel  
    Today I have the pleasure of interviewing Ilya Sutskever, who is the Co-founder and Chief Scientist of OpenAI. Ilya, welcome to The Lunar Society.
    Ilya Sutskever  
    Thank you, happy to be here.
    Dwarkesh Patel  
    First question and no humility allowed. There are not that many scientists who will make a big breakthrough in their field, there are far fewer scientists who will make multiple independent breakthroughs that define their field throughout their career, what is the difference? What distinguishes you from other researchers? Why have you been able to make multiple breakthroughs in your field?
    Ilya Sutskever  
    Thank you for the kind words. It's hard to answer that question. I try really hard, I give it everything I've got and that has worked so far. I think that's all there is to it. 
    Dwarkesh Patel  
    Got it. What's the explanation for why there aren't more illicit uses of GPT? Why aren't more foreign governments using it to spread propaganda or scam grandmothers?
    Ilya Sutskever  
    Maybe they haven't really gotten to do it a lot. But it also wouldn't surprise me if some of it was going on right now. I can certainly imagine they would be taking some of the open source models and trying to use them for that purpose. For sure I would expect this to be something they'd be interested in in the future.
    Dwarkesh Patel  
    It's technically possible they just haven't thought about it enough?
    Ilya Sutskever  
    Or haven't done it at scale using their technology. Or maybe it is happening, which is annoying. 
    Dwarkesh Patel  
    Would you be able to track it if it was happening? 
    Ilya Sutskever 
    I think large-scale tracking is possible, yes. It requires special operations but it's possible.
    Dwarkesh Patel  
    Now there's some window in which AI is very economically valuable, let’s say on the scale of airplanes, but we haven't reached AGI yet. How big is that window?
    Ilya Sutskever  
    It's hard to give a precise answer and it’s definitely going to be a good multi-year window. It's also a question of definition. Because AI, before it becomes AGI, is going to be increasingly more valuable year after year in an exponential way. 
    In hindsight, it may feel like there was only one year or two years because those two years were larger than the previous years. But I would say that already, last year, there has been a fair amount of economic value produced by AI. Next year is going to be larger and larger after that. So I think it's going to be a good multi-year chunk of time where that’s going to be true, from now till AGI pretty much. 
    Dwarkesh Patel  
    Okay. Because I'm curious if there's a startup that's using your model, at some point if you have AGI there's only one business in the world, it's OpenAI. How much window does any business have where they're actually producing something that AGI can’t produce?
    Ilya Sutskever  
    It's the same question as a

    • 47 min
    Nat Friedman - Reading Ancient Scrolls, Open Source, & AI

    It is said that the two greatest problems of history are: how to account for the rise of Rome, and how to account for her fall. If so, then the volcanic ashes spewed by Mount Vesuvius in 79 AD - which entombed the cities of Pompeii and Herculaneum in southern Italy - hold history’s greatest prize. For beneath those ashes lies the only salvageable library from the classical world.
    Nat Friedman was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies - Ximian and Xamarin. He is also the founder of AI Grant and California YIMBY.
    And most recently, he has created and funded the Vesuvius Challenge - a million dollar prize for reading an unopened Herculaneum scroll for the very first time. If we can decipher these scrolls, we may be able to recover lost gospels, forgotten epics, and even missing works of Aristotle.
    We also discuss the future of open source and AI, running Github and building Copilot, and why EMH is a lie.
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate.
    If you have the means and have enjoyed my podcast, I would appreciate your support via a paid subscription on Substack 🙏.
    Timestamps
    (0:00:00) - Vesuvius Challenge
    (0:30:00) - Finding points of leverage
    (0:37:39) - Open Source in AI
    (0:40:32) - GitHub Acquisition
    (0:50:18) - Copilot origin Story
    (1:11:47) - Nat.org
    (1:32:56) - Questions from Twitter
    Transcript
    Dwarkesh Patel 
    Today I have the pleasure of speaking with Nat Friedman, who was the CEO of GitHub from 2018 to 2021. Before that, he started and sold two companies, Ximian and Xamarin. And he is also the founder of AI Grant and California YIMBY. And most recently, he is the organizer and founder of the Scroll prize, which is where we'll start this conversation. Do you want to tell the audience about what the Scroll prize is? 
    Vesuvius Challenge
    Nat Friedman 
    We're calling it the Vesuvius challenge. It is just this crazy and exciting thing I feel incredibly honored to have gotten caught up in. A couple of years ago, it was the midst of COVID and we were in a lockdown, and like everybody else, I was falling into internet rabbit holes. And I just started reading about the eruption of Mount Vesuvius in Italy, about 2000 years ago. And it turns out that when Vesuvius erupted, it was AD 79. It destroyed all the nearby towns, everyone knows about Pompeii. But there was another nearby town called Herculaneum. And Herculaneum was sort of like the Beverly Hills to Pompeii. So big villas, big houses, fancy people. And in Herculaneum, there was one enormous villa in particular. It had once been owned by the father in law of Julius Caesar, a well connected guy. And it was full of beautiful statues and marbles and art. But it was also the home to a huge library of papyrus scrolls. When the villa was buried, the volcano spit out enormous quantities of mud and ash, and it buried Herculaneum in something like 20 meters of material. So it wasn't a thin layer, it was a very thick layer. Those towns were buried and forgotten for hundreds of years. No one even knew exactly where they were, until the 1700s. In 1750 a farm worker who was digging a well in the outskirts of Herculaneum struck this marble paving stone of a path that had been at this huge villa. He was pretty far down when he did that, he was 60 feet down. And then subsequently, a Swiss engineer came in and started digging tunnels from that well shaft and they found all these treasures. Looting was sort of the spirit of the time. If they encountered a wall, they would just bust through it and they were taking out these beautiful bronze statues that had survived. And along the way, they kept encountering these lumps of wha

    • 1 hr 38 min
    Brett Harrison - FTX US Former President & HFT Veteran Speaks Out

    I flew out to Chicago to interview Brett Harrison, who is the former President of FTX US and the founder of Architect.
    In his first longform interview since the fall of FTX, he speaks in great detail about his entire tenure there and about SBF’s dysfunctional leadership. He talks about how the inner circle of Gary Wang, Nishad Singh, and SBF mismanaged the company, controlled the codebase, got distracted by media, and even threatened him for his letter of resignation.
    In what was my favorite part of the interview, we also discuss his insights about the financial system from his decades of experience in the world's largest HFT firms.
    And we talk about Brett's new startup, Architect, as well as the general state of crypto post-FTX.
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    Similar episodes
    Side note: Paying the bills
    To help pay the bills for my podcast, I've turned on paid subscriptions on Substack.
    No major content will be paywalled - please don't donate if you have to think twice before buying a cup of coffee.
    But if you have the means & have enjoyed my podcast, I would appreciate your support 🙏.
    As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate.
    Timestamps
    (0:00:00) - Passive investing & HFT hacks
    (0:08:30) - Is Finance Zero-Sum?
    (0:18:38) - Interstellar Markets & Periodic Auctions
    (0:23:10) - Hiring & Programming at Jane Street
    (0:32:09) - Quant Culture
    (0:42:10) - FTX - Meeting Sam, Joining FTX US
    (0:58:20) - FTX - Accomplishments, Beginnings of Trouble
    (1:08:11) - FTX - SBF's Dysfunctional Leadership
    (1:26:53) - FTX - Alameda
    (1:33:50) - FTX - Leaving FTX, SBF's Threats
    (1:45:45) - FTX - Collapse
    (1:53:10) - FTX - Lessons
    (2:04:34) - FTX - Regulators, & FTX Mafia
    (2:15:42) - Architect.xyz
    (2:30:10) - Institutional Interest & Uses of Crypto
    Transcript
    This transcript was autogenerated and thus may contain errors.
    Dwarkesh Patel
    Okay. Today I have the pleasure of speaking with Brett Harrison, who is now the founder of Architect, which provides traders with infrastructure for accessing digital markets. Before that he was the president of FTX US, and before that he was the head of ETF technology at Citadel. And he has a large amount of experience in leadership positions in finance and tech. So this is going to be a very interesting conversation. Thanks for coming on the Lunar Society, Brett.
    Brett Harrison
    Yeah. Thanks for coming out to Chicago.
    Dwarkesh Patel
    Yeah, my pleasure. My pleasure. Is the growth of ETFs a good thing for the health of markets? There's one view that as there's more passive investing, you're kind of diluting the power of smart money. And in fact, what these active investors are doing with their fees is subsidizing the price discovery that makes markets efficient. And with passive investing, you're sort of free riding off of that. You were head of ETF technology at Citadel, so you're the perfect person to ask this. Is it bad that there's so much passive investing?
    Brett Harrison
    I think on that it's good. I think that most investors in the market shouldn't be trying to pick individual stock names. And the best thing people can do is invest in sort of diversified instruments. And it is far less expensive to invest in indices now than it ever was in history because of the advent of ETFs.
    Dwarkesh Patel
    Yeah. So maybe it's good for individual investors to put their money in passive investments. But what about the health of the market as a whole? Is it hampered by how much money goes into passive investments?
    Brett Harrison
    It's hard to be able to tell what it would look like if there was less money in passive investment. Now, I do think one of the potential downsides is ending up creating extra correlated activity between i

    • 2 hrs 37 min
    Marc Andreessen - AI, Crypto, 1000 Elon Musks, Regrets, Vulnerabilities, & Managerial Revolution

    My podcast with the brilliant Marc Andreessen is out!
    We discuss:
    * how AI will revolutionize software
    * whether NFTs are useless, & whether he should be funding flying cars instead
    * a16z's biggest vulnerabilities
    * the future of fusion, education, Twitter, venture, managerialism, & big tech
    Dwarkesh Patel has a great interview with Marc Andreessen. This one is full of great riffs: the idea that VC exists to restore pockets of bourgeois capitalism in a mostly managerial capitalist system, what makes the difference between good startup founders and good mature company executives, how valuation works at the earliest stages, and more. Dwarkesh tends to ask the questions other interviewers don't.
    Byrne Hobart, The Diff
    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
    Similar episodes
    You may also enjoy my interview of Tyler Cowen about the pessimism of sex and identifying talent, Byrne Hobart about FTX and how drugs have shaped financial markets, and Bethany McLean about the astonishing similarities between FTX and the Enron story (which she broke).
    Side note: Paying the bills
    To help pay the bills for my podcast, I'm turning on paid subscriptions on Substack.
    No major content will be paywalled - please don't donate if you have to think twice before buying a cup of coffee.
    But if you have the means & have enjoyed my podcast, I would appreciate your support 🙏.
    As always, the most helpful thing you can do is just to share the podcast - send it to friends, group chats, Twitter, Reddit, forums, and wherever else men and women of fine taste congregate.
    Timestamps
    (0:00:17) - Chewing glass
    (0:04:21) - AI
    (0:06:42) - Regrets
    (0:08:51) - Managerial capitalism
    (0:18:43) - 100 year fund
    (0:22:15) - Basic research
    (0:27:07) - $100b fund?
    (0:30:32) - Crypto debate
    (0:43:29) - Future of VC
    (0:50:20) - Founders
    (0:56:42) - a16z vulnerabilities
    (1:01:28) - Monetizing Twitter
    (1:07:09) - Future of big tech
    (1:14:07) - Is VC Overstaffed?
    Transcript
    Dwarkesh Patel 0:00
    Today, I have the great pleasure of speaking with Marc Andreessen, which means for the first time on the podcast, the guest’s and the host’s playback speed will actually match. Marc, welcome to The Lunar Society.
    Marc Andreessen 00:13
    Good morning. And thank you for having me. It's great to be here.
    Chewing glass
    Dwarkesh Patel 00:17
    My pleasure. Have you been tempted anytime in the last 14 years to start a company? Not a16z, but another company?
    Marc Andreessen 00:24
    No. The short answer is we did. We started our venture firm in 2009 and it's given my partner, Ben and I, a chance to fully exercise our entrepreneurial ambitions and energies to build this firm. We're over 500 people now at the firm which is small for a tech company, but it's big for a venture capital firm. And it has let us get all those urges out.
    Dwarkesh Patel 00:50
    But there's no product where you think — “Oh God, this needs to exist, and I should be the one to make it happen”?
    Marc Andreessen 00:55
    I think of this a lot. We look at this through the lens of — “What would I do if I were 23 again?” And I always have those ideas. But starting a company is a real commitment, it really changes your life. My favorite all time quote on being a startup founder is from Sean Parker, who says —“Starting a company is like chewing glass. Eventually, you start to like the taste of your own blood.” I always get this queasy look on the face of people I’m talking to when I roll that quote out. But it is really intense. Whenever anybody asks me if they should start a company, the answer is always no. Because it's such a gigantic, emotional, irrational thing to do. The implications of that decision are so profound in terms of how you live your life. Look, there are plenty of great ideas, and plenty of interesting things to do but the actual process is so difficult. It gets romant

    • 1 hr 19 min

Customer Reviews

5.0 out of 5
3 Ratings

Hello63147,

Best new intellectual podcast

If you’re a fan of Tyler Cowen’s Conversations With Tyler, this is the younger faster-talking version. The host is very well-prepared and asks good questions. No filler.

The recent podcast with Eliezer Yudkowsky is the closest I’ve seen a podcast come to what real, extended, high-level, consequential intellectual discussion feels like. An amazing achievement!
