This is a free preview of a paid episode. To hear more, visit nonzero.substack.com
0:24 Why this pod’s a little odd
2:50 Ilya Sutskever and Jan Leike quit OpenAI—part of a larger pattern?
10:20 Bob: AI doomers need Hollywood
16:26 Does an AI arms race spell doom for alignment?
20:40 Why the “Pause AI” movement matters
24:54 AI doomerism and Don’t Look Up: compare and contrast
27:23 How Liron (fore)sees AI doom
33:18 Are Sam Altman’s concerns about AI safety sincere?
39:46 Paperclip maximizing, evolution, and the AI will to power question
51:34 Are there real-world examples of AI going rogue?
1:07:12 Should we really align AI to human values?
1:15:27 Heading to Overtime
Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Liron Shapira (Pause AI, Relationship Hero). Recorded May 06, 2024. Additional segment recorded May 15, 2024.
Twitter: https://twitter.com/NonzeroPods
Information
- Frequency: Updated weekly
- Published: 16 May 2024 at 17:00 UTC
- Length: 1h 18m
- Rating: Clean