206 episodes

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 4.8 • 101 Ratings

    Christian Nunes on Deepfakes (with Max Tegmark)

    Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org 

    Timestamps:

    00:00 The National Organization for Women (NOW)
    05:37 Deepfakes and women
    10:12 Protecting ordinary victims of deepfakes
    16:06 Deepfake legislation
    23:38 Current harm from deepfakes
    30:20 Bodily autonomy as a right
    34:44 NOW's work on AI

    Here are FLI's recommended amendments to legislative proposals on deepfakes: 

    https://futureoflife.org/document/recommended-amendments-to-legislative-proposals-on-deepfakes/

    • 37 min
    Dan Faggella on the Race to AGI

    Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com

    Timestamps:

    00:00 Value differences in AI
    12:07 Should we eventually create AGI?
    28:22 What is a worthy successor?
    43:19 AI changing power dynamics
    59:00 Open source AI
    01:05:07 What drives AI progress?
    01:16:36 What limits AI progress?
    01:26:31 Which industries are using AI?

    • 1 hr 45 min
    Liron Shapira on Superintelligence Goals

    Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

    Timestamps:

    00:00 Intelligence as optimization-power
    05:18 Will LLMs imitate human values?
    07:15 Why would AI develop dangerous goals?
    09:55 Goal-completeness
    12:53 Alignment to which values?
    22:12 Is AI just another technology?
    31:20 What is FOOM?
    38:59 Risks from centralized power
    49:18 Can AI defend us against AI?
    56:28 An Apollo program for AI safety
    01:04:49 Do we only have one chance?
    01:07:34 Are we living in a crucial time?
    01:16:52 Would superintelligence be fragile?
    01:21:42 Would human-inspired AI be safe?

    • 1 hr 26 min
    Annie Jacobsen on Nuclear War - a Second by Second Timeline

    Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

    Timestamps:

    00:00 A scenario of nuclear war
    06:56 Who would launch an attack?
    13:50 Detecting nuclear attacks
    19:37 The first critical seconds
    29:42 Decisions under time pressure
    34:27 Lessons from insiders
    44:18 Submarines
    51:06 How did we end up like this?
    59:40 Interceptor missiles
    1:11:25 Nuclear weapons and cyberattacks
    1:17:35 Concentration of power

    • 1 hr 26 min
    Katja Grace on the Largest Survey of AI Researchers

    Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI from either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/.

    Timestamps:

    0:20 AI Impacts surveys
    18:11 What AI will look like in 20 years
    22:43 Experts' extinction risk predictions
    29:35 Opinions on slowing down AI development
    31:25 AI "arms races"
    34:00 AI risk areas with the most agreement
    40:41 Do "high hopes and dire concerns" go hand-in-hand?
    42:00 Intelligence explosions
    45:37 Discontinuous progress
    49:43 Impacts of AI crossing the human-level intelligence threshold
    59:39 What does AI learn from human culture?
    1:02:59 AI scaling
    1:05:04 What should we do?

    • 1 hr 8 min
    Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting

    Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info

    Timestamps:

    00:00 Pausing AI
    10:23 Risks during an AI pause
    19:41 Hardware overhang
    29:04 Technological progress
    37:00 Safety research during a pause
    54:42 Social dynamics of AI risk
    1:10:00 What prevents cooperation?
    1:18:21 What about China?
    1:28:24 Protesting AGI corporations

    • 1 hr 36 min

Customer Reviews

4.8 out of 5
101 Ratings

457/26777633

Science-smart interviewer asks very good questions!

Great, in-depth interviews.

VV7425795

Fantastic contribution to mankind! Thanks!

🤗👍🏻

malfoxley

Great show!

Lucas, host of the Future of Life podcast, highlights all aspects of tech and more in this can’t-miss podcast! The host and expert guests offer insightful advice and information that is helpful to anyone who listens!
