Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 5.0 • 1 Rating
    • 208 episodes

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

    Anton Korinek on Automating Work and the Economics of an Intelligence Explosion

    Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com  

    Timestamps:
    00:00 Automation and wages
    14:32 Complexity for people and machines
    20:31 Moravec's paradox
    26:15 Can people switch careers?
    30:57 Intelligence explosion economics
    44:08 The lump of labor fallacy
    51:40 An industry for nostalgia?
    57:16 Universal basic income
    01:09:28 Market structure in AI

    • 1 hr 32 min

    Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light

    Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com  

    Timestamps:
    00:00 US-China competition and risk
    18:01 The security dilemma
    30:21 Official and unofficial diplomacy
    39:53 Hotlines between countries
    01:01:54 Preventing escalation after war
    01:09:58 Catastrophic biological risks
    01:20:42 Ultraviolet germicidal light
    01:25:54 Ancient civilizational collapse

    • 1 hr 36 min

    Christian Nunes on Deepfakes (with Max Tegmark)

    Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org 

    Timestamps:
    00:00 The National Organization for Women (NOW)
    05:37 Deepfakes and women
    10:12 Protecting ordinary victims of deepfakes
    16:06 Deepfake legislation
    23:38 Current harm from deepfakes
    30:20 Bodily autonomy as a right
    34:44 NOW's work on AI

    Here are FLI's recommended amendments to legislative proposals on deepfakes:

    https://futureoflife.org/document/recommended-amendments-to-legislative-proposals-on-deepfakes/

    • 37 min

    Dan Faggella on the Race to AGI

    Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com

    Timestamps:
    00:00 Value differences in AI
    12:07 Should we eventually create AGI?
    28:22 What is a worthy successor?
    43:19 AI changing power dynamics
    59:00 Open source AI
    01:05:07 What drives AI progress?
    01:16:36 What limits AI progress?
    01:26:31 Which industries are using AI?

    • 1 hr 45 min

    Liron Shapira on Superintelligence Goals

    Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

    Timestamps:
    00:00 Intelligence as optimization-power
    05:18 Will LLMs imitate human values?
    07:15 Why would AI develop dangerous goals?
    09:55 Goal-completeness
    12:53 Alignment to which values?
    22:12 Is AI just another technology?
    31:20 What is FOOM?
    38:59 Risks from centralized power
    49:18 Can AI defend us against AI?
    56:28 An Apollo program for AI safety
    01:04:49 Do we only have one chance?
    01:07:34 Are we living in a crucial time?
    01:16:52 Would superintelligence be fragile?
    01:21:42 Would human-inspired AI be safe?

    • 1 hr 26 min

    Annie Jacobsen on Nuclear War - a Second by Second Timeline

    Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

    Timestamps:
    00:00 A scenario of nuclear war
    06:56 Who would launch an attack?
    13:50 Detecting nuclear attacks
    19:37 The first critical seconds
    29:42 Decisions under time pressure
    34:27 Lessons from insiders
    44:18 Submarines
    51:06 How did we end up like this?
    59:40 Interceptor missiles
    01:11:25 Nuclear weapons and cyberattacks
    01:17:35 Concentration of power

    • 1 hr 26 min

Top Podcasts In Technology

    • Lex Fridman Podcast, by Lex Fridman
    • All-In with Chamath, Jason, Sacks & Friedberg, by All-In Podcast, LLC
    • Darknet Diaries, by Jack Rhysider
    • Y Combinator Startup Podcast, by Y Combinator
    • Lightcone Podcast, by Y Combinator
    • re:invent security, by Jeroen Prinse / Irfaan Santoe

You Might Also Like

    • "The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis, by Erik Torenberg, Nathan Labenz
    • Dwarkesh Podcast, by Dwarkesh Patel
    • Clearer Thinking with Spencer Greenberg, by Spencer Greenberg
    • Machine Learning Street Talk (MLST), by Machine Learning Street Talk (MLST)
    • Conversations with Tyler, by Mercatus Center at George Mason University
    • Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas, by Sean Carroll | Wondery