159 episodes

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons and climate change.

The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government and European Union institutions.

FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 4.8 • 87 Ratings

    Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education

    Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev

    Social Media Links:
    ➡️ WEBSITE: https://futureoflife.org
    ➡️ TWITTER: https://twitter.com/FLIxrisk
    ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
    ➡️ META: https://www.facebook.com/futureoflifeinstitute
    ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/

    • 1 hr 5 min
    Connor Leahy on AI Safety and Why the World is Fragile

    Connor Leahy from Conjecture joins the podcast to discuss AI safety, the fragility of the world, slowing down AI development, regulating AI, and the optimal funding model for AI safety research. Learn more about Connor's work at https://conjecture.dev

    Timestamps:
    00:00 Introduction
    00:47 What is the best way to understand AI safety?
    09:50 Why is the world relatively stable?
    15:18 Is the main worry human misuse of AI?
    22:47 Can humanity solve AI safety?
    30:06 Can we slow down AI development?
    37:13 How should governments regulate AI?
    41:09 How do we avoid misallocating AI safety government grants?
    51:02 Should AI safety research be done by for-profit companies?

    • 1 hr 5 min
    Connor Leahy on AI Progress, Chimps, Memes, and Markets

    Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev

    Timestamps:
    00:00 Introduction
    01:00 Defining artificial general intelligence
    04:52 What makes humans more powerful than chimps?
    17:23 Would AIs have to be social to be intelligent?
    20:29 Importing humanity's memes into AIs
    23:07 How do we measure progress in AI?
    42:39 Gut feelings about AI progress
    47:29 Connor's predictions about AGI
    52:44 Is predicting AGI soon betting against the market?
    57:43 How accurate are prediction markets about AGI?

    • 1 hr 4 min
    Sean Ekins on Regulating AI Drug Discovery

    On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about regulating AI drug discovery.

    Timestamps:
    00:00 Introduction
    00:31 Ethical guidelines and regulation of AI drug discovery
    06:11 How do we balance innovation and safety in AI drug discovery?
    13:12 Keeping dangerous chemical data safe
    21:16 Sean’s personal story of voicing concerns about AI drug discovery
    32:06 How Sean will continue working on AI drug discovery

    • 36 min
    Sean Ekins on the Dangers of AI Drug Discovery

    On this special episode of the podcast, Emilia Javorsky interviews Sean Ekins about the dangers of AI drug discovery. They talk about how Sean discovered an extremely toxic chemical (VX) by reversing an AI drug discovery algorithm.

    Timestamps:
    00:00 Introduction
    00:46 Sean’s professional journey
    03:45 Can computational models replace animal models?
    07:24 The risks of AI drug discovery
    12:48 Should scientists disclose dangerous discoveries?
    19:40 How should scientists handle dual-use technologies?
    22:08 Should we open-source potentially dangerous discoveries?
    26:20 How do we control autonomous drug creation?
    31:36 Surprising chemical discoveries made by black-box AI systems
    36:56 How could the dangers of AI drug discovery be mitigated?

    • 39 min
    Anders Sandberg on the Value of the Future

    Anders Sandberg joins the podcast to discuss various philosophical questions about the value of the future.

    Learn more about Anders' work: https://www.fhi.ox.ac.uk

    Timestamps:
    00:00 Introduction
    00:54 Humanity as an immature teenager
    04:24 How should we respond to our values changing over time?
    18:53 How quickly should we change our values?
    24:58 Are there limits to what future morality could become?
    29:45 Could the universe contain infinite value?
    36:00 How do we balance weird philosophy with common sense?
    41:36 Lightning round: mind uploading, aliens, interstellar travel, cryonics

    • 49 min

Customer Reviews

4.8 out of 5
87 Ratings

457/26777633

Science-smart interviewer asks very good questions!

Great, in depth interviews.

VV7425795

Fantastic contribution to mankind! Thanks!

🤗👍🏻

malfoxley

Great show!

Lucas, host of the Future of Life podcast, highlights all aspects of tech and more in this can’t miss podcast! The host and expert guests offer insightful advice and information that is helpful to anyone that listens!
