Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 4.9 • 75 Ratings
    • 127 episodes

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

    Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

    Joscha Bach, cognitive scientist and AI researcher, and Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures.

     Topics discussed in this episode include:

    -Understanding the universe through digital physics
    -How human consciousness operates and is structured
    -The path to aligned AGI and bottlenecks to beneficial futures
    -Incentive structures and collective coordination

    You can find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/

    You can find FLI's three new policy-focused job postings here: futureoflife.org/job-postings/

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    3:17 What is truth and knowledge?
    11:39 What is subjectivity and objectivity?
    14:32 What is the universe ultimately?
    19:22 Is the universe a cellular automaton? Is the universe ultimately digital or analogue?
    24:05 Hilbert's hotel from the point of view of computation
    35:18 Seeing the world as a fractal
    38:48 Describing human consciousness
    51:10 Meaning, purpose, and harvesting negentropy
    55:08 The path to aligned AGI
    57:37 Bottlenecks to beneficial futures and existential security
    1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?
    1:19:39 Non-duality and collective coordination
    1:22:53 What difficulties are there for an idealist worldview that involves computation?
    1:27:20 Which features of mind and consciousness are necessarily coupled and which aren't?
    1:36:40 Joscha's final thoughts on AGI

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 38 min
    Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI

    Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.

     Topics discussed in this episode include:

    -Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI
    -The relationship between AI safety, control, and alignment
    -Virtual worlds as a proposal for solving multi-multi alignment
    -AI security

    You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/

    You can find FLI's three new policy-focused job postings here: https://futureoflife.org/job-postings/

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro 
    2:35 Roman’s primary research interests 
    4:09 How theoretical proofs help AI safety research 
    6:23 How impossibility results constrain computer science systems
    10:18 The inability to tell if arbitrary code is friendly or unfriendly 
    12:06 Impossibility results clarify what we can do 
    14:19 Roman’s results on unexplainability and incomprehensibility 
    22:34 Focusing on comprehensibility 
    26:17 Roman’s results on uncontrollability 
    28:33 Alignment as a subset of safety and control 
    30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment 
    33:40 What does it mean to solve AI safety? 
    34:19 What do the impossibility results really mean? 
    37:07 Virtual worlds and AI alignment 
    49:55 AI security and malevolent agents 
    53:00 Air gapping, boxing, and other security methods 
    58:43 Some examples of historical failures of AI systems and what we can learn from them 
    1:01:20 Clarifying impossibility results
    1:06:55 Examples of systems failing and what these demonstrate about AI 
    1:08:20 Are oracles a valid approach to AI safety? 
    1:10:30 Roman’s final thoughts

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 12 min
    Stuart Russell and Zachary Kallenborn on Drone Swarms and the Riskiest Aspects of Autonomous Weapons

    Stuart Russell, Professor of Computer Science at UC Berkeley, and Zachary Kallenborn, WMD and drone swarms expert, join us to discuss the highest risk and most destabilizing aspects of lethal autonomous weapons.

     Topics discussed in this episode include:

    -The current state of the deployment and development of lethal autonomous weapons and swarm technologies
    -Drone swarms as a potential weapon of mass destruction
    -The risks of escalation, unpredictability, and proliferation with regards to autonomous weapons
    -The difficulty of attribution, verification, and accountability with autonomous weapons
    -Autonomous weapons governance as norm setting for global AI issues

    You can find the page for this podcast here: https://futureoflife.org/2021/02/25/stuart-russell-and-zachary-kallenborn-on-drone-swarms-and-the-riskiest-aspects-of-lethal-autonomous-weapons/

    You can check out the new lethal autonomous weapons website here: https://autonomousweapons.org/

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    2:23 Emilia Javorsky on lethal autonomous weapons
    7:27 What is a lethal autonomous weapon?
    11:33 Autonomous weapons that exist today
    16:57 The concerns of collateral damage, accidental escalation, scalability, control, and error risk
    26:57 The proliferation risk of autonomous weapons
    32:30 To what extent are global superpowers pursuing these weapons? What is the state of industry's pursuit of the research and manufacturing of this technology?
    42:13 A possible proposal for a selective ban on small anti-personnel autonomous weapons
    47:20 Lethal autonomous weapons as a potential weapon of mass destruction
    53:49 The unpredictability of autonomous weapons, especially when swarms are interacting with other swarms
    58:09 The risk of autonomous weapons escalating conflicts
    01:10:50 The risk of drone swarms proliferating
    01:20:16 The risk of assassination
    01:23:25 The difficulty of attribution and accountability
    01:26:05 The governance of autonomous weapons being relevant to the global governance of AI
    01:30:11 The importance of verification for responsibility, accountability, and regulation
    01:35:50 Concerns about the beginning of an arms race and the need for regulation
    01:38:46 Wrapping up
    01:39:23 Outro

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 39 min
    John Prendergast on Non-dual Awareness and Wisdom for the 21st Century

    John Prendergast, former adjunct professor of psychology at the California Institute of Integral Studies, joins Lucas Perry for a discussion about the experience and effects of ego-identification, how to shift to new levels of identity, the nature of non-dual awareness, and the potential relationship between waking up and collective human problems. This is not an FLI Podcast, but a special release where Lucas shares a direction he feels has an important relationship with AI alignment and existential risk issues.

    Topics discussed in this episode include:

    -The experience of egocentricity and ego-identification
    -Waking up into heart awareness
    -The movement towards and qualities of non-dual consciousness
    -The ways in which the condition of our minds collectively affect the world
    -How waking up may be relevant to the creation of AGI

    You can find the page for this podcast here: https://futureoflife.org/2021/02/09/john-prendergast-on-non-dual-awareness-and-wisdom-for-the-21st-century/

    Have any feedback about the podcast? You can share your thoughts here: https://www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    7:10 The modern human condition
    9:29 What egocentricity and ego-identification are
    15:38 Moving beyond the experience of self
    17:38 The origins and structure of self
    20:25 A pointing out instruction for noticing ego-identification and waking up out of it
    24:34 A pointing out instruction for abiding in heart-mind or heart awareness
    28:53 The qualities of and moving into heart awareness and pure awareness
    33:48 An explanation of non-dual awareness
    40:50 Exploring the relationship between awareness, belief, and action
    46:25 Growing up and improving the egoic structure
    48:29 Waking up as recognizing true nature
    51:04 Exploring awareness as primitive and primary
    53:56 John's dream of Sri Nisargadatta Maharaj
    57:57 The use and value of conceptual thought and the mind
    1:00:57 The epistemics of heart-mind and the conceptual mind as we shift levels of identity
    1:17:46 A pointing out instruction for inquiring into core beliefs
    1:27:28 The universal heart, qualities of awakening, and the ethical implications of such shifts
    1:31:38 Wisdom, waking up, and growing up for the transgenerational issues of the 21st century
    1:38:44 Waking up and its applicability to the creation of AGI
    1:43:25 Where to find, follow, and reach out to John
    1:45:56 Outro

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 46 min
    Beatrice Fihn on the Total Elimination of Nuclear Weapons

    Beatrice Fihn, executive director of the International Campaign to Abolish Nuclear Weapons (ICAN) and Nobel Peace Prize recipient, joins us to discuss the current risks of nuclear war, policies that can reduce the risks of nuclear conflict, and how to move towards a nuclear weapons free world.

    Topics discussed in this episode include:

    -The current nuclear weapons geopolitical situation
    -The risks and mechanics of accidental and intentional nuclear war
    -Policy proposals for reducing the risks of nuclear war
    -Deterrence theory
    -The Treaty on the Prohibition of Nuclear Weapons
    -Working towards the total elimination of nuclear weapons

    You can find the page for this podcast here: https://futureoflife.org/2021/01/21/beatrice-fihn-on-the-total-elimination-of-nuclear-weapons/

    Timestamps: 

    0:00 Intro
    4:28 Overview of the current nuclear weapons situation
    6:47 The 9 nuclear weapons states, and accidental and intentional nuclear war
    9:27 Accidental nuclear war and human systems
    12:08 The risks of nuclear war in 2021 and nuclear stability
    17:49 Toxic personalities and the human component of nuclear weapons
    23:23 Policy proposals for reducing the risk of nuclear war
    23:55 New START Treaty
    25:42 What does it mean to maintain credible deterrence?
    26:45 ICAN and working on the Treaty on the Prohibition of Nuclear Weapons
    28:00 Deterrence theoretic arguments for nuclear weapons
    32:36 The reduction of nuclear weapons, no first use, removing ground-based missile systems, removing hair-trigger alert, removing presidential authority to use nuclear weapons
    39:13 Arguments for and against nuclear risk reduction policy proposals
    46:02 Moving all of the United States' nuclear weapons to bombers and nuclear submarines
    48:27 Working towards, and the theory of, the total elimination of nuclear weapons
    1:11:40 The value of the Treaty on the Prohibition of Nuclear Weapons
    1:14:26 Elevating activism around nuclear weapons and messaging more skillfully
    1:15:40 What the public needs to understand about nuclear weapons
    1:16:35 World leaders' views of the treaty
    1:17:15 How to get involved

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 17 min
    Max Tegmark and the FLI Team on 2020 and Existential Risk Reduction in the New Year

    Max Tegmark and members of the FLI core team come together to discuss favorite projects from 2020, what we've learned from the past year, and what we think is needed for existential risk reduction in 2021.

    Topics discussed in this episode include:

    -FLI's perspectives on 2020 and hopes for 2021
    -What our favorite projects from 2020 were
    -The biggest lessons we've learned from 2020
    -What we see as crucial and needed in 2021 to ensure and make improvements towards existential safety

    You can find the page for this podcast here: https://futureoflife.org/2021/01/08/max-tegmark-and-the-fli-team-on-2020-and-existential-risk-reduction-in-the-new-year/

    Timestamps: 

    0:00 Intro
    00:52 First question: What was your favorite project from 2020?
    1:03 Max Tegmark on the Future of Life Award
    4:15 Anthony Aguirre on AI Loyalty
    9:18 David Nicholson on the Future of Life Award
    12:23 Emilia Javorsky on being a co-champion for the UN Secretary-General's effort on digital cooperation
    14:03 Jared Brown on developing comments on the European Union's White Paper on AI through community collaboration
    16:40 Tucker Davey on editing the biography of Victor Zhdanov
    19:49 Lucas Perry on the podcast and Pindex video
    23:17 Second question: What lessons do you take away from 2020?
    23:26 Max Tegmark on human fragility and vulnerability
    25:14 Max Tegmark on learning from history
    26:47 Max Tegmark on the growing threats of AI
    29:45 Anthony Aguirre on the inability of present-day institutions to deal with large unexpected problems
    33:00 David Nicholson on the need for self-reflection on the use and development of technology
    38:05 Emilia Javorsky on the global community coming to awareness about tail risks
    39:48 Jared Brown on our vulnerability to low probability, high impact events and the importance of adaptability and policy engagement
    41:43 Tucker Davey on taking existential risks more seriously and ethics-washing
    43:57 Lucas Perry on the fragility of human systems
    45:40 Third question: What is needed in 2021 to make progress on existential risk mitigation?
    45:50 Max Tegmark on holding Big Tech accountable, repairing geopolitics, and fighting the myth of the technological zero-sum game
    49:58 Anthony Aguirre on the importance of spreading understanding of expected value reasoning and fixing the information crisis
    53:41 David Nicholson on the need to reflect on our values and relationship with technology
    54:35 Emilia Javorsky on the importance of returning to multilateralism and global dialogue
    56:00 Jared Brown on the need for robust government engagement
    57:30 Lucas Perry on the need for creating institutions for existential risk mitigation and global cooperation
    1:00:10 Outro

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr

Customer Reviews

4.9 out of 5
75 Ratings

JordanP153,

I love what I’ve heard

So far I’ve listened to the episode on nonviolent communication and the Sam Harris episode; both are excellent!

Peterpaul1925,

Amazing Podcast!

People need to know about this excellent podcast (and the Future of Life Institute) focusing on the most important issues facing the world. The topics are big, current, and supremely important; the guests are luminaries in their fields; and the interviewer, Lucas Perry, brings it all forth in such a compelling way. He is so well informed on a wide range of issues and makes the conversations stimulating and thought-provoking. After each episode I’ve listened to so far, I found myself telling other people about what was discussed; it's that valuable. After one episode, I started contributing to FLI. What a find. Thank you FLI and Lucas.

jingalli89,

Great podcast on initiatives that are critical for our future.

Lucas/FLI do an excellent job of conducting in-depth interviews with incredible people whose work stands to radically impact humanity’s future. It’s a badly missing and needed resource in today’s world, is always high-quality, and I'm able to learn something new/unique/valuable each time. Great job to Lucas and team!
