Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 4.8 • 82 Ratings
    • 138 episodes

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.

The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions.

FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

    Rohin Shah on the State of AGI Safety Research in 2021

    Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss AI value alignment, how an AI researcher might decide whether to work on AI safety, and why we don't know that AI systems won't lead to existential risk.

    Topics discussed in this episode include:

    - Inner Alignment versus Outer Alignment
    - Foundation Models
    - Structural AI Risks
    - Unipolar versus Multipolar Scenarios
    - The Most Important Thing That Impacts the Future of Life

    You can find the page for the podcast here:
    https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021

    Watch the video version of this episode here:
    https://youtu.be/_5xkh-Rh6Ec

    Follow the Alignment Newsletter here: https://rohinshah.com/alignment-newsletter/

    Have any feedback about the podcast? You can share your thoughts here:
    https://www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    00:02:22 What is AI alignment?
    00:06:00 How has your perspective on this problem changed over the past year?
    00:06:28 Inner Alignment
    00:13:00 Ways that AI could actually lead to human extinction
    00:18:53 Inner Alignment and mesa-optimizers
    00:20:15 Outer Alignment
    00:23:12 The core problem of AI alignment
    00:24:54 Learning Systems versus Planning Systems
    00:28:10 AI and Existential Risk
    00:32:05 The probability of AI existential risk
    00:51:31 Core problems in AI alignment
    00:54:02 Large-scale language models
    00:54:46 How has AI alignment, as a field of research, changed in the last year?
    00:54:50 Foundation Models
    00:59:58 Why don't we know that AI systems won't totally kill us all?
    01:09:05 How much of the alignment and safety problems in AI will be solved by industry?
    01:14:44 Do you think about what beneficial futures look like?
    01:19:31 Moral Anti-Realism and AI
    01:27:25 Unipolar versus Multipolar Scenarios
    01:35:33 What is the safety team at DeepMind up to?
    01:35:41 What is the most important thing that impacts the future of life?

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 43 min
    Future of Life Institute's $25M Grants Program for Existential Risk Reduction

    Future of Life Institute President Max Tegmark and our grants team, Andrea Berman and Daniel Filan, join us to announce a $25M multi-year AI Existential Safety Grants Program.

    Topics discussed in this episode include:

    - The reason Future of Life Institute is offering AI Existential Safety Grants
    - Max speaks about how receiving a grant changed his career early on
    - Daniel and Andrea provide details on the fellowships and future grant priorities

    Check out our grants programs here: https://grants.futureoflife.org/

    Join our AI Existential Safety Community:
    https://futureoflife.org/team/ai-exis...

    Have any feedback about the podcast? You can share your thoughts here:
    https://www.surveymonkey.com/r/DRBFZCT


    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 24 min
    Filippa Lentzos on Global Catastrophic Biological Risks

    Dr. Filippa Lentzos, Senior Lecturer in Science and International Security at King's College London, joins us to discuss the most pressing issues in biosecurity, big data in biology and life sciences, and governance in biological risk.

    Topics discussed in this episode include:

    - The most pressing issue in biosecurity
    - Stories from when biosafety labs failed to contain dangerous pathogens
    - The lethality of pathogens being worked on at biolaboratories
    - Lessons from COVID-19

    You can find the page for the podcast here:
    https://futureoflife.org/2021/10/01/filippa-lentzos-on-emerging-threats-in-biosecurity/

    Watch the video version of this episode here:
    https://www.youtube.com/watch?v=I6M34oQ4v4w

    Have any feedback about the podcast? You can share your thoughts here:
    https://www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    2:35 What are the least understood aspects of biological risk?
    8:32 Which groups are interested in biotechnologies that could be used for harm?
    16:30 Why countries may pursue the development of dangerous pathogens
    18:45 Dr. Lentzos' strands of research
    25:41 Stories from when biosafety labs failed to contain dangerous pathogens
    28:34 The most pressing issue in biosecurity
    31:06 What is gain of function research? What are the risks?
    34:57 Examples of gain of function research
    36:14 What are the benefits of gain of function research?
    37:54 The lethality of pathogens being worked on at biolaboratories
    40:25 Benefits and risks of big data in biology and the life sciences
    45:03 Creating a bioweather map or using big data for biodefense
    48:35 Lessons from COVID-19
    53:46 How does governance fit into biological risk?
    55:59 Key takeaways from Dr. Lentzos

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 58 min
    Susan Solomon and Stephen Andersen on Saving the Ozone Layer

    Susan Solomon, internationally recognized atmospheric chemist, and Stephen Andersen, leader of the Montreal Protocol, join us to tell the story of the ozone hole and their roles in helping to bring us back from the brink of disaster.

    Topics discussed in this episode include:

    - The industrial and commercial uses of chlorofluorocarbons (CFCs)
    - How we discovered the atmospheric effects of CFCs
    - The Montreal Protocol and its significance
    - Dr. Solomon's, Dr. Farman's, and Dr. Andersen's crucial roles in helping to solve the ozone hole crisis
    - Lessons we can take away for climate change and other global catastrophic risks

    You can find the page for this podcast here: https://futureoflife.org/2021/09/16/susan-solomon-and-stephen-andersen-on-saving-the-ozone-layer/

    Check out the video version of the episode here: https://www.youtube.com/watch?v=7hwh-uDo-6A&ab_channel=FutureofLifeInstitute

    Check out the story of the ozone hole crisis here: https://undsci.berkeley.edu/article/0_0_0/ozone_depletion_01

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    3:13 What are CFCs and what was their role in society?
    7:09 James Lovelock discovering an abundance of CFCs in the lower atmosphere
    12:43 F. Sherwood Rowland's and Mario Molina's research on the atmospheric science of CFCs
    19:52 How a single chlorine atom from a CFC molecule can destroy a large amount of ozone
    23:12 Moving from models of ozone depletion to empirical evidence of the ozone depleting mechanism
    24:41 Joseph Farman and discovering the ozone hole
    30:36 Susan Solomon's discovery that the surfaces of high-altitude Antarctic clouds are crucial for ozone depletion
    47:22 The Montreal Protocol
    1:00:00 Who were the key stakeholders in the Montreal Protocol?
    1:03:46 Stephen Andersen's efforts to phase out CFCs as the co-chair of the Montreal Protocol Technology and Economic Assessment Panel
    1:13:28 The Montreal Protocol helping to prevent 11 billion metric tons of CO2 emissions per year
    1:18:30 Susan and Stephen's key takeaways from their experience with the ozone hole crisis
    1:24:24 What world did we avoid through our efforts to save the ozone layer?
    1:28:37 The lessons Stephen and Susan take away from their experience working to phase out CFCs from industry
    1:34:30 Is action on climate change practical?
    1:40:34 Does the Paris Agreement have something like the Montreal Protocol Technology and Economic Assessment Panel?
    1:43:23 Final words from Susan and Stephen

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 44 min
    James Manyika on Global Economic and Technological Trends

    James Manyika, Chairman and Director of the McKinsey Global Institute, joins us to discuss the rapidly evolving landscape of the modern global economy and the role of technology in it.

    Topics discussed in this episode include:

    - The modern social contract
    - Reskilling, wage stagnation, and inequality
    - Technology-induced unemployment
    - The structure of the global economy
    - The geographic concentration of economic growth

    You can find the page for this podcast here: https://futureoflife.org/2021/09/06/james-manyika-on-global-economic-and-technological-trends/

    Check out the video version of the episode here: https://youtu.be/zLXmFiwT0-M

    Check out the McKinsey Global Institute here: https://www.mckinsey.com/mgi/overview

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    2:14 What are the most important problems in the world today?
    4:30 The issue of inequality
    8:17 How the structure of the global economy is changing
    10:21 How does the role of incentives fit into global issues?
    13:00 How the social contract has evolved in the 21st century
    18:20 A billion people lifted out of poverty
    19:04 What drives economic growth?
    29:28 How does AI automation affect the virtuous and vicious versions of productivity growth?
    38:06 Automation and reflecting on jobs lost, jobs gained, and jobs changed
    43:15 AGI and automation
    48:00 How do we address the issue of technology-induced unemployment?
    58:05 Developing countries and economies
    1:01:29 The central forces in the global economy
    1:07:36 The global economic center of gravity
    1:09:42 Understanding the core impacts of AI
    1:12:32 How do global catastrophic and existential risks fit into the modern global economy?
    1:17:52 The economics of climate change and AI risk
    1:20:50 Will we use AI technology like we've used fossil fuel technology?
    1:24:34 The risks of AI contributing to inequality and bias
    1:31:45 How do we integrate developing countries' voices in the development and deployment of AI systems?
    1:33:42 James' core takeaway
    1:37:19 Where to follow and learn more about James' work

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 38 min
    Michael Klare on the Pentagon's View of Climate Change and the Risks of State Collapse

    Michael Klare, Five College Professor of Peace & World Security Studies, joins us to discuss the Pentagon's view of climate change, why it's distinctive, and how this all ultimately relates to the risks of great powers conflict and state collapse.

    Topics discussed in this episode include:

    - How the US military views and takes action on climate change
    - Examples of existing climate-related difficulties and what they tell us about the future
    - Threat multiplication from climate change
    - The risks of nuclear war and major conflict catalyzed by climate change
    - The melting of the Arctic and the geopolitical situation that arises from it
    - Messaging on climate change

    You can find the page for this podcast here: https://futureoflife.org/2021/07/30/michael-klare-on-the-pentagons-view-of-climate-change-and-the-risks-of-state-collapse/

    Check out the video version of the episode here: https://www.youtube.com/watch?v=bn57jxEoW24

    Check out Michael's website here: http://michaelklare.com/

    Apply for the Podcast Producer position here: futureoflife.org/job-postings/

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    2:28 How does the Pentagon view climate change and why are they interested in it?
    5:30 What are the Pentagon's main priorities besides climate change?
    8:31 What are the objectives of career officers at the Pentagon and how do they see climate change?
    10:32 The relationship between Pentagon career officers and the Trump administration on climate change
    15:47 How is the Pentagon's view of climate change unique and important?
    19:54 How climate change exacerbates existing difficulties and the issue of threat multiplication
    24:25 How will climate change increase the tensions between the nuclear weapons states of India, Pakistan, and China?
    26:32 What happened to Tacloban City and how is it relevant?
    32:27 Why does the US military provide global humanitarian assistance?
    34:39 How has climate change impacted the conditions in Nigeria and how does this inform the Pentagon's perspective?
    39:40 What is the ladder of escalation for climate change related issues?
    46:54 What is "all hell breaking loose"?
    48:26 What is the geopolitical situation arising from the melting of the Arctic?
    52:48 Why does the Bering Strait matter for the Arctic?
    54:23 The Arctic as a main source of conflict for the great powers in the coming years
    58:01 Are there ongoing proposals for resolving territorial disputes in the Arctic?
    1:01:40 Nuclear weapons risk and climate change
    1:03:32 How does the Pentagon intend to address climate change?
    1:06:20 Hardening US military bases and going green
    1:11:50 How climate change will affect critical infrastructure
    1:15:47 How do lethal autonomous weapons fit into the risks of escalation in a world stressed by climate change?
    1:19:42 How does this all affect existential risk?
    1:24:39 Are there timelines for when climate change induced stresses will occur?
    1:27:03 Does tying existential risks to national security issues benefit awareness around existential risk?
    1:30:18 Does relating climate change to migration issues help with climate messaging?
    1:31:08 A summary of the Pentagon's interest, view, and action on climate change
    1:33:00 Final words from Michael
    1:34:33 Where to find more of Michael's work

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 35 min

Customer Reviews

4.8 out of 5
82 Ratings

malfoxley,

Great show!

Lucas, host of the Future of Life podcast, highlights all aspects of tech and more in this can’t miss podcast! The host and expert guests offer insightful advice and information that is helpful to anyone that listens!

JordanP153,

I love what I’ve heard

So far I’ve listened to the episode on nonviolent communication and the Sam Harris episode; both are excellent!

Peterpaul1925,

Amazing Podcast!

People need to know about this excellent podcast (and the Future of Life Institute) focusing on the most important issues facing the world. The topics are big, current, and supremely important; the guests are luminaries in their fields; and the interviewer, Lucas Perry, brings it all forth in such a compelling way. He is so well informed on a wide range of issues and makes the conversations stimulating and thought-provoking. After each episode I've listened to so far, I found myself telling other people about what was discussed; it's that valuable. After one episode, I started contributing to FLI. What a find. Thank you, FLI and Lucas.

Top Podcasts In Technology

Lex Fridman
Jason Calacanis
Jack Rhysider
NPR
Gimlet

You Might Also Like

The 80000 Hours team
Sean Carroll | Wondery
Michael Shermer
New York City Skeptics
Sam Harris
Quanta Magazine