108 episodes

The Future of Life
Future of Life Institute

    • Technology
    • 4.9, 52 Ratings

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

    Sam Harris on Global Priorities, Existential Risk, and What Matters Most

    Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.

    Topics discussed in this episode include:

    -The problem of communication 
    -Global priorities 
    -Existential risk 
    -Animal suffering in both wild and factory-farmed animals 
    -Global poverty 
    -Artificial general intelligence risk and AI alignment 
    -Ethics
    -Sam’s book, The Moral Landscape

    You can find the page for this podcast here: https://futureoflife.org/2020/06/01/on-global-priorities-existential-risk-and-what-matters-most-with-sam-harris/

    You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3

    You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

    Timestamps: 

    0:00 Intro
    3:52 What are the most important problems in the world?
    13:14 Global priorities: existential risk
    20:15 Why global catastrophic risks are more likely than existential risks
    25:09 Longtermist philosophy
    31:36 Making existential and global catastrophic risk more emotionally salient
    34:41 How analyzing the self makes longtermism more attractive
    40:28 Global priorities & effective altruism: animal suffering and global poverty
    56:03 Is machine suffering the next global moral catastrophe?
    59:36 AI alignment and artificial general intelligence/superintelligence risk
    01:11:25 Expanding our moral circle of compassion
    01:13:00 The Moral Landscape, consciousness, and moral realism
    01:30:14 Can bliss and wellbeing be mathematically defined?
    01:31:03 Where to follow Sam and concluding thoughts

    Photo by Christopher Michel: https://www.flickr.com/photos/cmichel67/

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 32 min
    FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

    Progress in synthetic biology and genetic engineering promises to bring advancements in human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents that could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities?

    Topics discussed in this episode include:

    -Existential risk
    -Computational substrates and AGI
    -Genetics and aging
    -Risks of synthetic biology
    -Obstacles to space colonization
    -Great Filters, consciousness, and eliminating suffering

    You can find the page for this podcast here: https://futureoflife.org/2020/05/15/on-the-future-of-computation-synthetic-biology-and-life-with-george-church/

    You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3

    You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

    Timestamps: 

    0:00 Intro
    3:58 What are the most important issues in the world?
    12:20 Collective intelligence, AI, and the evolution of computational systems
    33:06 Where we are with genetics
    38:20 Timeline on progress for anti-aging technology
    39:29 Synthetic biology risk
    46:19 George's thoughts on COVID-19
    49:44 Obstacles to overcome for space colonization
    56:36 Possibilities for "Great Filters"
    59:57 Genetic engineering for combating climate change
    01:02:00 George's thoughts on the topic of "consciousness"
    01:08:40 Using genetic engineering to phase out involuntary suffering
    01:12:17 Where to find and follow George

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 13 min
    FLI Podcast: On Superforecasting with Robert de Neufville

    Essential to our assessment of risk and ability to plan for the future is our understanding of the probability of certain events occurring. If we can estimate the likelihood of risks, then we can evaluate their relative importance and apply our risk mitigation resources effectively. Predicting the future is, obviously, far from easy, and yet a community of "superforecasters" is attempting to do just that. These superforecasters don't merely try: they reliably outperform subject matter experts at making predictions, even in the experts' own fields. Robert de Neufville joins us on this episode of the FLI Podcast to explain what superforecasting is, how it's done, and the ways it can help us with crucial decision making.
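
    For a concrete sense of how forecasting accuracy is typically measured, here is a minimal Python sketch of the Brier score, the standard scoring rule used in forecasting tournaments; the example forecasts are invented for illustration and are not from the episode.

        # Minimal sketch: scoring probabilistic forecasts with the Brier score
        # (mean squared error between stated probabilities and 0/1 outcomes).
        # Lower is better: 0.0 is a perfect record, and always guessing 50%
        # on binary questions scores 0.25.

        def brier_score(forecasts, outcomes):
            pairs = list(zip(forecasts, outcomes))
            return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

        # Hypothetical forecaster: probabilities assigned to events that
        # did (1) or did not (0) occur.
        probabilities = [0.9, 0.2, 0.7, 0.1]
        outcomes = [1, 0, 1, 0]

        print(brier_score(probabilities, outcomes))  # 0.0375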

    Topics discussed in this episode include:

    -What superforecasting is and what the community looks like
    -How superforecasting is done and its potential use in decision making
    -The challenges of making predictions
    -Predictions about and lessons from COVID-19

    You can find the page for this podcast here: https://futureoflife.org/2020/04/30/on-superforecasting-with-robert-de-neufville/

    You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3

    You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

    Timestamps: 

    0:00 Intro
    5:00 What is superforecasting?
    7:22 Who are superforecasters and where did they come from?
    10:43 How is superforecasting done and what are the relevant skills?
    15:12 Developing a better understanding of probabilities
    18:42 How is it that superforecasters are better at making predictions than subject matter experts?
    21:43 COVID-19 and a failure to understand exponentials
    24:27 What organizations and platforms exist in the space of superforecasting?
    27:31 What's up for consideration in an actual forecast
    28:55 How are forecasts aggregated? Are they used?
    31:37 How accurate are superforecasters?
    34:34 How is superforecasting complementary to global catastrophic risk research and efforts?
    39:15 The kinds of superforecasting platforms that exist
    43:00 How accurate can we get around global catastrophic and existential risks?
    46:20 How to deal with extremely rare risk and how to evaluate your prediction after the fact
    53:33 Superforecasting, expected value calculations, and their use in decision making
    56:46 Failure to prepare for COVID-19 and whether superforecasting will be increasingly applied to critical decision making
    01:01:55 What can we do to improve the use of superforecasting?
    01:02:54 Forecasts about COVID-19
    01:11:43 How do you convince others of your ability as a superforecaster?
    01:13:55 Expanding the kinds of questions we do forecasting on
    01:15:49 How to utilize subject matter experts and superforecasters
    01:17:54 Where to find and follow Robert

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 20 min
    AIAP: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah

    Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation provided details on the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we've invited Rohin — along with fellow researcher Buck Shlegeris — back for a follow-up conversation. Today's episode focuses especially on the state of current research efforts for beneficial AI, as well as Buck's and Rohin's thoughts about the varying approaches and the difficulties we still face. This podcast thus serves as a non-exhaustive overview of how the field of AI alignment has updated and how thinking is progressing.

     Topics discussed in this episode include:

    -Rohin's and Buck's optimism and pessimism about different approaches to aligned AI
    -Traditional arguments for AI as an x-risk
    -Modeling agents as expected utility maximizers (see the sketch after this list)
    -Ambitious value learning and specification learning/narrow value learning
    -Agency and optimization
    -Robustness
    -Scaling to superhuman abilities
    -Universality
    -Impact regularization
    -Causal models, oracles, and decision theory
    -Discontinuous and continuous takeoff scenarios
    -Probability of AI-induced existential risk
    -Timelines for AGI
    -Information hazards
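
    For readers new to the expected-utility framing mentioned in the topics above, here is a minimal Python sketch; the actions, outcomes, and numbers are invented for illustration and are not from the episode.

        # Minimal sketch of an expected utility maximizer (toy numbers).
        # The agent chooses the action whose probability-weighted utility
        # over its possible outcomes is highest.

        actions = {
            # action: list of (probability_of_outcome, utility_of_outcome)
            "safe": [(1.0, 5.0)],
            "risky": [(0.5, 12.0), (0.5, -4.0)],
        }

        def expected_utility(outcomes):
            return sum(p * u for p, u in outcomes)

        best = max(actions, key=lambda a: expected_utility(actions[a]))
        print(best)  # "safe": EU 5.0 beats "risky" at 0.5*12 + 0.5*(-4) = 4.0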

    You can find the page for this podcast here: https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/

    Timestamps: 

    0:00 Intro
    3:48 Traditional arguments for AI as an existential risk
    5:40 What is AI alignment?
    7:30 Back to a basic analysis of AI as an existential risk
    18:25 Can we model agents in ways other than as expected utility maximizers?
    19:34 Is it skillful to try and model human preferences as a utility function?
    27:09 Suggestions for alternatives to modeling humans with utility functions
    40:30 Agency and optimization
    45:55 Embedded decision theory
    48:30 More on value learning
    49:58 What is robustness and why does it matter?
    01:13:00 Scaling to superhuman abilities
    01:26:13 Universality
    01:33:40 Impact regularization
    01:40:34 Causal models, oracles, and decision theory
    01:43:05 Forecasting as well as discontinuous and continuous takeoff scenarios
    01:53:18 What is the probability of AI-induced existential risk?
    02:00:53 Likelihood of continuous and discontinuous takeoff scenarios
    02:08:08 What would you both do if you had more power and resources?
    02:12:38 AI timelines
    02:14:00 Information hazards
    02:19:19 Where to follow Buck and Rohin and learn more

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 2 hr 21 min
    FLI Podcast: Lessons from COVID-19 with Emilia Javorsky and Anthony Aguirre

    The global spread of COVID-19 has put tremendous stress on humanity’s social, political, and economic systems. The breakdowns triggered by this sudden stress indicate areas where national and global systems are fragile, and where preventative and preparedness measures may be insufficient. The COVID-19 pandemic thus serves as an opportunity for reflecting on the strengths and weaknesses of human civilization and what we can do to help make humanity more resilient. The Future of Life Institute's Emilia Javorsky and Anthony Aguirre join us on this special episode of the FLI Podcast to explore the lessons that might be learned from COVID-19 and the perspective this gives us for global catastrophic and existential risk.

    Topics discussed in this episode include:

    -The importance of taking expected value calculations seriously (see the sketch after this list)
    -The need for making accurate predictions
    -The difficulty of taking probabilities seriously
    -Human psychological bias around estimating and acting on risk
    -The massive online prediction solicitation and aggregation engine, Metaculus
    -The risks and benefits of synthetic biology in the 21st Century
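
    As a toy illustration of the expected value point above, a low-probability, high-cost risk can carry a larger expected annual cost than a certain but modest one; all numbers below are invented for illustration and are not from the episode.

        # Toy expected value comparison (all numbers invented).
        # A 1%-per-year catastrophe costing $10T has a larger expected annual
        # cost than a certain $50B nuisance, so it can merit serious
        # mitigation spending even though it probably won't happen this year.

        risks = {
            "certain nuisance": {"probability": 1.00, "cost": 50e9},
            "rare catastrophe": {"probability": 0.01, "cost": 10e12},
        }

        for name, risk in risks.items():
            expected_cost = risk["probability"] * risk["cost"]
            print(f"{name}: expected annual cost ${expected_cost:,.0f}")
        # certain nuisance: expected annual cost $50,000,000,000
        # rare catastrophe: expected annual cost $100,000,000,000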

    You can find the page for this podcast here: https://futureoflife.org/2020/04/08/lessons-from-covid-19-with-emilia-javorsky-and-anthony-aguirre/

    Timestamps: 

    0:00 Intro 
    2:35 How has COVID-19 demonstrated weakness in human systems and risk preparedness?
    4:50 The importance of expected value calculations and considering risks over timescales 
    10:50 The importance of being able to make accurate predictions 
    14:15 The difficulty of trusting probabilities and acting on low-probability, high-cost risks
    21:22 Taking expected value calculations seriously 
    24:03 The lack of transparency, explanation, and context around how probabilities are estimated and shared
    28:00 Diffusion of responsibility and other human psychological weaknesses in thinking about risk
    38:19 What Metaculus is and its relevance to COVID-19 
    45:57 What is the accuracy of predictions on Metaculus and what has it said about COVID-19?
    50:31 Lessons for existential risk from COVID-19 
    58:42 The risk of synthetic-biology-enabled pandemics in the 21st century 
    01:17:35 The extent to which COVID-19 poses challenges to democratic institutions

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 26 min
    FLI Podcast: The Precipice: Existential Risk and the Future of Humanity with Toby Ord

    Toby Ord’s "The Precipice: Existential Risk and the Future of Humanity" has emerged as a new cornerstone text in the field of existential risk. The book presents the foundations and recent developments of this budding field from an accessible vantage point, providing an overview suitable for newcomers. For those already familiar with existential risk, Toby brings new historical and academic context to the problem, along with central arguments for why existential risk matters, novel quantitative analysis and risk estimations, deep dives into the risks themselves, and tangible steps for mitigation. "The Precipice" thus serves as both a tremendous introduction to the topic and a rich source of further learning for existential risk veterans. Toby joins us on this episode of the Future of Life Institute Podcast to discuss this definitive work on what may be the most important topic of our time.

    Topics discussed in this episode include:

    -An overview of Toby's new book
    -What it means to be standing at the precipice and how we got here
    -Useful arguments for why existential risk matters
    -The risks themselves and their likelihoods
    -What we can do to safeguard humanity's potential

    You can find the page for this podcast here: https://futureoflife.org/2020/03/31/he-precipice-existential-risk-and-the-future-of-humanity-with-toby-ord/

    Timestamps: 

    0:00 Intro 
    03:35 What the book is about 
    05:17 What does it mean for us to be standing at the precipice? 
    06:22 Historical cases of global catastrophic and existential risk in the real world
    10:38 The development of humanity’s wisdom and power over time  
    15:53 Reaching existential escape velocity and humanity’s continued evolution
    22:30 On effective altruism and writing the book for a general audience 
    25:53 Defining “existential risk” 
    28:19 What is compelling or important about humanity’s potential or future persons?
    32:43 Various and broadly appealing arguments for why existential risk matters
    50:46 Short overview of natural existential risks
    54:33 Anthropogenic risks
    58:35 The risks of engineered pandemics 
    01:02:43 Suggestions for working to mitigate x-risk and safeguard the potential of humanity 
    01:09:43 How and where to follow Toby and pick up his book

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 10 min

Customer Reviews

4.9 out of 5
52 Ratings

jingalli89,

Great podcast on initiatives that are critical for our future.

Lucas/FLI do an excellent job of conducting in-depth interviews with incredible people whose work stands to radically impact humanity’s future. It’s a badly needed resource in today’s world, it’s always high-quality, and I’m able to learn something new, unique, and valuable each time. Great job to Lucas and team!

Clarisse Gomez,

Awesome Podcast!!!

The host of The Future of Life podcast highlights all aspects of technology, innovation, and more in this can’t-miss podcast! The host and expert guests offer insightful advice and information that is helpful to anyone who listens!

V. Antimonov,

Really enjoyed Max’s conversation with Yuval

Keep up the great work!
