The Future of Life
Future of Life Institute
112 episodes

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

    Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

    The AI alignment literature is clear about what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will exploit the dimensions of freedom the misspecified objective leaves open and push them to extreme values. This may allow for better optimization on the goals the objective does specify, but it can have catastrophic consequences for the human preferences and values the system fails to consider. Can misalignment also arise between the model being trained and the objective function used to train it? The answer appears to be yes. Evan Hubinger of the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, and to evaluate three proposals for building safe advanced AI.
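
    To make that failure mode concrete, here is a minimal, hypothetical sketch (not taken from the episode): a toy optimizer climbs a proxy objective that omits part of what we actually care about, and the dimension the proxy leaves unconstrained gets pushed to an extreme value. The "helpfulness"/"manipulation" framing and both objective functions are illustrative assumptions, not anything Evan or FLI specify.

    ```python
    # Toy sketch (illustrative only): gradient ascent on a proxy objective that
    # conflates the thing we want (x0, "helpfulness") with an unmeasured side
    # effect (x1, "manipulation"). Because the proxy can't tell them apart, the
    # optimizer pushes the unconstrained dimension to an extreme: the proxy
    # score keeps improving while the true score collapses.

    def proxy_objective(x0, x1):
        # Hypothetical proxy, e.g. "engagement": rises with both dimensions.
        return x0 + x1

    def true_objective(x0, x1):
        # Hypothetical true preferences: reward x0, heavily penalize extreme x1.
        return x0 - x1 ** 2

    x0, x1 = 0.0, 0.0
    learning_rate = 0.1
    for _ in range(200):
        # Gradient of the linear proxy is (1, 1); the true objective never
        # enters the update.
        x0 += learning_rate * 1.0
        x1 += learning_rate * 1.0

    print(f"proxy score: {proxy_objective(x0, x1):.1f}")  # 40.0  -- looks great
    print(f"true score:  {true_objective(x0, x1):.1f}")   # -380.0 -- catastrophic
    ```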

     Topics discussed in this episode include:

    -Inner and outer alignment
    -How and why inner alignment can fail
    -Training competitiveness and performance competitiveness
    -Evaluating imitative amplification, AI safety via debate, and microscope AI

    You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/

    Timestamps: 

    0:00 Intro 
    2:07 How Evan got into AI alignment research
    4:42 What is AI alignment?
    7:30 How Evan approaches AI alignment
    13:05 What are inner alignment and outer alignment?
    24:23 Gradient descent
    36:30 Testing for inner alignment
    38:38 Wrapping up on outer alignment
    44:24 Why is inner alignment a priority?
    45:30 How inner alignment fails
    01:11:12 Training competitiveness and performance competitiveness
    01:16:17 Evaluating proposals for building safe advanced AI via inner and outer alignment, as well as training and performance competitiveness
    01:17:30 Imitative amplification
    01:23:00 AI safety via debate
    01:26:32 Microscope AI
    01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment
    01:34:45 Where to follow Evan and find more of his work

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 37 min
    Barker - Hedonic Recalibration (Mix)

    This is a mix by Barker, a Berlin-based music producer, featured on our last podcast: Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix). We hope that you'll find inspiration and well-being in this soundscape.

    You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/

    Tracklist:

    Delta Rain Dance - 1
    John Beltran - A Different Dream
    Rrose - Horizon
    Alexandroid - lvpt3
    Datassette - Drizzle Fort
    Conrad Sprenger - Opening
    JakoJako - Wavetable#1
    Barker & David Goldberg - #3
    Barker & Baumecker - Organik (Intro)
    Anthony Linell - Fractal Vision
    Ametsub - Skydroppin’
    Ladyfish\Mewark - Comfortable
    JakoJako & Barker - [unreleased]

    Where to follow Sam Barker:

    Soundcloud: @voltek
    Twitter: twitter.com/samvoltek
    Instagram: www.instagram.com/samvoltek/
    Website: www.voltek-labs.net/
    Bandcamp: sambarker.bandcamp.com/

    Where to follow Sam's label, Ostgut Ton:

    Soundcloud: @ostgutton-official
    Facebook: www.facebook.com/Ostgut.Ton.OFFICIAL/
    Twitter: twitter.com/ostgutton
    Instagram: www.instagram.com/ostgut_ton/
    Bandcamp: ostgut.bandcamp.com/

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 43 min
    Sam Barker and David Pearce on Art, Paradise Engineering, and Existential Hope (With Guest Mix)

    Sam Barker, a Berlin-based music producer, and David Pearce, philosopher and author of The Hedonistic Imperative, join us on a special episode of the FLI Podcast to spread some existential hope. Sam creates euphoric soundscapes inspired by David's writings, exemplified most clearly in his latest album, aptly named "Utility." Sam's artistry, motivated by blissful visions of the future, and David's philosophical and technological writings on the biological domestication of heaven make for a natural fusion of artistic, moral, and intellectual excellence. This episode explores what significance Sam found in David's work, how it informed his music production, and Sam and David's optimistic visions of the future; it also features a guest mix by Sam and plenty of musical content.

    Topics discussed in this episode include:

    -The relationship between Sam's music and David's writing
    -Existential hope
    -Ideas from the Hedonistic Imperative
    -Sam's albums
    -The future of art and music

    You can find the page for this podcast here: https://futureoflife.org/2020/06/24/sam-barker-and-david-pearce-on-art-paradise-engineering-and-existential-hope-featuring-a-guest-mix/

    You can find the mix with no interview portion of the podcast here: https://soundcloud.com/futureoflife/barker-hedonic-recalibration-mix

    Where to follow Sam Barker:

    Soundcloud: https://soundcloud.com/voltek
    Twitter: https://twitter.com/samvoltek
    Instagram: https://www.instagram.com/samvoltek/
    Website: https://www.voltek-labs.net/
    Bandcamp: https://sambarker.bandcamp.com/

    Where to follow Sam's label, Ostgut Ton: 

    Soundcloud: https://soundcloud.com/ostgutton-official
    Facebook: https://www.facebook.com/Ostgut.Ton.OFFICIAL/
    Twitter: https://twitter.com/ostgutton
    Instagram: https://www.instagram.com/ostgut_ton/
    Bandcamp: https://ostgut.bandcamp.com/

    Timestamps: 

    0:00 Intro
    5:40 The inspiration around Sam's music
    17:38 Barker - Maximum Utility
    20:03 David and Sam on their work
    23:45 Do any of the tracks evoke specific visions or hopes?
    24:40 Barker - Die-Hards Of The Darwinian Order
    28:15 Barker - Paradise Engineering
    31:20 Barker - Hedonic Treadmill
    33:05 The future and evolution of art
    54:03 David on how good the future can be
    58:36 Guest mix by Barker

    Tracklist:

    Delta Rain Dance – 1
    John Beltran – A Different Dream
    Rrose – Horizon
    Alexandroid – lvpt3
    Datassette – Drizzle Fort
    Conrad Sprenger – Opening
    JakoJako – Wavetable#1
    Barker & David Goldberg – #3
    Barker & Baumecker – Organik (Intro)
    Anthony Linell – Fractal Vision
    Ametsub – Skydroppin’
    Ladyfish\Mewark – Comfortable
    JakoJako & Barker – [unreleased]

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 42 min
    Steven Pinker and Stuart Russell on the Foundations, Benefits, and Possible Existential Threat of AI

    Over the past several centuries, the human condition has been profoundly changed by the agricultural and industrial revolutions. With the creation and continued development of AI, we stand in the midst of an ongoing intelligence revolution that may prove far more transformative than the previous two. How did we get here, and what were the intellectual foundations necessary for the creation of AI? What benefits might we realize from aligned AI systems, and what are the risks and potential pitfalls along the way? In the longer term, will superintelligent AI systems pose an existential risk to humanity? Steven Pinker, bestselling author and Professor of Psychology at Harvard, and Stuart Russell, UC Berkeley Professor of Computer Science, join us on this episode of the AI Alignment Podcast to discuss these questions and more.

     Topics discussed in this episode include:

    -The historical and intellectual foundations of AI 
    -How AI systems achieve or do not achieve intelligence in the same way as the human mind
    -The rise of AI and what it signifies 
    -The benefits and risks of AI in both the short and long term 
    -Whether superintelligent AI will pose an existential risk to humanity

    You can find the page for this podcast here: https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/

    You can take a survey about the podcast here: https://www.surveymonkey.com/r/W8YLYD3

    You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

    Timestamps: 

    0:00 Intro 
    4:30 The historical and intellectual foundations of AI 
    11:11 Moving beyond dualism 
    13:16 Regarding the objectives of an agent as fixed 
    17:20 The distinction between artificial intelligence and deep learning 
    22:00 How AI systems achieve or do not achieve intelligence in the same way as the human mind
    49:46 What changes to human society does the rise of AI signal? 
    54:57 What are the benefits and risks of AI? 
    01:09:38 Do superintelligent AI systems pose an existential threat to humanity? 
    01:51:30 Where to find and follow Steve and Stuart

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 52 min
    Sam Harris on Global Priorities, Existential Risk, and What Matters Most

    Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.

    Topics discussed in this episode include:

    -The problem of communication 
    -Global priorities 
    -Existential risk 
    -The suffering of both wild animals and factory-farmed animals 
    -Global poverty 
    -Artificial general intelligence risk and AI alignment 
    -Ethics
    -Sam’s book, The Moral Landscape

    You can find the page for this podcast here: https://futureoflife.org/2020/06/01/on-global-priorities-existential-risk-and-what-matters-most-with-sam-harris/

    You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3

    You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

    Timestamps: 

    0:00 Intro
    3:52 What are the most important problems in the world?
    13:14 Global priorities: existential risk
    20:15 Why global catastrophic risks are more likely than existential risks
    25:09 Longtermist philosophy
    31:36 Making existential and global catastrophic risk more emotionally salient
    34:41 How analyzing the self makes longtermism more attractive
    40:28 Global priorities & effective altruism: animal suffering and global poverty
    56:03 Is machine suffering the next global moral catastrophe?
    59:36 AI alignment and artificial general intelligence/superintelligence risk
    01:11:25 Expanding our moral circle of compassion
    01:13:00 The Moral Landscape, consciousness, and moral realism
    01:30:14 Can bliss and wellbeing be mathematically defined?
    01:31:03 Where to follow Sam and concluding thoughts

    Photo by Christopher Michel: https://www.flickr.com/photos/cmichel67/

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 32 min
    FLI Podcast: On the Future of Computation, Synthetic Biology, and Life with George Church

    Progress in synthetic biology and genetic engineering promises to bring advances in the human health sciences by curing disease, augmenting human capabilities, and even reversing aging. At the same time, such technology could be used to unleash novel diseases and biological agents which could pose global catastrophic and existential risks to life on Earth. George Church, a titan of synthetic biology, joins us on this episode of the FLI Podcast to discuss the benefits and risks of our growing knowledge of synthetic biology, its role in the future of life, and what we can do to make sure it remains beneficial. Will our wisdom keep pace with our expanding capabilities?

    Topics discussed in this episode include:

    -Existential risk
    -Computational substrates and AGI
    -Genetics and aging
    -Risks of synthetic biology
    -Obstacles to space colonization
    -Great Filters, consciousness, and eliminating suffering

    You can find the page for this podcast here: https://futureoflife.org/2020/05/15/on-the-future-of-computation-synthetic-biology-and-life-with-george-church/

    You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3

    You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

    Timestamps: 

    0:00 Intro
    3:58 What are the most important issues in the world?
    12:20 Collective intelligence, AI, and the evolution of computational systems
    33:06 Where we are with genetics
    38:20 Timeline on progress for anti-aging technology
    39:29 Synthetic biology risk
    46:19 George's thoughts on COVID-19
    49:44 Obstacles to overcome for space colonization
    56:36 Possibilities for "Great Filters"
    59:57 Genetic engineering for combating climate change
    01:02:00 George's thoughts on the topic of "consciousness"
    01:08:40 Using genetic engineering to phase out voluntary suffering
    01:12:17 Where to find and follow George

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 13 min
