Sam Harris on Global Priorities, Existential Risk, and What Matters Most
Future of Life Institute Podcast • Technology • 1 hr 32 min

Human civilization increasingly has the potential both to improve the lives of everyone and to completely destroy everything. The proliferation of emerging technologies calls our attention to this never-before-seen power — and the need to cultivate the wisdom with which to steer it towards beneficial outcomes. If we're serious both as individuals and as a species about improving the world, it's crucial that we converge around the reality of our situation and what matters most. What are the most important problems in the world today and why? In this episode of the Future of Life Institute Podcast, Sam Harris joins us to discuss some of these global priorities, the ethics surrounding them, and what we can do to address them.

Topics discussed in this episode include:

- The problem of communication
- Global priorities
- Existential risk
- Animal suffering, both in the wild and in factory farms
- Global poverty
- Artificial general intelligence risk and AI alignment
- Ethics
- Sam’s book, The Moral Landscape

You can find the page for this podcast here: https://futureoflife.org/2020/06/01/on-global-priorities-existential-risk-and-what-matters-most-with-sam-harris/

You can take a survey about the podcast here: www.surveymonkey.com/r/W8YLYD3

You can submit a nominee for the Future of Life Award here: https://futureoflife.org/future-of-life-award-unsung-hero-search/

Timestamps: 

0:00 Intro
3:52 What are the most important problems in the world?
13:14 Global priorities: existential risk
20:15 Why global catastrophic risks are more likely than existential risks
25:09 Longtermist philosophy
31:36 Making existential and global catastrophic risk more emotionally salient
34:41 How analyzing the self makes longtermism more attractive
40:28 Global priorities & effective altruism: animal suffering and global poverty
56:03 Is machine suffering the next global moral catastrophe?
59:36 AI alignment and artificial general intelligence/superintelligence risk
01:11:25 Expanding our moral circle of compassion
01:13:00 The Moral Landscape, consciousness, and moral realism
01:30:14 Can bliss and wellbeing be mathematically defined?
01:31:03 Where to follow Sam and concluding thoughts

Photo by Christopher Michel: https://www.flickr.com/photos/cmichel67/

This podcast is possible because of the support of listeners like you. If you found this conversation meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
