132 episodes

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

Future of Life Institute Podcast
Future of Life Institute

    • Technology
    • 4.8 • 81 Ratings


    Avi Loeb on UFOs and if they're Alien in Origin

    Avi Loeb, Professor of Science at Harvard University, joins us to discuss unidentified aerial phenomena and a recent US Government report assessing their existence and threat.

    Topics discussed in this episode include:

    -Evidence counting for the natural, human, and extraterrestrial origins of UAPs
    -The culture of science and how it deals with UAP reports
    -How humanity should respond if we discover UAPs are alien in origin
    -A project for collecting high quality data on UAPs

    You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-ufos-and-if-theyre-alien-in-origin/

    Apply for the Podcast Producer position here: futureoflife.org/job-postings/

    Check out the video version of the episode here: https://www.youtube.com/watch?v=AyNlLaFTeFI&ab_channel=FutureofLifeInstitute

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    1:41 Why is the US Government report on UAPs significant?
    7:08 Multiple different sensors detecting the same phenomena
    11:50 Are UAPs a US technology?
    13:20 Incentives to deploy powerful technology
    15:48 What are the flight and capability characteristics of UAPs?
    17:53 The similarities between 'Oumuamua and UAP reports
    20:11 Are UAPs some form of spoofing technology?
    22:48 What is the most convincing natural or conventional explanation of UAPs?
    25:09 UAPs as potentially containing artificial intelligence
    28:15 What credence do you give to UAPs being alien in origin?
    29:32 Why aren't UAPs far more technologically advanced?
    32:15 How should humanity respond if UAPs are found to be alien in origin?
    35:15 A plan to get better data on UAPs
    38:56 Final thoughts from Avi
    39:40 Getting in contact with Avi to support his project

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 40 min
    Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures

    Avi Loeb, Professor of Science at Harvard University, joins us to discuss a recent interstellar visitor, if we've already encountered alien technology, and whether we're ultimately alone in the cosmos.

    Topics discussed in this episode include:

    -Whether 'Oumuamua is alien or natural in origin
    -The culture of science and how it affects fruitful inquiry
    -Looking for signs of alien life throughout the solar system and beyond
    -Alien artefacts and galactic treaties
    -How humanity should handle a potential first contact with extraterrestrials
    -The relationship between what is true and what is good

    You can find the page for this podcast here: https://futureoflife.org/2021/07/09/avi-loeb-on-oumuamua-aliens-space-archeology-great-filters-and-superstructures/

    Apply for the Podcast Producer position here: https://futureoflife.org/job-postings/

    Check out the video version of the episode here: https://www.youtube.com/watch?v=qcxJ8QZQkwE&ab_channel=FutureofLifeInstitute

    See our second interview with Avi here: https://soundcloud.com/futureoflife/avi-loeb-on-ufos-and-if-theyre-alien-in-origin

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    3:28 What is 'Oumuamua's wager?
    11:29 The properties of 'Oumuamua and how they lend credence to the theory of it being artificial in origin
    17:23 Theories of 'Oumuamua being natural in origin
    21:42 Why was the smooth acceleration of 'Oumuamua significant?
    23:35 What are comets and asteroids?
    28:30 What we know about Oort clouds and how 'Oumuamua relates to what we expect of Oort clouds
    33:40 Could there be exotic objects in Oort clouds that would account for 'Oumuamua?
    38:08 What is your credence that 'Oumuamua is alien in origin?
    44:50 Bayesian reasoning and 'Oumuamua
    46:34 How do UFO reports and sightings affect your perspective of 'Oumuamua?
    54:35 Might alien artefacts be more common than we expect?
    58:48 The Drake equation
    1:01:50 Where are the most likely great filters?
    1:11:22 Difficulties in scientific culture and how they affect fruitful inquiry
    1:27:03 The cosmic endowment, traveling to galactic clusters, and galactic treaties
    1:31:34 Why don't we find evidence of alien superstructures?
    1:36:36 Looking for the bio and techno signatures of alien life
    1:40:27 Do alien civilizations converge on beneficence?
    1:43:05 Is there a necessary relationship between what is true and good?
    1:47:02 Is morality evidence based knowledge?
    1:48:18 Axiomatic based knowledge and testing moral systems
    1:54:08 International governance and making contact with alien life
    1:55:59 The need for an elite scientific body to advise on global catastrophic and existential risk
    1:59:57 What are the most fundamental questions?

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 2 hr 4 min
    Nicolas Berggruen on the Dynamics of Power, Wisdom, and Ideas in the Age of AI

    Nicolas Berggruen, investor and philanthropist, joins us to explore the dynamics of power, wisdom, technology and ideas in the 21st century.

    Topics discussed in this episode include:

    -What wisdom consists of
    -The role of ideas in society and civilization 
    -The increasing concentration of power and wealth
    -The technological displacement of human labor
    -Democracy, universal basic income, and universal basic capital 
    -Living an examined life

    You can find the page for this podcast here: https://futureoflife.org/2021/05/31/nicolas-berggruen-on-the-dynamics-of-power-wisdom-technology-and-ideas-in-the-age-of-ai/

    Check out Nicolas' thoughts archive here: www.nicolasberggruen.com

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    1:45 The race between the power of our technology and the wisdom with which we manage it
    5:19 What is wisdom? 
    8:30 The power of ideas 
    11:06 Humanity’s investment in wisdom vs the power of our technology 
    15:39 Why does our wisdom lag behind our power? 
    20:51 Technology evolving into an agent 
    24:28 How ideas play a role in the value alignment of technology 
    30:14 Wisdom for building beneficial AI and mitigating the race to power 
    34:37 Does Mark Zuckerberg have control of Facebook? 
    36:39 Safeguarding the human mind and maintaining control of AI 
    42:26 The importance of the examined life in the 21st century 
    45:56 An example of the examined life 
    48:54 Important ideas for the 21st century 
    52:46 The concentration of power and wealth, and a proposal for universal basic capital 
    1:03:07 Negative and positive futures 
    1:06:30 Final thoughts from Nicolas

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 8 min
    Bart Selman on the Promises and Perils of Artificial Intelligence

    Bart Selman, Professor of Computer Science at Cornell University, joins us to discuss a wide range of AI issues, from autonomous weapons and AI consciousness to international governance and the possibilities of superintelligence.

    Topics discussed in this episode include:

    -Negative and positive outcomes from AI in the short, medium, and long term
    -The perils and promises of AGI and superintelligence
    -AI alignment and AI existential risk
    -Lethal autonomous weapons
    -AI governance and racing to powerful AI systems
    -AI consciousness

    You can find the page for this podcast here: https://futureoflife.org/2021/05/20/bart-selman-on-the-promises-and-perils-of-artificial-intelligence/

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro 
    1:35 Futures that Bart is excited about                  
    4:08 Positive futures in the short, medium, and long term
    7:23 AGI timelines 
    8:11 Bart’s research on “planning” through the game of Sokoban
    13:10 If we don’t go extinct, is the creation of AGI and superintelligence inevitable? 
    15:28 What’s exciting about futures with AGI and superintelligence? 
    17:10 How long does it take for superintelligence to arise after AGI? 
    21:08 Would a superintelligence have something intelligent to say about income inequality? 
    23:24 Are there true or false answers to moral questions? 
    25:30 Can AGI and superintelligence assist with moral and philosophical issues?
    28:07 Do you think superintelligences converge on ethics? 
    29:32 Are you most excited about the short or long-term benefits of AI? 
    34:30 Is existential risk from AI a legitimate threat? 
    35:22 Is the AI alignment problem legitimate? 
    43:29 What are futures that you fear? 
    46:24 Do social media algorithms represent an instance of the alignment problem? 
    51:46 The importance of educating the public on AI 
    55:00 Income inequality, cyber security, and negative futures 
    1:00:06 Lethal autonomous weapons 
    1:01:50 Negative futures in the long-term 
    1:03:26 How have your views of AI alignment evolved? 
    1:06:53 Bart’s plans and intentions for the Association for the Advancement of Artificial Intelligence
    1:13:45 Policy recommendations for existing AIs and the AI ecosystem 
    1:15:35 Solving the parts of AI alignment that won't be solved by industry incentives 
    1:18:17 Narratives of an international race to powerful AI systems 
    1:20:42 How does an international race to AI affect the chances of successful AI alignment? 
    1:23:20 Is AI a zero sum game? 
    1:28:51 Lethal autonomous weapons governance 
    1:31:38 Does the governance of autonomous weapons affect outcomes from AGI? 
    1:33:00 AI consciousness 
    1:39:37 Alignment is important and the benefits of AI can be great

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 41 min
    Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century

    Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century.

    Topics discussed in this episode include:

    -Intelligence and coordination
    -Existential risk from AI, synthetic biology, and unknown unknowns
    -AI adoption as a delegation process
    -Jaan's investments and philanthropic efforts
    -International coordination and incentive structures
    -The short-term and long-term AI safety communities

    You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    1:29 How can humanity improve?
    3:10 The importance of intelligence and coordination
    8:30 The bottlenecks of input and output bandwidth as well as processing speed between AIs and humans
    15:20 Making the creation of AI feel dangerous and how the nuclear power industry killed itself by downplaying risks
    17:15 How Jaan evaluates and thinks about existential risk
    18:30 Nuclear weapons as the first existential risk we faced
    20:47 The likelihood of unknown unknown existential risks
    25:04 Why Jaan doesn't see nuclear war as an existential risk
    27:54 Climate change
    29:00 Existential risk from synthetic biology
    31:29 Learning from mistakes, lacking foresight, and the importance of generational knowledge
    36:23 AI adoption as a delegation process
    42:52 Attractors in the design space of AI
    44:24 The regulation of AI
    45:31 Jaan's investments and philanthropy in AI
    55:18 International coordination issues from AI adoption as a delegation process
    57:29 AI today and the negative impacts of recommender algorithms
    1:02:43 Collective, institutional, and interpersonal coordination
    1:05:23 The benefits and risks of longevity research
    1:08:29 The long-term and short-term AI safety communities and their relationship with one another
    1:12:35 Jaan's current philanthropic efforts
    1:16:28 Software as a philanthropic target
    1:19:03 How do we move towards beneficial futures with AI?
    1:22:30 An idea Jaan finds meaningful
    1:23:33 Final thoughts from Jaan
    1:25:27 Where to find Jaan

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 26 min
    Joscha Bach and Anthony Aguirre on Digital Physics and Moving Towards Beneficial Futures

    Joscha Bach, Cognitive Scientist and AI researcher, as well as Anthony Aguirre, UCSC Professor of Physics, join us to explore the world through the lens of computation and the difficulties we face on the way to beneficial futures.

    Topics discussed in this episode include:

    -Understanding the universe through digital physics
    -How human consciousness operates and is structured
    -The path to aligned AGI and bottlenecks to beneficial futures
    -Incentive structures and collective coordination

    You can find the page for this podcast here: https://futureoflife.org/2021/03/31/joscha-bach-and-anthony-aguirre-on-digital-physics-and-moving-towards-beneficial-futures/

    You can find FLI's three new policy focused job postings here: futureoflife.org/job-postings/

    Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

    Timestamps: 

    0:00 Intro
    3:17 What is truth and knowledge?
    11:39 What is subjectivity and objectivity?
    14:32 What is the universe ultimately?
    19:22 Is the universe a cellular automaton? Is the universe ultimately digital or analogue?
    24:05 Hilbert's hotel from the point of view of computation
    35:18 Seeing the world as a fractal
    38:48 Describing human consciousness
    51:10 Meaning, purpose, and harvesting negentropy
    55:08 The path to aligned AGI
    57:37 Bottlenecks to beneficial futures and existential security
    1:06:53 A future with one, several, or many AGI systems? How do we maintain appropriate incentive structures?
    1:19:39 Non-duality and collective coordination
    1:22:53 What difficulties are there for an idealist worldview that involves computation?
    1:27:20 Which features of mind and consciousness are necessarily coupled and which aren't?
    1:36:40 Joscha's final thoughts on AGI

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 38 min

Customer Reviews

4.8 out of 5
81 Ratings


malfoxley,

Great show!

Lucas, host of the Future of Life podcast, highlights all aspects of tech and more in this can't-miss podcast! The host and expert guests offer insightful advice and information that is helpful to anyone who listens!

JordanP153,

I love what I’ve heard

So far I’ve listened to the episode on non violent communication and the Sam Harris episode- both are excellent!

Peterpaul1925,

Amazing Podcast !

People need to know about this excellent podcast (and the Future of Life Institute) focusing on the most important issues facing the world. The topics are big, current, and supremely important; the guests are luminaries in their fields; and the interviewer, Lucas Perry, brings it all forth in such a compelling way. He is so well informed on a wide range of issues and makes the conversations stimulating and thought-provoking. After each episode I listened to so far, I found myself telling other people about what was discussed; it's that valuable. After one episode, I started contributing to FLI. What a find. Thank you, FLI and Lucas.
