102 episodes

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

The Future of Life
Future of Life Institute

    • Technology

    AIAP: On Lethal Autonomous Weapons with Paul Scharre

    Lethal autonomous weapons bring the miniaturization and integration of modern AI and robotics technologies to military use. This emerging technology thus marks a potentially critical inflection point in the development of AI governance. Whether we allow AI to make the decision to take human life, and where we draw the lines between acceptable and unacceptable uses of this technology, will set precedents and grounds for future international AI collaboration and governance. Such regulation efforts, or the lack thereof, will also shape the kinds of weapons technologies that proliferate in the 21st century. On this episode of the AI Alignment Podcast, Paul Scharre joins us to discuss autonomous weapons, their potential benefits and risks, and the ongoing debate around the regulation of their development and use.

     Topics discussed in this episode include:

    -What autonomous weapons are and how they may be used
    -The debate around acceptable and unacceptable uses of autonomous weapons
    -Ways and degrees of integrating human decision-making into autonomous weapons
    -Risks and benefits of autonomous weapons
    -Whether there is an arms race for autonomous weapons
    -How autonomous weapons issues may matter for AI alignment and long-term AI safety

    You can find the page for this podcast here: https://futureoflife.org/2020/03/16/on-lethal-autonomous-weapons-with-paul-scharre/

    Timestamps: 

    0:00 Intro
    3:50 Why care about autonomous weapons?
    4:31 What are autonomous weapons?
    6:47 What does “autonomy” mean?
    9:13 Will we see autonomous weapons in civilian contexts?
    11:29 How do we draw lines between acceptable and unacceptable uses of autonomous weapons?
    24:34 Defining and exploring human “in the loop,” “on the loop,” and “out of the loop”
    31:14 The possibility of establishing international lethal laws of robotics
    36:15 Whether autonomous weapons will sanitize war and psychologically distance humans in detrimental ways
    44:57 Is anyone studying the psychological aspects of autonomous weapons use?
    47:05 Risks of the accidental escalation of war and conflict
    52:26 Is there an arms race for autonomous weapons?
    1:00:10 Further clarifying what autonomous weapons are
    1:05:33 Does the successful regulation of autonomous weapons matter for long-term AI alignment considerations?
    1:09:25 Does Paul see AI as an existential risk?

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 16 min
    FLI Podcast: Distributing the Benefits of AI via the Windfall Clause with Cullen O'Keefe

    As with the agricultural and industrial revolutions before it, the intelligence revolution currently underway will unlock new degrees and kinds of abundance. Powerful forms of AI will likely generate never-before-seen levels of wealth, raising critical questions about its beneficiaries. Will this newfound wealth be used to provide for the common good, or will it become increasingly concentrated in the hands of the few who wield AI technologies? Cullen O'Keefe joins us on this episode of the FLI Podcast for a conversation about the Windfall Clause, a mechanism that attempts to ensure the abundance and wealth created by transformative AI benefits humanity globally.

    Topics discussed in this episode include:

    -What the Windfall Clause is and how it might function
    -The need for such a mechanism given AGI-generated economic windfall
    -Problems the Windfall Clause would help to remedy
    -The mechanism for distributing windfall profit and the function for defining such profit (a toy sketch of one such function follows this list)
    -The legal permissibility of the Windfall Clause
    -Objections and alternatives to the Windfall Clause
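
    As a rough illustration of the kind of "windfall function" discussed here, below is a minimal sketch in Python of one possible tiered structure, in which the marginal share of profits owed rises as those profits become a larger fraction of gross world product. The thresholds and rates are hypothetical illustrations, not figures from the episode or from the Windfall Clause proposal itself.

    ```python
    # Hypothetical tiered windfall function: the marginal share of profits
    # owed rises as profits become a larger fraction of gross world
    # product (GWP). All thresholds and rates are illustrative assumptions.
    BRACKETS = [
        # (lower bound, upper bound, marginal rate); bounds are fractions of GWP
        (0.001, 0.01, 0.01),         # 0.1%-1% of GWP: 1% of profits in this band
        (0.01, 0.10, 0.20),          # 1%-10% of GWP: 20% of profits in this band
        (0.10, float("inf"), 0.50),  # beyond 10% of GWP: 50% of profits in this band
    ]

    def windfall_obligation(profits: float, gwp: float) -> float:
        """Return the amount owed on `profits`, given gross world product `gwp`."""
        owed = 0.0
        for lower, upper, rate in BRACKETS:
            # Portion of profits falling inside this bracket, if any.
            band = min(profits, upper * gwp) - lower * gwp
            if band > 0:
                owed += rate * band
        return owed

    # Example: $2 trillion of profit against an $85 trillion GWP owes
    # roughly $238 billion under these toy tiers.
    print(f"${windfall_obligation(2e12, 85e12):,.0f}")
    ```

    The marginal structure means obligations begin only once profits cross an extreme threshold, so ordinary firms owe nothing; the actual desiderata for the distribution mechanism and the windfall function are discussed at 23:02 and 25:03 below.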

    You can find the page for this podcast here: https://futureoflife.org/2020/02/28/distributing-the-benefits-of-ai-via-the-windfall-clause-with-cullen-okeefe/

    Timestamps: 

    0:00 Intro
    2:13 What is the Windfall Clause?
    4:51 Why do we need a Windfall Clause?
    6:01 When we might reach windfall profit and what that profit looks like
    8:01 Motivations for the Windfall Clause and its ability to help with job loss
    11:51 How the Windfall Clause improves allocation of economic windfall
    16:22 The Windfall Clause assisting in a smooth transition to advanced AI systems
    18:45 The Windfall Clause as assisting with general norm setting
    20:26 The Windfall Clause as serving AI firms by generating goodwill, improving employee relations, and reducing political risk
    23:02 The mechanism for distributing windfall profit and desiderata for guiding its formation
    25:03 The windfall function and desiderata for guiding its formation
    26:56 How the Windfall Clause differs from a new taxation scheme
    30:20 Developing the mechanism for distributing the windfall
    32:56 The legal permissibility of the Windfall Clause in the United States
    40:57 The legal permissibility of the Windfall Clause in China and the Cayman Islands
    43:28 Historical precedents for the Windfall Clause
    44:45 Objections to the Windfall Clause
    57:54 Alternatives to the Windfall Clause
    1:02:51 Final thoughts

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 4 min
    AIAP: On the Long-term Importance of Current AI Policy with Nicolas Moës and Jared Brown

    From Max Tegmark's Life 3.0 to Stuart Russell's Human Compatible and Nick Bostrom's Superintelligence, much has been written and said about the long-term risks of powerful AI systems. For those considering concrete actions to help mitigate these risks, governance- and policy-related solutions are an attractive option. But just what can anyone do in the present-day policy sphere to help ensure that powerful AI systems remain beneficial and aligned with human values? Do today's AI policies matter at all for AGI risk? Jared Brown and Nicolas Moës join us on today's podcast to explore these questions and why people concerned about AGI risk should be involved in present-day AI policy discourse.

     Topics discussed in this episode include:

    -The importance of current AI policy work for long-term AI risk
    -Where we currently stand in the process of forming AI policy
    -Why people worried about existential risk should care about present-day AI policy
    -AI and the global community
    -The rationality and irrationality around AI race narratives

    You can find the page for this podcast here: https://futureoflife.org/2020/02/17/on-the-long-term-importance-of-current-ai-policy-with-nicolas-moes-and-jared-brown/

    Timestamps: 

    0:00 Intro
    4:58 Why it’s important to work on AI policy 
    12:08 Our historical position in the process of AI policy
    21:54 For long-termists and those concerned about AGI risk, how is AI policy today important and relevant? 
    33:46 AI policy and shorter-term global catastrophic and existential risks
    38:18 The Brussels and Sacramento effects
    41:23 Why is racing on AI technology bad? 
    48:45 The rationality of racing to AGI 
    58:22 Where is AI policy currently?

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 11 min
    FLI Podcast: Identity, Information & the Nature of Reality with Anthony Aguirre

    Our perceptions of reality are based on the physics of interactions ranging from millimeters to miles in scale. But when it comes to the very small and the very massive, our intuitions often fail us. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? And given our failure to anticipate the counterintuitive nature of the universe, how accurate are our intuitions about metaphysical and personal identity? Just how seriously should we take our everyday experiences of the world? Anthony Aguirre, cosmologist and FLI co-founder, returns for a second episode to offer his perspective on these complex questions. This conversation explores the view that reality fundamentally consists of information and examines its implications for our understandings of existence and identity.

    Topics discussed in this episode include:
    - Views on the nature of reality
    - Quantum mechanics and the implications of quantum uncertainty
    - Identity, information and description
    - Continuum of objectivity/subjectivity

    You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/31/fli-podcast-identity-information-the-nature-of-reality-with-anthony-aguirre/

    Timestamps:

    3:35 - General history of views on fundamental reality
    9:45 - Quantum uncertainty and observation as interaction
    24:43 - The universe as constituted of information
    29:26 - What is information and what does the view of reality as information have to say about objects and identity
    37:14 - Identity as on a continuum of objectivity and subjectivity
    46:09 - What makes something more or less objective?
    58:25 - Emergence in physical reality and identity
    1:15:35 - Questions about the philosophy of identity in the 21st century
    1:27:13 - Differing views on identity changing human desires
    1:33:28 - How the reality as information perspective informs questions of identity
    1:39:25 - Concluding thoughts

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr 45 min
    AIAP: Identity and the AI Revolution with David Pearce and Andrés Gómez Emilsson

    In the 1984 book Reasons and Persons, philosopher Derek Parfit asks the reader to consider the following scenario: You step into a teleportation machine that scans your complete atomic structure, annihilates you, and then relays that atomic information to Mars at the speed of light. There, a similar machine recreates your exact atomic structure and composition using locally available resources. Have you just traveled, Parfit asks, or have you committed suicide?

    Would you step into this machine? Is the person who emerges on Mars really you? Questions like these, which explore the nature of personal identity and challenge our commonly held intuitions about it, are becoming increasingly important in the face of 21st-century technology. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. AI-enabled bioengineering will allow the human species to diverge via upgrades, and as we arrive at AGI and beyond we may see a world where it is possible to merge with AI directly, upload ourselves, copy and duplicate ourselves arbitrarily, or even manipulate and reprogram our sense of identity. Are there ways we can inform and shape human understanding of identity to nudge civilization in the right direction?

    Topics discussed in this episode include:

    -Identity from epistemic, ontological, and phenomenological perspectives
    -Identity formation in biological evolution
    -Open, closed, and empty individualism
    -The moral relevance of views on identity
    -Identity in the world today and on the path to superintelligence and beyond

    You can find the page and transcript for this podcast here: https://futureoflife.org/2020/01/15/identity-and-the-ai-revolution-with-david-pearce-and-andres-gomez-emilsson/

    Timestamps: 

    0:00 - Intro
    6:33 - What is identity?
    9:52 - Ontological aspects of identity
    12:50 - Epistemological and phenomenological aspects of identity
    18:21 - Biological evolution of identity
    26:23 - Functionality or arbitrariness of identity / whether or not there are right or wrong answers
    31:23 - Moral relevance of identity
    34:20 - Religion as codifying views on identity
    37:50 - Different views on identity
    53:16 - The hard problem and the binding problem
    56:52 - The problem of causal efficacy, and the palette problem
    1:00:12 - Navigating views of identity towards truth
    1:08:34 - The relationship between identity and the self model
    1:10:43 - The ethical implications of different views on identity
    1:21:11 - The consequences of different views on identity on preference weighting
    1:26:34 - Identity and AI alignment
    1:37:50 - Nationalism and AI alignment
    1:42:09 - Cryonics, species divergence, immortality, uploads, and merging
    1:50:28 - Future scenarios from Life 3.0
    1:58:35 - The role of identity in the AI itself

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 2 hrs 3 min
    On Consciousness, Morality, Effective Altruism & Myth with Yuval Noah Harari & Max Tegmark

    Neither Yuval Noah Harari nor Max Tegmark needs much in the way of introduction. Both are avant-garde thinkers at the forefront of 21st-century discourse around science, technology, society and humanity’s future. This conversation represents a rare opportunity for two intellectual leaders to apply their combined expertise, in physics, artificial intelligence, history, philosophy and anthropology, to some of the most profound issues of our time. Max and Yuval bring their own macroscopic perspectives to this discussion of both cosmological and human history, exploring questions of consciousness, ethics, effective altruism, artificial intelligence, human extinction, emerging technologies and the role of myths and stories in fostering societal collaboration and meaning. We hope that you'll join the Future of Life Institute Podcast for our final conversation of 2019, as we look toward the future and the possibilities it holds for all of us.

    Topics discussed include:

    -Max and Yuval's views and intuitions about consciousness
    -How they ground and think about morality
    -Effective altruism and its cause areas of global health/poverty, animal suffering, and existential risk
    -The function of myths and stories in human society
    -How emerging science, technology, and global paradigms challenge the foundations of many of our stories
    -Technological risks of the 21st century

    You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/31/on-consciousness-morality-effective-altruism-myth-with-yuval-noah-harari-max-tegmark/

    Timestamps:

    0:00 Intro
    3:14 Grounding morality and the need for a science of consciousness
    11:45 The effective altruism community and its main cause areas
    13:05 Global health
    14:44 Animal suffering and factory farming
    17:38 Existential risk and the ethics of the long-term future
    23:07 Nuclear war as a neglected global risk
    24:45 On the risks of near-term AI and of artificial general intelligence and superintelligence
    28:37 On creating new stories for the challenges of the 21st century
    32:33 The risks of big data and AI enabled human hacking and monitoring
    47:40 What does it mean to be human and what should we want to want?
    52:29 On positive global visions for the future
    59:29 Goodbyes and appreciations
    1:00:20 Outro and supporting the Future of Life Institute Podcast

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1 hr
