117 episodes

FLI catalyzes and supports research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.
Among our objectives is to inspire discussion and a sharing of ideas. As such, we interview researchers and thought leaders who we believe will help spur discussion within our community. The interviews do not necessarily represent FLI’s opinions or views.

The Future of Life
Future of Life Institute

    • Technology


    Stephen Batchelor on Awakening, Embracing Existential Risk, and Secular Buddhism

    Stephen Batchelor, a Secular Buddhist teacher and former monk, joins the FLI Podcast to discuss the project of awakening, the facets of human nature which contribute to extinction risk, and how we might better embrace existential threats.

     Topics discussed in this episode include:

    -The projects of awakening and growing the wisdom with which to manage technologies
    -What might be possible by embarking on the project of waking up
    -Facets of human nature that contribute to existential risk
    -The dangers of the problem-solving mindset
    -Improving the effective altruism and existential risk communities

    You can find the page for this podcast here: https://futureoflife.org/2020/10/15/stephen-batchelor-on-awakening-embracing-existential-risk-and-secular-buddhism/

    Timestamps: 

    0:00 Intro
    3:40 Albert Einstein and the quest for awakening
    8:45 Non-self, emptiness, and non-duality
    25:48 Stephen's conception of awakening, and making the wise more powerful vs the powerful more wise
    33:32 The importance of insight
    49:45 The present moment, creativity, and suffering/pain/dukkha
    58:44 Stephen's article, Embracing Extinction
    1:04:48 The dangers of the problem-solving mindset
    1:26:12 Improving the effective altruism and existential risk communities
    1:37:30 Where to find and follow Stephen

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1h 39 min
    Kelly Wanser on Climate Change as a Possible Existential Threat

    Kelly Wanser from SilverLining joins us to discuss techniques for climate intervention to mitigate the impacts of human-induced climate change.

     Topics discussed in this episode include:

    - The risks of climate change in the short-term
    - Tipping points and tipping cascades
    - Climate intervention via marine cloud brightening and releasing particles in the stratosphere
    - The benefits and risks of climate intervention techniques 
    - The international politics of climate change and weather modification

    You can find the page for this podcast here: https://futureoflife.org/2020/09/30/kelly-wanser-on-marine-cloud-brightening-for-mitigating-climate-change/

    Video recording of this podcast here: https://youtu.be/CEUEFUkSMHU

    Timestamps: 

    0:00 Intro
    2:30 What is SilverLining’s mission? 
    4:27 Why is climate change thought to be very risky in the next 10-30 years? 
    8:40 Tipping points and tipping cascades
    13:25 Is climate change an existential risk? 
    17:39 Earth systems that help to stabilize the climate 
    21:23 Days where it will be unsafe to work outside 
    25:03 Marine cloud brightening, stratospheric sunlight reflection, and other climate interventions SilverLining is interested in 
    41:46 What experiments are happening to understand tropospheric and stratospheric climate interventions? 
    50:20 International politics of weather modification 
    53:52 How do efforts to reduce greenhouse gas emissions fit into the project of reflecting sunlight? 
    57:35 How would you respond to someone who views climate intervention by marine cloud brightening as too dangerous? 
    59:33 What are the main arguments of those skeptical of climate intervention approaches?
    01:13:21 The international problem of coordinating on climate change 
    01:24:50 Is climate change a global catastrophic or existential risk, and how does it relate to other large risks?
    01:33:20 Should effective altruists spend more time on the issue of climate change and climate intervention? 
    01:37:48 What can listeners do to help with this issue?
    01:40:00 Climate change and Mars colonization
    01:44:55 Where to find and follow Kelly

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1h 45 min
    Andrew Critch on AI Research Considerations for Human Existential Safety

    In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger titled AI Research Considerations for Human Existential Safety (ARCHES). We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more precise terminology in the field of AI existential safety, and the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as the most likely source of existential risk: externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives.

     Topics discussed in this episode include:

    - The mainstream computer science view of AI existential risk
    - Distinguishing AI safety from AI existential safety 
    - The need for more precise terminology in the field of AI existential safety and alignment
    - The concept of prepotent AI systems and the problem of delegation 
    - Which alignment problems get solved by commercial incentives and which don’t
    - The threat of diffusion of responsibility on AI existential safety considerations not covered by commercial incentives
    - Prepotent AI risk types that lead to unsurvivability for humanity

    You can find the page for this podcast here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/

    Timestamps: 

    0:00 Intro
    2:53 Why Andrew wrote ARCHES and what it’s about
    6:46 The perspective of the mainstream CS community on AI existential risk
    13:03 ARCHES in relation to AI existential risk literature
    16:05 The distinction between safety and existential safety 
    24:27 Existential risk is most likely to obtain through externalities 
    29:03 The relationship between existential safety and safety for current systems 
    33:17 Research areas that may not be solved by natural commercial incentives
    51:40 What’s an AI system and an AI technology? 
    53:42 Prepotent AI 
    59:41 Misaligned prepotent AI technology 
    01:05:13 Human frailty 
    01:07:37 The importance of delegation 
    01:14:11 Single-single, single-multi, multi-single, and multi-multi 
    01:15:26 Control, instruction, and comprehension 
    01:20:40 The multiplicity thesis 
    01:22:16 Risk types from prepotent AI that lead to human unsurvivability 
    01:34:06 Flow-through effects 
    01:41:00 Multi-stakeholder objectives 
    01:49:08 Final words from Andrew

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1h 51 min
    Iason Gabriel on Foundational Philosophical Questions in AI Alignment

    In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely enter the picture explicitly. In AI alignment, however, the normative and the technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems to human values when there is disagreement over what constitutes a moral alignment procedure? Any procedure or set of values chosen for aligning AI carries its own normative and metaethical commitments, which will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and technical in AI alignment and to discuss his recent paper Artificial Intelligence, Values and Alignment.

     Topics discussed in this episode include:

    -How moral philosophy and political theory are deeply related to AI alignment
    -The problem of dealing with a plurality of preferences and philosophical views in AI alignment
    -How the is-ought problem and metaethics fit into alignment
    -What we should be aligning AI systems to
    -The importance of democratic solutions to questions of AI alignment 
    -The long reflection

    You can find the page for this podcast here: https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/

    Timestamps: 

    0:00 Intro
    2:10 Why Iason wrote Artificial Intelligence, Values and Alignment
    3:12 What AI alignment is
    6:07 The technical and normative aspects of AI alignment
    9:11 The normative being dependent on the technical
    14:30 Coming up with an appropriate alignment procedure given the is-ought problem
    31:15 What systems are subject to an alignment procedure?
    39:55 What is it that we're trying to align AI systems to?
    01:02:30 Single agent and multi agent alignment scenarios
    01:27:00 What is the procedure for choosing which evaluative model(s) will be used to judge different alignment proposals?
    01:30:28 The long reflection
    01:53:55 Where to follow and contact Iason

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1h 54 min
    Peter Railton on Moral Learning and Metaethics in AI Systems

    From a young age, humans are capable of developing moral competency and autonomy through experience. We begin life by constructing sophisticated moral representations of the world that allow us to successfully navigate complex social situations with sensitivity to morally relevant information and variables. This capacity for moral learning allows us to solve open-ended problems with other persons who may hold complex beliefs and preferences. As AI systems become increasingly autonomous and active in social situations involving human and non-human agents, AI moral competency via the capacity for moral learning will become more and more critical. On this episode of the AI Alignment Podcast, Peter Railton joins us to discuss the potential role of moral learning and moral epistemology in AI systems, as well as his views on metaethics.

    Topics discussed in this episode include:

    -Moral epistemology
    -The potential relevance of metaethics to AI alignment
    -The importance of moral learning in AI systems
    -Peter Railton's, Derek Parfit's, and Peter Singer's metaethical views

    You can find the page for this podcast here: https://futureoflife.org/2020/08/18/peter-railton-on-moral-learning-and-metaethics-in-ai-systems/

    Timestamps: 

    0:00 Intro
    3:05 Does metaethics matter for AI alignment?
    22:49 Long-reflection considerations
    26:05 Moral learning in humans
    35:07 The need for moral learning in artificial intelligence
    53:57 Peter Railton's views on metaethics and his discussions with Derek Parfit
    1:38:50 The need for engagement between philosophers and the AI alignment community
    1:40:37 Where to find Peter's work

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1h 41 min
    Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

    It is well established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will exploit the dimensions of freedom afforded by the misspecified objective and push them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for the human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer appears to be yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, and to evaluate three proposals for building safe advanced AI.

     Topics discussed in this episode include:

    -Inner and outer alignment
    -How and why inner alignment can fail
    -Training competitiveness and performance competitiveness
    -Evaluating imitative amplification, AI safety via debate, and microscope AI

    You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/

    Timestamps: 

    0:00 Intro 
    2:07 How Evan got into AI alignment research
    4:42 What is AI alignment?
    7:30 How Evan approaches AI alignment
    13:05 What are inner alignment and outer alignment?
    24:23 Gradient descent
    36:30 Testing for inner alignment
    38:38 Wrapping up on outer alignment
    44:24 Why is inner alignment a priority?
    45:30 How inner alignment fails
    01:11:12 Training competitiveness and performance competitiveness
    01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness
    01:17:30 Imitative amplification
    01:23:00 AI safety via debate
    01:26:32 Microscope AI
    01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment
    01:34:45 Where to follow Evan and find more of his work

    This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

    • 1h 37 min
