1 hr 43 min

Rohin Shah on the State of AGI Safety Research in 2021 Future of Life Institute Podcast


Rohin Shah, Research Scientist on DeepMind's technical AGI safety team, joins us to discuss: AI value alignment; how an AI researcher might decide whether to work on AI safety; and why we don't know that AI systems won't lead to existential risk.

Topics discussed in this episode include:

- Inner Alignment versus Outer Alignment
- Foundation Models
- Structural AI Risks
- Unipolar versus Multipolar Scenarios
- The Most Important Thing That Impacts the Future of Life

You can find the page for the podcast here:
https://futureoflife.org/2021/11/01/rohin-shah-on-the-state-of-agi-safety-research-in-2021

Watch the video version of this episode here:
https://youtu.be/_5xkh-Rh6Ec

Follow the Alignment Newsletter here: https://rohinshah.com/alignment-newsletter/

Have any feedback about the podcast? You can share your thoughts here:
https://www.surveymonkey.com/r/DRBFZCT

Timestamps: 

00:00:00 Intro
00:02:22 What is AI alignment?
00:06:00 How has your perspective of this problem changed over the past year?
00:06:28 Inner Alignment
00:13:00 Ways that AI could actually lead to human extinction
00:18:53 Inner Alignment and Mesa Optimizers
00:20:15 Outer Alignment
00:23:12 The core problem of AI alignment
00:24:54 Learning Systems versus Planning Systems
00:28:10 AI and Existential Risk
00:32:05 The probability of AI existential risk
00:51:31 Core problems in AI alignment
00:54:46 How has AI alignment, as a field of research, changed in the last year?
00:54:02 Large-scale language models
00:54:50 Foundation Models
00:59:58 Why don't we know that AI systems won't totally kill us all?
01:09:05 How much of the alignment and safety problems in AI will be solved by industry?
01:14:44 Do you think about what beneficial futures look like?
01:19:31 Moral Anti-Realism and AI
01:27:25 Unipolar versus Multipolar Scenarios
01:35:33 What is the safety team at DeepMind up to?
01:35:41 What is the most important thing that impacts the future of life?

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

