1 hr 26 min

Jaan Tallinn on Avoiding Civilizational Pitfalls and Surviving the 21st Century
Future of Life Institute Podcast


Jaan Tallinn, investor, programmer, and co-founder of the Future of Life Institute, joins us to discuss his perspective on AI, synthetic biology, unknown unknowns, and what's needed for mitigating existential risk in the 21st century.

Topics discussed in this episode include:

-Intelligence and coordination
-Existential risk from AI, synthetic biology, and unknown unknowns
-AI adoption as a delegation process
-Jaan's investments and philanthropic efforts
-International coordination and incentive structures
-The short-term and long-term AI safety communities

You can find the page for this podcast here: https://futureoflife.org/2021/04/20/jaan-tallinn-on-avoiding-civilizational-pitfalls-and-surviving-the-21st-century/

Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT

Timestamps: 

0:00 Intro
1:29 How can humanity improve?
3:10 The importance of intelligence and coordination
8:30 The bottlenecks of input and output bandwidth as well as processing speed between AIs and humans
15:20 Making the creation of AI feel dangerous and how the nuclear power industry killed itself by downplaying risks
17:15 How Jaan evaluates and thinks about existential risk
18:30 Nuclear weapons as the first existential risk we faced
20:47 The likelihood of unknown unknown existential risks
25:04 Why Jaan doesn't see nuclear war as an existential risk
27:54 Climate change
29:00 Existential risk from synthetic biology
31:29 Learning from mistakes, lacking foresight, and the importance of generational knowledge
36:23 AI adoption as a delegation process
42:52 Attractors in the design space of AI
44:24 The regulation of AI
45:31 Jaan's investments and philanthropy in AI
55:18 International coordination issues from AI adoption as a delegation process
57:29 AI today and the negative impacts of recommender algorithms
1:02:43 Collective, institutional, and interpersonal coordination
1:05:23 The benefits and risks of longevity research
1:08:29 The long-term and short-term AI safety communities and their relationship with one another
1:12:35 Jaan's current philanthropic efforts
1:16:28 Software as a philanthropic target
1:19:03 How do we move towards beneficial futures with AI?
1:22:30 An idea Jaan finds meaningful
1:23:33 Final thoughts from Jaan
1:25:27 Where to find Jaan

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
