1 hr 57 min

#83 – Nick Bostrom: Simulation and Superintelligence
Lex Fridman Podcast

Technology

Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, the ethics of human enhancement, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, for many hours each time, but we have to start somewhere.



Support this podcast by signing up with these sponsors:

- Cash App - use code "LexPodcast" and download:

- Cash App (App Store): https://apple.co/2sPrUHe

- Cash App (Google Play): https://bit.ly/2MlvP5w



EPISODE LINKS:

Nick's website: https://nickbostrom.com/

Future of Humanity Institute:

- https://twitter.com/fhioxford

- https://www.fhi.ox.ac.uk/

Books:

- Superintelligence: https://amzn.to/2JckX83

Wikipedia:

- https://en.wikipedia.org/wiki/Simulation_hypothesis

- https://en.wikipedia.org/wiki/Principle_of_indifference

- https://en.wikipedia.org/wiki/Doomsday_argument

- https://en.wikipedia.org/wiki/Global_catastrophic_risk



This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow it on Spotify, or support it on Patreon.



Here's the outline of the episode. On some podcast players, you should be able to click a timestamp to jump to that point in the conversation.



OUTLINE:

00:00 - Introduction

02:48 - Simulation hypothesis and simulation argument

12:17 - Technologically mature civilizations

15:30 - Case 1: if something kills all possible civilizations

19:08 - Case 2: if we lose interest in creating simulations

22:03 - Consciousness

26:27 - Immersive worlds

28:50 - Experience machine

41:10 - Intelligence and consciousness

48:58 - Weighing probabilities of the simulation argument

1:01:43 - Elaborating on Joe Rogan conversation

1:05:53 - Doomsday argument and anthropic reasoning

1:23:02 - Elon Musk

1:25:26 - What's outside the simulation?

1:29:52 - Superintelligence

1:47:27 - AGI utopia

1:52:41 - Meaning of life

