Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, the ethics of human enhancement, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see myself talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.
Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w
Nick’s website: https://nickbostrom.com/
Future of Humanity Institute:
– Superintelligence: https://amzn.to/2JckX83
This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow it on Spotify, or support it on Patreon.
Here’s the outline of the episode. On some podcast players, you should be able to click a timestamp to jump to that point in the conversation.
00:00 – Introduction
02:48 – Simulation hypothesis and simulation argument
12:17 – Technologically mature civilizations
15:30 – Case 1: if something kills all possible civilizations
19:08 – Case 2: if we lose interest in creating simulations
22:03 – Consciousness
26:27 – Immersive worlds
28:50 – Experience machine
41:10 – Intelligence and consciousness
48:58 – Weighing probabilities of the simulation argument
1:01:43 – Elaborating on Joe Rogan conversation
1:05:53 – Doomsday argument and anthropic reasoning
1:23:02 – Elon Musk
1:25:26 – What’s outside the simulation?
1:29:52 – Superintelligence
1:47:27 – AGI utopia
1:52:41 – Meaning of life