Category Archives: ai

#86 – David Silver: AlphaGo, AlphaZero, and Deep Reinforcement Learning

David Silver leads the reinforcement learning research group at DeepMind. He was the lead researcher on AlphaGo and AlphaZero, a co-lead on AlphaStar and MuZero, and has contributed to a lot of important work in reinforcement learning.

Support this podcast by signing up with these sponsors:
– MasterClass: https://masterclass.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Reinforcement Learning (book): https://amzn.to/2Jwp5zG

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
04:09 – First program
11:11 – AlphaGo
21:42 – Rules of the game of Go
25:37 – Reinforcement learning: personal journey
30:15 – What is reinforcement learning?
43:51 – AlphaGo (continued)
53:40 – Supervised learning and self-play in AlphaGo
1:06:12 – Lee Sedol's retirement from playing Go
1:08:57 – Garry Kasparov
1:14:10 – AlphaZero and self-play
1:31:29 – Creativity in AlphaZero
1:35:21 – AlphaZero applications
1:37:59 – Reward functions
1:40:51 – Meaning of life

#85 – Roger Penrose: Physics of Consciousness and the Infinite Universe

Roger Penrose is a physicist, mathematician, and philosopher at the University of Oxford. He has made fundamental contributions to many disciplines, from the mathematical physics of general relativity and cosmology to the limitations of a computational view of consciousness.

Support this podcast by signing up with these sponsors:
– ExpressVPN at https://www.expressvpn.com/lexpod
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Cycles of Time (book): https://amzn.to/39tXtpp
The Emperor’s New Mind (book): https://amzn.to/2yfeVkD

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
03:51 – 2001: A Space Odyssey
09:43 – Consciousness and computation
23:45 – What does it mean to “understand”?
31:37 – What’s missing in quantum mechanics?
40:09 – Whatever consciousness is, it’s not a computation
44:13 – Source of consciousness in the human brain
1:02:57 – Infinite cycles of big bangs
1:22:05 – Most beautiful idea in mathematics

#84 – William MacAskill: Effective Altruism

William MacAskill is a philosopher, ethicist, and one of the originators of the effective altruism movement. His research focuses on the fundamentals of effective altruism – the use of evidence and reason to help others as much as possible with our time and money – with a particular concentration on how to act given moral uncertainty. He is the author of Doing Good Better – Effective Altruism and a Radical New Way to Make a Difference. He is a co-founder and the president of the Centre for Effective Altruism (CEA), which encourages people to commit to donating at least 10% of their income to the most effective charities. He also co-founded 80,000 Hours, a non-profit that provides research and advice on how you can best make a difference through your career.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
William’s Twitter: https://twitter.com/willmacaskill
William’s Website: http://www.williammacaskill.com/
Doing Good Better (book): https://amzn.to/2UsMRDj
GiveWell: https://www.givewell.org/
80,000 Hours: https://80000hours.org/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:39 – Utopia – the Long Reflection
10:25 – Advertisement model
15:56 – Effective altruism
38:28 – Criticism
49:02 – Biggest problems in the world
53:40 – Suffering
1:01:40 – Animal welfare
1:09:23 – Existential risks
1:19:08 – Existential risk from AGI

#83 – Nick Bostrom: Simulation and Superintelligence

Nick Bostrom is a philosopher at the University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risk, the simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Nick’s website: https://nickbostrom.com/
Future of Humanity Institute:
https://twitter.com/fhioxford
https://www.fhi.ox.ac.uk/
Books:
– Superintelligence: https://amzn.to/2JckX83
Wikipedia:
https://en.wikipedia.org/wiki/Simulation_hypothesis
https://en.wikipedia.org/wiki/Principle_of_indifference
https://en.wikipedia.org/wiki/Doomsday_argument
https://en.wikipedia.org/wiki/Global_catastrophic_risk

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:48 – Simulation hypothesis and simulation argument
12:17 – Technologically mature civilizations
15:30 – Case 1: if something kills all possible civilizations
19:08 – Case 2: if we lose interest in creating simulations
22:03 – Consciousness
26:27 – Immersive worlds
28:50 – Experience machine
41:10 – Intelligence and consciousness
48:58 – Weighing probabilities of the simulation argument
1:01:43 – Elaborating on Joe Rogan conversation
1:05:53 – Doomsday argument and anthropic reasoning
1:23:02 – Elon Musk
1:25:26 – What’s outside the simulation?
1:29:52 – Superintelligence
1:47:27 – AGI utopia
1:52:41 – Meaning of life

Lex Fridman #1: Seven Levels of Coronavirus Attack and How We Can Fight Back

The coronavirus pandemic is a global tragedy, but it is also a moment that unites us, one that reveals the strength of our community and the human capacity to be compassionate to each other and to work hard in the face of danger. In this episode, I describe what, to me, might be seven levels of attack on our society and how we can fight back. For each level, I describe our pain, our challenge, and our hope for a positive future on the other side.

Fill out this one-question survey on whether you want to see more solo episodes like these:
https://lexfridman.survey.fm/solo-episodes

EPISODE LINKS:
Video version: https://www.youtube.com/watch?v=jAYTogd38m4
Slides: https://bit.ly/2JdLQIs
References: https://bit.ly/coronavirus-levels

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:49 – Overview
05:46 – Level 1: Biological (Life & Death)
10:08 – Level 2: Psychological
12:30 – Level 3: Social (Collective Cognition)
15:52 – Level 4: Economic
19:39 – Level 5: Political
24:42 – Level 6: Existential
28:25 – Level 7: Philosophical

#82 – Simon Sinek: Leadership, Hard Work, Optimism and the Infinite Game

Simon Sinek is the author of several books, including Start With Why, Leaders Eat Last, and his latest, The Infinite Game. He is one of the best communicators of what it takes to be a good leader, to inspire, and to build businesses that take on big, difficult challenges.

Support this podcast by signing up with these sponsors:
– MasterClass: https://masterclass.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Simon’s Twitter: https://twitter.com/simonsinek
Simon’s Facebook: https://www.facebook.com/simonsinek
Simon’s Website: https://simonsinek.com/
Books:
– Infinite Game: https://amzn.to/2WxBH1i
– Leaders Eat Last: https://amzn.to/2xf70Ds
– Start with Why: https://amzn.to/2WxBH1i

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
0:00 – Introduction
3:50 – Meaning of life as an infinite game
10:13 – Optimism
13:30 – Mortality
17:52 – Hard work
26:38 – Elon Musk, Steve Jobs, and leadership

#81 – Anca Dragan: Human-Robot Interaction and Reward Engineering

Anca Dragan is a professor at Berkeley, working on human-robot interaction — algorithms that look beyond the robot’s function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.

Support this podcast by supporting the sponsors and using the special code:
– Download Cash App on the App Store or Google Play & use code “LexPodcast” 

EPISODE LINKS:
Anca’s Twitter: https://twitter.com/ancadianadragan
Anca’s Website: https://people.eecs.berkeley.edu/~anca/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:26 – Interest in robotics
05:32 – Computer science
07:32 – Favorite robot
13:25 – How difficult is human-robot interaction?
32:01 – HRI application domains
34:24 – Optimizing the beliefs of humans
45:59 – Difficulty of driving when humans are involved
1:05:02 – Semi-autonomous driving
1:10:39 – How do we specify good rewards?
1:17:30 – Leaked information from human behavior
1:21:59 – Three laws of robotics
1:26:31 – Book recommendation
1:29:02 – If a doctor gave you 5 years to live…
1:32:48 – Small act of kindness
1:34:31 – Meaning of life

#80 – Vitalik Buterin: Ethereum, Cryptocurrency, and the Future of Money

Vitalik Buterin is the co-creator of Ethereum and of ether, currently the second-largest cryptocurrency after bitcoin. Ethereum has a lot of interesting technical ideas that are defining the future of blockchain technology, and Vitalik is one of the most brilliant people innovating in this space today.

Support this podcast by supporting the sponsors with a special code:
– Get ExpressVPN at https://www.expressvpn.com/lexpod
– Sign up to MasterClass at https://masterclass.com/lex

EPISODE LINKS:
Vitalik’s blog: https://vitalik.ca
Ethereum whitepaper: http://bit.ly/3cVDTpj
Casper FFG (paper): http://bit.ly/2U6j7dJ
Quadratic funding (paper): http://bit.ly/3aUZ8Wd
Bitcoin whitepaper: https://bitcoin.org/bitcoin.pdf
Mastering Ethereum (book): https://amzn.to/2xEjWmE

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
04:43 – Satoshi Nakamoto
08:40 – Anonymity
11:31 – Open source project leadership
13:04 – What is money?
30:02 – Blockchain and cryptocurrency basics
46:51 – Ethereum
59:23 – Proof of work
1:02:12 – Ethereum 2.0
1:13:09 – Beautiful ideas in Ethereum
1:16:59 – Future of cryptocurrency
1:22:06 – Cryptocurrency resources and people to follow
1:24:28 – Role of governments
1:27:27 – Meeting Putin
1:29:41 – Large number of cryptocurrencies
1:32:49 – Mortality

#79 – Lee Smolin: Quantum Gravity and Einstein’s Unfinished Revolution

Lee Smolin is a theoretical physicist, a co-inventor of loop quantum gravity, and a contributor of many interesting ideas to cosmology, quantum field theory, the foundations of quantum mechanics, theoretical biology, and the philosophy of science. He is the author of several books, including The Trouble with Physics, a critique of the state of physics and string theory, and his latest, Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum.

EPISODE LINKS:
Books mentioned:
– Einstein’s Unfinished Revolution by Lee Smolin: https://amzn.to/2TsF5c3
– The Trouble With Physics by Lee Smolin: https://amzn.to/2v1FMzy
– Against Method by Paul Feyerabend: https://amzn.to/2VOPXCD

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”. 

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
03:03 – What is real?
05:03 – Scientific method and scientific progress
24:57 – Eric Weinstein and radical ideas in science
29:32 – Quantum mechanics and general relativity
47:24 – Sean Carroll and many-worlds interpretation of quantum mechanics
55:33 – Principles in science
57:24 – String theory

#78 – Ann Druyan: Cosmos, Carl Sagan, Voyager, and the Beauty of Science

Ann Druyan is a writer, producer, director, and one of the most important and impactful communicators of science in our time. She co-wrote the 1980 science documentary series Cosmos, hosted by Carl Sagan, whom she married in 1981. As a Creative Director of NASA’s Voyager Interstellar Message Project, she made the profound and beautiful decision to record her love for Carl, with the help of NASA, as brain waves on a golden record, which was launched into space along with other things our civilization has to offer aboard the Voyager 1 and Voyager 2 spacecraft. Now, 42 years later, those spacecraft are still active, reaching farther into deep space than any human-made object ever has. In 2014, she went on to create the second season of Cosmos, called Cosmos: A Spacetime Odyssey, and in 2020, the new third season, Cosmos: Possible Worlds, which is being released this upcoming Monday, March 9. It is hosted, once again, by the fun and brilliant Neil deGrasse Tyson.

EPISODE LINKS:
Cosmos Twitter: https://twitter.com/COSMOSonTV
Cosmos Website: https://fox.tv/CosmosOnTV

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play), use code “LexPodcast”. 

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
03:24 – Role of science in society
07:04 – Love and science
09:07 – Skepticism in science
14:15 – Voyager, Carl Sagan, and the Golden Record
36:41 – Cosmos
53:22 – Existential threats
1:00:36 – Origin of life
1:04:22 – Mortality