Category Archives: ai

#113 – Manolis Kellis: Human Genome and Evolutionary Dynamics

Manolis Kellis is a professor at MIT and head of the MIT Computational Biology Group. He is interested in understanding the human genome from computational, evolutionary, biological, and other cross-disciplinary perspectives.

Support this podcast by supporting our sponsors:
– Blinkist: https://blinkist.com/lex
– Eight Sleep: https://eightsleep.com/lex
– MasterClass: https://masterclass.com/lex

If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow it on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
03:54 – Human genome
17:47 – Sources of knowledge
29:15 – Free will
33:26 – Simulation
35:17 – Biological and computing
50:10 – Genome-wide evolutionary signatures
56:54 – Evolution of COVID-19
1:02:59 – Are viruses intelligent?
1:12:08 – Humans vs viruses
1:19:39 – Engineered pandemics
1:23:23 – Immune system
1:33:22 – Placebo effect
1:35:39 – Human genome source code
1:44:40 – Mutation
1:51:46 – Deep learning
1:58:08 – Neuralink
2:07:07 – Language
2:15:19 – Meaning of life

#112 – Ian Hutchinson: Nuclear Fusion, Plasma Physics, and Religion

Ian Hutchinson is a nuclear engineer and plasma physicist at MIT. He has made a number of important contributions to plasma physics, including work on the magnetic confinement of plasmas, which seeks to enable fusion reactions, the energy source of the stars, to be harnessed for practical energy production. (Current nuclear reactors are based on fission, as we discuss.) Ian has also written on the philosophy of science and the relationship between science and religion.

Support this podcast by supporting our sponsors:
– Sun Basket, use code LEX: https://sunbasket.com/lex
– PowerDot, use code LEX: https://powerdot.com/lex

OUTLINE:
00:00 – Introduction
05:32 – Nuclear physics and plasma physics
08:00 – Fusion energy
35:22 – Nuclear weapons
42:06 – Existential risks
50:29 – Personal journey in religion
56:27 – What is God like?
1:01:34 – Scientism
1:04:21 – Atheism
1:06:39 – Not knowing
1:09:57 – Faith
1:13:46 – The value of loyalty and love
1:23:26 – Why is there suffering in the world
1:35:08 – AGI
1:40:27 – Consciousness
1:48:14 – Simulation
1:52:20 – Adam and Eve
1:54:57 – Meaning of life

#111 – Richard Karp: Algorithms and Computational Complexity

Richard Karp is a professor at Berkeley and one of the most important figures in the history of theoretical computer science. In 1985, he received the Turing Award for his research in the theory of algorithms, including the development of the Edmonds–Karp algorithm for solving the maximum-flow problem on networks, the Hopcroft–Karp algorithm for finding maximum-cardinality matchings in bipartite graphs, and his landmark complexity-theory paper "Reducibility Among Combinatorial Problems", in which he proved 21 problems to be NP-complete. This paper was probably the most important catalyst in the explosion of interest in the study of NP-completeness and the P vs NP problem.
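
For readers unfamiliar with the Edmonds–Karp algorithm mentioned above, here is a minimal, illustrative sketch of its core idea: repeatedly find the shortest augmenting path from source to sink with BFS over the residual graph, then push flow along it. The adjacency-matrix representation, function name, and toy network below are my own illustration, not anything from the episode.

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Maximum flow via shortest augmenting paths (BFS over the residual graph)."""
    n = len(capacity)
    residual = [row[:] for row in capacity]  # residual[u][v]: remaining capacity on edge u -> v
    max_flow = 0
    while True:
        # BFS from source toward sink in the residual graph
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:  # no augmenting path left: current flow is maximum
            return max_flow
        # Bottleneck capacity along the path found by BFS
        bottleneck = float("inf")
        v = sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Push the bottleneck flow along the path, updating residual capacities
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        max_flow += bottleneck

# Tiny example: 4-node network where the maximum flow from node 0 to node 3 is 5.
cap = [
    [0, 3, 2, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 3],
    [0, 0, 0, 0],
]
print(edmonds_karp(cap, 0, 3))  # -> 5
```

Because BFS always finds a shortest augmenting path, the number of augmentations is polynomially bounded, which is what distinguishes Edmonds–Karp from the generic Ford–Fulkerson method.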

Support this podcast by supporting our sponsors:
– Eight Sleep: https://eightsleep.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

OUTLINE:
00:00 – Introduction
03:50 – Geometry
09:46 – Visualizing an algorithm
13:00 – A beautiful algorithm
18:06 – Don Knuth and geeks
22:06 – Early days of computers
25:53 – Turing Test
30:05 – Consciousness
33:22 – Combinatorial algorithms
37:42 – Edmonds-Karp algorithm
40:22 – Algorithmic complexity
50:25 – P=NP
54:25 – NP-Complete problems
1:10:29 – Proving P=NP
1:12:57 – Stable marriage problem
1:20:32 – Randomized algorithms
1:33:23 – Can a hard problem be easy in practice?
1:43:57 – Open problems in theoretical computer science
1:46:21 – A strange idea in complexity theory
1:50:49 – Machine learning
1:56:26 – Bioinformatics
2:00:37 – Memory of Richard’s father

#110 – Jitendra Malik: Computer Vision

Jitendra Malik is a professor at Berkeley and one of the seminal figures in the field of computer vision, both before the deep learning revolution and after it. He has been cited over 180,000 times and has mentored many world-class researchers in computer science.

Support this podcast by supporting our sponsors:
– BetterHelp: http://betterhelp.com/lex
– ExpressVPN: https://www.expressvpn.com/lexpod

OUTLINE:
00:00 – Introduction
03:17 – Computer vision is hard
10:05 – Tesla Autopilot
21:20 – Human brain vs computers
23:14 – The general problem of computer vision
29:09 – Images vs video in computer vision
37:47 – Benchmarks in computer vision
40:06 – Active learning
45:34 – From pixels to semantics
52:47 – Semantic segmentation
57:05 – The three R’s of computer vision
1:02:52 – End-to-end learning in computer vision
1:04:24 – 6 lessons we can learn from children
1:08:36 – Vision and language
1:12:30 – Turing test
1:16:17 – Open problems in computer vision
1:24:49 – AGI
1:35:47 – Pick the right problem

#109 – Brian Kernighan: UNIX, C, AWK, AMPL, and Go Programming

Brian Kernighan is a professor of computer science at Princeton University. He co-authored The C Programming Language with Dennis Ritchie (the creator of C) and has written many books on programming, computers, and life, including The Practice of Programming, The Go Programming Language, and most recently UNIX: A History and a Memoir. He co-created AWK, the text-processing language used by Linux folks like myself, and co-designed AMPL, an algebraic modeling language for large-scale optimization.

Support this podcast by supporting our sponsors:
– Eight Sleep: https://eightsleep.com/lex
– Raycon: http://buyraycon.com/lex

OUTLINE:
00:00 – Introduction
04:24 – UNIX early days
22:09 – Unix philosophy
31:54 – Is programming art or science?
35:18 – AWK
42:03 – Programming setup
46:39 – History of programming languages
52:48 – C programming language
58:44 – Go language
1:01:57 – Learning new programming languages
1:04:57 – JavaScript
1:08:16 – Variety of programming languages
1:10:30 – AMPL
1:18:01 – Graph theory
1:22:20 – AI in 1964
1:27:50 – Future of AI
1:29:47 – Moore’s law
1:32:54 – Computers in our world
1:40:37 – Life

#108 – Sergey Levine: Robotics and Machine Learning

Sergey Levine is a professor at Berkeley and a world-class researcher in deep learning, reinforcement learning, robotics, and computer vision. His work includes the development of algorithms for end-to-end training of neural network policies that combine perception and control, scalable algorithms for inverse reinforcement learning, and deep reinforcement learning algorithms.

Support this podcast by supporting these sponsors:
– ExpressVPN: https://www.expressvpn.com/lexpod
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

OUTLINE:
00:00 – Introduction
03:05 – State-of-the-art robots vs humans
16:13 – Robotics may help us understand intelligence
22:49 – End-to-end learning in robotics
27:01 – Canonical problem in robotics
31:44 – Commonsense reasoning in robotics
34:41 – Can we solve robotics through learning?
44:55 – What is reinforcement learning?
1:06:36 – Tesla Autopilot
1:08:15 – Simulation in reinforcement learning
1:13:46 – Can we learn gravity from data?
1:16:03 – Self-play
1:17:39 – Reward functions
1:27:01 – Bitter lesson by Rich Sutton
1:32:13 – Advice for students interested in AI
1:33:55 – Meaning of life

#107 – Peter Singer: Suffering in Humans, Animals, and AI

Peter Singer is a professor of bioethics at Princeton, best known for his 1975 book Animal Liberation, which makes an ethical case against eating meat. He has written brilliantly from an ethical perspective on extreme poverty, euthanasia, human genetic selection, sports doping, the sale of kidneys, and happiness, including in his books Ethics in the Real World and The Life You Can Save. He was a key popularizer of the effective altruism movement and is generally considered one of the most influential philosophers in the world.

Support this podcast by supporting these sponsors:
– MasterClass: https://masterclass.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

OUTLINE:
00:00 – Introduction
05:25 – World War II
09:53 – Suffering
16:06 – Is everyone capable of evil?
21:52 – Can robots suffer?
37:22 – Animal liberation
40:31 – Question for AI about suffering
43:32 – Neuralink
45:11 – Control problem of AI
51:08 – Utilitarianism
59:43 – Helping people in poverty
1:05:15 – Mortality

#106 – Matt Botvinick: Neuroscience, Psychology, and AI at DeepMind

Matt Botvinick is the Director of Neuroscience Research at DeepMind. He is a brilliant cross-disciplinary mind, navigating effortlessly among cognitive psychology, computational neuroscience, and artificial intelligence.

Support this podcast by supporting these sponsors:
– The Jordan Harbinger Show: https://www.jordanharbinger.com/lex
– Magic Spoon: https://magicspoon.com/lex and use code LEX at checkout

OUTLINE:
00:00 – Introduction
03:29 – How much of the brain do we understand?
14:26 – Psychology
22:53 – The paradox of the human brain
32:23 – Cognition is a function of the environment
39:34 – Prefrontal cortex
53:27 – Information processing in the brain
1:00:11 – Meta-reinforcement learning
1:15:18 – Dopamine
1:19:01 – Neuroscience and AI research
1:23:37 – Human side of AI
1:39:56 – Dopamine and reinforcement learning
1:53:07 – Can we create an AI that a human can love?

#105 – Robert Langer: Edison of Medicine

Robert Langer is a professor at MIT and one of the most cited researchers in history, specializing in the biotechnology fields of drug delivery and tissue engineering. He has bridged theory and practice as a key member and driving force in launching many successful biotech companies out of MIT.

Support this podcast by supporting these sponsors:
– MasterClass: https://masterclass.com/lex
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

OUTLINE:
00:00 – Introduction
03:07 – Magic and science
05:34 – Memorable rejection
08:35 – How to come up with big ideas in science
13:27 – How to make a new drug
22:38 – Drug delivery
28:22 – Tissue engineering
35:22 – Beautiful idea in bioengineering
38:16 – Patenting process
42:21 – What does it take to build a successful startup?
46:18 – Mentoring students
50:54 – Funding
58:08 – Cookies
59:41 – What are you most proud of?

#104 – David Patterson: Computer Architecture and Data Storage

David Patterson is a Turing Award winner and professor of computer science at Berkeley. He is known for pioneering contributions to RISC processor architecture, used by 99% of new chips today, and for co-creating RAID storage. The impact that these two lines of research and development have had on our world is immeasurable. He is also one of the world's great computer science educators. His book with John Hennessy, Computer Architecture: A Quantitative Approach, is how I first learned about, and was humbled by, the inner workings of machines at the lowest level.

Support this podcast by supporting these sponsors:
– Jordan Harbinger Show: https://jordanharbinger.com/lex/
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

OUTLINE:
00:00 – Introduction
03:28 – How have computers changed?
04:22 – What’s inside a computer?
10:02 – Layers of abstraction
13:05 – RISC vs CISC computer architectures
28:18 – Designing a good instruction set is an art
31:46 – Measures of performance
36:02 – RISC instruction set
39:39 – RISC-V open standard instruction set architecture
51:12 – Why do ARM implementations vary?
52:57 – Simple is beautiful in instruction set design
58:09 – How machine learning changed computers
1:08:18 – Machine learning benchmarks
1:16:30 – Quantum computing
1:19:41 – Moore’s law
1:28:22 – RAID data storage
1:36:53 – Teaching
1:40:59 – Wrestling
1:45:26 – Meaning of life