Category Archives: ai

Michael Kearns: Algorithmic Fairness, Bias, Privacy, and Ethics in Machine Learning

Michael Kearns is a professor at the University of Pennsylvania and a co-author of the new book The Ethical Algorithm, which is the focus of much of our conversation, including algorithmic fairness, bias, privacy, and ethics in general. But that is just one of the many areas in which Michael is a world-class researcher, some of which we touch on briefly, including learning theory (the theoretical foundations of machine learning), game theory, algorithmic trading, quantitative finance, computational social science, and more.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. This episode is sponsored by Pessimists Archive podcast. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
02:45 – Influence from literature and journalism
07:39 – Are most people good?
13:05 – Ethical algorithm
24:28 – Algorithmic fairness of groups vs individuals
33:36 – Fairness tradeoffs
46:29 – Facebook, social networks, and algorithmic ethics
58:04 – Machine learning
59:19 – Algorithm that determines what is fair
1:01:25 – Computer scientists should think about ethics
1:05:59 – Algorithmic privacy
1:11:50 – Differential privacy
1:19:10 – Privacy by misinformation
1:22:31 – Privacy of data in society
1:27:49 – Game theory
1:29:40 – Nash equilibrium
1:30:35 – Machine learning and game theory
1:34:52 – Mutual assured destruction
1:36:56 – Algorithmic trading
1:44:09 – Pivotal moment in graduate school

Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot

Elon Musk is the CEO of Tesla, SpaceX, and Neuralink, and a co-founder of several other companies. This is the second time Elon has been on the podcast. You can watch the first time on YouTube or listen to the first time on its episode page. You can read the transcript (PDF) here. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
01:57 – Consciousness
05:58 – Regulation of AI Safety
09:39 – Neuralink – understanding the human brain
11:53 – Neuralink – expanding the capacity of the human mind
17:51 – Neuralink – future challenges, solutions, and impact
24:59 – Smart Summon
27:18 – Tesla Autopilot and Full Self-Driving
31:16 – Carl Sagan and the Pale Blue Dot

Bjarne Stroustrup: C++

Bjarne Stroustrup is the creator of C++, a programming language that, after 40 years, is still one of the most popular and powerful languages in the world. Its focus on fast, stable, robust code underlies many of the biggest systems in the world that we have come to rely on as a society. If you’re watching this on YouTube, many of the critical back-end components of YouTube are written in C++. The same goes for Google, Facebook, Amazon, Twitter, most Microsoft applications, Adobe applications, most database systems, and most physical systems that operate in the real world: cars, robots, and the rockets that launch us into space and one day will land us on Mars.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
01:40 – First program
02:18 – Journey to C++
16:45 – Learning multiple languages
23:20 – JavaScript
25:08 – Efficiency and reliability in C++
31:53 – What does good code look like?
36:45 – Static checkers
41:16 – Zero-overhead principle in C++
50:00 – Different implementations of C++
54:46 – Key features of C++
1:08:02 – C++ Concepts
1:18:06 – C++ Standards Process
1:28:05 – Constructors and destructors
1:31:52 – Unified theory of programming
1:44:20 – Proudest moment

Sean Carroll: Quantum Mechanics and the Many-Worlds Interpretation

Sean Carroll is a theoretical physicist at Caltech and the Santa Fe Institute specializing in quantum mechanics, the arrow of time, cosmology, and gravitation. He is the author of Something Deeply Hidden and several other popular books, and he is the host of a great podcast called Mindscape. This is the second time Sean has been on the podcast. You can watch the first time on YouTube or listen to the first time on its episode page. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
01:23 – Capacity of human mind to understand physics
10:49 – Perception vs reality
12:29 – Conservation of momentum
17:20 – Difference between math and physics
20:10 – Why is our world so compressible
22:53 – What would Newton think of quantum mechanics?
25:44 – What is quantum mechanics?
27:54 – What is an atom?
30:34 – What is the wave function?
32:30 – What is quantum entanglement?
35:19 – What is Hilbert space?
37:32 – What is entropy?
39:31 – Infinity
42:43 – Many-worlds interpretation of quantum mechanics
1:01:13 – Quantum gravity and the emergence of spacetime
1:08:34 – Our branch of reality in many-worlds interpretation
1:10:40 – Time travel
1:12:54 – Arrow of time
1:16:18 – What is fundamental in physics
1:16:58 – Quantum computers
1:17:42 – Experimental validation of many-worlds and emergent spacetime
1:19:53 – Quantum mechanics and the human mind
1:21:51 – Mindscape podcast

Garry Kasparov: Chess, Deep Blue, AI, and Putin

Garry Kasparov is considered by many to be the greatest chess player of all time. From 1986 until his retirement in 2005, he dominated the chess world, ranking world number 1 for most of those 19 years. While he has had many historic matches against human chess players, in the long arc of history he may be remembered for his match against a machine, IBM’s Deep Blue. His initial victories and eventual loss to Deep Blue captivated the world’s imagination about the role Artificial Intelligence systems may play in our civilization’s future. That excitement inspired an entire generation of AI researchers, including myself, to get into the field. Garry is also a pro-democracy political thinker and leader, a fearless human-rights activist, and the author of several books, including How Life Imitates Chess, a book on strategy and decision-making, Winter Is Coming, a book articulating his opposition to the Putin regime, and Deep Thinking, a book on the role of both artificial and human intelligence in defining our future. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
01:33 – Love of winning and hatred of losing
04:54 – Psychological elements
09:03 – Favorite games
16:48 – Magnus Carlsen
23:06 – IBM Deep Blue
37:39 – Morality
38:59 – Autonomous vehicles
42:03 – Fall of the Soviet Union
45:50 – Putin
52:25 – Life

Michio Kaku: Future of Humans, Aliens, Space Travel & Physics

Michio Kaku is a theoretical physicist, futurist, and professor at the City College of New York. He is the author of many fascinating books on the nature of our reality and the future of our civilization. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
01:14 – Contact with Aliens in the 21st century
06:36 – Multiverse and Nirvana
09:46 – String Theory
11:07 – Einstein’s God
15:01 – Would aliens hurt us?
17:34 – What would aliens look like?
22:13 – Brain-machine interfaces
27:35 – Existential risk from AI
30:22 – Digital immortality
34:02 – Biological immortality
37:42 – Does mortality give meaning?
43:42 – String theory
47:16 – Universe as a computer and a simulation
53:16 – First human on Mars

David Ferrucci: IBM Watson, Jeopardy & Deep Conversations with AI

David Ferrucci led the team that built Watson, the IBM question-answering system that beat the top humans in the world at the game of Jeopardy. He is also the Founder, CEO, and Chief Scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
01:06 – Biological vs computer systems
08:03 – What is intelligence?
31:49 – Knowledge frameworks
52:02 – IBM Watson winning Jeopardy
1:24:21 – Watson vs human difference in approach
1:27:52 – Q&A vs dialogue
1:35:22 – Humor
1:41:33 – Good test of intelligence
1:46:36 – AlphaZero, AlphaStar accomplishments
1:51:29 – Explainability, induction, deduction in medical diagnosis
1:59:34 – Grand challenges
2:04:03 – Consciousness
2:08:26 – Timeline for AGI
2:13:55 – Embodied AI
2:17:07 – Love and companionship
2:18:06 – Concerns about AI
2:21:56 – Discussion with AGI

Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI

Gary Marcus is a professor emeritus at NYU and the founder of Robust.AI and Geometric Intelligence, the latter a machine learning company acquired by Uber in 2016. He is the author of several books on natural and artificial intelligence, including his new book Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and discussing the challenges the AI community must solve in order to achieve artificial general intelligence. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
01:37 – Singularity
05:48 – Physical and psychological knowledge
10:52 – Chess
14:32 – Language vs physical world
17:37 – What does AI look like 100 years from now
21:28 – Flaws of the human mind
25:27 – General intelligence
28:25 – Limits of deep learning
44:41 – Expert systems and symbol manipulation
48:37 – Knowledge representation
52:52 – Increasing compute power
56:27 – How human children learn
57:23 – Innate knowledge and learned knowledge
1:06:43 – Good test of intelligence
1:12:32 – Deep learning and symbol manipulation
1:23:35 – Guitar

Peter Norvig: Artificial Intelligence: A Modern Approach

Peter Norvig is a research director at Google and the co-author, with Stuart Russell, of the book Artificial Intelligence: A Modern Approach, which educated and inspired a whole generation of researchers, including myself, to get into the field. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
00:37 – Artificial Intelligence: A Modern Approach
09:11 – Covering the entire field of AI
15:42 – Expert systems and knowledge representation
18:31 – Explainable AI
23:15 – Trust
25:47 – Education – Intro to AI – MOOC
32:43 – Learning to program in 10 years
37:12 – Changing nature of mastery
40:01 – Code review
41:17 – How have you changed as a programmer
43:05 – LISP
47:41 – Python
48:32 – Early days of Google Search
53:24 – What does it take to build human-level intelligence
55:14 – Her
57:00 – Test of intelligence
58:41 – Future threats from AI
1:00:58 – Exciting open problems in AI

Leonard Susskind: Quantum Mechanics, String Theory, and Black Holes

Leonard Susskind is a professor of theoretical physics at Stanford University and the founding director of the Stanford Institute for Theoretical Physics. He is widely regarded as one of the fathers of string theory and, more generally, as one of the greatest physicists of our time, both as a researcher and an educator. This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon. Here’s the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

00:00 – Introduction
01:02 – Richard Feynman
02:09 – Visualization and intuition
06:45 – Ego in Science
09:27 – Academia
11:18 – Developing ideas
12:12 – Quantum computers
21:37 – Universe as an information processing system
26:35 – Machine learning
29:47 – Predicting the future
30:48 – String theory
37:03 – Free will
39:26 – Arrow of time
46:39 – Universe as a computer
49:45 – Big bang
50:50 – Infinity
51:35 – First image of a black hole
54:08 – Questions within the reach of science
55:55 – Questions out of reach of science