#75 – Marcus Hutter: Universal Artificial Intelligence, AIXI, and AGI

Marcus Hutter is a senior research scientist at DeepMind and a professor at the Australian National University. Throughout his research career, including collaborations with Jürgen Schmidhuber and Shane Legg, he has proposed many influential ideas in and around the field of artificial general intelligence, including the AIXI model, a mathematical approach to AGI that combines ideas from Kolmogorov complexity, Solomonoff induction, and reinforcement learning.
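
For the mathematically curious, AIXI's action rule (in the standard form from Hutter's papers) makes that combination explicit: the agent chooses actions to maximize expected future reward, weighting every computable environment q by the Solomonoff prior 2^{-ℓ(q)}:

a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \left[ r_t + \cdots + r_m \right] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Here U is a universal Turing machine, the a's, o's, and r's are the actions, observations, and rewards of the agent-environment interaction, ℓ(q) is the length of program q, and m is the horizon. The 2^{-ℓ(q)} weighting is Solomonoff induction (a formalized Occam's razor over programs), and the nested max/sum structure is expectimax planning from reinforcement learning.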

EPISODE LINKS:
Hutter Prize: http://prize.hutter1.net
Marcus web: http://www.hutter1.net
Books mentioned:
– Universal AI: https://amzn.to/2waIAuw
– AI: A Modern Approach: https://amzn.to/3camxnY
– Reinforcement Learning: https://amzn.to/2PoANj9
– Theory of Knowledge: https://amzn.to/3a6Vp7x

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
03:32 – Universe as a computer
05:48 – Occam’s razor
09:26 – Solomonoff induction
15:05 – Kolmogorov complexity
20:06 – Cellular automata
26:03 – What is intelligence?
35:26 – AIXI – Universal Artificial Intelligence
1:05:24 – Where do rewards come from?
1:12:14 – Reward function for human existence
1:13:32 – Bounded rationality
1:16:07 – Approximation in AIXI
1:18:01 – Gödel machines
1:21:51 – Consciousness
1:27:15 – AGI community
1:32:36 – Book recommendations
1:36:07 – Two moments to relive (past and future)

#74 – Michael I. Jordan: Machine Learning, Recommender Systems, and the Future of AI

Michael I. Jordan is a professor at Berkeley, and one of the most influential people in the history of machine learning, statistics, and artificial intelligence. He has been cited over 170,000 times and has mentored many of the world-class researchers defining the field of AI today, including Andrew Ng, Zoubin Ghahramani, Ben Taskar, and Yoshua Bengio.

EPISODE LINKS:
(Blog post) Artificial Intelligence—The Revolution Hasn’t Happened Yet

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
03:02 – How far are we in the development of AI?
08:25 – Neuralink and brain-computer interfaces
14:49 – The term “artificial intelligence”
19:00 – Does science progress by ideas or personalities?
19:55 – Disagreement with Yann LeCun
23:53 – Recommender systems and distributed decision-making at scale
43:34 – Facebook, privacy, and trust
1:01:11 – Are human beings fundamentally good?
1:02:32 – Can a human life and society be modeled as an optimization problem?
1:04:27 – Is the world deterministic?
1:04:59 – Role of optimization in multi-agent systems
1:09:52 – Optimization of neural networks
1:16:08 – Beautiful idea in optimization: Nesterov acceleration
1:19:02 – What is statistics?
1:29:21 – What is intelligence?
1:37:01 – Advice for students
1:39:57 – Which language is more beautiful: English or French?

#73 – Andrew Ng: Deep Learning, Education, and Real-World AI

Andrew Ng is one of the most impactful educators, researchers, innovators, and leaders in artificial intelligence and the technology space in general. He co-founded Coursera and Google Brain, launched deeplearning.ai, Landing.ai, and the AI Fund, and was the Chief Scientist at Baidu. As a Stanford professor, and through Coursera and deeplearning.ai, he has helped educate and inspire millions of students, including me.

EPISODE LINKS:
Andrew Twitter: https://twitter.com/AndrewYNg
Andrew Facebook: https://www.facebook.com/andrew.ng.96
Andrew LinkedIn: https://www.linkedin.com/in/andrewyng/
deeplearning.ai: https://www.deeplearning.ai
landing.ai: https://landing.ai
AI Fund: https://aifund.ai/
AI for Everyone: https://www.coursera.org/learn/ai-for-everyone
The Batch newsletter: https://www.deeplearning.ai/thebatch/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

This episode is also supported by the Techmeme Ride Home podcast. Get it on Apple Podcasts, on its website, or find it by searching “Ride Home” in your podcast app.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:23 – First few steps in AI
05:05 – Early days of online education
16:07 – Teaching on a whiteboard
17:46 – Pieter Abbeel and early research at Stanford
23:17 – Early days of deep learning
32:55 – Quick preview: deeplearning.ai, landing.ai, and AI fund
33:23 – deeplearning.ai: how to get started in deep learning
45:55 – Unsupervised learning
49:40 – deeplearning.ai (continued)
56:12 – Career in deep learning
58:56 – Should you get a PhD?
1:03:28 – AI fund – building startups
1:11:14 – Landing.ai – growing AI efforts in established companies
1:20:44 – Artificial general intelligence

#72 – Scott Aaronson: Quantum Computing

Scott Aaronson is a professor at UT Austin, director of its Quantum Information Center, and previously a professor at MIT. His research interests center on the capabilities and limits of quantum computers and computational complexity theory more generally.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

This episode is also supported by the Techmeme Ride Home podcast. Get it on Apple Podcasts, on its website, or find it by searching “Ride Home” in your podcast app.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
05:07 – Role of philosophy in science
29:27 – What is a quantum computer?
41:12 – Quantum decoherence (noise in quantum information)
49:22 – Quantum computer engineering challenges
51:00 – Moore’s Law
56:33 – Quantum supremacy
1:12:18 – Using quantum computers to break cryptography
1:17:11 – Practical application of quantum computers
1:22:18 – Quantum machine learning, questionable claims, and cautious optimism
1:30:53 – Meaning of life

Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence

Vladimir Vapnik is the co-inventor of support vector machines, support vector clustering, VC theory, and many foundational ideas in statistical learning. Born in the Soviet Union, he worked at the Institute of Control Sciences in Moscow, then moved to the US, where he worked at AT&T and NEC Labs, followed by Facebook AI Research; he is now a professor at Columbia University. His work has been cited over 200,000 times.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:55 – Alan Turing: science and engineering of intelligence
09:09 – What is a predicate?
14:22 – Plato’s world of ideas and world of things
21:06 – Strong and weak convergence
28:37 – Deep learning and the essence of intelligence
50:36 – Symbolic AI and logic-based systems
54:31 – How hard is 2D image understanding?
1:00:23 – Data
1:06:39 – Language
1:14:54 – Beautiful idea in statistical theory of learning
1:19:28 – Intelligence and heuristics
1:22:23 – Reasoning
1:25:11 – Role of philosophy in learning theory
1:31:40 – Music (speaking in Russian)
1:35:08 – Mortality

Jim Keller: Moore’s Law, Microprocessors, Abstractions, and First Principles

Jim Keller is a legendary microprocessor engineer who has worked at AMD, Apple, Tesla, and now Intel. He is known for his work on the AMD K7, K8, K12, and Zen microarchitectures and the Apple A4 and A5 processors, and for co-authoring the specifications for the x86-64 instruction set and the HyperTransport interconnect.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:12 – Difference between a computer and a human brain
03:43 – Computer abstraction layers and parallelism
17:53 – If you run a program multiple times, do you always get the same answer?
20:43 – Building computers and teams of people
22:41 – Start from scratch every 5 years
30:05 – Moore’s law is not dead
55:47 – Is superintelligence the next layer of abstraction?
1:00:02 – Is the universe a computer?
1:03:00 – Ray Kurzweil and exponential improvement in technology
1:04:33 – Elon Musk and Tesla Autopilot
1:20:51 – Lessons from working with Elon Musk
1:28:33 – Existential threats from AI
1:32:38 – Happiness and the meaning of life

David Chalmers: The Hard Problem of Consciousness

David Chalmers is a philosopher and cognitive scientist specializing in philosophy of mind, philosophy of language, and consciousness. He is perhaps best known for formulating the hard problem of consciousness, which can be stated as “why does the feeling which accompanies awareness of sensory information exist at all?”

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:23 – Nature of reality: Are we living in a simulation?
19:19 – Consciousness in virtual reality
27:46 – Music-color synesthesia
31:40 – What is consciousness?
51:25 – Consciousness and the meaning of life
57:33 – Philosophical zombies
1:01:38 – Creating the illusion of consciousness
1:07:03 – Conversation with a clone
1:11:35 – Free will
1:16:35 – Meta-problem of consciousness
1:18:40 – Is reality an illusion?
1:20:53 – Descartes’ evil demon
1:23:20 – Does AGI need consciousness?
1:33:47 – Exciting future
1:35:32 – Immortality

Cristos Goodrow: YouTube Algorithm

Cristos Goodrow is VP of Engineering at Google and head of Search and Discovery at YouTube (aka the YouTube algorithm).

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
03:26 – Life-long trajectory through YouTube
07:30 – Discovering new ideas on YouTube
13:33 – Managing healthy conversation
23:02 – YouTube Algorithm
38:00 – Analyzing the content of video itself
44:38 – Clickbait thumbnails and titles
47:50 – Feeling like I’m helping the YouTube algorithm get smarter
50:14 – Personalization
51:44 – What does success look like for the algorithm?
54:32 – Effect of YouTube on society
57:24 – Creators
59:33 – Burnout
1:03:27 – YouTube algorithm: heuristics, machine learning, human behavior
1:08:36 – How to make a viral video?
1:10:27 – Veritasium: Why Are 96,000,000 Black Balls on This Reservoir?
1:13:20 – Making clips from long-form podcasts
1:18:07 – Moment-by-moment signal of viewer interest
1:20:04 – Why is video understanding such a difficult AI problem?
1:21:54 – Self-supervised learning on video
1:25:44 – What does YouTube look like 10, 20, 30 years from now?

Paul Krugman: Economics of Innovation, Automation, Safety Nets & Universal Basic Income

Paul Krugman is a Nobel Prize winner in economics, a professor at CUNY, and a columnist for the New York Times. His academic work centers on international economics, economic geography, liquidity traps, and currency crises.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
03:44 – Utopia from an economics perspective
04:51 – Competition
06:33 – Well-informed citizen
07:52 – Disagreements in economics
09:57 – Metrics of outcomes
13:00 – Safety nets
15:54 – Invisible hand of the market
21:43 – Regulation of tech sector
22:48 – Automation
25:51 – Metric of productivity
30:35 – Interaction of the economy and politics
33:48 – Universal basic income
36:40 – Divisiveness of political discourse
42:53 – Economic theories
52:25 – Starting a system on Mars from scratch
55:11 – International trade
59:08 – Writing in a time of radicalization and Twitter mobs

Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems

Ayanna Howard is a roboticist and professor at Georgia Tech and director of the Human-Automation Systems Lab. Her research interests include human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments.

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play) and use code “LexPodcast”.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

OUTLINE:
00:00 – Introduction
02:09 – Favorite robot
05:05 – Autonomous vehicles
08:43 – Tesla Autopilot
20:03 – Ethical responsibility of safety-critical algorithms
28:11 – Bias in robotics
38:20 – AI in politics and law
40:35 – Solutions to bias in algorithms
47:44 – HAL 9000
49:57 – Memories from working at NASA
51:53 – SpotMini and Bionic Woman
54:27 – Future of robots in space
57:11 – Human-robot interaction
1:02:38 – Trust
1:09:26 – AI in education
1:15:06 – Andrew Yang, automation, and job loss
1:17:17 – Love, AI, and the movie Her
1:25:01 – Why do so many robotics companies fail?
1:32:22 – Fear of robots
1:34:17 – Existential threats of AI
1:35:57 – Matrix
1:37:37 – Hang out for a day with a robot