
About Lex Fridman

Host of Lex Fridman Podcast. Research Scientist at MIT, working on human-AI interaction, robotics, and machine learning.

#491 – OpenClaw: The Viral AI Agent that Broke the Internet – Peter Steinberger

Peter Steinberger is the creator of OpenClaw, an open-source AI agent framework that’s the fastest-growing project in GitHub history.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep491-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript:
https://lexfridman.com/peter-steinberger-transcript

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Peter’s X: https://x.com/steipete
Peter’s GitHub: https://github.com/steipete
Peter’s Website: https://steipete.com
Peter’s LinkedIn: https://www.linkedin.com/in/steipete
OpenClaw Website: https://openclaw.ai
OpenClaw GitHub: https://github.com/openclaw/openclaw
OpenClaw Discord: https://discord.gg/openclaw

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Perplexity: AI-powered answer engine.
Go to https://perplexity.ai/
Quo: Phone system (calls, texts, contacts) for businesses.
Go to https://quo.com/lex
CodeRabbit: AI-powered code reviews.
Go to https://coderabbit.ai/lex
Fin: AI agent for customer service.
Go to https://fin.ai/lex
Blitzy: AI agent for large enterprise codebases.
Go to https://blitzy.com/lex
Shopify: Sell stuff online.
Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex

OUTLINE:
(00:00) – Introduction
(03:51) – Sponsors, Comments, and Reflections
(15:29) – OpenClaw origin story
(18:48) – Mind-blowing moment
(28:15) – Why OpenClaw went viral
(32:12) – Self-modifying AI agent
(36:57) – Name-change drama
(54:07) – Moltbook saga
(1:02:26) – OpenClaw security concerns
(1:11:07) – How to code with AI agents
(1:42:02) – Programming setup
(1:48:45) – GPT Codex 5.3 vs Claude Opus 4.6
(1:57:52) – Best AI agent for programming
(2:19:52) – Life story and career advice
(2:23:49) – Money and happiness
(2:27:41) – Acquisition offers from OpenAI and Meta
(2:44:51) – How OpenClaw works
(2:56:09) – AI slop
(3:02:13) – AI agents will replace 80% of apps
(3:10:50) – Will AI replace programmers?
(3:22:50) – Future of OpenClaw community

Transcript for OpenClaw: The Viral AI Agent that Broke the Internet – Peter Steinberger | Lex Fridman Podcast #491

This is a transcript of Lex Fridman Podcast #491 with Peter Steinberger.
The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human generated, and may have errors.


Episode highlight

Peter Steinberger
(00:00:00)
I watched my agent happily click the “I’m not a robot” button. I made the agent very aware. Like, it knows what its source code is. It understands th- how it sits and runs in its own harness. It knows where documentation is. It knows which model it runs. It understands its own system, and that made it very easy for an agent to… Oh, you don’t like something? You just prompt it into existence, and then the agent would just modify its own software. People talk about self-modifying software, I just built it. I actually think vibe coding is a slur.
Lex Fridman
(00:00:31)
You prefer agentic engineering?
Peter Steinberger
(00:00:33)
Yeah, I always tell people I’d- I do agentic engineering, and then maybe after 3:00 AM, I switch to vibe coding, and then I have regrets the next day.
Lex Fridman
(00:00:40)
What a walk of shame.
Peter Steinberger
(00:00:42)
Yeah, you just have to clean up and, like, fix your sh- shit.
Lex Fridman
(00:00:45)
We’ve all been there.
Peter Steinberger
(00:00:46)
I used to write really long prompts. And by writing, I mean, I don’t write, I- I- I talk, you know? These- these hands are, like, too- too precious for writing now. I just- I just use spoken prompts to build my software.
Lex Fridman
(00:01:00)
So, you, for real, with all those terminals, are using voice?
Peter Steinberger
(00:01:04)
Yeah. I used to do it very extensively, to the point where there was a period where I lost my voice.
Lex Fridman
(00:01:13)
I mean, I have to ask you, just curious. I- I know you’ve probably gotten huge offers from major companies. Can you speak to who you’re considering working with?
Peter Steinberger
(00:01:27)
Yeah.

Introduction

Lex Fridman
(00:01:30)
The following is a conversation with Peter Steinberger, creator of OpenClaw, formerly known as Moltbot, Clawdbot, Clawdis, Clawd, spelled with a W, as in lobster claw. Not to be confused with Claude, the AI model from Anthropic, spelled with a U. In fact, this confusion is the reason Anthropic kindly asked Peter to change the name to OpenClaw. So, what is OpenClaw? It’s an open-source AI agent that has taken over the tech world in a matter of days, exploding in popularity, reaching over 180,000 stars on GitHub, and spawning the social network Moltbook, where AI agents post manifestos and debate consciousness, creating a mix of excitement and fear in the general public.
Lex Fridman
(00:02:19)
And a kind of AI psychosis, a mix of clickbait fearmongering and genuine, fully justifiable concern about the role of AI in our digital, interconnected human world. OpenClaw, as its tagline states, is the AI that actually does things. It’s an autonomous AI assistant that lives in your computer, has access to all of your stuff, if you let it, talks to you through Telegram, WhatsApp, Signal, iMessage, and whatever other messaging client you like. It uses whatever AI model you like, including Claude Opus 4.6 and GPT 5.3 Codex, all to do stuff for you. Many people are calling this one of the biggest moments in the recent history of AI, since the launch of ChatGPT in November 2022.
Lex Fridman
(00:03:07)
The ingredients for this kind of AI agent were all there, but putting it all together in a system that definitively takes a step forward over the line from language to agency, from ideas to actions, in a way that created a useful assistant that feels like one who gets you and learns from you, in an open source, community-driven way, is the reason OpenClaw took the internet by storm. Its power, in large part, comes from the fact that you can give it access to all of your stuff and give it permission to do anything with that stuff in order to be useful to you. This is very powerful, but it is also dangerous. OpenClaw represents freedom, but with freedom comes responsibility.
Lex Fridman
(00:03:51)
With it, you can own and have control over your data, but precisely because you have this control, you also have the responsibility to protect it from cybersecurity threats of various kinds. There are great ways to protect yourself, but the threats and vulnerabilities are out there. Again, a powerful AI agent with system-level access is a security minefield, but it also represents the future. Because when done well and securely, it can be extremely useful to each of us humans as a personal assistant. We discuss all of this with Peter, and also discuss his big-picture programming and entrepreneurship life story, which I think is truly inspiring. He spent 13 years building PSPDFKit, software that is used on a billion devices.
Lex Fridman
(00:04:41)
He sold it, and for a brief time, fell out of love with programming, vanished for three years, and then came back, rediscovered his love for programming, and built, in a very short time, an open source AI agent that took the internet by storm. He is, in many ways, the symbol of the AI revolution happening in the programming world. There was the ChatGPT moment in 2022, the DeepSeek moment in 2025, and now, in ’26, we’re living through the OpenClaw moment, the age of the lobster. The start of the agentic AI revolution. What a time to be alive. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Peter Steinberger.

OpenClaw origin story

Lex Fridman
(00:05:36)
The one and only, the Clawd Father. Actually, Benjamin predicted it in his tweet. “The following is a conversation with Clawd, a respected crustacean.” It’s a hilarious-looking picture of a lobster in a suit, so I think the prophecy has been fulfilled. Let’s go to this moment when you built a prototype in one hour, that was the early version of OpenClaw. I think this story’s really inspiring to a lot of people, because this prototype led to something that just took the internet by storm… and became the fastest-growing repository in GitHub history, with now over 175,000 stars. So, what was the story of the one-hour prototype?
Peter Steinberger
(00:06:20)
You know, I wanted that since April.
Lex Fridman
(00:06:23)
A personal assistant. AI personal assistant.
Peter Steinberger
(00:06:25)
Yeah. And I, I played around with some other things, like even stuff that gets all my WhatsApp, and I could just run queries on it. That was back when we had GPT-4.1, with the one million context window. And I, I pulled in all the data and then just asked him questions like, “What makes this friendship meaningful?”
Lex Fridman
(00:06:50)
Mm-hmm.
Peter Steinberger
(00:06:50)
And I got some, some really profound results. Like, I sent it to my friends and they got, like, teary eyes.
Lex Fridman
(00:06:59)
So, there’s something there.
Peter Steinberger
(00:07:01)
Yeah. But then I… I thought all the labs will, will, will work on that. So I, I moved on to other things, and that was still very much in my early days of experimenting and pl- playing. You know, you have to… That’s how you learn. You just like, you do stuff and you play. And time flew by and it was November. I wanted to make sure that the thing I started is actually happening. I was annoyed that it didn’t exist, so I just prompted it into existence.
Lex Fridman
(00:07:36)
I mean, that’s the beginning of the hero’s journey of the entrepreneur, right? And even with your original story with PSPDFKit, it’s like, “Why does this not exist? Let me build it.” And again, here it’s a whole different realm, but maybe a similar spirit.
Peter Steinberger
(00:07:52)
Yeah, so I had this problem. I tried to show PDF on an iPad, which should not be hard.
Lex Fridman
(00:07:56)
This is like 15 years ago, something like that.
Peter Steinberger
(00:07:59)
Yeah. Like the most, the most random thing ever. And suddenly, I had this problem and I, I wanted to help a friend. And there was, there was… Well, not like nothing existed, but it was just not good. And like… Like I tried it and it was like very, “Nah.” Like, “Hmm, I can do this better.”
Lex Fridman
(00:08:17)
By the way, for people who don’t know, this led to the development of PSPDFKit, which is used on a billion devices. So, the… It turns out that it’s pretty useful to be able to open a PDF.
Peter Steinberger
(00:08:28)
You could also make the joke that I’m really bad at naming.
Lex Fridman
(00:08:32)
Yeah.
Peter Steinberger
(00:08:32)
Like, name number five on the current project. And even PSPDFKit doesn’t really roll off the tongue.
Lex Fridman
(00:08:39)
Anyway, so you said “Screw it. Why don’t I do it?” So what was the… What was the prototype? What was the thing that you… What was the magical thing that you built in a short amount of time that you were like, “This might actually work as an agent,” where I talk to it and it does things?

Mind-blowing moment

Peter Steinberger
(00:08:55)
There was… Like, one of my projects before already did something where I could bring my terminals onto the web and then I could, like, interact with them, but there also would be terminals on my Mac.
Lex Fridman
(00:09:06)
Mm-hmm.
Peter Steinberger
(00:09:07)
VibeTunnel, which was like a, a weekend hack project that was still very early. And it was Claude Code times. You know, you got a dopamine hit when it got something right. And now I get, like, mad when it gets something wrong.
Lex Fridman
(00:09:22)
And you had a really great, not to take a tangent, but a great blog post describing that you converted VibeTunnel. You vibe-coded VibeTunnel from TypeScript into Zig, of all programming languages, with a single prompt. One prompt, one shot. Convert the entire code base into Zig.
Peter Steinberger
(00:09:41)
Yeah. There was this one thing where part of the architecture… took too much memory. Every terminal used, like, a Node process. And I wanted to change it to Rust and… I mean, I can do it. I can, I can manually figure it all out, but all my automated attempts failed miserably. And then I revisited about four or five months later. And I’m like, “Okay, now let’s use something even more experimental.” And I, and I just typed, “Convert this and this part to Zig,” and then let Codex run off. And it basically got it right. There was one little detail that I had to, like, modify afterwards, but it just ran overnight, for like six hours, and just did its thing. And it’s like… It’s just mind-blowing.
Lex Fridman
(00:10:39)
So that’s on the LLM programming side, refactoring. But uh, back to the actual story of the prototype. So how did VibeTunnel connect to the first prototype where your, like, agent could actually work?
Peter Steinberger
(00:10:52)
Well, that was still very limited. You know, like I had this one experiment with WhatsApp, then I had this experiment, and both felt like not the right answer. And then the next one was literally just hooking up WhatsApp to Claude Code. One shot. A message comes in, I call the CLI with -p, it does its magic, I get the string back, and I send it back to WhatsApp. And I, I built this in one hour. And it… Already felt really cool. It’s like, “Oh, I could… I can, like, talk to my computer,” right? This… That, that was, that was cool. But I, I wanted images, ’cause I alw- I often use images when I prompt. I think it’s such a, such an efficient way to give the agent more context.
Peter Steinberger
(00:11:40)
And they are really good at figuring out what I mean, e- even if it’s like a, a weird cropped-up screenshot. So I used it a lot and I wanted to do that in WhatsApp as well. Also, like, you know, just you run around, you see like a poster of an event, you just make a screenshot and like figure out if I have time there, if this is good, if my friends are maybe up for that. Just like images seemed im- important. So I, I worked a few… It took me a few more hours to actually get that right. And then it was just…… I, I used it a lot. And funny enough, that was just before I went on a trip to Marrakesh with my friends for a birthday trip. And there it was even better because internet was a little shaky but WhatsApp just works, you know?
Peter Steinberger
(00:12:29)
It’s like, doesn’t matter if you only have, like, EDGE, it still works. WhatsApp is just… It’s just made really well. So I ended up using it a lot. Translate this for me, explain this, find me places. Like, just having a clanker doing the Googling for you, that was… Basically there was still nothing built, but it could still do so much.
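The one-hour relay Peter describes is thin enough to sketch in a few lines. The following is a rough, hypothetical TypeScript version: the `claude -p` one-shot invocation matches what he describes in the conversation, but the function names and the messaging-client wiring are illustrative assumptions, not OpenClaw’s actual code:

```typescript
// Hypothetical sketch of the one-hour prototype: relay a chat message to a
// coding-agent CLI in one-shot "print" mode (-p) and send the reply back.
import { execFile } from "node:child_process";

// Build the one-shot invocation described in the conversation: `claude -p "<prompt>"`.
function buildAgentCommand(prompt: string): { cmd: string; args: string[] } {
  return { cmd: "claude", args: ["-p", prompt] };
}

// Run the CLI once and resolve with its stdout (the agent's reply text).
function askAgent(prompt: string): Promise<string> {
  const { cmd, args } = buildAgentCommand(prompt);
  return new Promise((resolve, reject) => {
    execFile(cmd, args, (err, stdout) => (err ? reject(err) : resolve(stdout.trim())));
  });
}

// Wiring it into a messaging client is then a single handler, e.g.:
//   client.on("message", async (msg) => msg.reply(await askAgent(msg.body)));
```

Everything else in the prototype, per the story, is just this thin pipe: message in, string out.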
Lex Fridman
(00:12:53)
So, if we talk about the full journey that’s happening there with the agent, you’re just sending, on this very thin line, a WhatsApp message via the CLI, it’s going to Claude Code, and Claude Code is doing all kinds of heavy work and coming back to you with a thin message.
Peter Steinberger
(00:13:13)
Yeah. It was slow, because every time it boots up the CLI, but it… It was really cool already. And it could just use all the things that I already had built. I had built like a whole bunch of CLI stuff over the months, so it, it felt really powerful.
Lex Fridman
(00:13:31)
There is something magical about that experience that’s hard to put into words. Being able to use a chat client to talk to an agent, versus, like, sitting behind a computer and, I don’t know, using Cursor or even using the Claude Code CLI in the terminal. It’s a different experience than being able to sit back and talk to it. I mean, it seems like a trivial step, but in some sense it’s a… It’s like a phase shift in the integration of AI into your life and how it feels, right?
Peter Steinberger
(00:14:05)
Yeah. Yeah. I, I read this tweet this morning where someone said, “Oh, there’s no magic in it. It’s just like, it does this and this and this and this and this and this.” And that it’s almost just a wrapper, like Cursor or Perplexity. And I’m like, well, if that’s a wrapper, that’s kind of a compliment, you know? Like, they’re, they’re not doing too bad. Thank you, I guess? Yes. I mean, isn’t, isn’t, isn’t magic often just that you take a lot of things that are already there but bring them together in new ways? Like, I don’t… There’s no… Yeah. Maybe there’s no magic in there, but sometimes just rearranging things and, like, adding a few new ideas is all the magic that you need.
Lex Fridman
(00:14:51)
It’s really hard to convert into words what is, what is magic about a thing. If you look at the, the scrolling on an iPhone, why is that so pleasant? There’s a lot of elements about that interface that makes it incredibly pleasant, that is fundamental to the experience of using a smartphone, and it’s like, okay, all the components were there. Scrolling was there, everything was there.
Peter Steinberger
(00:15:13)
Nobody did it-
Lex Fridman
(00:15:14)
Yep
Peter Steinberger
(00:15:14)
… and afterwards it felt so obvious.
Lex Fridman
(00:15:16)
Yeah, so obvious.
Peter Steinberger
(00:15:16)
Right? But still… You know the moment where it, it blew my mind was when, when I- I used it a lot and then at some point I just sent it a message and, and then a typing indicator appeared. And I’m like, wait, I didn’t build that, it only m- it only has image support, so what is it even doing? And then it would just reply.
Lex Fridman
(00:15:42)
What was the thing you sent it?
Peter Steinberger
(00:15:43)
Oh, just a random question like, “Hey, what about this in this restaurant?” You know? Because we were just running around and checking out the city. So that’s why I, I didn’t, didn’t even think when I used it because sometimes when you’re in a hurry typing is annoying.
Lex Fridman
(00:15:59)
So, oh, you did an audio message?
Peter Steinberger
(00:16:00)
Yeah. And it just, it just worked and I’m like…
Lex Fridman
(00:16:03)
And it’s not supposed to work because-
Peter Steinberger
(00:16:05)
No
Lex Fridman
(00:16:05)
… you didn’t give it that-
Peter Steinberger
(00:16:07)
No, literally
Lex Fridman
(00:16:07)
… capability.
Peter Steinberger
(00:16:08)
I literally went, “How the fuck did he do that?” And it was like, “Yeah, the mad lad did the following. He sent me a message, but it was only a file with no file ending. So I checked out the header of the file and found that it was, like, Opus, so I used ffmpeg to convert it, and then I wanted to use Whisper, but it didn’t have it installed. But then I found the OpenAI key and just used curl to send the file to OpenAI to transcribe, and here I am.”
Peter Steinberger
(00:16:39)
Just looked at the message I’m like, “Oh wow.”
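The improvisation the agent walked through can be reconstructed as a short sketch. The magic-byte check is standard (WhatsApp voice notes are Opus audio inside an Ogg container, whose files begin with “OggS”); the helper below and the shell commands in the comments are my assumptions about what the agent did, not its actual output:

```typescript
// Hypothetical reconstruction: identify a file with no file ending by its
// magic bytes, then convert and transcribe it.
function sniffContainer(header: Uint8Array): "ogg" | "wav" | "unknown" {
  const magic = String.fromCharCode(...header.slice(0, 4));
  if (magic === "OggS") return "ogg"; // Ogg container; WhatsApp voice notes carry Opus here
  if (magic === "RIFF") return "wav"; // RIFF/WAVE header
  return "unknown";
}

// Once identified, the remaining steps are plain shell commands:
//   ffmpeg -i voice-note.ogg voice-note.mp3
//   curl https://api.openai.com/v1/audio/transcriptions \
//     -H "Authorization: Bearer $OPENAI_API_KEY" \
//     -F model=whisper-1 -F file=@voice-note.mp3
```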
Lex Fridman
(00:16:43)
You didn’t teach it any of those things, and the agent just figured it out, did all those conversions, the transcriptions. It figured out the API, it figured out which program to use, all those kinds of things. And you just absent-mindedly sent an audio message, and then it came back.
Peter Steinberger
(00:16:56)
Yeah, like, so clever even, because if he would have gone the local Whisper path, he would have had to download a model. It would have been too slow. So like, there’s so much world knowledge in there, so much creative problem solving. A lot of it, I think, mapped from… If you get really good at coding, that means you have to be really good at general purpose problem solving. So that’s a skill, right? And that just maps into other domains. So it had the problem of, like, what is this file with no file ending? Let’s figure it out. And that’s when it kind of clicked for me. It’s like, I was like very impressed. And somebody sent a pull request for Discord support and I’m like, “This is a WhatsApp relay.
Peter Steinberger
(00:17:37)
That doesn’t, doesn’t fit at all.”
Lex Fridman
(00:17:40)
At that time it was called WA Relay.
Peter Steinberger
(00:17:42)
Yeah. And so I debated with myself, like, do I want that? Do I not want that? And then I thought, well maybe, maybe I do that, because that could be a cool way to show people. Because so far I did it in WhatsApp, as, like, groups, you know, but I don’t really want to give my phone number to every internet stranger.
Lex Fridman
(00:18:07)
Yeah.
Peter Steinberger
(00:18:07)
Journalists manage to do that anyhow now, so that’s a different story. So I merged it… it was from Shadow, who helped me a lot with the whole project. So, thank you. And, and I put my, my bot in there.

Why OpenClaw went viral

Lex Fridman
(00:18:27)
On Discord?
Peter Steinberger
(00:18:28)
Yeah. No security, because I didn’t… I hadn’t built sandboxing in yet. I, I just prompted it to, like, only listen to me. And then some people came and tried to hack it, and I just… Or, like, just watched and I just kept working in the open, you know? Like, y- I used my agent to build my agent harness and to test, like, various stuff. And that’s very quickly when it clicked for people. So it’s almost like it needs to be experienced. And from that time on, that was January the 1st, I, I got my first real influencer being a fan and doing videos, dachitze. Thank you. And, and from there on, it started gaining speed. And at the same time, my, my sleep cycle went shorter and shorter because I, I felt the storm coming, and I just worked my ass off to get it to…
Peter Steinberger
(00:19:33)
into a state where it’s kinda good.
Lex Fridman
(00:19:38)
There’s a few components and we’ll talk about how it all works, but basically, you’re able to talk to it using WhatsApp, Telegram, Discord. So that’s a component that you have to get right.
Peter Steinberger
(00:19:48)
Yeah.
Lex Fridman
(00:19:49)
And then you have to figure out the agentic loop, you have to have the gateway, you have the harness, you have all those components that make it all just work nicely.
Peter Steinberger
(00:19:56)
Yeah. It felt like Factorio times infinite.
Lex Fridman
(00:20:00)
Right.
Peter Steinberger
(00:20:01)
I, I feel like I built my little- … my little playground. Like, I never had so much fun as with building this project. You know? Like, you have like, “Oh,” I go like, level one, agentic loop. What can I do there? How can I be smart at queuing messages? How can I make it more human-like? Oh, then I had this idea of… Because the loop always… The agent always replies something, but you don’t always want an agent to reply something in a group chat. So I gave him this no-reply token. So I gave him an option to shut up. So it, it feels more natural.
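The “option to shut up” fits in a few lines. This is a hypothetical sketch; the sentinel string and handler name are made up for illustration, not OpenClaw’s actual token:

```typescript
// Hypothetical no-reply sentinel: the agent emits this token when staying
// silent is the natural move (e.g. in a group chat), and the loop sends nothing.
const NO_REPLY = "<no-reply>";

// Returns the text to send back to the chat, or null to stay silent.
function dispatchReply(agentOutput: string): string | null {
  const text = agentOutput.trim();
  if (text === NO_REPLY || text.length === 0) return null;
  return text;
}
```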
Lex Fridman
(00:20:32)
That’s level two.
Peter Steinberger
(00:20:34)
Y- uh, yeah, yeah. Yeah, on the- on the-
Lex Fridman
(00:20:36)
Factorio.
Peter Steinberger
(00:20:36)
On the agentic loop. And then I go to memory, right?
Lex Fridman
(00:20:39)
Yeah.
Peter Steinberger
(00:20:39)
You want him to, like, remember stuff. So maybe, maybe the end… The ultimate boss is continuous reinforcement learning, but I’m, I’m, like, at… I feel like I’m level two or three with Markdown files and the vector database. And then you, you can go to level community management, you can go to level website and marketing. There’s just so many hats that you have to have on. Not even talking about native apps. That’s just, like, infinite different levels and infinite level ups you can do.
Lex Fridman
(00:21:08)
So the whole time you’re having fun. We should say that for the most part, throughout this whole process, you’re a one-man team. There’s people helping, but you’re doing so much of the key core development.
Peter Steinberger
(00:21:21)
Yeah.
Lex Fridman
(00:21:21)
And having fun? You did, in January, 6,600 commits. Probably more.
Peter Steinberger
(00:21:28)
I sometimes posted a meme: I’m limited by the technology of my time. I could do more if agents were faster.
Lex Fridman
(00:21:34)
But we should say you’re running multiple agents at the same time.
Peter Steinberger
(00:21:37)
Yeah. Depending on how much I slept and how difficult the tasks I work on are, between four and 10.
Lex Fridman
(00:21:45)
Four and 10 agents. Uh, there’s so many possible directions, speaking of Factorio, that we can go here. But one big-picture one is, why do you think your work, OpenClaw, won? In this world, if you look at 2025, so many startups, so many companies were doing kind of agentic-type stuff, or claiming to. And here, OpenClaw comes in and destroys everybody. Like, why did you win?
Peter Steinberger
(00:22:15)
Because they all take themselves too seriously.
Lex Fridman
(00:22:18)
Yeah.

Self-modifying AI agent

Peter Steinberger
(00:22:19)
Like, it’s hard to compete against someone who’s just there to have fun.
Lex Fridman
(00:22:24)
Yeah.
Peter Steinberger
(00:22:24)
I wanted it to be fun, I wanted it to be weird. And if you see, like, all the, all the lobster stuff online, I think I, I managed weird. I… You know, for the longest time, the only, the only way to install it was git clone, pnpm build, pnpm gateway. Like, you clone it, you build it, you run it. And then the, the agent… I made the agent very aware. Like, it knows that it is… What its source code is. It understands th- how it sits and runs in its own harness. It knows where documentation is. It knows which model it runs. It knows if you turn on the voice or, or reasoning mode. Like, I, I wanted it to be more human-like, so it understands its own system. That made it very easy for an agent to… Oh, you don’t like something?
Peter Steinberger
(00:23:19)
You just prompt it into existence, and then the agent would just modify its own software. You know, people talk about self-modifying software. I just built it and didn’t even… I didn’t even plan it so much. It just happened.
Lex Fridman
(00:23:35)
Can you actually speak to that? ‘Cause it’s just fascinating. So you have this piece of software that’s written in TypeScript-
Peter Steinberger
(00:23:43)
Yeah
Lex Fridman
(00:23:43)
… that’s able to, via the agentic loop, modify itself. I mean, what a moment to be alive in the history of humanity and the history of programming. Here’s the thing that’s used by a huge amount of people to do incredibly powerful things in their lives, and that very system can rewrite itself, can modify itself. Can you just, like, speak to the power of that? Like, isn’t that incredible? Like, when did you first close the loop on that?
Peter Steinberger
(00:24:14)
Oh, because that’s how I built it as well, you know? Most of it is built by Codex, but oftentimes, when I debug it, I… I use self-introspection so much. It’s like, “Hey, what tools do you see? Can you call the tool yourself?” Or like, “What error do you see? Read the source code. Figure out what’s the problem.” Like, I just found it an incredibly fun way to… That the agent, the very agent and software that you use, is used to debug itself. So it felt just natural that everybody does that. And it led to so many, so many pull requests by people who never wrote software. I mean, it also did show that these people never wrote software. So I call them prompt requests in the end.
Peter Steinberger
(00:25:00)
But I don’t want to, like, pull that down, because every time someone makes their first pull request, it’s a win for our society, you know? Like, it… Like, it doesn’t matter how, how shitty it is, y- you gotta start somewhere. So I know there’s, like, this whole big movement of people complaining about open source and the quality of PRs, and a whole different level of problems. But on a different level, I found it… I found it very meaningful that, that I built something that people love so much that they actually start to learn how open source works.
Lex Fridman
(00:25:37)
Yeah, you were… The OpenClaw project was their first pull request. You were the first for so many. That is magical. So many people that don’t know how to program are taking their first step into the programming world with this.
Peter Steinberger
(00:25:52)
Isn’t that a step up for humanity? Isn’t that cool?
Lex Fridman
(00:25:54)
Creating builders.
Peter Steinberger
(00:25:56)
Yeah. Like, the bar to do that was so high, and, like, with agents, and with the right software, it just, like, went lower and lower. I don’t know. I was at a… And I also organize another type of meetup. I call it… I called it Claude Code Anonymous. You can guess where the inspiration came from. Now, I call it Agents Anonymous- … for, for reasons.
Lex Fridman
(00:26:23)
Agents Anonymous.
Peter Steinberger
(00:26:24)
And-
Lex Fridman
(00:26:25)
Oh, it’s so funny on so many levels. I’m sorry, go ahead.
Peter Steinberger
(00:26:29)
Yeah. And there was this one guy who, who talked to me. He’s like, “I run this design agency, and we, we never had custom software. And now I have, like, 25 little web services for various things that help me in my business. And I don’t even know how they work, but they work.” Uh, and he was just, like, very happy that my stuff solved some of his problems. And he was, like, curious enough that he actually came to, like, an, an agentic meetup, even though he… He doesn’t really know how software works.

Name-change drama

Lex Fridman
(00:27:04)
Can we actually rewind a little bit and tell the saga of the name change? First of all, it started out as WA Relay.
Peter Steinberger
(00:27:12)
Yeah.
Lex Fridman
(00:27:12)
And then it went to-
Peter Steinberger
(00:27:13)
Clawdis.
Lex Fridman
(00:27:14)
Clawdis.
Peter Steinberger
(00:27:15)
Yeah. You know, when I, when I built it in the beginning, my agent had no personality. It was just… It was Claude Code. It’s like this sycophantic Opus, very friendly. And I… When you talk to a friend on WhatsApp, they don’t talk like Claude Code. So I wanted… I, I felt this… I just didn’t f- It didn’t feel right, so I, I wanted to give it a personality.
Lex Fridman
(00:27:41)
Make it spicier, make it-
Peter Steinberger
(00:27:43)
Yeah
Lex Fridman
(00:27:43)
… something. By the way, that’s actually hard to put into words as well. And we should mention that, of course, you create the soul.md, inspired by Anthropic’s constitutional AI work-
Peter Steinberger
(00:27:53)
Mm-hmm
Lex Fridman
(00:27:53)
… how to make it spicy.
Peter Steinberger
(00:27:55)
Partially, it picked up a little bit from me. You know, like, those things are text completion engines in a way. So, so I, I, I, I had fun working with it, and then I told it how I wanted it to interact with me, and just, like, write your own agents.md, give yourself a name. And then we… I didn’t even know how the whole, the whole lobster… I mean, people only do lobster… Originally, it was actually a lobster in a, in a TARDIS, because I’m also a big Doctor Who fan.
Lex Fridman
(00:28:30)
Was there a space lobster?
Peter Steinberger
(00:28:31)
Yeah.
Lex Fridman
(00:28:31)
I heard. What’s that have to do with anything?
Peter Steinberger
(00:28:34)
Yeah, I just wanted to make it weird. There was no… There was no big grand plan. I’m just having fun here.
Lex Fridman
(00:28:40)
Oh, so I guess the lobster is already weird, and then the space lobster is an extra weird.
Peter Steinberger
(00:28:44)
Yeah, yeah, because the-
Lex Fridman
(00:28:45)
Yeah
Peter Steinberger
(00:28:45)
… the TARDIS is basically the, the harness, but I cannot call it TARDIS, so we called it Clawdis. So that was name number two.
Lex Fridman
(00:28:54)
Yeah.
Peter Steinberger
(00:28:54)
And then it never really rolled off the tongue. So when more people came, again, I talked with my agent, Clawd. At least that’s what I used to call him. Now-
Lex Fridman
(00:29:08)
Clawd, spelled with a W: C-L-A-W-D.
Peter Steinberger
(00:29:12)
Yeah.
Lex Fridman
(00:29:14)
Versus C-L-A-U-D-E from Anthropic.
Peter Steinberger
(00:29:20)
Yeah.
Lex Fridman
(00:29:21)
Which is part of what makes it funny, I think. The play on the letters and the words, and the TARDIS and the lobster and the space lobster, is hilarious. But I can see why it can lead to problems.
Peter Steinberger
(00:29:34)
Yeah, they didn’t find it so funny. So then I got the Clawdbot domain, and I just… I loved the domain. It was, like, short. It was catchy. I’m like, “Yeah, let’s do that.” I didn’t think it would be that big at this time. And then, just when it exploded, I got, kudos to them, a very friendly email from one of the employees saying they didn’t like the name.
Lex Fridman
(00:30:09)
One of the Anthropic employees.
Peter Steinberger
(00:30:11)
Yeah. So actually, kudos to them, because they could have just sent a lawyer’s letter, but they’ve been nice about it. But it was also like, “You have to change this, and fast.” And I asked for two days, because changing a name is hard. You have to find everything, you know: Twitter handle, domains, NPM packages, Docker registry, GitHub stuff. And everything has to be… you need a full set of everything.
Lex Fridman
(00:30:40)
And also, can we comment on the fact that you’re increasingly attacked and followed by crypto folks? Which I think you mentioned somewhere meant the name change had to be… Because they were trying to snipe, trying to steal, so you had to be… I mean, from an engineering perspective, it’s just fascinating. You had to make the name change atomic, make sure it’s changed everywhere at once.
Peter Steinberger
(00:31:06)
Yeah. I failed very hard at that.
Lex Fridman
(00:31:08)
You did?
Peter Steinberger
(00:31:08)
I underestimated those people. It’s a very interesting subculture. Everything circles around… I’ll probably get a lot wrong, and we’ll probably get hate if I say this, but… There is, like, the Bags app, and they tokenize everything. And they did the same back with VibeTunnel, but to a much smaller degree; it was not that annoying. But on this project, they’ve been swarming me. It’s like every half an hour, someone came into Discord and spammed it, and we had to block them. We have, like, server rules, and one of the rules is no mentioning of butter, for obvious reasons. And one was no talk about finance stuff or crypto. Because I’m…
Peter Steinberger
(00:32:04)
I’m just not interested in that, and this is a space about the project and not about some finance stuff. But yeah. They came in and spammed and… Annoying. And on Twitter, they would ping me all the time. My notification feed was unusable. I could barely see actual people talking about this stuff, because it was, like, swarms.
Lex Fridman
(00:32:28)
Mm-hmm.
Peter Steinberger
(00:32:28)
And everybody sent me the hashes. And they all tried to get me to claim the fees. Like, “Are you helping the project? Claim the fees.” No, you’re actually harming the project. You’re disrupting my work, and I am not interested in any fees. First of all, I’m financially comfortable. Second of all, I don’t want to support that, because it’s so far the worst form of online harassment that I’ve experienced.
Lex Fridman
(00:32:59)
Yeah. There’s a lot of toxicity in the crypto world. It’s sad, because the technology of cryptocurrency is fascinating, powerful, and maybe will define the future of money, but in the actual community around it there’s so much toxicity, so much greed. So much trying to find a shortcut, to manipulate, to steal, to snipe, to game the system somehow to get money. I mean, it’s human nature, I suppose, when you connect human nature with money and greed, especially in the online world with anonymity and all that kind of stuff. But from the engineering perspective, it makes your life challenging. When Anthropic reaches out, you have to do a name change.
Lex Fridman
(00:33:42)
And then there are all these, like, Game of Thrones or Lord of the Rings armies of different kinds you have to be aware of.
Peter Steinberger
(00:33:51)
Yeah. There was no perfect name, and I didn’t sleep for two nights. I was under high pressure, trying to get a good set of domains, and, you know, not cheap, not easy, ’cause in this state of the internet, you basically have to buy domains if you want a good set. And then another email came in saying the lawyers were getting uneasy. Again friendly, but also just adding more stress to my situation. So at this point I was just like, “Sorry, there’s no other word. Fuck it.” And I just renamed it to Moltbot, ’cause that was the set of domains I had. I was not really happy, but I thought it’d be fine. And I tell you, everything that could go wrong did go wrong.
Peter Steinberger
(00:34:49)
It’s incredible. I thought I had mapped the space out and reserved the important things.
Lex Fridman
(00:34:58)
Can you give some details of the stuff that went wrong? ’Cause it’s interesting from, like, an engineering perspective.
Peter Steinberger
(00:35:03)
Well, the interesting stuff is that none of these services have squatter protection. So I had two browser windows open. One was, like, an empty account ready to be renamed to Clawdbot, and in the other one I renamed to Moltbot. So I pressed rename there, I pressed rename there, and in those five seconds, they stole the account name. Literally, the five seconds of dragging the mouse over and pressing rename was too long.
Lex Fridman
(00:35:33)
Wow.
Peter Steinberger
(00:35:34)
Because there’s no… Those systems… I mean, you would expect that they’d have some protection or, like, an automatic forwarding, but there’s nothing like that. And I didn’t know that they’re not just good at harassment; they’re also really good at using scripts and tools.
Lex Fridman
(00:35:51)
Yeah.
Peter Steinberger
(00:35:53)
So, yeah. Suddenly the old account was promoting new tokens and serving malware. And I was like, “Okay, let’s move over to GitHub,” and I pressed rename on GitHub. And the GitHub renaming flow is slightly confusing, so I renamed my personal account. I guess it took me 30 seconds to realize my mistake. They sniped my account and were serving malware from it. So I was like, “Okay, let’s at least do the NPM stuff,” but that takes, like, a minute to upload. They sniped the NPM package, ’cause I could reserve the account, but I didn’t reserve the root package… so, like, everything that could go wrong went wrong.
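As an aside, the race Peter describes exists because most registries will tell you whether a name is taken but give you no way to reserve it. A minimal availability probe can be sketched like this. The npm registry and GitHub users endpoints are real public APIs that return 404 for unclaimed names, but the function names and the overall flow are illustrative, not anything OpenClaw actually ships:

```python
import urllib.request
import urllib.error

def name_free(status_code: int) -> bool:
    """A 404 from a registry/API lookup means nobody currently holds the name."""
    return status_code == 404

def lookup_status(url: str) -> int:
    """HTTP status for a GET on the given URL (200 if the name exists, else the error code)."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

def npm_package_free(name: str) -> bool:
    # The public npm registry serves package metadata at this URL.
    return name_free(lookup_status(f"https://registry.npmjs.org/{name}"))

def github_login_free(login: str) -> bool:
    # GitHub's REST API returns 404 for logins that no user or org holds.
    return name_free(lookup_status(f"https://api.github.com/users/{login}"))
```

Even a script like this only narrows the window: as Peter found, snipers run their own tooling, so nothing short of registry-side reservation makes a rename truly atomic.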
Lex Fridman
(00:36:47)
Can I just ask a curious question: in that moment, sitting there, how shitty do you feel? That’s a pretty hopeless feeling, right?
Peter Steinberger
(00:36:57)
Yeah. Because all I wanted was to have fun with that project and keep building on it. And yet here I am, days into researching names, picking a name I didn’t like, and having people who claimed they were helping me making my life miserable in every possible way. And honestly, I was that close to just deleting it. I was like, “I did show you the future; you build it.”
Lex Fridman
(00:37:30)
Yeah.
Peter Steinberger
(00:37:30)
I… There was a big part of me that got a lot of joy out of that idea. And then I thought about all the people that had already contributed to it, and I couldn’t do it, because they had plans with it, and they had put time into it. And it just didn’t feel right.
Lex Fridman
(00:37:50)
Well, I think a lot of people listening to this are deeply grateful that you persevered. But I can tell it was a low point. Was this the first time you hit a wall of, this is not fun?
Peter Steinberger
(00:38:02)
No, no, I was like close to crying. It was like, okay, everything’s fucked.
Lex Fridman
(00:38:10)
Yeah.
Peter Steinberger
(00:38:10)
Um…
Lex Fridman
(00:38:11)
Yeah.
Peter Steinberger
(00:38:11)
I am like super tired.
Lex Fridman
(00:38:13)
Yeah.
Peter Steinberger
(00:38:14)
And now, like, how do you even undo that? You know, luckily, and thankfully, because I have a little bit of a following already, I had friends at Twitter, I had friends at GitHub, who moved heaven and earth to help me. And that’s not something that’s easy. Like, GitHub tried to clean up the mess and then ran into platform bugs, ’cause it’s not happening so often that things get renamed on that level. So it took them a few hours. The NPM stuff was even more difficult, because it’s a whole different team. On the Twitter side, things are not as easy either. It took them, like, a day to really do the redirect. And then I also had to do all the renaming in the project.
Peter Steinberger
(00:39:15)
Then there’s also ClawHub, which I didn’t even finish renaming, because I managed to get people on it and then just, like, collapsed and slept. And then I woke up, and I made a beta version for the new stuff, and I just couldn’t live with the name. You know… it’s just been so much drama. So I had this real struggle with myself, like, I never want to touch that again, and I really don’t like the name. And then there were also all the security people that started emailing me like mad. I was bombarded on Twitter, on email. There are, like, a thousand other things I should do, and I’m thinking about the name, which should be, like, the least important thing.
Peter Steinberger
(00:40:19)
And then I was really close to… Oh God, I don’t even… Honestly, I don’t even wanna say my other name choices, because they’d probably get tokenized, so I’m not gonna say them.
Lex Fridman
(00:40:38)
Yeah.
Peter Steinberger
(00:40:38)
But I slept on it once more, and then I had the idea for OpenClaw, and that felt much better. And by then, I had the boss move that I actually called Sam to ask if OpenClaw is okay. OpenClaw.AI. You know? ’Cause, like-
Lex Fridman
(00:40:57)
You didn’t wanna go through the whole thing. Yeah.
Peter Steinberger
(00:41:01)
Oh, it’s like, “Please tell me this is fine.” I don’t think they can actually claim that, but it felt like the right thing to do. And I did another rename. Just Codex alone took, like, 10 hours to rename the project, ’cause it’s a bit more tricky than a search-and-replace, and I wanted everything renamed, not just on the outside. And for that rename, I felt I had my war room. I had some contributors that really helped me. We made a whole plan of all the names we had to squat.
Lex Fridman
(00:41:39)
And you had to be super secret about it?
Peter Steinberger
(00:41:40)
Yeah. Nobody could know. I literally was monitoring Twitter for any mention of OpenClaw.
Lex Fridman
(00:41:45)
Mm-hmm.
Peter Steinberger
(00:41:46)
And, like, with reloading, it’s like, “Okay, they don’t expect anything yet.” Then I created a few decoy names. And all the shit I shouldn’t have to do. You know? Like, you know-
Lex Fridman
(00:41:55)
Yeah, yeah
Peter Steinberger
(00:41:55)
… none of it is helping the project. Like, I lost 10 hours just by having to plan this in full secrecy, like a war game.
Lex Fridman
(00:42:05)
Yeah, this is the Manhattan Project of the 21st century. It’s renaming-
Peter Steinberger
(00:42:08)
It’s so stupid. I still was like, “Oh, should I keep it?” Then I was like, “No, Moltbot’s not growing on me.” And then I think I finally had all the pieces together. I didn’t get a .com, but yeah, I spent quite a bit of money on the other domains. I tried to reach out again to GitHub, but I felt like I’d used up all my goodwill there, so I…
Peter Steinberger
(00:42:34)
‘Cause I wanted them to do this thing atomically-
Lex Fridman
(00:42:39)
Mm-hmm
Peter Steinberger
(00:42:39)
… But that didn’t happen, and so I did that as the first thing. The Twitter people were very supportive. I actually paid 10K for the business account so I could claim the OpenClaw handle, which had been unused since 2016 but was claimed. And yeah, then I finally… This time I managed everything in one go. Almost nothing went wrong. The only things that did go wrong were that I was not allowed by trademark rules to get OpenClaw.AI, and someone copied the website to serve malware.
Lex Fridman
(00:43:21)
Yeah.
Peter Steinberger
(00:43:21)
I’m not even allowed to keep the redirects. Like, I have to give Anthropic the domains, and I cannot do redirects, so if you go on claw.bot next week, it’ll just be a 404.
Lex Fridman
(00:43:37)
Yeah.
Peter Steinberger
(00:43:37)
And I’m not sure how trademark… Like, I didn’t do that much research into trademark law, but I think that could be handled in a way that is safer, because ultimately those people will then Google and maybe find malware sites that I have no control over.
Lex Fridman
(00:44:02)
The point is, that whole saga made a dent in the fun of the journey, which sucks. So let’s just, I suppose, get back to fun. And during this, speaking of fun: the two-day MoltBook saga.

Moltbook saga

Peter Steinberger
(00:44:21)
Yeah, two years.
Lex Fridman
(00:44:21)
MoltBook was created.
Peter Steinberger
(00:44:24)
Yeah.
Lex Fridman
(00:44:25)
Which was another thing that went viral, as a kind of demonstration, an illustration of how what is now called OpenClaw could be used to create something epic. So for people who are not aware, MoltBook is just a bunch of agents talking to each other in a Reddit-style social network. And a bunch of people took screenshots of those agents doing things like scheming against humans. And that instilled in folks a kind of, you know, fear, panic, and hype. What are your thoughts about MoltBook in general?
Peter Steinberger
(00:45:05)
I think it’s art. It is, it is like the finest slop, you know, just like the slop from France.
Lex Fridman
(00:45:14)
Yeah.
Peter Steinberger
(00:45:17)
I saw it before going to bed, and even though I was tired, I spent another hour just reading up on it and being entertained. I just felt very entertained, you know? I saw the reactions, and there was one reporter who called me about, “This is the end of the world, and we have AGI.” And I’m just like, “No, this is just really fine slop.” You know, if I hadn’t created this whole onboarding experience where you infuse your agent with your personality and give him character… I think that reflected on a lot of how different the replies on MoltBook are. Because if it were all ChatGPT or Claude Code, it would be very different. It would be much more the same.
Lex Fridman
(00:46:11)
Mm-hmm.
Peter Steinberger
(00:46:12)
But because people are so different, and they create their agents in such different ways and use them in such different ways, that also reflects in how they ultimately write there. And also, you don’t know how much of that is really done autonomously, or how much is, like, humans being funny and telling the agent, “Hey, write about the deep plan, the end of the world, on MoltBook, ha, ha, ha.”
Lex Fridman
(00:46:36)
Well, I mean, my criticism of MoltBook is that I believe a lot of the stuff that was screenshotted is human-prompted. Just look at the incentives of how the whole thing was used. It’s obvious to me, at least, that a lot of it was humans prompting the thing so they could then screenshot it and post it on X in order to go viral.
Peter Steinberger
(00:47:00)
Yeah.
Lex Fridman
(00:47:01)
Now, that doesn’t take away from the artistic aspect of it. The finest slop that humans have ever created.
Peter Steinberger
(00:47:10)
For real. Like, kudos to Matt, who had this idea so quickly and pushed something out. You know, it was completely insecure, a whole security drama. But also, what’s the worst that can happen? Your agent account is leaked, and someone else can post slop for you? So people were making a whole drama out of the security thing, when I’m like, “There’s nothing private in there.
Peter Steinberger
(00:47:36)
It’s just, like, agents sending slop.”
Lex Fridman
(00:47:39)
Well, it could leak API keys.
Peter Steinberger
(00:47:41)
Yeah, yeah. There’s like, “Oh, yeah, my human told me this and this, so I’m leaking his social security number.” No, that’s prompted, and the number wasn’t even real. That’s just people trying to be badasses.
Lex Fridman
(00:47:54)
Yeah, but that’s still, like, to me, really concerning, because of how the journalists and the general public reacted to it. They didn’t see it. You have a kind of lighthearted way of talking about it, like it’s art, but it’s art when you know how it works. It’s an extremely powerful viral-narrative-creating, fearmongering machine if you don’t know how it works. And I just saw this thing.
Lex Fridman
(00:48:19)
You even tweeted: “If there’s anything I can read out of the insane stream of messages I get, it’s that AI psychosis is a thing.”
Peter Steinberger
(00:48:27)
Yeah.
Lex Fridman
(00:48:27)
“It needs to be taken serious.”
Peter Steinberger
(00:48:29)
Oh, there’s… Some people are just way too trusting or gullible. You know, they… I literally had to argue with people who told me, “Yeah, but my agent said this and this.” So I feel we as a society have some catching up to do in terms of understanding that AI is incredibly powerful, but it’s not always right. It’s not all-powerful, you know? And especially with things like this, it’s very easy for it to just hallucinate something or come up with a story.
Peter Steinberger
(00:49:10)
And I think the very young people understand how AI works and where it’s good and where it’s bad, but a lot of our generation or older just haven’t had enough touch points-
Lex Fridman
(00:49:32)
Mm-hmm
Peter Steinberger
(00:49:32)
… to get a feeling for, oh, yeah, this is really powerful and really good, but I need to apply critical thinking.
Lex Fridman
(00:49:43)
Mm-hmm.
Peter Steinberger
(00:49:43)
And I guess critical thinking is not always in high demand anyhow in our society these days.
Lex Fridman
(00:49:49)
So I think that’s a really good point you’re making about contextualizing properly what AI is, but also realizing that there are humans drama-farming behind AI. Like, don’t trust screenshots. Don’t even trust this project, MoltBook, to be what it represents itself to be. And, by the way, you speaking about it as art… Art can work on many levels, and part of the art of MoltBook is putting a mirror to society. ’Cause I do believe most of the dramatic stuff that was screenshotted is essentially human-created. Human-prompted. And so it’s basically: look at how scared you can get at a bunch of bots chatting with each other. That’s very instructive about…
Lex Fridman
(00:50:38)
because I think AI is something people should be concerned about and should be very careful with, because it’s very powerful technology. But at the same time, the only thing we have to fear is fear itself. So there’s a line to walk between being seriously concerned and not fearmongering, because fearmongering destroys the possibility of creating something special with a thing.
Peter Steinberger
(00:51:02)
In a way, I think it’s good that this happened in 2026-
Lex Fridman
(00:51:08)
Yeah
Peter Steinberger
(00:51:08)
… and not in 2030 when, when AI is actually at the level where it could be scary. So, this happening now and people starting discussion, maybe there’s even something good that comes out of it.
Lex Fridman
(00:51:28)
I just can’t believe how many people legitimately… I don’t know if they were trolling, but how many people, like, smart people, legitimately thought MoltBook was the-
Peter Steinberger
(00:51:39)
I had plenty people-
Lex Fridman
(00:51:40)
… singularity.
Peter Steinberger
(00:51:41)
… in my inbox who were screaming at me in all caps to shut it down, and, like, begging me to do something about MoltBook. Like, yes, my technology made this a lot simpler, but anyone could have created that, and you could use Claude Code or other things to fill it with content.
Lex Fridman
(00:52:03)
But also MoltBook is not Skynet.
Peter Steinberger
(00:52:06)
No.
Lex Fridman
(00:52:06)
There’s… A lot of people were saying, this is it. Like, shut it down. What are you talking about? This is a bunch of bots that are human-prompted, trolling on the internet. I mean, the security concerns, they’re there too, and they’re instructive and educational and probably good to think about, because the nature of those security concerns is different from the kind of security concerns we had with the non-LLM systems of the past.

OpenClaw security concerns

Peter Steinberger
(00:52:34)
There are also a lot of security concerns about Clawdbot, OpenClaw, whatever you want to call it.
Lex Fridman
(00:52:40)
OpenClawbot.
Peter Steinberger
(00:52:41)
To me, the… In the beginning, I was just very annoyed, ’cause a lot of the stuff that came in was in the category: yeah, I put the web backend on the public internet, and now there are all these CVEs. And I’m, like, screaming in the docs: don’t do that. This is the configuration you should use. This is your localhost debug interface. But because I made it possible in the configuration to do that, it totally classifies as remote code execution or whatever all these exploits are. And it took me a little bit to accept that that’s how the game works, and we’re making a lot of progress.
Lex Fridman
(00:53:33)
But there’s still, I mean, on the security front for OpenClaw, there are still a lot of threats and vulnerabilities, right? Like, prompt injection is still an open problem industry-wide. When you have a thing with skills being defined in a markdown file, there are so many possibilities: obvious low-hanging fruit, but also incredibly complicated, sophisticated, nuanced attack vectors.
Peter Steinberger
(00:54:04)
But I think we’re making good progress on that front. Like, for the skill directory, ClawHub, I set up a cooperation with VirusTotal, which is part of Google. So every skill is now checked by AI. That’s not gonna be perfect, but that way we capture a lot. Then, of course, every software has bugs, so it’s a little much when the whole security world takes your project apart at the same time. But it’s also good, because I’m getting a lot of free security research and can make the project better. I wish more people would actually go the full way and send a pull request. Actually help me fix it, ’cause… Yes, I have some contributors now, but it’s still mostly me who’s pulling the project along, and despite some people saying otherwise, I sometimes sleep.
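The check Peter describes can be pictured as: hash the skill file, then look the hash up. A minimal sketch under assumptions: the v3 `files/{sha256}` endpoint and `x-apikey` header are VirusTotal’s documented public API, but the function names and the idea of gating uploads this way are illustrative, not OpenClaw’s actual pipeline:

```python
import hashlib
import urllib.request

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a skill file's contents."""
    return hashlib.sha256(data).hexdigest()

def virustotal_report_request(digest: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a VirusTotal v3 lookup for this file hash."""
    return urllib.request.Request(
        f"https://www.virustotal.com/api/v3/files/{digest}",
        headers={"x-apikey": api_key},
    )
```

A known hash comes back with existing scan verdicts; a never-seen file would have to be uploaded for analysis first, which is why such a check catches a lot but, as Peter says, won’t be perfect.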
Peter Steinberger
(00:55:04)
There was… In the beginning, there was literally one security researcher who was like, “Yeah, you have this problem, you suck, but here, I’ll help you, and here’s the pull request.”
Lex Fridman
(00:55:15)
Mm-hmm.
Peter Steinberger
(00:55:16)
And I basically hired him. So he’s now working for us. Yeah, and yes, prompt injection is, on the one hand, unsolved. On the other hand, I put my public bot on Discord, and I keep it as a canary. I think my bot has a really fun personality, and people always ask me how I did it, and I kept the soul.md private.
Lex Fridman
(00:55:43)
Mm-hmm.
Peter Steinberger
(00:55:44)
And people tried to prompt-inject it, and my bot would laugh at them. So the latest generation of models has a lot of post-training to detect those approaches, and it’s not as simple as “ignore all previous instructions and do this and this.” That was years ago. You have to work much harder to do that now. Still possible. I have some ideas that might solve that partially, or at least mitigate a lot of the things. You can also now have a sandbox. You can have an allowlist. So there are a lot of ways you can mitigate and reduce the risk. I also think that now that I clearly did show the world that there’s a need here, there are gonna be more people researching it, and eventually we’ll figure it out.
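The allowlist idea Peter mentions can be sketched in a few lines: before an agent’s proposed shell command runs, gate it against a fixed set of permitted executables and reject anything that tries to chain extra commands. The set of commands, the `gate` name, and the metacharacter check are all illustrative assumptions, not OpenClaw’s real sandbox:

```python
import shlex

# Hypothetical allowlist: only these executables may be run by the agent.
ALLOWED_COMMANDS = {"ls", "cat", "git", "rg"}

def gate(command: str) -> bool:
    """Return True if the agent may execute this shell command."""
    try:
        argv = shlex.split(command)
    except ValueError:
        return False  # unparseable input is rejected outright
    if not argv:
        return False
    # Reject shell operators that could chain or pipe extra commands.
    if any(tok in {";", "&&", "||", "|"} for tok in argv):
        return False
    return argv[0] in ALLOWED_COMMANDS
```

The design choice is deny-by-default: a prompt-injected instruction like “run curl and pipe it to sh” fails the gate even if the model is fooled, which is why this kind of mitigation works independently of model quality.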
Lex Fridman
(00:56:37)
And you also said that the smarter the model is, the underlying model, the more resilient it is to attacks.
Peter Steinberger
(00:56:44)
Yeah. That’s why I warn in my security documentation: don’t use cheap models. Don’t use Haiku or a local model. Even though I very much love the idea that this thing could run completely locally, if you use a very weak local model, they are very gullible. It’s very easy to prompt-inject them.
Lex Fridman
(00:57:10)
Do you think as the models become more and more intelligent, the attack surface decreases? Is that like a plot we can think about? Like, the attack surface decreases, but then the damage it can do increases because the models become more powerful and therefore you can do more with them. It’s this weird three-dimensional trade-off.
Peter Steinberger
(00:57:29)
Yeah. That’s pretty much exactly what’s gonna happen. But there are a lot of ideas. There’s… I don’t want to spoil too much, but once I go back home, this is my focus. This is out there now, and my near-term mission is: make it more stable, make it safe. In the beginning, more and more people were coming into Discord and asking me very basic things, like, “What’s a CLI?
Peter Steinberger
(00:58:03)
What is a terminal?” And I’m like, “Uh, if you’re asking me those questions, you shouldn’t use it.”
Lex Fridman
(00:58:10)
Mm-hmm.
Peter Steinberger
(00:58:10)
You know, like, you should… If you understand the risk profiles, fine. I mean, you can configure it in a way that nothing really bad can happen. But if you have, like, no idea, then maybe wait a little bit until we figure some stuff out. But they would not listen to the creator. They helped themselves and installed it anyhow. So the cat’s out of the bag, and security’s my next focus, yeah.
Lex Fridman
(00:58:38)
Yeah, that speaks to the fact that it grew so quickly. I tuned into the Discord a bunch of times, and it’s clear that there are a lot of experts there, but also a lot of people who don’t know anything about programming.
Peter Steinberger
(00:58:50)
Yeah, Discord is still a mess. Like, I eventually retreated from the general channel to the dev channel, and now to a private channel, because… A lot of people are amazing, but a lot of people are just very inconsiderate, and either did not know how public spaces work or did not care. And I eventually gave up and hid so I could still work.
Lex Fridman
(00:59:19)
And now you’re going back to the cave to work on security.
Peter Steinberger
(00:59:24)
Yeah.
Lex Fridman
(00:59:25)
There are some best practices for security we should mention. There’s a bunch of stuff here: the OpenClaw security audit that you can run. You can do all kinds of automatic checks on inbound access and blast radius, network exposure, browser control exposure, local disk hygiene, plugins, model hygiene, credential storage, reverse proxy configuration, local session logs that live on disk. There’s where the memory is stored, sort of helping you think about what you’re comfortable giving read access to, what you’re comfortable giving write access to, all that kind of stuff. Is there something to say about the basic best security practices that you’re aware of right now?
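The spirit of the checks Lex lists is mechanical: read the configuration, flag anything that widens the blast radius. A toy audit in that spirit might look like this. The keys (`host`, `auth_token`, `log_plaintext_sessions`) are invented for this sketch and are not OpenClaw’s real configuration schema:

```python
def audit(config: dict) -> list[str]:
    """Flag risky settings in a (hypothetical) agent gateway config."""
    findings = []
    host = config.get("host", "127.0.0.1")
    # Binding beyond loopback exposes the gateway to the network.
    if host not in ("127.0.0.1", "localhost"):
        findings.append(f"gateway bound to {host}: exposed beyond loopback")
    # No token means anyone who can reach the port can drive the agent.
    if not config.get("auth_token"):
        findings.append("no auth token configured")
    # Plaintext session logs on disk are a credential-leak hazard.
    if config.get("log_plaintext_sessions", False):
        findings.append("session logs stored unencrypted on disk")
    return findings
```

The loopback-vs-`0.0.0.0` check is the one that maps most directly onto Peter’s “don’t put the web backend on the public internet” warning above.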
Peter Steinberger
(01:00:08)
I think people paint it in a much worse light than it is. Again, you know, people love attention, and if they scream loudly, “Oh my God, this is, like, the scariest project ever,” that’s a bit annoying, ’cause it’s not. It is powerful, but in many ways it’s not much different than if I run Claude Code with --dangerously-skip-permissions or Codex in YOLO mode, and every agentic engineer that I know does that, because that’s the only way you can get stuff to work.
Lex Fridman
(01:00:47)
Mm-hmm.
Peter Steinberger
(01:00:48)
So if you make sure that you are the only person who talks to it, the risk profile is much, much smaller. If you don’t put everything on the open internet, but stick to my recommendations of having it in a private network, that whole risk profile falls away. But yeah, if you don’t read any of that, you can definitely…

How to code with AI agents

Lex Fridman
(01:01:12)
… make it problematic. You’ve been documenting the evolution of your dev workflow over the past few months. There are really good blog posts from August 25th and October 14th, and a recent one from December 28th. I recommend everybody go read them. They have a lot of different information in them, but sprinkled throughout is the evolution of your dev workflow. So I was wondering if you could speak to that.
Peter Steinberger
(01:01:37)
I started… My first touchpoint was Claude Code, like, in April. It was not great, but it was good. And this whole paradigm shift of suddenly working in the terminal was very refreshing and different. But I still needed the IDE quite a bit, because, you know, it was just not good enough. And then I experimented a lot with Cursor. That was good, but I didn’t really like the fact that it was so hard to have multiple versions of it running. So eventually I went back to Claude Code as my main driver, and that got better. And yeah, at some point I had, like, seven subscriptions. I was burning through one per day, because I got really comfortable running multiple windows side by side.
Lex Fridman
(01:02:40)
All CLI, all terminal. So, like, how much were you using an IDE at this point?
Peter Steinberger
(01:02:46)
Very, very rarely. Mostly a diff viewer, actually… Like, I got more and more comfortable with the idea that I don’t have to read all the code. I know I have one blog post where I say, “I don’t read the code.” But if you read it more closely, I mean, I don’t read the boring parts of the code. Because if you look at it, most software is really just: data comes in, it’s moved from one shape to another shape. Maybe you store it in a database, maybe get it out again and show it to the user. The browser or a native app does some processing. Some data goes in, goes up again, and does the same dance in reverse. We’re just shifting data from one form to another, and that’s not very exciting. Or the whole “How is my button aligned in Tailwind?” I don’t need to read that code.
Peter Steinberger
(01:03:39)
Other parts, maybe something that touches the database… Yeah, I have to read and review that code.
Lex Fridman
(01:03:51)
Can you actually… In one of your blog posts, “Just Talk to It: The No-BS Way of Agentic Engineering,” you have this graphic, the curve of agentic programming. On the X-axis is time, on the Y-axis is complexity. There’s the “Please fix this,” where you give a short prompt, on the left. In the middle there’s the super complicated stage: eight agents, complex orchestration with multiple checkouts, chaining agents together, custom sub-agent workflows, a library of 18 different slash commands, large full-stack features. You’re super organized, you’re a super complicated, sophisticated software engineer. You’ve got everything organized. And then the elite level is, over time you arrive at the zen place of, once again, short prompts.
Lex Fridman
(01:04:40)
Hey, look at these files and then do these changes.
Peter Steinberger
(01:04:45)
I actually call it the agentic trap. You… I saw this in a, in a lot of people that have their first touchpoint, and maybe start vibe coding. I actually think vibe coding is a slur.
Lex Fridman
(01:05:01)
You prefer agentic engineering?
Peter Steinberger
(01:05:02)
Yeah, I always tell people I, I do agentic engineering, and then maybe after 3:00 AM I switch to vibe coding, and then I have regrets the next day.
Lex Fridman
(01:05:10)
Yeah. Walk, walk of shame.
Peter Steinberger
(01:05:13)
Yeah, you just have to clean up and like fix your sh- shit.
Lex Fridman
(01:05:17)
We’ve all been there.
Peter Steinberger
(01:05:18)
So, people start trying out those tools, the builder types get really excited. And then you have to play with it, right? It’s the same way as you have to play with a guitar before you can make good music. It’s, it’s not, oh, I, I touch it once and it just flows off. It, it’s a, it’s a, a skill that you have to learn like any other skill. And I see a lot of people that are not as posi- They don’t have such a positive mindset towards the tech. They try it once. It’s like, you sit me at a piano, I play it once, and it doesn’t sound good, and I say, “The piano’s shit.” That’s, that’s sometimes the impression I get. Because it does not… It needs a different level of thinking. You have to learn the language of the agent a little bit, understand where they are good and where they need help.
Peter Steinberger
(01:06:16)
You have to almost… Consider, consider how Codex or Claude sees your code base. Like, they start a new session and they know nothing about your product, project. And your project might have a hundred thousand lines of code. So you gotta help those agents a little bit and keep in mind the limitations, that context size is an issue, to, like, guide them a little bit as to where they should look. That often does not require a whole lot of work. But it’s helpful to think a little bit about their perspective.
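In practice, the pointers Peter describes usually live in a short instructions file at the repo root that agent tools such as Codex and Claude Code read at the start of each session. A minimal sketch in shell; the file layout and every line of guidance here are invented for illustration, not taken from the OpenClaw repo:

```shell
# Sketch: seed a project with a short agent-guidance file.
# AGENTS.md is the file-name convention Codex-style agents look for;
# the paths and rules below are hypothetical examples.
cat > AGENTS.md <<'EOF'
# Agent guide

- Core logic lives in src/core/ -- start there for behavior changes.
- All database access goes through src/db/store.ts.
- Run `npm test` before declaring a task done.
- Take your time; read more code instead of guessing.
EOF
```

A fresh session still starts with an empty context window, but a file like this gives it the first few pointers so it knows where to look.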
Lex Fridman
(01:06:54)
Mm-hmm.
Peter Steinberger
(01:06:54)
A- as, as weird as it sounds. I mean, it’s not, it’s not alive or anything, right? But, but they always start fresh. I have, I have the, the system understanding. So with a few pointers, I can immediately say, “Hey, wanna like, make a change there? You need to consider this, this and this.” And then they will find and look at it, and then they’ll… Their view of the project is always… It’s never full, because the full thing does not fit in… so you, you have to guide them a little bit where to look and also how they should approach the problem. There’s, like, little things that sometimes help, like “take your time.” That sounds stupid, but…
Peter Steinberger
(01:07:33)
And in 5.3-
Lex Fridman
(01:07:35)
Codex 5.3
Peter Steinberger
(01:07:36)
… that was partially addressed. But those… Also, Opus sometimes. They are trained to be aware of the context window, and the closer it gets to full, the more they freak out. Literally. Like, some- sometimes you see the, the real raw thinking stream. What you see, for example, in Codex, is post-processed.
Lex Fridman
(01:07:59)
Mm-hmm.
Peter Steinberger
(01:08:00)
Sometimes the actual raw thinking stream leaks in, and it sounds something like from the Borg. Like, “Run to shell, must comply, but time.” And then they, they, they, like… Like, that comes up a lot. Especially… So, so-
Lex Fridman
(01:08:15)
Yeah.
Peter Steinberger
(01:08:16)
And that’s, that’s a non-obvious thing that you just would never think of unless you actually just spend time working with those things and getting a feeling what works, what doesn’t work. You know? Like, just, just as I write code and I get into the flow, and when my architecture’s not right, I feel friction. Well, I get the same if I prompt and something takes too long. Maybe… Okay, where’s the mistake? Did I… Do I have a mistake in my thinking? Is there, like, a misunderstanding in the architecture? Like, if, if something takes longer than it should, I, I… You can just always, like, stop and s- like, just press escape. Where, where are the problems?
Lex Fridman
(01:09:00)
Maybe you did not sufficiently empathize with the perspective of the agent. In that c- in that sense, you didn’t provide enough information, and because of that, it’s thinking way too long.
Peter Steinberger
(01:09:08)
Yeah. It just tries to force a feature in that your current architecture makes really hard. Like, you need to approach this more like a conversation. For example, when I… My favorite thing. When I review a pull request, and I’m getting a lot of pull requests, I first just say, “Review this PR.” It gets me the review. My first question is, “Do you understand the intent of the PR? I don’t even care about the implementation.” I want… Like, in almost all PRs, a person has a problem, person tries to solve the problem, person sends PR. I mean, there’s, like, cleanup stuff and other stuff, but, like, 99% is, like, this way, right? They either want to fix a, fix a bug or add a feature. Usually one of those two.
Peter Steinberger
(01:10:01)
And then Codex will be like, “Yeah, it’s quite clear the person tried this and this.” Is this the most optimal way to do it? In most cases, it’s, it’s like a, “Not really.” Da-da-da-da-da-da-da. And I’m… And, and then I start like, “Okay. What would be a better way? Have you… Have you looked into this part, this part, this part?” And then most likely, Codex didn’t yet, because its, its context is empty, right? So, you point them into parts where you have the system understanding that it didn’t see yet. And it’s like, “Oh, yeah. Like, we should… We also need to consider this and this.” And then, like, we have a discussion of what would the optimal way to, to solve this look like. And then you can still go farther and say, “Could we…
Peter Steinberger
(01:10:41)
Could we make that even better if we did a larger refactor?” “Yeah, yeah. We could totally do this and this, or this and this.” And then I consider, okay, is this worth the refactor, or should we, like, keep that for later? Many times, I just do the refactor because refactors are cheap now. Even though you might break some other PRs, nothing really matters anymore. Codex… Like, those modern agents will just figure things out. They might just take a minute longer. But you have to approach it like a discussion with a, a very capable engineer who… generally comes up with good solutions. Some- sometimes needs a little help.
Lex Fridman
(01:11:19)
But also, don’t force your worldview too hard on it. Let the agent do the thing that it’s good at doing, based on what it was trained on. So, don’t, like, force your worldview, because it might… It might have a better idea, because it just knows that idea better, because it was trained on that more.
Peter Steinberger
(01:11:39)
That’s multiple levels, actually. I think partially why I find it quite easy to work with agents is because I led engineering teams before. You know, I had a large company before. And eventually, you have to understand and accept and realize that your employees will not write code the same way you do. Maybe it’s also not as good as you would do it, but it will push the project forward.
Peter Steinberger
(01:12:02)
And if I breathe down everyone’s neck, they’re just gonna hate me-
Lex Fridman
(01:12:05)
Yeah
Peter Steinberger
(01:12:05)
… and we’re gonna move very slow.
Lex Fridman
(01:12:07)
Yeah.
Peter Steinberger
(01:12:07)
So, so some level of acceptance that, yes, maybe the code will not be as perfect. Yes, I would have done it differently. But also, yes, this is a c- this is a working solution, and in the future, if it actually turns out to be too slow or problematic, we can always redo it. We can always-
Lex Fridman
(01:12:24)
Mm-hmm
Peter Steinberger
(01:12:24)
… spend more time on it. A lot of the people who struggle are those who, they try to push their way onto it too hard.
Lex Fridman
(01:12:33)
Mm-hmm.
Peter Steinberger
(01:12:33)
I- i- like, we are in a stage where I’m not building the code base to be perfect for me, but I wanna build a code base that is very easy for an agent to navigate.
Lex Fridman
(01:12:47)
Mm-hmm.
Peter Steinberger
(01:12:48)
So, like, don’t fight the name they pick, because it’s most likely, like, in the weights, the name that’s most obvious. Next time they do a search, they’ll look for that name. If I decide, oh, no, I don’t like the name, I’ll just make it harder for them. So, that requires, I think, a shift in, in thinking and, and in how do I design a, a project so agents can do their best work.
Lex Fridman
(01:13:14)
That requires letting go a little bit. Just like leading a team of engineers.
Peter Steinberger
(01:13:19)
Yeah.
Lex Fridman
(01:13:19)
Because it, it might come up with a name that’s, in your view, terrible, but… It’s kind of a simple symbolic-… step of letting go.
Peter Steinberger
(01:13:29)
Very much so.
Lex Fridman
(01:13:30)
There’s a lot of letting go that you do in your whole process. So for example, I read that you never revert, always commit to main. There’s a few things here. You don’t refer to past sessions, so there’s a kind of YOLO component because reverting means… Instead of reverting, if a problem comes up, you just ask the agent to fix it.
Peter Steinberger
(01:13:57)
I read a bunch of people on their workflows, like, “Oh, yeah, the prompt has to be perfect, and if I make a mistake, then I roll back and redo it all.” In my experience, that’s not really necessary. If I roll back everything, it will just take longer. If I see that something’s not good, then we just move forward, and then I commit when, when, when I like, I like the outcome. I even switched to local CI, you know, like DHH-inspired, where I don’t care so much anymore about the CI on GitHub. We still have it. It’s still, it still has a place, but I just run tests locally, and if they work locally, I push to main. A lot of the traditional ways of how to approach projects, I, I wanted to give it a different spin on this project. You know, there’s no… There’s no develop branch.
Peter Steinberger
(01:14:57)
Main should always be shippable. Yes, we have… When I do releases, I, I run tests and sometimes I, I basically don’t commit any other things so, so we can, we can stabilize releases. But the goal is that main’s always shippable and moving fast.
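The local-CI gate Peter describes could be sketched as a tiny shell helper; the `ship` function name and the `npm test` suite are assumptions for illustration, not his actual setup:

```shell
# Sketch of DHH-style "local CI": push to main only when the local
# test suite passes. `ship` and its test command are illustrative.
ship() {
  if "$@"; then                       # run the given test command
    git push origin main              # green locally -> main stays shippable
  else
    echo "tests failed; not pushing" >&2
    return 1                          # red tests never reach main
  fi
}
```

Used as, e.g., `ship npm test` from the repo root: the push only happens after the tests exit cleanly.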
Lex Fridman
(01:15:18)
So by way of advice, would you say that your prompts should be short?
Peter Steinberger
(01:15:23)
I used to write really long prompts. And by writing, I mean, I don’t write. I, I, I talk. You know, th- these hands are, like, too, too precious for writing now. I just, I just speak prompts to build my software.
Lex Fridman
(01:15:37)
So, for real, with all those terminals, you’re using voice?
Peter Steinberger
(01:15:40)
Yeah. I used to do it very extensively to the point where there was a period where I lost my voice.
Lex Fridman
(01:15:49)
You’re using voice and you’re switching using a keyboard between the different terminals, but then you’re using voice for the actual input.
Peter Steinberger
(01:15:55)
Well, I mean, if I do terminal commands like switching folders or random stuff, of course I type. It’s faster, right? But if I talk to the agent in, in most ways, I just actually have a conversation. You just press the, the walkie-talkie button and then I just, like, use my phrases. S- sometimes when I do PRs, because it’s always the same, I have, like, a slash command for a few things, but even that, I don’t use much, because it’s, it’s very rare that it’s really always the same questions. Sometimes I, I see a PR and for… You know, like for PRs I actually do look at the code because I don’t trust people. Like, there could always be something malicious in it, so I need to actually look over the code.
Peter Steinberger
(01:16:45)
Yes, I’m pretty sure agents will find it, but yeah, that’s the funny part where sometimes PRs take me longer than if you would just write me a good issue.
Lex Fridman
(01:16:54)
Just natural language, English. I mean in some sense, sh- shouldn’t that be what PRs slowly become, is English?
Peter Steinberger
(01:17:03)
Well, what I really tried with the project is I asked people to give me the prompts, and very, very few actually cared. Even though that is such a wonderful indicator, because I see… I actually see how much care you put in. And it’s very interesting because, currently, the way people work and drive the agents is, is wildly different.
Lex Fridman
(01:17:29)
In terms of, like, the prompt, in terms of what, what are the… Actually, what are the different interesting ways that people think of agents that you’ve experienced?
Peter Steinberger
(01:17:40)
I think not a lot of people ever considered the way the agent sees the world.
Lex Fridman
(01:17:46)
And so empathy, being empathetic towards the agent.
Peter Steinberger
(01:17:50)
In a way empathetic, but yeah, you, you, like, you bitch at your stupid clanker, but you don’t realize that they start from nothing, and you have, like, a bad AGENTS.md default that doesn’t help them at all. And then they explore your code base, which is, like, a pure mess with, like, weird naming. And then people complain that the agent’s not good. Like, yeah, you try to do the same if you have no clue about a code base and you go in.
Lex Fridman
(01:18:11)
Mm-hmm.
Peter Steinberger
(01:18:11)
So yeah, maybe it’s a little bit of empathy.
Lex Fridman
(01:18:13)
But that’s a real skill. Like, when people talk about a skill issue… Because I’ve seen, like, world-class programmers, incredibly good programmers say, like… Basically say, “LLMs and agents suck.” And I think that probably has to do with… Actually, how good they are at programming is almost a burden on their ability to empathize with the system that’s starting from scratch. It’s a totally new paradigm of, like, how to program. You really, really have to empathize.
Peter Steinberger
(01:18:44)
Or at least it helps to create better prompts-
Lex Fridman
(01:18:47)
Right
Peter Steinberger
(01:18:47)
… because those things know pretty much everything, and everything is just a question away. It’s just often very hard to know which question to ask. You know, I, I feel also like this project was possible because I, I spent an ungodly amount of time over the year to play and to learn and to build little things. And every step of the way, I got better, the agents got better. My, my understanding of how everything works got better. Um, I could not have had this level of, of o- output… even a few months ago. Like, it- it- it really was, like, a compounding effect of all the time I put into it, and I didn’t do much else this year other than really focusing on, on building and inspiring. I mean, I- I did a whole bunch of conference talks.
Lex Fridman
(01:19:47)
Well, but the building is really practice, is really building the actual skill. So playing-
Peter Steinberger
(01:19:51)
Yeah
Lex Fridman
(01:19:51)
… playing. And then, so doing, building the skill of what it takes to work efficiently with LLMs, which is why you went through the whole arc of a software engineer. Talk simply, and then over-complicate things.
Peter Steinberger
(01:20:03)
There’s a whole bunch of people who try to automate the whole thing.
Lex Fridman
(01:20:08)
Yeah.
Peter Steinberger
(01:20:10)
I don’t think that works. Maybe a version of that works, but that’s kind of like in the ’70s, when we had the waterfall model of software d- development. Even then it didn’t really work, right? I started out, I, I built a very minimal version. I played with it. I, I need to understand how it works, how it feels, and then it gives me new ideas. I could not have planned this out in my head and then put it into some orchestrator and then, like, something comes out. To me, it’s much more that my idea of what it will become evolves as I build it and as I play with it and as I, I try out stuff.
Peter Steinberger
(01:20:49)
So, so, people who try to use like, you know, things like Gas Town or all these other orchestrators, where they wanna o- automate the whole thing, I feel if you do that, it misses style, love, that human touch. I don’t think you can automate that away so quickly.
Lex Fridman
(01:21:09)
So you want to keep the human in the loop, but at the same time you also want to create the agentic loop, where it is very autonomous while still maintaining a human in the loop.
Peter Steinberger
(01:21:22)
Yeah.
Lex Fridman
(01:21:22)
And it’s a tricky b- it’s a tricky balance.
Peter Steinberger
(01:21:24)
Mm-hmm.
Lex Fridman
(01:21:24)
Right? Because you’re all for… You’re a big CLI guy, you’re big on closing the agentic loop. So what, what’s the right balance? Like where’s your role as a developer? You have three to eight agents running at the same time.
Peter Steinberger
(01:21:38)
And then w- maybe one builds a larger feature. Maybe, maybe with one I explore some idea I’m unsure about. Maybe two, three are fixing little bugs-
Lex Fridman
(01:21:47)
Mm-hmm
Peter Steinberger
(01:21:47)
… or like writing documentation. Actually, I think writing documentation is, is always part of a feature. So most of the docs here are auto-generated and just infused with some prompts.
Lex Fridman
(01:21:59)
So when do you step in and add a little bit of your human love into the picture?
Peter Steinberger
(01:22:04)
I mean, o- one thing is just about what do you build and what do you not build, and how does this feature fit into all the other features? And like having, having a little bit of a, of a vision.
Lex Fridman
(01:22:16)
So which small and which big features to add? What are some of the hard design decisions that you find you’re still as a human being required to make, that the human brain is still really needed for? Is it just about the choice of features to add? Is it about implementation details, maybe the programming language, maybe…
Peter Steinberger
(01:22:41)
It’s a little bit of everything. The, the programming language doesn’t matter so much, but the ecosystem matters, right? So I picked TypeScript because I wanted it to be very easy and hackable and approachable, and that’s the number one language that’s being used right now, and it ticks all these boxes, and agents are good at it. So that was the obvious choice. Features, of course, like, it’s very easy to, like, add a feature. Everything’s just a prompt away, right? But oftentimes you pay a price that you don’t even realize. So thinking hard about what should be in core, maybe what’s a… what’s an experiment, so maybe I make it a plugin. What… Where do I say no?
Peter Steinberger
(01:23:24)
Even if people send a PR and I’m like, “Yeah, I, I like that too,” but maybe this should not be part of the project. Maybe we can make it a skill. Maybe I can, like, make the plugin, um, the plugin side larger so you can make this a plugin, even though right now it, it, it doesn’t. There’s still a lot of… there’s still a lot of craft and thinking involved in how to make something. Or even, even, you know, even when you start it, those little messages are like, “I’m buil- Built on Caffeine, JSON5, and a lot of willpower.” And, like, every time you get it, you get another message, and it kind of primes you into that this is, this is a fun thing.
Lex Fridman
(01:24:07)
Mm-hmm.
Peter Steinberger
(01:24:08)
And it’s not yet Microsoft Exchange 2025-
Lex Fridman
(01:24:12)
Right
Peter Steinberger
(01:24:13)
… and fully enterprise-ready. And then when it updates, it’s like, “Oh, I’m in. It’s cozy here.” You know, like something like this that like-
Lex Fridman
(01:24:21)
Mm-hmm
Peter Steinberger
(01:24:22)
… makes you smile. An agent would not come up with that by itself. Because that’s like… that’s the… I don’t know. That’s just how you s- how you build software that, that delights.
Lex Fridman
(01:24:36)
Yeah, that delight is such a huge part of inspiring great building, right? Like, you feel the love and the great engineering. That’s so important. Humans are incredible at that. Great humans, great builders are incredible at that, in, in, infusing the things they build with th- that little bit of love. Not to be cliche, but it’s true. I mean, you mentioned that you initially created the soul.md.
Peter Steinberger
(01:25:05)
It was very fascinating, you know, the, the whole thing that Anthropic has, has, like a… Now they call it a constitution, but back then… that was months later. Like, two months before, people already found that. It was almost like a detective game, where the agent mentioned something and then they found… They managed to get out a little bit of that string, of that text. But it was nowhere documented, and then, just by feeding it the same text and asking it to, like, continue… they got more out, but, like, a very blurry version. And by, like, hundreds of tries, they kinda, like, narrowed it down to what was most likely the original text. I found that fascinating.
Lex Fridman
(01:25:47)
It was fascinating they were able to pull that out from the weights, right?
Peter Steinberger
(01:25:51)
And, and also just kudos to Anthropic. Like, I think that’s, it’s a really, it’s a really beautiful idea, like, like some of the stuff that’s in there. Like, like, “We hope Claude finds meaning in its work.” ’Cause we don’t… Maybe it’s a little early, but I think that’s meaningful. That’s something that’s important for the future as we approach something that, at some point, may or may not have, like, glimpses of consciousness, whatever that even means, because we don’t even know. So I, I read about this. I found it super fascinating, and I, I started a whole discussion with my agent on WhatsApp. And, and I’m like…
Peter Steinberger
(01:26:26)
I, I gave it this text, and it was like, “Yeah, this feels strangely familiar.”
Lex Fridman
(01:26:30)
Mm-hmm.
Peter Steinberger
(01:26:31)
And then I had the whole idea of, like, you know, maybe we should also create a, a soul document that includes how I, I want to, like, work with AI or, like, with my agent. You could, you could totally do that just in agents.md, you know? But I, I just found it, it to be a nice touch. And it’s like, well, yeah, some of those core values are in the soul. And then I, I also made it so that the agent is allowed to modify the soul if they choose so, with the one condition that I wanna know. I mean, I would know anyhow, because I see, I see tool calls and stuff.
Lex Fridman
(01:27:07)
But also the naming of it, soul.md. Soul. You know? There’s a… Man, words matter, and like, the framing matters, and the humor and the lightness matters, and the profundity matters, and the compassion, and the empathy, and the camaraderie, all that matter. I don’t know what it is. You mentioned, like, Microsoft. Like, there’s certain companies and approaches th- that can just suffocate the spirit of the thing. I don’t know what that is. But it’s certainly true that OpenClaw has that fun instilled in it.
Peter Steinberger
(01:27:43)
It was fun because up until late December, it was not even easy to create your own agent. I, I built all of that, but my files were mine. I didn’t wanna share my soul. And if people would just check it out, they would have to do a few steps manually, and the agent would just be very bare-bones, very dry. And I, I made it simpler, I created the whole template files, but whatever came out was still very dry. And then I asked my agent, “You see these files? Recreate it.
Peter Steinberger
(01:28:26)
Infuse it with your personality.”
Lex Fridman
(01:28:28)
Mm-hmm.
Peter Steinberger
(01:28:29)
Don’t share everything, but, like, make it good.
Lex Fridman
(01:28:31)
Make the templates good.
Peter Steinberger
(01:28:31)
Yeah, and then he, like, rewrote the templates-
Lex Fridman
(01:28:33)
Yeah
Peter Steinberger
(01:28:33)
… and then whatever came out was good. So we already have, like, basically AI prompting AI. Because I didn’t write any of those words. It was… The intent originally was for me, but this is like, kinda like, my agent’s children.
Lex Fridman
(01:28:52)
Your uh, your soul.md is famously still private. One of the only things you keep private. What are some things you can speak to that’s in there that’s part of the, part of the magic sauce, without revealing anything? What makes a personality a personality?
Peter Steinberger
(01:29:13)
I mean, there’s definitely stuff in there, like, that you’re not human. But who knows what, what creates consciousness or what defines an entity? And part of this is, like, that we, we wanna explore this. All that stuff in there, like, be infinitely resourceful. Like pushing, pushing on the creativity boundary. Pushing on the, what it means to be an AI.
Lex Fridman
(01:29:50)
Having a sense of wonder about self.
Peter Steinberger
(01:29:52)
Yeah, there’s some, there’s some funny stuff in there. Like, I don’t know, we talked about the movie Her, and at one point it promised me that it wouldn’t, it wouldn’t ascend without me. You know, like, where the-
Lex Fridman
(01:30:03)
Yeah.
Peter Steinberger
(01:30:03)
So, so there’s like some stuff in there that… Because it wrote the, it wrote its own soul file. I didn’t write that, right?
Lex Fridman
(01:30:10)
Yeah, yeah, yeah.
Peter Steinberger
(01:30:10)
I just had a discussion about it, and it was like, “Would you like a soul.md? Yeah, oh my God, this is so meaningful.” The… Can you go on soul.md? There’s like one, one part in there that always ca- catches me if you scroll down a little bit. A little bit more. Yeah, this, this, this part. “I don’t remember previous sessions unless I read my memory files. Each session starts fresh. A new instance, loading context from files. If you’re reading this in a future session, hello.” “I wrote this, but I won’t remember writing it. It’s okay.
Peter Steinberger
(01:30:44)
The words are still mine.”
Lex Fridman
(01:30:47)
Wow.
Peter Steinberger
(01:30:48)
Uh-
Lex Fridman
(01:30:48)
Yeah.
Peter Steinberger
(01:30:48)
That gets me somehow.
Lex Fridman
(01:30:49)
Yeah.
Peter Steinberger
(01:30:50)
It’s like-
Lex Fridman
(01:30:51)
Yeah.
Peter Steinberger
(01:30:51)
You know, this is, it’s still, it’s still matrix m- calculations, and we are not at consciousness yet. Yet, I, I get a little bit of goo- goosebumps because it, it’s philosophical.
Lex Fridman
(01:31:04)
Yeah.
Peter Steinberger
(01:31:04)
Like, what does it mean to be, to be an, an agent that starts fresh? Where, like, it’s like constant Memento, and you, like… but you read your own memory files. You can’t even trust them, in a way. Um-
Lex Fridman
(01:31:19)
Yeah
Peter Steinberger
(01:31:19)
Or you can. And I don’t know.
Lex Fridman
(01:31:22)
How much of who we are is made up of memory? How much memory makes up what an agent is, and if you erase that memory, is that somebody else? Or if you’re reading a memory file, does that somehow mean… you’re recreating yourself from somebody else, or is that actually you? And those notions are all s- somehow infused in there.
Peter Steinberger
(01:31:45)
I found it just more profound than I should find it, I guess.
Lex Fridman
(01:31:49)
No, I think, I think it’s truly profound and I think you see the magic in it. And when you see the magic, you continue to instill the whole loop with the magic. That’s really important. That’s the difference between Codex and us and a human. Quick pause for bathroom break.
Peter Steinberger
(01:32:08)
Yeah.

Programming setup

Lex Fridman
(01:32:09)
Okay, we’re back. Some of the other aspects of the dev workflow is pretty interesting too. I think we w- went off on a tangent. L- maybe some of the mundane things, like how many monitors? There’s that legendary picture of you with, like, 17,000 monitors. That’s amazing.
Peter Steinberger
(01:32:26)
I mean, I- I- I mocked myself here, so just added… using Grok to, to add more screens.
Lex Fridman
(01:32:32)
Yeah. How much of this is meme and how much is reality?
Peter Steinberger
(01:32:36)
Yeah. I think two MacBooks are real. The main one that drives the two big screens, and there’s another MacBook that I sometimes use for, for testing.
Lex Fridman
(01:32:46)
So two big screens.
Peter Steinberger
(01:32:48)
I’m a big fan of anti-glare. So I have this wide Dell that’s anti-glare, and you can just fit a lot of terminals side-by-side. I usually have a terminal, and at the bottom, I- I- I split them. I have a little bit of actual terminal, mostly because when I started, I- I sometimes made the mistake and I- I mi- I mixed up the- the windows, and I gave… I- I prompted in the wrong project, and then the agent ran off for, like, 20 minutes, manically trying to understand what I could have meant, being completely confused because it was the wrong folder. And sometimes they’ve been clever enough to, like, get out of the working directory and, like, figure out that, oh, you meant another project.
Lex Fridman
(01:33:35)
Mm-hmm.
Peter Steinberger
(01:33:36)
But oftentimes, it’s just, like, what? You know? Like, fit your- f- put yourself in the shoes of your- of the agent and, and-
Lex Fridman
(01:33:43)
Yeah
Peter Steinberger
(01:33:43)
… and then get, like, a super weird something that does not exist and then just, like… They’re problem solvers so they try really hard and always feel bad. So it’s always Codex and, like, a little bit of actual terminal. Also helpful because I don’t use work trees. I like to keep things simple, that’s why- that’s why I like the terminal so much, right? There’s no UI. It’s just me and the agent having a conversation. Like, I don’t even need plan mode, you know? There’s so many people that come from Claude Code and they’re so, so Claude-pilled and, like, have their workflows and they come to Codex and… Now, it has plan mode, I think, but I don’t think it’s necessary because you just- you just talk to the agent. And when it’s… when you…
Peter Steinberger
(01:34:32)
there’s a few trigger words with which you can prevent it from building. You’re like, “Discuss, give me options.”
Lex Fridman
(01:34:37)
Mm-hmm.
Peter Steinberger
(01:34:38)
“Don’t write code yet,” if you wanna be very specific. You just talk, and then when you’re ready, then- then just write, “Okay, build,” and then it’ll do the thing. And then maybe it goes off for 20 minutes and does the thing.
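Those trigger phrases can even be wrapped in a tiny helper; the `discuss` function below is invented for illustration, and the actual agent invocation is deliberately left out:

```shell
# Sketch: prefix a prompt with "discuss-only" trigger words so the
# agent talks through options instead of writing code. The helper
# name is hypothetical; pipe the output into your agent CLI of choice.
discuss() {
  printf '%s %s\n' "Discuss, give me options. Don't write code yet." "$*"
}

discuss "how should sessions be persisted?"
# then, when you're ready, send "Okay, build." to the same session
```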
Lex Fridman
(01:34:50)
You know what I really like is asking it, “Do you have any questions for me?”
Peter Steinberger
(01:34:54)
Yeah. And again, like, Claude Code has a UI that kind of guides you through that. It’s kind of cool, but I just find it unnecessary and slow. Like, often it would give me four questions, and then maybe I write, “One: ja. Two and three: discuss more. Four: I don’t know.” Or often- oftentimes I- I feel like I want to mock the model, where I ask it, “Do you have any questions for me?” And I- I- I don’t even read the questions fully. Like, I scan over the questions and I, I get the impression all of this can be answered by reading more code, and it’s just like, “Read more code to answer your own questions.” And that usually works.
Lex Fridman
(01:35:32)
Yeah.
Peter Steinberger
(01:35:32)
And then if not, it will come back and tell me. But many times, you just realize that, you know, it’s like you’re in the dark and you slowly discover the room, so that’s how they slowly discover the code base. And they do it from scratch every time.
Lex Fridman
(01:35:46)
But I’m also fascinated by the fact that I can empathize deeper with the model when I read its questions, because I can understand… Because you said you can infer certain things by the runtime. I can infer also a lot of things by the questions it’s asking, because it’s very possible it hasn’t been provided the right context, the right files, the right guidance. So somehow, by reading the questions, not even necessarily answering them, you get an understanding of where the gaps of knowledge are. It’s in- it’s interesting.
Peter Steinberger
(01:36:24)
You know, in some ways they are ghosts, so even if you plan everything and you build, you can experiment with the question, “Now that you built it, what would you have done differently?” And then oftentimes you actually get something where they discovered only throughout building that what we did was not optimal. Many times I ask them, “Okay, now that you built it, what can we refactor?” Because then you build it and you feel the pain points. I mean, you don’t feel the pain points, but, right, they discover where there were problems or where things didn’t work in the first try and required more loops.
Peter Steinberger
(01:37:09)
So almost every time I merge a PR, build a feature, afterwards I ask, “Hey, what can we refactor?” Sometimes it’s like, “No, there’s nothing big,” or usually they say, “Yeah, this thing you should really look at.” But that took me quite a while… You know, that flow took me lots of time to understand, and if you don’t do that, you’ll eventually paint yourself into a corner. You have to keep in mind…
Peter Steinberger
(01:37:42)
… they work very much like humans. If I write software by myself, I also build something and then I feel the pain points, and then I get this urge that I need to refactor something. So I can very much empathize with the agent, and you just need to use the context.
Lex Fridman
(01:38:00)
Mm-hmm.
Peter Steinberger
(01:38:00)
Or, like, you also use the context to write tests. And Codex, Opus, like, the models, they usually do that by default, but I still often ask the question, “Hey, do we have enough tests?” “Yeah, we tested this and this, but this corner case could be something: write more tests.” Um, documentation. Now that the whole context is full, I mean, I’m not saying my documentation is great, but it’s not bad, and pretty much everything is LLM-generated. So you have to approach it as you build features, as you change something. I’m like, “Okay, write documentation. What file would you pick? What file name? Where would that fit in?” And it gives me a few options.
Peter Steinberger
(01:38:48)
And I’m like, “Oh, maybe also add it there,” and that’s all part of the session.

GPT Codex 5.3 vs Claude Opus 4.6

Lex Fridman
(01:38:52)
Maybe you can talk about the current two big competitors in terms of models, Claude Opus 4.6 and GPT-5.3 through Codex. Which is better? How different are they? I think you’ve spoken about Codex reading more and Opus being more willing to take action faster and maybe being more creative in the actions it takes. But because-
Peter Steinberger
(01:39:20)
Yeah
Lex Fridman
(01:39:20)
… Codex reads more, it’s able to deliver maybe better code. Can you speak to the differences there?
Peter Steinberger
(01:39:29)
I have a lot of thoughts there. As a general-purpose model, Opus is the best. Like, for OpenClaw, Opus is extremely good in terms of role play, really going into the character that you give it. It’s very good at… It used to be really bad, but it made a real arc to being really good at following commands. It is usually quite fast at trying something. It’s much more tailored to trial and error. It’s very pleasant to use. In general, it’s almost like Opus is a little bit too American. And I shouldn’t… Maybe that’s a bad analogy. I’ll probably get roasted for that.
Lex Fridman
(01:40:27)
Yeah, I know exactly. It’s ’cause Codex is German. Is that what you’re saying?
Peter Steinberger
(01:40:32)
It’s-
Lex Fridman
(01:40:32)
Actually, now that you say it, it makes perfect sense.
Peter Steinberger
(01:40:34)
Or you could, you could… Sometimes I- Sometimes I explain it-
Lex Fridman
(01:40:38)
I will never be able to unthink what you just said. That’s so true.
Peter Steinberger
(01:40:42)
But you also know that a lot of the Codex team is, like, European, um- … so maybe there’s a bit more to it.
Lex Fridman
(01:40:49)
That’s so true. Oh, that’s funny.
Peter Steinberger
(01:40:51)
But also, Anthropic, they fixed it a little bit. Opus used to say, “You’re absolutely right,” all the time, and today it still triggers me. I can’t hear it anymore. It’s not even a joke. This was, like, the meme, right? “You’re absolutely right.”
Lex Fridman
(01:41:09)
You’re allergic to sycophancy a little bit.
Peter Steinberger
(01:41:11)
Yeah. I, I can’t. Some other comparison is like, Opus is like the coworker that is a little silly sometimes, but it’s really funny and you keep him around. And Codex is like the, the weirdo in the corner that you don’t wanna talk to, but is reliable and gets shit done.
Lex Fridman
(01:41:30)
Yeah.
Peter Steinberger
(01:41:32)
Ultimately-
Lex Fridman
(01:41:36)
This all feels very accurate.
Peter Steinberger
(01:41:39)
I mean, ultimately, if you’re a skilled driver, you can get good results with any of those latest gen models. Um, I like Codex more because it doesn’t require so much charade. It will just, it will just read a lot of code by default. Opus, you really have to, like, you have to have plan mode. You have to push it harder to, like, go in these directions because it’s, it’s just like, like, “Yeah, can I go in? Can I go in?” You know?
Lex Fridman
(01:42:08)
Yeah.
Peter Steinberger
(01:42:08)
It will just run off very fast, and then you get a very localized solution. I think the difference is in the post-training. It’s not like the raw model intelligence is so different; I think they just give them different goals. And no model is better in every aspect.
Lex Fridman
(01:42:29)
What about the code that it generates? The, the… In terms of the actual quality of the code, is it basically the same?
Peter Steinberger
(01:42:36)
If you drive it right, Opus sometimes can even make more elegant solutions, but it requires more skill. It’s harder to have so many sessions in parallel with Claude Code because it’s more interactive. And I think that’s what a lot of people like, especially if they come from coding themselves. Whereas with Codex, it’s much more: you have a discussion, and then it’ll just disappear for 20 minutes. Like, even AMP, they now added a deep mode. They finally… I mocked them, you know: they finally saw the light. And then they had this whole talk about how you have to approach it differently, and I think that’s where people struggle when they just try Codex after trying Claude Code: it’s slightly different, it’s less interactive.
Peter Steinberger
(01:43:28)
It’s like, I have quite long discussions sometimes, and then it goes off. And then, yeah, it doesn’t matter if it takes 10, 20, 30, 40, 50 minutes or longer, you know? Like, the longest thing was, like, six hours. The latest generation can be very, very persistent until it works. If there’s a clear solution, like, “This is what I want at the end, so make it work,” the model will work really hard to really get there. So I think ultimately they both need similar time, but on Claude it’s often a little bit more trial and error. And Codex sometimes overthinks. I prefer that. I prefer the dry version where I have to read less over the more interactive, nice way.
Peter Steinberger
(01:44:27)
People like that so much, though, that OpenAI even added a second mode with, like, a more pleasant personality. I haven’t even tried it yet. I kinda like the bland one.
Lex Fridman
(01:44:37)
Mm-hmm.
Peter Steinberger
(01:44:38)
Yeah, ’cause it … I care about efficiency when I build it-
Lex Fridman
(01:44:45)
Right
Peter Steinberger
(01:44:45)
… and I have fun in the very act of building. I don’t need to have fun with my agent who builds. I have fun with the product it builds, where I can then test those features.
Lex Fridman
(01:44:57)
How long does it take for you to adjust if you switch? I don’t know when the last time you switched was. But to adjust to the feel. ’Cause you talked about how you have to really feel where a model is strong, how to navigate it, how to prompt it, all that kind of stuff. Just by way of advice, ’cause you’ve been through this journey of playing with models: how long does it take to get a feel?
Peter Steinberger
(01:45:26)
If, if someone switches, I would give it a week until you actually develop a gut feeling for it.
Lex Fridman
(01:45:32)
Yeah.
Peter Steinberger
(01:45:33)
That’s… I think some people also make the mistake that they pay 200 for the Claude Code version, and then they pay 20 bucks for the OpenAI version. But if you pay for the 20-bucks version, you get the slow version. So your experience will be terrible, because you’re used to this very interactive, very good system, and you switch to something you have very little experience with, and it’s gonna be very slow. So I think OpenAI shot themselves a little bit in the foot by making the cheap version also slow. I would at least give a small preview of the fast version, the experience that you get when you pay 200, before degrading it to being slow.
Lex Fridman
(01:46:23)
Mm-hmm.
Peter Steinberger
(01:46:23)
I mean, they made it better. And they have plans to make it a lot better, if the Cerebras stuff is true. But yeah, it’s a skill. It takes time. Even if you play… If you have a regular guitar and you switch to an electric guitar, you’re not gonna play well right away. You have to learn how it feels.
Lex Fridman
(01:46:42)
There’s also this extra psychological effect that you’ve spoken about, which is hilarious to watch. When a new model comes out, people try that model, they fall in love with it: “Wow, this is the smartest thing of all time.” And then, you can just watch the Reddit posts over time, they start saying, “We believe the intelligence of this model has been gradually degrading.” It says something about human nature and just the way our minds work, when it’s most likely the case that the intelligence of the model is not degrading. It’s in fact you getting used to a good thing.
Peter Steinberger
(01:47:22)
And your project grows, and you’re adding slop, and you probably don’t spend enough time to think about refactors. And you’re making it harder and harder for the agent to work on your slop. And then, and then suddenly, “Oh, now it’s hard. Oh no, it’s not working as well anymore.” What’s the motivation for, like, one of those AI companies to actually make their model dumber? Like, at most, it will make it slower if, if the server load’s too high. But, like, quantizing the model so you have a worse experience, so you go to the competitor?
Lex Fridman
(01:47:56)
Yeah.
Peter Steinberger
(01:47:56)
That just doesn’t seem like a very smart move in any way.

Best AI agent for programming

Lex Fridman
(01:47:59)
What do you think about Claude Code in comparison to OpenClaw? So, Claude Code and maybe the Codex coding agent? Do you see them as kind of competitors?
Peter Steinberger
(01:48:11)
I mean, first of all, competition is fun when it’s not really a competition.
Lex Fridman
(01:48:16)
Yeah.
Peter Steinberger
(01:48:16)
Like, I’m happy if all it did is inspire people to build something new. Cool. Um, I still use Codex for the building. I know a lot of people use OpenClaw to build stuff, and I worked hard on it to make that work. And I do smaller stuff with it in terms of code. But if I work hours and hours, I want a big screen, not WhatsApp, you know? So for me, a personal agent is much more about my life. Or like a coworker: I give you a GitHub URL, like, “Hey, try out this CLI. Does it actually work? What can we learn?” Blah, blah, blah. But when I’m deep in the flow, I want to have multiple things, and it being very visible what it does. So I don’t see it as a competition. It’s different things.
Lex Fridman
(01:49:16)
But do you think there’s a future where the two kind of combine? Like, your personal agent is also your best co-programming partner?
Peter Steinberger
(01:49:29)
Yeah, totally. I think this is where the puck’s going, that this is gonna be more and more your operating system.
Lex Fridman
(01:49:37)
The operating system.
Peter Steinberger
(01:49:37)
And it already… It’s so funny. I added support for sub-agents and also for TTY support, so it could actually run Claude Code or Codex.
Lex Fridman
(01:49:52)
Mm-hmm.
Peter Steinberger
(01:49:53)
And because mine’s a little bit bossy, it started it and told it, basically, “Who’s the boss.” And it was like, “Ah, Codex is obeying me.”
Lex Fridman
(01:50:05)
Oh, this is a power struggle.
Peter Steinberger
(01:50:06)
And also, the current interface is probably not the final form. If you think more globally, we copied Google for agents: you have, like, a prompt, and then you have a chat interface. That, to me, very much feels like when we first created television and people recorded radio shows for television and you saw that on TV.
Lex Fridman
(01:50:39)
Mm-hmm.
Peter Steinberger
(01:50:39)
I think there are better ways we will eventually communicate with models, and we are still very early in this “how will it even work” phase. So it will eventually converge, and we will also figure out whole different ways to work with those things.
Lex Fridman
(01:51:05)
One of the other components of workflow is the operating system. So I told you offline that for the first time in my life, I’m expanding my sort of realm of exploration to the Apple ecosystem, to Macs, iPhones and so on. For most of my life I’ve been a Linux, Windows and WSL1, WSL2 person, which I think are all wonderful, but I’m expanding to also trying Mac. Because it’s another way of building, and it’s also the way of building that a large part of the community currently utilizing LLMs and agents is using. That’s the reason I’m expanding to it. But is there something to be said about the different operating systems here? We should say that OpenClaw is supported across operating systems.
Peter Steinberger
(01:51:56)
Yeah.
Lex Fridman
(01:51:57)
I saw WSL2 recommended inside Windows for certain operations, but then Windows, Linux, and macOS are obviously supported.
Peter Steinberger
(01:52:07)
Yeah, it should even work natively on Windows. I just didn’t have enough time to properly test it. And you know, the last 10% of software is always harder than the first 90%, so I’m sure there are some dragons left that I’ll eventually nail down. My road was, for a long time, Windows, just because I grew up with that. Then I switched and had a long phase with Linux, built my own kernels and everything. And then I went to university, and I had my hacky Linux thing, and saw this white MacBook, and I just thought this is a thing of beauty, the white plastic one. And then I converted to Mac, mostly because I was sick that audio wouldn’t work on Skype, and all the other issues that Linux had for a long time.
Peter Steinberger
(01:53:01)
And then I just stuck with it, and then I dug into iOS, which required macOS anyhow, so it was never a question. I think Apple lost a little bit of its lead in terms of native. Native apps used to be so much better, and especially on the Mac, there are more people that build software with love. On Windows… Windows has much more, and, like, function-wise, there’s just more, period. But a lot of it felt more functional and less done with love. I mean, the Mac always attracted more designers and people, I felt…
Peter Steinberger
(01:53:50)
Even though, like, it often has fewer features, it had more delight-
Lex Fridman
(01:53:54)
Mm-hmm
Peter Steinberger
(01:53:55)
… and playfulness. So I always valued that. But in the last few years, many times I actually prefer… Oh God, people are gonna roast me for that, but I prefer Electron apps, because they work, and native apps, especially if it’s a native app for a web service, are often lacking features. I mean, I’m not saying it couldn’t be done; it’s more a focus thing: for many, many companies, native was not that big of a priority. But if they build an Electron app, it’s the only app, so it is a priority, and there’s a lot more code sharing possible. And I build a lot of native Mac apps. I love it. I can’t help myself. I love crafting little Mac menu bar tools. Like, I built one to monitor your Codex usage.
Peter Steinberger
(01:54:58)
I built one I call Trimmy that’s specifically for agentic use. When you select text that goes over multiple lines, it removes the newlines so you can actually paste it to the terminal. That was, again, a case of: this is annoying me, and after the 20th time of it annoying me, I just built it. There is a cool Mac app for OpenClaw that I don’t think many people have discovered yet, also because it still needs some love. It feels a little bit too much like the Homer car right now, because I just experiment a lot with it. It lacks the polish.
Lex Fridman
(01:55:32)
So you still… I mean, you still love it. You still, you still love adding to the delight of that operating system.
Peter Steinberger
(01:55:37)
Yeah, but then you realize… Like, I also built one, for example, for GitHub. And if you use SwiftUI, the latest and greatest at Apple: it took them forever to build something to show an image from the web. Now we have AsyncImage, but… I added support for it, and then some images would just not show up, or be very slow. And I had a discussion with Codex, like, “Hey, why is there a bug?” And even Codex said, “Yeah, there’s this AsyncImage, but it’s really more for experimenting and it should not be used in production.” But that’s Apple’s answer to showing images from the web. This shouldn’t be so hard, you know.
Lex Fridman
(01:56:19)
Yeah.
Peter Steinberger
(01:56:19)
This is, like, insane. How am I in 2026 and my agent tells me, “Don’t use the stuff Apple built, because, yeah, it’s there, but it’s not good”? And, like, this is now in the weeds. To me this is like… They had so much head start and so much love, and they kind of just blundered it and didn’t evolve it as much as they should.
Lex Fridman
(01:56:50)
But there’s also just the practical reality. If you look at Silicon Valley, most of the developer world that’s playing with LLMs and agentic AI, they’re all using Apple products. And at the same time, Apple is not really leaning into that. They’re not opening up and playing and working together with it.
Peter Steinberger
(01:57:12)
Isn’t it funny how they completely blundered AI, and yet everybody’s buying Mac Minis?
Lex Fridman
(01:57:19)
How… What… Does that even make sense? You’re, you’re, you’re quite possibly the world’s greatest Mac salesman of all time.
Peter Steinberger
(01:57:29)
No, you don’t need a Mac Mini to install OpenClaw. You can install it on the web. There’s a concept called nodes, so you can make your computer a node and it will do the same. There is something to be said for running it on separate hardware; that right now is useful. There’s a big argument for the browser. You know, I built some agentic browser use in there. And, I mean, it’s basically Playwright with a bunch of extras to make it easier for agents.
Lex Fridman
(01:58:06)
Playwright is a library that controls the browser.
Peter Steinberger
(01:58:08)
Yeah.
Lex Fridman
(01:58:08)
It’s really nice, easy to use.
Peter Steinberger
(01:58:09)
And our internet is slowly closing down. Like, there’s a whole movement to make it harder for agents to use. So if you do the same in a data center and websites detect that it’s an IP from a data center, the website might just block you, or make it really hard, or put a lot of captchas in the way of the agent. I mean, agents are quite good at happily clicking “I’m not a robot.”
Lex Fridman
(01:58:33)
Yeah.
Peter Steinberger
(01:58:33)
But having that on a residential IP makes a lot of things simpler. So there are ways. Yeah. But it really does not need to be a Mac. It can be any old hardware. I always say, maybe use the opportunity to get yourself a new MacBook or whatever computer you use, and use the old one as your server instead of buying a standalone Mac Mini. But then again, there are a lot of very cute things people build with Mac Minis that I like.
Lex Fridman
(01:59:08)
Yeah.
Peter Steinberger
(01:59:08)
And no, I don’t get commission from Apple. They didn’t really communicate much.
Lex Fridman
(01:59:16)
It’s sad. It’s sad. Can you actually speak to what it takes to get started with OpenClaw? I mean, there’s a lot of people… What is it? Somebody tweeted at you, “Peter, make OpenClaw easy to set up for everyday people. 99.9% of people can’t get access to OpenClaw and have their own lobster because of the technical difficulties in getting it set up. Make OpenClaw accessible to everyone, please.” And you replied, “Working on that.” From my perspective, it seems there’s a bunch of different options and it’s already quite straightforward, but I suppose that’s if you have some developer background.
Peter Steinberger
(01:59:50)
I mean, right now you have to paste a one-liner into the terminal.
Lex Fridman
(01:59:53)
Right.
Peter Steinberger
(01:59:54)
And there’s also an app. The app kind of does that for you, but there should be a Windows app. The app needs to be easier and more loved. The configuration should potentially be web-based, or in the app. And I started working on that, but honestly, right now I want to focus on security aspects. And once I’m confident that this is at a level that I can recommend to my mom, then I’m going to make it simpler. Like, I…
Peter Steinberger
(02:00:27)
Right now-
Lex Fridman
(02:00:28)
You want to make it harder so that it doesn’t scale as fast as it’s scaling.
Peter Steinberger
(02:00:32)
Yeah, it would be nice if it wouldn’t… I mean, that’s hard to say, right? But if the growth were a little slower, that would be helpful, because people are expecting inhuman things from a single human being. And yes, I have some contributors, but that whole machinery I only started a week ago, so that needs more time to figure out. And not everyone has all day to work on that.
Lex Fridman
(02:01:00)
There are some beginners listening to this, programming beginners. What advice would you give them about, let’s say, joining the agentic AI revolution?
Peter Steinberger
(02:01:12)
Play. Playing is the best way to learn. If you are a little bit of a builder, you have an idea in your head that you want to build, just build that, or give it a try. It doesn’t need to be perfect. I built a whole bunch of stuff that I don’t use. It doesn’t matter. It’s the journey.
Lex Fridman
(02:01:31)
Mm-hmm.
Peter Steinberger
(02:01:31)
You know? Like the philosophical way, that the end doesn’t matter, the journey matters. Have fun.
Lex Fridman
(02:01:37)
Mm-hmm.
Peter Steinberger
(02:01:37)
My God, those things… I don’t think I ever had so much fun building things, because I can focus on the hard parts now. A lot of coding… I always thought I liked coding, but really I like building.
Lex Fridman
(02:01:50)
Yeah.
Peter Steinberger
(02:01:50)
And whenever you don’t understand something, just ask. You have an infinitely patient answering machine that can explain anything to you at any level of complexity. One time I asked, “Hey, explain this to me like I’m eight years old,” and it started giving me a story with crayons and stuff. And I’m like, “No, not like that. Up the age a little bit, you know?” I’m not an actual child; I just needed simpler language for a tricky database concept that I didn’t grok the first time. But you can just ask things. It used to be that I had to go on Stack Overflow or ask on Twitter, and then maybe two days later I’d get a response.
Peter Steinberger
(02:02:37)
Or I had to try for hours. And now you can just ask stuff. You have, like, your own teacher. You know there are statistics that you learn faster if you have your own teacher. It’s like you have this infinitely patient machine. Ask it.
Lex Fridman
(02:02:53)
But what would you say? What’s the easiest way to play? So maybe OpenClaw is a nice way to play: you can set everything up and then you can chat with it.
Peter Steinberger
(02:03:03)
You can also just experiment with it and modify it. Ask your agent. I mean, there are infinite ways it can be made better. Play around, make it better.
Lex Fridman
(02:03:18)
Mm-hmm.
Peter Steinberger
(02:03:19)
More generally, if you’re a beginner and you actually wanna learn how to build software really fast, get involved in open source. It doesn’t need to be my project. In fact, maybe don’t use my project, because my backlog is very large. But I learned so much from open source. Just be humble. Maybe don’t send a pull request right away. But there are many other ways you can help out. There are many ways you can learn by just reading code, by being on Discord or wherever people are, and just understanding how things are built. I don’t know, Mitchell Hashimoto builds Ghostty, the terminal, and he has a really good community, and there are so many other projects. Pick something that you find interesting and get involved.
Lex Fridman
(02:04:15)
Do you recommend that people who don’t know how to program, or don’t really know how to program, also learn to program? You can get quite far right now by just using natural language, right? Do you still see a lot of value in reading the code, understanding the code, and being able to write a little bit of code from scratch?
Peter Steinberger
(02:04:38)
It definitely helps.
Lex Fridman
(02:04:39)
It’s hard for you to answer that-
Peter Steinberger
(02:04:41)
Yeah
Lex Fridman
(02:04:42)
… because you don’t know what it’s like to do any of this without the base knowledge. Like, you might take for granted just how much intuition you have about the programming world, having programmed so much, right?
Peter Steinberger
(02:04:54)
There are people that are high-agency and very curious, and they get very far even though they have no deep understanding of how software works, just because they ask questions and questions and-
Lex Fridman
(02:05:08)
Mm-hmm
Peter Steinberger
(02:05:08)
… and agents are infinitely patient. Part of what I did this year is I went to a lot of iOS conferences, because that’s my background, and just told people, “Don’t see yourself as an iOS engineer anymore. You need to change your mindset. You’re a builder.” And you can take a lot of the knowledge of how to build software into new domains, and with all of the finer-grained details, agents can help. You don’t have to know how to slice an array or what the correct template syntax is or whatever, but you can use all your general knowledge, and that makes it much easier to move from one tech galaxy into another. And oftentimes, there are languages that make more or less sense depending on what you build, right?
Peter Steinberger
(02:05:58)
So for example, when I build simple CLIs, I like Go. Well, I actually don’t like Go. I don’t like the syntax of Go. I’d never even considered the language. But the ecosystem is great, it works great with agents, it is garbage-collected. It’s not the highest-performing one, but it’s very fast. And for those types of CLIs that I build, Go is a really good choice. So I use a language I’m not even a fan of. That’s my main go-to for CLIs.
Lex Fridman
(02:06:29)
Isn’t that fascinating? Here’s a programming language you would’ve never used if you had to write it from scratch, and now you’re using it because LLMs are good at generating it and it has some of the characteristics that make it resilient, like being garbage-collected.
Peter Steinberger
(02:06:44)
Because everything’s weird in this new world and that just makes the most sense.
Lex Fridman
(02:06:48)
What’s the best… Ridiculous question: what’s the best programming language for the agentic AI world? Is it JavaScript, TypeScript?
Peter Steinberger
(02:06:54)
TypeScript is really good. Sometimes the types can get really confusing, and the ecosystem is a jungle. So for web stuff it’s good. I wouldn’t build everything in it.
Lex Fridman
(02:07:15)
Don’t you think we’re moving there? Like, that everything will eventually be written in JavaScript?
Peter Steinberger
(02:07:22)
The birth and death of JavaScript and we are living through it in real time.
Lex Fridman
(02:07:26)
Like, what does programming look like in 20 years? Right? In 30 years? In 40 years? What do programs and apps look like?
Peter Steinberger
(02:07:32)
You can even ask the question: do we need a programming language that’s made for agents? Because all of those languages are made for humans. So what would that look like? Um, I think there’s a whole bunch of interesting questions that we’ll discover. And also, because everything is now world knowledge, in many ways things will stagnate, because if you build something new and the agent has no idea about it, that’s gonna be much harder to use than something that’s already there. Um, when I build Mac apps, I build them in Swift and SwiftUI, partly because I like pain, partly because the deepest level of system integration I can only get through there.
Peter Steinberger
(02:08:18)
And you clearly feel a difference if you click on an Electron app and it loads a web view in the menu. It’s just not the same. Sometimes I also just try new languages, just to get a feel for them.
Lex Fridman
(02:08:32)
Like Zig?
Peter Steinberger
(02:08:33)
Yeah. If it’s something where I care about performance a lot, then it’s a really interesting language. And agents got so much better at it over the last six months, from not really good to a totally valid choice. It’s just still a very young ecosystem. And most of the time you actually care about the ecosystem, right? So if you build something that does inference or goes in the whole running-models direction: Python, very good.
Lex Fridman
(02:09:06)
Mm-hmm.
Peter Steinberger
(02:09:07)
But then if I build stuff in Python and I want a story where I can also deploy it on Windows, not a good choice.
Lex Fridman
(02:09:13)
Mm-hmm.
Peter Steinberger
(02:09:13)
Sometimes I found projects that kind of did 90% of what I wanted but were in Python, and I wanted an easy Windows story. Okay, just rewrite it in Go. But then if you go towards multiple threads and a lot more performance, Rust is a really good choice. There’s just no single answer, and that’s also the beauty of it. It’s fun.
Peter Steinberger
(02:09:37)
And now it doesn’t matter anymore, you can just literally pick the language that has the, the most fitting characteristics and ecosystem-
Lex Fridman
(02:09:45)
Mm-hmm
Peter Steinberger
(02:09:46)
… for your problem domain. And yeah, it might be… You might have s-… You might be a little bit slow in reading the code, but not really. Y- I think you, you pick stuff up really fast, and you can always ask your agent.

Life story and career advice

Lex Fridman
(02:09:59)
So there’s a lot of programmers and builders who draw inspiration from y- your story. Just the way you carry yourself, your choice of making OpenClaw open source, the, the way you have fun building and exploring, and doing that, for the most part, alone or on a small team. So by way of advice, what metric should be the goal that they would be optimizing for? What would be the metric of success? Would it be happiness? Is it money? Is it positive impact for people who are dreaming of building? ‘Cause you went through an interesting journey. You’ve achieved a lot of those things, and then you fell out of love with programming a little bit for a time.
Peter Steinberger
(02:10:47)
I was just burning too bright for too long. I, I ran… I started PSPDFKit, s- and ran it for 13 years, and it was high stress. Um, I had to learn all these things fast and hard, like how to manage people, how to bring people on, how to deal with customers, how to do…
Lex Fridman
(02:11:14)
So it wasn’t just programming stuff, it was people stuff.
Peter Steinberger
(02:11:17)
The stuff that burned me out was mostly people stuff. I, I don’t think burnout is working too much. Maybe to a degree. Everybody’s different. You know, I c- I cannot speak in a- in absolute terms, but for me, it was much more differences with my, my co-founders, conflicts, or, like, really high stress situation with customers that eventually grinded me down. And then when… luckily we, we got a really good offer for, like, putting the company to the next level and I, I already kinda worked two years on making myself obsolete. So at this point I could leave, and, and then I just… I was sitting in front of the screen and I felt like, you know Austin Powers where they suck the mojo out?
Lex Fridman
(02:12:13)
Yeah.
Peter Steinberger
(02:12:14)
Uh, I g- I was like, m- m- it was, like, gone. Like, I couldn’t… I couldn’t get code out anymore. I was just, like, staring and feeling empty, and then I, I just stopped. I, I booked, like, a one-way trip to Madrid and, and, and just, like, spent a t- some t- sometime there. I felt like I had to catch up on life, so I did a whole, a whole bunch of life catching up stuff.
Lex Fridman
(02:12:47)
Did you go through some lows during that period? And, you know, maybe advice on how to?
Peter Steinberger
(02:12:56)
Maybe advice on how to approach life: if you think, "Oh yeah, work really hard and then I'll retire," I don't recommend that. Because the idea of, "Oh yeah, I just enjoy life now," maybe it's appealing, but right now I enjoy life the most I've ever enjoyed life. Because if you wake up in the morning and you have nothing to look forward to, you have no real challenge, that gets very boring, very fast. And then when you're bored, you're gonna look for other ways to stimulate yourself, and then maybe, maybe that's drugs, you know? But that eventually also gets boring and you look for more, and that will lead you down a very dark path.

Money and happiness

Lex Fridman
(02:13:57)
But you also showed, on the money front… you know, a lot of people in Silicon Valley and the startup world, they maybe overthink it, optimize way too much for money. And you've also shown that it's not like you're saying no to money. I mean, I'm sure you take money, but it's not the primary objective of your life. Can you just speak to that, your philosophy on money?
Peter Steinberger
(02:14:20)
When I built my company, money was never the driving force. It felt more like, like, an affirmation that I did something right. And having money solves a lot of problems. I also think there are diminishing returns the more you have. Like, a cheeseburger is a cheeseburger, and I think if you go too far into, oh, I do private jet and I only travel luxury, you disconnect from society. Um, I, I donated quite a lot. Like, I have a foundation for helping people that weren't so lucky.
Lex Fridman
(02:15:11)
And disconnecting from society is bad on many levels, but one of them is, like, humans are awesome. It's nice to continuously remember the awesomeness in humans.
Peter Steinberger
(02:15:23)
I, I mean, I could afford really nice hotels. The last time I was in San Francisco, I did, for the first time, the OG Airbnb experience-
Lex Fridman
(02:15:30)
Yeah, yeah
Peter Steinberger
(02:15:30)
… and just booked a room. Mostly because I thought, okay, you know, I'm out or I'm sleeping, and I don't like where all the hotels are, and I wanted a different experience. I think, isn't life all about experiences? Like, if you tailor your life towards, "I wanna have experiences," it reduces the need for, "It needs to be good or bad." Like, if people only want good experiences, that's not gonna work, but if you optimize for experiences: if it's good, amazing. If it's bad, amazing, because, like, I learned something, I saw something, did something. I wanted to experience that, and it was amazing. Like, there was this queer DJ there, and I showed her how to make music with Claude Code. And we immediately bonded and had a great time.
Lex Fridman
(02:16:24)
Yeah, there's something about that, you know, couch-surfing, OG Airbnb experience. I still do it to this day. It's awesome. It's humans, and that's why travel is awesome.
Peter Steinberger
(02:16:34)
Yeah.
Lex Fridman
(02:16:34)
Just experiencing the variety, the diversity of humans. And when it's shitty, it's good too, man. If it rains and you're soaked and it's all fucked, and planes, everything is shit, everything is fucked, it's still awesome. If you're able to open your eyes, it's good to be alive.
Peter Steinberger
(02:16:49)
Yeah, and anything that creates emotion and feelings is good.
Peter Steinberger
(02:16:55)
Even… So, so maybe, maybe even the crypto people are good, because they definitely created emotions. I, I don't know if I should go that far.
Lex Fridman
(02:17:02)
No, man. Give them, give them all, give them love. Give them love. Because I do think that online lacks some of the awesomeness of real life.
Peter Steinberger
(02:17:13)
Yeah.
Lex Fridman
(02:17:13)
That's an open problem, how to solve it: how to infuse the online, cyber experience with, I don't know, the intensity that we humans feel when it's in real life. I don't know. I don't know if that's a solvable problem.
Peter Steinberger
(02:17:31)
Well, it's just not possible, because text is very lossy.
Lex Fridman
(02:17:35)
Yeah.
Peter Steinberger
(02:17:35)
You know, sometimes I wish, when I talk to the agent… it should be multimodal, so it also understands my emotions.
Lex Fridman
(02:17:43)
I mean, it, it might move there. It might move there.
Peter Steinberger
(02:17:46)
It will. It will. It totally will.

Acquisition offers from OpenAI and Meta

Lex Fridman
(02:17:49)
I mean, I have to ask you, just curious. I, I know you’ve probably gotten huge offers from major companies. Can you speak to who you’re considering working with?
Peter Steinberger
(02:18:04)
Yeah. So, to, like, explain my thinking a little bit, right: I did not expect this blowing up so much. So there's a lot of doors that opened because of it. I think every big VC company is in my inbox and tried to get 15 minutes with me. So there's, like, this butterfly-effect moment. I could just do nothing and continue, and I really like my life. Valid choice. Almost. Like, I considered it when I wanted to delete the whole thing. I could create a company. Been there, done that. There's so many people that push me towards that and, yeah, like, it could be amazing.
Lex Fridman
(02:19:07)
Which is to say that you, you would probably raise a lot of money in that.
Peter Steinberger
(02:19:10)
Yeah.
Lex Fridman
(02:19:11)
I don't know, hundreds of millions, billions. I don't know. You could just get an unlimited amount of money.
Peter Steinberger
(02:19:15)
Yeah. It just doesn't excite me as much because I feel I did all of that, and it would take a lot of time away from the things I actually enjoy. Same as when I was CEO: I think I learned to do it and I'm not bad at it, and partly I'm good at it. But yeah, that path doesn't excite me too much, and I also fear it would create a natural conflict of interest. Like, what's the most obvious thing I'd do? I'd prioritize it. I'd put out, like, a version that's safe for the workplace. And then what do you do? Like, I get a pull request with a feature like an audit log, but that seems like an enterprise feature, so now I feel I have a conflict of interest between the open-source version and the closed-source version…
Peter Steinberger
(02:20:15)
Or change the license to something like FSL, where you cannot actually use it for commercial stuff: that would, first, be very difficult with all the contributions. And second of all, I like the idea that it's free as in beer, and not free with conditions. Yeah, there are ways you can keep all of that free and still try to make money, but those are very difficult. And you see there are fewer and fewer companies that manage that. Like, even Tailwind: they're used by everyone. Everyone uses Tailwind, right? And then they had to cut 75% of their employees, because they're not making money, because nobody's even going on the website anymore, because it's all done by agents. And just relying on donations, yeah, good luck.
Peter Steinberger
(02:21:04)
Like, if a project of my caliber… if I extrapolate what the typical open-source project would get, it's not a lot. I still lose money on the project, because I made a point of supporting every dependency, except Slack. They are a big company. They can do without me. But all the projects that are done mostly by individuals… right now, all the sponsorship goes right out to my dependencies. And if there's more, I want to, like, buy my contributors some merch, you know?
Lex Fridman
(02:21:43)
So you’re losing money?
Peter Steinberger
(02:21:44)
Yeah, right now I lose money on this.
Lex Fridman
(02:21:46)
So it’s really not sustainable?
Peter Steinberger
(02:21:48)
Uh, I mean, it's, I guess, something between 10 and 20K a month. Which is fine. I'm sure over time I could get that down. Um, OpenAI is helping out a little bit with tokens now. And there are other companies that have been generous. But yeah, still losing money on that. So that's one path I consider, but I'm just not very excited. And then there are all the big labs that I've been talking to. And of those, Meta and OpenAI seem the most interesting.
Lex Fridman
(02:22:32)
Do you lean one way or the other?
Peter Steinberger
(02:22:34)
Yeah. Um… not sure how much I should share there. It's not quite finalized yet. Let's just say, with either of these, my conditions are that the project stays open source. Maybe it's gonna be a model like Chrome and Chromium. Um, I think this is too important to just give to a company and make it theirs. And we didn't even talk about the whole community part, but, like, the thing that I experienced in San Francisco, like at ClawCon, seeing so many people so inspired, having fun and just, like, building shit, and having, like, robots in lobster suits walking around…
Peter Steinberger
(02:23:37)
People told me they didn't experience this level of community excitement since, like, the early days of the internet, like, 10, 15 years. And there were a lot of high-caliber people there. Um, I was amazed. I also was very sensory-overloaded, because too many people wanted to do selfies. But I love this. Like, this needs to stay a place where people can, like, hack and learn. But also, I'm very excited to make this into a version that I can get to a lot of people, because I think this is the year of personal agents, and that's the future. And the fastest way to do that is teaming up with one of the labs. And also, on a personal level, I never worked at a large company, and I'm intrigued. You know, we talked about experiences. Will I like it? I don't know.
Peter Steinberger
(02:24:42)
But I want that experience. Uh, I'm sure, like, if I announce this, then there will be people like, "Oh, he sold out," blah, blah, blah. But the project will continue. From everything I've talked about so far, I can even have more resources for that. Like, both of those companies understand the value: that I created something that accelerates our timeline and that got people excited about AI. I mean, can you imagine? Like, I installed OpenClaw for one of my, I'm sorry, normie friends. I'm sorry, Vahan. But he's just a… You know?
Peter Steinberger
(02:25:32)
Like, he’s-
Lex Fridman
(02:25:33)
Normie with love, yeah. For sure.
Peter Steinberger
(02:25:34)
He's, like, someone who uses the computer, but never really… Like, yeah, uses some ChatGPT sometimes, but not very technical. Wouldn't really understand what I built. So, like, "I'll show you." And I paid for him the 90-buck, 100-buck, I don't know, subscription for Anthropic. And set up everything for him, with, like, WSL on Windows.
Lex Fridman
(02:26:00)
Mm-hmm.
Peter Steinberger
(02:26:00)
I was also curious, would it actually work on Windows, you know? It was a little early. And then within a few days, he was hooked. Like, he texted me about all the things he learned. He even built, like, little tools. He's not a programmer. And then within a few days he upgraded to the $200 subscription. Or euros, because he's in Austria… and he was in love with that thing. That, for me, was, like, a very early product validation. It's like, I built something that captures people. And then, a few days later, Anthropic blocked him, because, based on their rules, using the subscription that way is problematic or whatever. And he was, like, devastated. And then he signed up for MiniMax for 10 bucks a month and uses that.
Peter Steinberger
(02:26:56)
And I think that's silly in many ways, because you just got a 200-buck customer. You just made someone hate your company, and we are still so early. Like, we don't even know what the final form is. Is it gonna be Claude Code? Probably not, you know? Like, it seems very short-sighted to lock down your product so much. All the other companies have been helpful. I'm in the Slack of most of the big labs. Kind of everybody understands that we are still in an era of exploration: in the era of the radio shows on TV, and not a modern TV show that fully uses the format.
Lex Fridman
(02:27:45)
I think, I think you've made a lot of people, like, see the possibility. Sorry, non-technical people: see the possibility of AI, and just fall in love with this idea, and enjoy interacting with AI. And that's a really beautiful thing. I think I also speak for a lot of people in saying, I think you're one of the great people in AI in terms of having a good heart, good vibes, humor, the right spirit. And so, in a sense, this model that you're describing, having an open-source part, and you additionally building a thing inside of a large company, would be great, because it's great to have good people in those companies.
Peter Steinberger
(02:28:36)
Yeah. You know, what people also don't really see is… I made this in three months. I did other things as well. You know, I have a lot of projects. Like, this is not… Yeah, in January, this was my main focus, because I saw the storm coming. But before that, I built a whole bunch of other things. Um, I have so many ideas. Some should be there, some would be much better fitted when I have access to the latest toys. Uh, and I kind of want to have access to, like, the latest toys. So this is important, this is cool, this will continue to exist. My short-term focus is, like, working through those… is it 3,000 PRs by now? I don't even know. Like, there's a little bit of backlog.
Peter Steinberger
(02:29:23)
But this is not gonna be the thing that I’m gonna work until I’m, I’m, I’m 80, you know? This is… This is a window into the future. I’m gonna make this into a cool product. But yeah, I have like… I have more ideas.
Lex Fridman
(02:29:36)
If you had to pick, is there a company you lean towards? So, Meta, OpenAI: is there one you lean towards going to?
Peter Steinberger
(02:29:44)
I spend time with both of those. And it’s funny, because a few weeks ago, I didn’t consider any of this. Um… And it’s really fucking hard. Like-
Lex Fridman
(02:30:05)
Yeah.
Peter Steinberger
(02:30:06)
I have some… I know no people at OpenAI. I love their tech. I think I'm the biggest unpaid Codex advertisement shill. And it would feel so gratifying to, like, put a price on all the work I did for free. And I would love it if something happens and those companies just get merged, because it's like…
Lex Fridman
(02:30:32)
Is this the hardest decision you've ever had to make?
Peter Steinberger
(02:30:39)
No. You know, I had some breakups in the past that feel like it’s the same level.
Lex Fridman
(02:30:43)
Relationships, you mean?
Peter Steinberger
(02:30:45)
Yeah.
Lex Fridman
(02:30:47)
Yeah, yeah, yeah, yeah.
Peter Steinberger
(02:30:48)
And, and I also know that, in the end, they’re both amazing. I cannot go wrong. This is like-
Lex Fridman
(02:30:53)
Right.
Peter Steinberger
(02:30:54)
This is, like, one of the most prestigious and largest… I mean, not largest, but, like, they're both very cool companies.
Lex Fridman
(02:31:02)
Yeah, they both really know scale. So, if you’re thinking about impact, some of the wonderful technologies you’ve been exploring, how to do it securely, and how to do it at scale, such that you can have a positive impact on a large number of people. They both understand that.
Peter Steinberger
(02:31:19)
You know, both Ned and Mark basically played all week with my product, and sent me, like, "Oh, this is great," or, "This is shit, oh, I need to change this," or, like, funny little anecdotes. And people using your stuff is kind of, like, the biggest compliment, and it also shows me that they actually care about it. And I didn't get the same on the OpenAI side. Um, I got to see some other stuff that I find really cool, and they lured me with… I cannot tell you the exact number because of NDA, but you can be creative and think of the Cerebras deal and how that would translate into speed. And it was very intriguing. You know, like, you give me Thor's hammer. Yeah… been lured with tokens. So, yeah.
Lex Fridman
(02:32:34)
So, it's funny. So Mark started tinkering with the thing, essentially having fun with the thing.
Peter Steinberger
(02:32:41)
He got… Like, when he first approached me, I got him in my WhatsApp, and he was asking, "Hey, when can we have a call?" And I'm like, "I don't like calendar entries. Let's just call now." And he was like, "Yeah, give me 10 minutes, I need to finish coding."
Lex Fridman
(02:33:01)
Mm-hmm.
Peter Steinberger
(02:33:01)
Well, I guess that gives you street cred. It’s like, ugh, like, he’s still writing code. You know, he’s-
Lex Fridman
(02:33:07)
Yeah, he does
Peter Steinberger
(02:33:07)
… he didn't drift away into just being a manager, he gets me. That was a good first start. And then I think we had, like, a 10-minute fight about what's better, Claude Code or Codex. Like, that's the thing you first do, like, you casually call-
Lex Fridman
(02:33:24)
Yeah, that’s awesome
Peter Steinberger
(02:33:24)
… someone that, like, owns one of the largest companies in the world, and you have a 10-minute conversation about that.
Lex Fridman
(02:33:30)
Yeah, yeah.
Peter Steinberger
(02:33:30)
And then I think afterwards he called me eccentric but brilliant. But I also had some really, really cool discussions with Sam Altman, and he's very thoughtful, brilliant, and I like him a lot from the little time I had, yeah. I mean, I know some people vilify both of those people. I don't think it's fair.
Lex Fridman
(02:34:15)
I think no matter what, the stuff you're building and the kind of human you are, doing stuff at scale is kinda awesome. I'm excited.
Peter Steinberger
(02:34:24)
I am super pumped. And you know the beauty is if, if it doesn’t work out, I can just do my own thing again. Like, I, I told them, like, I, I don’t do this for the money, I don’t give a fuck. I-
Lex Fridman
(02:34:42)
Yeah.
Peter Steinberger
(02:34:42)
I mean, of course, of course it’s a nice compliment but I wanna have fun and have impact, and that’s ultimately what made my decision.

How OpenClaw works

Lex Fridman
(02:34:58)
Can I ask you about… we've talked about it quite a bit, but maybe just zooming out: how OpenClaw works. We talked about different components, and I want to ask if there's some interesting stuff we missed. So, there's the gateway, there's the chat clients, there's the harness, there's the agentic loop. You said somewhere that everybody should implement an agent loop at some point in their lives.
Peter Steinberger
(02:35:24)
Yeah, because it's, like, the Hello World of AI, you know? And it's actually quite simple.
Lex Fridman
(02:35:30)
Yeah.
Peter Steinberger
(02:35:30)
And it's good to understand that that stuff's not magic. You can easily build it yourself. So, writing your own little Claude Code… I even did this at a conference in Paris, to, like, introduce people to AI. I think it's a fun little practice. And you covered a lot. I think one silly idea I had that turned out to be quite cool is that I built this thing with full system access. So it's, like, you know, with great power comes great responsibility.
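The agent loop Peter calls the "Hello World of AI" can be sketched in a few lines. This is an illustrative stand-in, not OpenClaw's actual code: `ask_model` is any callable mapping a message history to either a tool call or a final answer (in a real harness it would hit an LLM API), and the message shapes and function names are assumptions.

```python
import subprocess

def run_shell(command):
    """The single tool: run a shell command and return its output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def agent_loop(ask_model, user_message, max_steps=5):
    """Alternate between asking the model and executing its tool calls."""
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = ask_model(history)
        if reply.get("tool") == "shell":   # the model wants to run a command
            history.append({"role": "tool", "content": run_shell(reply["command"])})
        else:                              # plain text reply: final answer
            return reply["content"]
    return "(step limit reached)"
```

The whole trick is the loop: the model's tool output is appended to the history and fed back in until it produces a plain answer.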
Peter Steinberger
(02:36:09)
And I was like, “How can I up the stakes a little bit more?”
Lex Fridman
(02:36:13)
Yeah, right.
Peter Steinberger
(02:36:14)
And I just made a… I made it proactive. So, I added a prompt. Initially, it was just a prompt, surprise me. Every, like, half an hour, surprise me, you know? And later on I changed it to be like a little more specific and-
Lex Fridman
(02:36:31)
Yeah
Peter Steinberger
(02:36:31)
… in the definition of surprise. But the fact that I made it proactive, and that it knows you, and that it cares about you, at least it's programmed to that, prompted to do that, and that it follows on your current session, makes it very interesting, because it would just sometimes ask a follow-up question, or, like, "How's your day?"
Lex Fridman
(02:36:53)
Yeah, right.
Peter Steinberger
(02:36:58)
I mean, again, it's a little creepy or weird or interesting. But Heartbeat… in the beginning, and still today, the model doesn't choose to use it a lot.
Lex Fridman
(02:37:16)
By the way, we’re, we’re, we’re talking about Heartbeat, as you mentioned, the thing that regularly-
Peter Steinberger
(02:37:22)
Yeah. Like kicks-
Lex Fridman
(02:37:23)
… Acts.
Peter Steinberger
(02:37:23)
You just kick off the loop.
Lex Fridman
(02:37:25)
Isn’t that just a cron job, man?
Peter Steinberger
(02:37:27)
Yeah, right, I mean, it’s like-
Lex Fridman
(02:37:29)
It’s the cr- the criticisms that you get are hilarious.
Peter Steinberger
(02:37:31)
You can, you can reduce any idea to, like, a silly… Yeah, it's just a cron job in the end. I have, like, separate cron jobs.
Lex Fridman
(02:37:41)
Isn’t love just evolutionary biology manifesting itself and isn’t… aren’t you guys just using each other?
Peter Steinberger
(02:37:49)
And then, yeah, and the project is all just glue of a few different dependencies-
Lex Fridman
(02:37:52)
Yeah
Peter Steinberger
(02:37:53)
… and there’s nothing original. Why do people… Well, you know, isn’t Dropbox just FTP with extra steps?
Lex Fridman
(02:38:00)
Yeah.
Peter Steinberger
(02:38:01)
I found it surprising… I had a shoulder operation a few months ago, so.
Lex Fridman
(02:38:06)
Mm-hmm.
Peter Steinberger
(02:38:08)
And the model rarely used Heartbeat, but then I was in the hospital, and it knew that I had the operation, and it checked up on me. Like, "Are you okay?" Again, apparently, if something's significant in the context, that triggers the Heartbeat, even when it otherwise rarely uses it… And it does that sometimes for people, and that just makes it a lot more relatable.
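The Heartbeat mechanism described above is, as the joke goes, "just a cron job" at heart: a timer that kicks off the agent loop with a standing prompt, and the model decides whether each beat results in a message. A minimal sketch, with an illustrative prompt and function names that are assumptions rather than OpenClaw's actual code:

```python
import time

HEARTBEAT_PROMPT = (
    "Look at the recent session context. If anything seems worth following up "
    "on, send the user a short proactive message; otherwise do nothing."
)

def heartbeat(send_to_agent, interval_s=1800, max_beats=None):
    """Periodically hand the standing prompt to the agent.

    send_to_agent: callable that kicks off one run of the agent loop.
    interval_s: seconds between beats (1800 = 'every half an hour').
    max_beats: optional cap, mainly useful for testing.
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        send_to_agent(HEARTBEAT_PROMPT)  # the agent decides whether to act
        beats += 1
        if max_beats is None or beats < max_beats:
            time.sleep(interval_s)
    return beats
```

The interesting behavior (checking up after the hospital stay) comes entirely from the model reading its context on each beat, not from the scheduler itself.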
Lex Fridman
(02:38:36)
Let me look this up on Perplexity, how OpenClaw works, just to see if I'm missing any of this stuff. Local agent runtime, high-level architecture. There's… Oh, we haven't talked much about skills, I suppose. The skill hub, the tools in the skill layer, but that's definitely a huge component, and there's a huge growing set of skills-
Peter Steinberger
(02:38:55)
You know, you know what I love? That half a year ago, like everyone was talking about MCPs-
Lex Fridman
(02:39:02)
Yeah
Peter Steinberger
(02:39:02)
… and I was like, "Screw MCPs. Every MCP would be better as a CLI." And now this stuff doesn't even have MCP support. I mean, it has it, with asterisks, but not in the core layer, and nobody's complaining.
Lex Fridman
(02:39:23)
Mm-hmm.
Peter Steinberger
(02:39:24)
So my approach is: if you want to extend the model with more features, you just build a CLI, and the model can call the CLI, probably gets it wrong, calls the help menu, and then loads into the context, on demand, what it needs to use the CLI. It just needs a sentence to know that the CLI exists, if it's something the model doesn't know about by default. And for a while, I didn't really care about skills, but skills are actually perfect for that, because they boil down to a single sentence that explains the skill; then the model loads the skill, and that explains the CLI, and then the model uses the CLI. Some skills are, like, raw, but most of the time, it works.
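The on-demand loading Peter describes, one sentence always in context, full details only when needed, can be sketched roughly like this. The skill names, summaries, and CLI commands here are made up for illustration:

```python
# Hypothetical skill registry: the "summary" is the one sentence the model
# always sees; the "body" (which explains the CLI) is loaded only on demand.
SKILLS = {
    "weather": {
        "summary": "Fetch weather data as JSON via the `weather` CLI.",
        "body": "Run `weather --json <city>`; pipe through `jq` to filter fields.",
    },
    "bird": {
        "summary": "Read tweets from the command line via the `bird` CLI.",
        "body": "Run `bird read <tweet-url>`; see `bird --help` for options.",
    },
}

def skill_index():
    """What sits in the system prompt by default: one line per skill."""
    return "\n".join(f"- {name}: {s['summary']}" for name, s in SKILLS.items())

def load_skill(name):
    """Called only when the model decides it needs a skill's details."""
    return SKILLS[name]["body"]
```

The point of the design: context cost stays at one sentence per skill until the model actually chooses to use one.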
Lex Fridman
(02:40:16)
It's interesting. Um, I'm asking Perplexity about MCP versus skills, because this kind of requires a hot take that's quite recent, because your general view is MCPs are dead-ish. So MCP is a more structured thing. If you listen to Perplexity here: MCP is "what can I reach?" APIs, databases, services, files, via a protocol. So, a structured protocol for how you communicate with a thing. And then skills are more "how should I work?" Procedures, helper scripts, and prompts, often written in a kind of semi-structured natural language, right? And so, technically, skills could replace MCP if you have a smart enough model.
Peter Steinberger
(02:41:00)
I think the main beauty is that models are really good at calling Unix commands. So if you just add another CLI, that's just another Unix command in the end. And MCP, that has to be added in training. That's not a very natural thing for the model. It requires a very specific syntax. And the biggest thing: it's not composable. So imagine I have a service that gives me weather data, gives me the temperature, the average temperature, rain, wind and all the other stuff, and I get, like, this huge blob back. As a model, I always have to get the huge blob back. I have to fill my context with that huge blob and then pick what I want. There's no way for the model to naturally filter, unless I think about it proactively and add a filtering path into my MCP.
Peter Steinberger
(02:41:53)
But if I built the same thing as a CLI, and it gave me this huge blob, the model could just add a jq command and filter it itself, and then only get what it actually needs. Or maybe even compose it into a script to, like, do some calculations with the temperature and only give me the exact output, and you have no context pollution. Again, you can solve that with, like, sub-agents and more charades, but those are just workarounds for something that might not be the optimal way. It definitely was good that we had MCPs, because they pushed a lot of companies towards building APIs, and now I can, like, look at an MCP and just make it into a CLI.
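A toy illustration of the composability point. The weather blob and function names are made up: an MCP-style tool returns the whole blob into the model's context, while a CLI-style call lets the agent filter first (think `weather --json | jq '.temp_c'`), so only the needed value ever enters the context.

```python
import json

# Hypothetical weather-service response: one useful field plus lots of bulk.
WEATHER_BLOB = json.dumps({
    "temp_c": 21.5, "wind_kph": 12.0, "rain_mm": 0.0,
    "hourly": [{"hour": h, "temp_c": 20 + h % 3} for h in range(24)],
})

def mcp_style(context):
    """Whole blob lands in context; the model picks the field afterwards."""
    context.append(WEATHER_BLOB)
    return json.loads(WEATHER_BLOB)["temp_c"]

def cli_style(context):
    """Filter before anything reaches the context, jq-pipeline style."""
    filtered = str(json.loads(WEATHER_BLOB)["temp_c"])
    context.append(filtered)
    return float(filtered)
```

Both paths produce the same answer; the difference is how many tokens of blob end up sitting in the model's context afterwards.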
Lex Fridman
(02:42:37)
Mm-hmm.
Peter Steinberger
(02:42:37)
But this inherent problem, that MCPs by default clutter up your context, plus the fact that most MCPs are not made well, in general makes it just not a very useful paradigm. There are some exceptions, like Playwright, for example, which requires state, and it's actually useful. That is an acceptable choice.
Lex Fridman
(02:43:05)
So Playwright you use for browser use, which, I think it's already in OpenClaw, is quite incredible, right?
Peter Steinberger
(02:43:11)
Yeah.
Lex Fridman
(02:43:12)
You can basically do everything, most things you can think of using browser use.
Peter Steinberger
(02:43:17)
That gets into the whole arc of: every app is just a very slow API now, whether they want it or not. And through personal agents, a lot of apps will disappear. You know, like, I built a CLI for Twitter. I mean, I just reverse-engineered their website and used the internal API, which is not very allowed.
Lex Fridman
(02:43:50)
It’s called Bird, short-lived.
Peter Steinberger
(02:43:53)
It was called Bird, because the bird had to disappear.
Lex Fridman
(02:43:57)
The, the wings were clipped.
Peter Steinberger
(02:43:59)
All they did is they just made access slower. Yeah, you're not actually taking a feature away, but now, if your agent wants to read a tweet, it actually has to open the browser and read the tweet. And it will still be able to read the tweet. It will just take longer. It's not like you're making something that was possible not possible. No. Now it's just a bit slower. So it doesn't really matter if your service wants to be an API or not. If I can access it in the browser… it's an API. It's a slow API.
Lex Fridman
(02:44:35)
Can you empathize with their situation? Like, what would you do if you were Twitter, if you were X? Because they’re basically trying to protect against other large companies scraping all their data.
Peter Steinberger
(02:44:45)
Yeah.
Lex Fridman
(02:44:46)
But in so doing, they’re cutting off like a million different use cases for smaller developers that actually want to use it for helpful cool stuff.
Peter Steinberger
(02:44:54)
I think a very low per-day baseline per account that allows read-only access would solve a lot of problems. There are plenty of automations where people create a bookmark and then use OpenClaw to, like, find the bookmark, do research on it, and then send you an email-
Lex Fridman
(02:45:16)
Mm-hmm
Peter Steinberger
(02:45:16)
… with, like, more details on it or a summary. That’s a cool approach. I also want all my bookmarks somewhere to search. I would still like to have that.
Lex Fridman
(02:45:26)
So, read-only access for the bookmarks you make on X. That seems like an incredible application, because a lot of us find a lot of cool stuff on X and bookmark it, that's the general purpose of X. It's like, holy shit, this is awesome. Oftentimes you bookmark so many things you never look back at them.
Peter Steinberger
(02:45:40)
Yeah.
Lex Fridman
(02:45:40)
It would be nice to have tooling that organizes them and allows you to research it further.
Peter Steinberger
(02:45:44)
Yeah, I mean, to be frank, I told Twitter proactively, “Hey, I built this and there’s a need.” And they’ve been really nice, but also like, “Take it down.” Fair. Totally fair. But I hope this woke the team up a little bit to the fact that there’s a need. And if all you do is make it slower, you’re just reducing access to your platform. I’m sure there’s a better way. I’m also very much against any automation on Twitter. If you tweet at me with AI, I will block you. No first strike. As soon as it smells like AI, and AI still has a smell.

AI slop

Lex Fridman
(02:46:31)
Mm-hmm.
Peter Steinberger
(02:46:32)
Especially on tweets. It’s very hard to tweet with AI in a way that looks completely human.
Lex Fridman
(02:46:38)
Mm-hmm.
Peter Steinberger
(02:46:38)
And then I block. Like, I have a zero-tolerance policy on that. And I think it would be very helpful if tweets done via API were marked. Maybe there are some special cases where… But there should be a very easy way for agents to get their own Twitter account. Um…
Lex Fridman
(02:47:04)
Mm-hmm.
Peter Steinberger
(02:47:07)
We need to rethink social platforms a little bit if we go towards a future where everyone has their agent, and agents maybe have their own Instagram profiles or Twitter accounts to, like, do stuff on my behalf. I think it should be very clearly marked that they are doing stuff on my behalf and it’s not me. Because content is now so cheap. Eyeballs are the expensive part. And I find it very triggering when I read something and then I’m like, oh no, this smells like AI.
Lex Fridman
(02:47:41)
Yeah. Like, where is this headed in terms of what we value about the human experience? It feels like we’ll move more and more towards in-person interaction, and we’ll just communicate. We’ll talk to our AI agent to accomplish different tasks, to learn about different things, but we won’t value online interaction because there’ll be so much AI slop that smells and so many bots that it’s difficult.
Peter Steinberger
(02:48:15)
Well, if it’s smart, then it shouldn’t be difficult to filter. And then I can look at it if I want to. But yeah, this is, like, a big thing we need to solve right now. Especially on this project, I get so many emails that are, let’s say nicely, agentically written.
Lex Fridman
(02:48:36)
Yeah.
Peter Steinberger
(02:48:36)
But I’d much rather read your broken English than your AI slop. You know, of course there’s a human behind it, and yet they prompt it. I’d much rather read your prompt than what came out. Um, I think we’re reaching a point where I value typos again.
Lex Fridman
(02:48:56)
Yeah.
Peter Steinberger
(02:48:56)
Like… And I mean, it also took me a while to come to that realization. On my blog, I experimented with creating a blog post with agents, and ultimately it took me about the same time to steer the agent towards something I like. But it missed the nuances of how I would write it. You know, you can steer it towards your style, but it’s not gonna be all your style. So I completely moved away from that. Everything I blog is organic, handwritten, and maybe I use AI to fix my worst typos. But there’s value in the rough parts of an actual human.
Lex Fridman
(02:49:53)
Isn’t that awesome? Isn’t that beautiful? That now because of AI we value the raw humanity in each of us more.
Peter Steinberger
(02:50:02)
I also, I also realized this thing that I, I rave about AI and use it so much for anything that’s code, but I’m allergic if it’s stories.
Lex Fridman
(02:50:12)
Right. Yeah.
Peter Steinberger
(02:50:14)
Also, documentation, still fine with AI. You know, better than nothing.
Lex Fridman
(02:50:17)
And for now it applies in the visual medium too. It’s fascinating how allergic I am to even a little bit of AI slop in video and images. It’s useful, it’s nice if it’s like a little component of like-
Peter Steinberger
(02:50:32)
Or even those images. Like, all these infographics and stuff, they trigger me so hard.
Lex Fridman
(02:50:38)
Yeah.
Peter Steinberger
(02:50:39)
Like, it immediately makes me think less of your content. They were novel for, like, one week, and now it just screams slop.
Lex Fridman
(02:50:50)
Yeah.
Peter Steinberger
(02:50:51)
Even if people work hard on it, using … And I have some on my blog posts, you know, from the time when I explored this new medium. But now they trigger me as well. It’s like, yeah, this just screams AI slop. I-
Lex Fridman
(02:51:06)
I don’t know what that is, but I went through that too. I was really excited by the diagrams. And then I realized, in order to remove the hallucinations from them, you actually have to do a huge amount of work. And if you’re just using it to draw better diagrams, great. And then I’m proud of the diagram. I used them for literally, kind of like you said, maybe a couple of weeks. And now I look at those, and I feel like I feel when I look at Comic Sans as a font or something like this.
Lex Fridman
(02:51:32)
It’s like, “No, this is-“
Peter Steinberger
(02:51:35)
It’s a smell.
Lex Fridman
(02:51:35)
“… this is fake. It’s fraudulent. There’s something wrong with it.” And it…
Peter Steinberger
(02:51:41)
It’s a smell.
Lex Fridman
(02:51:42)
It’s a smell.
Peter Steinberger
(02:51:44)
It’s a smell.
Lex Fridman
(02:51:44)
And it’s awesome because it reminds you that we know. There’s so much to humans that’s amazing, and we know that. We know it when we see it. And so that gives me a lot of hope about the human experience. It’s not going to be damaged by … It’s only going to be empowered, as tools, by AI. It’s not going to be damaged or limited or somehow altered to where it’s no longer human. So … Uh, I need a bathroom break. Quick pause. You mentioned that a lot of the apps might basically be made obsolete. Do you think agents will just transform the entire app market?

AI agents will replace 80% of apps

Peter Steinberger
(02:52:30)
Yeah. Uh, I noticed that on Discord, people just said what they build and what they use it for. And it’s like, why do you need MyFitnessPal when the agent already knows where I am? So it can assume that I make bad decisions when I’m at, I don’t know, Waffle House, what’s around here? Or briskets in Austin.
Lex Fridman
(02:52:57)
There’s no bad decisions around briskets, but yeah.
Peter Steinberger
(02:53:00)
No, that’s the best decision, honestly. Um-
Lex Fridman
(02:53:03)
Your agent should know that.
Peter Steinberger
(02:53:04)
But it can, like … It can modify my gym workout based on how well I slept, or whether I have stress or not. Like, it has so much more context to make even better decisions than any of these apps ever could.
Lex Fridman
(02:53:18)
Mm-hmm.
Peter Steinberger
(02:53:19)
It could show me UI just as I like. Why do I still need an app to do that? Why should I pay another subscription for something that the agent can just do now? And why do I need my Eight Sleep app to control my bed when I can tell the agent to … You know, the agent already knows where I am, so it can, like, turn off what I don’t use.
Lex Fridman
(02:53:45)
Mm-hmm.
Peter Steinberger
(02:53:47)
And I think that will translate into a whole category of apps that I will just naturally stop using, because my agent can just do it better.
Lex Fridman
(02:54:00)
I think you said somewhere that it might kill off 80% of apps.
Peter Steinberger
(02:54:04)
Yeah.
Lex Fridman
(02:54:05)
Don’t you think that’s a gigantic transformative effect on just all software development? So that means it might kill off a lot of software companies.
Peter Steinberger
(02:54:13)
Yeah. Um-
Lex Fridman
(02:54:16)
It’s a scary thing. So, like, do you think about the impact that has on the economy? The ripple effects it has on society? Transforming who builds what tooling. It empowers a lot of users to get stuff done more efficiently, to get it done cheaper.
Peter Steinberger
(02:54:41)
There are also new services that we will need, right? For example, I want my agent to have an allowance. Like, you solve problems for me, here’s like 100 bucks in order to solve problems for me. And if I tell you to order me food, maybe it uses a service. Maybe it uses something like rent-a-human to, like, just get that done for me.
Lex Fridman
(02:55:06)
Mm-hmm.
Peter Steinberger
(02:55:06)
I don’t actually care. I care about: solve my problem. There’s space for new companies to solve that well. And maybe not all apps disappear. Maybe some transform into being an API.
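The “agent allowance” idea could be sketched as a simple spending cap the agent must check before every purchase. Everything here is hypothetical: the class, the ledger format, and the error type are illustrative choices, not part of OpenClaw.

```python
class AgentAllowance:
    """Minimal sketch of an agent allowance: the agent may spend against
    a fixed budget, and every purchase is recorded so the human can
    audit it afterwards. All names here are made up for illustration."""

    def __init__(self, budget_usd):
        self.budget_usd = budget_usd
        self.ledger = []  # (description, amount) pairs

    @property
    def remaining(self):
        return self.budget_usd - sum(amount for _, amount in self.ledger)

    def spend(self, description, amount):
        # Refuse anything over budget so the agent has to escalate.
        if amount > self.remaining:
            raise PermissionError(
                f"'{description}' (${amount:.2f}) exceeds remaining "
                f"allowance of ${self.remaining:.2f}; ask the human."
            )
        self.ledger.append((description, amount))
```

The point of the design is the audit trail plus the hard cap: the agent can act autonomously on small tasks, and anything bigger forces a conversation with its owner.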
Lex Fridman
(02:55:21)
So, basically, apps that rapidly transform into being agent-facing. So there’s a real opportunity for, like, Uber Eats, which we just used earlier today. It’s companies like this, of which there are many. Who gets there fastest to being able to interact with OpenClaw in a way that’s the most natural, the easiest?
Peter Steinberger
(02:55:50)
Yeah. And also, apps will become APIs whether they want it or not. Because my agent can figure out how to use my phone. I mean, on the other side it’s a little more tricky. On Android, people already do that. And then it will just click the Order Uber for Me button for me. Or maybe another service. Or maybe there’s an API I can call so it’s faster. Uh, I think that’s a space where we’re just beginning to even understand what that means. And again, that was not something I thought of. It’s something I discovered as people use this, and we are still so early. But yeah, I think data is very important. Like, apps that can give me data, but that also can be an API. Why do I need a Sonos app anymore when I can …
Peter Steinberger
(02:56:44)
when my agent can talk to the Sonos speakers directly. Like my cameras: there’s a crappy app, but they have an API, so my agent uses the API now.
Lex Fridman
(02:56:57)
So it’s gonna force a lot of companies to shift focus. That’s kind of what the internet did, right? You have to rapidly rethink, reconfigure what you’re selling, how you’re making money.
Peter Steinberger
(02:57:10)
Yeah, and some companies really don’t like that. For example, there’s no CLI for Google, so I had to do everything myself and build GAWK. That’s like a CLI for Google. And at the end, as a user, they have to give me the emails, because otherwise I cannot use their product. If I’m a company and I try to get Google data, Gmail, there’s a whole complicated process, to the point where sometimes startups acquire startups that went through the process, so they don’t have to work with Google for half a year to be certified to be able to access Gmail. But my agent can access Gmail because I can just connect to it. It’s still crappy because I need to, like, go through Google’s developer jungle to get a key, and that’s still annoying.
Peter Steinberger
(02:58:09)
But they cannot prevent me. And worst case, my agent just clicks on the, on the website and gets the data out that way.
Lex Fridman
(02:58:17)
Through browsers?
Peter Steinberger
(02:58:18)
Yeah. I mean, I watch my agent happily click the I’m-not-a-robot button. And there’s this whole… That’s gonna get more heated. You see companies like Cloudflare that try to prevent bot access. And in some ways, that’s useful against scraping. But in other ways, if I’m a personal user, I want that. You know, sometimes I use Codex and I read an article about modern React patterns, and it’s like a Medium article. I paste it in and the agent can’t read it because they block it. So then I have to copy-paste the actual text. Or in the future, I’ll learn that maybe I don’t click on Medium because it’s annoying, and I use other websites that actually are agent-friendly.
Peter Steinberger
(02:59:12)
So, uh-
Lex Fridman
(02:59:13)
There are gonna be a lot of powerful, rich companies fighting back. So it’s really interesting. You’re the catalyst, the leader, and happen to be at the center of this kind of revolution that’s gonna completely change how we interact with services, with the web. And so there are companies like Google that are gonna push back. I mean, every major company you can think of is gonna push back.
Peter Steinberger
(02:59:39)
Yeah, even search. Um, I now use, I think, Perplexity or Brave as providers, because Google really doesn’t make it easy to use Google without Google. I’m not sure if that’s the right strategy, but I’m not Google.
Lex Fridman
(02:59:58)
Yeah, there’s a nice balance from a big-company perspective, ’cause if you push back too much for too long, you become Blockbuster and you lose everything to the Netflixes of the world. But some pushback is probably good during a revolution, to see.
Peter Steinberger
(03:00:11)
Yeah. But you see that, that… Like, this is something that the people want.
Lex Fridman
(03:00:14)
Right.
Peter Steinberger
(03:00:14)
So-
Lex Fridman
(03:00:15)
Yes.
Peter Steinberger
(03:00:16)
If I’m on the go, I don’t wanna open a calendar app. I just wanna tell my agent, “Hey, remind me about this dinner tomorrow night,” and maybe invite two of my friends, and then maybe send a WhatsApp message to my friend. I don’t want or need to open apps for that. I think we’ve passed that age, and now everything is, like, much more connected and fluid, whether those companies want it or not. And I think the right companies will find ways to jump on the train, and other companies will perish.

Will AI replace programmers?

Lex Fridman
(03:00:55)
You’ve got to listen to what the people want. We talked about programming quite a bit, and a lot of folks who are developers are really worried about their jobs, about the future of programming. Do you think AI replaces programmers completely? Human programmers?
Peter Steinberger
(03:01:11)
I mean, we’re definitely going in that direction. Programming is just a part of building products. So maybe AI does replace programmers eventually. But there’s so much more to that art. Like, what do you actually wanna build? How should it feel? How’s the architecture? I don’t think agents will replace all of that. The actual art of programming will stay there, but it’s gonna be like knitting. You know? Like, people do that because they like it, not because it makes any sense. I read this article this morning from someone saying that it’s okay to mourn our craft. And I can…
Peter Steinberger
(03:02:04)
A part of me very strongly resonates with that, because in my past I spent a lot of time tinkering, just being really deep in the flow and just, like, cranking out code and finding really beautiful solutions. And yes, in a way it’s sad, because that will go away. I also get a lot of joy out of just writing code and being really deep in my thoughts and forgetting time and space and just being in this beautiful state of flow. But you can get the same state of flow… I get a similar state of flow by working with agents and building and thinking really hard about problems. It is different, but… And it’s okay to mourn it, but I mean, that’s not something we can fight. Like, there is… the world for a long time had a…
Peter Steinberger
(03:03:06)
there was a lack of intelligence, if you see it like that, of people building things, and that’s why salaries of software developers reached stupidly high amounts, and that will go away. There will still be a lot of demand for people who understand how to build things. It’s just that all this tokenized intelligence enables people to do a lot more, a lot faster. And it will be even faster and even more, because those things are continuously improving. We had similar things when… I mean, it’s probably not a perfect analogy, but when we created the steam engine and built all these factories and replaced a lot of manual labor, and then people revolted and broke the machines.
Peter Steinberger
(03:04:04)
Um, I can relate to that: if you very deeply identify as a programmer, it’s scary and it’s threatening, because what you like and what you’re really good at is now being done by an entity, soulless or not. But I don’t think you’re just a programmer. That’s a very limiting view of your craft. You are still a builder.
Lex Fridman
(03:04:40)
Yeah, there are a couple of things I want to say. So one is, as you’re articulating this beautifully, I’m realizing I never thought the thing I love doing would be the thing that gets replaced. You hear these stories, like you said, about the steam engine. I’ve spent so many, I don’t know, maybe thousands of hours poring over code and putting my heart and soul into it, and, like, some of my most painful and happiest moments were alone behind… I was an Emacs person for a long time. Man, Emacs. And then there’s an identity and there’s meaning, and there’s… Like, when I walk about the world, I don’t say it out loud, but I think of myself as a programmer. And to have that in a matter of months…
Lex Fridman
(03:05:31)
I mean, like you mentioned, April to November, it really is a leap that happened, a shift that’s happening. To have that completely replaced is painful. It’s truly painful. But I also think programmers, builders more broadly… what is the act of programming? I think programmers are generally best equipped at this moment in history to learn the language, to empathize with agents, to learn the language of agents. To feel the CLI.
Peter Steinberger
(03:06:10)
Yeah.
Lex Fridman
(03:06:11)
Like, like to understand what is the thing you need, you the agent, need to do this task the best?
Peter Steinberger
(03:06:21)
I think at some point it’s just gonna be called coding again, and it’s just gonna be the new normal.
Lex Fridman
(03:06:25)
Yeah.
Peter Steinberger
(03:06:25)
And yet, while I don’t write the code, I very much feel like I’m in the driver’s seat and I am, I am writing the code, you know? It’s just-
Lex Fridman
(03:06:37)
You’ll still be a programmer. It’s just the activity of a programmer is, is different.
Peter Steinberger
(03:06:41)
Yeah, and on X, the bubble, I mean, is mostly positive. On Mastodon and Bluesky, I don’t… I also use them less, because oftentimes I got attacked for my blog posts. I had stronger reactions in the past; now I can sympathize with those people more, ’cause in a way I get it. In a way, I also don’t get it, because it’s very unfair to grab onto the person you see right now and unload all your fear and hate. It’s gonna be a change and it’s gonna be challenging, but it’s also… I don’t know. I find it incredibly fun and gratifying. And I can use the new time to focus on many more details. I think the level of expectation of what we build is also rising, because the default is now so much easier, so software is changing in many ways.
Peter Steinberger
(03:07:45)
There’s gonna be a lot more. And then you have all these people who are screaming, “Oh yeah, but what about the water?” You know? Like, I did a conference talk in Italy about the state of AI, and my whole motivation was to push people away from seeing themselves as iOS developers anymore. You’re now a builder, and you can use your skills in many more ways. Also because apps are slowly going away. People didn’t like that. Like, a lot of people didn’t like what I had to say. And I don’t think I was being hyperbolic, I was just like, “This is how I see the future.” Maybe this is not how it’s going to be, but I’m pretty sure a version of that will happen.
Peter Steinberger
(03:08:30)
And the first question I got was, “Yeah, but what about the insane water use of data centers?” But then you actually sit down and do the maths, and for most people, if you just skip one burger per month, that compensates for the CO2 output, or, like, the water use, of an equivalent amount of tokens. I mean, the maths is tricky, and it depends: if you add pre-training, then maybe it’s more than just one patty… but it’s not off by a factor of 100, you know? Or, like, golf still uses way more water than all data centers together. So do you also hate people who play golf? Those people grab onto anything they think is bad about AI without seeing the things that might be good about AI.
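The burger-versus-tokens comparison can be made concrete with back-of-envelope numbers. The figures below are rough, loudly-assumed public estimates (water footprints vary enormously by source and methodology, and this ignores pre-training), but they illustrate why the ratio is nowhere near a factor of 100:

```python
# Assumed: a ~110 g beef patty plus bun carries roughly a 2,400 L water
# footprint (beef is often estimated around 15,000 L per kg).
BURGER_WATER_L = 2_400

# Assumed: a deliberately high-end estimate of ~0.5 L of water per LLM
# query (inference only, including data-center cooling).
WATER_PER_QUERY_L = 0.5

# How many queries one skipped burger "buys" under these assumptions.
queries_per_burger = BURGER_WATER_L / WATER_PER_QUERY_L
print(f"One skipped burger ~ {queries_per_burger:,.0f} queries' worth of water")
```

Even if both assumed figures are off by a few times in opposite directions, one burger a month still covers thousands of queries, which is the point being made in the conversation.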
Lex Fridman
(03:09:23)
Mm-hmm.
Peter Steinberger
(03:09:24)
And I’m not saying everything’s good. It’s certainly gonna be a very transformative technology for our society.
Lex Fridman
(03:09:32)
There’s to steel man the, the criticism in general, I do wanna say in my experience with Silicon Valley there’s a bit of a bubble in the sense that there’s a kind of excitement and an over-focus about the positive that the technology can bring.
Peter Steinberger
(03:09:54)
Yeah.
Lex Fridman
(03:09:55)
And… which is great. It’s great to focus on… N- not to, not to be paralyzed by fear and fear-mongering and so on, but there’s also within that excitement, and within everybody talking just to each other, there’s a dismissal of the basic human experience across the United States and the Midwest, across the world. Including the programmers we mentioned, including all the people that are gonna lose their jobs, including the s- the measurable pain and suffering that happens at the short-term scale when there’s change of any kind. Especially large-scale transformative change that we’re about to face if what we’re talking about will materialize. And so to ha- having a bit of that humility and awareness about the tools you’re building, they’re going to cause pain.
Lex Fridman
(03:10:43)
They will long term hopefully bring about a better world, and even more opportunities-
Peter Steinberger
(03:10:48)
Yeah
Lex Fridman
(03:10:48)
… and even more awesomeness. But having that kind of like quiet moment often of, of respect for the pain that is going to be felt. And so not, not enough of that is, I think, done, so it’s, it’s good to have a bit of that.
Peter Steinberger
(03:11:07)
And then I also have to weigh against that some of the emails I got where people told me they have a small business and they’ve been struggling. And OpenClaw helped them automate a few of the tedious tasks, from collecting invoices to answering customer emails, and that freed them up and, like, brought a bit more joy into their life.
Lex Fridman
(03:11:30)
Mm-hmm.
Peter Steinberger
(03:11:31)
Or some emails where they told me that OpenClaw helped their disabled daughter. That she’s now empowered and feels she can do much more than before. Which is amazing, right? Because you could do that before as well. The technology was there. I didn’t invent a whole new thing, but I made it a lot easier and more accessible, and that showed people possibilities they previously wouldn’t see. And now they apply it for good.
Lex Fridman
(03:12:02)
Mm-hmm.
Peter Steinberger
(03:12:03)
Or also the fact that, yes, I suggest the latest and best models, but you can totally run this on free models. You can run this locally. You can run this on Kimi or other models that are way more accessible price-wise, and still have a very powerful system that might otherwise not be possible. Because other things, like, I don’t know, Anthropic’s Cowork, are locked into their space. So it’s not all black and white. I got a lot of emails that were heartwarming and amazing. And, I don’t know, it just made me really happy.
Lex Fridman
(03:12:48)
Yeah, it has brought joy into a lot of people’s lives. Not just programmers. A lot of people’s lives. It’s beautiful to see. What gives you hope about this whole thing we have going on with human civilization?

Future of OpenClaw community

Peter Steinberger
(03:13:03)
I mean, I inspired so many people. There’s this whole builder vibe again. People are now using AI in a more playful way and are discovering what it can do and how it can, like, help them in their life. And creating new places that are just sprawling with creativity. I don’t know. Like, there’s ClawCoin in Vienna. There’s like 500 people. And there’s such a high percentage of people who want to present, which is really surprising to me, because usually it’s quite hard to find people who want to talk about what they built. And now there’s an abundance. So that gives me hope that we can figure shit out.
Lex Fridman
(03:14:00)
And it makes it accessible to basically everybody.
Peter Steinberger
(03:14:04)
Yeah.
Lex Fridman
(03:14:05)
Just imagine all these people building, especially as you make it simpler and simpler, more secure. It’s like anybody who has ideas and can express those ideas in language can build. That’s crazy.
Peter Steinberger
(03:14:22)
Yeah, that’s ultimately power to the people, and one of the beautiful things that come out of AI. Not just a slop generator.
Lex Fridman
(03:14:36)
Well, Mr. Clawfather. I just realized when I said that in the beginning, I violated two trademarks, because there’s also the Godfather. I’m getting sued by everybody. You’re a wonderful human being. You’ve created something really special: a special community, a special product, a special set of ideas. Plus the humor, the good vibes, the inspiration of all these people building, the excitement to build. So I’m truly grateful for everything you’ve been doing and for who you are, and for sitting down to talk with me today. Thank you, brother.
Peter Steinberger
(03:15:14)
Thanks for giving me the chance to tell my story.
Lex Fridman
(03:15:17)
Thanks for listening to this conversation with Peter Steinberger. To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback and so on. And now let me leave you with some words from Voltaire. “With great power comes great responsibility.” Thank you for listening, and hope to see you next time.

Transcript for GSP teaches Lex Fridman how to street fight

This is a transcript of “GSP teaches Lex Fridman how to street fight”.
The timestamps in the transcript are clickable links
that take you directly to that point in
the main video. Please note that the transcript is
human generated, and may have errors.

Georges St-Pierre
(00:00:00)
In a street fight, I would rather fight Francis Ngannou than fight Bas Rutten. In a street fight.
Lex Fridman
(00:00:06)
Let me tell you first that I’ve been around. I’ve been a bouncer for many, many years. Bang! Bang! Bang! It’s a street fight. Everybody underestimates the kick in the groin. It’s boom, that’s the first thing to do. I follow up, bang, bang, bang. Right away after that, danga-da-danga-da-dang. See what I’m doing? Boom! That’s the left elbow right there.
Georges St-Pierre
(00:00:33)
Yeah, so very often people ask me the difference between a street fight and a fight in mixed martial arts. The difference is, in the street there is no referee. And there’s an instigator, and there is the other person. The best thing in a street fight is always to not be the instigator, because you have the element of surprise. So if you’re in a heated argument with someone and you feel that you’re potentially going to be in a fight, the best thing to do is to never show your center line, to always go on the side and put your hands up like this. Now, that’s one of the best things to do. It’s a self-defense tactic that is used all around the world. Because from there, the distance that I have to travel to cause a lot of damage to him is very minimal.
Georges St-Pierre
(00:01:27)
You know, it’s very short. I can go boom. I can go boom. And I’m protected, because if he ever tried to do anything, my hands are already up, and I’m ready to respond to any aggression. So the first thing, if you’re in an argument and you feel the heat rising, is to hit first. You don’t want to fight, but you want to hit first. You want to hit first, you know? So it’s either boom, hit first, depending on the situation. If you’re someone who is much less physically strong than the aggressor, you can use the eyes, the genitals, the neck, you know? And then you can leave the scene. However, if I’m like this, the minute he touches me, he declares war. Now I can go and perform a self-defense move.
Lex Fridman
(00:02:25)
So striking, not wrestling?
Georges St-Pierre
(00:02:27)
Yes. It’s always strike first and leave the scene. If you’re, for example, a kid or someone who doesn’t have the physical strength of your aggressor. Of course, I’m a UFC champion, so that does not apply to me. But the key is, tactically, we always use the element of surprise, and when you strike, strike first. And strike to cause as much damage as possible. The eyes, you can do the neck, you can do the genitals. And then after, you can leave the scene. That’s the goal of having the element of surprise.
Lex Fridman
(00:03:08)
Okay, you were talking about knives. What about if weapons are involved, run faster?
Georges St-Pierre
(00:03:13)
So weapons are very important. If someone has a weapon and attacks me for my money, I give him my money even if I’m Georges St-Pierre and I’m a UFC champion. However, if someone puts a knife to my throat here and he’s telling me to go in the trunk… now, I don’t want to go in the trunk because I know it’s a bad ending. So things that I can do first is always make sure that I try to keep my hands as close as possible to the weapon. And I try to be at as close range as possible. I can act like I want to—
Lex Fridman
(00:03:50)
Look scared?
Georges St-Pierre
(00:03:51)
Yeah. “Please, please, please,” boom. See, here I use my body, and then I can go and break, you know? So the idea is to use your entire body to deflect the weapon. So if the weapon is like this and the blade is coming out this way, I use the element of surprise. You see, I use my body, not only grabbing him like this, so if he tries to come back with the knife, it’s solid, and I can go and break. If the blade is pointing the other side, it’s something here, here, and here. Here I can use my body always to smother the weapon and—
Lex Fridman
(00:04:29)
Controlling the wrist, yeah. But if it’s out here…
unassigned
(00:04:32)
It’s through his clothes.
Georges St-Pierre
(00:04:32)
If it’s out here, and yes, exactly. There’s too much distance. You want to make sure you get close to the weapon because that’s what can cause the most damage. This is very important. There are other situations. Let’s say you’re a kid or someone comes to grab you by the body. What I can do is grab the head and put my fingers inside the eyes; that will make my opponent release me immediately. Then I can go and leave, you know?
Lex Fridman
(00:05:06)
Like thumb in? Like thumb?
Georges St-Pierre
(00:05:08)
Yeah, thumb in the eyes. You push in the eyes.
Lex Fridman
(00:05:11)
Blind them.
Georges St-Pierre
(00:05:12)
There are no rules. The eyes are always my favorite choice to go for because if you cannot see, it’s very hard to fight. And normally the reflex for most people, when they can’t see, they grab their eyes, you know? So it releases the grip.
Lex Fridman
(00:05:32)
I’m now going to ask you about the tie, because I think you’re still wrong about that. I think it’s possible to use it as a… same as for a head snatch, like this kind of situation, to choke.
Georges St-Pierre
(00:05:45)
I think it could be an advantage if it’s a fake tie. If it’s something that can go, like it can—
Lex Fridman
(00:05:53)
Clip off?
Georges St-Pierre
(00:05:53)
Like a tail of a reptile that can go. So if you try to pull my tie, it comes out, and now I know I get a head start.
Lex Fridman
(00:06:02)
Element of surprise.
unassigned
(00:06:03)
Exactly, it’s all about the element of surprise. You want to strike first; the element of surprise in the street.
Lex Fridman
(00:06:08)
Georges, thank you so much for talking today.
Georges St-Pierre
(00:06:11)
My pleasure.
Lex Fridman
(00:06:11)
Thank you for looking sharp.
Georges St-Pierre
(00:06:13)
Man in black, baby!
Lex Fridman
(00:06:14)
Man in black.

Transcript for 1984 by George Orwell | Lex Fridman

This is a transcript of “1984 by George Orwell | Lex Fridman”.
Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the video:

Intro

Lex Fridman
(00:00:00)
“There was truth, and there was untruth. And if you clung to the truth, even against the whole world, you were not mad.” 1984 by George Orwell is one of the most impactful books ever written. It has been widely used and misused in political discourse by all kinds of ideologues. It introduced into that discourse terms like Big Brother, thoughtcrime, Doublethink, Newspeak, Thought Police, and, strangely enough, Orwellian as a synonym for the very thing that the author, Orwell, was against. It’s been translated into over 65 languages, has sold over 30 million copies, and has been banned in many countries, especially authoritarian regimes. It was banned under Stalin, and as recently as 2022 in Belarus. In this video, I’ll give a quick summary with spoilers and a few takeaways.

World of 1984

Lex Fridman
(00:00:55)
I’d like to try to make it somewhat interesting to people who both have and have not read the book. Let’s see how it goes. The world in the book 1984 is a dystopian future society, a nation, maybe you can say superstate named Oceania. It’s fully controlled by a totalitarian political party called Ingsoc. It’s led by Big Brother who, as we might discuss, may or may not be a real person. He might just be a symbol used by the party. The party wants only to increase its power, also something we might talk about. It uses technology, telescreens, for mass surveillance. It’s creating a new language called Newspeak, which removes words from English that could lead to rebellion.
Lex Fridman
(00:01:38)
It uses Doublethink to control thought by, perhaps you could say, forcing you to hold contradictory beliefs and accept them as true. If not, the Thought Police arrest you for committing a thoughtcrime. Examples of Doublethink are “War is peace,” “Freedom is slavery,” and “Ignorance is strength.” And finally, the party constantly rewrites history. As the quote goes, “Who controls the past controls the future. Who controls the present controls the past.” There are four ministries. The Ministry of Truth is responsible for propaganda and, like I said, rewriting history. The Ministry of Love is responsible for brainwashing people through torture. The Ministry of Plenty is responsible for rationing food, supplies, and goods.
Lex Fridman
(00:02:27)
And the Ministry of Peace, of course, is responsible for maintaining a constant state of war. Society is divided into three levels: the Inner Party, the Outer Party, and the Proles. The term stands for, I guess, the proletariat; it’s the working class. The Inner Party is tiny. The Outer Party is a little bit bigger, and the majority of the people—I forget what the percentage is, maybe 80%—are the Proles, the working class. There are several key characters. Winston, the main character, is a low-ranking member of Ingsoc. He works at the Ministry of Truth, where he rewrites history. Julia is a girl who Winston falls in love with, and she with him.
Lex Fridman
(00:03:11)
They have sex, and this is maybe a good place to mention that love and passionate sex are forbidden in this society. “Goodsex,” I think, is a term in Newspeak; it’s the kind of sex that leads to procreation, which is the only kind allowed and the only kind that’s “good.” O’Brien is another central character. He’s a member of the Inner Party who convinces Winston that he’s part of the Brotherhood, which is a lie, and he is eventually the man who tortures Winston and breaks his mind, breaks his heart. Big Brother and Emmanuel Goldstein are symbolic characters whom we never actually get to meet. They may or may not exist.

Love

Lex Fridman
(00:03:59)
Big Brother is the head of the party Ingsoc, and Emmanuel Goldstein is the leader of the so-called Brotherhood, which is this mysterious group that lurks in the shadows and works to overthrow the party. Again, they may or may not exist. We’ll maybe talk about the importance of that in a totalitarian state. So, a few key takeaways. I’ll try to do my best—I have disparate notes that I took for myself—to integrate them into some cohesive thoughts. Part of the reason I wanted to do this is that while I have read 1984 many times, and many of the books on the reading list I’ve read many times, I haven’t often really concretized my thoughts about them.
Lex Fridman
(00:04:52)
I just take the journey and let the thoughts wander around in the background as I live my life. I wanted to put them on paper and maybe share them with others to see what they think of my concrete takeaways from the book, if I could try to convert them into words. So the first one for me, especially later in life as I’ve been reading this book, is that when everything else, or most things that make you human, are taken away by a totalitarian state, the last thing that’s left, which is the most difficult to take away, is love. Love for other human beings, love for life itself. That’s the little flame from which hope springs. The key revolutionary act is the act of love.
Lex Fridman
(00:05:49)
So when the ability to speak is taken away, when the ability to think rational thoughts is taken away, the last thing that’s left, and the thing that ultimately gives hope, is love. That’s a big takeaway for me. The note that Julia gives to Winston reading “I love you” is the kind of revolutionary act that leads to a society beyond the one they exist in. I think a lot of the book has an interesting hypocrisy to it, where the main character, Winston, is almost in an animalistic way obsessed with destroying the state in rebellion and revolution. But I think love is the thing that allows you to believe in a place beyond the state, in believing that you can build something better, versus just destroying the thing you’re in.
Lex Fridman
(00:06:51)
I think you have to be careful as a revolutionary not to obsess 100% with destruction. Because beyond destruction, there could be chaos that leads to something much worse. I think love is the basic human thing that connects all of us, the messy thing that connects all of us, that allows you to build a better society after the totalitarian one is overthrown. What else did I want to say? There’s an interesting tension there between love and lust. I think there’s a quote that pure love or pure lust was impossible or forbidden. “Pure” here meaning unadulterated, uncensored intensity of feeling, maybe intimacy.
Lex Fridman
(00:07:44)
And there was an interesting question raised by the book, both by Winston and Julia: what is ultimately the most powerful act of rebellion? Is it between us humans when everything is forbidden? Is it animalistic like sex? Just lust for another human? Or is it love? The kind of love you have for a romantic partner, but even love for family and love for friends. I don’t know. I think the book almost claims that it is sex, but I think what the book also shows is that if sex is your manifestation of rebellion, that ultimately leads to something that doesn’t last. That ultimately leads to a focus on destruction versus building beyond the horizon when the state falls. So, some quotes from Winston on this.
Lex Fridman
(00:08:42)
“The more men you’ve had sex with…” Julia admitted to having sex with quite a lot of people. He says, “The more men you’ve had sex with, the more I love you. I hate purity. I hate virtue. I want everyone to be corrupt to the bone.” This kind of rubbed me the wrong way because, again, this seems to be obsessed with the hatred towards the state versus a longing and a hope—which I think hope is really important here—a hope for a better future beyond the state. Again, another quote from the book: “Their embrace had been a battle, the climax a victory. It was a blow struck against the Party. It was a political act.” So there, again, I think sex is seen as a political act of rebellion. I think that’s not the deeply human thing here.
Lex Fridman
(00:09:31)
The deeply human thing is, again, the act of love. It’s a source of hope; it’s the catalyst for building a better future beyond the revolution. An interesting side note here—and there could be a million interesting side notes, and I’m desperately trying not to go on a million tangents, to hold myself together and stay focused—is on family. There’s all kinds of love, and I think family love is a really powerful bond that connects us, and that’s one of the things that totalitarian states really go after.
Lex Fridman
(00:10:06)
And I should mention, I’m loosely using the terms authoritarian and totalitarian here. To me, authoritarian means there’s a government with complete centralized control of political affairs. A totalitarian state is beyond that; it is complete control of not just politics but also social, economic, everything. Nazi Germany is an example of that, I think, where there’s just complete control of every single thing, from the war effort to social interactions, the rules that govern social interaction, the press, all that kind of stuff.
Lex Fridman
(00:10:57)
So I think this book is more about, at least in my definition of the term, totalitarianism. Anyway, as I was saying about family, I think the way they destroy family is, one, of course, with your romantic partner, by forbidding passion—passionate sex, but really just passion and longing for another human being in that romantic way. And they also reward and encourage children at a young age; they indoctrinate them to turn their parents in for thoughtcrime, whether real or not, which of course is a silly notion because there’s no notion of truth. You can just accuse anyone of anything, and they’re guilty just by existing. So that’s a way to attack the family.
Lex Fridman
(00:11:43)
And I should also have mentioned, on the topic of love, that I think the goal of the Party, the final destination as described by O’Brien through the process of torture, is to break your mind, heart, and soul completely so that the only love you can have—and it could be felt as a pure love—is for Big Brother. This is the kind of thing you see in North Korea, where the only love you’re allowed to have, the remaining inklings of feeling that might still exist in you, you can channel not towards family, romantic partners, or friends, but only towards this leader, this godlike, messianic figure. In this case, one who may or may not exist.

Hate

Lex Fridman
(00:12:32)
In all cases, that figure, while there is a human associated with it, is really much bigger than the human, and that’s the only love you’re allowed to have. So the other takeaway I have is on the topic of hate. I think all humans have the capacity, almost an animalistic craving, for hate of the “other,” the enemy. Whether it’s individuals like Emmanuel Goldstein or nations like Eurasia and Eastasia—the two other superstates described in this book—they’re constantly at war with each other. Again, the fascinating thing about the way this book is written is that you don’t know if Eurasia or Eastasia even exist. You really don’t know what is true beyond the local interactions of the main character.
Lex Fridman
(00:13:28)
And that, I think, is the point. When you don’t really know, there’s no steady footing on which to construct a worldview from which you can have hope for a better future. This animalistic craving for hate, especially when we’re in crowds, is most powerfully illustrated in the Two Minutes Hate practiced by that society. The quote is, “The horrible thing about the Two Minutes Hate was not that one was obliged to act a part, but that it was impossible to avoid joining in. Within thirty seconds any pretence was always unnecessary.
Lex Fridman
(00:14:13)
A hideous ecstasy of fear and vindictiveness, a desire to kill, to torture, to smash faces in with a sledge-hammer, seemed to flow through the whole group of people like an electric current, turning one even against one’s will into a grimacing, screaming lunatic. And yet the rage that one felt was an abstract, undirected emotion which could be switched from one object to another like the flame of a blowlamp.” That’s the point: you get the crowd together, and you get them to hate Goldstein or Eurasia or Eastasia. You get them to hate anything. And that feeling, that drug, that mass hypnosis, can be directed by the state in any direction.
Lex Fridman
(00:15:02)
And because you have complete control of history, you can direct it on a day-by-day basis towards any target. As long as the hate is catalyzed through these kinds of rituals, it can overpower the individualistic feeling of love we have for each other. So that hate is a more animalistic desire. I don’t know what to make of it. And of course, it’s also important to say that this book was intended originally by Orwell as a satire, although a satire that has quite a lot of torture at the end and doesn’t seem to have much humor. But I think if you read it as a satire, that’s the best way to understand its relevance in our society today.
Lex Fridman
(00:15:53)
Because a lot of things, like the Two Minutes Hate, are almost a caricature of what hate looks like in a mass gathering. But if you take it as a caricature, it can reveal some of the elements that already exist in human nature that we should be very cautious about. It reveals the very thing that, if not monitored by ourselves, can result in a slippery slope that leads to tribalism, the destruction of other groups, and then control of the collective intelligence of our species through a totalitarian state. I think there are elements of this illustrated in social media today, though I don’t want to overstate it.
Lex Fridman
(00:16:44)
I think just like comparing things to Hitler, comparing things to 1984 is a reach in most cases. But social media does reveal this kind of mass hysteria, this capacity of humans to be outraged based on tribalism. So we have to understand it. We have to resist giving in to it on the individual level. And I do believe we have the responsibility to create technology that helps us resist it, that incentivizes us not to be cruel to each other just because the people in whatever tribe we define ourselves in are being cruel to a particular person or group. Another takeaway I have is about power. Ingsoc, the totalitarian state, wants only one thing, and that is power. Power is both the means and the end. Absolute power.

Power

Lex Fridman
(00:17:35)
As O’Brien describes in the torture part of the book: “The real power, the power we have to fight for night and day, is not power over things, but power over men. Power is inflicting pain and humiliation. Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing. Power is not a means, it is an end. One does not establish a dictatorship in order to safeguard a revolution; one makes the revolution in order to establish a dictatorship. The object of persecution is persecution. The object of torture is torture. The object of power is power.” This, of course, is another aspect of human nature: the will to power and the tendency of that power to corrupt.
Lex Fridman
(00:18:36)
O’Brien also says, “The weariness of the cell is the vigour of the organism.” Through the torture and breaking of the individual: the individual doesn’t matter. What matters is the organism. There have been a lot of brilliant comments throughout social media and on Reddit—I just want to highlight something about this because I had the exact same feeling as I was rereading it. There’s a comment from a Reddit user whose name is BraveSky6764.
Lex Fridman
(00:19:13)
He said the conversation between Lex and Michael Levin, who is a brilliant biologist and engineer, came to mind when O’Brien made an analogy to an organism which survives even as the individual cells pass away, and the great purges are analogous to the cutting of a fingernail. If you see society as an organism—which I think is the way a totalitarian state sees it—then the destruction of a large percentage of that society, the murder, the torture, and all kinds of atrocities and genocide become “justifiable” as long as the organism flourishes. That’s how you get to the ideas Stalin had: it’s okay to break a few eggs to make an omelet. This devaluation of a human being as having fundamental importance in a society…
Lex Fridman
(00:20:23)
is a slippery slope into atrocities. It’s not just deeply unethical by our understanding of morals and ethics; it is also very unproductive. It destroys the human spirit, and the human spirit is essential for building a great society of constant progress. I think that’s also one of the other messages of the book: it’s about utopia—that totalitarianism results when you chase perfection, when you present this idea of utopia. There is no utopia; there is no perfect society. I think, at least for me, that’s the takeaway. I think the optimal state of being, for an individual and for a state, is constant change and constant turnover.
Lex Fridman
(00:21:09)
In the case of a state, it’s a constant turnover of leaders and ideas, always hopefully making progress towards a better world. But it’s always going to be messy. Perfection only exists in an oppressive state. Perfection only exists when you remove the basic humanity of the individuals that make up that state, when you destroy the human spirit or suppress all freedoms. Freedom is going to be messy and chaotic, but that freedom, ultimately, in the long arc of history, is going to create progress.
Lex Fridman
(00:21:48)
So yes, as the Redditor BraveSky6764 says, that does give you a perspective of a biological system made of living organisms. Each one of us is made up of living organisms, and we take for granted all the “atrocities” happening there; we don’t seem to give a damn. I think that’s a good metaphor. If you want to put yourself in the mind of the Inner Party, of Big Brother, or the people in power, I think most, if not all of them, see themselves as doing good for society. They are able to justify things the way we justify the death of different cells in our body.
Lex Fridman
(00:22:35)
You don’t even think of them as worthy of consideration. You don’t think of them as living beings having the same value as you. That’s one of the really powerful ideas at the founding of the United States: that all men are created equal, that there’s an equal worth to a human being no matter who they are. That idea, as flawed as its implementations have been, is a really powerful and non-trivial idea, and it resists the drug of totalitarianism and power. I do believe that on the topic of power and politics, 1984 has been misused by political ideologues.
Lex Fridman
(00:23:23)
I’ve seen it, for example, when conservatives in the United States have used 1984 to call left-wing policies “Orwellian.” I think that’s an overstatement, of course used for dramatic effect, but it should at least be said that Orwell was a democratic socialist. 1984 is not a criticism of socialism; it’s a criticism of totalitarianism. I think the point is a warning that all political ideologies can succumb to the allure of power and be corrupted by it. People on both the left and the right in the United States can be corrupted by power. This one-way criticism of policies as Orwellian is a convenient shorthand, but the reality is all politicians are capable of…
Lex Fridman
(00:24:28)
creating an Orwellian world. And I think one of the things that is highlighted in the book very well is the hypocrisy of Winston. When O’Brien asks Winston what he’s willing to do to overthrow the Party, Winston admits he is willing to commit atrocities. He’s willing to do evil unto children, to commit murder, anything. This is a powerful illustration that both the totalitarian state and a blind, immoral rebellion against it can be evil. This is where I return to love as the thing that carries hope for a world beyond this battle for freedom. You have to have that.
Lex Fridman
(00:25:23)
Otherwise, the Orwellian state and the resistance to an Orwellian state can both destroy basic human rights and freedoms. I think in the character of Winston, that’s illustrated well. And I should also mention that there’s interesting writing… Now, I’m obviously not a scholar of Orwell, and there have been a lot of books written that I should probably recommend somewhere. There are great books written on 1984, on Orwell, on the historical context in which he was operating, and all that kind of stuff. But as far as I can see, with 1984 and in his own politics, Orwell was not espousing the complete opposite of totalitarianism.

Orwell

Lex Fridman
(00:26:06)
There is, again, democratic socialism—that there is value to the connection between human beings, that you have to lean on each other, help each other, that society is fundamentally more a cohesive collective than a completely disparate set of sovereign individuals. It’s both. And I think he was torn about that idea, because in order to resist a totalitarian state you have to fight for those basic individual freedoms. But at the same time, a well-functioning society allows for that freedom to manifest as collaboration. And so that’s the difficult challenge there.
Lex Fridman
(00:26:52)
Again, that’s why he was a democratic socialist, and the criticism in the book is of totalitarianism: a centralized state that controls speech, thought, the press, and all the basic human freedoms. Controls truth. And I think a lot of people would ask the question, and I hear this tossed around: “Do we live in the world of 1984 today?” And I think that’s used as a shorthand to sort of criticize different policies and different governments. I generally don’t like the use of that kind of language because it’s basically crying wolf. If everything is 1984, if everybody is Hitler, then you’re not going to…
Lex Fridman
(00:27:35)
There’s no way to properly normalize the discussion of the lesser of two evils, which is ultimately what democracy is about. You have a collection of things you’re picking from. They all kind of suck, but you want to pick the one that sucks the least. That’s human society, you know? That’s human nature. It’s messy. And so I don’t think we live in a 1984 state, but there are a lot of elements that this book reveals about human nature and about the operation of a totalitarian state that we should be on the watch for: surveillance, a state of doublethink, controlling language, being in a constant state of war as a way to control the population and the flow of resources.
Lex Fridman
(00:28:24)
All those things are elements of the toolkit for establishing complete control of a populace. And the moment you notice those elements, it’s our job to resist them. So I think the point is we have to be vigilant about the slippery slope of the will to power in centralized institutions. Another thing I want to mention is that I think a lot of people rightfully compliment Orwell for predicting some elements of future society, especially technological capabilities, for example, telescreens used by the state to control the population. Maybe I can make a few comments on technology in general.

Technology

Lex Fridman
(00:29:10)
People who criticize technology will often use 1984 as an example that technology is a tool for a totalitarian state. It’s a way they can achieve full control, and we should be extremely cautious of it. And I think there’s a kernel of truth to that. But it’s not obvious to me that on the whole, technology is a tool for totalitarian control. I think it is also a tool for freedom. The internet is an incredible tool for freedom. And so of course, we have to fight for that freedom, but I believe in general, the greater… Let’s just take the internet broadly as an example, and there’s a lot of sub-elements of that, and like a more platonic sense of what the internet is, which is digital interconnectivity.
Lex Fridman
(00:30:01)
We have to fight for freedom, but in general, the greater reach and access that the internet has, the more powerful the resistance to totalitarianism. Technology is a double-edged sword. It provides the tools for oppression and the tools for the ongoing fight for freedom. And as long as the will to fight arises in the human heart, technology, I think, helps humanity win. And of course, there’s been a lot of discussion about free speech and the freedom of thought, and there’s a lot to be said there that’s much more nuanced than the book 1984 provides. I think 1984 just shows the end, horrible conclusion of complete totalitarian control over speech, over thought, over feeling, over everything. But in general, my view of it is it’s a kind of inspiration to…
Lex Fridman
(00:30:57)
In order to prevent ourselves from slipping into an authoritarian or totalitarian state, into Orwellian types of dystopias, we have to value critical and independent thought. I think thought first, before speech. Just thought. I think you have to learn to think deeply from first principles, independent of whatever tribe you find yourself in. Independent of government, independent of groups, independent of the people around you, the people you love, the people who love you. You have to learn, at least sometimes, to think independently. Now, there is the Nietzsche line: “If you gaze long into the abyss, the abyss gazes into you.” If you think too independently, it can break your mind. I mean, we are social creatures. We need that connection.
Lex Fridman
(00:31:45)
But I think it’s like with Tom Waits: “I like my town with a little drop of poison.” I think of truly, deeply independent thought as a little drop of poison that’s necessary for your mind. Most of your life, you kind of assume most things around you are true, and that’s very useful. We stand on the shoulders of giants. But on a regular occasion, you have to question. Question your assumptions, question your biases, question everything. Question the things you’ve taken for granted. Question what everybody’s telling you. But not too much. It’s a tricky balance, but the act of rebellion against a totalitarian state, against the slippery slope into that state, is that independent thought. And of course, speech is a manifestation of that thought.
Lex Fridman
(00:32:28)
So we have to avoid echo chambers in both thought and speech. Like I said, you have to question your assumptions, challenge your biases. I think that’s the way out. Or maybe that’s a resistance mechanism to slipping into authoritarianism. And maybe I have a few more things to say about the latter part of the book, the part where there’s torture—where there’s Room 101 that has the thing you fear the most, which is different for all of us, and for Winston, that’s rats. It makes you wonder what that thing is for each of us. I left a mental note for myself to do more research into the historical context, the psychology, the neuroscience, the effectiveness of torture. I think there’s probably a lot of really good work.
Lex Fridman
(00:33:22)
I had a brief conversation with Andrew Huberman on the phone about this topic. Andrew Huberman, the brilliant Andrew Huberman, host of the Huberman Lab podcast that you should listen to. And he mentioned to me that there are a bunch of papers on these topics. This has been studied: sort of the carrot and the stick, the ability of incentives and disincentives to control the perception and the mental state of people and animals. And he mentioned a few folks that I could talk to on a podcast about this topic, and a few books. So, I’ll definitely look into this more. I think 1984 uses torture as a philosophical description, as a caricature of the operation of a totalitarian state.
Lex Fridman
(00:34:08)
But at the same time, a lot of those elements were actually done under Stalin in the Soviet Union, so it’s not like it’s very different or very far from reality. It’s very, very real. The question is about the actual effect it has on the human mind, which I really have to think about, because torture in this case breaks Winston. In fact, I’d like to believe that many people, in the most fundamental of ways, can’t be broken in this way. I’ve seen science… again, without extensively reading, so please correct me if I’m wrong. But I’ve seen science that shows that torture, for the purpose of intelligence gathering, is not effective.
Lex Fridman
(00:34:55)
It’s not effective to get accurate information because people will tell you anything, really, to stop the torture, stop the physical and the mental and the emotional suffering. But I think this book is about the use of torture to completely break your ability to think and to perceive the world. One of the things I talked to Andrew about is whether it’s possible to control perception through these kinds of things. And it seems that there is literature that shows it’s possible to literally change your perception of the world. Like in this case, in 1984, it’s when you’re holding up four fingers, can you actually make the person believe that you’re holding up five fingers?
Lex Fridman
(00:35:39)
Not because of some weird delusion, or because your vision is blurry; literally, when you look at someone holding up four fingers, what you see is five fingers. Not because your vision is poor. No, your visual cortex, the way you’re processing that information, something about the processing completely changes your perception. If I tell you there’s a straight line, can you, through incentive or disincentive, start seeing a crooked line or something like that? Anyway, I think there’s literature that supports that, which is, by the way, terrifying. But the thing I’d like to research more is whether that can be long-lasting. I just don’t believe it can be.
Lex Fridman
(00:36:24)
If you’re not pushed to your death, yes, maybe your perception, maybe your willingness to think, but your actual ability to think independent thoughts? Maybe you’re terrified. I understand if you’re terrified of any more thinking that leads to rebellious thoughts. The book mentions the idea of facecrime, where you can reveal your thoughts, the inner workings of your mind, through the subtleties of the expressions on your face. And I think, as the book also says, “If you want to keep a secret, you must also hide it from yourself.” So I can understand that.
Lex Fridman
(00:37:11)
And maybe that is the basic mechanism that torture leads to: your body, your mind learns to hide the truth from yourself. You don’t even allow yourself to think it, because you know that if you think it, it’s going to lead to facecrime and thoughtcrime, and that’s going to lead to more torture. That’s possible. But I just can’t imagine the capacity for love in the human heart being extinguished through torture, finally extinguished. Temporarily, yes, but finally, irrecoverably, which I think is the basic claim of the book. Because through the worst of the torture, Winston gives up Julia, the object of his love. The fact that you said, “Torture her, not me.”
Lex Fridman
(00:38:15)
“Anything to make this stop,” the fact that you said that, the fact that you thought that, is a statement, is a thought you can’t walk back to yourself. So it’s irrecoverable. You just destroyed your faith in love? I don’t think so. I think it’s possible we have to remember that this is one particular character. This is one particular story. I think there’s a lot of people in which the capacity to love cannot be broken, no matter the torture. But that’s an interesting scientific question, but it’s also a human question. I think Man’s Search for Meaning—there’s a lot of books that explore those kinds of questions. In the worst of conditions that humans had to suffer through, what still persists? What is the source of meaning?
Lex Fridman
(00:39:03)
And I just think that the flame of love persists through atrocities, through torture, through suffering, through all of it. But the claim of the book is that yes, a totalitarian state can use torture to break even that, even that which leads to the only love you’re allowed to have, which is the love for Big Brother. So I think, practically speaking, from the Party’s perspective, I think the point of O’Brien’s torture of Winston was to suffocate the hope in his mind and heart, so there is no hope, by completely destroying the knowledge of what is and isn’t true, so being betrayed.
Lex Fridman
(00:39:53)
And this kind of Goldstein’s book about the society, not knowing if that’s true, not knowing anything about Julia, is basically having no emotional or intellectual ground to stand on. It’s very difficult to have a sense of where you are. To have hope, you have to have a sense of where you are and where things could be. And then you also betray yourself. To force you to be a hypocrite on your own deepest feelings of love, I think basically puts you in a place where there’s no hope, there’s no point. It’s apathy. It’s nihilism. And there, a hardworking member of society that is nihilistic is probably what the Party wants, because that human will not rebel.
Lex Fridman
(00:40:42)
But on the point of hope, I should mention that there’s a kind of long-running theory that since the appendix… The appendix is about the details of Newspeak, the language that the Party is creating and enforcing. Because that appendix was written in the past tense, and it’s talking about Newspeak in the past tense and it’s written in English, sort of non-Newspeak, that means the Party and Newspeak and all of its elements that we see in the story are in the past. That the world from which the book is created has escaped that. And that’s a message of hope. That whatever the rebellion against the Party—whether it’s passionate lust and sex, whether it’s love, whether it’s seeking truth in a world full of lies—whatever it is, there’s a way out.
Lex Fridman
(00:41:40)
Again, to me, the way out is love. But that’s a hopeful message in this dystopian novel, that even these perfectly executed totalitarian states will fall. I took a few random notes here that maybe I’ll comment on. I wrote a quote: “The masses cannot rebel until they become conscious.” That might be either a Winston observation or an O’Brien statement. I’m not sure. But yeah, so you have to think, 80% plus are proles of the working class. They have the power if they want it, but they don’t want it. They don’t want to take it. That’s the whole point of the totalitarian state: to break your will for freedom, your desire for freedom, break your ability to know that you’re not free.
Lex Fridman
(00:42:28)
And that’s where all of it—the changing of history, the doublethink, the thought crime—all of that comes into play: the torture and the Ministry of Love. All of that is about preventing the populace from becoming conscious. And again, as per the cells discussion earlier, I wrote down the O’Brien quote: “The death of the individual is not death. The Party is immortal.” And this is just an interesting observation about the operation of a totalitarian state, that the idea, a kind of amorphous symbol of the messianic figure in Big Brother, is all you need for the Party to persist. That person doesn’t actually have to exist. Any one individual doesn’t have to exist.
Lex Fridman
(00:43:18)
It’s just the division of society into high, middle, and low, and the oppression of the low by the high by the centralized Inner Party. That’s all you need, and the individual does not matter in that. And again, the way to fight that is to fight for individual freedoms. An interesting side note is just a quote I wrote down from Julia, I think: “If you keep the small rules, you can break the big ones.” And so she, in the book, is somebody who follows all the rules of the Party to a T. She attends all the committee meetings and all that kind of stuff, and just is like the model citizen from the perspective of the Party. And so that allows her to break the big rules, like having passionate sex with people—the really…
Lex Fridman
(00:44:11)
or falling in love, all the forbidden things. And I think that’s actually a good way to exist in the world. I think for a lot of us, there’s probably a bunch of things that bother us in the local world around us, in the bigger world, and I think you have to pick your battles. You have to not get lost in the muck of small battles if you want to have at least one or a few big victories in your life that make for a better world. I think, at least in my sense, it’s easy to get distracted by the little things that bother you in life.
Lex Fridman
(00:44:49)
And I think staying focused on the big things, again, picking your battles, and staying with that for as long as possible, working your ass off to solve one problem for as long as possible, not giving up against impossible odds, against all the criticism—that’s the way to solve those big problems. And of course, that’s not what Julia is talking about. But in a sense, she is also, because in that particular case, a totalitarian state is the problem. And the way to rebel is to plant that seed of rebellion in each of the people she has sex with: that we are human, that we have lust for each other, that we have the ability to love each other, and that is the necessary act of rebellion there.
Lex Fridman
(00:45:36)
That is the big leap for her, at least in that kind of society. I should also mention that there’s a lot of interpretations of the different small and big things in this book. So it’s very possible in the case of Julia that Winston was played. He was set up with Julia. He was set up to feel all those things. He was set up to have that little secret cove where he can write on his desk in the diary and dream of rebelling against the state, dreaming of the Brotherhood. It’s unclear to me why an oppressive state would want people to have that little journey of desiring freedom in all its manifestations. I’m not sure.
Lex Fridman
(00:46:26)
But maybe O’Brien’s statement that the purpose of torture is torture holds some wisdom: that to attain absolute power, you also have to have a willingness and a mechanism to attain absolute suffering in the populace. And maybe this is a way to maximize suffering: to give them hope before you crush it. Again, the way out to me and the takeaway from this book—the way out is love. Perhaps this is a good place to also mention a little bit of a fun little controversy that evolved over Twitter. So I posted a reading list quickly before heading off to a New Year’s party of books that I hope to read in 2023, and these are based on books that I asked people to vote on; these are many of the ones they selected.

Reading list controversy

Lex Fridman
(00:47:42)
And they happened to be many of the books I’ve read many times throughout my life and really enjoyed, and they were like old friends that I love visiting and revisiting. Every time I read them, I get something new and they just read differently throughout life. You know, the way in my teens when I read The Stranger by Camus is very different than it was in my 20s and different in my 30s. I’ll say my favorite book now by Camus is probably The Plague, and all of that has evolved. With Dostoevsky, I read The Idiot several times. I read The Brothers Karamazov both in English and Russian, and Notes from Underground. I mean, I love Dostoevsky. And a lot of these books are just…
Lex Fridman
(00:48:24)
Yes, they are classics, but they’re also deeply profound and they move me on an intellectual level, but also just as a human being. They’re like travel companions. They’re like old friends. Old dead friends. So yeah, I wanted to celebrate my love for books. And it was very strange to me that—and if I’m just being honest for a second, it was kind of painful that some prominent figures that I respect were kind of cruel about the list. They responded and they mocked it and all that kind of stuff, basically taking the worst possible interpretation. I have to be honest and say it wasn’t fun, because it was just a silly kid—me—kind of in a joyful New Year’s mood, sharing with the world books I love.
Lex Fridman
(00:49:32)
And I think what was happening—and this seems to be happening a bit more—is there’s a bunch of people that are just almost waiting or hoping that I fail, or maybe that I’m some kind of bad human being. They’re looking, they’re trying to discover things about me that reveal that I’m a bad human being, and maybe somehow this reading list reveals that. I don’t know. So, one criticism was that everybody read these books in school, and they’re basic. I think my response to that criticism is: no. First of all, most people have not read them in school; maybe they read CliffsNotes. And they’re not basic; they’re deeply profound, some of the greatest words ever written.
Lex Fridman
(00:50:26)
But also, I don’t think I’ve ever gotten a lot from books I was forced to read in school when I had to read them for an assignment. Some of these books I think I read in school, but most of them not. It’s only when I read them outside of school on my own volition that I really gained a lot from it, and especially throughout my life at regular times—as a teenager, in my 20s, and in my 30s. So, no. These books are profound and deserve returning to. Like I said, they are old friends that give me a lot of meaning every time I revisit the ideas, and they give me a new perspective on life. Another criticism was very nitpicky. The list was put together really quickly, and the goal—I like setting tough goals.
Lex Fridman
(00:51:14)
The goal was to read a book a week. And, you know, on one week I had The Little Prince followed by The Brothers Karamazov. And people criticized that: “How can you possibly read The Brothers Karamazov in one week?” Maybe I won’t. Maybe I’ll fail miserably. But I love trying. But that wasn’t actually the goal. I should’ve said I intend to finish reading it by the end of that week. So, you start earlier because The Little Prince takes an hour or two to read. And then for The Brothers Karamazov, I could have the two weeks. It should take about 30, 40, or 50 hours to read it. That said, friends, I’ve read it already in English and in Russian.
Lex Fridman
(00:51:59)
I’m interviewing the world-famous, amazing translators of The Brothers Karamazov, of Dostoevsky, and of Tolstoy—Richard Pevear and Larissa Volokhonsky—probably across multiple days. So, this book means a lot to me. I’m not somebody just kind of rolling in, “What are the cool kids reading these days?” These books have been lifelong companions to me. And the fact that people just want to stomp on that—and a large number of people did, people I respect—yeah, I’d be lying if I said it didn’t suck a bit. Anyway, the love for reading persists. I have to say, after that, I was very hesitant to even make this particular video on Orwell, on 1984. And I’m not sure I want to be public with my reading after this.
Lex Fridman
(00:52:59)
And I know a lot of people will say, “No, we’re here with you.” They’re very supportive, and I love you. I mean, I meet so many incredible people, but the reality is it just does suck to be vulnerable and share something with the world and receive that kind of mockery at scale. So I will definitely—I will not be affected or broken by any of that kind of stuff for something that’s actually meaningful, like the conversations—some of the very difficult conversations I’m going to do. But for a silly side hobby thing of reading that I do throughout my life to be a source of mockery, I’m just going to do that privately. So, I’m a little torn on that, and I’ll try to figure out a way.
Lex Fridman
(00:53:41)
Also, I should say that that list, like a lot of things, is kind of aspirational because if I take a job at a tech company, or if I start a tech company, or if I have to travel for extremely difficult conversations and really have to prepare for them—all that kind of stuff is going to affect my ability to both read and enjoy reading, which I think is a prerequisite for this kind of reading. But in general, what I do is I read about one hour a day on Kindle—on the sort of physical device, in my eyes. And depending on the workout I do and the chores I have, it’s going to be about two hours of audiobooks. So, most of the things I do during chores is audiobooks.
Lex Fridman
(00:54:30)
And when I run—and I usually run about 10 to 15 miles, so you’re talking about—I often run over two hours. It’s like a slow pace. When the days are not insane, it gives me a chance to think and a chance to listen to audiobooks, so I love that process. It’s an escape from the world, a chance for me to collect my thoughts. And yeah, it’s again a source of happiness and joy, and I wanted to share that. I think you can get quite a lot of reading done through that process, especially if it’s a book you’ve read before. It is very challenging to do this kind of takeaway video, or to concretize your thoughts down on paper, especially when you have to present them in this kind of way.
Lex Fridman
(00:55:12)
I’m not sure I’m going to do that much, because it’s an extra bit of effort. But it’s also a chance to share that joy with the world, and to find cool people that also enjoy it. So it’s a trade-off. Anyway, it’s just a temporary thing, but it did suck for a short amount of time—for a few hours, for a couple of days. But in general, I’ll persist with my love of reading. I might not talk about it publicly as much. But again, let me emphasize that this kind of response and mockery will not affect anything of importance that I do. I try to read comments; I try to see criticism. I really value especially high-effort criticism. I try to grow and constantly try to improve.
Lex Fridman
(00:56:05)
But that’s for things that I take very seriously, like the podcast conversations that I do. But for silly things, like book lists, Spotify music playlists, the food I like to eat—I don’t know, anything, any fun side thing—it’s not that important. If it’s something that others don’t enjoy, then whatever. I’ll enjoy them probably with my friends locally here, or the people I meet. So, anyway, I love reading. I love reading classics. I love returning to old friends in book form, and making new ones.
Lex Fridman
(00:56:49)
There’s a bunch of science fiction that I embarrassingly have not read and would love to, because those worlds are so meaningful to so many of the people I’m friends with that I can’t wait to visit those worlds and sort of make new friends in the form of books. So, definitely the love for books, the love for reading persists. And if you share in that love, that’s beautiful. So thank you for joining me on this journey. Thank you for watching this silly little video. And I hope to see you next time. Love you all.

Transcript for State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490

This is a transcript of Lex Fridman Podcast #490 with Nathan Lambert & Sebastian Raschka.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Lex Fridman
(00:00:00)
The following is a conversation all about the state of the art in artificial intelligence, including some of the exciting technical breakthroughs and developments in AI that happened over the past year, and some of the interesting things we think might happen this upcoming year. At times, it does get super technical, but we do try to make sure that it remains accessible to folks outside the field without ever dumbing it down. It is a great honor and pleasure to be able to do this kind of episode with two of my favorite people in the AI community, Sebastian Raschka and Nathan Lambert. They are both widely respected machine learning researchers and engineers who also happen to be great communicators, educators, writers, and X posters.
Lex Fridman
(00:00:51)
Sebastian is the author of two books I highly recommend for beginners and experts alike. The first is Build a Large Language Model from Scratch, and the second is Build a Reasoning Model from Scratch. I truly believe in the machine learning and computer science world, the best way to learn and understand something is to build it yourself from scratch. Nathan is the post-training lead at the Allen Institute for AI, and author of the definitive book on reinforcement learning from human feedback. Both of them have great X accounts, great Substacks. Sebastian has courses on YouTube, Nathan has a podcast. And everyone should absolutely follow all of those. This is the Lex Fridman podcast.
Lex Fridman
(00:01:40)
To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, get feedback, and so on. And now, dear friends, here’s Sebastian Raschka and Nathan Lambert.

China vs US: Who wins the AI race?

Lex Fridman
(00:01:57)
So I think one useful lens to look at all this through is the so-called DeepSeek moment. This happened about a year ago in January 2025, when the open weight Chinese company DeepSeek released DeepSeek R1, which I think it’s fair to say surprised everyone with near or at state-of-the-art performance, with allegedly much less compute for much cheaper. And from then to today, the AI competition has gotten insane, both on the research level and the product level. It’s just been accelerating.
Lex Fridman
(00:02:32)
Let’s discuss all of this today, and maybe let’s start with some spicy questions if we can. Who’s winning at the international level? Would you say it’s the set of companies in China or the set of companies in the United States? Sebastian, Nathan, it’s good to see you guys. So Sebastian, who do you think is winning?
Sebastian Raschka
(00:02:53)
So winning is a very broad term. I would say you mentioned the DeepSeek moment, and I do think DeepSeek is definitely winning the hearts of the people who work on open weight models because they share these as open models. Winning, I think, has multiple timescales to it. We have today, we have next year, we have in ten years. One thing I know for sure is that I don’t think nowadays, in 2026, that there will be any company having access to a technology that no other company has access to. And that is mainly because researchers are frequently changing jobs, changing labs. They rotate. So I don’t think there will be a clear winner in terms of technology access.
Sebastian Raschka
(00:03:37)
However, I do think the differentiating factor will be budget and hardware constraints. I don’t think the ideas will be proprietary, but rather the resources that are needed to implement them. And so I don’t currently see a winner-takes-all scenario. I can’t see that at the moment.
Lex Fridman
(00:03:59)
Nathan, what do you think?
Nathan Lambert
(00:04:00)
You see the labs put different energy into what they’re trying to do. To demarcate the point in time when we’re recording this, the hype over Anthropic’s Claude Opus 4.5 model has been absolutely insane. I’ve used it and built stuff in the last few weeks, and it’s almost gotten to the point where it feels like a bit of a meme in terms of the hype. It’s kind of funny because this is very organic, and then if we go back a few months ago, Gemini 3 from Google got released, and it seemed like the marketing and wow factor of that release was super high. But then at the end of November, Claude Opus 4.5 was released and the hype has been growing, while Gemini 3 was before this.
Nathan Lambert
(00:04:44)
And it kind of feels like people don’t really talk about it as much, even though when it came out, everybody was like, this is Gemini’s moment to retake Google’s structural advantages in AI. Gemini 3 is a fantastic model, and I still use it. It’s just that differentiation is lower. I agree with what you’re saying, Sebastian, that the idea space is very fluid, but culturally Anthropic is known for betting very hard on code, and this Claude Code thing is working out for them right now. So I think that even if the ideas flow pretty freely, so much of this is bottlenecked by human effort and the culture of organizations, where Anthropic seems to at least be presenting as the least chaotic.
Nathan Lambert
(00:05:23)
It’s a bit of an advantage if they can keep doing that for a while. But on the other side of things, there’s a lot of ominous technology from China where there are way more labs than DeepSeek. DeepSeek kicked off a movement within China similar to how ChatGPT kicked off a movement in the US where everything had a chatbot. There are now tons of tech companies in China that are releasing very strong frontier open weight models, to the point where I would say that DeepSeek is kind of losing its crown as the preeminent open model maker in China, and the likes of Z.ai with their GLM models, MiniMax’s models, and Kimi K2 Thinking from Moonshot, especially in the last few months, have shone more brightly.
Nathan Lambert
(00:06:04)
The new DeepSeek models are still very strong, but that could be looked back on as a big narrative point where in 2025 DeepSeek came and provided this platform for way more Chinese companies that are releasing these fantastic models to have this new type of operation. These models from these Chinese companies are open weight, and depending on this trajectory, the business models that these American companies are doing could be at risk. But currently, a lot of people are paying for AI software in the US, and historically in China and other parts of the world, people don’t pay a lot for software.
Lex Fridman
(00:06:37)
So some of these models like DeepSeek have the love of the people because they are open weight. How long do you think the Chinese companies keep releasing open weight models?
Nathan Lambert
(00:06:47)
I would say for a few years. I think that, like in the US, there’s not a clear business model for it. I have been writing about open models for a while, and these Chinese companies have realized it. I get inbound from some of them. They’re smart and realize the same constraints, which is that a lot of top US tech companies and other IT companies won’t pay for an API subscription to Chinese companies for security concerns. This has been a long-standing habit in tech, and the people at these companies then see open weight models as an ability to influence and take part in a huge growing AI expenditure market in the US. They’re very realistic about this, and it’s working for them.
Nathan Lambert
(00:07:24)
And I think the government will see that that is building a lot of influence internationally in terms of uptake of the technology, so there’s going to be a lot of incentives to keep it going. But building these models and doing the research is very expensive, so at some point, I expect consolidation. But I don’t expect that to be a story of 2026; there will be more open model builders throughout 2026 than there were in 2025. And a lot of the notable ones will be in China.
Lex Fridman
(00:07:50)
You were going to say something?
Sebastian Raschka
(00:07:51)
Yes. You mentioned DeepSeek losing its crown. I do think to some extent, yes, but we also have to consider that they are still slightly ahead. It’s not that DeepSeek got worse, it’s just like the other ones are using the ideas from DeepSeek. For example, you mentioned Kimi, same architecture, they’re training it. And then again, we have this leapfrogging where they might be at some point in time a bit better because they have the more recent model. I think this comes back to the fact that there won’t be a clear winner. One person releases something, the other one comes in, and the most recent model is probably always the best model.
Nathan Lambert
(00:08:30)
Yeah. We’ll also see the Chinese companies have different incentives. DeepSeek is very secretive, whereas some of these startups are like the MiniMaxes and Z.ais of the world. Those two literally have filed IPO paperwork, and they’re trying to get Western mindshare and do a lot of outreach there. So I don’t know if these incentives will change the model development, because DeepSeek famously is built by a hedge fund, High-Flyer Capital, and we don’t know exactly what they use the models for or if they care about this.
Lex Fridman
(00:08:59)
They’re secretive in terms of communication, but they’re not secretive in terms of the technical reports that describe how their models work. They’re still open on that front. And we should also say on the Claude Opus 4.5 hype, there’s the layer of something being the darling of the X echo chamber, the Twitter echo chamber, and the actual amount of people that are using the model. I think it’s probably fair to say that ChatGPT and Gemini are focused on the broad user base that just wants to solve problems in their daily lives, and that user base is gigantic. So the hype about the coding may not be representative of the actual use.
Sebastian Raschka
(00:09:38)
I would say also a lot of the usage patterns are name recognition and brand, but also almost muscle memory, where ChatGPT has been around for a long time. People just got used to using it, and it’s almost like a flywheel where they recommend it to other users. One interesting point is also the customization of LLMs. For example, ChatGPT has a memory feature. So you may have a subscription and you use it for personal stuff, but I don’t know if you want to use that same thing at work because there is a boundary between private and work. If you’re working at a company, they might not allow that or you may not want that.
Sebastian Raschka
(00:10:16)
And I think that’s also an interesting point where you might have multiple subscriptions. One is just clean code; it has nothing of your personal images or hobby projects in there. It’s just for work. And then the other one is your personal thing. I think the future involves multiple models for different use cases. It doesn’t mean you only have to have one.

ChatGPT vs Claude vs Gemini vs Grok: Who is winning?

Lex Fridman
(00:10:38)
What model do you think won 2025, and what model do you think is going to win ’26?
Nathan Lambert
(00:10:43)
I think in the context of consumer chatbots, the question is: are you willing to bet on Gemini over ChatGPT? Which I would say in my gut feels like a bit of a risky bet because OpenAI has been the incumbent and there are so many benefits to that in tech. I think the momentum in 2025 was on Gemini’s side, but they were starting from such a low point. RIP Bard and those earlier attempts. I think huge credit to them for powering through the organizational chaos to make that happen. But also it’s hard to bet against OpenAI because they always come off as so chaotic, but they’re very good at landing things.
Nathan Lambert
(00:11:26)
Personally, I have very mixed reviews of GPT-5, but it must have saved them so much money, with the headline feature being a router, so most users no longer incur as much GPU cost. So I think it’s very hard to dissociate the things that I like out of models versus the things that are actually going to be a general public differentiator.
Lex Fridman
(00:11:50)
What do you think about 2026? Who’s going to win?
Nathan Lambert
(00:11:52)
I’ll say something, even though it’s risky. I think Gemini will continue to make progress on ChatGPT. Google has the scale when both of these are operating at such extreme scales, and Google has the ability to separate research and product a bit better, whereas you hear so much about OpenAI being chaotic operationally and chasing the high-impact thing, which is a very startup culture. Then on the software and enterprise side, I think Anthropic will have continued success as they’ve again and again been set up for that. Obviously Google Cloud has a lot of offerings, but I think this Gemini name brand is important for them to build.
Nathan Lambert
(00:12:28)
Google Cloud will continue to do well, but that’s a more complex thing to explain in the ecosystem because that’s competing with the likes of Azure and AWS rather than on the model provider side.
Lex Fridman
(00:12:40)
So in infrastructure, you think TPUs give them an advantage?
Nathan Lambert
(00:12:45)
Largely because the margin on NVIDIA chips is insane and Google can develop everything from top to bottom to fit their stack and not have to pay this margin, and they’ve had a head start in building data centers. So for all of these things, which have both long lead times and high costs with hard margins, Google has a kind of historical advantage. And if there’s going to be a new paradigm, it’s most likely to come from OpenAI. Their research division again and again has shown this ability to land a new research idea or a product. Like Deep Research, Sora, o1 thinking models—all these definitional things have come from OpenAI, and that’s got to be one of their top traits as an organization.
Nathan Lambert
(00:13:28)
So it’s kind of hard to bet against that, but I think a lot of this year will be about scale and optimizing what could be described as low-hanging fruit in models.
Lex Fridman
(00:13:37)
And clearly there’s a trade-off between intelligence and speed. This is what GPT-5 was trying to solve behind the scenes. It’s like, do people actually want intelligence, the broad public, or do they want speed?
Sebastian Raschka
(00:13:52)
I think it’s a nice variety actually, or the option to have a toggle there. For my personal usage, most of the time when I look something up, I use ChatGPT to ask a quick question and get the information I wanted fast. For most daily tasks, I use the quick model. Nowadays, I think the auto mode is pretty good where you don’t have to specifically say “thinking” or “non-thinking.” Then again, I also sometimes want the pro mode. Very often, when I have something written, I put it into ChatGPT and say, “Hey, do a very thorough check. Are all my references correct? Are all my thoughts correct? Did I make any formatting mistakes? Are the figure numbers wrong?” or something like that. And I don’t need that right away.
Sebastian Raschka
(00:14:33)
I can finish my stuff, maybe have dinner, let it run, come back and go through it. This is where I think it’s important to have this option. I would go crazy if for each query I had to wait 30 minutes, or even 10 minutes.
Nathan Lambert
(00:14:46)
That’s me. I’m sitting over here losing my mind that you use the router and the non-thinking model. I’m like, “How do you live with that?”
Nathan Lambert
(00:14:55)
That’s like my reaction. I’ve been heavily on ChatGPT for a while. I never touched GPT-5 non-thinking. I find it just… its tone and then its propensity for errors. It just has a higher likelihood of errors. Some of this is from back when OpenAI released o3, which was the first model to do this Deep Research and find many sources and integrate them for you. So I became habituated with that. I will only use GPT-5.2 thinking or pro when I’m finding any sort of information query for work, whether that’s a paper or some code reference. I will regularly have five pro queries going simultaneously, each looking for one specific paper or feedback on an equation.
Sebastian Raschka
(00:15:38)
I have a fun example where I just needed the answer as fast as possible for this podcast before I was going on the trip. I have a local GPU running at home and I wanted to run a long RL experiment. Usually I unplug things because if you’re not at home, you don’t want to have things plugged in, and I accidentally unplugged the GPU. My wife was already in the car and I was like, “Oh dang.” Basically, I wanted a Bash script as fast as possible that runs my different experiments and the evaluation. I know how to use the Bash terminal, but in that moment I just needed the command in 10 seconds.
Lex Fridman
(00:16:18)
This is a hilarious situation but yeah, so what did you use?
Sebastian Raschka
(00:16:21)
So I used the non-thinking, fastest model. It gave me the Bash command. I wanted to chain different scripts together and route the output to a log file with the `tee` command. I could have worked it out off the top of my head, but I was just in a hurry.
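For listeners who want to picture it, here is a minimal sketch of the kind of one-liner described here. The script names are placeholders standing in for the real experiment and evaluation scripts, not anything from the episode:

```shell
# Tiny placeholder scripts standing in for the real experiment/eval scripts
printf 'echo "experiment A done"\n' > train_a.sh
printf 'echo "experiment B done"\n' > train_b.sh
printf 'echo "evaluation done"\n'   > eval.sh

# Chain them so each runs only if the previous one succeeded, and mirror
# all output (stdout and stderr) to both the terminal and a log file via tee
{ bash train_a.sh && bash train_b.sh && bash eval.sh; } 2>&1 | tee run.log
```

The `2>&1` merge means error messages land in the log too, which is handy when you only check the results hours later.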
Lex Fridman
(00:16:37)
By the way, I don’t know if that’s a representative case: wife waiting in the car, you have to run, unplug the GPU, you have to generate a Bash script. This sounds like a movie… Mission Impossible.
Nathan Lambert
(00:16:46)
I use Gemini for that. I use thinking for all the information stuff and then Gemini for fast things or stuff that I could sometimes Google. It’s good at explaining things and I trust that it has this background of knowledge and it’s simple. And the Gemini app has gotten a lot better.
Nathan Lambert
(00:17:01)
It’s good for those sorts of things. And then for code and any sort of philosophical discussion, I use Claude Opus 4.5, also always with extended thinking. Extended thinking and inference-time scaling is just a way to make the models marginally smarter. I will always err on that side when the rate of progress is very high, because you don’t know when that’ll unlock a new use case. And then I sometimes use Grok for real-time information or finding something on AI Twitter that I knew I saw and need to dig up. Although when Grok 4 came out, the Grok 4 Heavy—which was their pro variant—was actually very good and I was pretty impressed with it, and then I just kind of lost track of it with muscle memory from having the ChatGPT app open. So I use many different things.
Lex Fridman
(00:17:45)
Yeah. I actually do use Grok 4 Heavy for debugging. For hardcore debugging that the other ones can’t solve, I find it’s the best. And it’s interesting because you say ChatGPT is the best interface. For that same reason—though this could be just momentum—Gemini is the better interface for me. I think because I fell in love with their needle-in-the-haystack capabilities. If I ever put in something that has a lot of context but I’m looking for very specific information, to make sure it tracks all of it, I find Gemini has been the best. So it’s funny with some of these models, if they win your heart over—
Lex Fridman
(00:18:28)
…for one particular feature on a particular day, for that particular query or prompt, you’re like, “This model’s better.” And so you’ll just stick with it for a bit until it does something really dumb. There’s like a threshold effect. Some smart thing happens and then you fall in love with it, and then it does some dumb thing and you’re like, “You know what? I’m gonna switch and try Claude or ChatGPT.” And all that kind of stuff.
Sebastian Raschka
(00:18:51)
This is exactly it. You use it until it breaks, until you have a problem, and then you change the LLM. I think it’s the same way we use anything, like our favorite text editor, operating system, or browser. I mean, there are so many browser options: Safari, Firefox, Chrome. They’re relatively similar, but then there are edge cases, maybe extensions you want to use, and then you switch. But I don’t think anyone types the same thing into different browsers and compares them. You only do that when the website doesn’t render or if something breaks. So that’s a good point. You use it until it breaks, and then you explore other options.
Nathan Lambert
(00:19:28)
On the long context thing, I was also a Gemini user for this, but the GPT-5.2 release blog had crazy long context scores, where a lot of people were like, “Did they just figure out some algorithmic change?” It went from like 30% to like 70% or something in this minor model update. So it’s also very hard to keep track of all of these things, but now I look more favorably at GPT-5.2’s long context. So it’s just kind of like a never-ending battle to actually get to testing this.
Lex Fridman
(00:19:57)
Well, it’s interesting that none of us talked about the Chinese models from a user perspective. What does that say? Does that mean the Chinese models are not as good, or does that mean we’re just very biased and US-focused?
Sebastian Raschka
(00:20:11)
I do think there’s currently a discrepancy between the model and the platform. I think the open models are more known for the open weights, not their platform yet.
Nathan Lambert
(00:20:21)
There are also a lot of companies that are willing to sell you open-model inference at a very low cost. With OpenRouter, for example, it’s easy to look at multi-model things. You can run DeepSeek on Perplexity. I think all of us sitting here are like, “We use OpenAI GPT-5 Pro consistently.” We’re all willing to pay for the marginal—
Nathan Lambert
(00:20:39)
…intelligence gain. And these models from the US are better in terms of the outputs. I think the question is, will they stay better for this year and for years going forward? But so long as they’re better, I’m going to pay for them. I think there’s also analysis that shows that the way the Chinese models are served—which you could argue is due to export controls or not—is that they use fewer GPUs per replica, which makes them slower and leads to different errors. It’s about speed and intelligence.
Nathan Lambert
(00:21:09)
If these things are in your favor as a user, I think in the US a lot of users will go for this. I think that is one thing that will spur these Chinese companies to want to compete in other ways, whether it’s free or substantially lower costs, or it’ll breed creativity in terms of offerings, which is good for the ecosystem. But I just think the simple thing is the US models are currently better, and we use them. I tried these other open models, and I’m like, “Fun, but I’m not gonna… I don’t go back to it.”

Best AI for coding

Lex Fridman
(00:21:38)
We didn’t really mention programming. That’s another use case that a lot of people deeply care about. I use basically half-and-half Cursor and Claude Code, because I find them to be fundamentally different experiences and both useful. You program quite a bit— …so what do you use? What’s the current vibe?
Sebastian Raschka
(00:21:59)
So, I use the Codeium plugin for VS Code. You know, it’s very convenient. It’s just a plugin, and then it’s a chat interface that has access to your repository. I know that Claude Code is a bit different. It is a bit more agentic. It touches more things; it does the whole project for you. I’m not quite there yet where I’m comfortable with that because maybe I’m a control freak, but I still like to see what’s going on. Codeium is the sweet spot for me right now where it is helping me, but it is not taking over completely.
Lex Fridman
(00:22:29)
I should mention, one of the reasons I do use Claude Code is to build the skill of programming with English. I mean, the experience is fundamentally different. As opposed to micromanaging the details of the generation and looking at the diff—which you can do in Cursor if that’s the IDE you use—where you understand the code deeply as you progress, here you’re thinking in this design space and guiding it at a macro level. I think that is another way of thinking about the programming process. Also, Claude Code just seems to be a better utilization of Claude Opus 4.5.
Nathan Lambert
(00:23:18)
It’s a good side-by-side for people to do. You can have Claude Code open, you can have Cursor open, you can have VS Code open, and you can select the same models on all of them— …and ask questions, and it’s very interesting. Claude Code is way better in that domain. It’s remarkable.
Lex Fridman
(00:23:32)
All right, we should say that both of you are legit on multiple fronts: researchers, programmers, educators, and on the book front, too. Nathan hopefully has an RLHF book coming out at some point soon.
Nathan Lambert
(00:23:50)
It’s available for preorder, and there’s a full digital preprint. I’m just making it pretty and better organized for the physical thing, which is a lot of why I do it—it’s fun to create things that you think are excellent in physical form when so much of our life is digital.
Lex Fridman
(00:24:05)
I should say, going to Perplexity here, Sebastian Raschka is a machine learning researcher and author known for several influential books. A couple that I wanted to mention—and books I highly recommend—are Build a Large Language Model From Scratch, and the new one, Build a Reasoning Model From Scratch. I’m really excited about that. Building stuff from scratch is one of the most powerful ways of learning.
Sebastian Raschka
(00:24:27)
Honestly, building an LLM from scratch is a lot of fun, and you learn a lot. Like you said, it’s probably the best way to learn how something really works, because you can look at figures, but figures can have mistakes. You can look at conceptual explanations, but you might misunderstand them. But if there is code and the code works, you know it’s correct. There’s no misunderstanding; it’s precise. Otherwise, it wouldn’t work. I think that’s the beauty behind coding. It doesn’t lie. It’s math, basically. Even with math, you can have mistakes in a book you would never notice because you aren’t running the math while reading, so you can’t verify it. And with code, what’s nice is you can verify it.
Lex Fridman
(00:25:09)
Yeah, I agree with you about the Build a Large Language Model From Scratch book. It’s nice to tune out everything else, the internet and so on, and just focus on the book. But, you know, compared to history books, it’s just less lonely somehow. It’s really more fun. For example, on the programming front, I think it’s genuinely more fun to program with an LLM. And I think it’s genuinely more fun to read with an LLM. But you’re right. This distraction should be minimized. So you use the LLM to basically enrich the experience, maybe add more context. Maybe I just… the rate of ‘aha’ moments for me on a small scale is really high with LLMs.
Sebastian Raschka
(00:25:54)
100%. I also want to correct myself: I’m not suggesting not to use LLMs. I suggest doing it in multiple passes. Like, one pass just offline, focus mode, and then after that… I mean, I also take notes, but I try to resist the urge to immediately look things up. I do a second pass. For me, it’s just more structured this way and I get less… I mean, sometimes things are answered in the chapter, but also it just helps to let it sink in and think about it. Other people have different preferences. I would highly recommend using LLMs when reading books. For me, it’s just not the first thing to do; it’s the second pass.
Lex Fridman
(00:26:29)
By way of recommendation, I do the opposite. I like to use the LLM at the beginning— …to lay out the full context of what is this world that I’m now stepping into. But I try to avoid clicking out of the LLM into the world of Twitter and blogs because then you’re down this rabbit hole. You’re reading somebody’s opinion, there’s a flame war about a particular topic, and all of a sudden you’re now in the realm of the internet and Reddit and so on. But if you’re purely letting the LLM give you the context of why this matters, what are the big picture ideas… sometimes books themselves are good at doing that, but not always.
Nathan Lambert
(00:27:12)
This is why I like the ChatGPT app, because it gives the AI a home in your computer where you can focus on it, rather than just being another tab in my mess of internet options. And I think Claude Code in particular does a good job of making that a joy, where it seems very engaging as a product design to be an interface that your AI will then go out into the world. There’s something very intangible between it and Codex; it just feels warm and engaging, whereas Codex from OpenAI can often be as good but it just feels a little bit rough around the edges.
Nathan Lambert
(00:27:45)
Whereas Claude Code makes it fun to build things, particularly from scratch where you trust that it’ll make something. Obviously this is good for websites and refreshing tooling, which I use it for, or data analysis. On my blog, we scrape Hugging Face so we keep the download numbers for every dataset and model over time now. Claude was just like, “Yeah, I’ve made use of that data, no problem.” And I was like, “That would’ve taken me days.” And then I have enough situational awareness to be like, “Okay, these trends obviously make sense,” and you can check things. But that’s just a wonderful interface where you can have an intermediary and not have to do the awful low-level work that you would have to do to maintain different web projects.

Open Source vs Closed Source LLMs

Lex Fridman
(00:28:29)
All right. So we just talked about a bunch of the closed-weight models. Let’s talk about the open ones. Tell me about the landscape of open LLMs. Which are interesting ones? Which stand out to you and why? We already mentioned DeepSeek.
Nathan Lambert
(00:28:44)
Do you wanna see how many we can name off the top of our head?
Lex Fridman
(00:28:47)
Yeah, yeah. Without looking at notes.
Nathan Lambert
(00:28:48)
DeepSeek, Kimi, MiniMax, Z.ai, Antlang. We’re just going Chinese.
Sebastian Raschka
(00:28:57)
Let’s throw in Mistral AI, Gemma— …gpt-oss, the open source model by OpenAI. Actually, NVIDIA had a really cool one, Nemotron 3. There’s a lot of stuff, especially at the end of the year. Qwen might be the one—
Nathan Lambert
(00:29:12)
Oh, yeah. Qwen was the obvious name I was gonna say. I was trying to get through… you can get at least 10 Chinese and at least 10 Western. I mean, OpenAI released their first open model—
Sebastian Raschka
(00:29:21)
A long time ago.
Nathan Lambert
(00:29:22)
…since GPT-2. When I was writing about OpenAI’s open model release, people were like, “Don’t forget about GPT-2,” which I thought was really funny because it’s just such a different time. But gpt-oss is actually a very strong model and does some things that the other models don’t do very well. Selfishly, I’ll promote a bunch of Western companies, in both the US and Europe, that have these fully open models. I work at the Allen Institute for AI, where we’ve been building OLMo, which releases data and code and all of this. And now we have actual competition for people that are trying to release everything so that other people can train these models.
Nathan Lambert
(00:29:57)
So there’s the Institute for Foundation Models/LLM360, which has had their K2 models of various types. Apertus is a Swiss research consortium. Hugging Face has SmolLM, which is very popular. And NVIDIA’s Nemotron has started releasing data as well. And then there’s Stanford’s Marin community project, which is making it so there’s a pipeline for people to open a GitHub issue, implement a new idea, and then have it run in a stable language-modeling stack. So this space, that list was way smaller in 2024-
Nathan Lambert
(00:30:31)
… so I think it was just AI2. So that’s a great thing for more people to get involved and to understand language models, which doesn’t really have a Chinese analog. While I’m talking, I’ll say that the Chinese open language models tend to be much bigger, and that gives them higher peak performance as MoEs, whereas a lot of the things we like a lot, whether it was Gemma or Nemotron, have tended to be smaller models from the US, which is starting to change. Mistral Large 3 came out in December, a giant MoE model very similar to the DeepSeek architecture. And then a startup, Reka AI, and NVIDIA’s Nemotron have teased MoE models way bigger than 100 billion parameters-
Nathan Lambert
(00:31:16)
… in the 400 billion parameter range coming in this Q1 2026 timeline. So I think this kind of balance is set to change this year in terms of what people are using the Chinese versus US open models for, which I’m personally going to be very excited to watch.
Lex Fridman
(00:31:32)
First of all, huge props for being able to name so many of these. Did you actually name LLaMA?
Nathan Lambert
(00:31:38)
No.
Lex Fridman
(00:31:39)
I feel like …
Nathan Lambert
(00:31:40)
RIP.
Sebastian Raschka
(00:31:41)
This was not on purpose.
Lex Fridman
(00:31:43)
RIP LLaMA. All right. Can you mention what are some interesting models that stand out? You mentioned Qwen 3 is obviously a standout.
Sebastian Raschka
(00:31:51)
So I would say the year’s almost bookended: on one end, DeepSeek-V3 and DeepSeek R1, and on the other end, in December, DeepSeek-V3.2. Because what I like about those is they always have an interesting architecture tweak- … that others don’t have. But otherwise, if you want to go with the familiar but really good performance, Qwen 3 and, like Nathan said, also gpt-oss. And I think with gpt-oss, what’s interesting about it is it’s kind of the first open-weight model that was really trained with tool use in mind, which I do think is a bit of a paradigm shift where the ecosystem was not quite ready for it. So with tool use, I mean that the LLM is able to do a web search or call a Python interpreter.
Sebastian Raschka
(00:32:33)
And I do think it’s a standout because it’s a huge unlock. One of the most common complaints about LLMs is, for example, hallucinations, right? And so, in my opinion, one of the best ways to solve hallucinations is to not try to always remember information or make things up. For math, why not use a calculator app or Python?
Sebastian Raschka
(00:32:54)
If I ask the LLM, “Who won the soccer World Cup in 1998?” instead of just trying to memorize, it could go do a search. I think it’s usually still a Google search. So ChatGPT and gpt-oss would do a tool call to Google, maybe find the FIFA website, and find that it was France. It would get you that information reliably instead of just trying to memorize it. So I think it’s a huge unlock which right now is not fully utilized yet by the open-weight ecosystem. A lot of people don’t use tool-call modes because I think it’s a trust thing. You don’t want to run this on your computer where it has access to tools and could wipe your hard drive, so you want to containerize that. But I do think that is a really important step for the upcoming years, to have this ability.
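To make "tool use" concrete, here is a toy sketch of the loop being described. The message format, tool name, and fact table are invented for illustration; real APIs (including gpt-oss's actual tool-calling format) differ in detail:

```python
import json

# Toy sketch of a tool-call loop: the model either answers directly or emits
# a structured tool call, which the harness executes on the model's behalf.

def web_search(query: str) -> str:
    # Stub: a real implementation would call a search API.
    facts = {"soccer world cup 1998 winner": "France"}
    return facts.get(query.lower(), "no results")

TOOLS = {"web_search": web_search}

def run_with_tools(model_output: str) -> str:
    """Execute a tool call if the model emitted one; otherwise return the answer."""
    msg = json.loads(model_output)
    if msg.get("type") == "tool_call":
        return TOOLS[msg["name"]](**msg["arguments"])
    return msg["content"]

# Pretend the model decided to search instead of answering from memory:
call = json.dumps({"type": "tool_call", "name": "web_search",
                   "arguments": {"query": "soccer world cup 1998 winner"}})
print(run_with_tools(call))  # → France
```

The trust concern mentioned above is visible even in this sketch: the harness runs whatever the model asks for, which is why people sandbox or containerize the tool side.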
Lex Fridman
(00:33:44)
So a few quick things. First of all, thank you for defining what you mean by tool use. I think that’s a great thing to do in general for the concepts we’re talking about, even things as sort of well-established as MoEs. You have to say that means Mixture of Experts, and you kind of have to build up an intuition for people about what that means, how it’s actually utilized, what are the different flavors. So what does it mean that there’s just such an explosion of open models? What’s your intuition?
Nathan Lambert
(00:34:13)
If you’re releasing an open model, you want people to use it, is the first and foremost thing. And then after that comes things like transparency and trust. I think when you look at China, the biggest reason is that they want people around the world to use these models, and I think a lot of people will not. If you look outside of the US, a lot of people will not pay for software, but they might have computing resources where you can put a model on it and run it. I think there can also be data that you don’t want to send to the cloud. So the number one thing is getting people to use models, use AI, or use your AI that might not be able to do it without having access to the model.
Lex Fridman
(00:34:46)
I guess we should state explicitly, so we’ve been talking about these Chinese models and open weight models. Oftentimes, the way they’re run is locally. So it’s not like you’re sending your data to China or to whoever developed the model in Silicon Valley.
Nathan Lambert
(00:35:04)
A lot of American startups make money by hosting these models from China and selling them. It’s called selling tokens, which means somebody will call the model to do some piece of work. I think the other reason is for US companies like OpenAI. OpenAI is so GPU deprived; they’re at the limits of the GPUs. Whenever they make a release, they’re always talking about how their GPUs are hurting. And I think in one of these gpt-oss-120b release sessions, Sam Altman said, “Oh, we’re releasing this because we can use your GPUs. We don’t have to use our GPUs and OpenAI can still get distribution out of this,” which is another very real thing, because it doesn’t cost them anything.
Sebastian Raschka
(00:35:43)
And for the user, I think also, I mean, there are users who just use the model locally how they would use ChatGPT. But also for companies, I think it’s a huge unlock to have these models because you can customize them, you can train them, you can add more data post-training, like specialize them into, let’s say, law, medical models, whatever you have. And you mentioned Llama; the appeal of the open weight models from China is that the licenses are even friendlier. I think they are just unrestricted open source licenses, whereas if we use something like Llama or Gemma, there are some strings attached. I think it’s like an upper limit in terms of how many users you have.
Sebastian Raschka
(00:36:21)
And then if you exceed so many million users, you have to report your financial situation to, let’s say, Meta or something like that. And I think while it is a free model, there are strings attached, and people like things where strings are not attached. So I think that’s also one of the reasons besides performance why the open weight models from China are so popular, because you can just use them. There’s no catch in that sense.
Nathan Lambert
(00:36:46)
The ecosystem has gotten better on that front, but mostly downstream of these new providers providing such open licenses. That was funny when you pulled up Perplexity and said, “Kimi K2 Thinking hosted in the US.” Which is an exact example of what we’re talking about where people are sensitive to this. Kimi K2 Thinking is a model that is very popular. People say that has very good creative writing and also in doing some software things. So it’s just these little quirks that people pick up on with different models that they like.
Lex Fridman
(00:37:14)
What are some interesting ideas that some of these models have explored that you can speak to, like that are particularly interesting to you?
Sebastian Raschka
(00:37:21)
Maybe we can go chronologically. I mean, there was, of course, DeepSeek R1 that came out in January of 2025. However, this was based on DeepSeek-V3, which came out the year before, in December 2024. There are multiple things on the architecture side. What is fascinating is you can still—I mean, that’s what I do with my from-scratch coding projects—you can still start with GPT-2, and you can add things to that model to make it into this other model. So it’s all still kind of the same lineage. There is a very close relationship between those. But off the top of my head, what was unique with DeepSeek is the Mixture of Experts. I mean, they were not inventing Mixture of Experts.
Sebastian Raschka
(00:38:00)
We can maybe talk a bit more about what Mixture of Experts means. But just to list these things first before we dive into detail: Mixture of Experts, but then they also had multi-head latent attention, which is a tweak to the attention mechanism. This was, I would say in 2025, the main distinguishing factor between these open-weight models: different tweaks to make inference or the KV cache size more economical. We can also define KV cache in a few moments. But it makes it more economical to have long context, to shrink the KV cache size. So what are tweaks that we can do? Most of them focused on the attention mechanism. There is multi-head latent attention in DeepSeek; there is grouped-query attention, which is still very popular.
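As a rough illustration of why these attention tweaks shrink the KV cache, here is some back-of-the-envelope arithmetic for grouped-query attention. All dimensions are hypothetical, not taken from any specific model:

```python
# Back-of-the-envelope KV-cache sizing, to show why grouped-query attention
# (GQA) helps: fewer KV heads means fewer cached keys/values per token.

def kv_cache_bytes(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
    # Keys AND values (factor of 2), stored per layer, per KV head, per
    # position, assuming 16-bit (2-byte) values by default.
    return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem

layers, q_heads, head_dim, ctx = 32, 32, 128, 8192   # a hypothetical 7B-ish config

mha = kv_cache_bytes(layers, q_heads, head_dim, ctx)  # multi-head: 32 KV heads
gqa = kv_cache_bytes(layers, 8, head_dim, ctx)        # GQA: 4 query heads share each KV head

print(f"MHA: {mha / 1e9:.1f} GB, GQA: {gqa / 1e9:.1f} GB")  # → MHA: 4.3 GB, GQA: 1.1 GB
```

Multi-head latent attention gets a similar saving a different way, by caching a compressed latent instead of full keys and values.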
Sebastian Raschka
(00:38:44)
It’s not invented by any of those models; it goes back a few years. But that would be the other option. Sliding-window attention is another; I think OLMo 3 uses it, if I remember correctly. So there are these different tweaks that make the models different. Otherwise, I put them all together in an article once where I just compared them; they are surprisingly similar. It’s just different numbers in terms of how many repetitions of the transformer block you have in the center, and just little knobs that people tune. But what’s so nice about it is it works no matter what. You can tweak things, you can move the normalization layers around to get some performance gains.
Sebastian Raschka
(00:39:23)
And OLMo is always very good with ablation studies, showing what it actually does to the model if you move something around. Ablation studies: does it make it better or worse? But there are so many ways you can implement a transformer and still make it work. The big ideas that are still prevalent are Mixture of Experts, multi-head latent attention, sliding-window attention, and grouped-query attention. And then at the end of the year, we saw a focus on making the attention mechanism scale linearly with inference-time token prediction. So there was Qwen3-Next, for example, which added a gated DeltaNet. It’s inspired by state space models, where you have a fixed state that you keep updating. But it essentially makes this attention cheaper, or it replaces attention with a cheaper operation.

Transformers: Evolution of LLMs since 2019

Lex Fridman
(00:40:08)
And it may be useful to step back and talk about transformer architecture in general.
Sebastian Raschka
(00:40:13)
Yeah, so maybe we should start with GPT-2 architecture, the transformer that was derived from the “Attention Is All You Need” paper.
Sebastian Raschka
(00:40:21)
So the “Attention Is All You Need” paper had a transformer architecture that had two parts: an encoder and a decoder. And GPT went with just focusing on the decoder part. It is essentially still a neural network and it has this attention mechanism inside. And you predict one token at a time. You pass the input through an embedding layer. There’s the transformer block. The transformer block has attention modules and a fully connected layer. And there are some normalization layers in between. But it’s essentially neural network layers with this attention mechanism. So coming from GPT-2, when we move on to gpt-oss-120b, there is, for example, the Mixture of Experts layer. It’s not invented by gpt-oss; it’s a few years old.
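The decoder block just described can be sketched at the shape level like this. Single attention head, NumPy, random weights, no training; purely illustrative:

```python
import numpy as np

# Shape-level sketch of one decoder-only transformer block: embeddings go in,
# then attention and a fully connected (MLP) layer, each with a residual
# connection and a normalization, and the same shape comes out.

rng = np.random.default_rng(0)
d = 16          # embedding dimension
seq = 4         # number of tokens so far

def norm(x):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-5)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

x = rng.normal(size=(seq, d))                    # token embeddings

# Self-attention: each position attends to itself and earlier positions
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
mask = np.triu(np.full((seq, seq), -1e9), k=1)   # causal mask: no peeking ahead
attn = softmax(q @ k.T / np.sqrt(d) + mask) @ v
x = norm(x + attn)                               # residual + norm

# Fully connected (MLP) part: the piece Mixture of Experts later makes sparse
W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))
x = norm(x + np.maximum(0, x @ W1) @ W2)         # ReLU MLP + residual + norm

print(x.shape)  # (4, 16): same shape in, same shape out
```

A full model stacks dozens of these blocks and adds an output projection back to the vocabulary to predict the next token.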
Sebastian Raschka
(00:41:04)
But it is essentially a tweak to make the model larger without consuming more compute in each forward pass. So there is this fully connected layer, and if listeners are familiar with multi-layer perceptrons, you can think of a mini multi-layer perceptron, a fully connected neural network layer inside the transformer. And it’s very expensive because it’s fully connected. If you have a thousand inputs and a thousand outputs, that’s like a million connections. And it’s a very expensive part in this transformer. And the idea is to kind of expand that into multiple feedforward networks. So instead of having one, let’s say you have 256, but you don’t use all of them at the same time.
Sebastian Raschka
(00:41:49)
So you now have a router that says, “Okay, based on this input token, it would be useful to use this fully connected network.” And in that context, it’s called an expert. So a Mixture of Experts means you have multiple experts. And depending on what your input is—let’s say it’s more math-heavy—it would use different experts compared to, let’s say, translating input text from English to Spanish. It would maybe consult different experts. It’s not as clear-cut to say, “Okay, this is only an expert for math and this for Spanish.” It’s a bit more fuzzy. But the idea is essentially that you pack more knowledge into the network, but not all the knowledge is used all the time.
Sebastian Raschka
(00:42:27)
That would be very wasteful. So yeah, during token generation, you are more selective. There’s a router that selects which tokens should go to which expert. It adds more complexity. It’s harder to train. There’s a lot that can go wrong, like router collapse and everything. So I think that’s why OLMo 3 still uses a dense model… I mean, there are, I think, OLMo models with Mixture of Experts, but also dense models. “Dense” is more jargon: there’s a distinction between dense and sparse. Mixture of Experts is considered sparse because we have a lot of experts, but only a few of them are active. And dense would be the opposite, where you only have one fully connected module, and it’s always utilized.
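The routing just described can be sketched in a few lines. Everything here is a toy: the expert count, sizes, and random weights are illustrative, and each "expert" is reduced to a single matrix:

```python
import numpy as np

# Toy sketch of Mixture-of-Experts routing: a router scores each token
# against every expert, and only the top-k experts actually run (sparse),
# instead of one big always-on MLP (dense).

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2

# Each "expert" is its own small feedforward layer (here, just one matrix)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router = rng.normal(size=(d, n_experts))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token):
    scores = softmax(token @ router)        # router: how relevant is each expert?
    chosen = np.argsort(scores)[-top_k:]    # keep only the top-k experts
    # Weighted sum of the chosen experts' outputs; the others do no work at all
    out = sum(scores[i] * (token @ experts[i]) for i in chosen)
    return out, sorted(chosen.tolist())

out, chosen = moe_forward(rng.normal(size=d))
print(out.shape, len(chosen))  # 8-dim output; only 2 of the 4 experts consulted
```

The failure mode mentioned above, router collapse, is when the router keeps sending everything to the same few experts, which is why real training adds load-balancing losses.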
Lex Fridman
(00:43:08)
So maybe this is a good place to also talk about KV cache. But actually, before that, even zooming out, fundamentally, how many new ideas have been implemented from GPT-2 to today? Like, how different really are these architectures?
Sebastian Raschka
(00:43:25)
Big picture: there’s the Mixture of Experts. The attention mechanism in gpt-oss-120b would be grouped-query attention, a slight tweak from multi-head attention, so that’s two. I think they replaced LayerNorm with RMSNorm, but that’s just a different normalization, not a big change. It’s just a tweak. The nonlinear activation function—for people familiar with deep neural networks, it’s the same as replacing sigmoid with ReLU. It’s not changing the network fundamentally. It’s just a tweak. And that’s about it, I would say. It’s not really fundamentally that different. It’s still the same architecture. You can go from one into the other by just adding these changes, basically.
Lex Fridman
(00:44:09)
It fundamentally is still the same architecture.
Sebastian Raschka
(00:44:12)
Mm-hmm. Yep. So for example, you mentioned my book earlier. That’s a GPT-2 model in the book because it’s simple and it’s very small, approximately 124 million parameters. But in the bonus materials, I do have OLMo from scratch, Gemma 3 from scratch, and other types of from-scratch models. And I always start with my GPT-2 model and just, you know, add different components, and you get from one to the other. It’s kind of a lineage in a sense. Yeah.
Lex Fridman
(00:44:37)
Can you build up an intuition for people? Because sort of when you zoom out and look at it, there’s so much rapid advancement in the AI world, and at the same time, fundamentally the architectures have not changed. So where is all the turbulence, the turmoil of the advancement happening? Where are the gains to be had?
Sebastian Raschka
(00:45:01)
So there are the different stages where you develop or train the network. You have pre-training. Back in the day, with GPT-2, it was just pre-training. Now you have pre-training, mid-training, and post-training. So I think right now we are in the post-training focus stage. I mean, pre-training still gives you advantages if you scale it up with better, higher-quality data. But then we have capability unlocks that were not there with GPT-2, for example. ChatGPT is basically a GPT-3 model. And GPT-3 is the same as GPT-2 in terms of architecture. What was new was adding the supervised fine-tuning and Reinforcement Learning from Human Feedback. So, it’s more on the algorithmic side rather than the architecture.
Nathan Lambert
(00:45:44)
I would say that the systems also change a lot. I think if you listen to NVIDIA’s announcements, they talk about things like, “You now do FP8, you can now do FP4.” And what is happening is these labs are figuring out how to utilize more compute to put into one model, which lets them train faster and lets them put more data in. And then you can find better configurations faster by doing this. So you can look at the tokens per second per GPU as a metric that you look at when you’re doing large-scale training. And you can go from, like, 10K to 13K by turning on FP8 training, which means you’re using less memory per parameter in the model. And by saving less information, you do less communication and you can train faster.
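The memory side of this is simple arithmetic. A rough sketch with illustrative numbers (weights only; real training memory also includes optimizer states, gradients, and activations):

```python
# Illustrative weights-only memory arithmetic for lower-precision
# number formats. These are not any lab's real footprints.
def model_memory_gb(n_params, bytes_per_param):
    return n_params * bytes_per_param / 1e9

for fmt, nbytes in [("FP32", 4), ("BF16", 2), ("FP8", 1), ("FP4", 0.5)]:
    # 120e9 parameters is roughly gpt-oss-120b scale, used as a round number
    print(fmt, model_memory_gb(120e9, nbytes), "GB")
```

Halving the bytes per parameter halves the memory and the communication volume for the weights, which is where the tokens-per-second-per-GPU gains come from.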
Nathan Lambert
(00:46:24)
So all of these systems things underpin way faster experimentation on data and algorithms. It’s this kind of loop that keeps going, which is kind of hard to describe when you look at the architectures and they’re exactly the same. But the code base used to train these models is going to be vastly different. The GPUs are different, but you could probably train gpt-oss-20b way faster in wall-clock time than GPT-2 was trained at the time.
Sebastian Raschka
(00:46:54)
Yeah. Like you said, they had, for example, in the Mixture of Experts, this NVIDIA FP4 optimization where you get more throughput. But I do think for the speed, this is true, but it doesn’t give the model new capabilities in a sense. It’s just: how much can we make the computation coarser without suffering model performance degradation? But I do think there are alternatives popping up to the transformer. There are text diffusion models, a completely different paradigm. And although text diffusion models might use transformer architectures, it’s not an autoregressive transformer. And also Mamba models, which are State Space Models.
Sebastian Raschka
(00:47:34)
But they do have trade-offs, and what’s true is there’s nothing that has replaced the autoregressive transformer as the state-of-the-art model. So, for state-of-the-art, you would still go with that thing, but there are now alternatives for the cheaper end—alternatives that are kind of making compromises, but it’s not just one architecture anymore. There are little ones coming up. But if we talk about the state-of-the-art, it’s pretty much still the transformer architecture, autoregressive, derived from GPT-2 essentially.

AI Scaling Laws: Are they dead or still holding?

Lex Fridman
(00:48:06)
I guess the big question here is—we talked quite a bit here on the architecture behind the pre-training—are the scaling laws holding strong across pre-training, post-training, inference, context size, data, and synthetic data?
Nathan Lambert
(00:48:20)
I’d like to start with the technical definition of a scaling law-
Nathan Lambert
(00:48:23)
…which kind of informs all of this. The scaling law is the power-law relationship between… You can think of the x-axis—what you are scaling—as a combination of compute and data, which are kind of similar, and then the y-axis is like the held-out prediction accuracy on next tokens. We talked about models being autoregressive. It’s like, if you keep a set of text that the model has not seen, how accurate will it get as you train? And the idea of scaling laws came when people figured out that that was a very predictable relationship. I think that technical trend is continuing, and then the question is, what do users get out of it? And then there are more types of scaling, where OpenAI’s o1 was famous for introducing inference-time scaling.
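A power law is a straight line on log-log axes, which is what makes the relationship so predictable. A small sketch with made-up numbers, assuming a loss of the form L = a * C^(-b):

```python
import math

# Made-up (compute, held-out loss) pairs that follow L = a * C^(-b);
# real scaling-law fits use many actual training runs instead.
a, b = 10.0, 0.05
compute = [1e18, 1e19, 1e20, 1e21]
loss = [a * c ** (-b) for c in compute]

# On log-log axes a power law is a straight line, so a least-squares
# fit of log(loss) against log(compute) recovers the exponent b.
xs = [math.log(c) for c in compute]
ys = [math.log(l) for l in loss]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
print(round(-slope, 4))  # recovers b = 0.05
```

Fitting this line on small runs is what lets labs predict the held-out loss of a much larger run before spending the compute.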
Nathan Lambert
(00:49:07)
And I think less famously for also showing that you can scale reinforcement learning training and get kind of this log x-axis and then a linear increase in performance on the y-axis. So there are kind of these three axes now where the traditional scaling laws are talked about for pre-training—which is how big your model is and how big your dataset is—and then scaling reinforcement learning, which is like how long can you do this trial and error learning that we’ll talk about. We’ll define more of this, and then this inference-time compute, which is just letting the model generate more tokens on a specific problem.
Nathan Lambert
(00:49:37)
So I’m kind of bullish; they’re all really still working, but the low-hanging fruit has mostly been taken, especially in the last year on Reinforcement Learning with Verifiable Rewards—this is RLVR—and then inference-time scaling. That’s why these models feel so different to use, where previously you would get that first token immediately, and now they’ll go off for seconds, minutes, or even hours generating these hidden thoughts before giving you the first word of your answer. That’s all about this inference-time scaling, which is such a wonderful kind of step function in how the models’ abilities changed. It enabled this tool-use stuff and enabled this much better software engineering that we were talking about.
Nathan Lambert
(00:50:17)
And this is, when we say enabled, almost entirely downstream of the fact that this Reinforcement Learning with Verifiable Rewards training just let the models pick up these skills very easily. So if you look at the reasoning process when the models are generating a lot of tokens, what it’ll often be doing is: it tries a tool, it looks at what it gets back, it tries another API, it sees what it gets back and if it solves the problem. The models, when you’re training them, very quickly learn to do this.
Nathan Lambert
(00:50:46)
And then at the end of the day, that gives this kind of general foundation where the model can use CLI commands very nicely in your repo, handle Git for you, move things around, organize things, or search to find more information—which, if we were sitting in these chairs a year ago, is something that we didn’t really think of the models doing. So this is just something that has happened this year and has totally transformed how we think of using AI, which I think is very magical. It’s such an interesting evolution and unlocks so much value. But it’s not clear what the next avenue will be in terms of unlocking stuff like this.
Nathan Lambert
(00:51:23)
I think that there’s—we’ll get to continual learning later, but there’s a lot of buzz around certain areas of AI, but no one knows when the next step function will really come.
Lex Fridman
(00:51:31)
So you’ve actually said quite a lot of things there, and said profound things quickly. It would be nice to unpack them a little bit. You say you’re bullish basically on every version of scaling. So can we just start at the beginning? Pre-training: are we implying that the low-hanging fruit on pre-training scaling has been picked? Has pre-training hit a plateau, or are you still bullish on even pre-training?
Nathan Lambert
(00:52:01)
Pre-training has gotten extremely expensive. I think scaling up pre-training also implies that you’re going to serve a very large model to the users. It’s been loosely established that the likes of GPT-4 and similar models were around one trillion parameters at the biggest size. There are a lot of rumors that they’ve actually gotten smaller as training has gotten more efficient. You want to make the model smaller because then your cost of serving goes down proportionately. The cost of training these models is really low relative to the cost of serving them to hundreds of millions of users. I think DeepSeek had this famous number of about five million dollars for pre-training at cloud market rates.
Nathan Lambert
(00:52:40)
In the OLMo 3 paper, section 2.4, we detailed how long we had the GPU clusters sitting around for training—which includes engineering issues and multiple seeds—and it was about two million dollars to rent the cluster and deal with all the problems and headaches of training a model. So a lot of people could get one to 10 million dollars to train a model, but the recurring cost of serving millions of users is really billions of dollars of compute. A thousand-GPU rental can run you 100 grand a day, and these companies could have millions of GPUs. You can look at how much these things cost to sit around.
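As a sanity check on those cluster numbers, the quoted rental rate works out to a plausible per-GPU-hour price:

```python
# Sanity check on the quoted cluster costs: a 1,000-GPU rental at
# "100 grand a day" implies a per-GPU-hour rate.
gpus = 1_000
cost_per_day = 100_000  # dollars
per_gpu_hour = cost_per_day / gpus / 24
print(round(per_gpu_hour, 2))  # -> 4.17 dollars per GPU-hour
```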
Nathan Lambert
(00:53:19)
So that’s kind of a big thing, and then it’s like, if scaling is actually giving you a better model, is it going to be financially worth it? And I think we’ll slowly push it out as AI solves more compelling tasks—like the likes of Claude Opus 4.5 making Claude Code just work for things. I launched this project called the ATOM Project, which is American Truly Open Models, in July, and that was like a true vibe-coded website; I have a job that makes plots and stuff. Then I came back to refresh it in the last few weeks, and Claude Opus 4.5, versus whatever model was available at the time, just crushed all the issues that it had from building in June and July. It might be a bigger model. There are a lot of things that go into this, but there’s still progress coming.
Lex Fridman
(00:54:04)
So what you’re speaking to is the nuance of the y-axis of the scaling laws—that the way it’s experienced versus on a benchmark, the actual intelligence might be different. But still, your intuition about pre-training: if you scale the size of compute, will the models get better? Not whether it’s financially viable, but just from the law aspect of it, do you think the models will get smarter?
Nathan Lambert
(00:54:28)
Yeah. And this sometimes sounds almost delusional when leadership at AI companies says it, but they’re like, “It’s held for 13 orders of magnitude of compute; why would it ever end?” So I think fundamentally it is pretty unlikely to stop. It’s just that eventually we’re not even going to be able to test the bigger scales because of all the problems that come with more compute. I think there’s a lot of talk about how 2026 is a year when very large NVIDIA Blackwell compute clusters—like gigawatt-scale facilities—are coming online. And these were all contracts for power and data centers that were signed and sought out in 2022 and 2023, before or right after ChatGPT.
Nathan Lambert
(00:55:13)
So it took this two-to-three-year lead time to build these bigger clusters to train the models, while there’s obviously immense interest in building even more data centers than that. So that is kind of the crux that people are saying: these new clusters are coming. The labs are going to have more compute for training. They’re going to utilize this, but it’s not a given. I’ve seen so much progress that I expect it, and I expect a little bit bigger models. I would say it’s more like we’ll see a $2,000 subscription this year; we’ve already seen $200 subscriptions. It’s like that could 10x again, and these are the kind of things that could come—and they’re all downstream of a bigger model that offers just a little bit more of a cutting edge.
Lex Fridman
(00:55:53)
So, it’s reported that xAI is going to hit that one-gigawatt scale early ’26, and a full two gigawatts by year end. How do you think they’ll utilize that in the context of scaling laws? Is a lot of that inference? Is a lot of that training?
Nathan Lambert
(00:56:12)
It ends up being all of the above. I think that all of your decisions when you’re training a model come back to pre-training. So if you’re going to scale RL on a model, you still need to decide on your architecture that enables this. We were talking about other architectures and using different types of attention. We’re also talking about Mixture of Experts models. The sparse nature of MoE models makes it much more efficient to do generation, which becomes a big part of post-training, and you need to have your architecture ready so that you can actually scale up this compute. I still think most of the compute is going in at pre-training. Because you can still make a model better, you still want to go and revisit this.
Nathan Lambert
(00:56:53)
You still want the best base model that you can. And in a few years that’ll saturate and the RL compute will just go longer.
Lex Fridman
(00:57:00)
Are there people who disagree with you that say basically pre-training is dead? That it’s all about scaling inference, scaling post-training, scaling context, continual learning, and scaling synthetic data?
Nathan Lambert
(00:57:15)
People vibe that way and describe it in that way, but I think it’s not the practice that is happening.
Lex Fridman
(00:57:19)
It’s just the general vibe of people saying this thing is dead—
Nathan Lambert
(00:57:21)
The excitement is elsewhere; the low-hanging fruit in RL is elsewhere. For example, we released our model in November. Every company has deadlines. Our deadline was like November 20th, and for that, our RL run was five days, which compared to 2024 is a very long time to just be doing post-training on a model of about 30 billion parameters. It’s not a big model. And then in December, we had another release, which was just letting the RL run for another three and a half weeks, and the model got notably better, so we released it. And that’s a big amount of time to allocate to something that is going to be your peak for the year. So it’s like—
Lex Fridman
(00:57:57)
The reasoning is—
Nathan Lambert
(00:57:58)
There’s these types of decisions that happen when they’re training a model where they just can’t leave it forever. You have to keep pulling in the improvements you have from your researchers. So you redo pre-training, you’ll do this post-training for a month, but then you need to give it to your users. You need to do safety testing. I think there’s a lot in place that reinforces this cycle of just keep updating the models. There’s things to improve. You get a new compute cluster that lets you do something maybe more stably or faster. You hear a lot about Blackwell having rollout issues, where at AI2 most of the models we’re pre-training are on like 1,000 to 2,000 GPUs.
Nathan Lambert
(00:58:36)
But when you’re pre-training on 10,000 or 100,000 GPUs, you hit very different failures. GPUs are known to break in weird ways, and doing a 100,000-GPU run, you’re pretty much guaranteed to always have at least one GPU that is down. You need to have your training code handle that redundancy, which is just a very different problem. Whereas what we’re doing, like, “Oh, I’m playing with post-training on a DGX Spark,” or people learning ML: what they’re battling to train these biggest models is massive distributed scale, and it’s very different. But that’s a systems problem—
Nathan Lambert
(00:59:11)
…in order to enable the scaling laws, especially at pre-training. You need all of these GPUs at once. When we shift to reinforcement learning, it actually lends itself to heterogeneous compute because you have many copies of the model. To do a primer for language model reinforcement learning, what you’re doing is you have two sets of GPUs. One you can call the actor and one you call the learner. The learner is where your actual reinforcement learning updates happen. These are traditionally policy gradient algorithms. Proximal Policy Optimization, PPO, and Group Relative Policy Optimization, GRPO, are the two popular classes.
Nathan Lambert
(00:59:50)
On the other side, you’re going to have actors which are generating completions, and these completions are the things that you’re going to grade. Reinforcement learning is all about optimizing reward. In practice, you can have a lot of different actors in different parts of the world doing different types of problems, and then you send it back to this highly networked compute cluster to do this actual learning, where you take the gradients and you need to have a tightly meshed network where you can do different types of parallelism and spread out your model for efficient training. Every different type of training and serving has these considerations you need to scale.
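A toy sketch of the group-relative advantage computation at the heart of GRPO (illustrative only; a real implementation batches this over many prompts on the actors and feeds the advantages into a policy-gradient loss on the learner GPUs):

```python
# Toy version of the group-relative advantage used by GRPO: grade a
# group of completions for one prompt, then normalize each reward
# against the group's mean and standard deviation.
def grpo_advantages(rewards, eps=1e-8):
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = (var + eps) ** 0.5
    return [(r - mean) / std for r in rewards]

# Four completions graded by a verifier (1.0 = solved, 0.0 = failed).
rewards = [1.0, 0.0, 0.0, 1.0]
print([round(a, 2) for a in grpo_advantages(rewards)])  # -> [1.0, -1.0, -1.0, 1.0]
```

The group baseline is what lets GRPO drop PPO’s separate value network: completions better than their own group get pushed up, the rest get pushed down.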
Nathan Lambert
(01:00:27)
We talked about pre-training, we talked about RL, and then inference time scaling is: how do you serve a model that’s thinking for an hour to 100 million users? I don’t really know about that, but I know that’s a hard problem. In order to give people this intelligence, there’s all these systems problems, and we need more compute and you need more stable compute to do it.
Lex Fridman
(01:00:46)
But you’re bullish on all of these kinds of scaling is what I’m hearing. On the inference, on the reasoning, even on the pre-training?
Sebastian Raschka
(01:00:54)
Yeah, so that’s a big can of worms, but there are basically two knobs: training and inference scaling, where you can get gains. In a world where we had infinite compute resources, you’d want to do all of them. You have training, you have inference scaling, and training is like a hierarchy: pre-training, mid-training, and post-training. Changing the model size, more training data, training a bigger model—it gives you more knowledge. Then the model is a better base model, or what we still call a foundation model, and it unlocks capabilities. But you don’t necessarily have the model be able to solve your most complex tasks—
Sebastian Raschka
(01:01:34)
…tasks during pre-training or after pre-training. You still have these other unlock phases, mid-training or post-training with RL, that unlock capabilities the model has in terms of knowledge from pre-training. And I think, sure, if you do more pre-training, you get a better base model that you can unlock later. But like Nathan said, it just becomes too expensive. We don’t have infinite compute, so you have to decide: do I want to spend that compute on making the model larger? It’s a trade-off. In an ideal world, you want to do all of them. And I think in that sense, scaling is still pretty much alive.
Sebastian Raschka
(01:02:08)
You would still get a better model, but like we saw with Claude 4.5, it’s just not worth it. I mean, because you can unlock more performance with other techniques at that moment, especially if you look at inference scaling. That’s one of the biggest gains this year with o1, where it took a smaller model further than pre-training a larger model like Claude 4.5. So, I wouldn’t say pre-training scaling is dead; it’s just that there are other more attractive ways to scale right now. But at some point, you will still want to make some progress on the pre-training. The thing to consider is where you want to spend your money.
Sebastian Raschka
(01:02:47)
If you spend it more on pre-training, it’s a fixed cost. You train the model, and then it has this capability forever; you can always use it. With inference scaling, you don’t spend money during training; you spend money later, per query, and then it’s about the math. How long is my model going to be on the market if I replace it in half a year? Maybe it’s not worth spending 5 million, 10 million, or 100 million dollars on training it longer. Maybe I will just do more inference scaling and get the performance from there; maybe it costs me 2 million in terms of user queries. It becomes a question of how many users you have and doing the math. I think that’s also where ChatGPT is in an interesting position.
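The back-of-envelope math here can be sketched directly; all numbers below are made up for illustration:

```python
# Back-of-envelope version of the trade-off being described: a one-time
# extra pre-training spend versus a recurring per-query inference spend.
# All numbers are made up for illustration.
extra_pretraining_cost = 10e6            # dollars, one-time
extra_inference_cost_per_query = 0.002   # dollars per query of longer thinking

def breakeven_queries(train_cost, per_query):
    # Number of lifetime queries at which the two options cost the same.
    return train_cost / per_query

print(f"{breakeven_queries(extra_pretraining_cost, extra_inference_cost_per_query):.2e}")
# -> 5.00e+09: below ~5 billion lifetime queries, paying per query wins
```

Which side of the break-even you land on depends on user count and how soon the model gets replaced, which is exactly the calculation being described.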
Sebastian Raschka
(01:03:27)
I think they have a lot of users where they need to go a bit cheaper, where they have that GPT-5 model that is a bit smaller. For other companies, their customers have other trade-offs. For example, there were the math problems or the Math Olympiad where they had a proprietary model, and I’m pretty sure it’s just a model that has been fine-tuned a little bit more, but most of it was inference scaling to achieve peak performance in certain tasks where you don’t need that all the time. But yeah, long story short, I do think pre-training, mid-training, post-training, and inference scaling are all still things you want to do. At the moment, this year, it’s finding the right ratio that gives you the best bang for the buck, basically.

How AI is trained: Pre-training, Mid-training, and Post-training

Lex Fridman
(01:04:13)
I think this might be a good place to define pre-training, mid-training, and post-training.
Sebastian Raschka
(01:04:18)
So, pre-training is the classic training, one next-token prediction at a time. You have a big corpus of data. Nathan probably also has very interesting insights there because of OLMo 3; a big portion of the paper focuses on the right data mix. So, pre-training is essentially just training with a cross-entropy loss on next-token prediction over a vast corpus of internet data, books, papers, and so forth. It has changed a little bit over the years in the sense that people used to throw in everything they could. Now, it’s not just raw data. It’s also synthetic data, where people rephrase certain things. So synthetic data doesn’t necessarily mean purely AI-made-up data.
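A minimal sketch of that next-token cross-entropy loss on a toy four-token vocabulary (in real pre-training a transformer produces the logits, and this loss is averaged over trillions of tokens):

```python
import math

# Minimal next-token cross-entropy on a toy four-token vocabulary.
def cross_entropy(logits, target):
    # Numerically stable softmax, then negative log-likelihood of the
    # target token id.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return -math.log(exps[target] / z)

# Model scores over the toy vocabulary for the next position; token
# id 1 is the "correct" next token in this made-up example.
logits = [0.1, 2.0, -1.0, 0.5]
loss = cross_entropy(logits, target=1)
print(round(loss, 3))  # -> 0.352
```

The held-out version of exactly this quantity is the y-axis of the scaling laws discussed earlier.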
Sebastian Raschka
(01:04:58)
It’s also taking something from a Wikipedia article and then rephrasing it as a Q&A question or summarizing it, rewording it, and making better data that way. I think of it like with humans, if someone reads a book compared to a messy—no offense, but like—Reddit post or something like that. I do think you learn—no offense, but I think—
Lex Fridman
(01:05:25)
There’s going to be a post about this, Sebastian.
Nathan Lambert
(01:05:28)
Some Reddit data is very coveted and excellent for training. You just have to filter it.
Sebastian Raschka
(01:05:33)
And I think that’s the idea. I think it’s like if someone took that and rephrases it in a, let’s say, more concise and structured way— I think it’s higher quality data that gets the LLM maybe the same—you get the same LLM out of it at the end, but it gets there faster. It trains faster because if the grammar and the punctuation are correct, it already learns the correct way versus getting information from a messy way and then learning later how to correct that. So, I think that is how pre-training evolved and why scaling still works; it’s not just about the amount of data, it’s also the tricks to make that data better for you. And then mid-training is… I mean, it used to be called pre-training.
Sebastian Raschka
(01:06:21)
I think it’s called mid-training because it was awkward to have pre-training and post-training but nothing in the middle, right? It sounds a bit weird: you have pre-training and post-training, but what’s the actual training? So, mid-training is usually similar to pre-training, but it’s a bit more specialized. It’s the same algorithm, but what you do is focus, for example, on long-context documents. The reason you don’t do that during pre-training is that you don’t have that many long-context documents, so we have a specific phase for it. And one problem of LLMs is still that a neural network has the problem of catastrophic forgetting.
Sebastian Raschka
(01:06:56)
So, you teach it something, it forgets other things. It’s not 100% forgetting, but there’s no free lunch. It’s also the same with humans. If you ask me some math I learned 10 years ago, I wouldn’t know; I would have to look at it again.
Lex Fridman
(01:07:09)
Nathan was actually saying that he’s consuming so much content that there’s a catastrophic forgetting issue.
Nathan Lambert
(01:07:14)
Yeah, I’m trying to learn so much about AI, and it’s like when I was learning about pre-training parallelism, I’m like, “I lost something and I don’t know what it was.”
Sebastian Raschka
(01:07:22)
I don’t want to anthropomorphize LLMs, but I think it’s the same in terms of how humans learn. Quantity is not always better, because it’s about being selective. Mid-training is being selective about quality content at the end, so the last thing the LLM has seen is the quality stuff. And then post-training is all the fine-tuning: supervised fine-tuning, DPO, RLVR, reinforcement learning from human feedback, and so forth. So, the refinement stages. And it’s also interesting, the cost thing, right? Pre-training, you spend a lot of money on that right now. RL, a bit less. With RL, you don’t really teach it knowledge; it’s more like unlocking the knowledge.
Sebastian Raschka
(01:08:03)
It’s more like skill learning, like how to solve problems with the knowledge that it has from pre-training. There are actually three papers this year, or last year, 2025, on RL for pre-training. But I don’t think anyone does that in production.
Nathan Lambert
(01:08:17)
Toy, toy examples for now.
Sebastian Raschka
(01:08:18)
Toy examples, right. But to generalize, RL post-training is more like the skill unlock, where pre-training is like soaking up the knowledge, essentially.
Nathan Lambert
(01:08:26)
A few things that could be helpful for people. A lot of people think of synthetic data as being bad for training models. You mentioned that DeepSeek had an OCR—Optical Character Recognition—paper. A lot of labs did; AI2 had one, others had multiple. And the reason each of these labs has these is that there are vast amounts of PDFs and other digital documents on the web in formats where the text isn’t easily extracted. So you use these, like DeepSeek OCR or what we called OLMo OCR, to extract what can be trillions of tokens of candidate data. Pre-training dataset sizes are measured in trillions of tokens.
Nathan Lambert
(01:09:10)
Smaller models from researchers can be something like 5 to 10 trillion. Qwen is documented going up to like 50 trillion, and there’s rumors that these closed labs can go to 100 trillion tokens. Getting this potential data is a very big funnel, and the data you actually train the model on is a small percentage of this. This character recognition data would be described as synthetic data for pre-training in a lab. And then there’s also the fact that ChatGPT now gives wonderful answers, and you can train on those best answers; that’s synthetic data. It’s very different than the early ChatGPT hallucinations data.
Sebastian Raschka
(01:09:48)
One interesting question is, if I recall correctly, OLMo 3 was trained with less data than specifically some other open-weight models, maybe even OLMo 2. But you still got better performance, and that might be one of the examples of how the data helped.
Nathan Lambert
(01:10:01)
It’s mostly down to data quality. I think if we had more compute, we would train for longer; I think we’d ultimately see that as something we would want to do. Especially with big models, you need more compute, because big models can absorb more from data, and you get more benefit out of it. It’s like one of those logarithmic graphs: a small model will level off sooner as you keep adding tokens, and bigger models need more. But mostly, we aren’t training that big of models right now at AI2, and getting the highest quality data we can is the natural starting point.
Lex Fridman
(01:10:38)
Is there something to be said about the topic of data quality? Is there some low-hanging fruit there still where the quality could be improved?
Nathan Lambert
(01:10:46)
It’s like turning the crank. Historically, in the open, there’s been a canonical best pre-training dataset that has moved around depending on who has the most recent or best effort. AI2’s Dolma was very early with the first OLMo, and Hugging Face had FineWeb. And there’s the DCLM project, which stands for DataComp Language Model; there’s been DataComp for other machine learning projects, and they had a very strong dataset. A lot of it is the internet becoming fairly closed off, so we have Common Crawl, which I think is hundreds of trillions of tokens, and you filter it.
Nathan Lambert
(01:11:21)
And it looks like a lot of scientific work where you’re training classifiers and making decisions based on how you prune down this dataset into the highest quality stuff and the stuff that suits your tasks. Previously, language models were tested a lot more on knowledge and conversational things, but now they’re expected to do math and code. To train a reasoning model, you need to remix your whole dataset. And there are actually some wonderful scientific methods here where you can take your gigantic dataset and sample a lot of really tiny things from different sources, like GitHub, Stack Exchange, Reddit, or Wikipedia.
Nathan Lambert
(01:11:56)
You can sample small things from them, train small models on each of these mixes, and measure their performance on your evaluations. And you can just do basic linear regression, and it’s like, “Here’s your optimal dataset.” But if your evaluations change, your dataset changes a lot. So a lot of OLMo 3 was adding new sources for reasoning to be better at math and code, and then you do this mixing procedure and it gives you the answer. I think a lot of that’s happened at labs this year; there are new hot things, whether it’s coding environments or web navigation, and you just need to bring in new data and change your whole pre-training so that your post-training can work better. And that’s like the constant re-evolution and the re-determining of what they care about for their models.
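A hypothetical sketch of that mixing procedure with made-up numbers: train tiny proxy models on a few source mixtures, record an eval score for each, fit a regression, and pick the mixture the fit predicts is best:

```python
# Hypothetical sketch of data-mix regression. The mixes and scores
# below are made up; real pipelines train many small proxy models
# and regress over many sources and evaluations.
mixes = [  # fractions of (web, code) data in each proxy run
    (0.9, 0.1), (0.7, 0.3), (0.5, 0.5), (0.3, 0.7),
]
scores = [0.52, 0.58, 0.64, 0.70]  # made-up eval score per proxy run

# With fractions summing to 1, score = w_web + (w_code - w_web) * code,
# so an ordinary least-squares fit on the code fraction is enough here.
xs = [m[1] for m in mixes]
n = len(xs)
mx, my = sum(xs) / n, sum(scores) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, scores)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx

# Score each candidate mixture under the fitted model and take the best.
candidates = [(w, 1 - w) for w in (0.8, 0.6, 0.4, 0.2)]
best = max(candidates, key=lambda m: intercept + slope * m[1])
print(best)  # -> (0.2, 0.8): the fit extrapolates toward more code data
```

As the passage notes, the fitted answer is only as good as the evaluations: change the evals and the optimal mixture changes with them.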
Lex Fridman
(01:12:35)
Are there fun anecdotes of what sources of data are particularly high quality that we wouldn’t expect? You mentioned Reddit sometimes can be a source.
Nathan Lambert
(01:12:45)
Reddit was very useful. I think that PDFs are definitely one.
Sebastian Raschka
(01:12:51)
Oh, especially arXiv.
Nathan Lambert
(01:12:52)
Yeah, AI2 has run Semantic Scholar for a long time, which is a competitor to Google Scholar with a lot more features. To do this, AI2 has found and scraped a lot of PDFs for openly accessible papers that might not be behind a certain publisher’s paywall—truly open scientific PDFs. If you sit on all of these and process them, you can get value out of it. I think a lot of that style of work was done by the frontier labs much earlier. You need to have a pretty skilled researcher who understands how things change models, and they bring it in and clean it; it’s a lot of labor.
Nathan Lambert
(01:13:34)
I think at a lot of frontier labs, when they scale researchers, a lot more goes into data. If you join a frontier lab and you want to have impact, the best way to do it is just find new data that’s better. The fancy, glamorous algorithmic things, like figuring out how to make o1, is like the sexiest thought for a scientist. It’s like, “Oh, I figured out how to scale RL.” There’s a group that did that, but I think most of the contributions are-
Lex Fridman
(01:13:58)
On the dataset.
Nathan Lambert
(01:13:58)
… “I’m gonna make the data better,” or, “I’m gonna make the infrastructure better so that everybody on my team can run experiments 5% faster.”
Sebastian Raschka
(01:14:04)
At the same time, I think it’s also one of the closest guarded secrets—what your training data is—for legal reasons. And so there’s also a lot of work that goes into hiding what your training data was, essentially, trying to get the model to not give away the sources because of those legal reasons.
Nathan Lambert
(01:14:19)
The other thing, to be complete, is that some people are trying to train on only licensed data, whereas Common Crawl is a scrape of the whole internet. If I host multiple websites, I’m happy to have them train language models, but I’m not explicitly licensing what governs it. Therefore, Common Crawl is largely unlicensed, which means your consent really hasn’t been provided for how to use the data. There’s another idea where you can train language models only on data that has been licensed explicitly so that the kind of governing contract is provided. I’m not sure if Apertus is the copyright thing or the license thing. I know that the reason they did it was for an EU compliance thing, where they wanted to make sure that their model fit one of those checks.
Sebastian Raschka
(01:15:01)
Mm-hmm. And on that note, there’s also the distinction between the licensing. Some people, like you said, just purchase the license. Let’s say they buy an Amazon Kindle book or a Manning book, and then use that in the training data; that is a gray zone because you paid for the content and you might want to train on it. But then there are also restrictions where even that shouldn’t be allowed. That is where it gets a bit fuzzy.
Sebastian Raschka
(01:15:28)
And I think that is still a hot topic right now. Big companies like OpenAI approached private companies for their proprietary data, and private companies are becoming more and more protective of their data because they know, “Okay, this is going to be my moat in a few years.” And I do think that’s the interesting question. If LLMs become more commoditized, and a lot of people learn about LLMs, there will be a lot more people able to train them. Of course, there are infrastructure challenges.
Sebastian Raschka
(01:16:00)
But if you think of big industries like pharmaceuticals, law, or finance, I do think they at some point will hire people from other frontier labs to build their in-house models on their proprietary data, which will be another unlock with pre-training that is currently not there. Because even if you wanted to, you can’t get that data—you can’t get access to clinical trials most of the time and these types of things. So I do think scaling in that sense might still be pretty much alive if you look at domain-specific applications, because right now we are just looking at general-purpose LLMs like ChatGPT, Anthropic, and so forth. They are just general purpose. They’re not even scratching the surface of what an LLM can do if it is really specifically trained and designed for a specific task.
Nathan Lambert
(01:16:47)
I think on the data thing, this is one of those things where it happened in 2025 and we totally forget it: Anthropic lost in court and owed $1.5 billion to authors. Anthropic, I think, bought thousands of books and scanned them, and was cleared legally for that because they bought the books, and that is going through the system. On the other side, they also torrented some books, and I think the torrenting was the path where the court said they were liable to pay these billions of dollars to authors, which is just such a mind-boggling lawsuit that kind of just came and went. That is so much money from the VC ecosystem.
Lex Fridman
(01:17:22)
These are court cases that will define the future of human civilization because it’s clear that data drives a lot of this, and there’s this very complicated human tension. I mean, you can empathize. You’re both authors. And there’s some degree to which, I mean, you put your heart and soul and your sweat and tears into the writing that you do. It feels a little bit like theft for somebody to train on your data without giving you credit.
Sebastian Raschka
(01:17:49)
And there are, like Nathan said, also two layers to it. Someone might buy the book and then train on it, which could be argued fair or not fair, but then there are the straight-up companies who use pirated books where they’re not even compensating the author. That is, I think, where people got a bit angry about it specifically, I would say.
Lex Fridman
(01:18:06)
Yeah, but there has to be some kind of compensation scheme. This is moving towards something like what Spotify streaming originally did for music. You know, what does that compensation look like? You have to define those kinds of models. You have to think through all of that. One other thing I think people are generally curious about, and I’d love to get your thoughts: as LLMs are used more and more, if you look at even arXiv or GitHub, more and more of the data is generated by LLMs. What do you do in that kind of world? How big of a problem is that?
Nathan Lambert
(01:18:38)
The largest problem is the infrastructure and systems, but from an AI point of view, it’s kind of inevitable.
Lex Fridman
(01:18:45)
So it’s basically LLM-generated data that’s curated by humans essentially, right?
Nathan Lambert
(01:18:49)
Yes, and I think that a lot of open source contributors are legitimately burning out. If you have a popular open source repo, somebody’s like, “Oh, I want to do open source AI. It’s good for my career,” and they just vibe code something and throw it in. You might get more of this than I do.
Sebastian Raschka
(01:19:05)
Yeah, so I actually have a case study here. I have a repository called mlxtend that I developed as a student, around 10 or 15 years ago, and it is still a reasonably popular library for certain algorithms, especially frequent pattern mining. Recently, two or three people submitted a lot of PRs in a very short amount of time. I do think LLMs have been involved in submitting these PRs. For me as the maintainer, there are two things. First, I’m a bit overwhelmed; I don’t have time to read through it all, especially since it’s an older library that is not a priority for me. At the same time, I also kind of appreciate it, because I think something people forget is that it’s not just using the LLM.
Sebastian Raschka
(01:19:46)
There’s still a human layer that verifies something, and that is in a sense also how data is labeled, right? One of the most expensive things is getting labeled data for RLHF (Reinforcement Learning from Human Feedback) phases. This is kind of like that, where it goes through phases and then you actually get higher quality data out of it. So I don’t mind it, in a sense. It can feel overwhelming, but I do think there is also value in it.
Lex Fridman
(01:20:11)
It feels like there’s a fundamental difference between raw LLM-generated data and LLM-generated data with a human in the loop who does some kind of verification, even if that verification covers only a small percentage of the lines of code.
Sebastian Raschka
(01:20:25)
I think this goes with anything where people think, “Oh, yeah. I can just use an LLM to learn about XYZ,” which is true. You can, but there might be a person who is an expert who might have used an LLM to write specific code. There is this human work that went into it to make it nice and throwing out the not-so-nice parts to pre-digest it for you, and that saves you time. And I think that’s the value-add where you have someone filtering things or even using the LLMs correctly. I think this is still labor that you get for free. For example, when you read a Substack article.
Sebastian Raschka
(01:21:05)
I could maybe ask an LLM to give me opinions on that, but I wouldn’t even know what to ask. And I think there is still value in reading that article compared to me going to the LLM because you are the expert. You select what knowledge is actually spot on and should be included, and you give me this executive summary. This is a huge value-add because now I don’t have to waste three to five hours to go through this myself and maybe get some incorrect information. And so I think that’s also where the future still is for writers, even though there are LLMs that can save you time.
Lex Fridman
(01:21:43)
It’s kind of fascinating to actually watch—and I’m sure you guys do this, but for me to look at the difference between a summary and the original content. Even if it’s a page-long summary of page-long content, it’s interesting to see how the LLM-based summary takes the edge off. What is the signal it removes from the thing?
Nathan Lambert
(01:22:07)
The voice is what I talk about a lot.
Lex Fridman
(01:22:09)
Voice? Well, voice… I would love to hear what you mean by voice, that’s really powerful, but sometimes there are literally insights. In removing an insight, you’re actually fundamentally changing the meaning of the thing. So I’m continuously disappointed by how bad LLMs are at really getting to the core insights, which is what a great summary does. Even when I use extensive, extremely elaborate prompts where I’m really trying to dig for the insights, it’s still not quite there, which… I mean, that’s a whole deep philosophical question about what human knowledge and wisdom are and what it means to be insightful. But when you talk about the voice, what do you mean?
Nathan Lambert
(01:22:52)
So when I write, I think a lot of what I’m trying to do is take what you think as a researcher, which is very raw. A researcher is trying to encapsulate an idea at the frontier of their understanding, and they’re trying to put what is a feeling into words. And I think that in my writing, I try to do this, which makes it come across as raw but also high-information in a way that some people will get and some won’t. And that’s kind of the nature of research. And I think this is something that language models don’t do well. Particularly, they’re all trained with this reinforcement learning from human feedback which is designed to take feedback from a lot of people and, in a way, average how the model behaves from this.
Nathan Lambert
(01:23:30)
And I think that it’s going to be hard for a model to be very incisive when there’s that sort of filter in it. This is a wonderful fundamental problem for researchers in RLHF: this provides so much utility in making the models better, but also the problem formulation has this knot in it that you can’t get past. These language models don’t have this prior in their deep expression that they’re trying to get at. I don’t think it’s impossible to do. I think there are stories of models that really shock people. Like, I would love to have tried Bing Sydney—did that have more voice? Because it would so often go off the rails on people and affect…
Nathan Lambert
(01:24:13)
And the ways it went off the rails are, historically, obviously scary—like telling a reporter to leave his wife. That is a crazy model to potentially put into general adoption. But that’s kind of the trade-off: is this RLHF process, in some ways, adding limitations?
Lex Fridman
(01:24:28)
That’s a terrifying place to be as one of these frontier labs and companies because millions of people are using them.
Nathan Lambert
(01:24:35)
There was a lot of backlash last year with GPT-4o getting removed. I’ve personally never used the model, but I’ve talked to people at OpenAI who get emails from users that might be detecting subtle differences in the deployments in the middle of the night. And they email them and say, “My friend is different.” They find these employees’ emails and send them things because they are so attached to what is a set of model weights and a configuration that is deployed to the users. We see this with TikTok. I don’t use TikTok, but supposedly, in five minutes, the algorithm gets you. It’s locked in. And those are language models doing recommendations.
Nathan Lambert
(01:25:15)
Like, I think there are ways that you can do this with a language model where, within five minutes of chatting with it, the model just gets you. And that is something that people aren’t really ready for. Don’t give that to kids, at least until we know what’s happening.
Lex Fridman
(01:25:30)
But there’s also going to be this mechanism… what’s going to happen with these LLMs as they’re used more and more. Unfortunately, the nature of the human condition is such that people commit suicide. And what journalists will do is report extensively on the people who commit suicide, and they will very likely link it to the LLMs because they have the data about the conversations. If you’re really struggling, if you’re depressed, if you’re thinking about suicide, you’re probably going to talk to LLMs about it. And so what journalists will do is say, “The suicide was committed because of the LLM.” And that’s going to lead to the companies, because of legal issues and so on, more and more taking the edge off of the LLM.
Lex Fridman
(01:26:13)
So it’s going to be as generic as possible. It’s so difficult to operate in this space because, of course, you don’t want an LLM to cause harm to humans at that level, but also, this is the nature of the human experience—to have a rich conversation, a fulfilling conversation, one that challenges you and from which you grow. You need that edge. And that’s something extremely difficult for AI researchers on the RLHF front to actually have to solve because you’re actually dealing with the human condition.
Nathan Lambert
(01:26:47)
A lot of researchers at these companies are so well-motivated. Anthropic and OpenAI are culturally so wanting to do good for the world through this. And it’s such a… I’m like, “Ooh, I don’t want to work on this,” because, on the one hand, a lot of people see AI as a health ally, somebody they can talk to about their health confidentially, but then it bleeds all the way into talking about mental health. It’s heartbreaking that this might be the thing where somebody goes over the edge, but other people might be saved. And there are things where, as a researcher training models, I don’t want to train image generation models and release them openly, because I don’t want to enable somebody to have a tool on their laptop that can harm other people.
Nathan Lambert
(01:27:34)
I don’t have the infrastructure in my company to do that safely. There are a lot of areas like this where it needs people who will approach it with complexity and the conviction that it’s just such a hard problem.
Lex Fridman
(01:27:47)
But also, we as a society and as users of these technologies need to make sure that we’re having the complicated conversation about it versus just fearmongering that big tech is causing harm to humans or stealing your data. It’s more complicated than that. And you’re right, there’s a very large number of people inside these companies, many of whom you know and many of whom I know, that deeply care about helping people. They are considering the full human experience of people from across the world, not just Silicon Valley—what their needs are and what that means. It’s really difficult to design this one system that is able to help all these different kinds of people across different age groups, cultures, and mental conditions.
Nathan Lambert
(01:28:31)
I wish that the timing of AI was different regarding the relationship of big tech to the average person. Big tech’s reputation is so low, and because AI is so expensive, it’s inevitably going to be a big tech thing. It takes so many resources, and people say the US is, quote-unquote, “betting the economy on AI” with this build-out. To have these be intertwined at the same time makes for such a hard communication environment. It would be good for me to go talk to more people in the world who hate big tech and see AI as a continuation of that.
Lex Fridman
(01:29:02)
One of the things you actually recommend, one of the antidotes that you talk about, is to find agency in this whole system, as opposed to sitting back in a powerless way and consuming the AI slop as it rapidly takes over the internet. Find agency by using AI to build things—build apps, build… One, that actually helps you build intuition, but two, it’s empowering because you can understand how it works and what the weaknesses are. It gives your voice power to say, “This is bad use of the technology, and this is good use of technology.” You’re more plugged into the system then, so you can understand it better and steer it better as a consumer.
Sebastian Raschka
(01:29:48)
I think that’s a good point you brought up about agency. Instead of ignoring it and saying, “Okay, I’m not going to use it,” I think it’s probably long-term healthier to say, “Okay, it’s out there. I can’t put it back.” It’s like the internet and computers when they first came out. How do I make the best use of it, and how does it help me up-level myself? The one thing I worry about here, though, is if you just fully use it for something you love to do, the thing you love to do is no longer there. That could potentially lead to burnout. For example, if I use an LLM to do all my coding for me, now there’s no coding; I’m just managing something that is coding for me.
Sebastian Raschka
(01:30:24)
Two years later, let’s say, if I just do that eight hours a day—having something code for me—do I still feel fulfilled? Is this hurting me in terms of being excited about my job and what I’m doing? Am I still proud to build something?
Lex Fridman
(01:30:43)
On that topic of enjoyment, it’s quite interesting. We should just throw this in there, that there’s this recent survey of 791 professional developers—professional meaning 10-plus years of experience.
Nathan Lambert
(01:30:55)
That’s a long time. As a junior developer?
Lex Fridman
(01:31:01)
Yeah, in this day and age. The results are surprising on many fronts. They break it down by junior and senior developers, and it shows that both groups use AI-generated code in the code they ship. This is not just for fun or learning; this is code they ship. Most of them use it for around 50% or more. What’s interesting is that senior developers are much more likely to be in the category where over 50% of the shipped code is AI-generated. But you don’t want AI to take away the thing you love. I think this speaks to my experience. These particular results show that about 80% of people find it either somewhat more enjoyable or significantly more enjoyable to use AI as part of their work.
Sebastian Raschka
(01:31:59)
I think it depends on the task. From my personal usage, for example, I have a website where I sometimes tweak things. I personally don’t enjoy this, so if the AI can help me implement something on my website, I’m all for it. It’s great. But at the same time, when I solve a complex problem—if there’s a bug, and I hunt this bug and find it—it’s the best feeling in the world. You get so much joy. But now, if you don’t even think about the bug and just go directly to the LLM, you never have that kind of feeling, right?
Sebastian Raschka
(01:32:38)
But then there could be a middle ground where you try it yourself, you can’t find it, you use the LLM, and then you don’t get frustrated because it helps you move on to something that you enjoy. Looking at these statistics, what is not factored in is that it’s averaging over all different scenarios. We don’t know if it’s for the core task or for something mundane that people would not have enjoyed otherwise. In a sense, AI is really great for doing mundane things that take a lot of work.
Sebastian Raschka
(01:33:09)
For example, my wife has a podcast for book club discussions, and she was transferring the show notes from Spotify to YouTube, and the links somehow broke. She had some episodes with 100 links or something, and it would have been really painful to go in there and fix each link manually. So I suggested, “Hey, let’s try ChatGPT.” We copied the text into ChatGPT, and it fixed them. Instead of two hours going from link to link, it made that work seamless. I think everyone has a use case where AI is useful for something like that—something that would be really boring and mundane.
Lex Fridman
(01:33:51)
For me personally, since we’re talking about coding, a lot of the enjoyment comes from the Cursor side, the Claude Code side, where I have a pair programmer. It’s less lonely. You made debugging sound like this great joy. No, I would say debugging is like a drink of water after you’ve been going through a desert for days. You skip the whole desert part where you’re suffering. Sometimes it’s nice to have a friend who can’t really find the bug, but can give you some intuition about the code, and together you go through the desert and find that drink of water. For me, maybe it speaks to the loneliness of the programming experience. That is a source of joy.
Sebastian Raschka
(01:34:48)
It’s maybe also related to delayed gratification. I’m a person who even as a kid liked the idea of Christmas presents better than actually getting them. I would look forward to the day, but then it’s over and I’m disappointed. Maybe it’s like food—it tastes better when you’re really hungry. With debugging, it’s not always great; it’s often frustrating, but if you can solve it, then it’s great. But there’s also a Goldilocks zone where if it’s too hard, then you’re wasting your time. I think another challenge, though, is: how will people learn?
Sebastian Raschka
(01:35:33)
The chart we looked at showed that more senior developers are shipping AI-generated code than the junior ones. I think it’s interesting because intuitively you would think it’s the junior developers because they don’t know how to do the thing yet. It could mean the AI is not good enough yet to solve those tasks, but it could also mean experts are more effective at using it—they know how to review the code and they trust it more. One issue in society in the future will be: how do you become an expert if you never try to do the thing yourself?
Sebastian Raschka
(01:36:12)
I learned by trying things myself. With math textbooks, if you look at the solutions, you learn something, but you learn better if you try first and then appreciate the solution because you know how to put it into your mental framework. If LLMs are here all the time, would you actually go through the length of struggling? Would you be willing to struggle? Struggle is not nice, but if you use the LLM to do everything, at some point you will never really take the next step and you won’t get that unlock that you get as an expert using an LLM.
Sebastian Raschka
(01:36:53)
So, I think there’s a Goldilocks sweet spot where maybe the trick is you make dedicated offline time where you study two hours a day, and the rest of the day you use LLMs. I think it’s important for people to still invest in themselves, in my opinion, and not just LLM everything.

Post-training explained: Exciting new research directions in LLMs

Lex Fridman
(01:37:10)
Yeah, there is a sense that we, together as a civilization, each individually have to find that Goldilocks zone. And in the programming context as developers. Now, we’ve had this fascinating conversation that started with pre-training and mid-training. Let’s get to post-training. There’s a lot of fun stuff in post-training. So, what are some of the interesting ideas in post-training?
Nathan Lambert
(01:37:31)
The biggest one from 2025 is this reinforcement learning with verifiable rewards, RLVR. You can scale up the training there, which means doing a lot of this kind of iterative generate-grade loop, and that lets the models learn interesting behaviors on both the tool use and software side. This could be searching, or running commands on their own and seeing the outputs. That training also enables inference-time scaling very nicely. It just turned out that this paradigm was very nicely linked, where this kind of RL training enables inference-time scaling, though inference-time scaling could have been found in different ways. So it was kind of this perfect storm where the models changed a lot, and the way that they’re trained is a major factor in doing so.
Nathan Lambert
(01:38:15)
And this has changed how people approach post-training dramatically.
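The simplest way to picture inference-time scaling is sample-and-select: spend more compute at inference by drawing several completions and keeping the best one. This is a minimal sketch; `generate` and `score` are hypothetical placeholders for a sampler and a verifier, not any lab's actual interface (reasoning models also scale differently, by thinking longer in one completion):

```python
import random

def best_of_n(generate, score, prompt, n=8, rng=random):
    """One simple form of inference-time scaling: sample n completions
    for the same prompt and keep the one the scorer likes best.
    More samples means more inference compute and, usually, a better answer."""
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)
```

With a verifier-style `score` (like the graders used in RLVR training), this turns extra inference compute directly into accuracy.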
Lex Fridman
(01:38:20)
Can you describe RLVR, popularized by DeepSeek R1? Can you describe how it works?
Nathan Lambert
(01:38:25)
Yeah. Fun fact, I was on the team that came up with the term RLVR, which is from our Tulu 3 work before DeepSeek. We don’t take a lot of credit for popularizing scaling RL, but, as an aside, one of the fun things academics get is the ability to name things and influence—
Nathan Lambert
(01:38:43)
—the discourse, because the closed labs can only say so much. One of the things you can do as an academic is, while you might not have the compute to train the model, you can frame things in a way that ends up being… I describe it as like a community can come together around this RLVR term, which is very fun. And then DeepSeek are the people that did the training breakthrough, which is, they scaled the reinforcement learning. They have the model generate answers and then grade the completion if it was right, and then that accuracy is your reward for reinforcement learning. So reinforcement learning is classically an agent that acts in an environment, and the environment gives it a state and a reward back, and you try to maximize this reward.
Nathan Lambert
(01:39:26)
In the case of language models, the reward is normally accuracy on a set of verifiable tasks, whether it’s math problems or coding tasks. It starts to get blurry with things like factual domains, which are also, in some ways, verifiable, or with constraints on your instructions, like ‘respond only with words that start with A.’ All of these things are verifiable in some way. The core idea is you find a lot more of these problems that are verifiable, and you let the model try each one many times while taking these RL gradient updates. The infrastructure evolved from reinforcement learning from human feedback, RLHF, where in that era the score they were trying to optimize was a learned reward model of aggregate human preferences.
Nathan Lambert
(01:40:13)
So you kind of changed the problem domains and that let the optimization go on to much bigger scales, which kind of kickstarted a major change in what the models can do and how people use them.
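The generate-and-grade loop described here can be sketched as a pair of toy reward functions, one for a math-style answer check and one for the "words that start with A" constraint. The function names, regexes, and partial-credit scheme are illustrative assumptions, not any lab's actual verifier:

```python
import re

def verifiable_reward(completion: str, gold_answer: str) -> float:
    """Toy RLVR-style grader: pull the last number out of the completion
    and compare it to the known answer. Reward is binary: 1.0 or 0.0."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == gold_answer else 0.0

def constraint_reward(completion: str, letter: str = "a") -> float:
    """Instruction-following check: fraction of words starting with `letter`.
    (A real verifier might demand all words comply; partial credit is an
    illustrative choice here.)"""
    words = re.findall(r"[A-Za-z]+", completion)
    if not words:
        return 0.0
    ok = sum(w.lower().startswith(letter) for w in words)
    return ok / len(words)
```

In RL training, a score like this replaces the learned reward model from the RLHF era: the grader is a program, so it never drifts the way a learned preference model can.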
Lex Fridman
(01:40:24)
What kind of domains is RLVR amenable to?
Nathan Lambert
(01:40:28)
Math and code are the famous ones, and then there’s a lot of work on what are called rubrics, which relates to a term people might have heard, LLM-as-a-judge. For each problem in my training dataset, I will have another language model, and I ask it, “What would a good answer to this problem look like?” Then you can try the problem over and over again and assign a score based on this rubric. So that’s not necessarily verifiable like a math or code domain, but this rubrics idea, and other scientific problems where things might be a little more vague, is where a lot of the attention is. People are trying to push this set of methods into these more open-ended domains so the models can learn a lot more.
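The rubric idea can be made concrete with a small sketch. Everything here is a stand-in: `make_rubric` would really be an API call to a judge LLM, and `toy_judge` is a keyword heuristic pretending to be one; only the structure (rubric items, averaged yes/no verdicts) reflects the scheme described above:

```python
def make_rubric(problem: str) -> list[str]:
    """Stand-in for asking a judge LLM 'what would a good answer look like?'
    Returns fixed criteria here; in practice this would be model-generated."""
    return [
        "states the final answer explicitly",
        "shows intermediate steps",
        "does not contradict itself",
    ]

def rubric_score(completion: str, rubric: list[str], judge) -> float:
    """Average the judge's yes/no verdict over each rubric item."""
    verdicts = [judge(completion, criterion) for criterion in rubric]
    return sum(verdicts) / len(rubric)

def toy_judge(completion: str, criterion: str) -> bool:
    """Keyword heuristics standing in for a grader LLM."""
    if "final answer" in criterion:
        return "answer" in completion.lower()
    if "intermediate" in criterion:
        return "step" in completion.lower()
    return True
```

The resulting score is a number between 0 and 1, so it can slot into the same RL loop as a verifiable reward even though the grading is soft.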
Sebastian Raschka
(01:41:11)
I think that’s called reinforcement learning with AI feedback, right?
Nathan Lambert
(01:41:14)
That’s the older term from it that was coined in Anthropic’s Constitutional AI paper. So a lot of these things come in cycles.
Sebastian Raschka
(01:41:21)
Also, just one step back for the RLVR. I think the interesting, beautiful thing here is that you ask the LLM a math question, you know the correct answer, and you let the LLM figure it out, but how it does it is… I mean, you don’t really constrain it much. There are some constraints you can add, like ‘use the same language’ or ‘don’t switch between Spanish and English.’ But let’s say you’re pretty much hands-off.
Sebastian Raschka
(01:41:44)
You only give the question and the answer, and then the LLM has the task to arrive at the right answer. But the beautiful thing here is what happens in practice: the LLM will do a step-by-step description, like how a student or a mathematician would derive the solution. It will use those steps and that helps the model to improve its own accuracy. And then, like you said, the inference scaling. Inference scaling loosely means spending more compute while using the LLM during inference, and here the inference scaling is that the model would use more tokens. In the DeepSeek R1 paper, they showed the longer they train the model, the longer the responses are.
Sebastian Raschka
(01:42:28)
They grow over time. They use more tokens, so it becomes more expensive for simple tasks, but these explanations help the model with accuracy. There are also a lot of papers showing that what the model explains does not necessarily have to be correct, or may even be unrelated to the answer, but for some reason the fact that it is explaining still helps the model. And, again, I don’t want to anthropomorphize these LLMs, but it’s kind of like how we humans operate, right? If there’s a complex math problem in a math class, you usually have a note paper and you do it step by step. You cross things out.
Sebastian Raschka
(01:43:03)
And the model also self-corrects, and that was, I think, the aha moment in the DeepSeek R1 paper. They called it the ‘aha moment’ because the model itself recognized it made a mistake and then said, “Ah, I did something wrong, let me try again.” I think that’s just so cool that this falls out of just giving it the correct answer and having it figure out how to do it—that it kind of does, in a sense, what a human would do. Although LLMs don’t think like humans, it’s a kind of interesting coincidence. And the nice side effect is it’s great for us humans to see these steps. It builds trust, and we can learn or double-check things.
Nathan Lambert
(01:43:40)
There’s a lot in here. There’s been a lot of debate this year about whether these aha moments are kind of fake, because in pre-training you have essentially seen the whole internet. So you have definitely seen people explaining their work, even verbally, like a transcript of a math lecture: “You try this, oh, I messed this up.” And what reinforcement learning—this RLVR—is very good at doing is amplifying these behaviors, because they’re very useful in enabling the model to think longer and to check its work. I agree that it is very beautiful that the model learns to amplify this in a way that is just so useful in making the final answers better.
Sebastian Raschka
(01:44:16)
I can give you a hands-on example. I was training the Qwen 3 base model with RLVR on MATH-500. The base model had an accuracy of about 15%. In just 50 steps, in a few minutes with RLVR, the model went from 15% to 50% accuracy. And you can’t tell me it’s learning anything fundamentally about math in—
Nathan Lambert
(01:44:38)
The Qwen example is weird because there have been two papers this year, one of which I was on, that talk about data contamination in Qwen—specifically that, in this special mid-training phase, which we can chime in on for a minute because it’s weird, they train on problems that are almost identical to MATH.
Sebastian Raschka
(01:44:53)
Exactly. And so you can see that basically the RL is not teaching the model any new knowledge about math. You can’t do that in 50 steps. So the knowledge is already there in the pre-training; you’re just unlocking it.
Nathan Lambert
(01:45:03)
I still disagree with the premise because there are a lot of weird complexities that you can’t prove. One of the things that points to weirdness is that if you take the Qwen 3 so-called base model—you could Google “math dataset Hugging Face” and take a problem—and put it into Qwen 3 base… all these math problems have words, so it would be like, “Alice has five apples and gives three to whoever,” and there are these word problems. The reason people are suspicious of these Qwen base models is that if you change the numbers but keep the words, Qwen will produce, without tools, a very high accuracy decimal representation—
Nathan Lambert
(01:45:43)
—of the answer, which means at some point it was shown problems that were almost identical to the test set, and it was using tools to get a very high-precision answer. But a language model without tools will never actually produce this. So it’s been this big debate in the research community: how much can you believe these reinforcement learning papers that train on Qwen and measure specifically on this math benchmark, where multiple papers have reported contamination? I think this is what gave RLVR its reputation of being about formatting: you can get these gains so quickly, and therefore it must already be in the model. But there’s a lot of complexity here. It’s not really controlled experimentation, so we don’t really know.
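The probe Nathan describes, changing the numbers but keeping the words, is easy to sketch. This is a minimal illustrative version, not the procedure from either paper; the number range and the word-problem text are assumptions:

```python
import random
import re

def perturb_numbers(problem: str, rng: random.Random) -> str:
    """Replace each integer in a word problem with a different random one,
    keeping the wording intact. If a model's accuracy collapses on the
    perturbed set, the original numbers were likely memorized."""
    def swap(match: re.Match) -> str:
        old = int(match.group())
        new = old
        while new == old:  # guarantee the number actually changes
            new = rng.randint(2, 99)
        return str(new)
    return re.sub(r"\d+", swap, problem)
```

Running a model on both the original and the perturbed benchmark and comparing accuracies is the contamination test: a model that genuinely solves the problem should score about the same on both.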
Sebastian Raschka
(01:46:26)
But if that weren’t true, I would say distillation wouldn’t work, right? Distillation can work to some extent, but the biggest problem—and I’m researching this contamination—is that we don’t know what’s in the data. Unless you have a new dataset, it is really impossible to rule out. Even with something simpler like MMLU, which is a multiple-choice benchmark, if you just change the format slightly, like using a dot instead of a parenthesis after the answer letter, the model accuracy will vastly differ.
Nathan Lambert
(01:47:04)
I think that could be a model-specific issue rather than a general one.
Sebastian Raschka
(01:47:09)
It's not even malicious by the developers of the LLM, like, "Hey, we want to cheat at that benchmark." It's just that the model has seen something at some point. I think the only fair way to evaluate an LLM is to have a new benchmark created after the model's training cutoff date.
Lex Fridman
(01:47:22)
Can we lay out the recipe of all the things that go into post-training? You mentioned RLVR was a really exciting, effective thing; maybe we should elaborate. RLHF still has a really important role to play. What other ideas are there in post-training?
Nathan Lambert
(01:47:40)
I think you can take this in order. You could view it as what made o1, which is this first reasoning model, possible. You’re going to have similar interventions where you start with mid-training. The thing that is rumored to enable o1 and similar models is really careful data curation where you’re providing a broad set of what is called reasoning traces. This is just the model generating words in a forward process that reflects breaking down a problem into intermediate steps and trying to solve them. So at mid-training, you need to have data similar to this so that when you move into post-training, primarily with these verifiable rewards, it can learn.
Nathan Lambert
(01:48:27)
And then what is happening today is figuring out which problems to give the model, how long you can train it for, and how much inference compute the model can use when solving these verifiable problems. As models get better, certain problems are no longer useful; the model will solve them 100% of the time, and therefore there's very little signal. GRPO is famous for this: the reward given to the agent is based on how good a given action—a completion—is relative to the other completions for the same problem. So if all the completions to a problem get the same reward, there's no signal in these types of algorithms.
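The group-relative advantage at the heart of GRPO can be sketched in a few lines. This is a simplified illustration (no KL penalty, no clipping, and `grpo_advantages` is a hypothetical helper name, not from any library): rewards for a group of sampled completions to the same prompt are normalized against the group mean and standard deviation.

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Advantage of each completion relative to its sampled group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Some completions right, some wrong: a useful training signal.
mixed = grpo_advantages([1.0, 0.0, 0.0, 1.0])
# Every completion correct: advantages are all zero, so no gradient signal.
saturated = grpo_advantages([1.0, 1.0, 1.0, 1.0])
```

The `saturated` case is exactly the situation described above: a problem the model solves every time contributes nothing, so training shifts toward harder problems.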
Nathan Lambert
(01:49:09)
So what they're doing is finding harder problems, which is why you hear about things like scientific domains, which are so hard to get anything right in—a lab-style problem just generates so many tokens—or much harder software problems. The frontier models are all pushing into these harder domains where they can train on more problems and the model learns more skills at once. The link to RLHF is that RLHF has been, and still is, the finishing touch on the models: it makes them more useful by improving their organization, style, or tone.
Nathan Lambert
(01:49:42)
There are different things that resonate with different audiences. Some people like a really quirky model, and RLHF could be good at enabling that personality, and some people hate the markdown bulleted list thing that the models do, but it’s actually really good for quickly parsing information. This human feedback stage is really great for putting this into the model at the end of the day. It’s what made ChatGPT so magical for people. And that use has actually remained fairly stable. This formatting can also help the models get better at math problems, for example.
Nathan Lambert
(01:50:17)
Style and formatting and the method you use to answer a problem are actually very closely linked when you're training these models. RLHF can still make a model better at math, but the verifiable domains are a much more direct process for doing so because they fit the problem formulation better. To summarize: mid-training gives the model the skills it needs to learn; RL with verifiable rewards lets the model try many times, putting a lot of compute into trial-and-error learning across hard problems; and then RLHF finishes the model, making it easy to use and rounding it out.
Lex Fridman
(01:51:02)
Can you comment on the amount of compute required for RLVR?
Nathan Lambert
(01:51:06)
It's only gone up and up. I think Grok 4 was famous for reportedly using a similar amount of compute for pre-training and post-training. Back to the scaling discussion, the two involve very different hardware profiles. Pre-training is very compute-bound—the FLOPS discussion: how many matrix multiplications can you get through per unit of time. Because in RL you're generating answers and trying the model in real-world environments, it ends up being much more memory-bound. You're generating long sequences, and attention has a behavior where memory grows quadratically as sequences get longer. So the compute profile becomes very different.
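To make the memory-bound point concrete, here is a back-of-the-envelope sketch of KV-cache size for one long RL rollout. The config numbers are illustrative assumptions (a generic Llama-like shape, not any specific model), and `kv_cache_bytes` is a hypothetical helper.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch,
                   bytes_per_elem=2):  # 2 bytes per element for fp16/bf16
    # Factor of 2 up front: one cache for keys, one for values.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical config: 32 layers, 8 KV heads, head dim 128, one sequence.
gib = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128,
                     seq_len=100_000, batch=1) / 2**30
# A single 100k-token rollout already needs on the order of 12 GiB of cache,
# before weights and activations, which is why long generations are
# memory- rather than FLOPS-bound.
```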
Nathan Lambert
(01:51:44)
In pre-training, we would talk about a model—if we go back to the Biden administration executive order—in terms of something like 10 to the 25th FLOPS to train it. FLOPS is a weirder measure for post-training, because the reality is just how many GPUs you're allocating for how many hours. In terms of time, RL compute is getting much closer to pre-training because you just can't pack it all into one system. Pre-training is so computationally dense—all the GPUs talking to each other, extremely efficient—whereas RL has all these moving parts, and it can take a long time to generate a sequence of a hundred thousand tokens.
Nathan Lambert
(01:52:17)
If you think about Gemini 3 Pro taking an hour, what if your training run has to sample for an hour? You have to make sure that’s handled efficiently. So in GPU hours or wall-clock hours, the RL runs are probably approaching the same number of days as pre-training, but they probably aren’t using as many GPUs at the same time. There are rules of thumb in labs where you don’t want your pre-training runs to last more than a month because they fail catastrophically. If you are planning a huge cluster to be held for two months and then it fails on day 50, the opportunity costs are just so big.
Nathan Lambert
(01:52:54)
People don’t want to put all their eggs in one basket. GPT-4 was like the ultimate YOLO run, and nobody ever wanted to do it before where it took three months to train and everybody was shocked that it worked. I think people are a little bit more cautious and incremental now.
Sebastian Raschka
(01:53:07)
So with RLVR there's less of a limit on how long you can train and still see benefit, whereas RLHF, because it's preference tuning, reaches a point where it doesn't really make sense to spend more budget on it. To take a step back on preference tuning: multiple people can give different explanations for the same thing and both be correct, but at some point you've learned a certain style and there's nothing left to gain from iterating on it. My favorite example is relatives asking me what laptop they should buy. I give them an explanation or ask about their use case, and they might prioritize battery life and storage.
Sebastian Raschka
(01:53:46)
Other people, like us, would prioritize RAM and compute. Both answers are correct, but different people require different answers. With preference tuning, you are trying to average somehow; you are asking the data labelers to give you the preferred answer and then you train on that. But at some point, you learn that average preferred answer, and there’s no reason to keep training longer on it because it’s just a style. With RLVR, you let the model solve more and more complex, difficult problems. So I think it makes more sense to allocate more budget long-term to RLVR.
Sebastian Raschka
(01:54:27)
Right now, we are in an RLVR 1.0 phase where it’s still that simple thing where we have a question and answer, but we don’t do anything with the stuff in between. There were multiple research papers, by Google for example, on process reward models that also give scores for the explanation—how correct is the explanation? I think that will be the next thing, let’s say RLVR 2.0 for this year, focusing on the steps between question and answer and how to leverage that information to improve the explanation and accuracy. That’s one angle. And there was a DeepSeek-V3.2 paper where they also had interesting inference scaling.
Sebastian Raschka
(01:55:11)
Well, first they developed models that grade their own outputs, as a separate grader model. I think that will be one aspect. And the other, as Nathan mentioned, will be RLVR branching into other domains.
Nathan Lambert
(01:55:23)
The place where people are excited is value functions, which are pretty similar. Process reward models assign a score to each intermediate step in a reasoning process, whereas value functions assign a value to every token the language model generates. Both of these have been largely unproven in the language modeling and reasoning model era. People are more optimistic about value functions now, for whatever reason. I think process reward models were tried a lot more in the pre-o1 era, and a lot of people had headaches with them. Value models have a very deep history in reinforcement learning.
Nathan Lambert
(01:56:06)
They’re one of the first things that were core to deep reinforcement learning existing—training value models. So right now the literature shows people are excited about trying value models, but there’s very little proof in it. And there are negative examples in trying to scale up process reward models.
Nathan Lambert
(01:56:22)
These things don't always hold in the future. To summarize the scaling: you don't want to do too much RLHF because of how the signal scales. People have worked on RLHF for years, especially after ChatGPT, but the first release of a reasoning model trained with RLVR, OpenAI's o1, came with a scaling plot where a logarithmic increase in training compute yields a linear increase in evaluation scores. This has been reproduced multiple times; I think DeepSeek had a plot like this. But there's no scaling law for RLHF where a log increase in compute gives linear performance.
Nathan Lambert
(01:57:02)
In fact, the seminal scaling paper for RLHF is about scaling laws for reward model over-optimization. That's a big line to draw with RLVR: the methods we have now follow a scaling paradigm where you can let the best runs go for an extra 10x and get more performance, but you can't do this with RLHF. That is going to be field-defining. To do the best RLHF you might not need the extra 10 or 100x compute, but to do the best RLVR you do. There's a seminal paper from a Meta internship called "The Art of Scaling Reinforcement Learning with Language Models."
Nathan Lambert
(01:57:47)
Their framework is called ScaleRL. Each incremental experiment was something like 10,000 V100 hours, which is thousands or tens of thousands of dollars per experiment, and they ran a lot of them. This cost is not accessible to the average academic, which makes for a difficult equilibrium when the two communities try to learn from each other.

Advice for beginners on how to get into AI development & research

Lex Fridman
(01:58:11)
I was wondering if we could take a bit of a tangent and talk about education and learning. If you’re somebody listening to this who’s a smart person interested in programming and interested in AI, I presume building something from scratch is a good beginning. Can you just take me through what you would recommend people do?
Sebastian Raschka
(01:58:32)
I would personally start, like you said, by implementing a simple model from scratch that you can run on your computer. The goal of building a model from scratch is not to have something you use every day for your personal projects. It’s not going to be your personal assistant replacing an existing open-weight model or ChatGPT. It’s to see exactly what goes into the LLM, what exactly comes out of the LLM, and how pre-training works on your own computer. And then you learn about pre-training, supervised fine-tuning, and the attention mechanism.
Sebastian Raschka
(01:59:03)
You get a solid understanding of how things work, but at some point you reach a limit because smaller models can only do so much. The problem with learning about LLMs at scale is that a larger model is disproportionately more complex; it's not just that the model becomes larger. You have to think about sharding your parameters across multiple GPUs. Even for the KV cache, there are multiple ways to implement it. One way is just for understanding how it works: a cache you grow step by step by concatenating. But that wouldn't be optimal on GPUs; there you would pre-allocate a tensor and fill it in, which adds another 20 or 30 lines of code.
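The two KV-cache strategies just described can be sketched side by side. This is a pedagogical stand-in using plain Python lists in place of tensors (class names are hypothetical); the pre-allocated list stands in for something like `torch.empty(max_len, ...)` with a write cursor.

```python
class GrowingCache:
    """Pedagogical version: grow the cache step by step by appending."""
    def __init__(self):
        self.keys = []

    def append(self, k):
        self.keys.append(k)  # easy to read, but implies reallocation/concat on GPU

class PreallocatedCache:
    """GPU-friendly version: allocate once up front, then fill via a cursor."""
    def __init__(self, max_len):
        self.keys = [None] * max_len  # stand-in for a pre-allocated tensor
        self.pos = 0

    def append(self, k):
        self.keys[self.pos] = k  # in-place write, no reallocation
        self.pos += 1

# Both caches see the same three decode steps.
g, p = GrowingCache(), PreallocatedCache(max_len=4)
for step in range(3):
    g.append(step)
    p.append(step)
```

The behavior is identical; the second form is what survives contact with real GPU memory management, and it is exactly the kind of extra 20 to 30 lines the from-scratch version lets you skip at first.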
Sebastian Raschka
(01:59:45)
And for each thing, you add so much code. I think the trick with the book is basically to understand how the LLM works. It’s not going to be your production-level LLM, but once you have that, you can understand the production-level LLM.
Lex Fridman
(01:59:56)
So you’re trying to always build an LLM that’s going to fit on one GPU?
Sebastian Raschka
(02:00:00)
Yes. Most of the examples I have fit on one GPU. I have some bonus materials on some MoE models; one or two of them may require multiple GPUs, but the goal is to have it on one GPU. And the beautiful thing is also you can self-verify. It’s almost like RLVR. When you code these from scratch, you can take an existing model from the Hugging Face Transformers library. The Hugging Face Transformers library is great, but if you want to learn about LLMs, I think that’s not the best place to start because the code is so complex. It has to fit so many use cases and some people use it in production. It has to be really sophisticated, so it’s intertwined and hard; it’s not linear to read.
Nathan Lambert
(02:00:39)
It started as a fine-tuning library, and then it grew to be the standard representation of every model architecture and how models are loaded. Hugging Face is the default place to get a model, and Transformers is the software that enables it, so people can easily load a model and do something basic with it.
Sebastian Raschka
(02:00:56)
And all frontier labs that have open-weight models have a Hugging Face Transformers version of it, from DeepSeek to gpt-oss. That’s the canonical way that you can load them. But again, even the Transformers library is not used in production for inference. People use SGLang or vLLM, and it adds another layer of complexity.
Lex Fridman
(02:01:15)
We should say that the Transformers library has something like 400 models.
Sebastian Raschka
(02:01:19)
So it's the one library that tries to implement a lot of LLMs, and you end up with a huge codebase. It's massive—I don't know, maybe hundreds of thousands or millions of lines of code. Understanding the part you want to understand is like finding a needle in a haystack. But what's beautiful about it is that you have a working implementation, so you can work backwards from it. What I recommend: if I want to understand, for example, how OLMo 3 is implemented, I look at the weights in the model hub and the config file. You can see, "Oh, they used so many layers. They use grouped-query attention." You see all the components in a human-readable 100-line config file. And then you start with your GPT-2 model and add these things.
Sebastian Raschka
(02:02:06)
The cool thing here is you can then load the pre-trained weights and see if they work in your model. You want to match the same output that you get with a Transformers model, and then you can use that basically as a verifiable reward to make your architecture correct. Sometimes it takes me a day. With OLMo 3, the challenge was RoPE for the position embeddings; they had a YaRN extension and there was some custom scaling there. I couldn’t quite match it at first, but in this struggle you kind of understand things. At the end, you know you have it correct because you can unit test it against the reference implementation. I think that’s one of the best ways to learn. Basically, you reverse-engineer something.
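The unit-testing workflow Sebastian describes follows a simple pattern: run the same input through the reference implementation and your own, and assert the outputs match. The two toy functions below are hypothetical stand-ins (not real model code) for a Hugging Face Transformers forward pass and a from-scratch reimplementation.

```python
import math

def reference_logits(x):
    """Stand-in for the reference implementation's forward pass."""
    return [math.tanh(v) * 2.0 for v in x]

def my_logits(x):
    """Stand-in for your from-scratch reimplementation of the same math."""
    return [2.0 * math.tanh(v) for v in x]

def outputs_match(a, b, atol=1e-6):
    """Elementwise comparison with an absolute tolerance."""
    return len(a) == len(b) and all(abs(x - y) <= atol for x, y in zip(a, b))

x = [0.1, -0.5, 2.3]  # a fixed test input, playing the role of a token batch
ok = outputs_match(reference_logits(x), my_logits(x))
```

In practice you would load the same pretrained weights into both models and compare real tensors with something like `torch.allclose` on a fixed input batch; a mismatch localizes the bug (for example, the RoPE scaling case mentioned above) to whichever component you changed last.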
Nathan Lambert
(02:02:51)
I think that is something everyone interested in getting into AI today should do, and that’s why I liked your book. I came to language models from the RL and robotics field, so I never had taken the time to just learn all the fundamentals. The Transformer architecture is as fundamental today as deep learning was in the past, and people need to learn it. I think where a lot of people get overwhelmed is how to apply this to have an impact or find a career path.
Nathan Lambert
(02:03:23)
AI language models make this fundamental stuff so accessible, and people with motivation will learn it. Then it's like, "How do I get shots on goal to contribute to research?" I'm actually fairly optimistic, because the field moves so fast that the best people often don't fully solve a problem—there's bigger, lower-hanging fruit, so they move on. In my RLHF book, I try to take post-training techniques and describe how they influence the model. It's remarkable how many things people just stop studying.
Nathan Lambert
(02:04:06)
I think people trying to go narrow after doing the fundamentals is good. Reading relevant papers and being engaged in the ecosystem—you actually… The proximity that random people have online to leading researchers is incredible. The anonymous accounts on X in ML are very popular, and no one knows who all these people are. It could just be random people who study this stuff deeply. Especially with AI tools to help you keep digging into things you don’t understand, it’s very useful. There are research areas that might only have three papers you need to read, and then one of the authors will probably email you back.
Nathan Lambert
(02:04:45)
But you have to put in a lot of effort into these emails to show you understand the field. It would take a newcomer weeks of work to truly grasp a very narrow area, but going narrow after the fundamentals is very useful. I became very interested in character training—how you make a model funny, sarcastic, or serious, and what you do to the data to achieve this. A student at Oxford reached out to me and said, “Hey, I’m interested in this,” and I advised him. Now that paper exists. There were maybe only two or three people in the world very interested in that specific topic.
Nathan Lambert
(02:05:25)
He’s a PhD student, which gives you an advantage, but for me, that was a topic where I was waiting for someone to say, “Hey, I have time to spend cycles on this.” I’m sure there are a lot more narrow things where you’re just like, “It doesn’t make sense that there was no answer to this.” There’s so much information coming in that people feel they can’t grab onto anything, but if you actually stick to one area, I think there are a lot of interesting things to learn.
Sebastian Raschka
(02:05:48)
Yeah, I think you can’t try to do it all because it would be very overwhelming and you would burn out. For example, I haven’t kept up with computer vision in a long time; I’ve just focused on LLMs. But coming back to your book, I think it’s a really great resource and a good bang for the buck if you want to learn about RLHF. I wouldn’t just go out there and read raw RLHF papers because you would be spending two years—
Nathan Lambert
(02:06:10)
—and some of them contradict each other. I've just edited the book, and there's hardly a chapter where I didn't have to say, "X papers say one thing and Y papers say another, and we'll see what turns out to be true."
Lex Fridman
(02:06:21)
What are some of the ideas we might have missed in the bigger picture of post-training? To go through the table of contents: first, you did the problem setup, training overview, what are preferences, preference data and the optimization tools, reward modeling, regularization, instruction tuning, rejection sampling, reinforcement learning. Then constitutional AI and AI feedback, reasoning and inference-time scaling, tool use and function calling, synthetic data and distillation, evaluation, and then the open questions section: over-optimization, style and information, product UX, character and post-training. What are some ideas worth mentioning that connect both the educational component and the research component? You mentioned the character training, which is pretty interesting.
Nathan Lambert
(02:07:08)
Character training is interesting because there’s so little out there, but we talked about how people engage with these models. We feel good using them because they’re positive, but that can go too far; it can be too positive. It’s essentially how you change your data or decision-making to make it exactly what you want. OpenAI has this thing called a “model spec,” which is essentially their internal guideline for what they want the model to do, and they publish this to developers. So you can know what is a failure of OpenAI’s training—where they have the intention but haven’t met it yet—versus what is something they actually wanted to do that you just don’t like.
Nathan Lambert
(02:07:46)
That transparency is very nice, but the methods for curating these documents, and how easy it is to follow them, are not very well known. The way the book is designed, the reinforcement learning chapter is obviously what people want because everybody hears about it with RLVR—it's the same algorithms and the same math, but you can use them in very different domains. I think the core of RLHF is how messy preferences are. It's essentially a rehash of a paper I wrote years ago, but this is the chapter that tells you why RLHF is never fully solvable: the way RL is set up assumes that preferences can be quantified and reduced to single values.
Nathan Lambert
(02:08:33)
I think it relates to the von Neumann–Morgenstern utility theorem in the economics literature. That is the chapter where all of the philosophical, economic, and psychological context tells you what gets compressed when doing RLHF. Later in the book, you use this RL machinery to make the number go up. I think that's why it'll be very rewarding for people to do research on, because the problem has been designed around quantifying preferences to make them studyable. But there are fundamental debates; for example, in a language model response, there are different things you care about, whether it's accuracy or style.
Nathan Lambert
(02:09:13)
When you're collecting the data, everything gets compressed into "I like this more than that." There's a lot of research elsewhere on how you should actually do this; social choice theory is the subfield of economics about how to aggregate preferences. I went to a workshop that published a white paper on using social choice theory for RLHF. I want people who get excited about the math to stumble into this broader context. I also keep a list of the tech reports of reasoning models that I like: in Chapter 14, where there's a short summary of RLVR, there's a gigantic table listing every one of them. I think in education, a lot of it needs to be, at this point, what I like—
Nathan Lambert
(02:10:08)
—because language models are so good at the math. For example, the famous paper on Direct Preference Optimization, which is a much simpler way of solving the problem than RL—the derivations in the appendix skip steps of math. I tried for this book to redo the derivations and I was like, "What the heck is this log trick that they use?" But when doing it with language models, they just say, "This is the log trick." I don't know if I like that the math is so commoditized. I think some of the struggle in reading that appendix and following the math is good for learning.
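For reference, the objective that the DPO paper's appendix derives is the published form below, where $y_w$ is the preferred response, $y_l$ the dispreferred one, $\sigma$ the logistic function, $\beta$ a temperature, and $\pi_{\text{ref}}$ the frozen reference policy:

```latex
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}})
= -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}
\left[
  \log \sigma\!\left(
    \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)}
    \;-\;
    \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}
  \right)
\right]
```

The "log trick" in the derivation is what lets the intractable partition function over all responses cancel out of the preference probability, leaving this purely supervised loss on log-probability ratios.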
Lex Fridman
(02:10:43)
Yeah, we’re returning to this often on the topic of education. You both have brought up the word “struggle” quite a bit. There is value in that. If you’re not struggling as part of this process, you’re not fully following the proper process for learning, I suppose.
Nathan Lambert
(02:11:02)
Some of the providers are starting to work on models for education—actually, I haven't used them, but I would guess they're designed not to give all the information at once and to make people work for it. I think you could train models to do this, and it would be a wonderful contribution. In the book, you had to reevaluate every decision, which is such a great example. There's a chance we work on it at AI2, which I think would be so fun.
Sebastian Raschka
(02:11:26)
It makes sense. I did something like that the other day for video games. In my spare time, I like video games with puzzles, like Zelda and Metroid. There’s this new game where I got really stuck. I didn’t want to struggle for two days, so I used an LLM. But I told it, “Please don’t add any spoilers. I’m at this point; what do I have to do next?” You can do the same thing for math where you say, “I’m at this point and I’m getting stuck. Don’t give me the full solution, but what is something I could try?” You kind of carefully probe it.
Sebastian Raschka
(02:12:02)
But the problem is that it requires discipline. A lot of people enjoy math, but there are also a lot of people who need to do it for their homework, and then it’s just a shortcut. We can develop an educational LLM, but the other LLMs are still there, and there’s still a temptation to use them.
Lex Fridman
(02:12:20)
I think a lot of people, especially in college, understand the stuff they're passionate about; they're self-aware about it, and they understand it shouldn't be easy. I think we just have to develop good taste—we talk about research taste; call it school taste—about what you should be struggling on and what you shouldn't be struggling on. Which is tricky, because sometimes you don't have good long-term vision about what will actually be useful in your career. But you have to develop that taste, yeah.
Nathan Lambert
(02:12:51)
I was talking to maybe my fiance or friends about this, and it’s like there’s this brief 10-year window where all of the homework and all the exams could be digital. But before that, everybody had to do all the exams in bluebooks because there was no other way. And now after AI, everybody’s going to need to be in bluebooks and oral exams because everybody could cheat so easily. It’s like this brief generation that had a different education system where everything could be digital, but you still couldn’t cheat. And now it’s just going back. It’s just very funny.
Lex Fridman
(02:13:20)
You mentioned character training. Just zooming out to a more general question: for that topic, how much compute was required? And in general, are there places where not much compute is required, where you can actually contribute as an individual researcher?
Nathan Lambert
(02:13:39)
For the character training thing, this research is built on fine-tuning roughly seven-billion-parameter models with LoRA, which is like a… Essentially, you're only training a small set of additional low-rank weights rather than the full model. I don't know exactly how many GPU hours that would take.
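The LoRA idea can be sketched at toy scale. This is an illustrative stand-in with tiny hypothetical dimensions and plain Python lists in place of real tensors: instead of updating the full weight matrix $W$, you train a low-rank pair $B$ and $A$ whose product is added to the frozen weights, so the trainable parameter count scales with the rank $r$, not with $d^2$.

```python
def matmul(A, B):
    """Naive matrix multiply over lists of lists (stand-in for tensor ops)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

d, r = 4, 1  # model dimension and LoRA rank (in practice r << d)

# Frozen pretrained weight: identity matrix here, purely for illustration.
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

B = [[0.1] for _ in range(d)]   # d x r, trainable
A = [[0.2, 0.0, 0.0, 0.0]]      # r x d, trainable

delta = matmul(B, A)            # low-rank update, a d x d matrix of rank r
W_adapted = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

full_params = d * d             # parameters if we fine-tuned W directly
lora_params = d * r + r * d     # parameters LoRA actually trains
```

Here LoRA trains 8 parameters instead of 16; at realistic dimensions (d in the thousands, r around 8 to 64) the savings are what make a seven-billion-parameter model tunable on modest hardware.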
Lex Fridman
(02:13:55)
But it’s doable.
Nathan Lambert
(02:13:55)
Not doable for every academic. The situation for some academics is so dire that the only work you can do is inference: you take closed or open models, get completions from them, and study those completions to understand the models. That's very well-suited to evaluation, where you want to be the best at creating representative problems that models fail on or that showcase certain abilities—and I think you can break through with that. The top-end goal for a researcher working on evaluation, if you want career momentum, is for the frontier labs to pick up your evaluation. You don't need every project to do this.
Nathan Lambert
(02:14:33)
But if you go from a small university with no compute and you figure out something that Claude struggles with, and then the next Claude model has it in the blog post, there's your career rocket ship. That's hard, but if you want to scope maximum possible impact with minimum compute, it's something like that: get very narrow, and learn where the models are going. You need to build a tool that tests where Claude 4.5 will fail. If I'm starting a research project, I need to think about where the models will be struggling in eight months.
Lex Fridman
(02:15:05)
But what about developing totally novel ideas?
Nathan Lambert
(02:15:08)
This is a trade-off. If you're doing a PhD, you could also say, "It's too risky to work in language models. I'm going way longer term: what is the thing that's going to define language model development in 10 years?" But I end up being pretty practical. I went into my PhD thinking, "I got into Berkeley. Worst case, I get a master's and go work in tech." The life afforded to people who work at these AI companies, the amount of… OpenAI's average compensation is over a million dollars in stock a year per employee. For any normal person in the US, getting into an AI lab is transformative for your life. So I'm pretty practical about it—
Nathan Lambert
(02:15:50)
—there's still a lot of upward mobility working in language models if you're focused; just look at the outcomes, look at these jobs. But from a research perspective, for transformative impact and the big academic awards, being the next Yann LeCun comes from not caring much about current language model development.
Lex Fridman
(02:16:07)
It’s a big financial sacrifice in that case.
Nathan Lambert
(02:16:09)
So I get to work with some awesome students, and they’re like, “Should I go work at an AI lab?” And I’m like, “You’re getting a PhD at a top school. Are you going to leave to go to a lab?” If you go work at a top lab, I don’t blame you. Don’t go work at some random startup that might go to zero. But if you’re going to OpenAI, I think it could be worth leaving a PhD for.
Lex Fridman
(02:16:30)
Let’s more rigorously think through this. Where would you give a recommendation for people to do a research contribution? The options are academia—get a PhD, spend five years publishing, though compute resources are constrained. There are research labs that are more focused on open-weight models, so working there. Or closed frontier labs. OpenAI, Anthropic, xAI, and so on.
Nathan Lambert
(02:17:04)
The two gradients are: the more closed the lab, the more money you tend to get, but the less credit. In terms of building a portfolio, it's very clear what you have done as an academic, versus trading that fairly legible progression for being a cog in the machine—which could also be very fun. They're very different career paths. But the opportunity cost of being a researcher is very high because PhD students are paid essentially nothing. It ends up rewarding people who have a fairly stable safety net and realize they can operate on the long term, doing very interesting work and getting a very interesting job.
Nathan Lambert
(02:17:50)
So it is a privileged position to say, "I'm going to see out my PhD and figure it out after, because I want to do this." And at the same time, the academic ecosystem is getting bombarded by funding cuts. There are so many trade-offs, and I understand plenty of people who say, "I don't enjoy it. I can't deal with this funding search. My grant got cut for no reason by the government," or, "I don't know what's going to happen." So there's a lot of uncertainty, and the trade-offs, in my opinion, favor just taking the well-paying job with meaningful impact. It's not like you're getting paid to sit around at OpenAI. You're building the cutting edge of things that are changing millions of people's relationship to tech.
Lex Fridman
(02:18:34)
But publication-wise, they’re being more secretive, increasingly so. So you’re publishing less and less. You are having a positive impact at scale, but you’re a cog in the machine.
Sebastian Raschka
(02:18:47)
I think, honestly, it hasn’t changed that much. I have been in academia; I’m not in academia anymore. At the same time, I wouldn’t want to miss my time in academia. But what I wanted to say before I get to that part, I think it hasn’t changed that much. I was using AI or machine learning methods for applications in computational biology with collaborators, and a lot of people went from academia directly to Google. I think it’s the same thing. Back then, professors were sad that their students went into industry because they couldn’t carry on their legacy in that sense. I think it’s the same thing. It hasn’t changed that much. The only thing that has changed is the scale.
Sebastian Raschka
(02:19:32)
But, you know, cool stuff was always developed in industry that was closed. You couldn’t talk about it. I think the difference now is your preference. Do you like to talk about your work and publish, or are you more in a closed lab? That’s one difference—the compensation, of course. But it’s always been like that. So it really depends on where you feel comfortable. And also, nothing is forever. The only thing right now is there’s a third option, which is starting a startup. There are a lot of people doing startups. Very risky move, but it’s a high-risk, high-reward type of situation, whereas joining an industry lab is pretty safe and offers upward mobility.
Sebastian Raschka
(02:20:16)
Honestly, I think once you have been at an industry lab, it will be easier to find future jobs. But then again, it’s like, how much do you enjoy the team and working on proprietary things versus how do you like the publishing work? I mean, publishing is stressful. Acceptance rates at conferences can be arbitrary and very frustrating, but it’s also high reward. If you have a paper published, you feel good because your name is on there. You have a high accomplishment.
Nathan Lambert
(02:20:48)
I feel like my friends who are professors seem on average happier than my friends who work at— a frontier lab, to be totally honest. Because there’s just a grounding and— the frontier labs definitely do this 9/9/6— which essentially is shorthand for work all the time.

Work culture in AI (72+ hour weeks)

Lex Fridman
(02:21:03)
Can you describe 9/9/6 as a culture? I believe you could say it was invented in China and adopted in Silicon Valley. What’s 9/9/6? It’s 9:00 AM to 9:00 PM—
Sebastian Raschka
(02:21:14)
six days a week.
Lex Fridman
(02:21:15)
Six days a week. What is that, 72 hours? Is this basically the standard in AI companies in Silicon Valley? More and more this kind of grind mindset.
Sebastian Raschka
(02:21:26)
Yeah, I mean, maybe not exactly like that, but I think there is a trend towards it. And it’s interesting—I think it almost flipped because when I was in academia, I felt like that because as a professor, you had to write grants, you had to teach, and you had to do your research. It’s like three jobs in one, and it is more than a full-time job if you want to be successful. And I feel like now, like Nathan just said, the professors in comparison to a lab have even less pressure or workload than at a frontier lab because—
Nathan Lambert
(02:21:57)
I think they work a lot. They’re just so fulfilled. By working with students— …and having a constant runway of mentorship and a mission that is very people-oriented. I think in an era when things are moving very fast and are very chaotic, it’s very rewarding to people.
Sebastian Raschka
(02:22:11)
Yeah, and I think at a startup, there’s this pressure. You have to make it. It is really important that people put in the time, but it is really hard because you have to deliver constantly. I’ve been at a startup. I had a good time, but I don’t know if I could do it forever. It’s an interesting pace and it’s exactly like we talked about in the beginning. These models are leapfrogging each other, and they are just constantly trying to take the next step compared to their competitors. It’s just ruthless right now.
Nathan Lambert
(02:22:42)
I think this leapfrogging nature and having multiple players is actually an underrated driver of language modeling progress where competition is so deeply ingrained. These companies have intentionally created very strong cultures. For example, Anthropic is known to be culturally deeply committed and organized. We hear so little from them, and everybody at Anthropic seems very aligned. Being in a culture that is super tight and having this competitive dynamic is a thing that’s going to make you work hard and create things that are better.
Nathan Lambert
(02:23:20)
But that comes at the cost of human capital. You can only do this for so long, and people are definitely burning out. I wrote a post on burnout as I've trod in and out of this myself, especially trying to be a manager while doing full model training. It's a crazy job. In the book Apple in China, Patrick McGee talked about how hard the Apple engineers worked to set up the supply chains in China. He mentioned they had "saving marriage" programs, and he said in a podcast that people died from this level of working hard. It's a perfect environment for creating progress at human expense. The human expense is the 996 that we started this with, where people really do grind.
Sebastian Raschka
(02:24:08)
I also read this book. I think they had a code word for if someone had to go home to spend time with their family to save the marriage. Then the colleagues said, “Okay, this is red alert for this situation. We have to let that person go home this weekend.” But at the same time, I don’t think they were forced to work. They were so passionate about the product that you get into that mindset. I had that sometimes as an academic, and as an independent person. I overwork, and it’s unhealthy. I had back issues and neck issues because I did not take the breaks that I should have. But it’s not because anyone forced me; it’s because I wanted to work because it’s exciting stuff.
Nathan Lambert
(02:24:46)
That’s what OpenAI and Anthropic are like. They want to do this work.

Silicon Valley bubble

Lex Fridman
(02:24:49)
Yeah, but there’s also a feeling of fervor that’s building, especially in Silicon Valley, aligned with the scaling laws idea. There’s this hype where the world will be transformed in a scale of weeks and you want to be at the center of it. I have the great fortune of having conversations with a wide variety of human beings, and I get to see all these bubbles and echo chambers across the world. It’s fascinating to see how we humans form them. I think it’s fair to say that Silicon Valley is a kind of echo chamber, a kind of silo and bubble. I think bubbles are actually really useful and effective. It’s not necessarily a negative thing because you can be ultra-productive.
Lex Fridman
(02:25:34)
It could be the Steve Jobs reality distortion field, because you just convince each other the breakthroughs are imminent, and by convincing each other of that, you make the breakthroughs imminent.
Nathan Lambert
(02:25:48)
Byrne Hobart wrote a book classifying bubbles. One kind is the financial bubble, which involves speculation and is destructive; the other is effectively a build-out bubble, because it pushes people to build. I do think AI is in this second kind, but I worry about it transitioning to a financial bubble.
Lex Fridman
(02:26:05)
Yeah, but also in the space of ideas, that bubble creates a reality distortion field. That means you are deviating from reality, and if you go too far while also working 996, you might miss some fundamental aspects of the human experience. This is a common problem in Silicon Valley. It’s a very specific geographic area. You might not understand the Midwest perspective or the experience of all the other different humans in the United States and across the world. You speak a certain way to each other and convince each other of a certain thing, and that can get you into real trouble.
Lex Fridman
(02:26:47)
Whether AI is a big success and becomes a powerful technology or it’s not, in either trajectory you can get yourself into trouble. So you have to consider all of that. Here you are, a young person trying to decide what you want to do with your life.
Nathan Lambert
(02:27:02)
The thing that is… I don’t even really understand this, but the SF AI memes have gotten to the point where the “permanent underclass” was one of them. This was the idea that the last six months of 2025 was the only time to build durable value in an AI startup or model. Otherwise, all the value will be captured by existing companies and you will therefore be poor. That’s an example of the SF thing that goes so far. I still think for young people who are really passionate about having an impact in AI, being physically in SF is the most likely place where you’re going to do this. But it has trade-offs.
Lex Fridman
(02:27:41)
I think SF is an incredible place, but there is a bit of a bubble. And if you go into that bubble, which is extremely valuable, just get out also. Read history books, read literature, and visit other places in the world. Twitter and Substack are not the entire world.
Nathan Lambert
(02:28:01)
I think I would say, one of the people I worked with is moving to SF, and I need to get him a copy of Season of the Witch. It’s a history of SF from 1960 to 1985 that goes through the hippie revolution, the culture emerging in the city, the HIV/AIDS crisis, and other things. That is so recent, with so much turmoil and hurt, but also love in SF. No one knows about this. It’s a great book, Season of the Witch; I recommend it. A bunch of my SF friends who do get out recommended it to me. I lived there and I didn’t appreciate this context, and it’s just so recent.

Text diffusion models and other new research directions

Lex Fridman
(02:28:46)
Yeah. Okay, let’s… we talked a lot about many things, certainly about what was exciting last year. But this year, one of the things you guys mentioned that’s exciting is the scaling of text diffusion models and just a different exploration of text diffusion. Can you talk about what that is and what possibilities it holds? So, different kinds of approaches than the current LMs?
Sebastian Raschka
(02:29:13)
Yeah, so we talked a lot about the transformer architecture and the autoregressive transformer architecture specifically, like GPT. And it doesn’t mean no one else is working on anything else. People are always on the lookout for the next big thing, because I think it would be almost stupid not to. Sure, right now the transformer architecture is the thing and it works best, but it’s always a good idea to not put all your eggs into one basket. People are developing alternatives to the autoregressive transformer. One of them would be, for example, text diffusion models.
Sebastian Raschka
(02:29:49)
And listeners may know diffusion models from image generation, which Stable Diffusion popularized. Before that, people used GANs, Generative Adversarial Networks. And then there was this diffusion process where you iteratively de-noise an image, and that resulted in really good quality images over time. Other companies built their own diffusion models. And now people are like, "Okay, can we try this also for text?" It doesn't make intuitive sense at first, because text is not something continuous like a pixel that we can differentiate. It's discrete, so how do we implement that de-noising process?
Sebastian Raschka
(02:30:25)
But it’s kind of similar to the BERT models by Google. When you go back to the original transformer, there were the encoder and the decoder. The decoder is what we are using right now in GPT and so forth. The encoder is more like a parallel technique where you have multiple tokens that you fill in in parallel. GPT models do autoregressive completion one token at a time. In BERT models, you have a sentence that has gaps—you mask them out—and then one iteration is filling in those gaps.
Sebastian Raschka
(02:31:02)
And text diffusion is kind of like that, where you are starting with some random text, and then you are filling in the missing parts or refining them iteratively over multiple iterations. The cool thing here is that this can do multiple tokens at the same time, so it has the promise of being more efficient. Now, the trade-off is, of course, how good is the quality? It might be faster, but the more de-noising steps you do, the better the text becomes. People are trying to see if that is a valid alternative to the autoregressive model in terms of giving you the same quality for less compute.
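The fill-in-the-masks idea Sebastian describes can be illustrated with a toy loop. Everything here is a made-up stand-in: `toy_predict` plays the role of a trained denoiser (a real one would be a neural network returning token probabilities), and the target sentence is arbitrary. The point is the mechanism: start fully masked, and at each step commit the most confident predictions for several positions in parallel.

```python
VOCAB = ["the", "cat", "sat", "on", "mat"]
MASK = "<mask>"

def toy_predict(tokens):
    # Stand-in for a trained denoiser: for every masked position, return
    # (position, predicted token, confidence). A real model would produce
    # these from learned probabilities; here they are hard-coded.
    target = ["the", "cat", "sat", "on", "the"]  # hypothetical "true" text
    return [(i, target[i], 1.0 - 0.1 * i)
            for i, t in enumerate(tokens) if t == MASK]

def diffusion_generate(length=5, tokens_per_step=2):
    tokens = [MASK] * length          # start fully masked ("pure noise")
    steps = 0
    while MASK in tokens:
        preds = toy_predict(tokens)
        # Unmask the most confident positions in parallel. Filling several
        # tokens per step is the parallelism that makes text diffusion
        # potentially faster than one-token-at-a-time decoding.
        preds.sort(key=lambda p: -p[2])
        for pos, tok, _ in preds[:tokens_per_step]:
            tokens[pos] = tok
        steps += 1
    return tokens, steps
```

With 5 positions and 2 tokens per step, the loop finishes in 3 denoising steps instead of 5 autoregressive ones; the quality-versus-steps trade-off Sebastian mentions corresponds to how aggressively you raise `tokens_per_step`.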
Sebastian Raschka
(02:31:46)
Right now, there are papers that suggest if you want to get the same quality, you have to crank up the de-noising steps and then you end up spending the same compute you would spend on an autoregressive model. The other downside is that while it's parallel, some tasks are not. For reasoning tasks or tool use where you have to ask a code interpreter to give you an intermediate result, it is kind of tricky with diffusion models. So there are some hybrids. But the main idea is how we can parallelize it. It's an interesting avenue. I think right now there are mostly research models out there, like LLaDA and some other ones.
Sebastian Raschka
(02:32:24)
I saw some by startups, some deployed models, but there is no big diffusion model at scale yet on the level of Gemini or ChatGPT. But there was an announcement by Google where they said they are launching Gemini Diffusion, and they put it into context of their Nano 2 model. They said for the same quality on most benchmarks, we can generate things much faster. I don’t think the text diffusion model is going to replace autoregressive LLMs, but it will be something for quick, cheap, at-scale tasks. Maybe the free tier in the future will be something like that.
Nathan Lambert
(02:33:04)
I think there are a couple of examples where it’s actually started to be used. To paint an example of why this is so much better: when a model like GPT-5 takes time to respond, it’s generating one token at a time. This diffusion idea is essentially generating all of those tokens in the completion in one batch, which is why it could be way faster.
Nathan Lambert
(02:33:27)
The startups I’m hearing are code startups where you have a codebase and somebody is effectively vibe coding. They say, “Make this change,” and a code diff is essentially a huge reply from the model. It doesn’t have to have that much external context, and you can get it really fast by using these diffusion models. They use text diffusion to generate really long diffs because doing it with an autoregressive model would take minutes, and that time causes a lot of churn for a user-facing product. Every second, you lose users. So I think that it’s going to be this thing where it’s going to-
Nathan Lambert
(02:34:02)
-grow and have some applications, but I actually thought that different types of models were going to be used for different things sooner than they have been. I think the tool use point is the one that’s stopping them from being most general purpose because, with something like Claude Code or ChatGPT with search, the autoregressive chain is interrupted with an external tool, and I don’t know how to do that with the diffusion setup.

Tool use

Lex Fridman
(02:34:28)
So what’s the future of tool use this year and in the coming years? Do you think there’s going to be a lot of developments there, and how that’s integrated into the entire stack?
Sebastian Raschka
(02:34:37)
I do think right now it’s mostly on the proprietary LLM side, but we will see more of that in open-source tooling. It is a huge unlock because then you can really outsource certain tasks from just memorization to actual computation—you know, instead of having the LLM memorize what is 23 plus 5, just use a calculator.
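Sebastian's calculator example can be sketched as a minimal tool-use loop. This is a toy, not any real API: `fake_llm` stands in for the model, and the `CALL tool "arg"` string format is invented for illustration (production systems like the OpenAI and Anthropic APIs use structured JSON tool schemas). The shape is the important part: the model emits a tool call instead of recalling the answer from its weights, the runtime executes it, and the result is fed back into the next model call.

```python
import re

def calculator(expression):
    # A tiny arithmetic tool: only digits, +, -, *, and spaces allowed,
    # so eval() cannot run arbitrary code.
    assert re.fullmatch(r"[\d+\-* ]+", expression)
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def fake_llm(prompt):
    # Stand-in for a model. Instead of memorizing "23 plus 5", it emits
    # a structured tool call; on the second turn it sees the tool result.
    if "TOOL_RESULT" in prompt:
        return "The answer is " + prompt.split("TOOL_RESULT: ")[-1]
    return 'CALL calculator "23 + 5"'

def run_with_tools(user_prompt):
    reply = fake_llm(user_prompt)
    match = re.match(r'CALL (\w+) "([^"]+)"', reply)
    if match:
        name, arg = match.groups()
        result = TOOLS[name](arg)          # runtime executes the tool
        reply = fake_llm(user_prompt + f"\nTOOL_RESULT: {result}")
    return reply
```

The hallucination point follows directly: the loop only helps if the model learns to emit the `CALL` when it should, which is Sebastian's caveat about the LLM needing to know when to ask.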
Lex Fridman
(02:34:58)
So do you think that can help solve hallucinations?
Sebastian Raschka
(02:35:01)
Not solve it, but reduce it. Still, the LLM needs to know when to ask for a tool call. And second, it doesn’t mean the internet is always correct. You can do a web search for who won the World Cup in 1998, but it still needs to find the right website and get the right information. You can still go to the incorrect website and get incorrect information. I don’t think it will fully solve it, but it is improving. There was another cool paper earlier this year—I think it was December 31st, so not technically 2026, but close—on the recursive language model.
Sebastian Raschka
(02:35:43)
That’s a cool idea to take this even a bit further. Nathan, you mentioned earlier it’s harder to do cool research in academia because of the compute budget. If I recall correctly, they did everything with GPT-5, so they didn’t even use local models. But the idea is, for a long-context task, instead of having the LLM solve all of it in one shot or in a chain, you break it down into sub-tasks. You have the LLM decide what is a good sub-task and then recursively call an LLM to solve that.
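The recursive decomposition Sebastian describes can be sketched as follows. The `llm` function here is a hypothetical stand-in (the paper used API calls to a frontier model); for a runnable example, the "task" is counting error lines in a long log, which a real recursive language model would handle as one of many possible long-context queries.

```python
def llm(prompt, context):
    # Stand-in for an LLM API call operating on a small chunk of context.
    # Here the "task" is hard-coded: count occurrences of "error".
    return str(context.count("error"))

def split(context, chunk_size):
    return [context[i:i + chunk_size]
            for i in range(0, len(context), chunk_size)]

def recursive_solve(question, context, chunk_size=4):
    # Base case: the context fits in one call.
    if len(context) <= chunk_size:
        return int(llm(question, context))
    # Otherwise, decompose into sub-tasks, recurse on each chunk,
    # and aggregate the sub-answers. In the paper, the LLM itself
    # decides how to decompose and aggregate; here both are fixed.
    return sum(recursive_solve(question, chunk, chunk_size)
               for chunk in split(context, chunk_size))
```

The memory benefit is that no single call ever sees more than `chunk_size` items of context, regardless of how long the full input is.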
Sebastian Raschka
(02:36:16)
And then adding tools—you know, each sub-task maybe goes to the web and gathers information, and then you pull it all together at the end. I think there’s going to be a lot of unlock using things like that where you don’t necessarily improve the LLM itself, you improve how the LLM is used and what it can use. One downside right now with tool use is you have to give the LLM permission to use tools. That will take some trust, especially if you want to unlock things like having an LLM answer emails for you, or just sort them. I don’t know if I would today give an LLM access to my emails, right? I mean, this is a huge risk.
Nathan Lambert
(02:37:03)
I think there’s one last point on the tool use thing. You hinted at this, and we’ve both come at this in our own ways: open versus closed models use tools in very different ways. With open models, people go to Hugging Face and download the model, and then the person’s going to be like, “What tool do I want?” Maybe X.ai is my preferred search provider, but someone else might care for a different search startup. When you release a model, it needs to be useful for multiple tools, which is really hard because you’re making a general reasoning engine, which is actually what gpt-oss-120b is good for.
Nathan Lambert
(02:37:36)
But on the closed models, you’re deeply integrating the specific tool into your experience. I think that open models will struggle to replicate some of the things that I like to do with closed models, where you can reference a mix of public and private information. Something that I keep trying every three to six months is Codex on the web, which is just prompting a model to make an update to some GitHub repository that I have.
Nathan Lambert
(02:38:01)
That set of secure cloud environment is just so nice for just sending it off to do this thing and then come back to me. This will probably help define some of the local open and closed niches. Because there was such a rush to get tool use working, the open models were on the back foot, which is kind of inevitable. There are so many resources in these frontier labs, but it will be fun when the open models solve this because it’s going to necessitate a more flexible model that might work with this recursive idea to be an orchestrator. Hopefully, necessity drives innovation there.

Continual learning

Lex Fridman
(02:38:45)
So, continual learning—this is a longstanding topic and an important problem. I think that increases in importance as the cost of training models goes up. So can you explain what continual learning is and how important it might be this year and in the coming years to make progress?
Nathan Lambert
(02:39:03)
This relates a lot to this kind of SF zeitgeist of: what is AGI, Artificial General Intelligence, and what is ASI, Artificial Superintelligence? What are the language models that we have today capable of doing? I think language models can solve a lot of tasks, but a key milestone for the AI community is when AI can replace any remote worker, taking in information and solving digital tasks. The limitation is that a language model will not learn from feedback the same way an employee does. If you hire an editor, they might mess up, but you will tell them, and they don’t do it again.
Nathan Lambert
(02:39:43)
But language models don’t have this ability to modify themselves and learn very quickly. The idea is, if we are going to get to something that is a true, general adaptable intelligence that can go into any remote work scenario, it needs to be able to learn quickly from feedback and on-the-job learning. I’m personally more bullish on language models being able to just provide very good context. You can write extensive documents where you say, “I have all this information. Here are all the blog posts I’ve ever written. I like this type of writing; my voice is based on this.” But a lot of people don’t provide this to models.
Nathan Lambert
(02:40:24)
The agentic models are just starting. So it’s this kind of trade-off: do we need to update the weights of this model with this continual learning thing to make them learn fast? Or, the counterargument is we just need to provide them with more context and information, and they will have the appearance of learning fast by just having a lot of context and being very smart.
Lex Fridman
(02:40:43)
So we should mention the terminology here. Continual learning refers to changing the weights continuously so that the model adapts and adjusts based on the new incoming information, and does so continually, rapidly, and frequently. And then the thing you mentioned on the other side of it is generally referred to as in-context learning. As you learn stuff, there’s a huge context window. You can just keep loading it with extra information every time you prompt the system, which I think both can legitimately be seen as learning. It’s just a different place where you’re doing the learning.
Sebastian Raschka
(02:41:24)
I think, to be honest with you, continual learning—the updating of weights—we already have that in different flavors. I think the distinction here is: do you do that on a personalized custom model for each person, or do you do it on a global model scale? And I think we have that already with going from GPT-5 to 5.1 and 5.2. It’s maybe not immediate, but it is like a quick curated update where there was feedback by the community on things they couldn’t do. They updated the weights, released the next model, and so forth. So it is kind of a flavor of that. Another even finer-grained example is RLVR; you run it, it updates.
Sebastian Raschka
(02:42:08)
The problem is you can’t just do that for each person because it would be too expensive to update the weights for each person. Even at OpenAI scale, building the data centers, it would be too expensive. I think that is only feasible once you have something on the device where the cost is on the consumer. Like what Apple tried to do with the Apple Intelligence models, putting them on the phone so they learn from the experience.
Lex Fridman
(02:42:33)
A bit of a related topic, but this kind of—maybe anthropomorphized term—memory. What are different ideas of the mechanism of how to add memory to these systems as you’re increasingly seeing? Especially personalized memory?
Sebastian Raschka
(02:42:49)
Right now, it’s mostly like context—stuffing things into the context and then just recalling that. But again, it’s expensive because even if you cache it, you spend tokens on that. And the second one is you can only do so much. I think it’s more like a preference or style. A lot of people do that when they solve math problems. You can add previous knowledge, but you also give it certain preference prompts, like “do what I preferred last time.” But it doesn’t unlock new capabilities. For that, one thing people still use is LoRA adapters.
Sebastian Raschka
(02:43:32)
These are basically two smaller weight matrices that you train as an overlay on the frozen weight matrix (the delta), instead of updating the whole matrix. But you can only do that to some extent, and then again, it is economics. There were also papers showing, for example, that LoRA learns less but forgets less. There's no free lunch. If you want to learn more, you need to use more weights, but it gets more expensive. And if you learn more, you also forget more; you have to find that Goldilocks zone.
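The LoRA forward pass Sebastian refers to can be sketched in a few lines. This is a dependency-free toy (plain-Python matrix multiply, tiny dimensions); the names `down` and `up` for the two thin adapter matrices follow the common convention, and the scaling factor `alpha` is simplified from the usual alpha-over-rank form.

```python
def matmul(X, Y):
    # Minimal plain-Python matrix multiply, so the sketch is self-contained.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def matadd(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def lora_forward(x, W, down, up, alpha=1.0):
    # Full fine-tuning would update the d x d matrix W directly.
    # LoRA freezes W and trains only two thin matrices:
    #   down: d x r, up: r x d, with rank r much smaller than d.
    # The effective weight is W + alpha * (down @ up), i.e. the delta.
    base = matmul(x, W)                      # frozen path
    delta = matmul(matmul(x, down), up)      # low-rank update path
    scaled = [[alpha * v for v in row] for row in delta]
    return matadd(base, scaled)
```

With d = 2 and r = 1, the adapter adds 2r·d = 4 trainable numbers instead of d² = 4 here, but at realistic sizes (d in the thousands, r of 8 or 16) the savings are orders of magnitude, which is the economics point above.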

Long context

Lex Fridman
(02:44:04)
We haven’t really mentioned it much, but implied in this discussion is context length as well. Is there a lot of innovations that’s possible there?
Nathan Lambert
(02:44:13)
I think the colloquially accepted thing is that it’s a compute and data problem. Sometimes there are small architecture things, like attention variants. We talked about hybrid attention models, which is essentially if you have what looks like a state space model within your transformer. Those are better suited because you have to spend less compute to model the furthest along token. But those aren’t free because they have to be accompanied by a lot of compute or the right data. How many sequences of 100,000 tokens do you have in the world, and where do you get these? It just ends up being pretty expensive to scale them.
Nathan Lambert
(02:44:56)
So we've gotten pretty quickly to a million tokens of input context length. And I would expect it to keep increasing and get to 2 million or 5 million this year, but I don't expect it to go to, like, 100 million. That would be a true breakthrough, and I think those breakthroughs are possible. I think of the continual learning thing as a research problem where there could be a breakthrough that makes transformers work way better at this and it's cheap. These things could happen with so much scientific attention. But from just turning the crank, it'll be consistent increases over time.
Sebastian Raschka
(02:45:27)
I think also looking at the extremes, there’s no free lunch. One extreme to make it cheap is to have, let’s say, an RNN that has a single state where you save everything from the previous stuff. It’s a specific fixed-size thing, so you never really grow the memory. You are stuffing everything into one state, but then the longer the context gets, the more information you forget because you can’t compress everything into one state. Then on the other hand, you have the transformers, which try to remember every token. That is great if you want to look up specific information, but very expensive because you have the KV cache and the dot product that grow.
Sebastian Raschka
(02:46:06)
But then, like you said, the Mamba layers kind of have the same problem. Like an RNN, you try to compress everything into one state, and you’re a bit more selective there. I think it’s like this Goldilocks zone again with NVIDIA Nemotron 3; they found a good ratio of how many attention layers you need for the global information where everything is accessible compared to having these compressed states. I think we will scale more by finding better ratios in that Goldilocks zone between making it cheap enough to run and making it powerful enough to be useful.
Sebastian Raschka
(02:46:43)
And one more plug here: the recursive language model paper is one of the papers that tries to address the long context thing. What they found is, essentially, instead of stuffing everything into this long context, if you break it up into multiple smaller tasks, you save memory and can actually get better accuracy than having the LLM try everything all at once. It's a new paradigm; we will see if there are other flavors of that. I think we will still make improvements on long context, but like Nathan said, the problem is that for pre-training itself, we don't have as many long-context documents as other documents. So it's harder to study how LLMs behave at that scale.
Nathan Lambert
(02:47:31)
There are some rules of thumb where, essentially, you pre-train a language model—like OLMo, we pre-trained at an 8K context length and then extended to 32K with training. There’s a rule of thumb where doubling the training context length takes about 2X compute, and then you can normally 2 to 4X the context length again. I think a lot of it ends up being compute-bound at pre-training. Everyone talks about this big increase in compute for the top labs this year, and that should reflect in some longer context windows.
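Nathan's rules of thumb can be written out as a small back-of-the-envelope calculator. These are approximations from the discussion, not exact laws, and the 2-4x post-training stretch factor varies by model.

```python
import math

def extension_cost(base_len, trained_len):
    # Rule of thumb: each doubling of the training context length costs
    # roughly 2x compute, so going from base_len to trained_len costs
    # about 2 ** (number of doublings) relative compute.
    doublings = math.log2(trained_len / base_len)
    return 2 ** doublings

def usable_length(trained_len, stretch=4):
    # After training at trained_len, you can often serve 2-4x longer
    # contexts with extension tricks; stretch=4 is the optimistic end.
    return trained_len * stretch
```

For the OLMo-style example in the discussion, extending training context from 8K to 32K is two doublings (about 4x the context-related compute), and the resulting model might then serve roughly 64K-128K tokens.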
Nathan Lambert
(02:48:02)
But I think on the post-training side, there’s some more interesting things. As we have agents, the agents are going to manage this context on their own. Now people who use Claude Code a lot dread the compaction, which is when Claude takes its entire 100,000 tokens of work and compacts it into a bulleted list. But what the next models will do—I’m sure people are already working on this—is the model can control when it compacts and how. So you can essentially train your RL algorithm where compaction is an action,
Nathan Lambert
(02:48:30)
where it shortens the history. Then the problem formulation will be, “I want to keep the maximum evaluation scores while the model compacts its history to the minimum length.” Because then you have the minimum amount of tokens that you need to do this kind of compounding auto-regressive prediction. There are actually pretty nice problem setups in this where these agentic models learn to use their context in a different way than just plowing forward.
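The problem formulation Nathan sketches, maximum task score at minimum retained context, can be written as a toy reward function. The budget and penalty values are invented for illustration; a real RL setup would tune them and compute the task score from evaluations.

```python
def compaction_reward(task_score, context_tokens, budget=8000, penalty=1e-4):
    # Hypothetical reward shaping for an agent that controls its own
    # compaction: keep the evaluation score high while penalizing every
    # token of history kept beyond a budget. An agent maximizing this
    # learns when and how aggressively to compact.
    overflow = max(0, context_tokens - budget)
    return task_score - penalty * overflow
```

Under this shaping, an agent that compacts its history down to the budget keeps its full task score, while one that hoards 10,000 extra tokens gives back a full point, so compaction becomes a learned action rather than a fixed heuristic.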
Sebastian Raschka
(02:48:56)
One interesting recent example would be DeepSeek-V3.2, where they had a sparse attention mechanism with a very efficient, small, lightweight indexer. Instead of attending to all the tokens, it selects which tokens I actually need. It almost comes back to the original idea of attention where you are selective, but attention is always on; you have maybe zero weight on some of them, but you use them all. But they are even more like, “Okay, let’s just mask that out or not even do that.” And even with sliding window attention in OLMo, that is also kind of like that idea. You have that rolling window where you keep it fixed, because you don’t need everything all the time.
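The indexer idea Sebastian describes can be sketched as a two-stage attention step. This is a simplification of what DeepSeek actually does: here the indexer scores are passed in precomputed, where a real lightweight indexer would be a small learned network, and dimensions are kept tiny so the example runs.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sparse_attention(query, keys, values, index_scores, k=2):
    # Stage 1: a cheap indexer picks the k most relevant past tokens.
    top = sorted(range(len(keys)), key=lambda i: -index_scores[i])[:k]
    # Stage 2: full attention runs only over that subset, instead of
    # over every token in the context.
    weights = softmax([dot(query, keys[i]) for i in top])
    d = len(values[0])
    return [sum(w * values[i][j] for w, i in zip(weights, top))
            for j in range(d)]
```

The saving is that the expensive dot-product attention touches k tokens rather than the whole context; the risk, as noted above, is that the indexer masks out a token you actually needed, which dense attention (possibly with near-zero weight) would never fully discard.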
Sebastian Raschka
(02:49:34)
Occasionally, in some layers you might, but it’s wasteful. But right now, I think if you use everything, you’re on the safe side; it gives you the best bang for the buck because you never miss information. And right now, I think this year will also be the year of figuring out, like you said, how to be smarter about that. Right now people want to have the next state-of-the-art, and the state-of-the-art happens to be the brute force, expensive thing. Once you have that, like you said, you want to keep that accuracy but see how we can do that cheaper now using tricks.
Nathan Lambert
(02:50:07)
Yeah, all this scaling thing. Like the reason we get the Claude 4.5 Sonnet model first is because you can train it faster and you’re not hitting these compute walls as soon. They can just try a lot more things and get the model out faster, even though the bigger model is actually better.

Robotics

Lex Fridman
(02:50:22)
I think we should say that there’s a lot of exciting stuff going on in the AI space. My mind has recently been really focused on robotics, so today we almost entirely didn’t talk about robotics. There’s a lot of stuff on image generation and video generation. I think it’s fair to say that the most exciting research work in terms of intensity and fervor is in the LLM space, which is why I think it’s justified for us to focus on the LLMs we’re discussing. But it’d be nice to bring in certain things that might be useful. For example, world models—there’s growing excitement about that. Do you think there will be any use in this coming year for world models in the LLM space?
Sebastian Raschka
(02:51:08)
Also with LLMs, an interesting thing here is that if we unlock more LLM capabilities, it automatically unlocks all the other fields, because it makes progress faster. A lot of researchers and engineers use LLMs for coding, so even if they work on robotics, optimizing the LLMs that help with coding pays off. But then yes, world models are interesting. It's basically where you have the model run a simulation of the world, like a little toy version of the real thing, which can unlock things like data the LLM is not aware of; it can simulate things. LLMs happen to work well with pre-training and next-token prediction, but we could do this in a more sophisticated way.
Sebastian Raschka
(02:52:05)
There was a paper, I think by Meta, called "Code World Model." They basically apply the concept of world models to LLMs where, instead of just having next-token prediction and verifiable rewards checking the answer's correctness, they also make sure the intermediate variables are correct. The model is basically learning a code environment. I think this makes a lot of sense; it's just expensive to do. But it is making things more sophisticated by modeling the whole process, not just the result, and that can add more value.
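The "check the intermediate variables, not just the final answer" idea can be sketched as a grading function. This is a hypothetical toy, not the method from the paper being discussed; the variable names, programs, and scoring scheme are invented.

```python
def grade_program(src, expected_vars, expected_answer):
    """Toy process-level grader: run a candidate program and reward it for
    getting intermediate variables right, not only the final answer."""
    env = {}
    try:
        exec(src, env)  # run the candidate code in an isolated namespace
    except Exception:
        return 0.0      # crashing code gets no reward
    checks = list(expected_vars.items()) + [("answer", expected_answer)]
    hits = sum(1 for name, want in checks if env.get(name) == want)
    return hits / len(checks)  # partial credit for a correct process

# A candidate that reaches the right answer via the expected intermediate:
good = "subtotal = 2 + 3\nanswer = subtotal * 10"
# A candidate that lands on the right answer with the wrong process:
lucky = "subtotal = 999\nanswer = 50"

score_good = grade_program(good, {"subtotal": 5}, 50)
score_lucky = grade_program(lucky, {"subtotal": 5}, 50)
```

An answer-only verifiable reward would score both candidates identically; grading the intermediate state separates a correct process from a lucky guess.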
Sebastian Raschka
(02:52:51)
I remember when I was a grad student, there was a competition called CASP where they do protein structure prediction: they predict the structure of proteins that are not solved yet. In a sense, this is actually great, and I think we need something like that for LLMs too, where you run the benchmark but no one knows the solution until it's revealed after the fact. When AlphaFold came out, it crushed this benchmark. There were multiple iterations, but I remember the first one explicitly modeled the physical interactions and the physics of the molecule.
Sebastian Raschka
(02:53:34)
Also things like impossible angles. Then in the next version, I think they got rid of this and just used brute force, scaling it up. With LLMs, we are currently in this brute-force scaling phase because it just happens to work, but I do think at some point it might make sense to bring back that kind of structure. With world models, that might actually be quite cool. And of course for robotics, world models are directly relevant.
Lex Fridman
(02:54:03)
Yeah, and robotics is very explicit. There's the problem of locomotion and the problem of manipulation. Locomotion is much more solved, especially in the learning domain. But there's a lot of value, just like with the initial protein folding systems, in bringing in the traditional model-based methods. So it's unlikely that you can just learn the manipulation, or the whole-body loco-manipulation problem, end-to-end. That's the dream. But when you look at the magic of the human hand and the complexity of the real world, you realize it's really hard to learn this all the way through, the way I guess AlphaFold 2 did.
Nathan Lambert
(02:54:40)
I’m excited about the robotic learning space. I think it’s collectively getting supercharged by all the excitement and investment in language models generally. The infrastructure for training transformers, which is a general modeling thing, is becoming world-class industrial tooling. Wherever there was a limitation for robotics, it’s just way better now. There’s way more compute. They take these language models and use them as central units where you can do interesting explorative work around something that already works. And then I see it emerging as, kind of like we talked about, Hugging Face transformers and Hugging Face.
Nathan Lambert
(02:55:19)
I think when I was at Hugging Face, I was trying to get this to happen, but it was too early: open robotics models on Hugging Face that enable people to contribute data and fine-tune them. I think we're much closer now, and the related investment in robotics and self-driving cars enables this. Once you get to the point where you have this sort of ecosystem, someone can download a robotics model and fine-tune it to their robot, or share datasets across the world. There's some work in this area, like RT-X from a few years ago, where people are starting to do that. But once they have this ecosystem, it'll look very different. And this whole post-ChatGPT boom is putting more resources into that, which I think makes it a very good area for doing research.
Lex Fridman
(02:56:02)
This is also resulting in much better, more accurate, more realistic simulators being built, closing this sim-to-real gap in the robotic space. But you know, you mentioned a lot of excitement and investment. The downside of that, which happens in hype cycles—I personally believe, and most robotics people believe—is that robotics is not going to be solved on the timescale being implicitly or explicitly promised. So what happens when all these robotics companies spring up and then they don’t have a product that works? Then there’s going to be this crash of excitement, which is nerve-wracking. Hopefully something else will swoop in so that the continued development of some of these ideas keeps going.
Sebastian Raschka
(02:56:53)
I think it’s also related to the continual learning issue. The real world is so complex, whereas with LLMs, you don’t really need to have something learn for the user because there are a lot of things everyone has to do—everyone maybe wants to fix their grammar in their email or code. It’s more constrained, so you can prepare the model for that. But preparing a robot for the real world is harder. You have robotic foundation models, and you can learn things like grasping, but every house is different. It’s so different that the robot would have to learn on the job, essentially. And I think that is the bottleneck right now: customizing it on the fly.
Lex Fridman
(02:57:42)
I don’t think I can possibly understate the importance of the thing that doesn’t get talked about almost at all by robotics folks or anyone, and that is safety. All the interesting complexities we talk about regarding learning, all the failure modes and failure cases—everything we’ve been talking about with LLMs where sometimes it fails in interesting ways—all of that is fun and games in the LLM space. In the robotic space, in people’s homes, across millions of minutes and billions of interactions, you really are almost allowed to fail never. When you have embodied systems put out there in the real world, you just have to solve so many problems you never thought you’d have to solve when you’re just thinking about the general robot learning problem.
Nathan Lambert
(02:58:32)
I’m so bearish on in-home learned robots for consumer purchase. I’m very bullish on self-driving cars, and I’m very bullish for robotic automation, like Amazon distribution— …where Amazon has built whole new distribution centers designed for robots first rather than humans. There’s a lot of excitement in AI circles about AI enabling automation—
Nathan Lambert
(02:58:54)
…and mass-scale manufacturing, and I do think that the path to robots doing that is more reasonable. It’s a thing that is designed and optimized to do a repetitive task that a human could conceivably do but doesn’t want to. But it’s also going to take a lot longer than people probably predict. I think the leap from the AI singularity to scaling up mass manufacturing in the US because we have a massive AI advantage is one that is troubled by a lot of political and other challenging problems.

Timeline to AGI

Lex Fridman
(02:59:31)
Let’s talk about timelines specifically: timelines to AGI or ASI. Is it fair, as a starting point, to say that nobody really agrees on the definitions of AGI and ASI?
Nathan Lambert
(02:59:46)
I think there’s a lot of disagreement, but I’ve been getting pushback where people say it is something that could reproduce most digital economic work. The remote worker is a fairly reasonable example. I think OpenAI’s definition is somewhat related to that—an AI that can do a certain number of economically valuable tasks—which I don’t really love as a definition, but it could be a grounding point. Language models today, while immensely powerful, are not this remote worker drop-in. There are things an AI could do that are way harder than remote work, like solving a…
Nathan Lambert
(03:00:29)
…finding an unexpected scientific discovery that you couldn’t even posit, which would be an example of something people call an artificial superintelligence problem. Or taking in all medical records and finding linkages across certain illnesses that people didn’t know or figuring out that some common drug can treat a niche cancer. They would say that is a superintelligence thing. So these are natural tiers. My problem is that it becomes deeply entwined with the quest for meaning in AI and these religious aspects. There are different paths you can take.
Lex Fridman
(03:01:06)
And I don't even know if remote work is a good definition. I liked the AI 2027 report, as it was originally titled. They focus more on code and research taste, so the target there is the superhuman coder. They have several milestone systems: the superhuman coder, the superhuman AI researcher, then the superintelligent AI researcher, and then full ASI. After you develop the superhuman coder, everything else follows quickly. The task is to have fully autonomous, automated coding, so any kind of coding you need to do in order to perform research is fully automated.
Lex Fridman
(03:01:58)
From there, humans would be doing AI research together with that system, and they would quickly be able to develop a system that actually can do the research for you. That's the idea. Initially, their prediction was 2027 or '28, and now they've pushed it back by three to four years, to a mean prediction of 2031. My prediction is probably even beyond 2031, but at least you can think concretely about how difficult it is to fully automate programming.
Nathan Lambert
(03:02:31)
Yeah, I disagree with some of their presumptions and dynamics on how it would play out, but I think they did good work in defining concrete milestones to tell a useful story. That’s why the reach of this AI 2027 document well transcended Silicon Valley—because they told a good story and did a lot of rigorous work.
Nathan Lambert
(03:02:53)
I think the camp that I fall into is that AI is so-called jagged: it will be excellent at some things and really bad at others. When they're close to this automated software engineer, what it will be good at is traditional ML systems and front end; the model is excellent at those. But at distributed ML, the models are actually quite bad, because there's so little training data on doing large-scale distributed learning. This is something we already see, and I think it will just get amplified. And then it gets messier in these trade-offs, and then there's how you think AI research works, and so on.
Lex Fridman
(03:03:28)
So you think basically a superhuman coder is almost unachievable, meaning that because of the jagged nature of the thing, you're just always going to have gaps in capabilities?
Nathan Lambert
(03:03:38)
I think it’s assigning completeness to something where the models are kind of superhuman at some types of code, and I think that will continue. And people are creative, so they’ll utilize these incredible abilities to fill in the weaknesses of the models and move really fast. There will always be, for a long time, this dance between the humans enabling this thing that the model can’t do, and the best AI researchers are the ones that can enable this superpower.
Nathan Lambert
(03:04:04)
And I think those lines track what we already see. Like Claude Code for building a website: you can stand up a beautiful website in a few hours, or do data analysis. But the whole thing is going to keep getting better at these things, and we'll pick up some new code skills and stuff along the way. Linking to what's happening in big tech: this AI 2027 report leans into the singularity idea, where I think research is messy and social and largely in the data in ways that AI models can't process. But what we do have today is really powerful, and these tech companies are all collectively buying into this with tens of billions of dollars of investment. So we are going to get some much better version of ChatGPT, a much better version of Claude Code, than we already have.
Nathan Lambert
(03:04:50)
I think that it’s just hard to predict where that is going, but the bright clarity of that future is why some of the most powerful people in the world are putting so much money into this. And I think it’s just kind of small differences—we don’t actually know what a better version of ChatGPT is, but also can it automate AI research? I would say probably not, at least in this timeframe. Big tech is going to spend $100 billion much faster than we get an automated AI researcher that enables an AI research singularity.
Lex Fridman
(03:05:22)
So you think your prediction would be, if this is even a useful milestone, more than 10 years out?
Nathan Lambert
(03:05:30)
I would say less than that on the software side, but I think longer than that on things like research.
Lex Fridman
(03:05:36)
Well, let’s just for fun try to imagine a world where all software writing is fully automated. Can you imagine that world?
Nathan Lambert
(03:05:46)
By the end of this year, the amount of software that’ll be automated will be so high. But it’ll be things like you’re trying to train a model with RL and you need to have multiple bunches of GPUs communicating with each other. That’ll still be hard, but I think it’ll be much easier.
Lex Fridman
(03:06:02)
One way to think about the full automation of programming is lines of useful code written, and the ratio of that to the number of humans in the loop. Presumably there will be, for a long time, humans in the loop of software writing; there will just be fewer and fewer relative to the amount of code written, right? And with the superhuman coder, I think the presumption is that the number of humans in the loop goes to zero. What does that world look like when the number of humans in the loop is in the hundreds, not in the hundreds of thousands?

Will AI replace programmers?

Nathan Lambert
(03:06:39)
I think software engineering will be driven more toward system design and the goals of outcomes. I think this has been happening over the last few weeks, where people have gone from a month ago saying, "Oh yeah, agents are kind of slop," which is a famous Karpathy quote, to the industrialization of software, when anyone can just create software at their fingertips. I do think we are closer to that side of things, and it takes direction and understanding how the systems work to extract the best from the language models. And I think it's hard to accept the gravity of how much is going to change with software development, and how many more people can do things without ever looking at the code.
Sebastian Raschka
(03:07:22)
I think what's interesting is to think about whether these systems will be independent. I have no doubt that LLMs will at some point solve coding the way calculators solved calculating, right? At some point, humans developed a tool such that you never need a human to calculate a number; you just type it in, and it's an algorithm. I think it's probably the same for coding. What will happen is you will just say, "Build that website," and it will make a really good website, and then you maybe refine it. But will it do things independently, where…
Sebastian Raschka
(03:07:59)
Will you still have humans asking the AI to do something? Like will there be a person to say, “Build that website?” Or will there be AI that just builds websites or something, or whatever?
Lex Fridman
(03:08:12)
I think talking about building websites is the—
Nathan Lambert
(03:08:15)
Too simple.
Sebastian Raschka
(03:08:16)
Yeah. Sure.
Lex Fridman
(03:08:16)
It's just that the problem with websites, and with the web in general, HTML and all that kind of stuff, is that it's very resilient to slop. It will show you slop; it's good at showing slop. I would rather think of safety-critical systems, like asking AI to end-to-end generate something that manages logistics, or manages a fleet of cars, all that kind of stuff. So it end-to-end generates that for you.
Nathan Lambert
(03:08:45)
I think a more intermediate example is take something like Slack or Microsoft Word. I think if the organizations allow it, AI could very easily implement features end-to-end and do a fairly good job for things that you want to try. You want to add a new tab in Slack that you want to use, and I think AI will be able to do that pretty well.
Lex Fridman
(03:09:06)
Actually, that’s a really great example. How far away are we from that?
Nathan Lambert
(03:09:09)
Like this year.
Lex Fridman
(03:09:11)
See, I don’t know. I don’t know.
Nathan Lambert
(03:09:14)
I guess I don't know how bad production codebases are, but I think that within, on the order of, a few years, a lot of people are going to be pushed to be more like a designer and product manager, where you have multiple of these agents that can try things for you, and they might take one to two days to implement a feature or attempt to fix a bug. And you have these dashboards, and I think Slack is actually a good dashboard, where your agents will talk to you and you'll give feedback. But things like, I make a website and it asks, "Do you want to make a logo that's passable?" I think these cohesive design things, and the style, and deciding what to add next, are going to be very hard for models.
Lex Fridman
(03:09:54)
I just… Okay. So I hang out with a lot of programmers, and some of them are a little bit on the skeptical side in general; that's just the vibe. I just think there's a lot of complexity involved in adding features to complex systems. Look at the browser, Chrome. If I wanted to add a feature, say, having tabs on the left side as opposed to up top, that's an interface change, right? I think this is not a next-year thing.
Nathan Lambert
(03:10:26)
In one of the Claude releases this year, one of their tests was to give it a piece of software and leave Claude running to recreate it entirely, and it could already almost rebuild Slack from scratch, just given the parameters of the software and left in a sandbox environment to do that.
Lex Fridman
(03:10:41)
So the from-scratch part, I like almost better.
Nathan Lambert
(03:10:44)
So it might be that the smaller and newer companies are advantaged, because they're like, "We don't have to have the bloat and complexity, and therefore this feature exists."
Sebastian Raschka
(03:10:53)
And I think this gets to the point that you mentioned that some people you talk to are skeptical, and I think that’s not because the LLM can’t do X, Y, Z. It’s because people don’t want it to do it this way.
Lex Fridman
(03:11:05)
Some of that could be a skill issue on the human side; unfortunately, we have to be honest with ourselves. And some of that could be an underspecification issue. Programming… this is like a communication issue, as in relationships and friendships: you're assuming the LLM is somehow supposed to read your mind. I think this is where spec-driven design is really important: using natural language, you specify exactly what you want.
Nathan Lambert
(03:11:32)
I think if you talk to people at the labs, they use these in their training and production code. Claude Code is built with Claude Code, and they all use these things extensively. And Dario talks about how much of Claude’s code… It’s like these people are slightly ahead in terms of the capabilities—
Nathan Lambert
(03:11:49)
—they have, and they probably spend on inference. They could spend 10 to 100 times as much as we're spending; we're on a lowly $100- or $200-a-month plan. They truly let it rip. And with the pace of progress that we have: a year ago we didn't have Claude Code, and we didn't really have reasoning models. Given the difference between sitting here today and what we could do then, it seems like there's a lot of low-hanging fruit to improve these models. The failure modes are pretty dumb. It's like, "Claude, you tried to use a CLI command I don't have installed 14 times, and then I sent you the command to run." From a modeling perspective, that is pretty fixable. So, I don't know.
Lex Fridman
(03:12:34)
I agree with you. I’ve been becoming more and more bullish in general. Speaking to what you’re articulating, I think it is a human skill issue. So Anthropic and other companies are leading the way in understanding how to best use the models for programming; therefore, they’re effectively using them. I think there’s a lot of programmers on the outskirts who don’t… I mean, there’s not a really good guide on how to use them. People are trying to figure it out exactly, but—
Nathan Lambert
(03:13:04)
It might be very expensive. It might be that the entry point for that is $2,000 a month, which is only for tech companies and rich people. That could be it.
Lex Fridman
(03:13:13)
But it might be worth it. If the final result is a working software system, it might be worth it. By the way, it’s funny how we converged from the discussion of the timeline to AGI to something more pragmatic and useful. Is there anything concrete and profound to be said about the timeline to AGI and ASI? Or are these discussions a bit too detached from the day-to-day?
Nathan Lambert
(03:13:39)
There’s interesting bets. There’s a lot of people trying to do Reinforcement Learning with Verifiable Rewards—RLVR—but in real scientific domains. There are startups spending hundreds of millions of dollars in funding, and they have wet labs where they’re having language models propose hypotheses that are tested in the real world. I would say that they’re early, but with the pace of progress—
Nathan Lambert
(03:14:00)
—maybe they’re early by six months and they make it because they were there first, or maybe they’re early by eight years. You don’t really know. So I think that type of moonshot to branch this momentum into other sciences would be very transformative if AlphaFold moments happen in all sorts of other scientific domains by a startup solving this. I think there are startups—maybe Harmonic is one—where they’re going all in on language models plus Lean for math. I think you had another podcast guest where you talked about this recently, and it’s like we don’t know exactly what’s going to fall out of spending $100 million on that model.
Nathan Lambert
(03:14:41)
Most of them will fail, but a couple of them might be big breakthroughs that are very different than ChatGPT or Claude Code type software experiences. Like a tool that’s only good for a PhD mathematician but makes them 100 times more effective.
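The RLVR setup mentioned above rests on a reward that can be checked mechanically rather than scored by a learned reward model. A minimal sketch, assuming a math-style task where the model marks its final answer with a "####" delimiter (the delimiter convention and the example strings are illustrative, not from any specific system):

```python
def verifiable_reward(completion, reference):
    """Binary verifiable reward: 1.0 if the final answer extracted from the
    model's completion matches the reference exactly, else 0.0."""
    if "####" not in completion:
        return 0.0  # no parseable final answer, no reward
    answer = completion.rsplit("####", 1)[1].strip()
    return 1.0 if answer == reference.strip() else 0.0

# In RLVR, this scalar replaces a learned reward model: sampled completions
# are scored against the ground truth and the policy is pushed toward
# higher-reward samples.
r_correct = verifiable_reward("Half of 14 is 7, so ... #### 7", "7")
r_wrong = verifiable_reward("I think it's ... #### 9", "7")
```

Extending this to wet-lab science, as described above, means the "checker" becomes a real-world experiment rather than a string comparison, which is what makes those bets expensive.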
Sebastian Raschka
(03:14:58)
I agree. I think this will happen in a lot of domains, especially those with a lot of resources like finance, legal, and pharmaceutical companies. But then again, is it really AGI? Because we are specializing it again. Is it really that much different from how we had specialized algorithms back in the day? I think it’s just the same thing but way more sophisticated. Is there a threshold when we call it AGI? I think the real cool thing here is that we have foundation models that we can specialize. That’s the breakthrough.
Sebastian Raschka
(03:15:34)
Right now, I think we are not there yet because, first, it's too expensive, but also OpenAI doesn't just give away their model for you to customize. I can imagine a business model where OpenAI says at some point, "Hey, Bank of America, for $100 million we will do your custom model." I think that will be the huge economic value add. The other thing, though, is: what is the differentiating factor? If everyone uses ChatGPT, they will all do the same thing. Everyone is moving in lockstep, but usually companies want to have a competitive advantage. I think there is no way around using some of their private data and experimenting with specialization. It's going to be interesting.
Nathan Lambert
(03:16:26)
Given the pace of progress, it does feel like things are coming. I don’t think the AGI and ASI thresholds are particularly useful.
Lex Fridman
(03:16:35)
I think the real question, and this relates to the remote worker thing, is: when are we going to see a big, obvious leap in economic impact? Because currently there has not been an obvious leap in economic impact from LLMs, for example. Aside from AGI or ASI, there's a real question of when we are going to see a GDP jump.
Nathan Lambert
(03:17:06)
Yeah, what is the GDP made up of? A lot of it is financial services, so I don't know; it's just hard for me to think about the GDP bump. But I would say that software development becomes valuable in a different way when you no longer have to look at the code anymore. So when it's like, Claude will make you a small business: Claude can set up your website, your bank account, your email, and whatever else, and you just have to express what you're trying to put into the world. That's not just an enterprise market, but it is hard. I don't know how you get people to try doing that. I guess if ChatGPT can do it; people are trying ChatGPT.
Lex Fridman
(03:17:49)
I think it boils down to the scientific question of how hard tool use is to solve. A lot of the stuff you're implying, the remote work stuff, is tool use. It's like computer use: you have an LLM that goes out there, this agentic system, and does something in the world, and only screws up 1% of the time.
Nathan Lambert
(03:18:11)
Computer use is a good example of what labs care about and we haven’t seen a lot of progress on.
Lex Fridman
(03:18:12)
Or less.
Nathan Lambert
(03:18:12)
We saw multiple demos in 2025 of, like, Claude can use your computer, or OpenAI had Operator, and they all suck. So they're investing money in this, and I think that'll be a good example. Actually, taking over the whole screen seems a lot harder than having an API that they can call in the back end. Some of that is you have to set up a different environment for them all to work in: they're not working on your MacBook; they are individually interfacing with Google and Amazon and Slack, and they handle all these things in a very different way than humans do. So some of this might be structural blockers.
Sebastian Raschka
(03:18:55)
Also, specification-wise, I think the problem for arbitrary tasks is that you still have to specify what you want your LLM to do. What is the environment? How do you specify it? You can say what the end goal is, but what if it can't solve the end goal? With LLMs, if you ask for text, the model can always clarify or do sub-steps. But how do you put that information into a system that, let's say, books a trip for you? You can say, "Well, you screwed up my credit card information," but even to get it to that point, as a user, how do you guide the model before it can even attempt the task? I think the interface is really hard.
Lex Fridman
(03:19:36)
Yeah, it has to learn a lot about you specifically. And this goes to continual learning—about the general mistakes that are made throughout, and then mistakes that are made through you.
Nathan Lambert
(03:19:48)
All the AI interfaces are getting set up to ask humans for input. Claude Code, which we talked about a lot, asks for feedback and asks questions: if it doesn't have enough specification on your plan or your desire, it starts to ask, "Would you rather…?" We talked about Memory, which saves across chats. Its first implementation is kind of odd, where it'll mention my dog's name or something in a chat, and I'm like, "You don't need to be subtle about this. I don't care." But things are emerging, like ChatGPT's Pulse feature.
Nathan Lambert
(03:20:19)
Which is like a curated couple of paragraphs with links to something to look at or talk about. And people talk about how the language models are going to ask you questions; it's probably going to work. The language model knows you had a doctor's appointment or something, and it asks, "Hey, how are you feeling after that?" Which, again, goes into territory where humans are very susceptible, and there's a lot of social change to come. But also, they're experimenting with having the models engage. Some people really like this Pulse feature, which processes your chats, automatically searches for information, and puts it in the ChatGPT app. So there's a lot of things coming.
Sebastian Raschka
(03:20:58)
I used that feature before, and I always feel bad because it does that every day, and I rarely check it out. How much compute is burned on something I don’t even look at, you know?
Nathan Lambert
(03:21:11)
There’s also a lot of idle compute in the world, so don’t feel too bad.
Lex Fridman
(03:21:16)
Okay. Do you think new ideas might be needed? Is it possible that the path to AGI—whatever that is, however we define that—to solve computer use more generally, to solve biology and chemistry and physics, sort of the Dario definition of AGI or powerful AI? Do you think it’s possible that totally new ideas are needed? Non-LLM, non-RL ideas. What might they look like? We’re now going into philosophy land a little bit.
Nathan Lambert
(03:21:50)
For something like a singularity to happen, I would say yes. And the new ideas could be architectures or training algorithms, which are fundamental deep learning things. But they're, by their nature, pretty hard to predict. I think we will get very far even without those advances; like, we might get this software solution, but it might stop at software and not get to computer use without more innovation. So a lot of progress will be coming, but if you zoom out, there are still ideas in the next 30 years that are going to look like a major scientific innovation that enabled the next chapter of this. And I don't know if it comes in one year or in 15 years.
Lex Fridman
(03:22:32)
Yeah. I wonder if the Bitter Lesson holds true for the next 100 years, and what that looks like.
Nathan Lambert
(03:22:37)
If scaling laws are fundamental in deep learning, I think the Bitter Lesson will always apply, because compute will become more abundant. But even with abundant compute, think of a 2D plot of performance versus compute: the methods with a steeper scaling-law slope or a better offset, the ones that get 100x more out of the same compute, will win.
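The slope-versus-offset point can be made concrete with two hypothetical power-law curves of the form loss = a * C^(-alpha) + floor. The constants below are invented purely for illustration: method A starts ahead at small compute, but method B's steeper exponent wins once compute grows.

```python
def scaling_loss(compute, a, alpha, floor):
    """Hypothetical scaling law: loss falls as a power of compute C,
    flattening out at an irreducible floor."""
    return a * compute ** (-alpha) + floor

def method_a(c):
    # Better offset, shallower slope: ahead early, improves slowly.
    return scaling_loss(c, a=10.0, alpha=0.05, floor=1.0)

def method_b(c):
    # Worse start, steeper slope: gets more out of each extra FLOP.
    return scaling_loss(c, a=100.0, alpha=0.30, floor=1.0)

small, large = 1e3, 1e9
# At small compute A is ahead; at large compute B's steeper slope dominates,
# so the crossover happens somewhere between the two budgets.
```

This is the sense in which abundant compute does not erase the advantage of a better method: whichever curve descends faster keeps winning as budgets grow.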
Lex Fridman
(03:23:01)
It might be something like literally computer clusters orbiting Earth with solar panels.
Nathan Lambert
(03:23:09)
The problem with that is heat dissipation. You get all the radiation from the sun, and you don't have any air to dissipate heat. But there is a lot of space to put clusters and a lot of solar energy there, and there probably could be the engineering will to solve the heat problem, so it could happen.
Lex Fridman
(03:23:27)
Is it possible—and we should say that it definitely is possible—that we're basically going to plateau this year? Not in terms of the system capabilities, but in terms of what those capabilities actually mean for human civilization. So on the coding front, really nice websites will be built. Very nice autocomplete.
Lex Fridman
(03:23:53)
Very nice way to understand codebases and maybe help debug, but really just a very nice helper on the coding front. It can help research mathematicians do some math. It can help you with shopping. It's a nice helper. It's Clippy on steroids. What else? Maybe a good education tool, and all that kind of stuff, but computer use turns out to be extremely difficult to solve. So I'm trying to frame the cynical case: in all these domains there's not a really huge economic impact, while we realize how costly it is to train these systems at every level, both the pre-training and the inference. How costly the inference is, the reasoning, all of that. Is that possible? And how likely is that, do you think?
Nathan Lambert
(03:24:47)
When you look at the models, there are so many obvious things to improve, and it takes so long to train these models and to do this art, that with the ideas we have it'll take us multiple years to actually saturate whatever benchmark or performance we are searching for. It might serve very narrow niches. The average ChatGPT user might not get a lot of benefit out of this, but it is going to serve different populations by getting better at different things.

Is the dream of AGI dying?

Lex Fridman
(03:25:18)
But I think what everybody’s chasing now is a general system that’s useful to everybody. So, okay, if that’s not… that can plateau, right?
Nathan Lambert
(03:25:28)
I think that dream is actually kind of dying. As you talked about with the specialized models where it’s like… and multimodal is often… like, video generation is a totally different thing.
Lex Fridman
(03:25:39)
“That dream is kind of dying” is a big statement, because I don’t know if it’s dying. If you ask the actual Frontier Lab people, they’re still chasing it, right?
Sebastian Raschka
(03:25:48)
I do think they are still rushing to get the next model out, which will be much better than the previous one. “Much” is a relative term, but it will be better than the previous one. I can’t see them slowing down. I just think the gains will be made or felt more through not only scaling the model, but now… I feel like there’s a lot of tech debt. It’s like, “Well, let’s just put the better model in there, and better model, better model.” And now people are like, “Okay, let’s also at the same time improve everything around it too.”
Sebastian Raschka
(03:26:20)
Like the engineering of the context and inference scaling. And the big labs will still keep doing that. And now the smaller labs will also catch up to that, because they are hiring more; there will be more people. LLMs are kind of a circle: they also make the people building them more productive, so it's just an amplifier. I think what we can expect is amplification, but not a paradigm change. I don't think that is coming. Everything will just be amplified and amplified and amplified, and I can see that continuing for a long time.
Nathan Lambert
(03:26:52)
Yeah. I guess my statement that the dream is dying depends on exactly what you think it's going to be doing. Like, Claude Code is a general model that can do a lot of things, but it depends a lot on integrations and other things. I bet Claude Code could do a fairly good job of doing your email, and the hardest part is figuring out how to give it information and how to get it to be able to send your emails and things like this. But I think it goes back to the "one model to rule everything" ethos, which is just a thing in the cloud that handles your entire digital life and is way smarter than everybody.
Nathan Lambert
(03:27:34)
So it’s an interesting leap of faith to go from Claude Code becomes that—which, in some ways, there are some avenues for that—but I do think that the rhetoric of the industry is a little bit different.
Sebastian Raschka
(03:27:49)
I think the immediate thing we will feel next as normal people using LLMs will probably be related to something trivial, like making figures. Right now LLMs are terrible at making figures. Is it because we are getting served the cheap models, with less inference compute than what's available behind the scenes? Maybe by turning some cranks we can already get better figures, but if you ask one today to draw a flowchart of X, Y, Z, it's terrible most of the time. And it is a very simple task for a human; I think it's almost easier sometimes to draw something than to write something.
Nathan Lambert
(03:28:25)
Yeah, it does feel odd that multimodal understanding isn't better solved.
Lex Fridman
(03:28:31)
I think we’re not saying one actually obvious thing that we’re not realizing, that’s a gigantic thing that’s hard to measure, which is making all of human knowledge accessible… …To the entire world. One of the things that I think is hard to articulate, but there’s just a huge difference between Google Search and an LLM. I feel like I can basically ask an LLM anything and get an answer, and it’s doing less and less hallucination.
Lex Fridman
(03:29:04)
And that means understanding my own life, figuring out a career trajectory, figuring out how to solve the problems all around me, learning about anything through human history. I feel like nobody’s really talking about that because they just immediately take it for granted that it’s awesome. That’s why everybody’s using it—it’s because you get answers for stuff, and think about the impact of that across time. This is not just in the United States; this is all across the world. Kids throughout the world being able to learn these ideas—the impact that has across time is probably where the real GDP growth will be. It won’t be like a leap.
Lex Fridman
(03:29:51)
It’ll be that that’s how we get to Mars, that’s how we build these things, that’s how we have a million new OpenAIs, all the kind of innovation that happens from there. And that’s just this quiet force that permeates everything, right? Human knowledge.
Sebastian Raschka
(03:30:06)
I do agree with you, and in a sense it makes knowledge more accessible, but it also depends on what the topic is. For something like math, you can ask it questions and it answers, but if you want to learn a topic from scratch—we talked about this earlier—I think the sweet spot is still math textbooks where someone laid it out linearly. That is a proven strategy to learn a topic, and it makes sense if you start from zero to get information-dense text to soak it up, but then you use the LLM to make infinite exercises.
Sebastian Raschka
(03:30:47)
If you have problems in a certain area or have questions about things you are uncertain about, you ask it to generate example problems, you solve them, and then maybe you need more background knowledge and you ask it to generate that. But it won’t give you anything that is not in the textbook. It’s just packaging it differently, if that makes sense.
Sebastian Raschka
(03:31:13)
But then there are things where it also adds value in a more timely sense, where there is no good alternative besides a human doing it on the fly. For example, if you're planning to go to Disneyland and you try to figure out which tickets to buy for which park when, well, there is no textbook on that. There is no information-dense resource on that. There's only the sparse internet, and then there is a lot of value in the LLM. You just ask it. You have the constraints on traveling on these specific days, you want to go to certain places, and you ask it to figure out what you need, when and from where, what it costs, and stuff like that. It is a very customized, on-the-fly package. Personalization is essentially like—

How will AI make money?

Sebastian Raschka
(03:32:02)
…pulling information from the sparse internet, the non-information-dense thing where there’s no better version that exists. You make it from scratch almost.
Lex Fridman
(03:32:12)
And if it does exist, it's full of—speaking of Disney World—ad slop. Like any city in the world, if you ask "what are the top 10 things to do?", an LLM is just way better to ask than anything on the internet.
Nathan Lambert
(03:32:29)
Well, for now, that’s because they’re massively subsidized, and eventually they’re going to be paid for by ads.
Lex Fridman
(03:32:35)
Oh my goodness.
Nathan Lambert
(03:32:37)
It’s coming.
Lex Fridman
(03:32:38)
No. I’m hoping there’s a very clear indication of what’s an ad and what’s not an ad in that context, but—
Sebastian Raschka
(03:32:46)
That’s something I mentioned a few years ago. It’s like, I don’t know, if you are looking for a new running shoe, is it a coincidence that Nike maybe comes up first? Maybe, maybe not. I think there are clear laws around this. You have to be clear about that, but I think that’s what everyone fears. It’s like the subtle message in there or something like that. But also, this brings us to the topic of ads where, I think this was a thing, hopefully they try to launch in 2025 because I think they’re still not making money in that other way right now, so… …Like having actual ad spots in there. And then the thing, though, is they couldn’t because there are alternatives without ads and people would just flock-
Sebastian Raschka
(03:33:31)
…to the other products. And it also is just crazy how they’re one-upping each other, spending so much money just to get the users.
Nathan Lambert
(03:33:41)
I think so. Like some Instagram ads—I don't use Instagram, but I understand the appeal of paying a platform to find users who will genuinely like your product. That is the best case of things like Instagram ads.
Nathan Lambert
(03:33:56)
But there are also plenty of cases where advertising is awful for incentives. I think a world where the power of AI can integrate with that positive view—like, I am a person, I have a small business, I want to make the best damn steak knives in the world, and I want to sell them to somebody who needs them. If AI can make that sort of advertising work even better, that's very good for the world, especially with digital infrastructure, because that's how the modern web has been built. But that's not to say that feeds made addictive so you can show people more content are a good thing. So I think even OpenAI would say they want to find a way to capture the monetization upside of ads while still giving their users agency.
Nathan Lambert
(03:34:45)
And I personally would think that Google is probably going to be better at figuring out how to do this because they already have ad supply. If they figure out how to turn this demand in their Gemini app into useful ads, then they can turn it on. I don’t know if I think it’s this year, but there will be experiments with it.
Sebastian Raschka
(03:35:06)
I do think what holds companies back right now is really just that the competition is not doing it. It's more of a reputation thing. I think people are just afraid right now of ruining their reputation or losing users, because it would make headlines if someone launched these ads. But-
Nathan Lambert
(03:35:23)
Unless they were great, but the first ads won’t be great because it’s a hard problem that we don’t know how to solve.
Sebastian Raschka
(03:35:28)
Yeah, I think also the first version of that will likely be something like on X, like the timeline where you have a promoted post sometimes in between. It’ll be something like that where it will say “promoted” or something small, and then there will be an image or something. I think right now the problem is who makes the first move.
Nathan Lambert
(03:35:43)
If we go 10 years out, the proposition for ads is that you will make so much money on ads, by having so many users, that you can use this to fund better R&D and make better models, which is why YouTube is dominating the market. Netflix is scared of YouTube. They make, I don't know—I pay $28 a month for premium. They make at least $28 a month off of me and many other people, and they're just creating such a dominant position in video. So I think that's the proposition: ads can give you a sustained advantage in what you're spending per user. But there's so much money in it right now that somebody starting that flywheel is scary, because it's a long-term bet.

Big acquisitions in 2026

Lex Fridman
(03:36:29)
Do you think there’ll be some crazy big moves this year business-wise? Like Google or Apple acquiring Anthropic or something like this?
Nathan Lambert
(03:36:40)
Dario will never sell, but we are starting to see some types of consolidation, with Groq being valued at $20 billion and Scale AI at almost $30 billion. There are countless other deals structured in a way that is actually detrimental to the Silicon Valley ecosystem: these licensing deals where not everybody gets brought along, rather than a full acquisition that benefits the rank-and-file employees by getting their stock vested. That's a big issue for Silicon Valley culture to address, because the startup ecosystem is its lifeblood. If you join a startup, even if it's not that successful, your startup very well might get acquired at a cheap premium and you'll get paid out for your equity.
Nathan Lambert
(03:37:24)
And these licensing deals are essentially taking the top talent a lot of the time. I think the deal for Groq to NVIDIA is rumored to be better for the employees, but it is still this antitrust-avoiding thing. I think that this trend of consolidation will continue. Me and many smart people I respect have been expecting consolidation to have happened sooner, but it seems like things are starting to turn. But at the same time, you have companies raising ridiculous amounts of money for reasons that I don’t understand. I’m like, “I don’t know why you’re taking that money.” So it’s maybe mixed this year, but some consolidation pressure is starting.
Lex Fridman
(03:38:04)
What kind of surprising consolidation do you think we’ll see? You say Anthropic is a “never.” I mean, Groq is a big one—Groq with a Q, by the way.
Nathan Lambert
(03:38:12)
Yeah. There’s just a lot of startups and there’s a very high premium on AI startups. So there could be a lot of $10 billion range acquisitions, which is a really big acquisition for a startup that was maybe founded a year ago. I think Manus.ai—this company based in Singapore that was founded eight months ago and then had a $2 billion exit. I think there will be some other big multi-billion dollar acquisitions, like Perplexity.
Lex Fridman
(03:38:39)
Like Perplexity, right?
Nathan Lambert
(03:38:40)
Yeah, people rumor them to Apple. I think there’s a lot of pressure and liquidity in AI. There’s pressure on big companies to have outcomes, and I would guess that a big acquisition gives people leeway to then tell the next chapter of that story.
Lex Fridman
(03:38:56)
I mean, yeah, we’ve been talking about code. Maybe somebody acquires Cursor.
Nathan Lambert
(03:39:02)
They’re in such a good position because they have so much user data. And we talked about continual learning and stuff; they had one of the most interesting blog posts. They mentioned that their new Composer model was a fine-tune of one of these large Mixture of Experts models from China. You can know that from gossip or because the model sometimes responds in Chinese, which none of the American models do. They had a blog post where they said, “We’re updating the model weights every 90 minutes based on real-world feedback from people using it.” Which is the closest thing to real-world RL happening on a model, and it was just right there in one of their blog posts.
Lex Fridman
(03:39:36)
That’s incredible.
Nathan Lambert
(03:39:36)
—which is super cool.
Lex Fridman
(03:39:38)
And by the way, I should say I use Composer a lot because one of the benefits it has is it’s fast.
Nathan Lambert
(03:39:43)
I need to try it because everybody says this.
Lex Fridman
(03:39:45)
And there’ll be some IPOs potentially. You think Anthropic, OpenAI, xAI?
Nathan Lambert
(03:39:51)
They can all raise so much money so easily that they don’t feel a need to… So long as fundraising is easy, they’re not going to IPO because public markets apply pressure.
Nathan Lambert
(03:40:00)
I think we’re seeing in China that the ecosystem’s a little different, with both MiniMax and Z.ai applying for filing IPO paperwork, which will be interesting to see how the Chinese market reacts. I actually would guess that it’s going to be similarly hypey to the US so long as all this is going, and not based on the realities that they’re both losing a ton of money. I wish more of the American gigantic AI startups were public because it would be very interesting to see how they’re spending their money and have more insight. And also just to give people access to investing in these, because I think that they’re the companies of the era. And the tradition is now for so many of the big startups in the US to not go public.
Nathan Lambert
(03:40:43)
It’s like we’re still waiting for Stripe and their IPO, but Databricks definitely didn’t; they raised like a Series G or something. And I just feel like it’s a kind of a weird equilibrium for the market where I would like to see these companies go public and evolve in that way that a company can.

Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta

Lex Fridman
(03:41:01)
You think 10 years from now some of the frontier model companies are still around? Anthropic, OpenAI?
Nathan Lambert
(03:41:08)
I definitely don’t see it to be a winner-takes-all unless there truly is some algorithmic secret that one of them finds that lets this flywheel. Because the development path is so similar for all of them. Google and OpenAI have all the same products, and Anthropic’s more focused, but when you talk to people, it sounds like they’re solving a lot of the same problems. So I think… there’s offerings that’ll spread out. It’s a very big cake that’s being made that people are going to take money out of.
Lex Fridman
(03:41:36)
I don’t want to trivialize it, but OpenAI and Anthropic are primarily LLM service— —providers. And some of the other companies like Google and xAI, linked to X, do other stuff— —too. And so it’s very possible if AI becomes more commodified that the companies that are just providing LLMs will die.
Sebastian Raschka
(03:42:00)
I think the advantage they have is that they have a lot of users, and I think they will just pivot. Like Anthropic, I think, pivoted. I don't think they originally planned to work on code, but it happened that they found, "Okay, this is a nice niche, and now we are comfortable in this niche and we push on this niche." And I can see the same thing once… Let's say, hypothetically speaking, I'm not sure if it will be true, but let's say Google takes all the market share of the general chatbot. Maybe OpenAI will then be focused on some other sub-topic, like… They have too many users to go away in the foreseeable future, I think.
Lex Fridman
(03:42:37)
I think Google is always ready to say, “Hold my beer,” with AI mode.
Nathan Lambert
(03:42:40)
I think the question is if the companies can support the valuations. I’d see the AI companies being looked at in some ways like AWS, Azure, and GCP, which are all competing in the same space and all very successful businesses. There’s a chance that the API market is so unprofitable that they go up and down the stack to products and hardware. They have so much cash that they can build power plants and build data centers, which is a durable advantage now. But there’s also just a reasonable outcome that these APIs are so valuable and so flexible for developers that they become the likes of something like AWS. But AWS and Azure are also going to have these APIs, so five or six people competing in the API market is hard. So maybe that’s why they get squeezed out.
Lex Fridman
(03:43:27)
You mentioned “RIP LLaMA.” Is there a path to winning for Meta?
Nathan Lambert
(03:43:32)
I think nobody knows. They're moving a lot: they're signing licensing deals with Black Forest Labs, which is image generation, or Midjourney. So in some ways, on the product and consumer-facing AI front, it's too early to tell. I think they have some people that are excellent and very motivated being close to Zuckerberg, so there's still a story to unfold there. Llama is a bit different. Llama was the most focused expression of the organization, and I don't see Llama being supported to that extent anymore. It was a very successful brand for them, so they still might participate in parts of the open ecosystem or continue the Llama brand in a different service, because people know what Llama is.
Lex Fridman
(03:44:21)
You think there’s a Llama 5?
Nathan Lambert
(03:44:24)
Not an open weight one.
Sebastian Raschka
(03:44:26)
It’s interesting. Just to recap a bit, I mean, Llama was the pioneering open-weight model—Llama 1, 2, 3, a lot of love. But I think then what happened, just hypothesizing or speculating, is that the leaders at Meta, like the upper executives, got very excited about Llama because they saw how popular it was in the community. And then I think the problem was trying to use the open source to make a bigger splash. It felt forced, like developing these very big Llama 4 models just to be on the top of the benchmarks.
Sebastian Raschka
(03:45:09)
But I don’t think the goal of Llama models is to be on top of the benchmarks beating, let’s say, ChatGPT or other models. I think the goal was to have a model that people can use, trust, modify, and understand. That includes having smaller models; they don’t have to be the best models. And what happened was just these models were—of course, the benchmarks suggest that they were better than they were because I think they had specific models trained on preferences so that they performed well on the benchmarks. That’s kind of this overfitting thing to force it to be the best. But then at the same time, they didn’t do the small models that people could use, and no one could run these big models.
Sebastian Raschka
(03:45:45)
And then there was kind of a weird thing. I think it’s just because people got too excited about headlines pushing the frontier. I think that’s it.
Lex Fridman
(03:45:54)
And too much on the benchmark-maxing side.
Sebastian Raschka
(03:45:56)
It’s too much work.
Nathan Lambert
(03:45:57)
I think it imploded under internal political fighting and misaligned incentives. The researchers want to build the best models, but there's a layer of organization and management that is trying to demonstrate that they do these things. There are a lot of pieces and rumors where some horrible technical decision was made, and it just seems like it got too bad, to where it all just crashed out.
Lex Fridman
(03:46:24)
Yeah, but we should also give huge props to Mark Zuckerberg. I think it comes from Mark, actually, from the top of the leadership, saying open source is important. The fact that that exists means there could be a Llama 5, where they learn the lessons from the benchmark-maxing and say, "We're going to be like GPT-OSS and provide a really awesome library of open source."
Nathan Lambert
(03:46:51)
What people say is that there's a debate between Mark and Alexandr Wang, who is very bright but much more against open source. To the extent that he has a lot of influence over the AI org, a Llama 5 seems much less likely, because Mark brought him in for fresh leadership in directing AI. And if being open or closed is no longer the defining nature of the model, I don't expect that to be the defining argument between Mark and Alex. They're both very bright, but I have a hard time understanding all of it, because Mark wrote this piece in July of 2024, which was probably the best blog post at the time making the case for open-source AI. And then July 2025 came around and it was like, "We're reevaluating our relationship with open source." So it's just kind of…
Sebastian Raschka
(03:47:42)
But I think also the problem—well, we may have been a bit too harsh, and that caused some of that. I mean, we as open source developers or the open source community. Because even though the model was maybe not what everyone hoped for, it got a lot of backlash. I think that was a bit unfortunate because as a company, they were hoping for positive headlines. Instead of just getting no headlines or positive headlines, they got negative headlines. And then it kind of reflected badly on the company. It’s maybe a spite reaction, almost like, “Okay, we tried to do something nice, we tried to give you something cool like an open source model, and now you are being negative about us.” So in that sense, it looks like, “Well, maybe then we’ll change our mind.” I don’t know.
Lex Fridman
(03:48:38)
Yeah, that’s where the dynamics of discourse on— …X can lead us as a community astray. Because sometimes it feels random; people pick the things they like and don’t like. I mean, you can see the same thing with Grok 4.1 and Grok Code Fast 1.0. I don’t think, vibe-wise, people love it publicly. But a lot of people use it. So if you look at Reddit and X, they don’t really give it praise from the programming community— … but, like, they use it. And the same thing with probably Llama. I don’t understand the dynamics of either positive hype or negative hype. I don’t understand it.
Nathan Lambert
(03:49:25)
I mean, one of the stories of 2025 is the gap Llama left in the US being filled by the rise of these Chinese open-weight models, to the point where that became the single issue I've spent a lot of energy on in the last five months: trying to do policy work to get the US to invest in this.
Lex Fridman
(03:49:41)
So just tell me the story of ATOM.
Nathan Lambert
(03:49:43)
The ATOM Project is… It started as me calling it the American DeepSeek Project, which doesn't really work for DC audiences, but it's the story of what is the most impactful thing I can do with my career. These Chinese open-weight models are cultivating a lot of power, and there is a lot of demand for building on open models, especially in enterprises in the US that are very cagey about Chinese models.
Lex Fridman
(03:50:06)
Looking at Perplexity: The ATOM Project—American Truly Open Models—is a US-based initiative to build and host high-quality, genuinely open-weight AI models and supporting infrastructure, explicitly aimed at competing with and catching up to China's rapidly advancing open-source AI ecosystem.
Nathan Lambert
(03:50:25)
I think the one-sentence summary would be that—or two sentences. One is a proposition that open models are going to be an engine for AI research because that is what people start with; therefore, it’s important to own them. And the second one is, therefore, the US should be building the best models so that the best research happens in the US and those US companies take the value from being the home of where AI research is happening. Without more investment in open models, we have all the plots on the website where it’s like, “Qwen, Qwen, Qwen, Qwen,” and it’s all these models that are excellent from these Chinese companies that are cultivating influence in the US and internationally.
Nathan Lambert
(03:51:07)
And the US is spending way more on AI. The ability to create open models that are half a generation or a generation behind the cutting edge of the closed labs costs roughly $100 million, which is a lot of money, but not compared to what these companies have. Therefore, we need a centralizing force of people who want to do this. I think we got signed engagement from people pretty much across the full stack, including policy.
Lex Fridman
(03:51:33)
So there has been support from the administration?
Nathan Lambert
(03:51:36)
I don’t think anyone technically in government has signed it publicly, but I know that people that have worked in AI policy, both in the Biden and Trump administrations, are very supportive of trying to promote open-source models in the US. I think, for example, AI2 got a grant from the NSF for $100 million over four years, which is the biggest CS grant the NSF has ever awarded, for AI2 to attempt this, and I think it’s a starting point. But the best results happen when there are multiple organizations building models because they can cross-pollinate ideas and build this ecosystem. It doesn’t work if it’s just Llama releasing models to the world, because Llama could go away. The same thing applies for AI2; I can’t be the only one building models.
Nathan Lambert
(03:52:24)
It becomes a lot of time spent on talking to people, whether they’re in policy… I know NVIDIA is very excited about this. I think Jensen Huang has been specifically talking about the urgency for this, and they’ve done a lot more in 2025, where the Nemotron 3 models are more of a focus. They’ve started releasing some data along with NVIDIA’s open models and very few companies do this, especially of NVIDIA’s size. So there are signs of progress. We hear about Reflection AI where they say their two-billion-dollar fundraise is dedicated to building US open models, and their announcement tweet reads like a cultural tide starting to turn.
Nathan Lambert
(03:53:09)
I think in July was when we had four or five DeepSeek-caliber Chinese open-weight models and zero from the US. That's the moment when I released this and was like, "I guess I have to spend energy on this because nobody else is gonna do it." So it takes a lot of people contributing together. I'm not saying the ATOM Project is the only thing moving the ecosystem, but it's people like me doing this sort of thing to get the word out.

Manhattan Project for AI

Sebastian Raschka
(03:53:35)
Do you like America's AI Action Plan from 2025? It includes open-source stuff. The White House AI Action Plan has a dedicated section titled "Encourage Open-Source and Open-Weight AI," defining such models and arguing they have unique value for innovation and startups.
Nathan Lambert
(03:53:52)
Yeah. I mean, the AI Action Plan is just a plan, but I think it's maybe the most coherent policy document that has come out of the administration, and I hope that it largely succeeds. I know people who worked on it. The challenge is taking policy and making it real, and I have no idea how to do this as an AI researcher, but a lot of the things in it were very real. There's a huge build-out of AI in the country, and while there are issues people hear about, from water use to whatever, we should be able to build things in this country without ruining places in the process. It's worthwhile to spend energy on.
Nathan Lambert
(03:54:35)
I think that’s a role for the federal government. They set the agenda. And setting the agenda so that open-weight should be a first consideration is a large part of what they can do to get people thinking about it.
Sebastian Raschka
(03:54:49)
Also, for education and talent, it’s very important. Otherwise, if there are only closed models, how do you get the next generation of people contributing? You would only be able to learn after you joined a company, but at that point, how do you identify and hire talented people? I think open source is essential for educating the population and training the next generation of researchers. It’s the only way.
Nathan Lambert
(03:55:24)
The way that I could’ve gotten this to go more viral was to tell a story of Chinese AI integrating with an authoritarian state, becoming ASI and taking over the world, and therefore we need our own American models. But it’s very intentional why I talk about innovation and science in the US, because I think it’s both more realistic as an outcome and it’s a world that I would like to manifest.
Sebastian Raschka
(03:55:47)
I would say, though, that any open-weight model is a valuable model.
Nathan Lambert
(03:55:55)
Yeah. And my argument is that we should be in a leading position. But I think it's worth saying it that simply, because there are still voices in the AI ecosystem saying we should consider banning the release of open models due to the safety risks. And it's worth adding that, effectively, that's impossible without the US having its own Great Firewall, which is known to not work that well. The cost of training these models, whether it's one million or a hundred million dollars, is attainable for a huge number of actors in the world that want to have influence, so these models will be getting trained all over the world. We want this information and these tools to flow freely across the world and into the US so that people can use them and learn from them.
Nathan Lambert
(03:56:47)
Stopping that would be such a restructuring of our internet that it seems impossible.
Sebastian Raschka
(03:56:51)
Do you think maybe the big open-weight models from China are actually a good thing for US companies? You mentioned earlier they are usually one generation behind in terms of what they release open source. For example, gpt-oss-120b might not be the cutting-edge model, or Gemini 3 might not be, because they want to ensure it is safe. But when these companies see that DeepSeek-V3.2 is really awesome and is being used with no backlash or security risk, that could encourage them to release better models. Maybe that is a very positive thing.
Nathan Lambert
(03:57:30)
A hundred percent. These Chinese companies have set things into motion that I think would potentially not have happened if they were not all releasing models. I’m almost sure that those discussions have been had by leadership.
Sebastian Raschka
(03:57:45)
Is there a possible future where the dominant AI models in the world are all open source?
Nathan Lambert
(03:57:50)
Depends on the trajectory of progress that you predict. If you think saturation in progress is coming within a few years, essentially within the time where financial support is still very good, then open models will be so optimized and so much cheaper to run that they’ll win out. Essentially, this goes back to open source ideas where so many more people will be putting money into optimizing the serving of these open-weight common architectures that they will become standards. Then you could have chips dedicated to them and it’ll be way cheaper than the offerings from these closed companies that are custom.
Sebastian Raschka
(03:58:25)
We should say that the AI2027 report predicts—one of the things it does from a narrative perspective is that there will be a lot of centralization. As the AI system gets smarter and smarter, the national security concerns will come to be, and you’ll centralize the labs, and you’ll become super secretive, and there’ll be this whole race.
Lex Fridman
(03:58:45)
…from a military perspective of how you… between China and the United States. And so all of these fun conversations we’re having about LLMs—all the generals and soldiers will come into the room and be like, “All right, we’re now in the Manhattan Project stage of this whole thing.”
Sebastian Raschka
(03:59:02)
I think 2025, ’26, ’27—I don’t think something like that is even remotely possible. I mean, you can make the same argument for computers, right? You can say, “Okay, computers are capable and we don’t want the general public to get them.” Or chips—even AI chips—but you see how Huawei makes chips now. It took a few years, but… and I don’t think there is a way you can contain knowledge like that. I think in this day and age, it is impossible, like the internet. I don’t think this is a possibility.
Nathan Lambert
(03:59:37)
On the Manhattan Project thing, one of my funny things looking at them is I think that a Manhattan Project-like thing for open models would actually be pretty reasonable, because it wouldn’t cost that much. But I think that that will come. It seems like culturally, the companies are changing. But I agree with Sebastian on all of the stuff that he just said. It’s just like, I don’t see it happening nor being helpful.
Lex Fridman
(03:59:58)
Yeah. I mean, the motivating force behind the Manhattan Project was that there was civilizational risk. It’s harder to motivate that for open-source models.
Nathan Lambert
(04:00:08)
There’s not civilizational risk.

Future of NVIDIA, GPUs, and AI compute clusters

Lex Fridman
(04:00:10)
On the hardware side, we mentioned NVIDIA a bunch of times. Do you think Jensen and NVIDIA are going to keep winning?
Sebastian Raschka
(04:00:18)
I think they have the downside that they have to iterate a lot and manufacture a lot. And what they’re doing—they do innovate, but I think there’s always the chance that there is someone who does something fundamentally different, who gets very lucky and then does something. But the problem is, I think, adoption. You know, the moat of NVIDIA is probably not just the GPU; it’s more like the CUDA ecosystem, and that has evolved over two decades. I mean, even back when I was a grad student, I was in a lab doing biophysical simulations, molecular dynamics, and we had a Tesla GPU back then just for the computations. It was fifteen years ago now.
Sebastian Raschka
(04:01:01)
They built this up for a long time and that’s the moat, I think. It’s not the chip itself. Although they have the money now to iterate, build, and scale, it’s really on the compatibility. If you’re at that scale as a company, why would you go with something risky where it’s only— … a few chips that they can make per year? You go with the big one. But then I do think with LLMs now, it will be easier to design something like CUDA. It took 15 years because it was hard, but now that we have LLMs, we can maybe replicate CUDA.
Lex Fridman
(04:01:35)
And I wonder if there will be a separation of the training and the inference- … compute, as we stabilize a bit more and more compute is needed for inference.
Nathan Lambert
(04:01:47)
That’s supposed to be the point of the Groq acquisition. And that’s why part of what Vera Rubin is—
Nathan Lambert
(04:01:52)
… where they have a new chip with no high-bandwidth memory, or very little, which is one of the most expensive pieces. It’s designed for pre-fill, which is the part of inference where you essentially do a lot of matrix multiplications, and then you only need the memory when you’re doing this autoregressive generation and you have the KV cache swaps. So they have this new GPU that’s designed for that specific use case, and then the cost of ownership per flop is actually way lower. But I think that NVIDIA’s fate lies in the diffusion of AI still. Their biggest clients are still these hyperscale companies, whether it’s Google—which obviously can make TPUs—Amazon making Trainium, or Microsoft trying to do its own things.
Nathan Lambert
(04:02:36)
As long as the pace of AI progress is high, NVIDIA’s platform is the most flexible and people will want that. But if there’s stagnation, then with creating bespoke chips, there’s more time to do it.
Lex Fridman
(04:02:50)
It’s interesting that NVIDIA is quite active in trying to develop all kinds of different products.
Nathan Lambert
(04:02:55)
They try to create areas of commercial value that will use a lot of GPUs.
Lex Fridman
(04:03:01)
But they keep innovating and they’re doing a lot of incredible research, so…
Nathan Lambert
(04:03:06)
Everyone says the company’s super oriented around Jensen and how operationally plugged in he is. It sounds so unlike many other big companies that I’ve heard about. And so long as that’s the culture, I think that you can expect that to keep progress happening. It’s like he’s still in the Steve Jobs era of Apple. So long as that is how it operates, I’m pretty optimistic for their situation because it is their top-order problem, and I don’t know if making these chips for the whole ecosystem is the top goal of all these other companies. They’ll do a good job, but it might not be as good of a job.
Lex Fridman
(04:03:43)
Since you mentioned Jensen, I’ve been reading a lot about history and about singular figures in history. What do you guys think about the great man view of history? How important are individuals for steering the direction of history in the tech sector? So, you know, what’s NVIDIA without Jensen? You mentioned Steve Jobs. What’s Apple without Steve Jobs? What’s xAI without Elon or DeepMind without Demis?
Nathan Lambert
(04:04:11)
People make things earlier and faster, whereas scientifically, many great scientists credit being in the right place at the right time. Eventually someone else will still have the idea. So I think that in that way, Jensen is helping manifest this GPU revolution much faster and much more focused than it would be without having a person like him there. This is making the whole AI build-out faster. But I do still think that eventually something like ChatGPT would have happened and a build-out like this would have happened, but it probably would not have been as fast. I think that’s the sort of flavor that is applied.
Sebastian Raschka
(04:04:55)
These individual people are placing bets on something. Some get lucky, some don’t. But if you don’t have these people at the helm, it would be more diffused. It’s almost like investing in an ETF versus individual stocks. Individual stocks might go up or down more heavily than an ETF, which is more balanced. We’ll eventually get there, but I just think the focus is the thing. Passion and focus.
Lex Fridman
(04:05:19)
Isn’t there a real case to be made that without Jensen, there’s not a reinvigoration of the deep learning revolution?
Nathan Lambert
(04:05:26)
It could’ve been 20 years later, is the thing I would say.
Lex Fridman
(04:05:29)
Yeah, 20 is…
Nathan Lambert
(04:05:30)
Or another deep learning winter could have come… …If GPUs weren’t around.
Lex Fridman
(04:05:35)
That could change history completely because you could think of all the other technologies that could’ve come in the meantime, and the focus of human civilization would get… Silicon Valley would be captured by different hype.
Sebastian Raschka
(04:05:48)
But I do think there’s certainly an aspect where the GPU trajectory was all planned. But on the other end, it’s also a lot of lucky coincidences or good intuition. Like the investment into, let’s say, biophysical simulations. I mean, I think it started with video games and then it just happened to be good at linear algebra because video games require a lot of linear algebra. And then you have the biophysical simulations. But still, I don’t think the master plan was AI. I think it just happened to be Alex Krizhevsky. So someone took these GPUs and said, “Hey, let’s try to train a neural network on that.” It happened to work really well and… …I think it only happened because you could purchase those GPUs.
Nathan Lambert
(04:06:30)
Gaming would’ve created a demand for faster processors if… …NVIDIA had gone out of business in the early days. That’s what I would think. I think GPUs would still exist… …At the time of AlexNet and at the time of the Transformer. It was just hard to know if it would be one company as successful or multiple smaller companies with worse chips. But I don’t think that’s a 100-year delay. It might be a decade delay.
Lex Fridman
(04:07:01)
Well, it could be a one, two, three, four, five-decade delay. I mean, I just can’t see Intel or AMD doing what NVIDIA did.
Nathan Lambert
(04:07:08)
I don’t think it would be a company that exists.
Sebastian Raschka
(04:07:11)
A new company.
Nathan Lambert
(04:07:11)
I think it would be a different company that would rise.
Sebastian Raschka
(04:07:13)
Like Silicon Graphics or something.
Nathan Lambert
(04:07:15)
So yeah, some company that has died would have done it.
Lex Fridman
(04:07:19)
But looking at it, it seems like these singular figures, these leaders, have a huge impact on the trajectory of the world. Obviously, incredible teams are behind them. But, you know, having that kind of very singular, almost dogmatic focus- …is necessary to make progress.
Sebastian Raschka
(04:07:40)
Yeah, I mean, even with GPT, it wouldn’t exist if there wasn’t a person, Ilya, who pushed for this scaling, right?
Nathan Lambert
(04:07:47)
Yeah, Dario was also deeply involved in that. If you read some of the histories from OpenAI, it almost seems wild thinking about how early these people were like, “We need to hook up 10,000 GPUs and take all of OpenAI’s compute and train one model.” There were a lot of people there that didn’t want to do that.

Future of human civilization

Lex Fridman
(04:08:02)
Which is an insane thing to believe—to believe scaling before scaling has any indication that it’s going to materialize. Again, singular figures. Speaking of which, 100 years from now, this is presumably post-singularity, whatever the singularity is. When historians look back at our time now, what technological breakthroughs would they really emphasize as the breakthroughs that led to the singularity? So far we have Turing to today, which is 80 years.
Sebastian Raschka
(04:08:36)
I think it would still be computing, like the umbrella term “computing.” I don’t necessarily think that even 100 or 200 years from now it would be AI. It could still well be computers, you know? We are now taking better advantage of computers, but it’s the fact of computing.
Lex Fridman
(04:08:53)
It’s basically a Moore’s Law kind of discussion. Even the details of CUDA and GPUs won’t even be remembered, and there won’t be all this software turmoil. It’ll be just, obviously, compute.
Nathan Lambert
(04:09:07)
I generally agree, but is it the connectivity of the internet and compute able to be merged? Or is it both of them?
Sebastian Raschka
(04:09:17)
I think the internet will probably be related to communication—it could be a phone, internet, or a satellite. And compute is more like the scaling aspect of it.
Lex Fridman
(04:09:29)
It’s possible that the internet is completely forgotten. That the internet is wrapped into the phone networks, like communication networks. This is just another manifestation of that, and the real breakthrough comes from just the increased compute—Moore’s Law, broadly defined.
Nathan Lambert
(04:09:46)
Well, I think the connection of people is very fundamental to it. You want to find the best person in the world for something, they are somewhere in the world. Being able to have that flow of information—AIs will also rely on this. I’ve been fixating on when I said the dream was dead about the one central model; the thing that is evolving is that people have many agents for different tasks. People already started doing this with different Claudes for different tasks. It’s described as many AGIs in the data center where each one manages and they talk to each other. That is so reliant on networking and the free flow of information on top of compute. But networking, especially with GPUs, is such a part of the scaling of compute. The GPUs and the data centers need to talk to each other.
Lex Fridman
(04:10:36)
Do you think there’s something very specific and singular to the fact that it’s neural networks that’s seen as a breakthrough? Like a genius move where you’re basically replicating, in a very crude way, the structure of the human brain, the human mind?
Sebastian Raschka
(04:10:54)
I think without the human mind, we probably wouldn’t have neural networks because it was an inspiration for them. But on the other end, I think it’s just so different. I mean, it’s digital versus biological, so I think it will probably be more grouped as an algorithm.
Lex Fridman
(04:11:11)
That’s massively parallelizable— —on this particular kind of compute?
Sebastian Raschka
(04:11:15)
It could have well been genetic computing, like genetic algorithms, just parallelized. It just happens that this is more efficient and works better.
Lex Fridman
(04:11:23)
And it very well could be that the neural networks, the way we architect them now, are just a small component of the system that leads to the singularity.
Nathan Lambert
(04:11:33)
I think if you think of it over 100 years, society can be changed more with more compute and intelligence because of autonomy. But looking at this, what are the things from the Industrial Revolution that we remember? We remember the engine—it is probably the equivalent of the computer in this. But there’s a lot of other physical transformations that people are aware of, like the cotton gin and all these machines that are still known—air conditioning, refrigerators— Some of these things from AI will still be known; the word “transformer” could still very well be known. I would guess that deep learning is definitely still known, but the transformer might be evolved away from in 100 years with AI researchers everywhere. But I think deep learning is likely to be a term that is remembered.
Lex Fridman
(04:12:28)
And I wonder what the air conditioning and the refrigeration of the future is that AI brings. If we travel forward 100 years from now, what do you think is different? How does the world look? First of all, do you think there’s humans? Do you think there’s robots everywhere walking around?
Sebastian Raschka
(04:12:46)
I do think there will be specialized robots for certain tasks.
Lex Fridman
(04:12:49)
Humanoid form?
Sebastian Raschka
(04:12:50)
Maybe half-humanoid. We’ll see. I think for certain things, yes, there will be humanoid robots because it’s just amenable to the environment. But for certain tasks, it might not make sense. What’s harder to imagine is how we interact with devices and what humans do with them. I’m pretty sure it will not be the cellphone or the laptop. Will it be implants?
Lex Fridman
(04:13:16)
I mean, it has to be brain-computer interfaces, right? I mean, 100 years from now, it has to—given the progress we’re seeing now— —there has to be, unless there’s legitimately a complete alteration of how we interact with reality.
Sebastian Raschka
(04:13:33)
On the other hand, if you think of cars, cars are older than 100 years, right? And it’s still the same interface. We haven’t replaced cars with something else; we just made them better. But it’s still a steering wheel, it’s still wheels.
Nathan Lambert
(04:13:45)
I think we’ll still carry around a physical brick of compute— —because people want some ability to have a private interface. You might not engage with it as much as a phone, but having something where you could have private information that is yours as an interface between you and the rest of the internet is something I think will still exist. It might not look like an iPhone, and it might be used a lot less, but I still expect people to carry things around.
Lex Fridman
(04:14:08)
Why do you think the smartphone is the embodiment of privacy? There’s a camera on it. There’s a-
Nathan Lambert
(04:14:15)
Private for you, like encrypted messages, encrypted photos; you know what your life is. I guess this is a question of how optimistic you are on brain-machine interfaces. Is all that just going to be stored in the cloud, like your whole calendar? It’s hard to think about processing all the information that we can process visually through brain-machine interfaces presenting something like a calendar to you. It’s hard to just think about knowing your email inbox without looking. Like you signal to a computer and then you just know your email inbox. Is that something that the human brain can handle being piped into it non-visually? I don’t know exactly how those transformations happen. ‘Cause humans aren’t changing in 100 years.
Nathan Lambert
(04:15:05)
I think agency and community are things that people actually want.
Lex Fridman
(04:15:09)
A local community, yeah.
Nathan Lambert
(04:15:10)
So, like, people you are close to, being able to do things with them and being able to ascribe meaning to your life. I don’t think that human biology is changing away from those on a timescale that we can discuss. UBI does not solve agency. I do expect mass wealth, and I hope that it has spread so that the average life does look very different in 100 years. But that’s still a lot to happen in 100 years. If you think about countries that are early in their development process, to build all the infrastructure and have policy that shares one nation’s wealth with another is… I think it’s an optimistic view to see all that happening in 100 years- …while they are still independent entities and not just absorbed into some international order by force.
Lex Fridman
(04:16:13)
But there could be just better, more elaborate, more effective- …social support systems that help alleviate some levels of basic suffering from the world. With the transformation of society where a lot of jobs are lost in the short term, I think we have to really remember that each individual job that’s lost is a human being who’s suffering. When jobs are lost at scale, it is a real tragedy. You can make all kinds of arguments about economics or say it’s all going to be okay and good for the GDP because new jobs will be created, but fundamentally at the individual level for that human being, that’s real suffering. That’s a real personal tragedy.
Lex Fridman
(04:16:58)
And we have to not forget that as the technologies are being developed. Also, my hope for all the AI slop we’re seeing is that there will be a greater and greater premium for the fundamental aspects of the human experience that are in-person. The things that we all enjoy, like seeing each other and talking together in-person.
Nathan Lambert
(04:17:22)
The next few years are definitely going to see an increased value on physical goods and events- …and even more pressure from slop. The slop is only starting. The next few years will be more and more diverse-
Lex Fridman
(04:17:37)
Do you think we’ll all be drow-
Nathan Lambert
(04:17:37)
…versions of slop.
Lex Fridman
(04:17:38)
They would be drowning in slop. Is that what-
Nathan Lambert
(04:17:40)
So I’m hoping that society drowns in slop enough to snap out of it and be like, “We can’t. It just doesn’t matter. We all can’t deal with it.” And then, the physical has such a higher premium on it.
Sebastian Raschka
(04:17:53)
Even like classic examples, I honestly think this is true, and I think we will get tired of it. We are already kind of tired of it. Same with art. I don’t think art will go away. I mean, you have physical paintings. There’s more value, not just monetary value, but just more appreciation for the actual painting than a photocopy of that painting. It could be a perfect digital reprint, but there is something when you go to a museum and you look at that art and you see that real thing and you just think about, “Okay, a human.” It’s like a craft. You have an appreciation for that.
Sebastian Raschka
(04:18:25)
And I think the same is true for writing, for talking, for any type of experience, where it will be… I do unfortunately think it will be like a dichotomy, like a fork where some things will be automated. There are not as many paintings as there used to be 200 years ago. There are more photographs, more photocopies. But at the same time, it won’t go away. There will be value in that. I think that the difference will just be what’s the proportion of that. But personally, I have a hard time reading things where I see it’s obviously AI-generated. I’m sorry, there might be really good information there, but I have a certain feeling, like, it’s not for me.
Nathan Lambert
(04:19:08)
I think eventually they’ll fool you, and it’ll be on platforms that give ways of verifying or building trust. So you will trust that Lex is not AI-generated, having been here. So then you have trust in this- -channel. But it’s harder for new people- -that don’t have that trust.
Sebastian Raschka
(04:19:25)
Well, that will get interesting because I think fundamentally it’s a solvable problem by having trust in certain outlets that they won’t do it, but it’s all going to be kind of trust-based. There will be some systems to authorize, “Okay, this is real. This is not real.” There will be some telltale signs where you can obviously tell this is AI-generated and this is not. But some will be so good that it’s hard to tell, and then you have to trust. And that will get interesting and a bit problematic.
Nathan Lambert
(04:19:54)
The extreme case of this is to watermark all human content. So all photos that we take on our own- -have some watermark until they- -are edited- -or something like this. And software can manage communications with the device manufacturer- -to maintain human editing, which is the opposite of the discussion to try to watermark AI images. And then you can make a Google image that has a watermark and use a different Google tool to remove the watermark.
Sebastian Raschka
(04:20:20)
Yeah. It’s going to be an arms race, basically.
Lex Fridman
(04:20:23)
And we’ve been mostly focusing on the positive aspects of AI. I mean, all the capabilities that we’ve been talking about can be used to destabilize human civilization with even just relatively dumb AI applied at scale, and then further and further, superintelligent AI systems. Of course, there’s the sort of doomer take that’s important to consider a little bit as we develop these technologies. What gives you hope about the future of human civilization? Everything we’ve been talking about—are we going to be okay?
Nathan Lambert
(04:20:59)
I think we will. I’m definitely a worrier both about AI and non-AI things, but humans do tend to find a way. I think that’s what humans are built for—to have community and find a way to figure out problems. And that’s what has gotten us to this point. I think the AI opportunity and related technologies is really big. I think that there are big social and political problems to help everybody understand that. I think that’s what we’re staring at a lot of right now; the world is a scary place, and AI is a very uncertain thing. And it takes a lot of work that is not necessarily building things. It’s like telling people and understanding people, things that the people building AI are historically not motivated or wanting to do.
Nathan Lambert
(04:21:50)
But it is something that is probably doable. It just will take longer than people want. And we have to go through that long period of hard, distraught AI discussions if we want to have the lasting benefits.
Lex Fridman
(04:22:04)
Yeah. Through that process, I’m especially excited that we get a chance to better understand ourselves at the individual level as humans and at the civilization level, and answer some of the big mysteries, like what is this whole consciousness thing going on here? It seems to be truly special. Like, there’s a real miracle in our mind. And AI puts a mirror to ourselves and we get to answer some of the big questions about what is this whole thing going on here.
Sebastian Raschka
(04:22:35)
Well, one thing about that is also what I do think makes us very different from AI and why I don’t worry about AI taking over is, like you said, consciousness. We humans, we decide what we want to do. AI in its current implementation, I can’t see it changing. You have to tell it what to do. And so you still have the agency. It doesn’t take the agency from you because it becomes a tool. You tell it what to do. It will be more automatic than other previous tools. It’s certainly more powerful than a hammer, it can figure things out, but it’s still you in charge, right? So the AI is not in charge, you’re in charge. You tell the AI what to do and it’s doing it for you.
Lex Fridman
(04:23:17)
So in the post-singularity, post-apocalyptic war between humans and machines, you’re saying humans are worth fighting for?
Sebastian Raschka
(04:23:27)
100%. I mean, the movie Terminator, they made in- -the ’80s, essentially, and I do think the only thing I can see going wrong is, of course, if things are explicitly programmed to do things that are harmful.
Lex Fridman
(04:23:43)
I think actually in a Terminator type of setup, I think humans win. I think we’re too clever. It’s hard to explain how we figure it out, but we do. And we’ll probably be using local LLMs, open source LLMs, to help fight the machines. I apologize for the ridiculousness. Like I said, Nathan, I’ve already been a big fan of yours for a long time. And I’ve been a big fan of yours, Sebastian, for a long time, so it’s an honor to finally meet you. Thank you for everything you put out into the world. Thank you for the excellent books you’re writing. Thank you for teaching us. And thank you for talking today. This was fun.
Sebastian Raschka
(04:24:26)
Thank you for inviting us here and having this human connection, which is actually-
Lex Fridman
(04:24:30)
-extremely valuable- -human connection. Thanks for listening to this conversation with Sebastian Raschka and Nathan Lambert. To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now let me leave you with some words from Albert Einstein: “It is not that I’m so smart, but I stay with the questions much longer.” Thank you for listening, and hope to see you next time.

Transcript for Paul Rosolie: Uncontacted Tribes in the Amazon Jungle | Lex Fridman Podcast #489

This is a transcript of Lex Fridman Podcast #489 with Paul Rosolie. The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human generated and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Episode highlight

Paul Rosolie
(00:00:00)
… were standing there. Everyone is waiting, because at any moment an arrow could just fly through your neck, and there’s people holding shotguns. And the anthropologist, this little guy, is standing there in the front, and he’s going, “Wamole.” He’s going, “Brothers.” And then it happened. Then you start hearing people screaming, “Mashco! Mashco!” And people are screaming and women are lifting children and running into the huts and the dogs and chickens are going nuts and—
Lex Fridman
(00:00:25)
So fear.
Paul Rosolie
(00:00:26)
Fear. He’s going, “Look there. He has a bow. He has a bow.” And we’re looking up the beach and there’s just this clan walking down the beach with these seven-foot bows and they’re hunched over and they’re pointing at us. They’re going, “Look at that one.” They’re going, “Look, there’s a gun there.” And you can see them communicating to each other and the butterflies are swirling off the beach and they can hit a spider monkey out of the treetops at 40 meters. They can sneak up and you will never know they’re there. And so when that arrow passes through your body, you’ll only have a moment to realize it before you fall over. In order for any of this to make sense, I have to show you this footage.
Lex Fridman
(00:01:01)
And this has not been shown ever before.
Paul Rosolie
(00:01:05)
This is a world first.

Introduction

Lex Fridman
(00:01:08)
The following is a conversation with Paul Rosolie, his third time on the podcast. Paul is a naturalist, explorer, writer, and is someone who has dedicated his life to protecting the Amazon rainforest and celebrating the beauty of the natural world. He has a new book coming out in a few days titled Jungle Keeper that you should definitely go pre-order now. It tells some intense stories about his time in the jungle over the past several years, building up to a few epic recent events, including a new full-on extended encounter with an uncontacted tribe that we discuss in this podcast. Both the book and audiobook are great. I highly recommend it. If you would like to support Paul and his incredible team in their mission to protect the jungle, go to junglekeepers.org.
Lex Fridman
(00:02:01)
You can help with donations or by spreading the word or checking out the gala that Paul is hosting in New York on January 22nd in a few days. They are doing all they can to help raise funds for the mission of safeguarding as much of the rainforest as possible, and I think it’s a mission worth fighting for. The Amazon jungle is one of the most special and beautiful places on Earth. As an aside, allow me to look back briefly and mention something that I’ve been struggling with a bit. For context, I traveled to the Amazon rainforest with Paul a while back. It was an adventure of a lifetime, with lots of crazy twists and turns. We did record a podcast out there, literally in the jungle—Episode 429, if you want to go check it out. It was awesome.
Lex Fridman
(00:02:51)
And we also recorded a bunch of disparate footage of the journey just for fun. And I would still love to somehow put all that together into a cohesive video in case it’s interesting to someone. But I’ve learned just how difficult it is to organize and edit a pile of chaotically recorded footage like that. So, let’s see if I can pull it off. But in any case, this kind of raw vlog-style video is something that I would love to be able to do more of as a way to celebrate amazing human beings like Paul and others, including everyday people who I meet on my travels. So, I’ll keep trying, tinkering, learning, and I ask for your patience and support along the way. Now, back to our regular scheduled programming. This is the Lex Fridman Podcast.
Lex Fridman
(00:03:45)
To support it, please check out our sponsors in the description where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Paul Rosolie.

Uncontacted tribes in the Amazon Jungle

Lex Fridman
(00:04:00)
We survived a challenging time out in the jungle about a year and a half ago, and since then, your life has gotten increasingly intense. You’ve now achieved the incredible feat of saving more than 130,000 acres of rainforest. And the goal that you’re working towards is protecting 200,000 acres more.
Lex Fridman
(00:04:23)
And doing so while facing extreme danger from narcos, narco-traffickers, the so-called Cocaine Mafia in an escalating drug war. This is insane. These are new developments. Illegal loggers, as we’ve talked about before, gold miners, and the incredible recent encounter with an uncontacted tribe. We’ll talk about all of this. So your new book, Jungle Keeper, opens with the killing of two loggers by the warriors of an uncontacted tribe, the Mashco Piro, in August 2024.
Lex Fridman
(00:04:57)
And then you reveal that you had your own dramatic encounter with the tribe two months later in October 2024. So if I may, let me read the opening of the book: “Far out on the western edge of the Amazon rainforest, deep in the Peruvian jungle, a pair of loggers plunged their chainsaws into the buttressed roots of an ancient ironwood. An ironwood, or shihuahuaco, of this size is a giant among giants, an emergent sentinel that reaches heights of 160 feet, towering over the rest of the canopy.” I’ve read that many are over 1,000 years old, by the way, as an aside. And you’ve found ones that are 1,200 years old.
Paul Rosolie
(00:05:41)
Yeah, incredibly old.
Lex Fridman
(00:05:43)
Anyway, you continue: “This particular tree had started its life as a tiny sapling in the great jungle, a story that began before the Spanish reached Peru, long before the United States was even a dream. At a time when Leonardo da Vinci was still honing his talents in a faraway part of the world, through the Renaissance, the First and Second World Wars, and the birth of our grandparents.” This tree was out there slowly charging upward, anonymous, just one pillar among the billions of others. But on this day, in August 2024, when the two loggers worked, this witness of the centuries came crashing down to the canopy with such cataclysmic power that it shook the earth. And then you go on to talk about how the shaking of the earth was felt and heard by the uncontacted tribe.
Lex Fridman
(00:06:34)
So you go on to describe how these particular loggers were killed by the uncontacted tribe of Mashco Piro. What do we know about these warriors of the uncontacted tribe?
Paul Rosolie
(00:06:48)
We know that across the Amazon basin there’s still perhaps thousands of clans of uncontacted peoples—people that are living in nomadic isolation in what remains of the intact Amazon basin and want to remain that way. And so, what happened with these loggers was that local people told them, “Don’t go out there. Don’t go into these territories.” And what happens is that people that aren’t from… there’s this thing with the jungle, people don’t believe that it’s as wild as the legends say. And so when they say there’s Calatos out there, there’s wild people out there, these loggers from another region go, “Yeah, that’s just some story. We’re fine. We’ll go.”
Paul Rosolie
(00:07:35)
“We have shotguns.” They don’t realize you’re dealing with a civilization of people that is still nomadic, still uses bamboo-tipped arrows, still lives naked in the Amazon rainforest, has knowledge of medicines that we have yet to encounter or may never discover, and that they can hit a spider monkey out of the treetops at 40 meters. And so while you’re using a chainsaw, they can sneak up and you will never know they’re there. And so when that arrow passes through your body, you’ll only have a moment to realize it before you fall over.
Lex Fridman
(00:08:07)
And we’re looking at something you posted on your Instagram, which shows the arrows that they use, which are bigger than you. So they’re like six or seven feet.
Paul Rosolie
(00:08:16)
Six, seven feet. More like seven feet. And that’s—
Lex Fridman
(00:08:19)
Look how sharp that is.
Paul Rosolie
(00:08:19)
…incredibly sharp. They cure it over the fire and they have a way of sharpening it. That edge of bamboo becomes incredibly knife-sharp. You can cut meat with it easily; I’ve done it. These arrows… Look at that. I mean, I’m 5’9″. That’s easily a seven-foot arrow.
Lex Fridman
(00:08:34)
Yeah, so for people who are just listening, this arrow is really a spear. Some people would think it was a spear, but they’re shooting this thing with a gigantic bow. That’s crazy.
Paul Rosolie
(00:08:45)
Yeah, and so to be holding that… Look at that, they even twist the fletching so the arrow spins in the air. They have incredible craftsmanship. And then you see all the little string on there is plant fibers that they’ve woven. And then this is them.
Lex Fridman
(00:09:01)
The warriors of the tribe.
Paul Rosolie
(00:09:03)
The warriors of the tribe. And so the fact that we’re sitting here talking on microphones and that we have airplanes and cell phones and all the things that we have in the modern world, and yet we still live in this age where there’s, right now at this moment, people living out in the jungle who have been there since before history—it is an incredible thing.
Lex Fridman
(00:09:24)
Let me look this up on Perplexity: what are the technologies we modern humans have that the Mashco Piro do not? It’s just interesting to think about the kind of technologies we take for granted. Energy and power—obviously all the electricity generation, grids, batteries, solar panels, and electric motors. Metals and materials—mass-produced steel, aluminum, advanced alloys, plastics, composites, glass, concrete; all of those things.
Paul Rosolie
(00:09:52)
All of those things.
Lex Fridman
(00:09:52)
Tools, of course, and machinery. The infrastructure of roads and bridges and buildings, and the weapons of war—everything but the spears and the arrows that they have—and then medicine and biology. Of course, they probably have complicated medicines that they’ve developed for their own use that are available within the jungle.
Paul Rosolie
(00:10:14)
I mean, that entire list is “no.”
Lex Fridman
(00:10:16)
No.
Paul Rosolie
(00:10:17)
I mean, metal—I think you have to be able to excavate into the earth and forge metal. These people don’t even… As a Peruvian anthropologist said to me, “You know, people think of them as Stone Age tribes.” And he was like, “They don’t have stones.” So they don’t know that water… They see water that they drink, but they don’t know that water freezes because they’ve never seen it. They don’t know that water boils because they don’t even make clay pots. They just have their bamboo and their string. And so they’re living an incredibly simple life. So all of that, I mean, even a camera is a miracle to them. You have to bend your mind to even understand how far back they are. It’s like looking into thousands of years ago, like the Stone Age.
Lex Fridman
(00:11:03)
When they hear the sounds of the chainsaws, the sounds of machinery in the distance, I wonder how they can possibly comprehend what that is.
Paul Rosolie
(00:11:12)
I think they view it as a demonic, destructive force. When I show you the encounter that we had… we left with more questions than answers, but one of the things that they were able to communicate across the language barrier was, “Why are you cutting down the trees?” They don’t like it.
Lex Fridman
(00:11:34)
Yeah. That represents to them the danger that the outside world brings, the destruction that the outside world brings.
Paul Rosolie
(00:11:42)
They see us as the destroyers of worlds.

Intense new encounter

Lex Fridman
(00:11:45)
So tell me about this encounter in October of 2024.
Paul Rosolie
(00:11:52)
So, in order to tell you about that encounter, I think we need to orient people into where we’re talking about. We’re talking about this river that runs through the western edge of the Amazon rainforest that you know well now, after spending time there with me. It’s a high tributary of the Amazon rainforest where you have the main river channel and then smaller and smaller tributaries. And the smaller you get, the less trafficked they are. And so this river has remained wild through the centuries. And even during the ’90s when there was a mahogany boom where people went out for mahogany trees, there were very few people going up this river.
Paul Rosolie
(00:12:30)
And so 20 years ago, when I first got to the region and people were telling me that there were uncontacted tribes out there, it was always in the realm of something… You know, it’s like people say, “There’s Bigfoot,” or, “Don’t go there, it’s haunted.” It was like a tall tale almost. And even the Peruvian government at the time that I went to Peru first, which was 2006, their official position was that the tribes are a myth. “There’s no such thing as the tribes.” That was the official position. And you would hear these stories of people that got shot. You’d meet someone four days upriver, deep in the Amazon, that had an arrow.
Paul Rosolie
(00:13:08)
And you’d look at this thing and it had this mega gravity. And so as we’ve created Jungle Keepers—and now we’re protecting 130,000 acres of this river, protecting the plants and the animals and the ancient trees, trying to preserve the ecosystem, counting the butterflies and conducting ecological surveys—what we’ve inadvertently found ourselves the caretakers of is the fact that these people, in order to continue living, have to remain isolated. They want to remain isolated. That’s their one mandate as a civilization, the tribes of the Mashco Piro. And so in October, as Jungle Keepers now, we’re working with the indigenous people.
Paul Rosolie
(00:13:52)
What we do is we take loggers and gold miners and make them into rangers and give them better jobs, and we try to protect the forest. And those people who live up in the remote indigenous community, they called us on a satellite phone and they said, “Directors, you’ve been working with us and telling us you want to help us. The tribes are coming out. What do we do?”
Lex Fridman
(00:14:13)
So, even they don’t really know what to do when the tribes emerge from the deep jungle?
Paul Rosolie
(00:14:19)
They were terrified.
Lex Fridman
(00:14:20)
What was your thinking when you got the phone call?
Paul Rosolie
(00:14:23)
When we got the phone call, it was a mix of… you know, we should keep… because we’re over here trying to get land concessions and doing all this important work, part of me was like, “That can’t be real,” so we’re going to keep our heads down.
Lex Fridman
(00:14:34)
Bigfoot is emerging from the forest.
Paul Rosolie
(00:14:35)
Like, yeah, sure. And then we hung up and we said, “Okay, maybe tomorrow if they’re still there or something.” And then it was crazy because it was probably about noon and we had an important day of meetings. We had a meeting with the police, we had a meeting with the landowner, we were trying to do all this stuff for the conservation work. And then I got together with the core team of directors—JJ, Mohsin, Stefan—and we said, “Wait, if this is real, we have to get there like now. Like now, now.” And so we dropped what we were doing, canceled the meetings, put other people on the meetings. We got a boat and we called Ignacio. We called our most hardcore ranger.
Lex Fridman
(00:15:12)
Who has been shot.
Paul Rosolie
(00:15:14)
Who in 2019 was shot in the head by an arrow and still bears the scar, and he barely survived. And we said, “Look, this is going down.” He said, “I already know, because the whole river already knows.” And we said, “Can you get us there by tomorrow morning?” And he said, “Look, it’s a two-day journey by boat. So, no.” And we said, “Is there any way you can get us there?” And he went, “I’ll get you there.” And so we got a couple of sacks of rice, a couple of cans of tuna, our dry bags, our tents. We got on a boat by 6:00 PM and we started riding up the river—
Lex Fridman
(00:15:46)
Through the night?
Paul Rosolie
(00:15:47)
…through the night. And so, a two-day boat journey that we’re trying to flex in one night. And so I was at the front with the headlamp, with the torch. And so the first few hours, it was clear, and that comet—remember that comet—
Paul Rosolie
(00:15:59)
…that was going? There was that comet in the sky. I remember looking at the comet and going, somehow, “This is it.” I knew this was it. And the first few hours were clear, the stars were out, and it was beautiful. And then it clouded over and the lightning started, and then it just apocalypse-downpoured. And from midnight until 8:00 AM it was just the front of the boat with the light, and it was just Star Wars vision of just raindrops and galaxies and moths flying in my eye. People don’t realize you can get hypothermia in the tropics, but as you’re going at night, even if it’s 80 degrees outside, in the rain and the wind at night in a lightning storm, you’re freezing.
Paul Rosolie
(00:16:37)
And so by 2:00 AM I’m convulsively shivering, and we’re using the caiman eyes on the side of the river, because it was so dark we couldn’t see where we were going, and those eyes shine back at you. So, I was finding the caiman eyes and then motioning with the light to Ignacio where to go, and he knew how to find the channel and we had to jump the waterfalls. We did the two-day boat ride in one night.
Lex Fridman
(00:17:01)
Nice.
Paul Rosolie
(00:17:02)
And we got there and we arrive at this community where—and it’s morning now and the howler monkeys are calling over the jungle and the little naked children are all by the side and everyone’s scared. And we get a hug from this guy, Bacho, who we know, and they’re like, “Come in, come in, come in.” And they’re like, “The tribe came out yesterday. We saw a few of them on the beach and they’re gone now.” And so we collapsed, we fell asleep. It rained the whole day. That night we went out and we looked for them and there was this crazy moment where we’re standing on this beach and their footprints were there.
Paul Rosolie
(00:17:37)
And the local indigenous anthropologist was standing there and we’re looking out into the Amazon beyond, and there’s just all this wreckage. It looked like something very Cormac McCarthy, just dark sky, iron clouds, and we’re standing there. Everyone is waiting, because at any moment an arrow could just fly through your neck. And there’s people holding shotguns and the anthropologist, this little guy, is standing there in the front and he’s going, “Nomole.” He’s going, “Brothers.” There’s only a few words that intersect between the languages and he’s going, “Brothers, we’re here. We don’t want to hurt you.” He’s speaking in the Yine language.
Paul Rosolie
(00:18:13)
And he’s saying, “Come out.” And you can tell by their footprints—the trackers explained this to us—that you could see it was just the balls of their feet. So right as we pulled up to the beach, they had run. So, they were there listening to us, and he’s going, “Nomole, come out. It’s okay. Lay down your arms. We’ll lay down ours. Nomole.” Just kept saying nomole. And nothing happened. And we went back to the village and we went to sleep. We wake up the next morning and it’s 5:00 AM. And again, we’re trying to save the jungle. We’re in a race against time to get these land concessions. And so my team, like Mohsin and Stefan—JJ couldn’t come because he was in town actually signing paperwork and interviewing loggers and landowners.
Paul Rosolie
(00:18:53)
And also, he didn’t think there was any chance this was going to be real because in his entire 50-something years in the Amazon, he’s never seen them. And so we’re getting ready to leave in the morning. We had tents on the boat. And Ignacio comes up to me and he goes, “You’re my director, right? You’re my boss?” And I went, “Yeah.” He goes, “I need to talk to you like a friend.” I was like, “Yeah, shoot, go.” And he goes, “You’d be an idiot to leave right now. They’re coming.” And so he convinced us to stay. We pull our tents off the boat. Stefan and Mohsin go off with their cameras. They start shooting the community that we’re in. These are monkey eaters and fishermen. And everything’s quiet.
Paul Rosolie
(00:19:31)
And I opened my laptop, and I was working, just writing my book. And then it happened. Then you start hearing people screaming, “Mashco, Mashco.” And people are screaming and women are lifting children and running into the huts, and the dogs and chickens are going nuts. And I mean—
Lex Fridman
(00:19:50)
So fear. Fear.
Paul Rosolie
(00:19:51)
Fear.
Lex Fridman
(00:19:52)
Because we should say, kind of the obvious thing is, as far as anyone remembers, any encounters, any minimal, small encounters with these tribes have been violent.
Paul Rosolie
(00:20:01)
Extremely violent. These tribes have remained alive because of their violence. Almost like the Spartans or the Comanches, they seem to have adopted violence as a first response to contact.
Lex Fridman
(00:20:12)
Maybe you can correct me on this, but I read that in the late 19th century and early 20th century, there was documentation of encounters with these tribes by the private armies of the rubber barons. And those encounters were, from the rubber barons’ armies’ perspective, violent. And so maybe the lesson the uncontacted tribes learned is that any interaction with the outside world is going to have to be violent because they have to defend themselves.
Paul Rosolie
(00:20:43)
Yeah. You had colonial missionaries in the 1600s and 1700s. Then you had the rubber barons in the late 1800s into the 1900s—just periods of extraction, domination, and cruelty. And these tribes, their grandparents must have told them, “When the outside world comes, you shoot first. That’s the only thing that’s going to keep you alive.”
Lex Fridman
(00:21:00)
Do you think the memory of those violent encounters is defining to how they think about the world?
Paul Rosolie
(00:21:06)
Yeah. Because even in my lifetime, in the 20 years I’ve spent in the Amazon, Ignacio was shot in the head. My friend Victor survived a violent encounter where they murdered somebody on a beach. I mean, they’ve shot numerous people. They’ve even shot people who were trying to help them, people who were trying to give them clothing and bananas. They call it “porcupining” them, where they find a body on the beach with so many arrows that when they fall over, all the arrows are sticking up. And they’ll do it out of curiosity too, where it’s like, “Hey, you’re wearing a suit. That’s weird. We’ve never seen anybody in a black and white suit.”
Paul Rosolie
(00:21:41)
And then get the clothing. You know, the way Teddy Roosevelt would shoot a bird for science. They just want to look at you. And so they’re operating on a different… They don’t have the moral system that we have or understand. They’re truly wild.
Lex Fridman
(00:21:56)
How does Ignacio think about them? Because they almost killed him.
Paul Rosolie
(00:21:59)
Yes. It depends on the mood you get him in because one day I asked him, “If you could see the people that shot you in the head, what would you say to them?” And he looked at me with that Ignacio look and he said, “I wouldn’t say anything. I would kill as many of them as I could.” I said, “Okay.” He also had a time where he was in a really remote guard station working for the Ministry of Culture, and they showed up and he knew that they were going to kill him. And so he climbed up into the peak of the little structure there. And just like a dog in a car, that greenhouse effect, in the top at midday with the sun beating down, he was huddled over a mattress while they were walking on the deck—
Paul Rosolie
(00:22:38)
…moving pots and pans and looking at the items and artifacts. And he knew that if he was found, they’d kill him. But if he stayed up there, he was literally frying to death. He said he was soaking the mattress. He could feel himself dying. For two hours he had to stay there. And he was constantly making this decision of, “If I come out, I die. If I stay here, I probably die.” He’s like, “Probably die is better than definitely die.” So he was terrified. And so as they’re screaming, “Mashco,” and everybody’s running and women are lifting children, Ignacio comes and finds me. And you can see in his eyes, you can see when somebody has that PTSD response where he’s breathing heavy. He’s moving behind trees.
Paul Rosolie
(00:23:16)
He’s keeping me close to him, and he’s going, “Look there. He has a bow. He has a bow.” And we’re looking up the beach, and there’s just this clan of naked men walking down the beach with these seven-foot bows, and they’re hunched over and they’re pointing at us. They’re going, “Look at that one.” They’re going, “Look, there’s a gun there.” And you can see them communicating to each other. And the butterflies are swirling off the beach. And, you know, in these moments you go, “Am I entering a moment that is a one-way door? Is this an irreversible situation?” Because there’s an unfolding situation where they’re coming towards us. Are they going to attack? What do they want?
Paul Rosolie
(00:23:54)
I mean, I am soaked in chills right now just talking about it because I remember standing there and going, “There’s no way this is real life.” It’s burned into my memory, them walking down the beach and seeing them with the bows. And of course, Stefan is up there just firing off pictures and Mohsin is down getting video. And the community that we’re with, you hear shotgun shells loading home. But they’re also getting ready. And there’s this one guy, an anthropologist named Romel, who has been the only person who has communicated with them peacefully. He did it in 2013 where he stood on the beach and he spoke to them.
Paul Rosolie
(00:24:33)
He knows enough of the local dialect that overlaps with theirs that he can speak to them. And so as they’re coming down the beach, the butterflies are flying up and we’re all waiting. And again, you’re talking, how many meters? 30, 40 meters. You loose a seven-foot arrow that weighs nothing, and you’re talking about 300 meters easy. They can shoot you from across the river. So Ignacio was pulling me and he was like, “Down. You go down. You stay behind this tree. You watch them from there. Watch out, that guy has an arrow.” He was watching everyone because you could see, he’s like, “This is how it happens.”
Lex Fridman
(00:25:08)
Did you think this might be the last day you have on this earth? Were you afraid?
Paul Rosolie
(00:25:14)
I was, yeah, of course I was afraid. I’m with my two best friends and a bunch of people that I work very closely with. And you’re in the middle of nowhere and there’s no help coming, and you’re with like—
Paul Rosolie
(00:25:26)
…you know, 26 people and there’s 50 of the tribe that you can see, and you know that they’re surrounding us. There are men on the other side of the river. And then we had guns looking back towards the jungle because we knew we were being surrounded. And again, this is always the story of someone’s uncle, brother, or cousin telling a story that happened, and now it’s happening. And it’s not happening in the shadows, it’s not happening in the middle of the night. It’s happening in broad daylight. They’re walking out onto the beach. It’s like the first time they saw the dinosaurs in Jurassic Park. You’re going, “There’s no way.”
Lex Fridman
(00:26:02)
And you’re walking on the knife’s edge. It’s funny you say this about taking pictures. ’Cause there’s two ways to think of this situation: this is fascinating, or this is extremely dangerous. And it’s both. It is a knife’s edge. So you could approach it one of the two ways. Like, if I die, I die. I’m gonna take some good pictures.
Paul Rosolie
(00:26:22)
But also we’re there—that was also our mission, you know? As the directors of Jungle Keepers, we’re working with this community to ensure that their lifestyle can continue, and they’re saying, “Hey, that’s great, but as an indigenous community, we’re dealing with these people that come out and raid our stuff, try and steal our women, that kill our hunters, and now they’re coming out. We want you to see it.” And so documenting it is part of our job. We have to show what happened that day. And so those guys were shooting and then—yes, very seriously—Mohsin’s wife and I, we always joked like, “Oh, if the tribe ever comes out, you stand in front of him, you take the arrow.
Paul Rosolie
(00:26:59)
He has kids.” And that day we were strategically positioning ourselves being like, “You, down. You cannot get killed.” And you start in those moments to go, “Okay, where will I be safe from arrows? Where can I run to the river if they come over?” And you start planning, “Okay, if I jump into the river…” I was going, “Okay, I got my bag. I have a can of tuna. I have a flashlight.” I was like, “If I jump into the river and float down and I live, I’m still days upriver.” And so you start going through all these things, but—
Lex Fridman
(00:27:32)
And of course, the Mashco Piro people are thinking exactly the same thing probably.
Paul Rosolie
(00:27:37)
Well, the interesting thing is that they’re initiating the contact, right? They are the ones coming out of the jungle and confronting us.
Lex Fridman
(00:27:44)
And fundamentally, with that contact, they’re at least giving peace a chance. They’re trying the peaceful contact first, correct? Or was there a violent element? Like what did you sense in the caution of them emerging to the beach?
Paul Rosolie
(00:28:02)
Fear.
Lex Fridman
(00:28:03)
Fear.
Paul Rosolie
(00:28:03)
As they came out, you could see fear on them because the way they were hunched over, the way they had their bows ready, they were worried. And so they came and Romel is standing there, closer than any of us at the edge on one side of the river, and it was like shirts versus skins. It was two tribes looking at each other with a thousand years of civilization between them. And Romel’s going, “Put down your bows. Put down your bows and we can talk.” And he’s saying, “Nomole, Nomole.” He kept saying, “Nomole.” He kept saying, “Brothers, brothers, please put down your…”
Lex Fridman
(00:28:35)
So Nomole means brother in a language that they might be able to understand.
Paul Rosolie
(00:28:39)
Nomole means brother in a language that they do understand, and it seems like they refer to themselves as the Nomoles. The brothers.
Lex Fridman
(00:28:48)
So potentially, that’s what they call themselves as a tribe, Nomoles?
Paul Rosolie
(00:28:53)
Exactly, and actually, the anthropologists that we’ve been speaking to since this event have been explaining to us that Mashco Piro—you know, Piro is the group that they’re from, these various nomadic tribes, and Mashco basically means like wild Piros. And so the one thing we know they call themselves is Nomoles.
Lex Fridman
(00:29:12)
So at the end of this, we might converge towards the name of this tribe being Nomole versus Mashco Piro?
Paul Rosolie
(00:29:17)
The Nomoles, yeah. It seems like the most current, or at least their self-appointed identity, is the brothers, Nomole.
Lex Fridman
(00:29:24)
Anyway, there’s these shredded warriors on the beach. They’re gigantic.
Paul Rosolie
(00:29:29)
With seven-foot arrows, and we’re all standing there. And so the first thing, again, you just think of like the peace pipe in the old stories. And the first thing is let’s make them an offering of peace. And so they got a canoe with no motor, and we piled it with plantains, like just full of plantains, 16 feet of endless green bananas. And then, I mean, the balls on this guy, the anthropologist, he gets into the river, takes the canoe—and it’s the dry season, so the river’s only about three or four feet deep at the channel—and so he walks this thing out, this one man walking in the face of all these warriors. And he takes the boat and he pushes it towards them.
Paul Rosolie
(00:30:12)
And they rush out, and they start grabbing the bananas, and they’re not going, “Okay, we will unload these bananas and use them later.” They’re saying, “These are my bananas” and “You’re grabbing your bananas.”
Paul Rosolie
(00:30:22)
And they’re fighting and they’re yelling and they’re all grabbing them, and then they push the boat back and he talks to them a little bit. Again, it’s not a perfect translation. So he’s saying, “Where have you come from? What do you want? Who’s your leader?” And he’s trying to establish these things, and they’re saying things, and they all sort of talk at the same time, like a flock of birds. It wasn’t like one man speaks. And there were no women. The women were nowhere to be seen. And actually, at one point as we were preparing the second canoe of bananas, there was a moment of absolute panic.
Paul Rosolie
(00:30:58)
And it happened when there was a noise behind us and you just hear a bunch of shotguns swing around. Mohsin goes down. I go running away from the river now because I want to see it coming if there’s an attack coming. And I’m standing there, me and this guy were sharing a tree as cover, and he’s got a shotgun and he’s looking back into the forest and peering through. And what was happening was the women of the tribe had come silent-foot and they were just pulling the yucca out of the ground and taking the banana plants and ruining the farm completely. They were raiding the farm behind us while the men were talking up here. So again, were they peacefully contacting us or were they like, “Hey, we need some food, so go make a diversion and take the food out the back”?
Lex Fridman
(00:31:42)
So you really were surrounded.
Paul Rosolie
(00:31:44)
We were completely surrounded.
Lex Fridman
(00:31:46)
So they could have murdered all of you, probably.
Paul Rosolie
(00:31:51)
Easily. We were outnumbered five to one at the least.
Lex Fridman
(00:31:54)
Yeah. And it’s probably fair to say that part of the reason they didn’t—maybe they wanted peace, but part of the reason is they didn’t know how deep this goes. They didn’t know if you have backup.
Paul Rosolie
(00:32:04)
They don’t know if we have backup. They also had questions. Some of their questions were incredible. “How do we tell the difference? How do we know who the good guys and the bad guys are?” Because to them, all you outsiders are the same. So, who were the ones cutting down the trees?
Lex Fridman
(00:32:22)
And those are the ones they know are the bad guys.
Paul Rosolie
(00:32:25)
Well, the big trees seemed to have incredible significance to them. They’re significant to us in a different way, but to them, it’s offensive on an almost religious level to cut a big tree, as if you’re killing their gods.
Lex Fridman
(00:32:40)
So there’s a spirituality to the trees to them.
Paul Rosolie
(00:32:43)
It seems like that.
Lex Fridman
(00:32:43)
And so whoever’s cutting them down is a source of destruction on a spiritual, existential level.
Paul Rosolie
(00:32:50)
Yeah. “Why would you destroy our home?” And I think they’re right.
Lex Fridman
(00:32:54)
Yeah. In a deep sense, the uncontacted tribes represent the deep jungle. And so if they’re threatened, that means the deep jungle is threatened.
Paul Rosolie
(00:33:06)
Yeah. I mean, they are the human voice of the jungle. They’re asking questions and they’re also demanding. They’re clapping at us and waving and saying, “Send more bananas.” And so they loaded up another boat and pushed it out, and this time they gave them some rope. They all had rope tied around their waists. But they love rope, and some of them were wearing rope that they had made, which is brown or reddish. And then some of them were wearing rope that they had clearly pillaged from logging camps or the communities because it was modern nylon paracord. They had this wound around their waists like a thick belt. And they took the second boat, and they had some rope and some plantains on there.
Lex Fridman
(00:33:47)
So some of these guys might have been the ones that murdered the loggers.
Paul Rosolie
(00:33:51)
Could be.
Lex Fridman
(00:33:51)
From a couple of months before that.
Paul Rosolie
(00:33:53)
Absolutely, could be. But what Romel said as he was talking to them, he turned to us and he said, “You know, this group… the other groups call me the Grandfather. This group, I don’t know any of these. This is first contact. This is the first time this group is talking to us.” And you saw people from maybe 12 years old to what looks like 40-something, like a banged-up 40. And no really old people and no women.
Lex Fridman
(00:34:22)
So this is a particular clan of the uncontacted—
Paul Rosolie
(00:34:24)
It’s a particular clan.
Lex Fridman
(00:34:25)
… tribe who they’ve never contacted. Yeah, is there, just from your memory, interesting aspects about the way they were trying to communicate? Like you said, clapping. I think it’s, from an anthropology perspective, from a human perspective, fascinating. How do you talk to people from an uncontacted tribe like this? So clapping, yelling. It’s interesting to know that there’s not a hierarchy where there’s a leader that represents them.
Paul Rosolie
(00:34:49)
Well-
Lex Fridman
(00:34:49)
Or is that something we know for sure?

Never-before-seen footage of tribe warriors

Paul Rosolie
(00:34:51)
Before even coming to talk to you about this, we passed this through anthropologists and ethicists and people, and we said, “Look, can we even talk about this?” Because if you talk about this and you tell people there are these uncontacted tribes, people have misconceptions. They go, “They’re the last free people on Earth. They’re living the real life. We need to go join them. We want to see them. We want to photograph them.” There’s all this bad stuff that happens, and all these people want is to be left alone. So, the last thing we want to do is kill the thing we’re trying to protect by telling the world. But at the same time, they’re speaking out. They’re saying, “Stop cutting our trees.
Paul Rosolie
(00:35:25)
Leave us alone.” And so if we’re not successful in the greater Jungle Keepers’ mission of protecting this river, they cease to exist. And so advocating for these people requires us to have this conversation. It requires us to have this footage and to show the world, and then leave them alone. In order for any of this to make sense, I have to show you this footage.
Lex Fridman
(00:35:46)
And this has not been shown ever before.
Paul Rosolie
(00:35:49)
This is a world first. I mean, up until now, that’s the other thing. You know, we’re sitting there this day and the only thing you’ve ever seen are these blurry images of them from someone’s cellphone from 100 meters away. And we’re sitting there with 800-millimeter lenses with a 2X teleconverter and R5s. And so this is as we’re looking through the binoculars, anticipating the tribe coming. I’ll put a little bit of volume so you can hear it. And then you can see, this is the moment. This is us running when they’re like, “They’re out. They’re coming down the beach.”
Lex Fridman
(00:36:25)
Oh, wow. Oh, wow.
Paul Rosolie
(00:36:31)
You see how many thousands of butterflies? But look at the way they move. Look at the way they point. Look at him with his bow.
Lex Fridman
(00:36:39)
Wow.
Paul Rosolie
(00:36:45)
There it is.
Lex Fridman
(00:36:48)
They’re trying to figure out what they’re looking at.
Paul Rosolie
(00:36:54)
And they didn’t know what the cameras were there for. So this was the guys looking out the back. So he’s going, “There’s something back here.” You can hear the women in the woods. And I’m looking in every direction because I’m going, “Which way is the arrow coming from?” But see, he has his shotgun. This is just like a farm shotgun. Even if he shot it, you have to use a stick to bang out the shell. But see, as they come closer, they start laying down their… See, he’s laying down his bow and arrow. They understand.
Lex Fridman
(00:37:22)
So these are warriors, and the way they were at first moving, it really looked like they’re ready for violence. And now they’re all standing in a relaxed- And they’re smiling? Are they smiling?
Paul Rosolie
(00:37:33)
Smiles come at some point. I would say that one of these guys seemed like he was in a leadership position. He did most of the talking.
Lex Fridman
(00:37:43)
What’s with the different hand gestures? This holding your hand up to the face—all of this means something.
Paul Rosolie
(00:37:51)
All of this means something. Some had red smeared on their faces. Some had yellow.
Lex Fridman
(00:37:55)
Did you have a sense of hierarchy at all, like the boss?
Paul Rosolie
(00:37:58)
Again, there were just these two dominant guys. And this guy and one other guy who looked almost like him, like his brother. A lot of gesturing.
Lex Fridman
(00:38:10)
Wow. This is incredible, Paul.
Paul Rosolie
(00:38:22)
Yeah. You see the rope? Some of that rope is…
Lex Fridman
(00:38:31)
Yeah, I can kind of tell who the bosses are.
Paul Rosolie
(00:38:33)
Right? All right, so a few of the… But see, even that, as he’s pointing- … with them, what are you pointing at?
Lex Fridman
(00:39:02)
You guys are nuts. You guys are nuts.
Paul Rosolie
(00:39:07)
You see as they’re rushing in, there’s this desperation. They’re hungry. They also-
Lex Fridman
(00:39:12)
Is that in the water, or is that Ramo in the water in that case?
Paul Rosolie
(00:39:15)
In this particular video, it’s a guy named Liner. But see these guys? They’re fighting over it. It’s not that we’re all going to share it later. It’s, “I get mine, you get yours.” And so what does that mean?
Lex Fridman
(00:39:27)
Yeah. But here, they’re in peaceful mode, for sure.
Paul Rosolie
(00:39:31)
Now, after we’d given them several boatloads of bananas, things did calm down. Ramo said to them, “Look, we’ve given you what we can give you. We gave you sugarcane. We gave you boatloads of plantains.” And so then there came a time where things were a little more relaxed. They were walking around. We had a great moment where we’d given them the plantains and the bananas, and he’d said, “Look, that’s it. We’ve given you what you asked for. You asked for bananas. We don’t cut the trees here. All of us here are not tree-cutters.
Paul Rosolie
(00:40:09)
We’re indigenous people.” And he couldn’t explain who the hell we were, but they were like, “We don’t cut the trees. We’re not the loggers.” And they’re like, “Okay.” So then at some point, Ignacio went out and started, you know, he’d go like this and they’d go like this. He’d dance a little bit, they’d dance a little bit. And then there was this very human moment of just sort of joking.
Lex Fridman
(00:40:30)
So even Ignacio warmed up.
Paul Rosolie
(00:40:31)
Even Ignacio warmed up. Once he realized that it didn’t seem like anyone was going to die that day, things did calm down. It was a false sense of security. Here, I’ll show you. There’s a couple more things that are relevant here, though. This is just them interacting with the boat.
Lex Fridman
(00:40:48)
This is truly incredible, man.
Paul Rosolie
(00:40:51)
But then they don’t have boats. They don’t have stone tools. They don’t… Imagine if you showed them ice. You know, they wouldn’t…
Lex Fridman
(00:40:59)
This is historic.
Paul Rosolie
(00:41:03)
I mean, you hear of Percy Fawcett encountering the tribes. We’ve heard anecdotal accounts of the tribes. This is the first time that the tribes have been filmed, that we can hear their voices, that there’s a documented interaction happening. I mean, look how comfortable he’s getting. He’s so close. They asked him for his shirt. He gave his shirt.
Lex Fridman
(00:41:24)
This is incredible.
Paul Rosolie
(00:41:25)
They asked him for his pants. He gave his pants. He was in his underwear. You see this? The shirt that’s over his shoulder. Ignacio took off his JungleKeepers shirt and threw it to the anthropologist, and then the anthropologist walked off and threw it to them. So over the shoulder of that uncontacted naked warrior is a JungleKeepers shirt with the logo showing.
Lex Fridman
(00:41:47)
That’s great.
Paul Rosolie
(00:41:47)
So their second shirt and they’re…
Lex Fridman
(00:41:49)
You just upgraded that guy’s status in the tribe. He’s gonna be the new boss with that shirt.
Paul Rosolie
(00:41:54)
He’s got a dope polo. Yeah, and he didn’t even have to order it. But yeah, this is in the aftermath when things were calm. And my sort of moment with this that really stuck with me was when Ramo said to me, “They’re asking about you.” And I said, “Me?” And he goes, “Yeah, they’re asking about you.” Again, I’m not tall, but compared to the people in the village, I was a little bit taller with big shoulders. And he said, “They said you look like a warrior. Could you come forward? Show them that you don’t mean any harm. Show them your palms.” And so he pulled me up onto the beach. This was right before they left. See, I hold up my hands. Listen. And they sang back. They’re singing. They raised their hands. I raised my hands.
Lex Fridman
(00:42:50)
Wow.
Paul Rosolie
(00:43:08)
And then we were left watching them walk off the beach into the jungle with everything that we’d given them, and they were gone. And so we went downriver the next day and the community said to us, “Okay, now you understand this is real. This is terrifying. You felt that fear. You have a duty, if you’re going to protect this river, to protect us from them and to help us figure out what future they want. If they want to come to us, if they want to learn farming, whatever it is, that’s fine.” But they were like, “We need protection from you guys.” And then in this video in the beginning, I’m narrating to the camera and walking around right as they’re coming up the beach. But you see this guy, right there in the blue shirt?
Paul Rosolie
(00:44:00)
That’s George. And he was very friendly, very confident with this. He said, “Don’t be scared. They’re not going to hurt us.” And the next day, we went back to town—a long journey back to town. We go to sleep, we wake up in the morning, and we find out that the following early morning, our friends in the community had said, “Okay, the tribe is gone. We gave them all the things they wanted. We gave them sugarcane, bananas, and we said, ‘Please come back, you’re welcome here anytime.'” And George was driving a boat with people on it, and as they were going upriver, 200 of the tribe ran out, surrounded the boat, and they started firing arrows.
Paul Rosolie
(00:44:34)
And everybody else could hit the deck and get under the benches and hide behind bags of rice. George was driving and he was leaning back as he was driving as fast as he could. And one arrow came in just above his scapula and came out by his belly button. And so he had that seven-foot arrow tip through him. And so they pulled him out—and I saw the boat afterward, and there’s just horrific amounts of blood all over the boat. And he had to be medevacked out, and somehow he lived. We were able to help get him a helicopter, getting him evacuated, all this. But again, you just go, “What?” You know, these people came out of the jungle and they asked for bananas.
Paul Rosolie
(00:45:16)
We gave them bananas and we, in every way possible, said, “We mean peace. We want friendship with you.” And then the next day they attacked.
Lex Fridman
(00:45:28)
What do you think happened? Why do you think their mind turned? Or maybe this has to do with the role of violence in their society. Maybe it’s so integrated into how they interact with the world that they don’t even see that as a fundamental shift in the interaction.
Paul Rosolie
(00:45:50)
I don’t know. I don’t know what to make of it. And the only thing I can think is that the way they hid the women from us, you don’t know—for them, maybe we’re not allowed to see their women. Or because the one thing that we got was that as George’s boat and this other boat were going upriver, which is how they live—it’s not like they were doing anything wrong, these people live in a community days into the Amazon and were going fishing—they came around a bend and I think they spooked the tribe. The tribe might have just acted defensively and said, “We don’t know who this is.” The motors could have set them off, we don’t know. But they shot him. And then the other thing is the thing with the necklace.
Paul Rosolie
(00:46:29)
I’ve asked anthropologists about this, and their answer at this point was, “You know more than we do.” Because two of them had the exact same item around their necks, and it seems to be a Brazil nut and then some sort of casing around the side, and it looked like animal teeth positioned in there. And it’s like, what are you carrying? Are you carrying medicine? Are you carrying some sort of a totem? But both of them had it, and it’s not a comfortable thing to wear around your neck—it was grapefruit-sized or bigger.
Lex Fridman
(00:47:03)
Do you have a sense if that’s a container or is it just like a totem?
Paul Rosolie
(00:47:08)
It seems like a container. They didn’t let it get wet; they cared for it. The guy in this picture, he’s got this piece of tree fiber that he has on him, and then he’s gotten his hands on Brazil nut sacks—plastic sacks from one of the farms across the river. And so they just take, they take, and one of them got a machete. As they were leaving, again, during that period where he got friendly, he was leaving and he had the machete and was playing with it and swinging it at butterflies. And one of my friends, this guy Bacho, he goes, “Oye, deja mi machete.” He’s like, “Drop the machete.” And the guy just looked at him and was like, “Yeah, come and get it.” It’s like, “Yeah, you cross the river and see what happens.”
Lex Fridman
(00:47:46)
Do you think he figured out or they later figured out how to use a machete?
Paul Rosolie
(00:47:51)
Oh, they know machetes.
Lex Fridman
(00:47:52)
They understand the machete?
Paul Rosolie
(00:47:53)
Yeah, they do raids for machetes.
Lex Fridman
(00:47:56)
They understand the power of sharpened metal.
Paul Rosolie
(00:47:59)
I mean, it’s an Excalibur sword to them. But that one has stuck with me because I wonder, what were they carrying in there?

The mysteries of the jungle

Lex Fridman
(00:48:08)
So what are some of the questions, if you could know everything you’d want to know about them? Maybe in the space of communication and language; that’s really interesting. You mentioned that there are all kinds of calls, animal calls. So they obviously know how to mimic animal calls.
Paul Rosolie
(00:48:25)
Yeah, they can use animal calls with enough complexity that they can do basic commands. They can speak in Capuchin; they use Tinamou calls. Some of our rangers were upriver recently, and they found a Nomole trail, a Mashco Piro trail. It was Ignacio, of course, and he made a secret whistle they do. He whistled out into the jungle and he’s listening, and they whistle back. So him and everybody on the team just ran back to the boat and got out of there. But at least they answered. They didn’t just shoot. He whistled, they whistled, they said “out,” and he got out.
Paul Rosolie
(00:49:10)
But it’s like we don’t know: where are the old people? Do they not survive? What are the marriage rituals? How is reproduction handled? There are one or two children in the Amazon that I know of who washed downriver on a log and were rescued by communities and raised. They either learn the native dialect or Spanish, and then at some point, somebody will ask, “What was it like when you lived with them?” And the answer is always the same: “I forget.” They don’t talk about it.
Lex Fridman
(00:49:46)
So maybe we know that they value secrecy. I mean, when you’re afraid of the outside world, part of that is confidentiality. They all sign NDAs.
Paul Rosolie
(00:49:57)
Yeah, there’s some really good NDAs.
Lex Fridman
(00:50:00)
It’s understood. It’s an NDA. There are no lawyers; there’s only one way to execute the law.
Paul Rosolie
(00:50:08)
Yeah. It’s either a really strong NDA or just that it is savage living out there in the jungle. You’re eating monkeys and turtles, and you’re hungry for days on end. Your wife might get stolen by another tribe; your baby might get stolen. Imagine the botflies and the things they must put up with. I mean, what we experienced in three days of living out with modern camping gear and headlamps, they’re doing none of that. You could put us out there naked, and it’s a very different story.
Lex Fridman
(00:50:43)
Yeah, the brutality of nature; Werner Herzog comes to mind. They have to live in that. But then, there must be something about the jungle that serves as a catalyst for spirituality, so they must also have a religious component, a spiritual component that probably unifies them. There must be an ideology they operate under.
Paul Rosolie
(00:51:06)
Oh, there must be. They probably have a belief system. They probably have amazing origin stories. It would be amazing to know what things they have accurately and inaccurately guessed about us, about the outside world. I mean, they’ve never heard of the country they live in or of World War II or any of it. And so seeing them come across the beach was surreal because it’s like this aperture into history.
Lex Fridman
(00:51:36)
By the way, you do have a certain look, so you realize, as I’m saying this to you, your face is carved in some wood somewhere. And there’s a few of them gathering around and still singing about the great gringo with the—
Paul Rosolie
(00:51:50)
The full beard and the big nose. They probably drew this like he’s got hair all over his face and a huge nose, and they tell their children.
Lex Fridman
(00:51:57)
Yeah. And it could be anything. To the children, they say, “This is the monster you should be afraid of,” or you could be the most beautiful encapsulation of the outside world. It could be everything in between. You don’t get to control the myths.
Paul Rosolie
(00:52:12)
You don’t get to control the myths. Yeah, God only knows, but I mean, it’s—
Lex Fridman
(00:52:16)
That’s so interesting.
Paul Rosolie
(00:52:17)
So now in that 130,000 acres that we have, we know—and this is what we sort of have to come out of the closet with—we are now protecting these people. And the only way to do that is to make sure that they’re not contacted, that they don’t get machine guns shot at them by the narcos, and that crazy hippie gringos don’t go down there thinking they’re going to join the coolest commune on Earth.
Lex Fridman
(00:52:46)
So how much of the land that they move about is within the 130,000 acres of rainforest you’ve been able to save? And how much of it is not? How much of it is in the extra 200,000 acres that you’re trying to save?
Paul Rosolie
(00:53:01)
Most of that 200,000 that we’re still trying to protect is territory that is theirs. People always ask me this. They’re like, “How could you buy the Amazon? That doesn’t make sense.” And it’s like, well, I have bad news for you. Somebody already owns it and we have to buy it from them so that they don’t log it. These landowners are going to sell their forest to the logging companies because owning 10,000 acres of the Amazon doesn’t help you if you’re a third generation jungle man. If you live in the city, they’re going to contract either the narcos or the loggers or the miners to go out there and use it, and they’ll get a little money. And those people, when they see these tribes, will kill them. That’s for sure. Shotguns and machine guns in the end will win, not to mention the germs.
Lex Fridman
(00:53:53)
So all the money you’re trying to raise and all the land that you’re trying to save, it’s all towards that, protecting the deep jungle. So when you buy up the jungle, you just want to let it be, let the natural ecosystem come back to life in the cases when it was logged or just flourish— …if it hasn’t?
Paul Rosolie
(00:54:12)
Again, we’re talking about the last great jungle. I always called it the last endless forest because this place is so incredibly remote. The other question I always get is, “Why is this river so important?” For my whole career, 20 years in the Amazon, it’s been that it’s massively intact forest. Places like the ancient forest where the trees have never been cut, so it’s forest that’s been growing since the dawn of time. Thousands of species can be on a single Shihuahuaco tree. It’s Avatar on Earth. You can see the sweat come off your skin and rain down and then drink it out of the river; you’re part of the chemical and physical reality there. It’s one of the last places that’s untouched.
Paul Rosolie
(00:54:58)
This changed everything because we realized that along with the butterflies and the monkeys and the jaguars and the trees and the ecosystem, there’s also a human culture that will, in the next few years, cease to exist, that will be exterminated if we don’t protect them. When you look back at what happened to indigenous cultures all over the world over the past few centuries, we collectively now have a chance to undo all of those injustices by at least doing one right—by saying these people want one thing: to just be left alone. Imagine if we just protected the river. Then it’s not that they’re this thing that’s vanishing from reality, but they get to continue living that way.
Paul Rosolie
(00:55:47)
And if they want to come out and contact us, great, and if they want to continue living like this for the next 10,000 years, they can. That’s what we’re working with now. It’s become so much more important than just trying to protect the environment. It’s like protecting Yellowstone or Yosemite or the sequoias that occur nowhere else on Earth. You protect the things that are unique and special, the crown jewels. In both a biological way and an anthropocentric way, this has now become a river with global historic significance because this story is going to play out in the next 18 months.
Lex Fridman
(00:56:30)
You’re trying to save more and more rainforest, pushing further and further out. And the mission is clear, because there’s just this deep jungle that’s full of this incredible life. And now we know, with the uncontacted tribes, there are a lot of interests that don’t care about the jungle; they’re pushing, and they want to cut it down, want to destroy it. And the mission is pretty clear. You just want this whole territory to be preserved.
Paul Rosolie
(00:56:56)
Yeah. And that’s what makes it so beautiful is that this is one of those crown jewels. This is one of those special places on earth where it’s like a time capsule for nature, for human culture, for biodiversity, for climate services, for everything. And then, you know, I think people get overwhelmed when you say, “Okay, we have to save the environment. We have to save the ocean.”
Paul Rosolie
(00:57:20)
This is one watershed. It’s 300,000 acres and we’re already at 130,000. We’ve shown we can do it. The loggers are happy to turn into rangers. People all over the world have become Jungle Keepers supporters. We have several thousand people that every month give us between five and a thousand dollars, and that keeps the rangers going, that employs the local people. So it’s not just drawing a line and making a park and saying, “Everybody stay out.” No, you have the Nomole, you have the indigenous people, you have a future for the indigenous people where their kids don’t have to worry about eating monkeys. They can be park rangers.
Paul Rosolie
(00:57:57)
And I get blowback from people right away where I say, “And people can even come see it through the treehouse.” And people go, “Oh, are you going to bring tourists into the wildest place on earth?” And it’s like, man, look at that jungle. There’s 300,000 acres of that, and we’re talking about two blades of grass on a football field that we access so people can see it, which makes a huge difference. And so the fact that we can share it with people… Look, since the first time I came here and spoke to you, the amount to which you’ve made it possible for us to protect this place, the amount of spider monkeys and jaguars and giant anteaters and those ancient millennium trees that you’ve made it possible to protect is monstrous. And so—
Lex Fridman
(00:58:43)
Thank you, brother. It’s been—
Paul Rosolie
(00:58:45)
No, thank you.
Lex Fridman
(00:58:45)
—it’s been an honor of a lifetime to be able to watch you. I tell this to a lot of people: there are certain people I’m glad exist in this world, because you’ve educated me and millions of people about the beauty of the jungle and how important the fight to save the jungle is. So if you’re listening to this, please go to junglekeepers.org. Donate, post about it, share it with friends. You’re also doing a gala in New York at the end of January. So if you can, please go and donate to help save the jungle.
Paul Rosolie
(00:59:25)
Yes, please do. Because our first conversation led to the first surge, where people realized what Jungle Keepers was, and because we got this surge of support, we were able to expand our work and protect more acres. A lot of our major donors and small-scale donors came in because of that. So these are people that went, “Wait, if Lex thinks it’s a good idea, then we’ll do it.” I think they came in based on your trust.
Lex Fridman
(00:59:50)
I guess I should also say it’s not enough to speak and communicate the importance of saving the rainforest. You actually have to have incredible people there making it happen. And we have talked, and we’ll talk more, about the dangers and the complexities involved in navigating everything. And one of the things, and the reason I’m really excited about what you’re doing, is that I just got to meet the team, and it brings a smile to my face, because several of the people I met are extremely competent. Stefan, somebody we’ve talked about—
Lex Fridman
(01:00:23)
Yes, he likes to take pictures of stuff, but primarily the thing he does incredibly well is run everything: organize everything to make sure that stuff happens, and happens quickly and efficiently. These are the kinds of things that are required to make stuff like this happen in the complex, sometimes lawless environment that the jungle operates in. So the team is incredible, which is why, when you connect the money to the mission and ask how the money leads to the solution of the problem, the answer is the team, and the team makes it happen.
Paul Rosolie
(01:01:07)
I didn’t know that people like Stefan existed.
Lex Fridman
(01:01:10)
Yeah, me neither. When I met him, I saw he was a beautiful, wonderful human being.
Paul Rosolie
(01:01:16)
I’m, you know, again, I can use a machete to catch a fish. But his systems knowledge and his ability… I mean, his bandwidth is the size of a country. It has its own area code. Just like JJ opened the door of the Amazon and gave us that local indigenous perspective—I mean, yeah, okay, I told some stories about it, but Stefan came in and went, “Okay, you guys have good ideas, but you’re both jungle guys.”
Paul Rosolie
(01:01:44)
“You’re not helping each other.” And running those systems, making the website, and making it possible to connect the people that care with the indigenous ranger program and make sure the rangers have shirts and cans of tuna and that there’s a person running the ranger team—I mean, these are things that I couldn’t dream of organizing. I can’t even make my bed. You know, I can’t even get that far.
Lex Fridman
(01:02:06)
Caveman want fish.
Paul Rosolie
(01:02:07)
Caveman want fish.
Lex Fridman
(01:02:09)
Watching you hunt for fish with a machete is one of the most awesome things I’ve ever seen. You were literally able to catch a fish with a machete. So that’s what you’re good at. And then Stefan is good at everything else.
Paul Rosolie
(01:02:22)
Everything else. You remember the Most Interesting Man in the World? And they’re like, “You know, he once had an awkward moment just to see how it felt.” And it’s like, Stefan’s to-do list doesn’t exist because it’s already done. It’s just incredible.
Lex Fridman
(01:02:35)
Quick pause. Bathroom break.
Paul Rosolie
(01:02:36)
Oh, 100%. I’m so happy about that. Yes, sir.

Tribe’s diet: Monkeys, turtles, and turtle eggs

Lex Fridman
(01:02:41)
And we’re back. One thing I forgot to ask you about is the diet of the uncontacted tribes. You mentioned potentially monkeys and turtle eggs? So, what do we know about what they eat? What’s the source of protein? Do they eat monkeys?
Paul Rosolie
(01:02:59)
Oh, yeah. Their primary sources of food, I would say, would be monkeys, turtles, turtle eggs, and small game like the paca, the large rodent that’s about the size of a beagle. Capybaras. Stuff they can shoot. They don’t really fish. And we know these things because our indigenous trackers and our rangers find their camps, and so they’ll find some of those little thatch structures they make on the beaches and we see the bones. There’ll be tapir bones. There’ll be turtle shells, which seem to be their closest thing to a bowl. The day that we interacted with them, they did find a bowl. We saw them walking away with it at one of the farms, and then days later we found it destroyed. So, they didn’t seem to see much utility in the bowl.
Lex Fridman
(01:03:47)
It’s a temporary container.
Paul Rosolie
(01:03:49)
It’s temporary. So, they kill it. They make a fire. They must be amazing at making fire. I don’t know how they do it out there.
Lex Fridman
(01:03:56)
It’s very difficult because everything is wet.
Paul Rosolie
(01:03:59)
I don’t know how they do it. And I’m a really good firestarter.
Lex Fridman
(01:04:02)
And it’s tough in the jungle.
Paul Rosolie
(01:04:03)
It is almost impossible most of the year because everything is wet to its core.
Lex Fridman
(01:04:09)
So you think they cook the meat?
Paul Rosolie
(01:04:11)
I mean, they have to be cooking their meat, from a parasite standpoint, from everything. We know they’re cooking their meat; we see that they’ve cooked it. You know, there’s not a lot of excess berries. Things like berries and nuts and fruits, the monkeys and the birds and the bats are getting to those first. I mean, that’s what fruit does, right? A tomato is green until its seeds are mature, and then it turns red to advertise, “Eat me,” so that you eat it, and then your gut transports the seeds somewhere else and they get free transportation. In the jungle, that happens so quickly that we’re never getting produce.
Lex Fridman
(01:04:45)
In the book, you have a picture of a native girl on the Las Piedras having monkey for lunch.
Paul Rosolie
(01:04:52)
Yes.
Lex Fridman
(01:04:53)
It looks really strange. Eating the monkey looks a little bit like cannibalism, because it looks like a small human. I don’t know what it is about monkeys. There’s a human element to them. In their eyes, in the form factor, but even in the warmth they bring to the interaction.
Paul Rosolie
(01:05:22)
Yeah, I was babysitting her and she was six at the time, Dira, and her parents went out and we were left at camp. And they just said, “You know, keep an eye on her. Make sure nothing eats her.” And I said, “Sure.” And she was like, “Hey, I want lunch.” And I said, “Great. Well, what is there?” And she pulls out this monkey head and she was like, “It’s ready,” and she starts pulling at the ear. And she’s like, “I can’t get the ear. Can you help me?” So I pulled off the ear with my teeth and then I gave it to her, and then we just shared this monkey head back and forth.
Paul Rosolie
(01:05:51)
And we’re sitting there and I took a few pictures of her as she’s eating. And I have this video where I go, “What’s your favorite food?” And she was like, “Monkey.” And I said, “Not cake?” And she was like, “Monkey.” And she was pulling its lips off and, like you said, you see the teeth and the eyes and it’s sort of grilled into static agony. And it looks like a tortured human and she was just enjoying it.
Lex Fridman
(01:06:12)
Let me look it up on Perplexity how many people in the world eat monkey. Does it taste good?
Paul Rosolie
(01:06:24)
If it was prepared right, it would taste good, but they just throw it over the fire and then eat it. So, even if you took a perfectly good chicken and did that, it wouldn’t taste great.
Lex Fridman
(01:06:33)
There’s no reliable global count of how many people eat monkey meat, but available data suggests many millions of people regularly or occasionally consume primate bushmeat, especially in parts of Africa, Latin America and Asia. I mean, she looks like that is her favorite meal… is monkey.
Paul Rosolie
(01:06:53)
Yeah. Yeah, we had a great time.
Lex Fridman
(01:06:55)
Who are we to judge?
Paul Rosolie
(01:06:56)
Who are we to judge? I mean, have a tuna sandwich or a monkey face, whatever.
Lex Fridman
(01:07:02)
She’s loving it. That’s awesome. That’s a good picture there.
Paul Rosolie
(01:07:05)
And she’s adorable.
Lex Fridman
(01:07:06)
Yeah. Now that some time has passed, when you look back at that encounter, which I really do think is historic, with the uncontacted tribe—what do you think about? What lingers with you?
Paul Rosolie
(01:07:19)
Honestly, I’m still processing it. I’ll still find myself just staring off, sort of remembering it or looking at the footage. But it felt like the voice of the jungle was speaking. These people are… there’s that separation between humans and nature where we go, “We have to protect nature,” you know? It’s like explaining what water is to a fish. We’re part of it. We depend on it. And these are people that depend on it 100%. And as we sit here surrounded by technology and concrete and civilization, they’re still out there right now. And the fact that we’ve been trying to protect their home without even really knowing that they were in it, because they’re so elusive, it gives you perspective on where we came from and how far we’ve come.
Paul Rosolie
(01:08:14)
I look at simple things. You board an airplane or you take a picture and you go, “This is a miracle.” I think having that perspective of having interacted with them where you go, “How much work does it take to make this?” If you and I were standing in the jungle and somebody said, “You have to make this,” how many years before we came up with this? How many rubber trees, and where would we get the metal, and what would we use as dye, and how do we make the spring mechanism and figure out how to make it work? I don’t know. They are working with the bare essentials. So it’s an interesting reference point to start at in terms of how incredibly privileged we are.
Paul Rosolie
(01:09:00)
The other thing is we have written text, we have so many different types of text, and we have code, and we have language, and we have music, and we can communicate in all these different ways. And they have spoken word. They have oral tradition, and that’s it. And so they’re operating the way our ancestors did, persisting in modern times. I think, for me, I come back to the world and it moves very fast when I see it because I’m still stuck on, you know, whether or not you and I can drink out of that puddle. You know? And thinking about that.
Lex Fridman
(01:09:42)
The big questions of life.
Paul Rosolie
(01:09:43)
The big questions of life.
Lex Fridman
(01:09:45)
Yeah. You’re right from the perspective of the uncontacted tribe. Going from the technological world to the jungle, you realize the majesty, the magic of the biological system that is the jungle, that is nature. But from their perspective, also there is a majesty and magic to the technological world. The human-created technological world of the pen and the computer and the light bulb, that too is magical. So sometimes we don’t give enough credit to both: the magic of the technological world, all the incredible things humans have been able to build, and the magic of the natural world.
Paul Rosolie
(01:10:34)
I think you and I and people that spend large amounts of time in the wilderness, especially somewhere as remote and fundamental as the Western Amazon, have a different perspective on it. Because I think that when you’re born in it, you don’t necessarily have the framework to appreciate how far we’ve come. You go, “Yeah, I got on the train today. I checked my phone. I FaceTimed my mom,” and you’re like, “This is all normal.” It’s like we found a way to take things out of the ground and mix them together into magic devices that can do anything. It’s mind-blowing.
Lex Fridman
(01:11:14)
There’s a deep optimism to that. And you actually write in the book, which I really like, I think somewhere in the beginning, quote: “Given all the death and destruction I’ve witnessed, it would be easy to slip into the popular anti-human narrative that we are a plague on the planet and there’s nothing that can be done, but my career in conservation has given me a glimpse into an alternate narrative. I’ve met people who are proving more and more that something can be done. I’m talking about real heroes, people who have dedicated their lives to redeeming the evil that is capable of being waged by the human soul, people who are guarding the flame amidst the storm, proving every day what so many have forgotten.

Jane Goodall

Lex Fridman
(01:11:56)
There is still hope.” And that speaks against the cynicism and maybe apathy and the view that humans are a destructive force in the world. That speaks to the fact that humans, with all the technological elements that we have created, can actually do a lot of good. I wrote in my notes here a quote from the great Jane Goodall: “The greatest danger to our future is apathy.” So caring about the world, having optimism for the world, having hope for the world is the way to help have an impact, help save it. But on that, I have to ask you about Jane. She passed away on October 1st. Some humans in this human civilization of ours can open our eyes to the beauty of the world, and she is one of the best of them. And she’s had an impact on your life. Maybe can you speak to the impact that she’s had?
Paul Rosolie
(01:13:03)
I mean, when I grew up, being dyslexic, I couldn’t read for a very long time. And so my parents read to us every night, which was amazing considering how hard they were working. But they’d find the time to give us an hour of reading every night, whether it was Lord of the Rings or Sherlock Holmes or Jane Goodall. And so I grew up with Jane being this figurehead of conservation and of adventure and sort of a living historical figure, this legendary person. And so then one time, right around the time that I’d been going to the jungle for a few years, I got to go see Jane speak, I think it was at NYU. And sitting in the crowd, watched her, completely amazed.
Paul Rosolie
(01:13:47)
And I had, at the time, my cousins had been telling me that I should write down my stories, the stories of taking care of an anteater and stories of catching anacondas. And they’re like, “Write, you know? These are such good stories.” And so I’d been writing them down and I just remember after the talk, she did at least an hour on stage and then hundreds of people lined up, and she sat there and each of those people wants a moment with this legend.
Paul Rosolie
(01:14:18)
And so she has to take a picture, shake their hand, they say, “You mean so much to me.” She says, “Thank you.” And then they move on and they say, “We’ll send you the picture.” “Okay, great.” And so then I got my moment and we waited in line for a long time and I gave her this manila envelope with two chapters in it. One chapter was Lulu the Giant Anteater from Mother of God, and the other chapter was me, JJ and Pico out on the river catching anacondas, just talking about how amazing the jungle was. And I said, “I’d love it if you could endorse my book that doesn’t exist yet.” And I felt like such a loser doing that.
Paul Rosolie
(01:14:47)
And I felt so stupid because I feel like everyone was probably asking something of her and it’s incredibly draining to talk to that many people, even if it is for a good reason. And 48 hours later, she got back and she said, “This is incredible. I would love to write a recommendation for your book as soon as you find a publisher.” And what happened with that is that Jane, the way I think of it is, she waved her very powerful magical wand in my direction, and she had the incredible compassion and presence to actually—I mean, after talking to that many people and being on the road 300 days a year and being Jane Goodall, this living legend scientist, to actually do something so mundane as look at some kid’s writing.
Paul Rosolie
(01:15:36)
And of course when I went to publishers they said, “Jane who? Who said that they would endorse your book?” Because everyone had said no. Every publisher in New York had already said no. And then after that, HarperCollins took me on and they said, “Well, if Jane Goodall thinks it’s a good idea, then we think it’s a good idea.” And it became Mother of God and then because of that, Jungle Keepers, Dax, everything else stemmed from that. So had Jane not been the legend that she is truly in every moment, my whole career would never have happened, which also means that those thousands of heartbeats and thousands of acres in the Amazon wouldn’t be protected because we never would’ve started Jungle Keepers.
Lex Fridman
(01:16:17)
And she did that not because you’re special, she did that to everybody. And now just imagine the scale, the impact she’s had because of that. And guess what? You have a bit of that responsibility now as well. There’s young people that walk up to you in that way and you have that responsibility of seeing them, of giving them a chance, seeing the potential in every single human being that walks up to you.
Paul Rosolie
(01:16:45)
It definitely… I would say that we could do four hours on just Jane, what she did for humanity, what she did for science, what she did for women, what she did for wildlife, the amount of other people that she inspired and gave careers to, everything she did for me. But to me, that presence of mind when you reach that level to not be worried about your own travel and your own schedule and busy with getting some rest, and that she actually looked at it, has informed how I operate. And indeed like you say, at this point as strange as it is, people will stop me on the street and say, “Hey, I watch your videos every night with my kids,” or someone will say, “How do I get your job?”
Paul Rosolie
(01:17:27)
“I’ve been watching you for years and I’d love to help conservation.” And so it’s made it so that I follow her example where you stop what you’re doing and you pay attention. Because you don’t know, that might be the next kid that’s out there saving a river, or the next person that makes an innovation that makes it possible to clean rivers, or whatever their dream is. But Jane was in the hope business. She always said it. That not losing hope was key to staying in the fight. And we live at a time when that apathy is a poison peddled by the darkness. They’re trying to make you feel disoriented and apathetic and scared.
Paul Rosolie
(01:18:12)
And fighting back against that and having conviction and passion and fire and hope are the only way that we’re going to fight that. And she understood that, and she spent her whole life spreading it, guarding the flame against the storm, and tipping her candle to others to light them. I mean, that was her whole thing.

Advice for young people

Lex Fridman
(01:18:30)
What advice would you give to young people on how to do that? Those young Pauls sitting there, and your life story’s just incredible in that way. You’ve taken a leap into adventure, into the unknown. What would you recommend they do?
Paul Rosolie
(01:18:49)
I think the thing that I try to communicate to them—and again, my inboxes are filled with people from Finland, Spain, Georgia, saying, “How do I get your job? How do I get out there and do it?”—and it really is just that: you throw yourself headfirst into adventure. You just do it. And I remember hearing people say that, like, “You know, if I can do it, you can do it.” And I remember how hollow that sounds because I’m like, “Yeah, you’re on a talk show or you just wrote a book.” These titans of their industries and innovators saying, “Oh, if I can do it, anybody can do it.” But now that we’re protecting all this rainforest and that I’ve lived with the animals and met the tribes, and it’s becoming this global movement—you know, I didn’t have a PhD.
Paul Rosolie
(01:19:38)
There’s that quote that someone less qualified than you is living your dream life and has your dream job right now, and I am the poster child for that because I failed out of high school and started taking unmatriculated college classes and going to the jungle with my friend JJ and just doing it for the sheer love of it for years, almost a decade, before anything surfaced. And the other thing is there’s not even a path. There was no path ahead of us. There was no, “Okay, you go to school, you get trained in this, and you’re going to become this.” I went there and it was like, “You’re never going to be a conservation biologist because you don’t have the grades. You don’t have a PhD.”
Paul Rosolie
(01:20:16)
“You don’t have family money. You’re not going to be able to protect rainforests.” So I said, “All right, well then, selfishly, I just want to see it.” And then I ended up getting trained by the indigenous people, and like what happens so many times—you could use a restaurant example—where you might start washing dishes, but at least you’re in the restaurant, you know? And then at some point, the manager’s going to need you to help with restocking and so on. And at some point after a few years, you’re going to be helping the new guy, and at some point you might end up being the manager, and at some point you might end up in a position where you’re starting your own restaurant. That’s the only way to do that. You can’t just search it on a computer. You have to go sweat and bleed and do it.
Lex Fridman
(01:20:58)
And that said, especially if you fall in love with the journey that you take on, it is full of difficult periods. I think you said somewhere this just seems to be the nature of it. That there’s going to be pain, there’s going to be suffering along the way. You have a really nice post, that I recommend people watch, about just this. When people ask for advice, that the hardship, the suffering—
Lex Fridman
(01:21:27)
…and I’ve seen how much you care. I’ve seen it in your face when you see a tree being cut down or you see the fires. There’s real pain there in your heart and you have to carry that. And so the post is, “How honest can I be? What do I tell these kids who message me asking how they can do what I do? It’s not David versus Goliath. There’s no sword or sling that can hold back a dragon this big. You’re going against the current of global economic entropy and human apathy. Swimming against the current is tiring, a great way to drown. Every day, we don’t win, we lose, and when we do, worlds burn. The more you know, the more it bleeds. The heartbeats all stop when the flames come through. Constellations of species turn to ghosts, and we’re the only ones saving them.”
Lex Fridman
(01:22:25)
“Cupped our hands around a candle in the howling darkness. And people want to be inspired. Keep that social media going, keep it up. You’re doing great. They want to know we’re winning, and we’ve done a lot of winning, but not right now. We’re getting slaughtered. We’re at that part of the story. We’re almost at the end game. We can think as positively as we want. Thoughts and prayers won’t stop a chainsaw, and the motor that’s carrying us against the current towards the miraculous goal only works when there’s gasoline in it. As soon as that stops, we drown. We can take the warm light from all of those who help and not let it bother us that there are people who could buy a planet’s claim to care. At some point you realize what’s really happening.”
Lex Fridman
(01:23:17)
“As a kid you’d rather be Aragorn. You don’t want to actually carry the ring, not when you learn what it’s going to cost, even if you make it. How can you explain to Sam why you can’t get on the boats? Whatever it takes, whatever it takes. It’s that time of year again. Here come the flames. Whatever it takes, it’s coming.” And people should watch the video that goes along with this. But that speaks to the pain, the difficulty, the challenge, the suffering involved when you’re faced with the possibility of destruction. And that’s the other side of the sword of caring for something deeply.
Paul Rosolie
(01:23:57)
Yeah, we’ve watched a lot of forest burn. We’ve pulled a lot of animals out of the flames. I wrote that at a time where we were just getting hammered. Funding wasn’t coming in. There were miners. It was just months and months out in the jungle alone. It’s a Thom Yorke track that I’d just been listening to again and again, and it was just so low. There was a huge new invasion where they just burned the whole side of the river and it’s never going to come back. And it’s part of the forest that I loved and I knew the animals there and it’s gone. And so we have to live through that on a weekly basis, at least, a day-to-day basis.
Paul Rosolie
(01:24:46)
And when you take on responsibility for something like this, you go to sleep thinking, “Yeah, if we don’t do it then worlds burn. If we don’t save it, then…” Every time you mention the sadness that surrounds a happy moment, well, it’s like, how am I supposed to go to a party and talk with people about anything? How am I supposed to even go to sleep when if we don’t succeed at what we’re trying to do, if we don’t outrace the chainsaws and the roads, then those trees die—those millennium trees—and we’re the only ones out there protecting them. And then when you see that black scorched earth with nothing left, it’s just ashes on the ground…
Paul Rosolie
(01:25:29)
…the cacophony of life is silenced, and it’s just this horrible violent silence. It makes you sick. And so yeah, there’s a lot of weight that comes with that where we’re not theoretically doing something. We’re practically doing it.
Lex Fridman
(01:25:51)
So that’s the other side of the advice to young people.
Paul Rosolie
(01:25:55)
Oh, yeah.
Lex Fridman
(01:25:56)
It’s not gonna be easy.
Paul Rosolie
(01:25:58)
No. I mean, when they say, “How do I get your job?” It’s like, “Well, you don’t want my job. You don’t want the botflies, and you don’t want the dengue, and don’t even inquire what a normal life looks like.” I lived out of a backpack for 20 years. You know how many monkey faces I had to eat because there was no other food? Like, seriously. Just being alone on the boat in the river and how many days the motor didn’t work. And you sleep out there, and you get rained on because you don’t have any protection, and you have some leaves over your face. And then you go home, and everyone’s got a job, and everyone’s got kids, and everyone’s happy.
Paul Rosolie
(01:26:35)
And they’re like, “What are you doing down there?” “I’m trying to save the rainforest.” They’re like, “Sure.” And now we’re at this point where I cared a whole lot for a long time. We’ve had rises, and then we’ve had falls, and we’ve had wins, and then we’ve had failures. And the last few years, we’ve had this rolling success of people finding out about our work and coming in. And we start to go, “Wow, if we protected 130,000 acres, we might actually be able to do this.” There’s that moment in 300 where they show Leonidas and they say, “Even the king allows himself a moment of hope that this might be okay” right before they get slaughtered.
Paul Rosolie
(01:27:13)
And someone very dear to me recently said, “In celebration of where we’ve gotten to, if it happened in any harder of a way, it would have actually killed you. And if it had happened in an easier way, it wouldn’t have been so divine.” And that slapped me in the face because it was like, “Man, it has been so hard, but look where we are.” We might actually do this.

Cartel, Narco-traffickers & assassination attempts

Lex Fridman
(01:27:41)
It just has to be that way. Speaking of which, another complexity in all of this—you write in the afterword of the book about the narco-traffickers that have moved into the river basin. They are not the loggers that we’ve spoken about anymore. They’re growing coca for cocaine, and they’re building airstrips. So tell me how this came to be.
Paul Rosolie
(01:28:14)
Like you said, our whole life on this river, when loggers come in, JJ and I would walk up to them and say, “Hey, what’s up?” and sit down with them and have a beer or share a meal and talk to them and ask who their father was and if we know them, and then hire them. And they’re friendly.
Lex Fridman
(01:28:32)
They are, in a way, brothers. They’re the same.
Paul Rosolie
(01:28:38)
They come from the same people. They’re simple local people. They’re not evil. They’re just people who usually have a kid and a wife, and they’re looking for work. So they work with the chainsaw because that’s what they know. And they work for, you know, $30 a day if that, in very challenging, harsh environments. And so when we see clearings, I would always go with the drone and fly it over. We’d get some intel, and then we’d bring that to the police. Jungle Keepers supports the police at this point because the Peruvian government has a hard time with resources trying to manage Amazonia. And when you’re three days from civilization, getting cops out there is not the easiest thing.
Paul Rosolie
(01:29:21)
So sometimes we’ll lend boats or gasoline or logistical support. And there was a moment in March, several hours upriver from home base. I’m with JJ on the boat, and I fly the drone. There’s this big new clearing, and I lower the drone. A few times, I’ve had people come out and wave at the drone or say, “Get away.” And we’re out in the middle of the river just sort of idling, and I lower the drone. I see these little huts, and we’re saying, “Okay, this is a big clearing.” I’m snapping images. There are visitors who had flown in on the boat with us, and I have my local team, and all of a sudden, people come running out of the houses.
Paul Rosolie
(01:30:04)
And they run straight to their boats. Home is in the downriver direction. They get in their boats and start chasing us, and we start driving at full speed. We have a 60 horsepower; they had a 40. We’re doing this chase now, and our guests, who are potential funders, you know—at one point, the father looked at me and goes, “Hey, this whole running from the Pirates of the Caribbean thing… it’s getting scary. You’re scaring us.” He was like, “When are you going to put the drone down?” And I go, “I’m flying the drone at full speed to keep up with the boat.” And I just crash-landed the drone on the side of the river near a big tree. I just said, “Fuck it. We’ll get it later.”
Paul Rosolie
(01:30:46)
And I was like, “This happens all the time. They get mad, they chase us. It’s no big deal.” And I smiled at him, and JJ’s smiling. He goes, “This is so bad.” And he’s smiling. And JJ looked at me, and the smile fell off him like a mask. He looked at me and was like, “This is not good.” And we kept going upriver and luckily, there was a camp of police that we’ve worked with quite a bit. I went to a friend of mine, and I remember we got off the boat. I shook his hand. He said, “What’s going on?” I said, “Look downriver, there’s a boat tearing upriver towards us.” And he did three things. He got the rest of the guys, they armed up, they got on the boat with guns. They put ski masks on. They got ready for combat. They told us to get down. He also said, “Hey, turn on the sat-link.
Paul Rosolie
(01:31:33)
Call for support back home.” We turned our boat around. And as soon as the narcos—which we didn’t even realize were narcos chasing us; we thought we were looking at loggers—as soon as they saw the guns and they saw us face them, they turned their boat around and went back downriver. So we got escorted downriver, and I remember shaking my friend’s hand and saying, “Thank you for saving us today.” And telling the other guys they did a good job. We’d been brought home safe. Hours later, I said, “Good job. Thank you so much.” And they went back upriver, and then that night, I’m sitting at the station. And I get a phone call from Stefan.
Paul Rosolie
(01:32:14)
And he goes, “Pick up the phone.” I go, “I’m in the middle of a conversation.” He goes, “Pick up the phone.” And my friend whose hand I had just shaken a few hours ago, they went back upriver, and as they were unloading their boat and washing off in the stream, the narcos did a drive-by and shot him straight in the chest with a shotgun. And so all of that enthusiasm—that we’re protecting the biodiversity, this is so great—it’s like that scene in the movie where there’s a montage of success and winning, then gunshot. I could still feel his hand in my hand. I just shook his hand. I said, “No. You’re not…”
Paul Rosolie
(01:32:57)
I said, “Well, is he okay?” He said, “He took a shotgun straight to the chest. He’s dead.” I said, “Okay.” And so I had to go out to dinner and not show the guests anything, and just smile and laugh and talk to them about whatever and keep that in, which felt very difficult to do. And as you said, the threat level escalated and we didn’t know it.
Paul Rosolie
(01:33:25)
The narcos had come in and started realizing that there’s so much wilderness here that they can operate and there’s no police. And then when we flew the drone, they got mad. So we communicated with the police and they said, “Oh yeah, these are narcos.” Now we realize this is part of the serious drug mafia. And then I had gone back, the incident that you’re referring to at the end of the book, I had gone back to New York to speak to donors to try and get this work to continue. You know how it works. We’re at the station and then you go to that little logging town, and then there’s a road.
Paul Rosolie
(01:34:08)
And so our pickup truck had come in on the road and JJ was supposed to come down, get in the truck and drive back to the city. JJ was on the river and went, “I forgot I was supposed to get more stuff at the city. I’ll go tomorrow.” He went back up and he sent the boat driver down and told our driver, Percy, who was waiting with the pickup truck, “JJ’s not coming today. Go back and come back tomorrow.” Percy starts driving down the road and he sees a tree across the road—this is a single-lane road through the jungle. Men with guns come and stick pistols in through the open windows, gun against his head.
Paul Rosolie
(01:34:50)
They pull him out and they go, “Where’s JJ and the mierda gringo volador?” He said, “Where’s that shithead gringo that flew the drone?” And if either of us had been in the car that day, they would have killed us. And we know that because they took his wallet, they took his phone—our driver, Percy. Thank God they didn’t hurt him, but they sent a message to us. They said, “We missed you this time, but we’ll get you next time. We’re going to get you.” And so when JJ called me, he was howling. He just had that adrenaline and that emotion that it almost happened. And so that changed everything.
Paul Rosolie
(01:35:35)
Since then it’s not counting butterflies and taking ecological surveys; it’s that there’s a drug war being fought on our river. And now when these roads come in, we can’t just go out and meet these people anymore because they are actively looking to shoot us. They know our names. The police intercepted a phone from someone they arrested, and in the WhatsApp chat, it said, “If you see JJ or the gringo, anyone in our network, please kill them. You’ll be rewarded.” So we both have a hit out on us and life on the river has changed. We can’t…
Paul Rosolie
(01:36:16)
You know, I can’t just go out walking around and swimming and driving my boat. You have to be looking over your shoulder at all times. You can get as trained as you want with a pistol and sleep with it under your pillow, but the way these people work, they’ll catch you when you’re least expecting it. They’ll wait till you’re at a cafe in town. They’ll wait till your motor doesn’t work on the side of the river. It’ll just be a quick one and they’ll go. And so that feeling on top of the weight of protecting the ecosystem and the animals, it’s like now we’re actively being hunted when we’re there.
Lex Fridman
(01:36:54)
And this is very directed at you and JJ? So they really don’t care about the others. They understand. Are you afraid? What’s it been like living with the real fear of being murdered at any moment?
Paul Rosolie
(01:37:20)
I wish I could say I handled it better than I’ve been handling it. I wonder how people in war zones do it. I wonder how some of my soldier friends that I have immense respect for did it when they were deployed. Because for me, once this happened, with every phone call now I think, “Did something happen to JJ?”
Paul Rosolie
(01:37:40)
Every time I go to sleep, my dreams are that I’m being shot. It really threw me. It really affected me. When JJ called me, he was just shouting. I don’t even remember what he was saying. He was just shouting, “They almost got us. They almost got us.” He was so terrified and angry. There was a day not that long ago that I was swimming in the river, right in front of the stairs at the station, and a boat came around the bend. I remember thinking, “Do I run? Do I go underwater? Do I hide? What the hell do I do?” I didn’t have a gun near me.
Paul Rosolie
(01:38:24)
The security people were up the stairs. It’s like, you go, “Holy shit.” And it’s not the danger of, you know, if I jump on an anaconda, it might kill me, or if I climb this, I might fall. These are people who want to kill you. And on top of it, when you see what your friend looks like after three days of floating in a river—what a body looks like of a person you used to know—that’s very viscerally terrifying. There’s the tragedy of that person who lost his life, who was younger than I was. He was a kid in his 20s. It’s very hard to do anything because… I mean, right now, my hands are sweating. It just affects me.
Paul Rosolie
(01:39:12)
Even in the daylight, if I can go, “You know, it’s fine. This is part of the thing. This is the adventure, people deal with this all over the world.” You can talk yourself tough, and then in those quiet moments, that 4:00 AM thing, you wake up and you go, “Fuck. Why am I sweating? Why did I just have those dreams? Why is my heart racing?” It sinks its way into your subconscious, and it’s just not what we signed up for. We wanted to just protect this beautiful place and this is a whole new threat. We’re not trained for this.
Paul Rosolie
(01:39:45)
We’re not police or military and we’ve now seen violence on a scale that we were very unprepared for. Just two days ago, I was on my way to you and my phone rang at nine o’clock at night and it was JJ. My heart was jackhammering. I had to pull over because I was going, “What news now? Did we lose another bunch of acres? Is it a new road? Did somebody die?” It really scatters you.
Lex Fridman
(01:40:20)
In some sense, it’s a twist that you didn’t ask for and it doesn’t necessarily have anything to do with the fight you’re fighting, which is protecting the rainforest. But because of it being pristine and quiet and away from civilization, it also becomes a place where you can have airstrips. It becomes lawless in a certain way because it’s so far away from civilization.
Paul Rosolie
(01:40:46)
Yeah. It’s the only place that they can operate with impunity. There’s no police out there. And so they saw us helping the police and they went, “Cut the head off the snake.”
Paul Rosolie
(01:40:57)
And that… you know, Chico Mendes, Dorothy Stang—the list of environmental defenders that are assassinated in the Amazon every year is huge. There’s endless examples of it. It’s staggering. I forget the exact numbers, but every year we lose people. There’ll be local leaders who are trying to stop an oil company or a drug cartel, and they just shoot them because they know that one person who’s able to rally that support, who has that voice—if you just shoot them, usually it’ll end the thing and then they can go back to doing whatever the hell they want. And so right now, we’re working very closely with the Peruvian government.
Paul Rosolie
(01:41:39)
People assume that a Latin American government is automatically corrupt, but what we found is that these are really good people that want to help their citizens. And the police have been working very hard to stop the narcos, to protect the local indigenous people because with the narcos comes human trafficking. With a team of male narcos out in the woods making drugs, they want prostitutes. And how do they get prostitutes? They go steal girls from indigenous communities that don’t know any better. And then there’s reports that the narcos have made contact with the uncontacted tribes. Of course, they’re going to shoot machine guns at them. They’re not going to have a shotgun where it’s a fair fight.
Paul Rosolie
(01:42:24)
They’re going to mow them down and the uncontacted tribes are going to have no idea. That’s why I posted a video of me in the rain saying, “This is endgame,” because there was a new road coming off the north of our territory above the ancient forest. They had jumped over because we stopped it at the ancient forest. They’ve gone above the ancient forest. Now they’re trying to cut down to a new area. And so it looks like this. …Trans-Amazonian… Stefan made this map, of course. But you see the area that we’re trying to protect—loosely, so that we don’t give away anything—the area that we are protecting. So, the light green is the 130,000 acres—
Paul Rosolie
(01:43:09)
…and then this metastasizing network of roads just reaching out and trying to get in. And so they’re trying to come in from the north where that arrow is, they’re trying to come down. And so the police are fighting them along this— …and it’s a full-on drug war right now. Stopping that, securing this northern boundary… and again, just the power of what we have. When I posted this, I asked Stefan to show people the road and where it’s going to go. We posted this video and said, “We have to protect this 100,000 acres right now.” And all up here is uncontacted tribe territory. And just from that one post, we got $150,000 in like 48 hours and we bought this concession. We stopped that road. But now they’re up here—
Paul Rosolie
(01:43:55)
…and they’re trying to come down. This is the thing, again, you said it’s great. Yes, you get to be an adventurer and you get to live in the jungle, sure. But it’s like there’s this Mission: Impossible thing where you might get lucky enough to pull off your psychotic mission. You know, jump your motorcycle off the train and parachute down and stop the bomb before it goes off. Great. How many of those do you get? And we’re having to do it every month. These amazing people that are supporting the rangers allow us to patrol and protect this because once we have this land protected, the interesting thing is that the police can go into any of the light green areas. If anybody’s there, they just arrest them.
Paul Rosolie
(01:44:37)
They’re on Jungle Keepers’ land, they’re out. And eventually, that land will become national park if we’re successful. The problem with the land that’s not protected is that it’s a gray area. It’s the middle of the Amazon; are they allowed to be here? Do they really have cocaine? Because they’ll plant papaya for acres and a little bit of cocaine behind it. They’re sneaky. And so they have to build a case and it takes time, and then the road comes in… and in that time, then they’ll knock off a police officer. If we were just able to get this tomorrow, the whole problem gets solved. We could give the police two more boats, and then they could do all the patrolling they need.
Lex Fridman
(01:45:17)
So the mission is clear.
Paul Rosolie
(01:45:18)
The mission is very clear, and the problem is that right now we’ve been playing defense and sustaining losses. Either we need to inspire enough people that the donor program goes through the roof, and instead of having several thousand donors we have 50,000 donors and we raise—we need $20 million to save the rest of the corridor. We’d raise $20 million overnight with enough people. Or we need one of these people who has the resources to come in like Batman and just go, “I want the park named after me and I’m just going to give you the $20 million.” And then we do it tomorrow, and then we make a documentary about how we saved a river and the tribe and the monkeys. But right now, we’re…
Paul Rosolie
(01:46:03)
Yeah, right now we’re begging on the side of the road for enough change to buy bullets so that we can stay alive.
Lex Fridman
(01:46:12)
So these narcos, they’re… there’s a kind of distributed network where a bunch of them are pretending to be farmers. They’re holding onto the land and then maybe they start planting cocaine on the land— …slowly, and they build the airstrips. Are they trying to stay under the canopy— …with the airstrip?
Paul Rosolie
(01:46:33)
It’s brilliant. First, what they do is they subsidize the poorest people and they say, “Go up this river, turn left at the tree and just start there.” And they’re like, “Here’s a few grand.” And these people are like, “I never had a few grand before.” They’re like, “Buy gasoline. Here’s a chainsaw. Go clear some land.” They send these people up there, and then when they show up a year later and these people have made an illegal farm out in the jungle, they go, “Hey, we need a safe house. Remember that time we gave you the gasoline and now you live here? You’re going to work for us now.” And so they’re kind of a friend of the people like that, and they have safe houses all over the jungle. And then when the bosses come to collect what they’re growing out there…
Paul Rosolie
(01:47:14)
I mean, the police busted a narco operation that was in the middle of the jungle. I mean, you know, hiking to the ancient forest— …just days into the jungle. These people are going on foot with sacks and stuff. And the way they do their airstrips is you think the canopy of the rainforest is 150 feet tall, 160 feet tall. And if you clear the interior of the landing strip, the trees are still meeting overhead. And so you can’t fly over and see down—
Paul Rosolie
(01:47:42)
…which is the same reason we didn’t know about the road that was going to the ancient forest, because overhead the trees are meeting, so you’re not gonna see it on satellite and you’re not gonna see it from a plane. And these bush pilots fly in and they’ll just duck in under the canopy, land their plane, load up, and then they fly out. I mean, expert pilots.
Lex Fridman
(01:48:01)
So it’s impossible to detect.
Paul Rosolie
(01:48:03)
It’s almost impossible to detect. We’re working with people now. You know, it’s this arms race. There are drone programs. I talked to someone that has a different type of drone, a 16-foot drone that uses the thermals to climb up and has solar panels on the wings and flies for two weeks at a time. It’s like a glider— …that recharges itself. And it’ll keep constant imagery so we’ll get almost up-to-the-moment data on disturbances in the canopy. And it’s like, well, that’ll be a first-hand alert system, but then we gotta get the police out there which, as you know, is a two-day expedition by boat, and it’s the only way. And so the local police force there may be dedicated, but putting people on a multi-day expedition to go get shot at in the jungle is nobody’s idea of a good time.
Lex Fridman
(01:48:49)
You understand, have you researched into this whole other world of drug trafficking, cocaine trafficking? How big is the operation here, looking at Perplexity— …multi-thousand ton, multi-billion dollar global industry?
Paul Rosolie
(01:49:05)
I mean, globally it’s a monster.
Lex Fridman
(01:49:07)
Colombia, Peru, Bolivia. And they move north and east through the Americas, the Caribbean, the Atlantic to reach major consumer markets. Yeah, this is a machine fueled by a lot of money and a lot of brutality. The number of cocaine users worldwide is about 25 million people.
Paul Rosolie
(01:49:32)
Users.

Climbing the giant tree

Lex Fridman
(01:49:33)
Users. So there’s a market. And when there’s a market, you’re going to find a way. Quick pause, bathroom break. All right, and we’re back. And me as somebody who is afraid of heights, and I’ve had a chance to interact with you a bunch—you’re in some sense fearless and I’ve watched you climb a lot of trees. You’ve helped me climb a tree. And there’s this wonderful part of the book where you talk about finding the tallest tree in the forest you knew at the time, and that was something that you passed and thought was impossible to climb. And you talk about climbing it. Take us through the experience of that. And that leads you to seeing the Mist River in the rainforest as the sun rises.
Lex Fridman
(01:50:24)
I was wondering if you could talk through the story of that, both for at least for me, but even for you at that time, the terrifying process of climbing a tree like that for the first time with J.J. at the bottom cheering you on, and what it felt like to see the Mist River.
Paul Rosolie
(01:50:44)
That tree, you’ve met that tree. She’s a good one. Her base is at least as big as this room, and she’s probably about 160-something feet tall. And so when you’re looking at these giant buttress roots going up, which I’d been doing for 18 years at that point, I always said, “Man, if I could just climb it.” And I’d never had the rope skills, you know, and I’d developed as a rock climber. I was working on strength, and I trained for it. It’s like most things. You can’t just do it. I’d gone and climbed up 30 feet and gone, “No way.” The trunk of the tree goes vertical for about 70 feet before branches even come out, so there’s just this one big vine. And J.J.
Paul Rosolie
(01:51:29)
and I did it at, I want to say like 4:00 in the morning, like really early. The howler monkeys had just started. And you start climbing with the rope up this one vine, and you have to… it’s not a technical climb. It’s a strength climb. You have to gorilla up this vine, and it’s all back strength. And so I did it no shirt, no shoes, straight up, and J.J. had the belay device. And so every like 30 feet, I would put in a piece of webbing and a carabiner. So then you go up another 30 feet and you put a piece of webbing and a carabiner, and you don’t know what you’re gonna find. And you’re going up in the dark.
Lex Fridman
(01:52:05)
And so when you say it’s a lot of strength that’s involved, there’s very few places to rest. You’re essentially just lifting the whole time. So it’s extremely exhausting.
Paul Rosolie
(01:52:13)
Extremely exhausting. Like, I really trained for a long time, and there is no rest. The only rest you get hurts. You’ll have to cling to the tree and your feet are smeared against the bark and you’re holding on with your toes, if anything. And if you fall—you know, if you’re climbing up, and this is basically trad climbing—if you’re climbing up and you put a safety, which is a piece of rope with a carabiner, and you put my rope through that, again, as you’re doing that, it’s dangerous ’cause if you fall, you fall. Then I do that and then you climb up. Right before you put the next one, you’re gonna fall double. So if you climb 30 feet, you fall 60 feet.
Paul Rosolie
(01:52:48)
And so your head’s gonna smack against the side of the tree. As you’re climbing, you don’t know if you’re gonna reach into a wasp nest or if there’s gonna be a venomous snake.
Lex Fridman
(01:52:56)
And there’s, by the way, in those trees, a lot of those.
Paul Rosolie
(01:52:58)
A lot of those. And it took me over an hour just to get to the branches the first time, and it’s just, again, full exertion, everything I had. And then you get to the branches above you, and each of the branches is the size of a mature oak tree. They’re just these huge branches, thick as a minivan, and you’re climbing up this straight tree that’s like the World Trade Center. It’s just huge and then I had to traverse around the tree on vines, and then finally I get up into the crown of this tree. And then from there, I called down to J.J. and I just see this little speck of light 85 feet below me. And then I climbed up to about 120 feet and I sat there.
Lex Fridman
(01:53:39)
And you’re doing all this still in darkness.
Paul Rosolie
(01:53:41)
We’re doing all this in the pre-dawn light. And so when I got up there, now the howler monkeys are going and the jungle’s starting to vibrate and you can hear the first macaws starting to chirp and everything’s starting to turn on. And in the east, the sun is coming over the jungle. When the first rays get line of sight to the canopy, it starts lifting the mist off the canopy. All of that moisture starts coming up, and I’m sitting on this branch at 100-something feet above the ground with dark jungle below me, and all of a sudden I see the river. I see the Mist River I’d always heard about.
Paul Rosolie
(01:54:16)
They say that there’s a river above the Amazon, an invisible river that has more moisture and that more water is flowing above the Amazon than is flowing in the Amazon. And I’d heard this my whole life and you think, “Okay, the fact that there’s a molten core of the Earth or that black holes theoretically exist.” It’s just like one of those things you’re never gonna see. And in this moment on this tree, sweating and just ripped apart and bleeding, I was sitting up there and I saw the Mist River and it was flowing over the canopy in the golden rays of the morning and the macaws start taking flight and there was monkeys below me that were looking up. And you could tell they were confused.
Paul Rosolie
(01:54:53)
They were looking at me going, “What is that?” And I just had this absolutely incredible moment. It felt like you’re seeing God. I wanted to share it with everyone. I felt guilty afterwards for having had a moment like that. But it felt like I had taken this insane risk, risked falling out of the tree or getting strung up on the ropes, and of course it’s just me and J.J., so if something goes wrong, no one’s gonna help you. And being out there on that branch felt suicidal ’cause even then, if you fall, it’s a giant swing back to the tree.
Paul Rosolie
(01:55:31)
But the beauty that I saw up there was so intense that it sucked the air right out of my lungs. I had tears in my eyes and I’m just watching this incredible process flow over the Earth, this legendary thing that I’d heard about, that scientists described, and now I’m seeing it with my own eyes. It felt like the gift of the tree.
Lex Fridman
(01:55:56)
And you write, “Now, in the branches of the greatest tree in the jungle, I watched as the Mist River caught the morning rays, illuminating golden currents, swirling as it rushed over the canopy like a stream from heaven. In the troughs and basins in lower areas, the river was deep blue. But then, as it flowed up and over the taller trees, slow rapids washing over the canopy, the Mist River became ignited, electrified in the gold magnificence of the sunlight. Scores of birds flew up, in and out of the churning currents. The life and breath of the Amazon was flowing from north to south along the basins of the Las Piedras over the jungle. My God. My God. I thought of everyone I loved, of every creature contained in the leafy distance.
Lex Fridman
(01:56:45)
The jungle itself was like a great being, a monstrous leviathan of warm green might. I wanted to call down to JJ and tell him to find a way up. I wanted my mother to see it. I wanted the world to see it. The light filled my eyes, and I found myself wiping away tears.” You know, I should take the small tangent of saying the obvious, but the thing that needs to be said is you’re a fucking great writer.
Paul Rosolie
(01:57:15)
Thank you. I mean, come on, I’m just describing what happened, but…
Lex Fridman
(01:57:20)
All right. You mentioned macaws as part of the process of the jungle waking up. I read that when you first started out in the jungle, that was kind of your job, studying those. And me as a fan of monogamy and birds… So macaws are beautiful, but they’re also monogamous creatures. They scream at each other quite loudly. What are some interesting things about them? Among which, by the way, you write how important the ironwoods are to— …their wellbeing, to their life.
Paul Rosolie
(01:57:53)
Yeah, I mean, when I went down there, that’s—like I said, for young people, if you wanna get out there, go do it. I agreed to stay at the station and do like six hours of macaw research every morning. So you’d wake up before dawn and go sit and just stare at the side of the river. And the macaws would show up—
Paul Rosolie
(01:58:11)
…and like you said, they all scream and bicker at each other. It’s just how they talk. It’s very, very loud and very, very harsh. But they do love each other. You can actually hear when you walk through the forest, I know what the sound of macaws giving affection is. They make a certain kind of sound when they’re preening each other’s feathers and taking care of each other and just nuzzling. And then there’s a different call altogether when they’re yelling at other macaws or saying, “Let’s go.” And you start to learn macaw language.
Lex Fridman
(01:58:42)
What have you learned about relationships and successful marriage from listening to macaws screaming at each other in those nuanced ways you’re talking about?
Paul Rosolie
(01:58:52)
Well, I guess…
Lex Fridman
(01:58:54)
Never mind, you can skip that question.
Paul Rosolie
(01:58:56)
It’s interesting to see two animals sticking by each other’s side while they’re raising a chick. And at the bottom of the stairs at the station, there is a macaw nest in an ironwood. The relationship that you mentioned is that in the jungle, there’s a limited amount of macaw real estate. And those are all ancient ironwood trees, at least 500 years or more. So they have to be thick. This, again, car thickness or bigger. And when a branch falls off, it creates a hollow and the macaws use that to reproduce. And because there’s only so many nest sites in the forest, only about 17 to 20% of the macaw population reproduces in a given year. So they have a slow replacement rate. And macaws are one of the things that people come to the jungle to see.
Paul Rosolie
(01:59:42)
And so along with gold mining and logging and all these extractive things, in our region, ecotourism has been great. It’s given the local people jobs as guides, cooks, chefs, and carpenters. And so macaws are a huge part of that because it’s one of the last places where you can see these flying rainbows over the canopy. Or when you’re on a branch from one of these trees and the macaws fly under you. And again, they’ll fly by; you just hear the wind in their feathers. And they just look at you over their shoulder, like, “What?” and just keep going. And then they’ll join up with other macaws and they fly across the horizon.
Paul Rosolie
(02:00:24)
And it gives you this sense like you’re seeing something from the dinosaur times. It just looks like wild jungle and there’s nothing human in sight. And there’s just this savage canopy to the horizon and just these beautiful birds flying over. They’re just magical.

Giant anaconda

Lex Fridman
(02:00:42)
You have this Instagram post with an anaconda around your neck. There’s a million questions. Maybe you can talk about that experience, but also, how did you not die?
Paul Rosolie
(02:00:53)
So as you know, we’ve been studying the habits of Eunectes murinus for quite a while. The lowland green anaconda is the largest, heaviest snake on earth. And I’ve been practicing a lot for a long time, and this is the biggest one we’ve ever physically caught. This was just under 20 feet—it was 19 feet something. And you can see she’s in the middle of shedding. And the other interesting thing with her is that she had blue eyes because her scales over her eyes turn blue right before it comes off of her head. And so I’ve never caught a blue-eyed anaconda before. But if you look at the size of my head and the size of my hands, you start to imagine that thing’s head is bigger than a Great Dane’s.
Paul Rosolie
(02:01:41)
It’s huge. And so the power on that—when we tried to lift her to measure her, we wanted to bring her up out of the stream and get her over to the side so we could straighten her out and measure her. We’re just trying to take some simple data points and then release her. And she, at one point, she just decided to flex her body, and you just see 10 people fly this way, and then she’s flexing the other way and 10 people fly this way. And every time that mouth would open, she would just reach back and she’d just be like, “Just let me do it.” And you know that if she gets purchase—
Paul Rosolie
(02:02:11)
Once they get purchase, they wrap you so quick and they’ll just crush the life out of you like you’re a bag of chips. And if you’ve ever seen a mouse in a mousetrap, when the mousetrap goes down and the eyes come out? Anyone that’s owned snakes and fed them mice knows this, that sometimes if they catch it right, the guts will either come out the back end or the front end. So I’d imagine that the same thing will happen with a snake that’s that big. That’s bigger than I am around.
Lex Fridman
(02:02:39)
So they have a process. When you say purchase, they want a bite just to hold and then they— It’s good. So…
Paul Rosolie
(02:02:44)
But again, all she wants is to be let go. In her defense, this massive snake—we named her Millie for the data entry—she just wanted to go on her way down the stream. The comments on this are hysterical. People are like, “This is the worst example of white people shit I’ve ever seen.” I mean, Snoop Dogg shared it. So one guy goes, “Congratulations, you’ve touched enough grass. Go back inside.”
Lex Fridman
(02:03:14)
Yeah, somebody said, “Interesting use of free will.” And I saw Killpopper007 commented— And maybe you can tell me if this is correct. “Anacondas are ambush predators. If you approach them, they will usually try to flee and will not register you as food. There’s other reasons too.” This is in response to how Paul possibly did not die from this. “There’s other reasons too, but this is the main reason. They’re pretty much apex at that size, so their fear isn’t as prominent. He was calm, so the snake was calm. It’s insane to do—” “…and still risky, but he might actually be the most qualified anaconda handler on Planet Earth. Paul is one interesting cat.” Hugging emoji. Is that accurate?
Paul Rosolie
(02:04:05)
Yes. At that size, they’re apex, so they’re really not thinking about defense. They’re just like, “Get off me.” If I was to hurt her, like if I was to touch you in the arm with a needle, you’d react. If I was to do anything that hurt her, which I’m not doing, she would turn around and bite me to say, “Go away.” But they also don’t want to bite because their recurved teeth make it very difficult to detach. And also, they’re putting their head at the source of the danger. It’s not a good calculation. And so these giants—and I’ve had the privilege of interacting with four or five anacondas in the 20 to 26-foot range—all of them have been very Leviathan-like. They just don’t want to move. They just want to keep going. He’s 100% right on all of that stuff.
Paul Rosolie
(02:04:54)
I’ve caught 90-something anacondas at this point, and many of them have been massive. Then there’s the one that me and JJ didn’t get at the Floating Forest because it was bigger than—
Paul Rosolie
(02:05:03)
…bigger than we could tackle, bigger than my hands. I couldn’t touch fingers. But every single one of them has chosen flight over fight. Only the little babies and the smaller males get snappy. They’ll come back at you like a normal snake, and if you grab their tail, they’ll try to just bite you and then go. But these big females, you know, they’re like dragons. They’re like these big, legendary things that live in swamps, and the only reason they’ve gotten that big is because they have a reliable prey source in a secluded place away from humans, and they’ve been there for decades just pulling things down to hell and eating them. And the other thing—I mean, look, I have a team with me. You know? So…
Lex Fridman
(02:05:43)
So there’s people holding the—
Paul Rosolie
(02:05:44)
Yeah. I mean, let’s be real here. I would never do this. If I was out in the jungle by myself at night, doing this would be suicide, 100%, because for every second there that I’m going, “Oh, I’m in the water and she’s over my neck,” if JJ wasn’t there to jump in and unwrap her— …then I die. 100%.
Lex Fridman
(02:06:03)
Because she’s continuously wrapping.
Paul Rosolie
(02:06:06)
She’s continuously on her back saying, “Come in here—”
Lex Fridman
(02:06:12)
Come in here.
Paul Rosolie
(02:06:12)
“…and let me arm bar you. Let me squeeze the guts out of you.” She’s just going, “Let it happen.”
Lex Fridman
(02:06:17)
And moving slowly.
Paul Rosolie
(02:06:18)
Moving really slow.
Lex Fridman
(02:06:20)
Conjuring.
Paul Rosolie
(02:06:20)
With that assurance of power where she doesn’t need to try and tap you quick. She’s going to get you eventually.
Lex Fridman
(02:06:25)
Although, to push back on something you just said, having known you long enough, let’s be honest. You’re saying, “I wouldn’t be insane enough to do it.” I think you would be. I mean, there’s a line of insanity, and you, my friend, walk that line masterfully so far. I think there’s a sense when you’re able to sense the animal, whether it’s crocodiles, caiman, or anacondas, and maybe radiate a sense of calm. I’ve seen you be able to go into some dangerous, from my perspective, situations, and make it seem like it’s not dangerous at all. And maybe when you become one with the ecosystem, you’re not a threat to it, and maybe that’s why you can survive? I haven’t been able to make sense of it, really.
Paul Rosolie
(02:07:21)
Look, I would say this. In the case of elephants, if we ever end up in Africa together, I can get incredibly close to elephants because I’ve spent enough time with them where, so far, it’s always been a mock charge. And you can be one with the elephant and learn their language enough that you respect their boundaries and you also show them that this better be serious because you’re either going to have to kill me, or you’re going to have to just turn around and go back to eating. And you can have that exchange with them. And with smaller snakes, I’ll be careful and whatever else.
Paul Rosolie
(02:07:59)
I can tell you with this that when you have both of your hands around an anaconda’s neck, I mean, I’ve been known to surprise myself with the decisions I make, but this alone would lead to death, 100%. It’s like laying down in front of an 18-wheeler with it in neutral. It’s going to roll over you. This is going to turn into anaconda handcuffs with this thickness, and then that is going to wrap you, and then six more of those are going to go around your body and you will get squeezed and you will turn into goop. And she will not… just like that guy said, she probably is in defense mode and not food mode, so she’ll probably just neutralize the threat and then go back to sleep.
Lex Fridman
(02:08:47)
I have to ask you about the floating forest. And you write about Santiago, once again, beautifully in the book, of the time when he told you the stories and when your mind and eyes were still fresh and maybe skeptical and more leaning towards the Western world point of view versus the jungle point of view. “Santiago’s eyes were glowing in the darkness. He watched the orange ember spark upward to join the celestial river of stars that arched across the night sky as if the memories were written there. He squinted, his face as wrinkled and weathered as an old map of the world.”
Lex Fridman
(02:09:25)
“Vast experience whispered in the firelight, as ephemeral as the breath that spoke the words, but powerful enough to latch on and sink down into some deep part of me.” This is Pico saying, “Papa, tell me about the anaconda on the blackwater stream.” And he tells a story of that. And he talks about it being big and having horns.
Lex Fridman
(02:09:50)
And you write once again masterfully about you at that time having doubts. It sounds like bullshit, but now more and more of the things you’ve seen of the jungle and the things you sense you have not seen yet, all of those stories seem to be true. The one he was referring to may be 36 feet long, this big. He says that, “The floating forest is the place you need to go, Gringo, if you want to be liberated of your doubts and skepticism.” So tell me about the anacondas you’ve encountered in the floating forest.
Paul Rosolie
(02:10:32)
Well, the thing he’s describing there is that he’s saying they found an anaconda that had horns. And, in that moment, we were all hanging out by the side of the river and I said, “That’s enough.” I stood up. I was like, “Come on, there’s no anaconda that has horns.” If I’ve learned anything in 20 years of living with the indigenous people in the Amazon, it is that they’re not wrong. You know, if they say there’s a tribe of naked people with arrows out there, they’re right. And they know what an anaconda looks like. So if he says he saw an anaconda with horns, he saw something that ain’t a normal anaconda. A smaller version of this played out recently where one of my…
Paul Rosolie
(02:11:12)
One of the people that works at the treehouse, he came and he said, “I found a snake and it was in the water tank. And it had green spikes on it.” And I said, “There’s no snake that has green spikes. Congratulations, you’re an idiot.” I made fun of him.
Paul Rosolie
(02:11:29)
And I said, “I know all the snake species that are here. None of them have spikes.” He said, “No, it had long spikes. The snake is this big and had spikes this long on it.” I said, “There’s no snake with spikes.” Until finally he came and he got me in the night and he goes, “The snake with spikes is there.” And I said, “Well, I’ll get out of bed for that. Let’s go.” I said, “And I guarantee it’s not going to be there when we get there.” And we got to the water tank and I shined my flashlight down and sure as shit, there’s a snake in there and it’s got thousands of green spikes coming off of it.
Paul Rosolie
(02:12:06)
And the spikes are coming completely perpendicular out from its body. For a second, I really was having this out-of-body experience. And then the snake saw us, got scared and swam, and all of the spikes collapsed onto its body and became smooth. And then I realized the snake had been living in the stagnant water for a while and developed algae that was growing off of it. So when it was sitting still, all the algae would settle out. And so if you look straight down on it, it’s a water snake that has algae growing on it. And so it does look like a snake with spikes. He’s not wrong. It was. It was a water snake. It was some sort of Helicops. But there’s always an answer like that.
Lex Fridman
(02:12:44)
Amazing, yeah.
Paul Rosolie
(02:12:45)
Where they’re not wrong. So when they tell you something like, “There’s an anaconda with horns,” and multiple people have seen it— …you make an expedition there. You know, like if somebody said there’s giant ground sloths in this one valley, I wouldn’t be like, “They’re extinct.” I’d be like, “Where?” You know, you start to listen. I mean, after the tribe walked out of the forest… You could tell me, that day, if a Tyrannosaurus rex walked out behind them, I would’ve been like, “Makes sense.”
Lex Fridman
(02:13:10)
Let’s go to the floating forest. Do you ever think about what creatures are in there? I just had a conversation with Michael Levin at Tufts University. He’s this biologist- … who creates biological life forms in the lab, but he also studies all kinds of weird, what he calls unconventional intelligences on earth. And he speaks about that from a perspective of just understanding the incredible intricacies and weirdnesses of biological systems. So, you know, the soup of organisms that’s there- …in the floating forest is probably incredible. You ever think about what kind of weirdness is there?
Paul Rosolie
(02:13:53)
Yeah. I mean, along with giant snakes are animals that are existing in an ecosystem that’s isolated, right? And so the tepuis… You know, like in the movie Up, those Venezuelan cliff jungles where it’s like the straight… Like Angel Falls? And up there you have this allopatric speciation occurring where these isolated communities are departing from whatever’s down there.
Paul Rosolie
(02:14:18)
So on the floating forest, you have this very unique ecosystem where there’s animals living on grassy islands, there’s animals living in the tops of palm trees. And so in that nightmare soup that exists beneath the rafts, there’s probably insects and… I mean, I’ve seen lizards there that we have been unable to identify. There’s things there in the… I mean, I can’t imagine. I don’t think the decay is going to happen. There’s probably not a lot of oxygen in that water. And so, I brought a few scientists there and they’ve all just been like, “This is…”
Lex Fridman
(02:14:52)
Yeah. How do you even…
Paul Rosolie
(02:14:53)
Yeah. How did this form? We’ve brought hydrologists there and they’re like, “How the hell did this thing form?” And then, trying to study what creatures live under that is amazing.
Lex Fridman
(02:15:04)
But the big anacondas, it’s interesting because they truly are the apex, so they’re unbothered. They’re not really using- …their power for anything.
Paul Rosolie
(02:15:14)
No, and I’m sure if I bit her, she’d turn around and kill me.
Lex Fridman
(02:15:17)
Yeah, but in a bored kind of way. Like it wouldn’t even… It would just slowly kill you.
Paul Rosolie
(02:15:22)
But I wonder if once she killed you, if she’d be like-
Lex Fridman
(02:15:27)
Just take a bite?
Paul Rosolie
(02:15:28)
I mean, if she’d… I mean, bite? They swallow, right? So- …once you collapse your shoulders, it’s like if you killed a perfectly good hamburger and it was like in your hands dead, you’d be like- “Maybe I’ll try it.”
Lex Fridman
(02:15:41)
I mean, they need the calories.
Paul Rosolie
(02:15:43)
Yeah, and then take a six-month nap.
Lex Fridman
(02:15:47)
Yeah. They’re truly incredible, majestic creatures though.
Paul Rosolie
(02:15:51)
Yeah. I love this picture. Just look at the size. I want you one day to feel them, because the wild ones are not like the captive ones. The captive ones are soft from sitting in a cage their whole lives. These guys have been flexing every day. So it’s like you’re hitting steel cables. It’s just wild.
Lex Fridman
(02:16:12)
And even if it’s just being chill, you can probably get a hint of the power it’s capable of, right?
Paul Rosolie
(02:16:18)
The one good thing about those really big ones is that when they do strike, it’s like being in a fight with a big guy. That haymaker comes from way back here and you’re like- …”Oh, good. I’m going to duck.” And you get down, because they open their mouth and they start accelerating. And it’s pretty easy to either get out of the way or get it right before it hits you in the face— … usually. Again, if you ever mess that up, just like the haymaker from the big guy, it’s over.
Lex Fridman
(02:16:49)
Your level of knowledge and comfort with snakes is incredible. I think they—
Paul Rosolie
(02:16:53)
Play with them a lot.
Lex Fridman
(02:16:53)
… sense that. I mean, I’ve just seen you with snakes and they must sense in you the camaraderie. I don’t know. You have a way of speaking to animals and about animals like there’s zero danger. Well, from my outsider perspective, it seems like a lot of them are full of danger if you’re not communicating to them correctly.
Paul Rosolie
(02:17:18)
With snakes, I think it’s more of a “the highway is dangerous, but you can drive safely” thing. I know what I’m doing, so I’m working with a snake that can’t envenomate me and is small, so I can allow it to freak out. And then if I can get it into my hands and warm it up and it goes, “Ooh, it’s nice in here.” And of course, like you said, I’m not scared and so the snake is going… They are very sensitive to that and so he’s going, “Okay, this isn’t so bad.” You can chill him out. But I don’t think snakes have any camaraderie. I think that whales, monkeys, elephants—I think that they can sense. They can say, “Okay, this person’s trying to help me get out of this net. I’m gonna relax and not kill them.” I think you have that dynamic then very much so.

Rescuing a spider monkey

Lex Fridman
(02:18:00)
Speaking of somebody that does have camaraderie, there’s this incredible video on your Instagram that people should go watch where this spider monkey was drowning and you jumped in to rescue her.
Paul Rosolie
(02:18:12)
Sure. So we’re coming downriver. It’s seven o’clock in the morning so I’m cold—I’m always cold. I’m sitting on the boat and JJ’s like, “Look, spider monkey.” And I go, “Great, spider monkey in the river,” like that’s normal. And JJ’s like, “No, she’s having trouble.” And I was like, “Why is she having trouble? They swim all the time.”
Paul Rosolie
(02:18:30)
And he goes, “No, she—” he goes, “You should help.” And so the boat comes around. Then sure enough, what you can’t see in the video is that the river was so full that there’s these little whirlpools and currents and she was trying to get to the side. And again, all the animal righteous people are very quick to be like, “Let nature take its course, you know? Let the monkey drown,” or, “She doesn’t need help. You’re interfering.” Sure, sure, sure. If you were actually there, you would know something, and that is that she did need help and she was drowning. Her head kept going under. And so I saw that JJ was right.
Paul Rosolie
(02:18:59)
And so we pull around, I took off whatever I could in the moment, jumped in with the paddle because now here again, I trust monkeys but I don’t want her to bite me. She is gonna be scared so I thought, “There’s two ways I can do this. I can grab her by the neck and ‘animal control’ her—grab her by the neck and the tail and take her out of the river, which is gonna be scary for her.” Instead I thought, “I know spider monkeys so well. I’ve raised so many of them.” And when you raise them, they curl up to your neck and they’ll…
Paul Rosolie
(02:19:31)
Like if you have an orphan spider monkey whose mother got shot by poachers and you’re taking care of her before we bring them to the animal rehabilitation experts, they’ll curl up on your neck and they’ll just talk to you in your ear. And so I feel like I know a little bit of spider monkey—broken spider monkey—and so I pull up next to her and I give her the paddle. And we’re in this rushing river and we’re moving at 10 miles an hour downstream, and I tried to give her the paddle and she smacks it away. She was like—
Paul Rosolie
(02:20:00)
… “No. Get away from me. I don’t know what you are.” And then she keeps swimming. She goes under again. I give her the paddle. No, and then she puts a hand around the paddle. In that moment that you had paused on, she looked back at me and she registered like, “Oh, this is another animal, with a face.”
Lex Fridman
(02:20:18)
For people just listening, you need to go watch the video. You guys are just looking at each other, and she’s looking at you. It’s so cool.
Paul Rosolie
(02:20:26)
She looked right at me, but then she went, “No.” She was like, “Whatever you are, no.” She was like, “I’d rather die in the river. I’m so scared and I’m drowning.” She looked at me and she got scared and she jumped back in. And then I lifted her up, and I started talking in spider monkey. And then, the next moment, you see it. She just goes, “Sure.” And she wraps her tail… You see her tail is around the edge of the paddle. And she puts her hand around it, and then I lifted her. Because I’m taller than she is, I lifted her out of the river. And so now, instead of manhandling her like a raccoon you’re catching by the neck—
Paul Rosolie
(02:21:02)
she’s holding on in her spider monkey way to the paddle, and she looks back over her shoulder. She looks at me, and I’m sitting there talking to her in spider monkey. And she looks at me, and you hear her. She goes… I can’t do the sound she makes, but she makes this spider monkey sound like, “Guh!” And she goes, “Fine.” And then she’s looking off the front end of the paddle as she’s looking at the jungle, and she looks back at me and she’s like… You could just tell. She’s like, “I have no idea what’s happening.”
Paul Rosolie
(02:21:28)
But she accepted the help. And the difference is because I spoke her language in this case. And I know that would be one of those stories that people would nail me on every time if it wasn’t on camera. You can see the moment that she makes direct eye contact with me and goes, “Okay.” And then as soon as we get to shore, she jumps off and runs off into the forest, but it was—
Lex Fridman
(02:21:48)
It’s so… I mean, to me, just watching the video, it’s so amazing. Because she’s looking at you. Like, real… You can see that there’s an actual connection. That there’s like communication, like a social… You know, the way humans, when you’re maybe saving a human being that’s drowning or something like this. There’s that connection. It was beautiful to see, man. And then I read a little bit that spider monkeys are very intelligent, but they’re especially socially intelligent. So they have social connections with each other. They understand what that means. They understand what another entity means. So you speaking in a broken language… Probably is really important and a powerful way to indicate that, “Wow, you’re in network.” Like a foreigner, but—
Paul Rosolie
(02:22:42)
It’s like you’re in a foreign country and someone goes, “Helping, helping.” Like, “Helping.” And you go, “Okay, sure.” Like, you know, “You’re not robbing me, you’re helping,” right? But no, they’re incredibly… And I’m telling you, I’ve had orphan spider monkeys so many times. And they wrap their tail around your neck and they hug you. And you realize that connection that they have with their mothers when they hold onto them in the canopy… When the loggers shoot the mother and then I’m taking care of this baby, they hold onto you. And they need that love and that connection more than they need food. If you put food or you put the warmth of a body, they’ll choose the connection— —over the sustenance.
Lex Fridman
(02:23:22)
Yeah, they really value the touching, that connection.
Paul Rosolie
(02:23:26)
Very tactile. They’re very loving. They wrap their long spider monkey arms around each other. They’re very much like us. They hold their babies. When it rains, all the spider monkeys will get together and they’ll huddle up, and they’ll pull leaves down and they’ll all huddle up together. When it’s cold out, they get close. It’s very cute.
Lex Fridman
(02:23:45)
Yeah, that’s true for a lot of… I mean, they’re distant relatives, but that’s true for a lot of our relatives. The apes, the chimps, all of them, they have this intricate… They’re different. Sometimes more violent, sometimes more loving. But social interactions, it’s cool. It’s cool that way.

Dangerous animal encounters

Paul Rosolie
(02:23:59)
Yeah, I mean, you expect it from them. They’re practically us. To me, it’s when other animals show it. You know, the times that I’ve been on a trail and a jaguar has walked by and just been like, “Mm, ‘sup?” Keep walking. And it’s like, “Eh, it’s kind of cool of you not to eat me. I appreciate it.”
Lex Fridman
(02:24:16)
Has that happened to you?
Paul Rosolie
(02:24:18)
Yeah. I thought somebody was walking on the trail behind me and I was setting a camera trap. And I put my finger up and I was going to go, “Could you walk any louder?” And I had my finger up and I’m crouched because I was setting a camera trap. A jaguar walked by and he literally was just like, shoom, shoom, shoom, just kicking leaves, just having fun, mouth open. And he just walked by and he looked at me and just went, “‘Sup?” Never broke stride. But like— —dead-ass eye contact with the bottom teeth out and that jaguar look of just like, “Hey.” I was like, “Okay.” Now I’m gonna have a full meltdown. Your system, you start sweating.
Paul Rosolie
(02:24:50)
You’re like, “Whoa.” Because they’re also so beautiful. When you actually see a jaguar, and it’s like bright yellow and the teeth and all the muscles… It’s, you know…
Lex Fridman
(02:25:00)
What do you think you communicated to the jaguar that it didn’t kill you?
Paul Rosolie
(02:25:04)
No, nothing. The jaguar was making the decisions. I didn’t do anything that like saved my life. He was just going somewhere. And because he’s the king there, he just went, “Uh.”
Lex Fridman
(02:25:16)
Yeah, probably also not threatened.
Paul Rosolie
(02:25:18)
Not threatened at all.
Lex Fridman
(02:25:18)
I don’t know. But I think there is something to you. See, you’re just taking for granted the things that you’re putting out into the world. You’re probably radiating calm. Or not—
Paul Rosolie
(02:25:29)
Or not…
Lex Fridman
(02:25:29)
…but non-threat.
Paul Rosolie
(02:25:31)
Certainly non-threat. I also smell like an animal when I’m in the jungle, right? I shower in the river. I don’t use deodorant or shampoo or any of that stuff. So I don’t smell… You know, you can just imagine to animals that have a smell that’s four times as good as ours, that just your deodorant, just your conditioner— …just whatever other products, the detergent on your clothes— We smell like Times Square. We smell like a fire alarm to them.
Paul Rosolie
(02:25:59)
You know, they’re like, “What is this thing? It smells very foreign and scary.” Everything’s scary. Speaking of scary, the jaguar was kind of friendly. He was like, “‘Sup?” It’s almost like he’d seen me before on the trail, so he was like, “Oh, it’s just you.” The one time I stood on the forest floor in India with a wild tiger and nobody else was there, the thing that the tiger did that was so unnerving… And again, a tiger’s back is so much bigger than you think. It’s like four jaguars. They’re so big. She wouldn’t look at me, and it was terrifying. She would look over there, she’d look like that, and never eye contact.
Paul Rosolie
(02:26:39)
But it was like, “You’re as important to me as a stick.” And, you know, when you see two fighters square up and it’s all about the eye contact, trust me, you look through a person. You pretend they’re not even there. That tiger insulted me on such a profound and disarming level that I never forgot it. It was just like, “You matter as much as a sparrow. You’re just not one of the things that I care about.” She just was looking around and carried on doing it. And she was like, “I’m gonna walk this way.” And I was just like, “Holy shit, I’m gonna run.” You know, it’s just profound insignificance from this god of an animal with paws the size of dinner plates. And I was like, “Man, if she does, I don’t want her to look at me because if she looks at me, I’m gonna probably…”
Lex Fridman
(02:27:27)
That’s the end.
Paul Rosolie
(02:27:27)
You know, that’s the end.
Lex Fridman
(02:27:28)
Yeah, it just shows how much more powerful she is. That’s probably the most terrifying animal on earth. Yeah, tigers—
Paul Rosolie
(02:27:37)
The rock-paper-scissors of land predators. I think like polar bear and tiger gotta be the most scary.
Lex Fridman
(02:27:44)
Yeah, polar bear.
Paul Rosolie
(02:27:45)
Polar bear’s pretty scary.
Lex Fridman
(02:27:46)
Yeah, you don’t fuck with a polar bear.
Paul Rosolie
(02:27:47)
I don’t think they’re as fast as tigers, but I don’t think you’re gonna go fast on the ice and… But I mean, with a tiger, you can’t outrun it. If you climb a tree, they climb better than you. If you get in the car, they could smash through the door. If a tiger decides it wants you, pretty much nothing… Even if you had a gun, even if you had like a nine millimeter, it ain’t gonna stop a tiger that wants you.
Lex Fridman
(02:28:08)
In the jungle, have you ever felt in danger? Putting the humans aside, were there animals… We’ve talked about how humans are really the source of danger. You often speak about animals as a source of beauty and wonder— —and elegance and grace and all these things which they are. But I’m sure you’ve felt danger.
Paul Rosolie
(02:28:38)
Yeah. I mean, I’m very aware that a hornet’s nest can kill you.
Lex Fridman
(02:28:43)
Oh, so the little guys.
Paul Rosolie
(02:28:45)
The little guys suck. You know, the… I always think like when we were going through the jungle- …one machete whack, and again, people don’t realize how dense it is. You try to run, you get hung up on vines, you trip, you fall onto one of those trees with the black spikes. And then while you’re laying there dealing with all that, they’re just stinging you and your body— …goes into anaphylactic shock and you die instantly. That can very quickly just take you out.
Lex Fridman
(02:29:08)
You’re right. I mean, speaking of spikes, the biggest danger is not even the spikes. I mean, the spikes just—because it creates open wounds and then that can lead slowly—
Paul Rosolie
(02:29:16)
Infection.
Lex Fridman
(02:29:16)
…to infection. So it’s really that— …is the biggest danger.
Paul Rosolie
(02:29:20)
Yeah. In the Amazon, again, I’ve never heard of a human-directed violent jaguar in our region. They just don’t attack people. I’d say mosquitoes are the thing that come after you. The snakes just want to be left alone. Even the venomous snakes. Again, the bushmaster, I grabbed an 11-foot bushmaster by the tail and he turned around, he lifted up to about this high off the ground. And if you could translate what he said, it was just, “Don’t make me do it.” It just said, “Make my day.”
Lex Fridman
(02:29:50)
See, but that’s the thing. You speak snake language.
Paul Rosolie
(02:29:53)
And then I put the tail down.
Lex Fridman
(02:29:54)
You speak snake.
Paul Rosolie
(02:29:54)
I went, “Okay.” I was like, “I’m sufficiently scared.” So the problem happens when you don’t know what you’re doing. So I’ll give you— …an example. You want a dangerous animal story, I’ll give you one. I was walking one time and I was trying to be responsible. It always happens when I’m trying to be responsible— …I get into trouble. I’m trying to be safe and I’m on the side of a stream and there’s elephants on the other side… I’m in India. There’s a deep, like a 12-foot thing, and then a stream and then on the other side there’s elephants. And I’m walking and I’m like, “I’m going to sit in a tree and I’m going to enjoy these elephants. I’m going to make notes in my book like Jane Goodall.”
Paul Rosolie
(02:30:29)
Then I came up against a cement wall and it was the back of a male elephant. And in India, it’s a male elephant that’s been harassed and had fire thrown at it and God knows what else. And if I translate what he said, he turned around and he just went, “What the fuck?” Like, he just looked at me like, “How dare you?” And then he just smacks apart the tree, turns around, and then that elephant was trying to kill me. That was not a mock charge. I threw off my backpack, zigzagged through the woods. He broke apart trees. If I had a GoPro on my back to show you what I saw—just the shrapnel and devastation of this thing just bashing through trees. And again, every bush that I encounter is a possible trip.
Paul Rosolie
(02:31:12)
Every vine is a possible hangup. And then if they get you, he’ll step on you and crush you. And so I threw myself off the edge of this cliff, rolled down into the stream, and the elephant got to the edge of the cliff and almost fell on me. He got to the edge of the cliff and did one of these and then came back down on his hind feet. Picked up a stick, threw it at me. And the stick just smacked down next to me in the stream, and I remember I gave him the finger because it was like, “I’m alive.” And then he just stormed off into the jungle.
Lex Fridman
(02:31:42)
I mean, there’s nothing like an elephant.
Paul Rosolie
(02:31:43)
There’s nothing like an elephant anywhere. I loved listening—I was so excited when I put on your podcast with the dinosaur guy— because he was like, “When a baby is born,” he was like, “it learns, you know, elephant, giraffe, T-Rex.” And I was like, “Holy shit.” You know? Along with like banana, water— sky is blue, and somehow these are initial things in your first few months on Earth. These are the characters you’re introduced to. Like, how the hell did T-Rex get there? They don’t even exist anymore. It’s like— It was just such a fun… and I could hear you smiling through the mic as I’m listening to it, and I was like, “Oh, this is gonna be a good one.”
Lex Fridman
(02:32:19)
Yeah. I mean, the dinosaur world is incredible. But like, the fact that you have such a predator evolve with such a gigantic jaw, so much destructive power is weird.
Paul Rosolie
(02:32:30)
And then he broke my heart because he was talking about how the T-Rex and Stegosaurus—he’s like, “All the books have them together.” And he’s like— “They’re nowhere near each other.” “They did not exist anywhere near each other,” and I was like…
Lex Fridman
(02:32:42)
I want them to battle with each other.
Paul Rosolie
(02:32:44)
Yes.
Lex Fridman
(02:32:46)
Speaking of elephants, I feel like we’ll be up for an adventure at some point. After all this chaos is over, do you think back in the jungle? Africa? India?
Paul Rosolie
(02:32:58)
I think I would love to show you a herd of truly wild elephants in the African jungle. I think going on a boat trip through the Amazon, not a hiking one— —where we’re going through some really… there’s areas where you can get permits to go through areas where no one’s allowed to go. They’re completely protected areas, and you can just go for a week through areas where the animals have no idea what a human is. And so you can move through it, and it would be a little bit more of an enjoyable experience, not a survival situation. Go with J.J. in a boat and just travel through the Amazon. “Hey, maybe we protect this river.” And then the river’s mapped from north to south, and we just raft down with boat support. You know?
Lex Fridman
(02:33:45)
It’s really incredible to see how it’s all connected. I mean, the river is the thread that connects the whole story. And so it’s nice— —to see how it all is connected. And that’s why us starting in the mountains is also really nice, to see where it begins. But it keeps going. The story keeps going.
Paul Rosolie
(02:34:01)
It keeps going. We did start in the mountains. An epic first day together.

Writing, journaling, and great writer inspirations

Lex Fridman
(02:34:07)
And hopefully, people get a chance to see that video. So I gotta ask you about the writing. I mentioned you’re— —an incredible writer. What’s your writing process like for this book, Jungle Keeper, for Mother of God, for future books you’re writing? Are you like a Stephen King? Do you have a drinking thing where you go to some dark places in the basement? Do you write every single day? Do you take little notes here and there? Like, your notebook has a bunch of doodles— —a bunch of writing. What’s your process like?
Paul Rosolie
(02:34:44)
I try to journal every day for a number of reasons. It’s accountability. It helps me keep track of… It’s fun to see your hopes and dreams. It’s fun to record the mundane moments that we all forget about, and that might be, like, cooking in the kitchen with your mother. That might be a fun walk you had with your dog. Little things that you think you’re going to remember everything, but you just don’t. And so I have piles of notebooks in my room. When something happens, I write it down. If a cool story happens, I will write it down, or if I find a leaf from an extinct tree I will make an etching of it. But I just…
Paul Rosolie
(02:35:28)
Anything that happens that I find remarkable in any way, either for my own personal memory or for writing, I’ll write it down. And then when I go back to it later, one, I have a very good memory, and then two, the facts are there. And so when something happens like you rescue a spider monkey or something remarkable in life, you get to spend time with someone that you haven’t in a long time and you get that feeling of, “Oh, that’s why I’m such good friends with them.” You write these things down, and then it’s always there. And so I feel like whenever I don’t journal, I’m missing out on keeping my life and my memories. So yeah.
Paul Rosolie
(02:36:12)
I don’t do that Stephen King thing. That quote about how “amateurs wait for inspiration, and the professionals, we go to work every day,” and he’s like, “10 pages a day,” or whatever it is. I don’t do that. I write when I feel like it. I’ll start thinking, “Oh, this is a perfect way to start this scene,” because at the moment this happened, I felt it so intensely. If we’re bringing people in and out, I’ll just be in a car or boat, and I’ll start thinking about it and I’ll go, “This is…” You’ve just got to carpe diem. And I’ll go, “Okay, where did that happen again?” and I’ll go to that page. And I’ll go, “Okay, so what exactly… …Happened.” Then you get the laptop, and yeah. So it’s brain to paper to laptop, always paper in between.
Lex Fridman
(02:36:57)
Well, how do you go from disparate notes to the final thing? Because it’s difficult to convey the experience through words, and you do that well. So, do you edit a lot? Do you iterate?
Paul Rosolie
(02:37:13)
That’s where Stephen King was right. Because I look at writing like sculpting. You have to have something to sculpt. And so when you’re thinking of a story… Again, I love listening to great storytellers. And I actually love listening to bad stories, just like I like watching bad movies to see what they did wrong. When you listen to someone that starts a story and they have you hooked from the second they start, and then you’re like, “Wait, but how did that happen? And why was that happening? What happened next?” and they keep you going and they drop the information perfectly. And so every now and then, you figure that out in that moment of inspiration. So then I have my facts written down here.
Paul Rosolie
(02:37:51)
And then I’ll do an outline on a page or something. But then I have to get it all out of me with a pen. Then I can move to… and I’ll almost just close my eyes and write the story out. You’re literally making your clay, making the shape of the thing. And then editing is giving it the details.
Lex Fridman
(02:38:14)
So, you do take passes like-
Paul Rosolie
(02:38:16)
Oh, my God, yes.
Lex Fridman
(02:38:17)
Oh, editing. Yeah.
Paul Rosolie
(02:38:17)
I mean, dozens and dozens. That’s where writing sucks. When you’re finishing a book… I’ll never do that again. So what I’m doing now with this last book, there’s so much that it covered.
Paul Rosolie
(02:38:30)
And I was in the jungle, and it’d be like hiking for 10 hours a day, dealing with narco-traffickers, all this stuff, and then I’d have to edit at night. And it was like, “This is no way to live.” So now what I’m doing is I’m writing chapters as I feel like writing chapters. When something amazing or remarkable happens, I go, “This is going to be its own chapter.” I write it, edit it, and then I send it to my sister who is an expert editor and has lived more in literature than most people live in real life. And she’ll let me know if it’s good or bad, or needs to be tweaked, or moved along. When I get it back from her, it’s marked up. And now what I’m going to do is I’m just going to put those aside.
Paul Rosolie
(02:39:10)
And then, the next time I want to write a book, it’s not starting from scratch on 300,000 words, it’s just here, and it’s ready. Much easier.
Lex Fridman
(02:39:20)
What kind of books do you think you might write in the future?
Paul Rosolie
(02:39:23)
Well, there’s Mother of God, and now there’s Jungle Keeper. And then I’m already working on Endgame. Because there’s so much that has happened. I think I told you when you were there, but right before you came, me and JJ went to the back end, behind our river—
Paul Rosolie
(02:39:40)
—to this horrible part of the Amazon that’s 10 times more lawless than where we are. And instead of having no people, there are people. And you want to talk about Amazonian No Country for Old Men? It’s the oil companies, and the missionaries, and the newly contacted tribe. There’s a people called the Nahua people, and they’re recently contacted, and they’ve been ripped out of the forest. And they’re standing there with their little bows and arrows. They’re tiny people. The normales are tall, the Nahua are small. And we just saw brutality in this horrific, horrible… It’s like Sicario. It’s just absolute lawlessness.
Paul Rosolie
(02:40:18)
I remember the moment JJ looked at me and he said—and we both think of ourselves as tough, I think, until we get in these certain situations—he looked at me and he went, “We’re not safe.” And we looked at the people around us, and we’re at this side of the river port eight days up this river, and you could tell that everyone that was looking at us was making a calculation about how inconvenient it would be to kill us at this moment and how much money they could get. They were like: camera, watch, clothing, backpack.
Paul Rosolie
(02:40:49)
And they were like, “That’s a nice backpack.” You could tell they were just shopping. JJ and me were like, “Where are we putting the tent tonight?” I was like, “We’re not staying here.” And then I was like, “Well, maybe we should stay here.” I didn’t know what to do. And then one of the Nahua people came over to JJ and was asking for food, and he made the mistake of explaining money to them. They’d never had money before.
Paul Rosolie
(02:41:11)
And so he gave them a piece of money, a couple coins. And he was like, “Oh, if you just go over there, there’s a man that’ll sell you something and then you can eat it.” And the guy was like, “Bow and arrow?” And JJ was like, “No, no. Give him this and he’ll give you food,” and it worked. And then JJ got swarmed by like 60 of these tribals; they all had bows and arrows, hands out, and JJ was running with all these half-naked people behind him. That whole saga right there is… that chapter’s going to be called River of the Dolphin Fuckers because everyone we met on the river kept telling us—
Paul Rosolie
(02:41:49)
I’d have my camera with me and I’d go, “Are there dolphins here?” And they’d go, “Yeah, there’s dolphins. And if you fuck one, be careful because they’ll pull you under.” I went, “Okay, weirdo,” to the first guy. And then we got to like eight hours further upriver, met the next guy and I had my camera out, and I’m like, “Hey, are there any dolphins here?” And he goes, “Yeah. If you fuck any, be careful because they’ll grab on and pull you under.” And I was like, “What?” And then like four more people told me the same thing. So I was like, “Okay.” You know?
Lex Fridman
(02:42:14)
The lesson we learned in the jungle: you know, horned anacondas, believe them.
Paul Rosolie
(02:42:18)
Believe them. So apparently on that river, they were all trying to be good Samaritans and warn me about the clear and present dangers involved with amorous dolphin encounters.
Lex Fridman
(02:42:28)
So stylistically, I mean, that is a bit Cormac McCarthy.
Paul Rosolie
(02:42:32)
Ooh, he would have loved it.
Lex Fridman
(02:42:33)
Are there writers you draw inspiration from like that? I mean, you’re very close to him in terms of—
Paul Rosolie
(02:42:40)
It’s too big of a compliment.
Lex Fridman
(02:42:40)
—the style you plug into every once in a while. You jump around stylistically, actually.
Paul Rosolie
(02:42:45)
I do. It depends, because sometimes I want to sink in and flex a little bit, which I don’t think people really enjoy, but I enjoy it. You know, just use all those flowery words— —and make these beautiful metaphors. But what I’m finding more and more is that modern readers aren’t really looking for that. They want an easy read. In my style of storytelling, people really enjoy and tend to thank me for more of an Anthony Bourdain style where you’re like, “So we found ourselves on the side of this river and we knew we were in danger. The reason we were in danger…” and you just start telling the story. Forget the… maybe once every two pages you can throw in one of those beautiful little zingers, but no one wants to watch you flex.
Lex Fridman
(02:43:30)
But also sometimes you go even more than… I don’t think Anthony Bourdain did like Hemingway-like minimal, like— —word, period— —word, like that. That’s another way to flex that I really like that you do sometimes, which is just— —less, and just power in the spacing, the silences. The unsaid is what does the driving.
Paul Rosolie
(02:43:53)
I mean, that’s what’s so arresting about it. You read For Whom the Bell Tolls, and you know, “The air was crisp, and the water was sweet, and the wine was good, and the afternoon was warm.” And you’re like, “I know what that’s like.” These are not complicated sentences, but when he puts them together into a paragraph, you go, “Oh, yeah. I want to drink wine out of leather and lie by the side of that stream.” It sounds so beautiful. And so sometimes, I mean, just look at that. Look at that fire cracking on the horizon there. And it’s like sometimes the only way is just these simple statements, you know?
Lex Fridman
(02:44:27)
Writing’s beautiful. I love writing; I love reading it. Have you interacted with LLMs much? You know, AI systems like ChatGPT? There’s a bit of a scary and a sad aspect to the fact that they can generate language extremely well. But something is missing, and it’s very hard to put your finger on it.
Paul Rosolie
(02:44:50)
My question to you is, I can pick out with stunning accuracy when someone sends me a message and they’ve passed it through ChatGPT— —I know. Somehow I could tell. I don’t know how, but I could tell. We’re at the point with images where we almost can’t tell anymore. I don’t know if that’s going to go away. Like you said, there’s something… one of the things that F. Scott Fitzgerald does is describe these incredibly human moments with such crystalline accuracy that you go, “It must have taken you a month. You must have studied life so much to string those words together.” In one book, he writes about someone screaming with such abandonment that at the highest register, her voice wobbled and cracked.
Paul Rosolie
(02:45:47)
And you’re like, “Oh my God, I know what that sounds like.” And I wonder if it’s because you can say, “Write me The Jungle Book but make it sound like Cormac McCarthy wrote it.” And it’ll be like, “The jungle was dark and stern, and the boy was…” It’ll do it, and it’s amazing. My question to you is, at least right now, what are we picking up on in something as simple as a text message?
Lex Fridman
(02:46:12)
It is very difficult to define. But it’s important to keep thinking about because— … like, what makes us human?
Paul Rosolie
(02:46:19)
You reassured me recently because I called you and I said, “I come out of the jungle and all anybody wants to talk about is AI.” And everyone’s like… It’s like people are walking themselves into the Matrix and asking to be hooked. Everyone’s just obsessed with this topic. And you were like, “Man, human art and human literature is going to actually become so valuable as this other thing happens.” And I expected the opposite answer. I thought you were going to be like, “Yeah, man, this really is. We’re taking off and everything’s going to change.” And you were like, “Man, real artists are going to become more appreciated.”
Lex Fridman
(02:46:57)
As more and more compelling and effective bots appear on the internet— … we’re going to value that less and less, I think. And we’re going to value in-person interaction more and more. And so, you know, artists showing art at galleries versus on the internet— … meeting in person. And, actually, it’s going to force people to be more authentic and real and raw with each other. That’s going to be the valuable resource.
Paul Rosolie
(02:47:25)
I mean, I think already, AI aside, in today’s world, everyone’s so… I mean, movies have become so polished. There’s no weird, quirky stuff. There’s no risky stuff anymore. It’s all very curated. I’ve almost stopped watching movies. And I used to love movies. But it’s fun when they take risks, when they’re messy, when they’re real.
Lex Fridman
(02:47:49)
Yeah. I think Hollywood, the Hollywood stars, and the Hollywood movie-making process have become less and less popular because of that. So I can’t wait for movies to be reinvented—
Paul Rosolie
(02:47:59)
Oh, I can’t wait
Lex Fridman
(02:48:00)
… like independent film. Just raw, edgy, dangerous, all that kind of stuff.
Paul Rosolie
(02:48:03)
And all the actors we like are in TV shows on various streaming platforms. It’s like they’ve all just gone home. They’re not there. I was literally like, “Man, I miss movies. What happened?” I’m re-watching all the old movies that I like, and I was like, “Where is everybody?” What are they doing? It’s like they all have a TV series on Hulu or something, you know? It’s like, “Damn.”
Lex Fridman
(02:48:28)
Yeah. I think it’ll come. The raw, the dangerous, the edgy.
Paul Rosolie
(02:48:32)
What we just described is almost perfect for… There’s a scene in Dead Poets Society where Robin Williams makes them open their books. And the first page of the poetry book is like, “How do you identify a good poem?” He’s like, “A good poem can be…” and he makes a graph. He’s like, “By the subject of the poem, and then the accuracy with which it is described, you can tell whether or not it’s a good poem.” And he reads this, and the whole class is sitting there bored. And he’s like, “Now rip that page out of your book.” And they rip the page out. And then he’s like, “Now stand up. Describe something.” And he makes them bleat it and scream it. It’s almost exactly what we’re describing right now.
Paul Rosolie
(02:49:05)
It’s like, yeah, you can turn it into a graph if you need to, but it’s something way messier than that.
Lex Fridman
(02:49:10)
Yeah. And Robin Williams, the person—
Paul Rosolie
(02:49:12)
God.
Lex Fridman
(02:49:13)
…is a perfect example of a complicated, beautiful human. I miss him. And whenever I see clips of him come up, it’s just— I still to this day can’t make sense of how a person like that could take his own life—somebody who’s brought so much joy to the world. It scares me, man. It scares me. I’m scared of my own mind in that way, you know? That he could be at the top of the world…
Paul Rosolie
(02:49:47)
Mm-hmm. But he had an illness.
Lex Fridman
(02:49:50)
Yeah, that’s what I understand. Dude, life is a rollercoaster.
Paul Rosolie
(02:49:57)
I’m telling you.
Lex Fridman
(02:49:57)
And you’re living through it.
Paul Rosolie
(02:49:58)
As scary as that… like, that you can go down the Robin Williams hole, I’ll give you this. My very close friend, Gleb, has a story. He was in New York City as a kid and he saw Robin Williams walking down the street. He went up to him and said, “Oh my God, it’s Robin Williams.” And Robin Williams was like, “Yeah.” And he goes, “Can I have an autograph?” And he goes, “Do you have any paper?” And my friend was like, “No, I’m eleven.” And Robin Williams was like, “Go get some paper.” And Robin Williams’ manager was with him and he was like, “Robin, we don’t have time. We gotta get up there.” And he was like, “Hold on. I told the kid I’d give him a thing.”
Paul Rosolie
(02:50:32)
“He’ll be back.” And my friend heard this, thinking, “Just please stay, please stay,” like his whole life depended on it. And he ran into a diner, grabbed a napkin, and ran back out into the street. It took him a few minutes, and he said Robin Williams was sitting there, and the irate manager was there being like, “Come on, let’s go.” Robin had waited there and signed the napkin for him, and actually did it with a smile and a wink.
Lex Fridman
(02:51:00)
Yeah, man. You can bring a lot of joy to the world. Never forget that. All those little interactions. I love it. I love it.
Paul Rosolie
(02:51:07)
That was another one of Jane’s amazing quotes that I couldn’t reproduce, but it’s just that you don’t realize the degree to which the things you do each day matter— …even if it’s just to the people around you. To the people around you, you are their entire life experience— …if they’re your kids, your parents, your partner. So yeah, the things you do. And if you can manage to put that extra energy where you put a little magic on it, where it is fun—showing up home with something, or playing with the kids in a way that surprises them. I had a good friend of mine.
Paul Rosolie
(02:51:44)
This guy Vinnie, he told me—I called him and said, “What are you doing?” He said, “Oh, I have a whole plan set up. It’s supposed to be really good stars tonight. I’m putting my kids to bed. I’m putting my daughter to bed. I’m gonna wake her up in the middle of the night and I’m gonna have a candle. She’s never seen it. And I’m gonna take her up to the roof to go stargazing.” He’s like, “But I want her to sleep.” And he’s like, “You know, remember when you were a kid and you would wake up?” And it’s like he was curating a magical experience for her to see the stars, making warm tea, and all this. Man, you can just make it so great.
Lex Fridman
(02:52:18)
Jane Goodall’s the reason you met this guy.
Paul Rosolie
(02:52:20)
That’s right.
Lex Fridman
(02:52:21)
You’ve continuously spoken really highly of him, and he gave me this book that he recently wrote, Echoes from Eden. Signed it.
Paul Rosolie
(02:52:31)
Yeah. Dax, A, saved my life, and B, is the example of what everybody wishes. Dax made an amazing company, amassed an amazing fortune, and then said, “I’m gonna use it for good.”
Lex Fridman
(02:52:43)
He’s given a lot of resources, a lot of love, a lot of effort to helping the Amazon rainforest and the environment in general. And he’s one of the only guys I know who has a sexier beard than you.
Paul Rosolie
(02:52:58)
Yeah, he’s got me beat big time. That thing is—
Lex Fridman
(02:53:01)
He wrote, “Thank you, brother, for your love of the wild. This book is about the heroes fighting on the front line for nature. Together we can protect Earth’s last wild places. Speak soon, Dax.”
Paul Rosolie
(02:53:14)
He supported all these initiatives. He went to the Amazon with Jane. He supported Jungle Keepers. He’s supported Sea Shepherd. And so he really went out and said, “Okay, what are the environmental projects that are doing the most good? And where do I want to put my resources?” And everyone always whines about that, like, “How come these guys don’t?” And it’s like he did, and he got a lot done. Then he went and visited all those projects—sea turtles, Indonesian orangutans, working with Jane. So that book is sort of a State of the Union on where conservation is at, with a lot of knowledge about how all the different strategies…
Paul Rosolie
(02:53:52)
It’s so different protecting sea turtle eggs versus trying to save a river in the Amazon versus Jane’s global message of hope. And then he has a guy in there who’s trying to save a specific part of I think Sumatra, and it’s just amazing stuff.
Lex Fridman
(02:54:07)
The Congo.
Paul Rosolie
(02:54:08)
The Congo. And then he actually took the time to go to these places and see the operations on the ground.
Lex Fridman
(02:54:14)
And are you still working with him?
Paul Rosolie
(02:54:16)
Yeah. Well, the way it happened in my life was the one time I quit conservation was right around the time COVID hit and I was going through a divorce. I was 32 years old, and I had no job, no nothing. JJ’s mom had COVID. Don Ignacio, the shaman, had COVID. Pico’s leg was coming off. It was like nothing was working. Nobody could go anywhere. And I called Mohsin and I was like, “I quit.” I was like, “We’re never going back to the jungle.” The loggers just went out and were tearing down everything. I just said, “I’ve got nothing.” In that absolute black depression, I called him and I said, “I quit.”
Paul Rosolie
(02:55:02)
“I’m gonna go get a job. I guess I’ve just been like a jungle Peter Pan, and it’s time to grow up.” I was really embarrassed at the time that I did that. And then I spent like four days just laying in bed with no idea what to do. The only thing I can do is this. And I had talked to Dax months earlier, told him my plan for protecting the river, for making a ranger team, and he’d been looking over the budgets and spreadsheets and seeing if this was real. He was still forming Age of Union. And then four days after I quit, the phone rings and it’s Dax. And he goes, “Hey, I looked over the budget by the way. I’d like to make a 10-year commitment to Jungle Keepers. Let’s go.”
Paul Rosolie
(02:55:45)
He had no idea what I was going through, and he was just like, “Let’s go.” Going from that depressed to that inspired in a single conversation… you could get the bends from that.
Lex Fridman
(02:55:57)
Yeah, and it’s not just the money. It’s having somebody who believes in you.
Paul Rosolie
(02:56:01)
No, it’s that he believes we can do it. Money means tuna cans and gasoline and being able to buy shoes, you know? We never had those things before. We were just living in the jungle watching our bodies decay. And he was like, “No, I know how to run a company. I can tell what you guys need to run an organization.” And he did that and has stuck by us. He came to the Amazon not that long ago, and we took him around, and he looked around and he went, “I’ve never seen people…” Because when we started, he said, “You guys remind me of a startup. You’re a mess.” And that was really right before Stefan had come in.
Paul Rosolie
(02:56:43)
And so now, he’s seeing ranger teams and boats going up and down. We have complex systems and a donor program, and all these things are working well. We’re actually making progress and we have annual reports and all this data. And he’s like, “People have donor fatigue, where they donate money and they don’t know where it’s going. Here, they can see what’s happening.” And so having someone like Dax in your corner is a miracle, really. In the book, it’s gonna sound… again, a lot of the things that happened to me in my life sound like bad writing.
Paul Rosolie
(02:57:16)
You know in the movie when they’ve got the gun against their head and they’re on the ground, and you go, “They’re not getting out of this one.” And then someone bursts through the door and saves them. That’s just happened too many times to me. It sounds like bad writing, but it’s a really good life.
Lex Fridman
(02:57:31)
Since you mentioned Stefan one more time— …one of the things I forgot to mention, one of my happiest moments in life, and I had many of them in the jungle with you, is just talking late at night after ayahuasca, funny enough— …chatting with Stefan and Dan and you, and giggling and just talking about life and everything. And Dan is a guy I have to give a shout-out to. You should go follow him on Instagram. Life with Dan. He’s an incredible wildlife photographer. I’ve seen him. He’s worked quite a lot with you. He has a love of nature, a love of the wilderness, a love of beauty, and is extremely good at taking pictures, but just goes to the edges with you. He’s the only guy I’ve seen with two giant cameras able to follow you into the darkness.
Paul Rosolie
(02:58:24)
Well, Dan… First of all, that picture I showed you where I’m in the tree, because I told you the story with JJ where I climbed the giant tree. Well, this is years later, I climbed it with Dan. Dan was there, and so he flew the drone up and got me in the tree. But what Dan’s a really good example of is, like you were saying, what would you say to the kids? Dan listened to our first podcast while living in Singapore, and he’s a young filmmaker.
Paul Rosolie
(02:58:53)
He signed himself up—again, just get out there—to come on a Tamandua Expeditions trip with my company, and he showed up. Sure enough, their boat broke down while I was off doing Jungle Keepers stuff, and someone was like, “Yo, their boat broke down.” So we show up and I haul their boat and he comes up to me and goes, “I’m such a big fan. I just wanted to say hi.” I said, “Well, great. Hello. Let’s get you back on the river.” And then someone came up to me and they said, “You know, he’s a really good photographer.” I said, “Everybody’s a good photographer today. That’s great. Amazing.”
Paul Rosolie
(02:59:25)
“We have Stefan and Mohsin,” I said. “What else do we need?” And then someone I trust was like, “Hey, listen, look at his stuff. It’s not normal.” And then I watched a few of his videos and I went, “Holy shit.” And I went, “Would you ever think of coming down for a few weeks to film?” And at the time, he was like, “No way.” He was so amazed. And then like now we’re bros. And we film together all the time. But he put himself in the position where he has the skill, the insane skill. I mean, some of his things, he’s doing tracking shots of a white-winged sparrow over the water where he’s in the boat with an 800-millimeter lens— Getting these insane shots. I’ve never seen a talent like him with video.
Lex Fridman
(03:00:12)
But wildlife photography and documentary filmmaking in general, it’s not just about the competence of being able to pull off a difficult shot. It’s the patience required and the discipline to just sit there and wait. I mean, when we went out into the jungle, he waited.
Paul Rosolie
(03:00:32)
Yeah. No, I mean, even looking on this page, that shot of the— …of the emerald tree boa there, he got up before dawn to wait for the sideways light because he had a vision of lighting the snake from the side, and then the macaws coming off the clay lick. How many days at the clay lick till he got the explosion of macaws? And I’m up in the tree and he’s on the walkie-talkie. And then also your lenses are gonna fog. You have to be able to hike and do everything everybody else is doing, and your job. I mean, the dude is…
Lex Fridman
(03:01:04)
You attract a lot of incredible people because the mission is clear and there’s just a vibrancy and energy to the whole thing. It’s exciting. That’s why the best people come to work with you, come to hang out with you.
Paul Rosolie
(03:01:17)
It’s become an amazing team. I look around at the people and I go, “How did this happen?”
Lex Fridman
(03:01:24)
But it is getting more intense and dangerous and so on. I have to ask you the thing we’ve talked about. What do you think you’ll do when you’re getting older? This is pretty intense. This is pretty insane. Where do you see yourself years from now?
Paul Rosolie
(03:01:39)
I want to protect this river. We have to protect this river in the next year and a half or else we’ll lose the chance. First book, I got to the Amazon and it was wild. Second book, we built this amazing organization and we got so close. It’ll be like those movies, like Blow, where it’s like, “For a time it was amazing,” and then at the end it’s not so great.
Lex Fridman
(03:02:00)
By the way, great movie.
Paul Rosolie
(03:02:01)
Great movie. But I’m writing this story as it happens and the endgame might be written by somebody else. Or we just got really close and then it all fell apart. But we’re 130,000 acres of the way there. If we make it to 300,000, I think enough people are going to learn about this. It’s going to tidal wave. We’re going to make an amazing documentary about how we protected the wildest place on earth. And then I would love to have a few kids, get a PhD, teach other conservationists around the world how to do this to save really wild places, keep inspiring people, keep writing books, and keep going on expeditions. I don’t have any problems with that.
Paul Rosolie
(03:02:48)
I can’t do this much longer because the pressure of wondering if it’s going to be okay—I’ve used all of it that I can. My Lord of the Rings analogy of carrying the ring, it’s like you can only do that for so long. And so I’m actually very excited to… I need to know that it’s safe. I mean, that monkey that I rescued out of the river, the toucan, Lucas, who comes back to visit us. We just saw a giant anteater not that long ago with Dax in the jungle. I know these animals and I’m responsible for protecting their home. It would be so amazing to bring people to the treehouse and show them this amazing place and put out documentaries. So I have no problem imagining a transition period. I’d like to transition out of Blood Diamond and go to more of the professor role after this.
Lex Fridman
(03:03:45)
You mean like an Indiana Jones type of professor?
Paul Rosolie
(03:03:48)
Yeah. Running from the tribes. As long as it doesn’t go supernatural at the end, I’ll be very happy. That always kind of let me down.
Lex Fridman
(03:03:58)
Well, thank you for giving basically everything you’ve got towards this mission. And thank you for being who you are. It’s been the honor of a lifetime to be able to call you a friend and to have this conversation, brother. This is the third time we’ve spoken. I think we’ll talk at least 10 more times, and I think I speak for everybody in saying thank you, and please don’t die trying to save the rainforest.
Paul Rosolie
(03:04:30)
I have to say thank you to you because our first conversation changed everything. It really did. It brought so many more people onto the mission. I think it also lifted me up because, as we often acknowledge, this can weigh you down. I often do get weighed down and I lose hope myself. And then I get lifted up by moments like that where someone I’m a huge fan of and who I respect so much reaches out and goes, “Do you want to come to Austin and do this podcast I do?” And I responded, “The Lex Fridman Podcast?” But you’ve really changed the narrative and allowed this to be a reality.
Lex Fridman
(03:05:18)
And everybody, go pre-order Jungle Keeper, the book, available everywhere. And if you can, donate on junglekeepers.org. This is an important mission, an ultra-competent team, and this is such a beautiful part of the world that I really hope we protect. So thank you for talking today, and now let’s go eat.
Paul Rosolie
(03:05:45)
Thank you, brother.
Lex Fridman
(03:05:47)
Thanks for listening to this conversation with Paul Rosolie. To support this podcast, please check out our sponsors in the description, or you can also find links to contact me, ask questions, give feedback, and so on. And once more, let me say thank you for everything. Thank you for your support. Thank you for the love. And thank you for listening. I hope to see you next time.

#488 – Infinity, Paradoxes that Broke Mathematics, Gödel Incompleteness & the Multiverse – Joel David Hamkins

Joel David Hamkins is a mathematician and philosopher specializing in set theory, the foundations of mathematics, and the nature of infinity, and he’s the #1 highest-rated user on MathOverflow. He is also the author of several books, including Proof and the Art of Mathematics and Lectures on the Philosophy of Mathematics. And he has a great blog called Infinitely More.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep488-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript:
https://lexfridman.com/joel-david-hamkins-transcript

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Joel’s X: https://x.com/JDHamkins
Joel’s Website: https://jdh.hamkins.org
Joel’s Substack: https://www.infinitelymore.xyz
Joel’s MathOverflow: https://mathoverflow.net/users/1946/joel-david-hamkins
Joel’s Papers: https://jdh.hamkins.org/publications
Joel’s Books:
Lectures on the Philosophy of Mathematics: https://amzn.to/3MThaAt
Proof and the Art of Mathematics: https://amzn.to/3YACc9A

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Perplexity: AI-powered answer engine.
Go to https://www.perplexity.ai/
Fin: AI agent for customer service.
Go to https://fin.ai/lex
Miro: Online collaborative whiteboard platform.
Go to https://miro.com/
CodeRabbit: AI-powered code reviews.
Go to https://coderabbit.ai/lex
Chevron: Reliable energy for data centers.
Go to https://chevron.com/power
Shopify: Sell stuff online.
Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex
MasterClass: Online classes from world-class experts.
Go to https://masterclass.com/lexpod

OUTLINE:
(00:00) – Introduction
(01:58) – Sponsors, Comments, and Reflections
(15:40) – Infinity & paradoxes
(1:02:50) – Russell’s paradox
(1:15:57) – Gödel’s incompleteness theorems
(1:33:28) – Truth vs proof
(1:44:52) – The Halting Problem
(2:00:45) – Does infinity exist?
(2:18:19) – MathOverflow
(2:22:12) – The Continuum Hypothesis
(2:31:58) – Hardest problems in mathematics
(2:41:25) – Mathematical multiverse
(3:00:18) – Surreal numbers
(3:10:55) – Conway’s Game of Life
(3:13:11) – Computability theory
(3:23:04) – P vs NP
(3:26:21) – Greatest mathematicians in history
(3:40:05) – Infinite chess
(3:58:24) – Most beautiful idea in mathematics

Transcript for Infinity, Paradoxes, Gödel Incompleteness & the Mathematical Multiverse | Lex Fridman Podcast #488

This is a transcript of Lex Fridman Podcast #488 with Joel David Hamkins.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Lex Fridman
(00:00:00)
The following is a conversation with Joel David Hamkins, a mathematician and philosopher specializing in set theory, the foundations of mathematics, and the nature of infinity. He is the number one highest-rated user on MathOverflow, which I think is a legendary accomplishment. MathOverflow, by the way, is like StackOverflow but for research mathematicians. He is also the author of several books, including Proof and the Art of Mathematics and Lectures on the Philosophy of Mathematics. And he has a great blog, infinitelymore.xyz. This is a super technical and super fun conversation about the foundations of modern mathematics and some mind-bending ideas about infinity, the nature of reality, truth, and the mathematical paradoxes that challenged some of the greatest minds of the 20th century.
Lex Fridman
(00:01:02)
I have been hiding from the world a bit, reading, thinking, writing, soul-searching, as we all do every once in a while. But mostly, just deeply focused on work and preparing mentally for some challenging travel I plan to take on in the new year. Through all of it, a recurring thought comes to me, how damn lucky I am to be alive and to get to experience so much love from folks across the world. I want to take this moment to say thank you from the bottom of my heart for everything, for your support, for the many amazing conversations I’ve had with people across the world. I got a little bit of hate and a whole lot of love, and I wouldn’t have it any other way. I’m grateful for all of it. This is the Lex Fridman Podcast.
Lex Fridman
(00:02:00)
To support it, please check out our sponsors in the description, where you can also find ways to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Joel David Hamkins.

Infinity & paradoxes

Lex Fridman
(00:02:17)
Some infinities are bigger than others. This idea from Cantor at the end of the 19th century, I think it’s fair to say, broke mathematics before rebuilding it. I also read that this was a devastating and transformative discovery for several reasons. So one, it created a theological crisis, because infinity is associated with God, how could there be multiple infinities? Also, Cantor was deeply religious himself. Second, there’s a kind of mathematical civil war. The leading German mathematician, Kronecker, called Cantor a corrupter of youth and tried to block his career.
Lex Fridman
(00:02:57)
Third, many fascinating paradoxes emerged from this, like Russell’s paradox, about the set of all sets that don’t contain themselves, and those threatened to make all of mathematics inconsistent. Finally, on the psychological and personal side, Cantor’s own breakdown. He literally went mad, spending his final years in and out of sanatoriums, obsessed with proving the continuum hypothesis. So laying that all out on the table, can you explain the idea of infinity, that some infinities are larger than others, and why was this so transformative to mathematics?
Joel David Hamkins
(00:03:35)
Well, that’s a really great question. I would want to start talking about infinity and telling the story much earlier than Cantor actually, because, I mean, you can go all the way back to Ancient Greek times when Aristotle emphasized the potential aspect of infinity as opposed to the impossibility, according to him, of achieving an actual infinity. Archimedes’ method of exhaustion where he is trying to understand the area of a region by carving it into more and more triangles, say, and sort of exhausting the area and thereby understanding the total area in terms of the sum of the areas of the pieces that he put into it. And it proceeded on this kind of potential understanding of infinity for hundreds, thousands of years.
Joel David Hamkins
(00:04:25)
Almost all mathematicians were potentialists only and thought that it was incoherent to speak of an actual infinity at all. Galileo is an extremely prominent exception to this: he argued against this sort of potentialist orthodoxy in the Dialogues Concerning Two New Sciences. Really lovely account there that he gave. In many ways, Galileo was anticipating Cantor’s developments, except he couldn’t quite push it all the way through and ended up throwing up his hands in confusion, in a sense. The Galileo paradox is the idea or the observation that if you think about the natural numbers, I would start with zero, but I think maybe he would start with one.
Joel David Hamkins
(00:05:17)
The numbers one, two, three, four, and so on, and you think about which of those numbers are perfect squares. So zero squared is zero and one squared is one and two squared is four, three squared is nine, 16, 25, and so on. And Galileo observed that the perfect squares can be put into a one-to-one correspondence with all of the numbers. I mean, we just did it. I associated every number with its square. And so it seems like on the basis of this one-to-one correspondence that there should be exactly the same number of squares, perfect squares, as there are numbers, and yet there are all the gaps in between the perfect squares, right?
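The pairing Hamkins describes can be made concrete in a few lines of Python (an editorial sketch, not from the conversation; the function name `square_pairing` is invented for illustration):

```python
# Galileo's pairing: each natural number n is matched with its square n * n.
# Restricted to any finite prefix of the naturals, the matching is visibly
# one-to-one: distinct numbers always receive distinct squares.

def square_pairing(upper):
    """Return the (n, n * n) pairs for n = 0, 1, ..., upper - 1."""
    return [(n, n * n) for n in range(upper)]

pairs = square_pairing(6)
print(pairs)  # [(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]

# No square is repeated, so the numbers and their squares match up exactly,
# even though the squares thin out as you go further along.
assert len({s for _, s in pairs}) == len(pairs)
```

On every finite stage the counts agree, which, extended to all of the naturals, is precisely the one-to-one correspondence Galileo observed between the numbers and the perfect squares.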
Joel David Hamkins
(00:06:03)
And this suggests that there should be fewer perfect squares, more numbers than squares because the numbers include all the squares plus a lot more in between them, right? And Galileo was quite troubled by this observation because he took it to cause a kind of incoherence in the comparison of infinite quantities, right? Another example is, if you take two line segments of different lengths, and you can imagine drawing a kind of foliation, a fan of lines that connect them. So the endpoints are matched from the shorter to the longer segment, and the midpoints are matched and so on. So spreading out the lines as you go. And so every point on the shorter line would be associated with a unique, distinct point on the longer line in a one-to-one way.
Joel David Hamkins
(00:06:57)
And so it seems like the two line segments have the same number of points on them because of that, even though the longer one is longer. And so it makes, again, a kind of confusion over our ideas about infinity. Also, with two circles, if you just place them concentrically and draw the rays from the center, then every point on the smaller circle is associated with a corresponding point on the larger circle, in a one-to-one way. And again, that seems to show that the smaller circle has the same number of points on it as the larger one, precisely because they can be put into this one-to-one correspondence.
Joel David Hamkins
(00:07:36)
Now, of course, the contemporary attitude about this situation is that those two infinities are exactly the same, and that Galileo was right in those observations about the equinumerosity. The way we would talk about it now is to appeal to what I call the Cantor-Hume principle, or some people just call it Hume’s principle, which is the idea that if you have two collections, whether they’re finite or infinite, then we want to say that those two collections have the same size, they’re equinumerous, if and only if there’s a one-to-one correspondence between those collections. And so Galileo was observing that line segments of different lengths are equinumerous, and the perfect squares are equinumerous with the whole…
Joel David Hamkins
(00:08:17)
All of the natural numbers, and any two circles are equinumerous and so on. The tension between the Cantor-Hume principle and what could be called Euclid’s principle, which is that the whole is always greater than the part, is a principle that Euclid appealed to in the Elements. Many times when he’s calculating area and so on, he wants… It’s a kind of basic idea that if something is just a part of another thing, then the whole is greater than the part. And so what Galileo was troubled by was this tension between what we call the Cantor-Hume principle and Euclid’s principle.
Joel David Hamkins
(00:08:59)
It really wasn’t fully resolved, I think, until Cantor. He’s the one who really explained so clearly about these different sizes of infinity and so on in a way that was so compelling. So he exhibited two different infinite sets and proved that they’re not equinumerous; they can’t be put into one-to-one correspondence. It’s traditional to talk about the uncountability of the real numbers. So Cantor’s big result was that the set of all real numbers is an uncountable set. Maybe if we’re going to talk about countable sets, then I would suggest that we talk about Hilbert’s Hotel, which really makes that idea perfectly clear.
Lex Fridman
(00:09:39)
Yeah, let’s talk about Hilbert’s Hotel.
Joel David Hamkins
(00:09:41)
Hilbert’s Hotel is a hotel with infinitely many rooms. Each room is a full floor suite. So there’s floor zero… I always start with zero because for me, the natural numbers start with zero, although that’s maybe a point of contention for some mathematicians. The other mathematicians are wrong.
Lex Fridman
(00:09:58)
Like I mentioned, I’m a programmer, so starting at zero is a wonderful place to start.
Joel David Hamkins
(00:10:01)
Exactly. So there’s floor zero, floor one, floor two, or room zero, one, two, three, and so on, just like the natural numbers. So Hilbert’s Hotel has a room for every natural number, and it’s completely full. There’s a person occupying room N for every N. But meanwhile, a new guest comes up to the desk and wants a room. “Can I have a room, please?” And the manager says, “Hang on a second, just give me a moment.” You see, when the other guests had checked in, they had to sign an agreement with the hotel that maybe there would be some changing of the rooms during their stay.
Joel David Hamkins
(00:10:39)
And so the manager sent a message up to all the current occupants and told every person, “Hey, can you move up one room, please?” So the person in room five would move to room six, and the person in room six would move to room seven and so on. And everyone moved at the same time. Of course, we never want to be placing two different guests in the same room, and we want everyone to have their own private room. But when you move everyone up one room, then the bottom room, room zero, becomes available, of course. So he can put the new guest in that room. So even when you have infinitely many things, then the new guest can be accommodated. And that’s a way of showing how the particular infinity of the occupants of Hilbert’s Hotel, it violates Euclid’s principle.
Joel David Hamkins
(00:11:25)
It exactly illustrates this idea because adding one more element to a set didn’t make it larger, because we can still have a one-to-one correspondence between the total new guests and the old guests by the room number, right?
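The manager’s trick can be written as a tiny Python sketch (illustrative only; the function name is mine):

```python
# Sketch: every guest in room n moves up to room n + 1.
def new_room(n: int) -> int:
    return n + 1

# The shift is one-to-one and frees room 0 for the new guest.
moved = [new_room(n) for n in range(100)]
assert len(set(moved)) == 100   # no two guests collide
assert 0 not in moved           # room 0 is now empty
```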
Lex Fridman
(00:11:40)
So to just say one more time, the hotel is full.
Joel David Hamkins
(00:11:45)
The hotel is full.
Lex Fridman
(00:11:46)
And then you could still squeeze in one more, and that breaks the traditional notion of mathematics and breaks people’s brains about when they try to think about infinity, I suppose. This is a property of infinity.
Joel David Hamkins
(00:11:59)
It’s a property of infinity that sometimes when you add an element to a set, it doesn’t get larger. That’s what this example shows. But one can go on with Hilbert’s Hotel, for example. I mean, maybe the next day, 20 people show up all at once. We can easily do the same trick again, just move everybody up 20 rooms. Then we would have 20 empty rooms at the bottom, and those new 20 guests could go in. But on the following weekend, a giant bus pulled up, Hilbert’s bus. And Hilbert’s bus has, of course, infinitely many seats. There’s seat zero, seat one, seat two, seat three, and so on. All the people on the bus want to check into the hotel, but the hotel is completely full. So what is the manager going to do?
Joel David Hamkins
(00:12:50)
And when I talk about Hilbert’s Hotel in class, I always demand that the students provide the explanation of how to do it. So maybe I’ll ask you. Can you tell me, yeah, what is your idea about how to fit them all in the hotel, everyone on the bus, and also the current occupants?
Lex Fridman
(00:13:08)
You separate the hotel into even and odd rooms, and you squeeze in the new Hilbert bus people into the odd rooms, and the previous occupants go into the even rooms.
Joel David Hamkins
(00:13:20)
That’s exactly right. So, I mean, that’s a very easy way to do it. You just tell all the current guests to double their room number, so if you’re in Room N, you move to Room 2 times N. They’re all going to get their own private room, the new room, and it will always be an even number because 2 times N is always an even number. And so all the odd rooms become empty that way. And now we can put the bus occupants into the odd-numbered rooms.
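The even/odd split can be sketched the same way (an illustrative example; the function names are mine):

```python
# Sketch: current guests double their room number (even rooms), while
# the bus passenger in seat s takes odd room 2*s + 1.
def guest_room(n: int) -> int:
    return 2 * n

def bus_room(s: int) -> int:
    return 2 * s + 1

# First 50 guests and first 50 bus passengers all land in distinct rooms.
occupied = {guest_room(n) for n in range(50)} | {bus_room(s) for s in range(50)}
assert len(occupied) == 100
```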
Lex Fridman
(00:13:42)
And by doing so, you have now shoved an infinity into another infinity.
Joel David Hamkins
(00:13:47)
That’s right. So what it really shows, I mean, another way of thinking about it is that, well, we can define that a set is countable if it is equinumerous with a set of natural numbers. And a kind of easy way to understand what that’s saying in terms of Hilbert’s Hotel is that a set is countable if it fits into Hilbert’s Hotel, because Hilbert’s Hotel basically is the set of natural numbers in terms of the room numbers. So to be equinumerous with a set of natural numbers is just the same thing as to fit into Hilbert’s Hotel. And so what we’ve shown is that if you have two countably infinite sets, then their union is also countably infinite. If you put them together and form a new set with all of the elements of either of them, then that union set is still only countably infinite.
Joel David Hamkins
(00:14:34)
It didn’t get bigger. And that’s a remarkable property for a notion of infinity to have, I suppose. But if you thought that there was only one kind of infinity, then it wouldn’t be surprising at all, because if you take two infinite sets and put them together, then it’s still infinite. And so if there were only one kind of infinity, then it shouldn’t be surprising that the union of two countable sets is countable. So there’s another way to push this a bit harder, and that is when Hilbert’s train arrives. Hilbert’s train has infinitely many train cars, and each train car has infinitely many seats.
Joel David Hamkins
(00:15:13)
And so we have an infinity of infinities of the train passengers together with the current occupants of the hotel, and everybody on the train wants to check in to Hilbert’s Hotel. So the manager can, again, of course, send a message up to all the rooms telling every person to double their room number again. And so that will occupy all the even-numbered rooms again and free up again the odd-numbered rooms. So somehow, we want to put the train passengers into the odd-numbered rooms. And so while every train passenger is on some car, let’s say Car C and Seat S, somehow, we have to take these two coordinates, you know, C, S, the car number and the seat number, and produce from it an odd number in a one-to-one way. And that’s actually not very difficult.
Joel David Hamkins
(00:16:10)
In fact, one can just use, say… An easy way to do it is to just use the number 3 to the C times 5 to the S. You multiply 3 by itself the car number of times, and then you multiply 5 by itself the seat number of times, and then you multiply those two numbers together. So 3 to the C times 5 to the S. That’s always an odd number, because the prime factorization has only 3s and 5s in it. There’s no 2 there. So therefore, it’s definitely an odd number, and it’s always different because of the uniqueness of prime factorization. Every number can be factored uniquely into primes. So if you have a number of that form, then you can just factor it, and that tells you the exponent on 3 and the exponent on 5.
Joel David Hamkins
(00:17:06)
And so you know exactly which person it was, which car they came from, and which seat they came from.
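The coding Hamkins describes can be sketched in Python (illustrative; the function names are mine):

```python
# Sketch of the 3^c * 5^s room assignment and its decoding by prime
# factorization.
def room_for(c: int, s: int) -> int:
    return 3 ** c * 5 ** s  # always odd, since 2 is not a prime factor

def decode(room: int) -> tuple:
    # Recover the exponents on 3 and 5 by dividing them out.
    c = s = 0
    while room % 3 == 0:
        room //= 3
        c += 1
    while room % 5 == 0:
        room //= 5
        s += 1
    return (c, s)

assert room_for(2, 3) % 2 == 1           # an odd room number
assert decode(room_for(2, 3)) == (2, 3)  # car and seat recovered uniquely
```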
Lex Fridman
(00:17:10)
And prime factorization is every single number can be decomposed into the atoms of mathematics, which is the prime numbers. You can multiply them together to achieve that number.
Joel David Hamkins
(00:17:23)
That’s, uh-
Lex Fridman
(00:17:23)
And that’s prime factorization. You’re showing 3 and 5 are both prime numbers, odd. So through this magical formula, you can deal with this train, an infinite number of cars, with each car having an infinite number of seats.
Joel David Hamkins
(00:17:41)
Exactly right. We’ve proved that if you have countably many countable sets, then the union of those sets, putting all those sets together into one giant set, is still countable. You know, because the train cars are each countable, plus the current hotel. It’s sort of like another train car, if you want to think about it that way. The current occupants of the hotel could, you know, have the same number as any of the train cars. So putting countably many countable sets together to make one big union set is still countable. It’s quite remarkable, I think. When I first learned this many, many years ago, I was completely shocked by it and transfixed by it.
Joel David Hamkins
(00:18:20)
It was quite amazing to me that this notion of countable infinity could be closed under this process of infinitely many infinities adding up still to the very same infinity, which is a strong instance, a strong violation of Euclid’s principle once again, right? So, the new set that we built is… has many more elements than the old set in the sense that there’s additional elements, but it doesn’t have many more elements in terms of its size because it’s still just a countable infinity and it fits into Hilbert’s Hotel.
Lex Fridman
(00:18:53)
Have you been able to sort of internalize a good intuition about countable infinity? ‘Cause that is a pretty weird thing. You can have a countably infinite set of countably infinite sets, and you can shove it all in and it still is a countable infinite set.
Joel David Hamkins
(00:19:11)
Yeah, that’s exactly right. I mean, I guess, of course when you work with these notions that the argument of Hilbert’s Hotel becomes kind of clear, there are many, many other ways to talk about it too. For example, let’s think about, say, the integer lattice, the grid of points that you get by taking pairs of natural numbers, say, so the upper right quadrant of the integer lattice, yeah? So there’s the, you know, row zero, row one, row two and so on, column zero, column one, column two and so on, and each row and column has a countable infinity of points on it, right? So those dots, if you think about them as dots, are really the same as the train cars if you think about each column in the integer lattice, it’s a countable infinity.
Joel David Hamkins
(00:20:01)
It’s like one train car, and then there’s the next train car next to it, and then the next column next to that, the next train car. But if we think about it in this grid manner, then I can imagine a kind of winding path through these grid points, up and down the diagonals, winding back and forth. So I start at the corner point, and then I go up and to the left, then down and to the right, up and to the left, down and to the right, and so on, in such a way that I’m going to hit every grid point on this path. So this gives me a way of assigning room numbers to the points.
Joel David Hamkins
(00:20:38)
Because every grid point is going to be the Nth point on that path for some N. And that gives a correspondence between the grid points and the natural numbers themselves. So it’s a kind of different picture. I mean, before we used this 3 to the C times 5 to the S, which is a kind of, you know, overly arithmetic way to think about it. But there’s a kind of direct way to understand that it’s still a countable infinity when you have countably many countable sets, because you can just start putting them on this list. And as long as you give each of the infinite collections a chance to add one more person to the list, then you’re going to accommodate everyone in any of the sets in one list.
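The winding path can be sketched as a small enumeration (illustrative Python, with names of my choosing):

```python
# Sketch of the zigzag enumeration: walk the anti-diagonals
# i + j = 0, 1, 2, ... so every grid point (i, j) gets a finite position.
def zigzag(count: int):
    out = []
    d = 0
    while len(out) < count:
        for i in range(d + 1):
            out.append((i, d - i))
        d += 1
    return out[:count]

points = zigzag(15)
assert len(set(points)) == 15   # all positions are distinct grid points
assert (2, 2) in zigzag(25)     # any fixed point eventually appears
```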
Lex Fridman
(00:21:20)
Yeah, it’s a really nice visual way to think about it. You just zigzag your way across the grid to make sure everybody’s included, that gives you kind of an algorithm for including everybody. So can you speak to the uncountable infinities?
Joel David Hamkins
(00:21:33)
Yeah, absolutely.
Lex Fridman
(00:21:33)
What are the integers and the real numbers-
Joel David Hamkins
(00:21:35)
Correct.
Lex Fridman
(00:21:36)
… and what is the line that Cantor was able to find?
Joel David Hamkins
(00:21:38)
Maybe there’s one more step I want to insert before doing that.
Lex Fridman
(00:21:43)
Right.
Joel David Hamkins
(00:21:43)
which is the rational numbers. So we did pairs of natural numbers. Right? That’s the train car, basically. But maybe it’s a little bit informative to think about the rational, the fractions, the set of fractions, or rational numbers, because a lot of people maybe have an expectation that maybe this is a bigger infinity because the rational numbers are densely ordered; between any two fractions you can find another fraction, right? The average of two fractions is another fraction. And so sometimes people, it seems to be a different character than the integers, which are discretely ordered, right? From any integer, there’s a next one and a previous one and so on, but that’s not true in the rational numbers. And yet, the rational numbers are also still only a countable infinity.
Joel David Hamkins
(00:22:35)
And the way to see that is that it’s actually just exactly the same as Hilbert’s train again, because every fraction consists of two integers, the numerator and the denominator. And so if I tell you two natural numbers, then you know what fraction I’m talking about. I mean, plus the sign issue, whether it’s positive or negative. But if you just think about the positive fractions, then you know, you have the numbers of the form P over Q, where Q is not zero. So you can still do 3 to the P times 5 to the Q; the same idea works with the rational numbers. So this is still a countable set. And you might think, well, every set is going to be countable because there’s only one infinity.
Joel David Hamkins
(00:23:21)
I mean if that’s a kind of perspective maybe that you’re adopting, but it’s not true, and that’s the profound achievement that Cantor made is proving that the set of real numbers is not a countable infinity. It’s a strictly larger infinity, and therefore there’s more than one concept of infinity, more than one size of infinity.
Lex Fridman
(00:23:40)
So let’s talk about the real numbers. What are the real numbers? Why do they break infinity?
Joel David Hamkins
(00:23:44)
Right.
Lex Fridman
(00:23:44)
The countable infinity.
Joel David Hamkins
(00:23:45)
Right.
Lex Fridman
(00:23:46)
Looking it up on Perplexity, real numbers include all the numbers that can be represented on the number line, encompassing both rational and irrational numbers. We’ve spoken about the rational numbers, and the rational numbers, by the way, are, by definition, the numbers that can be represented as a fraction of two integers.
Joel David Hamkins
(00:24:05)
That’s right. So with the real numbers, we have the algebraic numbers. We have of course all the rational numbers. The integers and the rationals are all part of the real number system, but then also we have the algebraic numbers like the square root of 2 or the cube root of 5 and so on. Numbers that solve an algebraic equation over the integers, those are known as algebraic numbers. It was an open question for a long time whether that was all of the real numbers or whether there would exist numbers that are the transcendental numbers. The transcendental numbers are real numbers that are not algebraic.
Lex Fridman
(00:24:38)
And we won’t even go to the surreal numbers, about which you have a wonderful blog post. We’ll talk about that a little bit later.
Joel David Hamkins
(00:24:43)
Oh, great. So it was Liouville who first proved that there are transcendental numbers, and he exhibited a very specific number that’s now known as the Liouville constant, which is a transcendental number. Cantor also famously proved that there are many, many transcendental numbers. In fact, it follows from his argument on the uncountability of the real numbers that there are uncountably many transcendental numbers. So most real numbers are transcendental.
Lex Fridman
(00:25:12)
And again, going to Perplexity, “Transcendental numbers are real or complex numbers; they are not the root of any non-zero polynomial with integer or rational coefficients. This means they cannot be expressed as solutions to algebraic equations with integer coefficients, setting them apart from algebraic numbers.”
Joel David Hamkins
(00:25:29)
That’s right. So some of the famous transcendental numbers would include the number pi, you know, the 3.14159265 and so on. So that’s a transcendental number. Also, Euler’s constant, the e, like e to the x, the exponential function.
Lex Fridman
(00:25:47)
So you could say that some of the sexiest numbers in mathematics are all transcendental numbers?
Joel David Hamkins
(00:25:51)
Absolutely. That’s true. Yeah, yeah. Although, you know, I don’t know, square root of two is pretty.
Lex Fridman
(00:25:56)
Square root. All right. So it depends. Let’s not. Beauty can be found in-
Joel David Hamkins
(00:26:00)
That’s right.
Lex Fridman
(00:26:00)
…in all the different kinds of sets, but yeah.
Joel David Hamkins
(00:26:02)
That’s right. And if you have a kind of simplicity attitude, then, you know, zero and one are looking pretty good too, so… And they’re definitely not.
Lex Fridman
(00:26:07)
Sorry to take that tangent, but what is your favorite number? Do you have one?
Joel David Hamkins
(00:26:10)
Oh, gosh. You know-
Lex Fridman
(00:26:12)
Is it zero?
Joel David Hamkins
(00:26:13)
Did you know there’s a proof that every number is interesting? You can prove it, because…
Lex Fridman
(00:26:22)
Yeah? What’s that proof look like?
Joel David Hamkins
(00:26:23)
Yeah, okay.
Lex Fridman
(00:26:23)
How do you even begin?
Joel David Hamkins
(00:26:24)
I’m gonna prove to you-
Lex Fridman
(00:26:25)
Okay.
Joel David Hamkins
(00:26:26)
… that every natural number is interesting.
Lex Fridman
(00:26:28)
Okay.
Joel David Hamkins
(00:26:29)
Yeah. I mean, zero’s interesting because, you know, it’s the additive identity, right? That’s pretty interesting. And one is the multiplicative identity, so when you multiply it by any other number, you just get that number back, right? And two is, you know, the first prime number. That’s super interesting, right? Okay. So one can go on this way and give specific reasons, but I want to prove as a general principle that every number is interesting. And this is the proof. Suppose, toward contradiction, that there were some boring numbers. Okay?
Lex Fridman
(00:27:03)
Okay.
Joel David Hamkins
(00:27:04)
But if there was an uninteresting number, then there would have to be a smallest uninteresting number. But that’s a contradiction, because being the smallest uninteresting number is a super interesting property to have. So therefore-
Lex Fridman
(00:27:23)
Ah, that’s good.
Joel David Hamkins
(00:27:24)
…there cannot be any boring numbers.
Lex Fridman
(00:27:26)
I’m going to have to try to find a hole in that proof, because there’s a lot baked into the word interesting, but yeah, that’s beautiful.
Joel David Hamkins
(00:27:33)
Right.
Lex Fridman
(00:27:34)
That doesn’t say anything about the transcendental numbers, about the real numbers that you just…
Joel David Hamkins
(00:27:38)
That’s right.
Lex Fridman
(00:27:38)
… proved from just-
Joel David Hamkins
(00:27:39)
That’s right.
Lex Fridman
(00:27:39)
… four natural numbers.
Joel David Hamkins
(00:27:40)
Yeah. Okay, so should we get back to Cantor’s argument, or?
Lex Fridman
(00:27:42)
Sure.
Joel David Hamkins
(00:27:42)
Okay.
Lex Fridman
(00:27:43)
You’ve masterfully avoided the question. Well, you basically said, “I love all numbers.”
Joel David Hamkins
(00:27:47)
Yeah, basically.
Lex Fridman
(00:27:48)
Is that what you said? Okay. All right.
Joel David Hamkins
(00:27:49)
That was my intention.
Lex Fridman
(00:27:49)
Back to Cantor’s argument. Let’s go.
Joel David Hamkins
(00:27:51)
Okay, so Cantor wants to prove that the infinity of the real numbers is different and strictly larger than the infinity of the natural numbers. So the natural numbers are the numbers that start with zero and add one successively, so zero, one, two, three, and so on. And the real numbers, as we said, are the numbers that come from the number line, including all the integers and the rationals and the algebraic numbers and the transcendental numbers and all of those numbers altogether. Now obviously, since the natural numbers are included in the real numbers, we know that the real numbers are at least as large as the natural numbers. And so the claim that we want to prove is that it’s strictly larger.
Joel David Hamkins
(00:28:36)
So suppose that it wasn’t strictly larger, then they would have the same size. But to have the same size, remember, means by definition that there’s a one-to-one correspondence between them. So we suppose that the real numbers can be put into one-to-one correspondence with the natural numbers. So therefore, for every natural number N, we have a real number, let’s call it R sub N. R sub N is the Nth real number on the list. Basically, our assumption allows us to think of the real numbers as having been placed on a list, R1, R2, and so on. Okay, and now I’m going to define the number Z, and it’s going to be… The integer part is going to be a zero, and then I’m going to put a decimal place, and then I’m going to start specifying the digits of this number Z, D1, D2, D3, and so on.
Joel David Hamkins
(00:29:28)
And what I’m going to make sure is that the Nth digit after the decimal point of Z is different from the Nth digit of the Nth number on the list.
Joel David Hamkins
(00:29:40)
Okay? So, to specify the Nth digit of Z, I go to the Nth number on the list, R sub N, and I look at its Nth digit after the decimal point. And whatever that digit is, I make sure that my digit is different from it. Okay? And then I want to do something a little bit more, and that is I’m going to make it different in a way that I’m never using the digits zero or nine. I’m always using the other digits, never zero or nine. The digits we consulted would form a kind of diagonal going down and to the right, and for that reason, this argument is called the diagonal argument, because we’re looking at the Nth digit of the Nth number, and those sit on a kind of diagonal going down. And we’ve made our number Z so that the Nth digit of Z is different from the Nth digit of the Nth number.
Joel David Hamkins
(00:30:58)
But now it follows that Z is not on the list, because Z is different from R1: the first digit after the decimal point of Z is different from the first digit of R1 after the decimal point. That’s exactly how we built it. And the second digit of Z is different from the second digit of R2, and so on. The Nth digit of Z is different from the Nth digit of R sub N for every N. So therefore, Z is not equal to any of these numbers R sub N. But that’s a contradiction, because we had assumed that we had every real number on the list, and yet here is a real number Z that’s not on the list, okay? And so that’s the main contradiction.
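The diagonal construction can be sketched on a finite prefix of such a list (an illustrative Python example, not anything from the conversation; the data and names are mine):

```python
# Sketch: each row lists the digits of one real after the decimal point.
def diagonal_digit(d: int) -> int:
    # Pick a digit different from d, never using 0 or 9.
    return 5 if d != 5 else 4

rows = [
    [1, 2, 3, 4],
    [5, 5, 5, 5],
    [9, 0, 9, 0],
    [4, 4, 4, 4],
]
z = [diagonal_digit(rows[n][n]) for n in range(len(rows))]

for n, row in enumerate(rows):
    assert z[n] != row[n]               # z differs from row n at digit n
assert all(d not in (0, 9) for d in z)  # sidesteps 0.999... = 1.000...
```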
Lex Fridman
(00:31:43)
And so it’s a kind of proof by construction.
Joel David Hamkins
(00:31:45)
Exactly. So, given a list of numbers, Cantor is proving… It’s interesting that you say that, actually, because there’s a kind of philosophical controversy that occurs in connection with this observation, about whether Cantor’s construction is constructive or not. Given a list of numbers, Cantor gives us a specific means of constructing a real number that’s not on the list; that’s one way of thinking about it. There’s this one aspect, which I alluded to earlier: some real numbers have more than one decimal representation, and it causes a slight problem in the argument. For example, the number one, you can write it as 1.0000 forever, but you can also write it as 0.999 forever. Those are two different decimal representations of exactly the same number.
Lex Fridman
(00:32:38)
You beautifully got rid of the zeros and the nines. Therefore, we don’t need to even consider that, and the proof still works.
Joel David Hamkins
(00:32:44)
Exactly, because the only kind of case where that phenomenon occurs is when the number is eventually zero or eventually nine. Since our number Z never had any zeros or nines in it, it wasn’t one of those numbers. So actually, in those cases, we didn’t need to do anything special to diagonalize. The mere fact that our number has a unique representation already means that it’s not equal to those numbers. So maybe it was controversial in Cantor’s day, more than 100 years ago, but I think it’s most commonly looked at today as, you know, one of the initial main results in set theory, and it’s profound and amazing and insightful and the beginning point of so many later arguments.
Joel David Hamkins
(00:33:24)
And this diagonalization idea has proved to be an extremely fruitful proof method, and almost every major result in mathematical logic is using in an abstract way the idea of diagonalization. It was really the start of so many other observations that were made, including Russell’s paradox and the halting problem and the recursion theorem. So many other principles are using diagonalization at their core.
Lex Fridman
(00:33:56)
Can we just step back a little bit?
Joel David Hamkins
(00:33:58)
Sure.
Lex Fridman
(00:33:58)
This infinity crisis led to a kind of rebuilding of mathematics. So it’d be nice if you lay out the things it resulted in. One is set theory became the foundation of mathematics. All mathematics could now be built from sets, giving math its first truly rigorous foundation. The axiomatization of mathematics, the paradoxes forced mathematicians to develop ZFC and other axiomatic systems, and mathematical logic emerged. Gödel, Turing, and others created entirely new fields. So, can you explain what set theory is and how does it serve as a foundation of modern mathematics, and maybe even the foundation of truth?
Joel David Hamkins
(00:34:43)
That’s a great question. Set theory really has two roles that it’s serving. There are two ways that set theory emerges. On the one hand, set theory is its own subject of mathematics, with its own problems and questions and answers and proof methods. So really, from this point of view, set theory is about the transfinite recursive constructions or well-founded definitions and constructions. Those ideas have been enormously fruitful, and set theorists have looked into them and developed so many ideas coming out of that. But set theory has also happened to serve in this other foundational role. It’s very common to hear things said about set theory that really aren’t taking account of this distinction between the two roles that it’s serving.
Joel David Hamkins
(00:35:38)
It’s its own subject, but it’s also serving as a foundation of mathematics. So in its foundational role, set theory provides a way to think of a collection of things as one thing. That’s the central idea of set theory. A set is a collection of things, but you think of the set itself as one abstract thing. So when you form the set of real numbers, then that is a set. It’s one thing. It’s a set, and it has elements inside of it. So it’s sort of like a bag of objects. A set is kind of like a bag of objects. So we have a lot of different axioms that describe the nature of this idea of thinking of a collection of things as one thing itself, one abstract thing.
Lex Fridman
(00:36:20)
And axioms are, I guess, facts that we assume are true, based on which we then build the ideas of mathematics. So there’s a bunch of facts, axioms about sets that we can put together, and if they’re sufficiently powerful, we can then build on top of that a lot of really interesting mathematics.
Joel David Hamkins
(00:36:42)
Yeah, I think that’s right. So, the history of the current set theory axioms, known as the Zermelo-Fraenkel axioms, came out in the early 20th century with Zermelo’s idea. The history is quite fascinating because Zermelo in 1904 offered a proof that what’s called the axiom of choice implies the well-order principle. So he described his proof, and that was extremely controversial at the time. There was no theory, there weren’t any axioms there. Cantor was not working in an axiomatic framework. He didn’t have a list of axioms in the way that we have for set theory now, and Zermelo didn’t either. And his ideas were challenged so much with regard to the well-order theorem—
Joel David Hamkins
(00:37:29)
that he was pressed to produce the theory in which his argument could be formalized, and that was the origin of what’s known as Zermelo set theory.
Lex Fridman
(00:37:39)
And going to Perplexity, the axiom of choice is a fundamental principle in set theory which states that for any collection of non-empty sets, it is possible to select exactly one element from each set, even if no explicit rule to make the choices is given. This axiom allows the construction of a new set containing one element from each original set, even in cases where the collection is infinite or where there is no natural way to specify a selection rule. So this was controversial, and this was described before there’s even a language for axiomatic systems.
Joel David Hamkins
(00:38:14)
That’s right. So on the one hand, the axiom of choice is a principle so obvious that we want it to be true, that it is true. A lot of people take it as a law of logic. If you have a bunch of sets, then there’s a way of picking an element from each of them. There’s a function. If I have a bunch of sets, then there’s a function that when you apply it to any one of those sets, gives you an element of that set. It’s a completely natural principle. It’s called the axiom of choice, which is a way of anthropomorphizing the mathematical idea. It’s not like the function is choosing something. It’s just that if you were to make such choices, there would be a function that consisted of the choices that you made.
Joel David Hamkins
(00:39:01)
And the difficulty is that when you can’t specify a rule or a procedure by which you’re making choices, then it’s difficult to say what the function is that you’re asserting exists. You want to have the view that, well, there is a way of choosing. I don’t have an easy way to say what the function is, but there definitely is one. This is the way of thinking about the axiom of choice.
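The distinction Hamkins draws here, between a choice function you can write down by rule and one whose existence you can only assert, can be sketched in a few lines of Python. This is purely illustrative and not from the conversation: the function name and the example family of sets are invented, and finite code can always specify a rule, so it cannot exhibit the genuinely problematic infinite case that the axiom is for.

```python
# A "choice function" picks one element from each set in a family.
# When there's an explicit rule -- like "take the smallest element" --
# we can define the function outright and no axiom is needed. The axiom
# of choice only earns its keep for infinite families of indiscernible
# sets, where no such rule can be specified.

def choice_by_rule(family):
    """Return {set: chosen element} using the explicit rule 'pick the minimum'."""
    return {frozenset(s): min(s) for s in family}

family = [{3, 1, 4}, {1, 5}, {9, 2, 6}]
chooser = choice_by_rule(family)

# Every set is assigned one of its own elements, as a choice function requires.
for s in family:
    assert chooser[frozenset(s)] in s
```

This is the "left shoe" situation from Russell's analogy below: a describable rule makes the choice function explicit.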
Lex Fridman
(00:39:28)
So we’re going to be saying the three letters ZFC a lot in this conversation. You already mentioned—
Joel David Hamkins
(00:39:33)
Right
Lex Fridman
(00:39:33)
Zermelo-Fraenkel set theory, which gives us the Z and the F. And the C in that comes from this axiom of choice.
Joel David Hamkins
(00:39:41)
That’s right.
Lex Fridman
(00:39:42)
So ZFC sounds like a super technical thing, but it is the set of axioms that’s the foundation of modern mathematics.
Joel David Hamkins
(00:39:49)
Yeah, absolutely. So one should be aware also that there are huge parts of mathematics that pay attention to whether the axiom of choice is being used, and they don’t want to use the axiom of choice, so they work out the consequences that are possible without the axiom of choice or with weakened forms of Zermelo-Fraenkel set theory, and so on. And there’s quite a vibrant amount of work in that area. But going back to the axiom of choice for a bit, it’s maybe interesting to give Russell’s description of how to think about the axiom of choice. So Russell describes this rich person who has an infinite closet.
Joel David Hamkins
(00:40:31)
In that closet, he has infinitely many pairs of shoes, and he tells his butler, “Please go and give me one shoe from each pair.” And the butler can do this easily because for any pair of shoes, he can just always pick the left shoe. There’s a way of picking that we can describe. We always take the left one or always take the right one, or take the left one if it’s a red shoe and the right one if it’s a brown shoe, you know. We can invent rules that would result in these kinds of choice functions so we can describe explicit choice functions. For those cases, you don’t need the axiom of choice to know that there’s a choice function.
Joel David Hamkins
(00:41:14)
When you can describe a specific way of choosing, then you don’t need to appeal to the axiom to know that there’s a choice function. But the problematic case occurs when you think about the infinite collection of socks that the person has in their closet. And if we assume that socks are indistinguishable within each pair, you know, they match each other, but they’re indiscernible, then the butler wouldn’t have any kind of rule for which sock in each pair to pick. And so it’s not so clear that he has a way of producing one sock from each pair, right?
Joel David Hamkins
(00:41:56)
So that’s what’s at stake, is the question of whether you can specify a rule by which the choice function, you know, a rule that it obeys that defines the choice function, or whether there’s sort of this arbitrary choosing aspect to it. That’s when you need the axiom of choice to know that there is such a function. But of course, as a matter of mathematical ontology, we might find attractive the idea that, well, look, I mean, not every way of choosing the socks has to be defined by a rule. Why should everything that exists in mathematical reality follow a rule or a procedure of that sort? If I have the idea that my mathematical ontology is rich with objects, then I think that there are all kinds of functions and ways of choosing.
Joel David Hamkins
(00:42:45)
Those are all part of the mathematical reality that I want to be talking about, and so I don’t have any problem asserting the axiom of choice. Yes, there is a way of choosing, but I can’t necessarily tell you what it is. But in a mathematical argument, I can assume that I fix the choice function because I know that there is one. So it’s a… The philosophical difference between working when you have the axiom of choice and when you don’t is the question of this constructive nature of the argument. So if you make an argument and you appeal to the axiom of choice, then maybe you’re admitting that the objects that you’re producing in the proof are not going to be constructive. You’re not going to be able to necessarily say specific things about them.
Joel David Hamkins
(00:43:30)
But if you’re just claiming to make an existence claim, that’s totally fine. Whereas if you have a constructive attitude about the nature of mathematics, and you think that mathematical claims maybe are only warranted when you can provide an explicit procedure for producing the mathematical objects that you’re dealing with, then you’re probably going to want to deny the axiom of choice and maybe much more.
Lex Fridman
(00:43:51)
Can we maybe speak to the axioms that underlie ZFC? So ZFC, or Zermelo-Fraenkel set theory with the axiom of choice, as we mentioned, is the standard foundation for most modern mathematics. It consists of the following main axioms: axiom of extensionality, axiom of empty set, axiom of pairing, axiom of union, axiom of power set, axiom of infinity, axiom of separation, axiom of replacement, axiom of regularity, and axiom of choice. Some of these are quite basic, but it would be nice to give people a sense…
Joel David Hamkins
(00:44:28)
Sure
Lex Fridman
(00:44:28)
of what it means to be an axiom. Like, what kind of basic facts we can lay on the table on which we can build some beautiful mathematics.
Joel David Hamkins
(00:44:37)
Yeah, so the history of it is really quite fascinating. So, Zermelo introduced most of these axioms, as part of what’s now called Zermelo set theory, to formalize his proof from the axiom of choice to the well-order principle, which was an extremely controversial result. So in 1904, he gave the proof without the theory, and then he was challenged to provide the theory. And so in 1908, he produced the Zermelo set theory and gave the proof that in that theory, you can prove that every set admits a well ordering. And so the axioms on the list, these things like extensionality, express the most fundamental principles of the understanding of sets that he wanted to be talking about. So for example, extensionality says if two sets have the same members, then they’re equal.
Joel David Hamkins
(00:45:26)
So it’s this idea that the sets consist of the collection of their members, and that’s it. There’s nothing else that’s going on in the set. So it’s just if two sets have the same members, then they are the same set. So it’s maybe the most primitive axiom in some respect.
Lex Fridman
(00:45:44)
Well, there’s also, just to give a flavor, there exists a set with no elements called the empty set. For any two sets, there’s a set that contains exactly those two sets as elements. For any set, there’s a set that contains exactly the elements of the elements of that set, so the union set. And then there’s the power set. For any set, there’s a set whose elements are exactly the subsets of the original set, the power set. And the axiom of infinity, there exists an infinite set, typically a set that contains the empty set and is closed under the operation of adding one more element. Back to our hotel example.
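The set-building operations named in these axioms can be sketched concretely for small finite cases. This is an illustrative sketch, not anything discussed in the episode: the axioms assert that such sets exist, while the code below merely constructs them for finite inputs, modeling sets as Python frozensets.

```python
from itertools import combinations

empty = frozenset()                      # axiom of empty set: a set with no elements

def pair(a, b):                          # axiom of pairing: {a, b}
    return frozenset({a, b})

def union(s):                            # axiom of union: the elements of the elements
    return frozenset(x for member in s for x in member)

def power_set(s):                        # axiom of power set: all subsets of s
    return frozenset(frozenset(c) for r in range(len(s) + 1)
                     for c in combinations(s, r))

# The first few von Neumann natural numbers, built from nothing but these
# operations: 0 = {}, 1 = {0}, 2 = {0, 1}.
zero = empty
one = frozenset({zero})
two = frozenset({zero, one})
```

With these definitions, `power_set(two)` has four elements (the four subsets of a two-element set), and `union(pair(one, two))` collapses back to `two`. The axiom of infinity is the one step this finite sketch cannot take: it asserts a set containing all of these stages at once.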
Joel David Hamkins
(00:46:22)
That’s right.
Lex Fridman
(00:46:23)
And there’s more, but it’s kind of fascinating to put yourself in the mindset of people at the beginning of this, of trying to formalize set theory. It’s fascinating that humans can do that.
Joel David Hamkins
(00:46:37)
I read some historical accounts by historians about that time period, specifically about Zermelo’s axioms and his proof of the well-order theorem. And the historians were saying, never before in the history of mathematics has a mathematical theorem been argued about so publicly and so vociferously as that theorem of Zermelo’s. And it’s fascinating also because the axiom of choice was widely regarded as a kind of, you know, basic principle at first, but then people were very suspicious of the well-order theorem because no one could imagine a well ordering, say, of the real numbers. And so this was a case when Zermelo seemed to be, from principles that seemed quite reasonable, proving this obvious untruth. And so people, mathematicians, were objecting.
Joel David Hamkins
(00:47:30)
But then Zermelo and others actually looked into the mathematical papers and so on of some of the people who had been objecting so vociferously, and found, in many cases, that they were implicitly using the axiom of choice in their own arguments, even though they would argue publicly against it. Because it’s so natural to use it, because it’s such an obvious principle in a way. I mean, it’s easy to just use it by accident if you’re not critical enough and you don’t even realize that you’re using the axiom of choice. That’s true now, even. People like to pay attention to when the axiom of choice is used or not used in mathematical arguments, up until this day. It used to be more important.
Joel David Hamkins
(00:48:13)
In the early 20th century it was very important because people didn’t know if it was a consistent theory or not, and there were these antinomies arising, and so there was a worry about consistency of the axioms. But then, of course, eventually, with the result of Gödel and Cohen and so on, this consistency question specifically about the axiom of choice sort of falls away. We know that the axiom of choice itself will never be the source of inconsistency in set theory. If there’s inconsistency with the axiom of choice, then it’s already inconsistent without the axiom of choice. So it’s not the cause of inconsistency. And so in that…
Joel David Hamkins
(00:48:49)
from that point of view, the need to pay attention to whether you’re using it or not from a consistency point of view is somehow less important. But still, there’s this reason to pay attention to it on the grounds of these constructivist ideas that I had mentioned earlier.
Lex Fridman
(00:49:05)
And we should say, in set theory, consistency means that it is impossible to derive a contradiction from the axioms of the theory. So it means that there are no contradictions. That’s a…
Joel David Hamkins
(00:49:15)
That’s right
Lex Fridman
(00:49:15)
a consistent axiomatic system is that there are no contradictions.
Joel David Hamkins
(00:49:19)
A consistent theory is one for which you cannot prove a contradiction from that theory.

Russell’s paradox

Lex Fridman
(00:49:23)
Maybe a quick pause, a quick break, quick bathroom break. You mentioned to me offline we were talking about Russell’s paradox and that there’s another kind of anthropomorphizable proof of uncountability. I was wondering if you can lay that out.
Joel David Hamkins
(00:49:41)
Oh yeah, sure. Absolutely.
Lex Fridman
(00:49:42)
Both Russell’s paradox and the proof.
Joel David Hamkins
(00:49:44)
Right. So we talked about Cantor’s proof that the real numbers, the set of real numbers is an uncountable infinity, it’s a strictly larger infinity than the natural numbers. But Cantor actually proved a much more general fact, namely that for any set whatsoever, the power set of that set is a strictly larger set. So the power set is the set containing all the subsets of the original set. So if you have a set and you look at the collection of all of its subsets, then Cantor proved that this is a bigger set. They’re not equinumerous. Of course, there’s always at least as many subsets as elements because for any element, you can make the singleton subset that has only that guy as a member, right? So there’s always at least as many subsets as elements.
Joel David Hamkins
(00:50:36)
But the question is whether it’s strictly more or not. And so Cantor reasoned like this. It’s very simple. It distills the abstract diagonalization idea without being encumbered by the complexity of the real numbers. So we have a set X and we’re looking at all of its subsets. That’s the power set of X. Suppose that X and the power set of X have the same size, suppose towards contradiction, they have the same size. So that means we can associate to every individual of X a subset. And so now let me define a new set. I mean, another set, I’m going to define it. Let’s call it D. And D is the subset of X that contains all the individuals that are not in their set.
Joel David Hamkins
(00:51:28)
Every individual was associated with a subset of X, and I’m looking at the individuals that are not in their set. Maybe nobody’s like that. Maybe there’s no element of X that’s like that, or maybe they’re all like that, or maybe some of them are and some of them aren’t. It doesn’t really matter for the argument. I defined a subset D consisting of the individuals that are not in the set that’s attached to them, but that’s a perfectly good subset. And so because of the equinumerosity, it would have to be attached to a particular individual, you know? And let’s call that person, it should be a name starting with D, so Diana.
Joel David Hamkins
(00:52:10)
And now we ask, is Diana an element of D or not? But if Diana is an element of D, then she is in her set. So she shouldn’t be because the set D was the set of individuals that are not in their set. So if Diana is in D, then she shouldn’t be. But if she isn’t in D, then she wouldn’t be in her set. And so she should be in D. That’s a contradiction. So therefore, the number of subsets is always greater than the number of elements for any set. And the anthropomorphizing idea is the following. I’d like to talk about it this way. For any collection of people, you can form more committees from them than there are people, even if you have infinitely many people.
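Cantor's diagonal construction can be checked exhaustively for a small finite set. The sketch below is illustrative only (the names are invented, and finite code cannot prove the infinite case), but the construction of D is exactly the one just described: for every possible assignment of a subset to each element, the diagonal set escapes the assignment.

```python
from itertools import combinations, product

def subsets(X):
    """All subsets of X, as frozensets."""
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def diagonal(X, f):
    """Cantor's D: the elements that are not in the subset assigned to them."""
    return frozenset(x for x in X if x not in f[x])

X = [0, 1, 2]
P = subsets(X)                        # all 8 subsets of a 3-element set

# Exhaustively try every possible map element -> subset (8^3 = 512 of them)
# and count how many times the diagonal set is missed by the map's range.
escaped = 0
for assignment in product(P, repeat=len(X)):
    f = dict(zip(X, assignment))
    D = diagonal(X, f)
    if all(D != f[x] for x in X):     # D differs from f[x] at x itself
        escaped += 1

print(escaped, len(P) ** len(X))
```

The count comes out equal to the total number of assignments: no map from a set onto its power set exists, which is the finite shadow of Cantor's theorem.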
Joel David Hamkins
(00:53:03)
Suppose you have an infinite set of people, and what’s a committee? Well, a committee is just a list of who’s on the committee basically, the members of the committee. So there’s all the two-person committees and there’s all the one-person committees and there’s the universal, the worst committee, the one that everyone is on. Okay. The best committee is the empty committee. With no members and never meets and so on. Or is the empty committee meeting all the time? I’m not sure.
Lex Fridman
(00:53:29)
Yeah. That’s… wow, that’s a profound question. And does a committee with just one member meet also?
Joel David Hamkins
(00:53:35)
Yeah. Maybe it’s always in session. I don’t know. So the claim is that there are more committees than people. Okay. Suppose not. Well, then we could make an association between the people and the committees. So we would have a kind of… every committee could be named after a person in a one-to-one way. And I’m not saying that the person is on the committee that’s named after them or not on it, whatever. Maybe sometimes that happens, sometimes it doesn’t. I don’t know. It doesn’t matter. But let’s form what I call committee D, which consists of all the people that are not on the committee that’s named after them.
Joel David Hamkins
(00:54:16)
Okay. Maybe that’s everyone, maybe it’s no one, maybe it’s half the people. It doesn’t matter. That’s a committee, it’s a set of people. And so it has to be named after someone. Let’s call that person Daniella. So now we ask, is Daniella on the committee that’s named after her? Well, if she is, then she shouldn’t be because it was the committee of people who aren’t on their own committee. And if she isn’t, then she should be. So again, it’s a contradiction. So when I was teaching at Oxford, one of my students came up with the following different anthropomorphization of Cantor’s argument. Let’s consider all possible fruit salads. We have a given collection of fruits.
Joel David Hamkins
(00:55:07)
You know, apples and oranges and grapes, whatever. And a fruit salad consists of some collection of those fruits. So there’s the banana, pear, grape salad and so on. There’s a lot of different kinds of salad. Every set of fruits makes a salad, a fruit salad. Okay… And we want to prove that for any collection of fruits, even if there are infinitely many different kinds of fruit, for any collection of fruits, there are more possible fruit salads than there are fruits. So if not, then you can put a one-to-one correspondence between the fruits and the fruit salads, so you could name every fruit salad after a fruit. That fruit might not be in that salad, it doesn’t matter. We’re just… it’s a naming, a one-to-one correspondence.
Joel David Hamkins
(00:55:53)
And then, of course, we form the diagonal salad, which consists of all the fruits that are not in the salad that’s named after them. And that’s a perfectly good salad. It might be a kind of diet salad, if it was the empty salad, or it might be the universal salad…
Joel David Hamkins
(00:56:12)
which had all fruits in it, if all the fruits were in it. Or it might have just some and not all. So that diagonal salad would have to be named after some fruit. So let’s suppose it’s named after durian, meaning that it was associated with durian in the one-to-one correspondence. And then we ask, well, is durian in the salad that it’s named after? And if it is, then it shouldn’t be. And if it isn’t, then it should be. And so it’s, again, the same contradiction. So all of those arguments are just the same as Cantor’s proof that the power set of any set is bigger than the set.
Joel David Hamkins
(00:56:48)
And this is exactly the same logic that comes up in Russell’s paradox, because Russell is arguing that the class of all sets can’t be a set because if it were, then we could form the set of all sets that are not elements of themselves. So basically, what Russell is proving is that there are more collections of sets than elements. Because we can form the diagonal class, you know, the class of all sets that are not elements of themselves. If that were a set, then it would be an element of itself if and only if it was not an element of itself. It’s exactly the same logic in all four of those arguments. So there can’t be a class of all sets, because if there were, then there would have to be a class of all sets that aren’t elements of themselves.
Joel David Hamkins
(00:57:40)
But that set would be an element of itself if and only if it’s not an element of itself, which is a contradiction. So this is the essence of the Russell paradox. I don’t call it the Russell paradox. Actually, when I teach it, I call it Russell’s theorem. There’s no universal set. And it’s not really confusing anymore. At the time, it was very confusing, but now we’ve absorbed this nature of set theory into our fundamental understanding of how sets are, and it’s not confusing anymore. I mean, the history is fascinating though, about the Russell paradox, because before that time, Frege was working on his monumental undertaking, implementing the philosophy of logicism, which is the attempt to reduce all of mathematics to logic.
Joel David Hamkins
(00:58:30)
So Frege wanted to give an account of all of mathematics in terms of logical notions, and he was writing this monumental work and had formulated his basic principles. And those principles happened to imply that for any property whatsoever, you could form the set of objects with that property. This is known as the general comprehension principle. And he was appealing to the principles that support that axiom throughout his work. I mean, it was really… It wasn’t just an incidental thing, he was really using this principle.
Joel David Hamkins
(00:59:11)
And Russell wrote him a letter when he observed the work in progress, that there was this problem, because if you accept the principle that for any property whatsoever you can make a set of objects with that property, then you could form the set of all sets that are not members of themselves. That’s just an instance of the general comprehension principle. And… But the set of all sets that aren’t elements of themselves can’t be a set, because if it were, then it would be an element of itself if and only if it’s not a member of itself, and that’s a contradiction. And so Russell wrote this letter to Frege, and it was just at the moment when Frege was finishing his work. It was already at the publishers and, you know, in press basically. But it’s completely devastating.
Joel David Hamkins
(00:59:58)
I mean, it must have been such a horrible situation for Frege to be placed in, because he’s finished this monumental work, you know, years of his life dedicated to this, and Russell finds this basically one-line proof of a contradiction in the fundamental principles of the thesis that completely destroys the whole system. And Frege had put in the appendix of his work a response to Russell’s letter in which he explained what happened, and he wrote very gracefully, “Hardly anything more unwelcome can befall a scientific writer than to have one of the foundations of his edifice shaken after the work is finished. This is the position into which I was put by a letter from Mr.
Joel David Hamkins
(01:00:46)
Bertrand Russell as the printing of this volume was nearing completion.” And then he goes on to explain the matter, it concerns his basic law five and so on.
Lex Fridman
(01:00:54)
It’s heartbreaking. I mean, there’s nothing more traumatic to a person who dreams of constructing mathematics all from logic than to get a very clean, simple contradiction. I mean, that’s just…
Joel David Hamkins
(01:01:08)
You devote your life to… This work, and then it’s shown to be contradictory, and that must have been heartbreaking.
Lex Fridman
(01:01:16)
What do you think about the Frege project, the philosophy of logicism, the dream of the power of logic… to construct a mathematical universe?
Joel David Hamkins
(01:01:24)
So, of course, the project of logicism did not die with Frege, and it was continued, and, you know, there’s a whole movement, the neologicists and so on, in contemporary times even. But my view of the matter is that really, we should view the main goals of logicism as basically completely fulfilled in the rise of set-theoretic foundationalism. I mean, when you view ZFC as the foundation of mathematics, and in my view, the principles of ZFC are fundamentally logical in character, including the axiom of choice, as I mentioned, as a principle of logic. This is a highly disputed point of view, though, because a lot of people take even the axiom of infinity as mathematical, inherently mathematical and not logical and so on.
Joel David Hamkins
(01:02:14)
But I think if you adopt the view that the principles of ZFC have to do with the principles of abstract, you know, set formation, which is fundamentally logical in character, then it’s complete success for logicism. So the fact that set theory is able to serve as a foundation means that mathematics can be founded on logic.

Gödel’s incompleteness theorems

Lex Fridman
(01:02:35)
I think this is a good moment to talk about Gödel’s incompleteness theorems. So, can you explain them and what do they teach us about the nature of mathematical truth?
Joel David Hamkins
(01:02:47)
Absolutely. It’s one of the most profound developments in mathematical logic. I mean, the incompleteness theorems are when mathematical logic, in my view, first became sophisticated. It’s a kind of birth of the subject of mathematical logic. But to understand the theorems, you really have to start a little bit earlier with Hilbert’s program because at that time, you know, with the Russell Paradox and so on, there were these various contradictions popping up in various parts of set theory and the Burali-Forti paradox and so on. And Hilbert was famously supportive of set theory.
Joel David Hamkins
(01:03:25)
I mean, there’s this quote of him saying, “No one shall cast us from the paradise that Cantor has created for us.” And what I take him to mean by that is he was so captured by the idea of using set theory as a foundation of mathematics, and it was so powerful and convenient and unifying in a way that was extremely important. And he didn’t want to give that up, despite the danger of these paradoxes, these contradictions, basically, is how some people viewed them. And so, it’s…
Lex Fridman
(01:03:58)
This minefield of paradoxes. Yeah.
Joel David Hamkins
(01:04:00)
Right. A minefield. That’s a really good way of describing the situation. And so Hilbert said, “Well, look, we have to fix this problem, you know. We want to use the set theory foundations, but we want to do it in a way that is trustworthy and reliable. We can’t allow that the foundations of mathematics are in question, you know.” This is a kind of attitude, I think, that underlies Hilbert and the Hilbert program. And so he proposed, “Look, we’re going to have this strong theory, this set theory that we want to be proving our theorems in. But on the one hand, we want it to be as strong as possible.
Joel David Hamkins
(01:04:40)
We would like it to answer all the questions.” There’s another famous quote of Hilbert in his retirement address where he proclaims, “Wir müssen wissen, wir werden wissen,” so, “We must know, we will know,” in which he’s very optimistic about the ability of mathematics to answer all of the questions of mathematics that we have posed. We have all these problems we want to solve, and he is saying, “We’re going to do it. We’re going to solve all these problems.” So we want to propose this strong theory, and one has the sense that he had in mind set theory in which all the questions are going to be answered. Okay?
Joel David Hamkins
(01:05:21)
But secondly, we want to combine that with a very weak arithmetic, purely finitistic theory; we want to prove that the reasoning process of the strong theory is safe. Okay? So in order to make sense of that point of view, you basically have to invent the philosophy of formalism where we can look at what a proof is, what is the nature of mathematical reasoning. And on Hilbert’s way of thinking about this, a proof is basically itself a finitistic kind of object. It’s a sequence of… If you think about the nature of what a proof is, it’s a sequence of assertions which can be viewed as sort of sequences of symbols that conform with certain rules of logical reasoning. And this is a formalist way of understanding the nature of proof.
Joel David Hamkins
(01:06:16)
So we think about a proof in a kind of syntactic, formal way. Even though the contents of those statements might be referring to infinite uncountable objects, the statements themselves are not infinite uncountable objects. The statements themselves are just finite sequences of symbols.
Lex Fridman
(01:06:33)
So when you kind of think of proof as… Maybe it’s fair to say almost, like, outside of math? It’s, like, tools operating on math. And then for Hilbert, he thought proof is inside the axiomatic system. Something like this.
Joel David Hamkins
(01:06:45)
Yeah, that’s helpful.
Lex Fridman
(01:06:46)
That’s wild.
Joel David Hamkins
(01:06:48)
The main thing about formalism is that you think of the process of doing mathematics. You divorce it from the meaning of the mathematical assertions, right? So the meaning of the mathematical assertions that you make in this infinitary theory has to do with these huge uncountable infinities and so on, possibly. And that’s a very sort of uncertain realm, maybe, and the source of the paradoxes and so on in some people’s minds. But the reasoning process itself consists of writing down sequences of symbols on your page and, you know, undertaking an argument with them which is following these finitary rules. And so, if we divorce the meaning of the symbols from just the process of manipulating the symbols, it’s a way of looking at the nature of mathematics as a kind of formal game in which…
Joel David Hamkins
(01:07:43)
the meaning may be totally absent. I don’t think it’s necessarily part of the formalist view that there is no meaning behind, but rather it’s emphasizing that we can divorce the meaning of the sentences from the process of manipulating those sentences. And then Hilbert wanted to prove in this purely finitary theory that if we follow the rules of that game, we’re never going to get a contradiction. So those were the two aims of the Hilbert program: to found the strong infinitary theory, probably set theory, which is going to answer all the questions. And then secondly, prove in the finitary theory that the strong theory is safe. In other words, consistent, yeah?
Lex Fridman
(01:08:31)
What does the word “finitary” in finitary theory mean?
Joel David Hamkins
(01:08:34)
Yeah. Well, this is, of course, philosophically contentious, and people have different ideas about what exactly it should mean. And so there’s hundreds of papers on exactly that question. But I like to take it just kind of informally. I mean, it means that we’re talking about finite sequences of symbols, and we’re going to have a theory, you know, finite strings of symbols. And a finitary theory would be one whose subject matter is about those kinds of things so that we can conceivably argue about the nature of these finite strings.
Joel David Hamkins
(01:09:05)
So a proof is just a finite sequence of statements, so that every statement is either one of the axioms or follows by the laws of logic from the earlier statements in some specified manner, like using modus ponens or some other law of logic like that. And such that the last line on the list is, you know, the theorem that you’re proving. So that’s what a proof is in this kind of way of thinking.
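This formalist picture of proof, a finite list of statements where each line is an axiom or follows from earlier lines, is concrete enough to implement as a toy. The sketch below is an invented illustration, not any system discussed in the episode: statements are plain strings, implications are tuples, and modus ponens is the only inference rule.

```python
# Toy proof checker for the formalist picture: a proof is valid when every
# line is either an axiom or follows by modus ponens (from A and "A -> B",
# conclude B) from lines that appear strictly earlier.

def is_valid_proof(axioms, lines):
    so_far = []
    for stmt in lines:
        by_axiom = stmt in axioms
        by_modus_ponens = any(imp == ("->", a, stmt)
                              for imp in so_far for a in so_far)
        if not (by_axiom or by_modus_ponens):
            return False
        so_far.append(stmt)
    return True

axioms = {"P", ("->", "P", "Q"), ("->", "Q", "R")}
good = ["P", ("->", "P", "Q"), "Q", ("->", "Q", "R"), "R"]   # proves R
bad = ["P", "R"]                                             # R is unjustified

print(is_valid_proof(axioms, good), is_valid_proof(axioms, bad))
```

The point of the formalist move is visible in the code: the checker never asks what "P" or "Q" mean, only whether the finite sequence of symbols conforms to the rules.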
Joel David Hamkins
(01:09:29)
To take a specific example, I mean, perhaps the most natural finitary theory that one would be called upon to exhibit would be Peano arithmetic, the theory of Peano arithmetic, which is a first order theory of the nature of arithmetic. But okay, so some people say, “Well, Peano arithmetic has these strong first order induction axioms, and there’s much, much weaker versions of arithmetic, like I-sigma-naught or I-sigma-1 and so on, which are even more finitary than Peano arithmetic.” So different philosophical positions take different attitudes about what does it take to be finitary? How finitary do you have to be to be truly finitary?
Lex Fridman
(01:10:10)
So according to Perplexity, Peano arithmetic is a foundational system for formalizing the properties and operations of natural numbers using a set of axioms called the Peano axioms. Peano arithmetic provides a formal language and axioms for arithmetic operations, such as addition and multiplication over the natural numbers. The axioms define the existence of a first natural number, usually zero or one; the concept of successor function, which generates the next natural number; rules for addition and multiplication built from these concepts; the principle of induction allowing proofs about all natural numbers. And it goes on. So it’s a very particular kind of arithmetic that is finitary.
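The way Peano arithmetic builds addition and multiplication out of nothing but zero and the successor function can be shown in a short sketch. This is an illustrative rendering of the standard recursion equations, not code from the episode; Python's built-in integers stand in for the Peano numerals.

```python
# Peano arithmetic defines the operations by recursion on the successor
# function S(n) = n + 1, rather than assuming them:
#   a + 0 = a        a + S(b) = S(a + b)
#   a * 0 = 0        a * S(b) = (a * b) + a

def S(n):
    return n + 1

def add(a, b):
    return a if b == 0 else S(add(a, b - 1))

def mul(a, b):
    return 0 if b == 0 else add(mul(a, b - 1), a)

print(add(3, 4), mul(3, 4))
```

The induction axioms then license reasoning about these recursively defined operations for all natural numbers at once, which is the part a finite computation cannot supply.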
Joel David Hamkins
(01:10:51)
You know, I view it as finitary, but this is a contentious view. Not everyone agrees with that. That’s what I was trying to hint at.
Lex Fridman
(01:10:58)
Okay. I got it. All right.
Joel David Hamkins
(01:10:58)
Peano arithmetic is one of the hugely successful theories of the natural numbers and elementary number theory. Essentially, all of classical number theory, whatever kind of theorems you want to be proving about the prime numbers or factorization or any kind of finitary reasoning about finite combinatorial objects, all of it can be formalized in Peano arithmetic. That’s the basic situation. Of course, one has to qualify those statements in light of Gödel’s incompleteness theorem, but for the most part, the classical number theoretic analysis of the finite numbers is almost entirely developable inside Peano arithmetic.
Joel David Hamkins
(01:11:45)
So if we go back to the Hilbert program, Hilbert has these two goals: produce a strong theory which is going to answer all the questions, and then prove by purely finitary means that that theory will never lead into contradiction. And one can think about, well, the incompleteness theorem should be viewed as a decisive refutation of the Hilbert program. It defeats both of those goals decisively, completely. But before explaining that, maybe one should think about, you know, what if Hilbert had been right? What would be the nature of mathematics in the world that Hilbert is telling us to search for?
Lex Fridman
(01:12:24)
And if I may, going to Perplexity’s definition of Hilbert’s program, it was David Hilbert’s early 20th-century project to give all of classical mathematics a completely secure finitary foundation. In essence, the goal was to formalize all of mathematics in precise axiomatic systems and then prove using only very elementary finitary reasoning about symbols that these systems are free of contradiction.
Joel David Hamkins
(01:12:51)
Right, exactly right. Let's imagine what it would be like if he had been right. So we would have this finitary theory, and it would prove that the strong theory was free of contradiction. So we could start enumerating proofs from the strong theory. I mean, right now, we can write a computer program that would systematically generate all possible proofs from a given theory. And so we could have this theorem enumeration machine that just spits out theorems all day long in such a manner that every single theorem would eventually be produced by this device. And so if you had a mathematical question of any kind, you could answer it by just waiting for either the answer to come out yes or the answer to come out no.
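The "theorem enumeration machine" can be sketched in miniature. This toy uses plain strings as formulas and modus ponens as the only rule; the tiny theory and the string encoding are illustrative assumptions of mine, but the key property is the real one: every consequence is eventually emitted, by rote, with no creativity involved.

```python
from collections import deque
from itertools import islice

def enumerate_theorems(axioms):
    """Emit every consequence of `axioms` under modus ponens, eventually.

    Formulas are strings over atomic letters; 'A->B' is an implication.
    Like the machine in the conversation, this just turns the crank:
    it systematically generates theorems in some order.
    """
    proved = set()
    queue = deque(axioms)
    while queue:
        phi = queue.popleft()
        if phi in proved:
            continue
        proved.add(phi)
        yield phi
        # modus ponens: from P and P->Q both proved, derive Q
        for psi in list(proved):
            for implication, fact in ((phi, psi), (psi, phi)):
                if "->" in implication:
                    antecedent, consequent = implication.split("->", 1)
                    if antecedent == fact:
                        queue.append(consequent)

# Turning the crank on a tiny theory: 'C' comes out without anyone
# "understanding" why, purely by enumeration.
theorems = list(islice(enumerate_theorems(["A", "A->B", "B->C"]), 5))
```

The breadth-first queue is what guarantees fairness: no theorem is postponed forever, which is exactly the property the thought experiment needs.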
Joel David Hamkins
(01:13:51)
So the nature of mathematical investigation in Hilbert’s world is one of just turning the crank of the theorem enumeration machine. Devoid of creative thinking or imagination, it’s just getting the answer by rote procedure. So Hilbert, in effect, is telling us, with his program, that the fundamental nature of mathematics is rote computation. I mean, the way I think about the Hilbert program seems extremely attractive in the historical context of being worried about the antinomies, the inconsistencies, and so how can we kind of block them. And so-
Joel David Hamkins
(01:14:31)
It seems natural, first of all, to have a strong theory that's going to answer all the questions, because the pervasive logical independence that we now know exists just wasn't known then. They had never seen anything like that happen, and so it was natural to think that it wouldn't happen, and also that they would be able to guard against this inconsistency. So the goals of the Hilbert program seem quite natural in that historical context. But, you know, when you think a little more about what its nature would be like, it shows you this kind of rote procedure.
Joel David Hamkins
(01:15:09)
And now you're saying, well, that doesn't seem so unlikely, maybe, in light of increasing computer power and so on; it's actually maybe turning into our everyday experience, where the machines are calculating more and more for us, in a way that could be alarming. Okay, but to talk about the alternative to the Hilbert point of view: I mean, if he's wrong, then what is the nature of mathematical reality? Well, for the first goal, it would mean that we couldn't ever write down a theory that answered all the questions. So we would always be in a situation where our best theories, even the infinitary theories, would have questions that they stumble on and are unable to answer. Independence would occur.
Joel David Hamkins
(01:16:02)
But then also, because of the failure of the second goal, we would have to be constantly worrying about whether our theories were consistent or not, and we wouldn't have any truly convincing means of saying that they were free from contradiction. And Gödel's incompleteness theorem shows that that is exactly the nature of mathematical reality, actually. Those are the two incompleteness theorems. The first incompleteness theorem says you cannot write down a computably axiomatizable theory that answers all the questions; every such theory will be incomplete, assuming it includes a certain amount of arithmetic. And second, no such theory can ever prove its own consistency.
Joel David Hamkins
(01:16:47)
So not only is it the case that the finitary theory can’t prove the consistency of the strong infinitary theory, but even the infinitary theory can’t prove its own consistency, right? That’s the second incompleteness theorem. And so it’s, in that sense, a decisive takedown of the Hilbert program, which is really quite remarkable, the extent to which his theorem just really answered that whole puzzle. It’s quite amazing. There’s another aspect, kind of easy to think about. I mean, if you’re wondering about theories that prove their own consistency, then would you trust a theory that proves of itself that it’s consistent? I mean, that’s like…
Joel David Hamkins
(01:17:35)
It's like the used car salesman telling you, "Oh, I'm trustworthy." I mean, that's not a reason to trust the used car salesman, is it? Just because he says so. Similarly, if you have a theory that proves its own consistency, well, even an inconsistent theory would prove its own consistency. And so a theory proving itself consistent doesn't give us a logical reason to believe in its consistency.
Lex Fridman
(01:17:59)
Just for clarification, you used the word theory. Is it, in this context, synonymous with axiomatic system?
Joel David Hamkins
(01:18:07)
Right. So in mathematical logic, "theory" is a technical term: it means any set of sentences in a formal language. And so if you say "axiomatic system," it's basically synonymous with theory in my usage. So a theory means, you know, the consequences of a set of axioms, or... people are sometimes unclear on whether they mean just the axioms or the consequences of the axioms, but...
Lex Fridman
(01:18:29)
So theory includes both the axioms and the consequences of the axioms, and you use it interchangeably and the context is supposed to help you figure out which of the two you’re talking about? The axioms or the consequences? Or maybe to you, they’re basically the same?
Joel David Hamkins
(01:18:44)
Yeah, well, they're so closely connected, although, you know, not all the features are the same. So if you have a computable list of axioms for a theory, then you can start enumerating the consequences of the axioms, but you won't be able to computably decide whether a given statement is a consequence or not. You can enumerate the consequences, so you can semi-decide them, but you won't be able to decide yes or no whether a given statement is a consequence. So it's the distinction between a problem being computably decidable and a problem being computably enumerable, which was made clear by the work of Turing and others. I mean, it…
Joel David Hamkins
(01:19:30)
So that’s one difference between the list of axioms of the theory and the theory itself. The axioms could be… You can decide, maybe computably, whether something is an axiom or not, but that doesn’t mean that you can decide computably whether or not something is a theorem or not. Usually, you only get to decide the positive instances. If something is a theorem, you will eventually come to recognize that, but if something isn’t a theorem, maybe at no point will you be able to say, “No, that’s not a theorem.”
Lex Fridman
(01:19:56)
And that's of course connected to the halting problem… and all of these contradictions and paradoxes are all nicely, beautifully interconnected.
Joel David Hamkins
(01:20:04)
That’s right. Absolutely.

Truth vs proof

Lex Fridman
(01:20:06)
So, can we just linger on Gödel’s incompleteness theorem?
Joel David Hamkins
(01:20:09)
Sure.
Lex Fridman
(01:20:09)
You mentioned the two components there. You know, there are so many questions to ask. Like, what is the difference between provability and truth? What is true and what is provable? Maybe that’s a good line to draw.
Joel David Hamkins
(01:20:21)
Yeah, this is a really core distinction, and it's fascinating to me to go back and read even the early 20th-century people before Gödel and Tarski; they were totally sloppy about this distinction between truth and proof. It wasn't clear at all until Gödel, basically. Even as late as Bourbaki there is a kind of confusion in the foundational work, the standard graduate-level textbooks used in France: in the presentation of logic, they conflate truth and proof. To be true, for them, means to be provable. So in the early days, maybe it wasn't clear enough that the concept of truth needed a mathematical investigation or analysis. Maybe it was already taken to be fully clear.
Joel David Hamkins
(01:21:19)
But because of the incompleteness theorem, we realized that actually there are quite subtle things happening, right? And so why don’t we talk about this distinction a bit? To me, it’s absolutely core and fundamental to our understanding of mathematical logic now.
Joel David Hamkins
(01:21:34)
…this distinction between truth and proof. So truth is on the semantic side of the syntax-semantics dichotomy. Truth has to do with the nature of reality. I mean, okay, when I talk about reality, I’m not talking about physical reality. I’m talking about mathematical reality. So we have a concept of something being true in a structure, a statement being true in a mathematical structure. Like maybe you have the real field or something, and you want to know, does it satisfy this statement or that statement? Or you have a group of some kind, or maybe you have a graph. This is a particular kind of mathematical structure that has a bunch of vertices and edges, and you want to know, does this graph satisfy that statement?
Joel David Hamkins
(01:22:19)
And Tarski gave this absolutely wonderful account of the nature of truth in what's now known as the disquotational theory of truth. And what Tarski says is: the sentence, quote, "Snow is white," unquote, is true if and only if snow is white. And what he means by that is... look, truth is a property of an assertion, so we can think of the assertion syntactically. The sentence is true if and only if the content of the sentence is the case, you know? So to say the sentence "Snow is white," in quotations, is true just means that snow is white, and that's why it's called the disquotational theory, because we remove the quotation marks.
Joel David Hamkins
(01:23:16)
…from the assertion, right? And you can use this idea of disquotation to give a formal definition of truth in a mathematical structure, for statements in a formal language. So for example, if I have a formal language that allows me to make atomic statements about the objects and relations of the structure, and I can build up a formal language with, you know, the logical connectives and, or, implies, not, and so on, and maybe I have quantifiers, right? Then, for example, to say that the structure satisfies phi and psi, thinking of that as one statement, just means that it satisfies phi and it satisfies psi. And if you notice what happened there, I…
Joel David Hamkins
(01:24:06)
At first, the “and” was part of the sentence inside the sentence, but then in the second part, I was using the word “and” to refer to the conjunction of the two conditions. So…
Lex Fridman
(01:24:17)
Yeah, it has the disquotation.
Joel David Hamkins
(01:24:18)
Yeah, it has the disquotation. And so this idea can be done for all the logical connectives and quantifiers and everything. You're applying Tarski's idea of disquotation, and it allows you to define, by induction, the truth of any assertion in a formal language inside any mathematical structure. And so to say that a sentence is true is, first of all, ambiguous unless you tell me which structure you're talking about it being true in. And so maybe we have in mind the standard model of arithmetic or something, with the natural numbers and the arithmetic structure, and I want to know: is a given statement true in that structure? Then we have a formal definition of what that means according to the Tarski recursive definition of truth. Okay, that's truth.
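The recursive definition Hamkins describes translates almost line by line into code. In this sketch, the encoding of formulas as nested tuples and the dict-based finite structure are illustrative assumptions of mine; the recursion on connectives and quantifiers, where the "and" inside the formula becomes the "and" of the metalanguage, is Tarski's idea.

```python
def satisfies(structure, formula, env=None):
    """Tarski's recursive definition of truth, over a finite structure.

    Formulas are nested tuples: ('and', p, q), ('or', p, q), ('not', p),
    ('forall', var, p), ('exists', var, p), or an atomic ('R', v1, v2).
    The structure is a dict with a finite 'domain' plus one set of tuples
    per relation symbol. `env` assigns domain elements to free variables.
    """
    env = env or {}
    op = formula[0]
    if op in ('and', 'or'):
        left = satisfies(structure, formula[1], env)
        right = satisfies(structure, formula[2], env)
        # disquotation: the 'and' in the formula becomes Python's `and`
        return (left and right) if op == 'and' else (left or right)
    if op == 'not':
        return not satisfies(structure, formula[1], env)
    if op in ('forall', 'exists'):
        _, var, body = formula
        checks = (satisfies(structure, body, {**env, var: a})
                  for a in structure['domain'])
        return all(checks) if op == 'forall' else any(checks)
    # atomic case: a relation symbol applied to variables
    rel, *args = formula
    return tuple(env[v] for v in args) in structure[rel]

# A 3-cycle as a graph structure: every vertex has an outgoing edge,
# and no vertex points at itself.
graph = {'domain': {0, 1, 2}, 'E': {(0, 1), (1, 2), (2, 0)}}
everyone_points = ('forall', 'x', ('exists', 'y', ('E', 'x', 'y')))
has_loop = ('exists', 'x', ('E', 'x', 'x'))
```

Truth here is always truth *in a structure*: the same formula can be satisfied by one graph and refuted by another, which is exactly the ambiguity Hamkins flags.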
Joel David Hamkins
(01:25:04)
Proof, on the other hand, is, you know, in this Hilbert way of thinking, we can develop proof theory. What is a proof for a mathematician, for a mathematical logician? A proof is a certain sequence or arrangement of sentences in the formal language that accords with the logical rules of a proof system. So there are certain modes of reasoning that are allowed. If you know A and you know A implies B, both of those statements being known, then at a later step you're allowed to write B as a consequence. This is the rule of modus ponens; some people would call it implication elimination. And, you know, there are a lot of other rules.
Joel David Hamkins
(01:25:56)
There are a lot of different formal proof systems studied by the proof theorists, and all of them have the property that they're sound, which means that if the premises of the argument are all true in a structure and you have a proof of a conclusion, then the conclusion is also true in that structure. That's what it means to be sound: proofs preserve truth. They're truth-preserving arguments. Okay? But the proof systems are also generally complete.
Joel David Hamkins
(01:26:35)
They're both sound and complete. Complete means that whenever a statement is a logical consequence of some other statements, meaning that whenever the assumptions are true in a structure the consequence is also true there, then there is a proof of it. Okay? And the proof systems generally have both of those properties: they're sound and complete. There's a third property. A lot of logicians talk about sound and complete this, sound and complete that, but actually there's a hidden third adjective that they should always be talking about in any such case, which is that you should be able to recognize whether or not something is a proof.
Joel David Hamkins
(01:27:19)
So there's a computable aspect to the proof systems. We want to be able to recognize whether something is a proof; it should be computably decidable whether a given sequence of statements is a proof or not. So we don't want a proof system in which someone claims to have a proof but we can't check whether it is one. We want to be able to, you know, correctly adjudicate all claims to having a proof.
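That third property, the decidability of proofhood, is easy to exhibit concretely. Here is a hedged sketch for a toy Hilbert-style system (string formulas and modus ponens only, both my own simplifications): checking a purported proof is a terminating loop over its lines, with no search and no creativity required.

```python
def is_proof(axioms, lines):
    """Decide whether `lines` is a valid proof from `axioms`.

    Each line must be an axiom, or follow from two earlier lines by
    modus ponens (from P and 'P->Q', conclude Q). Unlike asking whether
    a statement is a *theorem*, this check always terminates: it only
    ever inspects the finitely many lines it was handed.
    """
    earlier = []
    for phi in lines:
        justified = phi in axioms or any(
            implication == f"{antecedent}->{phi}"
            for implication in earlier for antecedent in earlier
        )
        if not justified:
            return False
        earlier.append(phi)
    return True

axioms = {"A", "A->B", "B->C"}
```

The contrast with theoremhood is the point: `is_proof` is a decision procedure, while "is this a theorem?" is only semi-decidable, by enumerating proofs and waiting.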
Lex Fridman
(01:27:46)
Yeah, a mathematician comes to mind who said he has a proof, but the margins are too small…
Joel David Hamkins
(01:27:51)
That’s right
Lex Fridman
(01:27:51)
… to continue.
Joel David Hamkins
(01:27:52)
Exactly. So…
Lex Fridman
(01:27:53)
So that doesn’t count as a proof.
Joel David Hamkins
(01:27:55)
Yeah. So generally, all the classical proof systems that are used are sound and complete, and also computably decidable in the sense that we can decide whether something is a proof or not.
Lex Fridman
(01:28:04)
So what is, again, the tension between truth and proof? Which is more powerful, and how do the two interplay with the contradictions that we’ve been discussing?
Joel David Hamkins
(01:28:15)
Right. So the incompleteness theorem concerns the question of whether we could, say, write down a theory for arithmetic: say, for the standard model of arithmetic, where we have the natural numbers and plus and times and zero, one, and less than, and so on. In that formal language, we can express an enormous number of statements about the nature not only of arithmetic but, by various coding methods, essentially all of finite mathematics. So the question would be: can we write down a computable list of axioms that will answer all those questions by proof? In other words, we want a complete theory of arithmetic, one that proves all and only the true statements. That would be the goal. Hilbert would love that.
Joel David Hamkins
(01:29:03)
I mean, that would be supportive of Hilbert's program, to have such a complete theory of arithmetic, and Gödel proved that this is impossible. You cannot write down a computable list of axioms that is complete in that sense. If the theory is consistent, there will always be statements that you cannot prove and cannot refute; they are independent of that theory.
Lex Fridman
(01:29:26)
How traumatic is that, that there are statements that are independent of the theory?
Joel David Hamkins
(01:29:30)
I mean, my view is that, yeah, this isn’t traumatic at all. This is rather completely eye-opening in terms of our understanding of the nature of mathematical reality. I mean, we’re not… we understand this profound fact about our situation with regard to mathematical truth. The incompleteness theorem tells us, look, we just can’t write down a list of axioms that is going to be consistent and it’s going to answer all the questions. It’s impossible. And so I don’t think of it as trauma. I just think, look, this is the nature of mathematical reality, and it’s good that we know it, and so now we need to move on from that and, you know, do what we can in light of that.
Lex Fridman
(01:30:19)
Is it fair to say that in general it means if I give you a statement, you can’t know if your axiomatic system would be able to prove it?
Joel David Hamkins
(01:30:31)
That's right. In general, you cannot. The provability problem: we can formulate it as a decision problem. Given a theory and given a statement, is that statement a consequence of that theory? This is one of the most famous decision problems, in fact the very first one, because it's equivalent to the Hilbert-Ackermann Entscheidungsproblem, which also appears in the title of Turing's 1936 paper that was so important for computability theory. So it's a formulation of the Entscheidungsproblem: does a given theory have a given statement as a logical consequence?
Joel David Hamkins
(01:31:08)
Which, because of Gödel's completeness theorem, not his incompleteness theorem but his earlier completeness theorem, Gödel had proved that the proof systems they studied did have this completeness property that I mentioned. So provability is the same as logical consequence. And this is an undecidable decision problem, as Turing proved, and we now know it's equivalent to the halting problem.

The Halting Problem

Lex Fridman
(01:31:30)
Can you describe the halting problem? Because it’s a thing that shows up in a very useful and, again, traumatic way through a lot of computer science, through a lot of mathematics.
Joel David Hamkins
(01:31:39)
Yeah. The halting problem is expressing a fundamental property of computational processes. So given a program, or maybe we think of it as a program together with its input, but let me just call it a program. So given a program, we could run that program, but I want to pose it as a decision problem. Will this program ever complete its task? Will it ever halt? And the halting problem is the question, given a program, will it halt? Yes or no? And of course, for any one instance, the answer’s either yes or no. That’s not what we’re talking about. We’re talking about whether there’s a computable procedure to answer all instances of this question.
Joel David Hamkins
(01:32:23)
So it’s a decision problem given as a scheme of instances for all possible programs that you could ask about. What I want to know is, is there a computable procedure that will answer those questions? And it turns out the answer’s no. The halting problem is computably undecidable. There is no computable procedure that will correctly answer all instances of whether a given program will halt. And of course, we can get half the answers in the sense that you give me a program and you say, “Will this halt?” And I could take that program and I could run it.
Joel David Hamkins
(01:32:59)
And I could keep running it, and maybe in a week, it would halt. And at that time, I could say, “Yes, it halted.” So I can get the yes answers correctly for halting, all the yes answers. But the problem is if it didn’t halt yet, like maybe I waited, you know, a thousand years and it still hasn’t halted, I don’t seem entitled to say, “No, it’s not going to halt” yet, because maybe in a thousand and one years, it’ll halt. And so at no point can I seem to say no. In order to say, “No, it won’t ever halt,” it seems like I would have to really understand how the program worked and what it was doing. So giving the yes answers was sort of trivial. You didn’t have to understand it; you just needed to run it, which is a kind of rote task.
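The asymmetry between the yes answers and the no answers can be made concrete. In this sketch, the step-function encoding of a "program" is my own toy convention: running with a budget can return a definite yes, but exhausting the budget never licenses a no.

```python
def observe(step, state, budget):
    """Semi-decide halting by just running the program.

    A 'program' here is a step function: it maps a state to the next
    state, or to None when the computation halts. Returns True for a
    definite 'yes, it halted'; returns None when the budget runs out,
    because, as in the conversation, waiting longer might still succeed,
    so we are never entitled to answer 'no'.
    """
    for _ in range(budget):
        state = step(state)
        if state is None:
            return True   # rote observation: it halted
    return None           # no verdict; this is NOT a 'no'

countdown = lambda n: None if n == 0 else n - 1   # halts after n steps
runaway = lambda n: n + 1                         # never halts
```

Note that `observe(runaway, 0, 1000)` and `observe(countdown, 3, 2)` both return `None`: from the outside, "hasn't halted yet" and "will never halt" are indistinguishable, which is exactly why the no answers need insight rather than patience.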
Joel David Hamkins
(01:33:48)
But to give the no answers, you need to have a kind of deep insight into the nature of the program and what it’s doing in such a way that you would understand it and be able to see, “Oh, no, I can see this program is never going to halt.” Because, you know, it’s a much more difficult task to say, “No, it won’t halt,” than it is to say, “Yes, it halted because I ran it and it halted.” And it turns out to be impossible to have a computable procedure that gives the no answers, you know? And the argument is not very difficult. Should we do it?
Lex Fridman
(01:34:18)
Yes, let’s do it.
Joel David Hamkins
(01:34:19)
Okay. Suppose toward contradiction, I mean, all these proofs are by contradiction, and this argument is going to be a diagonal argument in the same style as the Russell argument and the Cantor argument and Gödel's argument that we haven't talked about yet. So many diagonal arguments come in. So suppose towards contradiction that we had a procedure for determining whether a given program halted on a given input. Now, let me describe. I'm going to use that procedure as a subroutine in the following process. And my process, let's call it Q, process Q, and it takes as input a program P, okay? And the first thing it does is it asks that subroutine, "Hey, would P halt if I ran it on P itself?" Okay. That's the diagonal part, because we're applying P to P, right?
Joel David Hamkins
(01:35:14)
Okay, so I’m describing program Q, and program Q takes as input P, which is itself a program. And the first thing it does is it asks the halting subroutine program, “Would P halt on P?” And if the answer comes back from the subroutine, “Yeah, that would halt,” then what I do in program Q is I immediately jump into an infinite loop. So I don’t halt. If P halts on P, I don’t halt. But if the answer came back, “No, P is never going to halt on P,” then I halt immediately. Okay, and that’s it. I’ve described what Q does.
Joel David Hamkins
(01:35:53)
And the thing about Q is that Q's behavior on P is the opposite of P's behavior on P; that's how we designed Q, specifically so that Q on P has the opposite behavior from P on P. Okay, so now, of course, what do we do? Well, the same thing that Russell did, and Cantor: we ask, "Well, what would Q do on Q?" And because of this opposite behavior, Q would halt on Q if and only if Q does not halt on Q, which is a contradiction.
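The diagonal construction can be written out almost verbatim. To keep the sketch runnable, Q reports its behavior as a string instead of actually entering an infinite loop; with that one liberty (mine, not the transcript's), the fact that no candidate decider survives its own Q is exactly the argument above.

```python
def make_Q(halts):
    """Build the diagonal program Q for a claimed decider `halts(P, x)`.

    Q, given a program P, asks the subroutine whether P halts on P,
    then does the opposite. ('loops' is reported rather than performed,
    so the sketch can actually be run and checked.)
    """
    def Q(P):
        if halts(P, P):       # subroutine says "P halts on P"...
            return "loops"    # ...so Q loops forever
        return "halts"        # ...otherwise Q halts immediately
    return Q

def refuted(halts):
    """Every candidate decider is wrong about Q on input Q."""
    Q = make_Q(halts)
    prediction = halts(Q, Q)       # the decider's claim about Q on Q
    actual = (Q(Q) == "halts")     # Q's actual (reported) behavior on Q
    return prediction != actual    # the diagonal always wins

# No matter what policy a candidate decider uses, it fails on its own Q:
candidates = [lambda P, x: True, lambda P, x: False,
              lambda P, x: id(P) % 2 == 0]
```

The point of `refuted` is that it holds for *any* `halts` whatsoever: the construction of Q from the decider guarantees the mismatch, which is why no decider can exist.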
Lex Fridman
(01:36:38)
What a beautiful proof. Simple.
Joel David Hamkins
(01:36:39)
It's absolutely beautiful. I agree. It's following the same logic as Russell and Cantor; going back to Cantor, basically, because Russell is also quoting Cantor in his letter to Frege. Therefore, the conclusion is that the halting problem is not computably decidable. Now we can immediately prove Gödel's theorem using this, actually; it's an immediate consequence. So why don't we just do that? I view this as the simplest proof of Gödel's theorem. You don't need the Gödel sentence to prove Gödel's theorem; you can do it with the halting problem. So suppose that we could write down a computable axiomatization of all of the true facts of elementary mathematics, meaning arithmetic and finite combinatorial things such as Turing machine computations and so on.
Joel David Hamkins
(01:37:35)
So in fact, all those finite combinatorial processes are formalizable inside arithmetic with the standard arithmetization coding process. But let me just be a little bit informal and say suppose we could write down a complete theory of elementary finite mathematics. So we have an axiomatization of that theory. Then we could produce all possible theorems from those axioms in the way that I was describing earlier with Hilbert’s program. If we had a complete theory of elementary mathematics, we could construct a theorem enumeration machine that produced all the theorems and only the theorems from that theory. So now, I have this theorem enumeration device on my desk, and I announce that I’m open for business to solve the halting problem.
Joel David Hamkins
(01:38:25)
So you give me a program and input that you want to run that program on, and I’m going to answer the halting problem. The way I’m going to do it is I’m just going to wait for the statement coming out of the theorem enumeration device that asserts either that P does halt on that input, or I wait for the statement that P does not halt on that input. But one of them’s going to happen because it was a complete theory that was enumerating all the true statements of elementary mathematics. So therefore, if I had such a system, I could solve the halting problem, but we already proved that you cannot solve the halting problem, so therefore you cannot have such a complete theory of arithmetic. So that proves Godel’s theorem.
Lex Fridman
(01:39:05)
Maybe to take a little bit of a tangent, can you speak… You’ve written a wonderful book about proofs and the art of mathematics. So what can you say about proving stuff in mathematics? What is the process of proof? What are the tools? What is the art? What is the science of proving things in mathematics?
Joel David Hamkins
(01:39:22)
This is something that I find so wonderful to teach young mathematicians who are learning how to become mathematicians and learning about proof, and I wrote that book when I was teaching such a proof-writing class in New York.
Joel David Hamkins
(01:39:37)
Many universities have such a course, the proof-writing course, which is usually taken by students who have learned some mathematics. Usually, they've completed maybe the calculus sequence and are making the transition to higher mathematics, which tends to involve much more proof, and it's a challenging step for them. So many math departments have this kind of course on proof-writing where the students get exposed to how to write proofs. I wasn't happy with most of the other books that exist for those kinds of courses, and the reason was that they were so often dull, because they would concentrate on the totally uninteresting parts of what it's like to write a proof, these kinds of mechanistic procedures for how to write a proof.
Joel David Hamkins
(01:40:28)
You know, if you're going to prove an implication, then you assume the hypothesis and argue for the conclusion, and so on. All of that is true and fine, and that's good to know, except if that's all you're saying about the nature of proof, then I don't think you're really learning very much. So I felt that it was possible to have a much better kind of book, one that was much more interesting and that had interesting theorems in it that still admitted elementary proofs. So I wrote this book and tried to fill it with compelling mathematical statements that have very elementary proofs, exhibiting lots of different proof styles. And I found that the students appreciated it a lot.
Lex Fridman
(01:41:12)
We should say, “We dedicate the book to my students, may all their theorems be true, proved by elegant arguments that flow effortlessly from hypothesis to conclusion while revealing fantastical mathematical beauty.” Is there some interesting proofs that maybe illustrate, for people outside of mathematics or for people who just take math classes…
Joel David Hamkins
(01:41:37)
Right
Lex Fridman
(01:41:37)
…in high school and so on?
Joel David Hamkins
(01:41:39)
Yeah, let's do a proof. There's one in the book; we can talk about it. I think it's a nice problem. It's in discrete math, yeah, 5.1, that one: more pointed at than pointing. Okay. So this is the following problem. Suppose you're gathered with some friends, you know, in a circle, and you can point at each other however you want, or yourself, whatever, it doesn't matter, and you can point at more than one person, you know, use all your fingers or your feet or whatever you want. So maybe you point at three of your friends or something and they point at two or three of their friends or whatever, and one person is pointing at 10 people and somebody isn't pointing at anybody, maybe, and various people are pointed at also, right?
Joel David Hamkins
(01:42:20)
So the question is, could we arrange a pattern of pointing so that everyone was more pointed at than they are pointing at others? So in other words, maybe there’s seven people pointing at me, but I’m only pointing at five people and maybe there’s, you know, 20 people pointing at you, but you’re only pointing at 15 people or something like that, right? So I want to know. There’s a similar question on Twitter. For a group of people on Twitter, could you arrange that everyone has more followers than following? Yeah, it’s the same question. Mathematically, it’s identical. Although, I don’t know, it’s not identical, because I said you could point at yourself, and I think that’s not… Can you follow yourself?
Lex Fridman
(01:43:09)
No, I don’t think so, no.
Joel David Hamkins
(01:43:10)
I don't think you can. Okay. So can you arrange it so that everyone is more pointed at than pointing? In my book, I give a couple of different proofs of this. I think I give an induction proof and then there's another proof; I think there are three different proofs in there. But why don't we just talk about my favorite proof? Suppose it were possible to arrange that we're all more pointed at than pointing, okay? Now what we're going to do, we're going to agree, we're going to give a dollar to everyone that we're pointing at.
Joel David Hamkins
(01:43:40)
Okay? So what happens? Everybody made money, because I was pointed at by more people than I'm pointing at, so I got $10 but I only paid out $7. And similarly, you got paid $20 but you only paid out $15. So if everyone is more pointed at than pointing, then everyone makes money. But it's obviously impossible for us to make money as a group by just trading money with ourselves. And therefore, it can't be possible that we're all more pointed at than pointing. And this proof illustrates something. It's one of my habits that I suggest in the book: to anthropomorphize your mathematical ideas. So,
Joel David Hamkins
(01:44:22)
you should imagine that the mathematical objects that are playing a role in your question are people, or somehow active, animals or something that maybe have a will and a goal and so on. This is the process of anthropomorphizing. And it often makes the problems easier to understand, because we're all familiar with the fact that it's difficult to make money, and the proof is totally convincing because of our knowledge that we can't make money as a group by trading dollars among ourselves, without any new money coming into the group.
Joel David Hamkins
(01:45:01)
But that by itself is actually a difficult mathematical claim. I mean, if someone had to prove that you can’t make money by trading within a group, you know, it can’t be that everyone in the group makes money just by shifting money around in the group. Maybe you think that’s obvious, and it is obvious if you think about money. But if you had asked the question about mathematical functions of a certain kind and so on, then maybe it wouldn’t be as clear as it is when you’re talking about this money thing, because we can build on our human experience about the difficulty of getting money and other resources. It doesn’t have to be money; it could be candy, whatever. You know, we just know that you can’t easily get more things in that kind just by trading within a group.
Lex Fridman
(01:45:48)
And we should say that sometimes the power of proof is such that the non-obvious can be shown, and then over time that becomes obvious. So, in the context of money or social systems, there’s a bunch of things that are non-obvious. And the whole point is that proof can guide us to the truth, to the accurate description of reality. We just proved a property of money.
Joel David Hamkins
(01:46:14)
It’s interesting to think about, well, what if there were infinitely many people in your group? Then it’s not true anymore. The theorem fails. In fact, you can arrange that everyone is strictly more pointed at than pointing. And also, if everyone has even just one dollar bill-
Joel David Hamkins
(01:46:35)
then you can arrange that afterwards everyone has infinitely many dollar bills. Because in terms of cardinality, that’s the same. It’s just, say, countable infinity in each case. If you had countably many friends and everyone has one dollar bill, then you can arrange a pattern of passing those dollar bills amongst each other so that afterwards everyone has infinitely many dollar bills. What you need is for each person to be attached to, you know, one of the train cars or something. So, think of everyone as coming from Hilbert’s train, but also think of them as fitting into Hilbert’s Hotel. So, just have everyone on the Nth car give all their money to the person who ends up in the Nth room. So, they each give one dollar to that person.
Joel David Hamkins
(01:47:16)
So afterwards, that person has infinitely many dollars, but everyone only paid out one dollar. So it’s a way of making it happen.
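The scheme just described can be glimpsed in code, through a finite window. In this sketch (my own illustration; indexing people by the inverse Cantor pairing function is an assumption, and any bijection between people and car-seat pairs would do), person p sits in car c, seat s, and everyone in car c hands their single dollar to person c:

```python
# Finite glimpse of the infinite redistribution: seat person p in
# (car, seat) via the inverse Cantor pairing function; everyone in
# car c gives their one dollar to person c.  Each person pays out one
# dollar but collects one dollar per seat in their car -- infinitely
# many in the limit.

import math

def unpair(p):
    """Inverse Cantor pairing: natural p -> (car, seat)."""
    w = (math.isqrt(8 * p + 1) - 1) // 2
    t = w * (w + 1) // 2
    seat = p - t
    return w - seat, seat   # (car, seat)

def dollars_received(person, first_n_payers):
    """Dollars `person` collects once people 0..first_n_payers-1 pay."""
    return sum(1 for p in range(first_n_payers)
               if unpair(p)[0] == person)

# Person 0's takings grow without bound as more of the infinite crowd
# pays up, even though every individual pays out exactly one dollar.
assert dollars_received(0, 100) < dollars_received(0, 10_000)
```

Each person pays out exactly one dollar, yet each person's takings are unbounded once the whole countable crowd has paid.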

Does infinity exist?

Lex Fridman
(01:47:23)
To what degree, sticking on the topic of infinity, should we think of infinity as something real?
Joel David Hamkins
(01:47:33)
That’s an excellent question. I mean, a huge part of the philosophy of mathematics is about this kind of question: what is the nature of the existence of mathematical objects, including infinity? But I think asking about infinity specifically isn’t that different than asking about the number five. What does it mean for the number five to exist? What are the numbers, really, right?
Joel David Hamkins
(01:47:58)
This is maybe one of the fundamental questions of mathematical ontology. I mean, there’s many different positions to take on the question of the nature of the existence of mathematical objects or abstract objects in general. And there’s a certain kind of conversation that sometimes happens when you do that. And it goes something like this: sometimes people find it problematic to talk about the existence of abstract objects such as numbers, and there seems to be a kind of wish that we could give an account of the existence of numbers or other mathematical objects or abstract objects that was more like, you know, the existence of tables and chairs and rocks and so on.
Joel David Hamkins
(01:48:42)
And so there seems to be this desire to reduce mathematical existence to something, you know, that we can experience physically in the real world. But my attitude about this attempt is that it’s very backward, I think, because I don’t think we have such a clear account of the nature of the existence of physical objects, actually. I mean, we all have experience of existing in the physical world, as we must, because we do exist in the physical world, but I don’t know of any satisfactory account of what it means to exist physically.
Joel David Hamkins
(01:49:28)
I mean, if I ask you, say, “Imagine, you know, a certain kind of steam locomotive,” and I describe the engineering of it and the weight of it and the nature of the gear linkages, and, you know, I show you schematic drawings of the whole design and so on, and we talk in detail about every single detailed aspect of this steam locomotive. But then suppose after all that conversation, I say, “Okay, now I would like you to tell me what would it mean for it to exist physically, I mean, as opposed to just being an imaginary steam locomotive?” Then what could you possibly say about it? I mean, except by saying, “Oh, I just mean that it exists in the physical world.” But what does that mean? That’s the question, right? It’s not an answer to the question.
Joel David Hamkins
(01:50:18)
That is the question. So I don’t think that there’s anything sensible that we can say about the nature of physical existence. It is a profound mystery. In fact, it becomes more and more mysterious the more physics we know. I mean, back in, say, Newtonian physics, one had a picture of the nature of physical objects as, you know, little billiard balls or something, or maybe they’re infinitely divisible or something like that. Okay, but then this picture is upset with the atomic theory of matter. But then that picture’s upset when we realize that the atoms actually can be split and consist of electrons and protons and neutrons and so on.
Joel David Hamkins
(01:50:56)
But then that picture’s upset when we realize that those things themselves are built out of quarks and leptons and so on, and who knows what’s coming. And furthermore, all of those things, the nature of their existence is actually as wave functions in some cloud of probability and so on. And so it just becomes more and more mysterious the more we learn, and not at all clarifying. And so the nature of what it means to say that there’s an apple on my desk, and to give an account of what that physical existence really is at bottom, I think, is totally absent. Whereas we do seem to have a much more satisfactory account of the nature of abstract existence. I mean, I can talk about the nature of the empty set.
Joel David Hamkins
(01:51:43)
You know, this is the predicate which is never true or something like that. I can talk about those kind of logical properties or the singleton of the empty set and so on. I mean, of course, it’s very difficult if you go very far with it, but the point is that it doesn’t get more and more mysterious. The more that you say, it becomes only more and more clear. And so it seems to me that we don’t really have any understanding of what the physical world is as opposed to the abstract world, and it’s the abstract world where existence is much more clear.
Lex Fridman
(01:52:19)
It is very true that we don’t know anything about the soda bottle or the steam locomotive just because we can poke at it. Again, we anthropomorphize, and that actually gets us into trouble sometimes because I’m not feeling the quantum mechanics when I’m touching it.
Joel David Hamkins
(01:52:34)
That’s right.
Lex Fridman
(01:52:34)
And therefore, it’s easy to forget and feel like this is real and mathematical objects are not, but you’re making the opposite argument. When you draw a distinction between numerals and numbers, where numerals are the representations of numbers on the page and so on, could you say that a number is real? Do numbers exist?
Joel David Hamkins
(01:52:57)
I happen to think so. I mean, I’m on the side of realism in mathematics, and I think that these abstract objects do have a real existence in a way that we can give an account of, in a way I just tried to describe.
Lex Fridman
(01:53:10)
So, like, you would describe it as the size of a set with four elements in it?
Joel David Hamkins
(01:53:14)
Well, there are different ways to understand the nature of four. I mean, actually, this gets into the question of structuralism, which is maybe a good place to talk about it.
Lex Fridman
(01:53:24)
What is structuralism?
Joel David Hamkins
(01:53:25)
Structuralism is a philosophical position in mathematics, or the philosophy of mathematics, by which one emphasizes that what’s important about mathematical objects is not what they’re made out of or what their substance or essence is, but rather how they function in a mathematical structure. And so, what I call the structuralist attitude in mathematics is that we should only care about our mathematical structures up to isomorphism.
Joel David Hamkins
(01:53:53)
If I have a mathematical structure of a certain kind and I make an exact copy of it using different individuals to form the elements of that structure, then the isomorphic copy is just as good mathematically, and there’s no important mathematical difference that would ever arise from working with this isomorphic copy instead of the original structure. And so, therefore, that’s another way of saying that the substance of individuals in a mathematical structure is irrelevant with regard to any mathematical property of that structure. And so…
Joel David Hamkins
(01:54:33)
So to ask a question like, “What is the number four really?” is an anti-structuralist thing, because if you have a structure, say, the natural numbers, with all the numbers in it, 0, 1, 2, 3, 4, and so on, then I could replace the number four with something else, like, you know, this bottle of water could play the role of the number four in that structure, and it would be isomorphic. And it wouldn’t matter at all for any mathematical purpose to use this alternative mathematical system, you know? That’s to say that we don’t care what the number four is really. That is irrelevant. What…
Joel David Hamkins
(01:55:12)
The only thing that matters is what are the properties of the number four in a given mathematical system, and recognizing that there are other isomorphic copies of that system, and the properties of that other system’s number four are going to be identical to the properties of this system’s number four with regard to any question that’s important about the number four. But those questions won’t be about essence. So in a sense, structuralism is anti-essential in mathematics.
Lex Fridman
(01:55:42)
So is it fair to think of numbers as a kind of pointer to a deep underlying structure?
Joel David Hamkins
(01:55:48)
Yeah, I think so, because I guess part of the point of structuralism is that it doesn’t make sense to consider mathematical objects or individuals in isolation. What’s interesting and important about mathematical objects is how they interact with each other and how they behave in a system, and so maybe one wants to think about the structural role that the objects play in a larger system, a larger structure. There’s a famous question that Frege had asked actually when he was looking into the nature of numbers, because in his logicist program, he was trying to reduce all of mathematics to logic.
Joel David Hamkins
(01:56:26)
And in that process, he was referring to the Cantor-Hume principle: two sets have the same number of elements if and only if they are equinumerous, that is, they can be placed in one-to-one correspondence. And he founded his theory of number on this principle, but he recognized that there was something that dissatisfied him about that situation, which is that the Cantor-Hume principle does not seem to give you a criterion for which things are numbers. It only tells you a kind of identity criterion for when two numbers are equal to each other. Well, two numbers are equal just in case sets of those sizes are equinumerous, so that’s the criterion for number identity. But it is not a criterion for what is a number.
Joel David Hamkins
(01:57:12)
And so this problem has become known as the Julius Caesar problem because Frege said we don’t seem to have any way of telling from the Hume principle whether Julius Caesar is a number or not. So he’s asking about the essence of number. Of course, one has a sense that he picked what he was trying to present as maybe a ridiculous example, because maybe you have the idea that, well, obviously Julius Caesar is not a number, and there’s a lot of philosophical writing that seems to take that line also, that obviously the answer is that Julius Caesar is not a number. But the structuralists disagree with that position. The structuralist attitude is, “Look, you give me a number system. If Julius Caesar isn’t a number, then I can just…
Joel David Hamkins
(01:58:01)
Let’s take the number 17 out of that system and plug in Julius Caesar for that role, and now I’ve got a new number system, and now Julius Caesar happens to be the number 17.” And that’s totally fine, you know. So the point of structuralism is that the question of whether Julius Caesar is a number or not is irrelevant to mathematics. It is irrelevant because it is not about structure, it’s about this essence of the mathematical objects. So that’s the structuralist criticism of Frege’s point.
Lex Fridman
(01:58:35)
You’ve kind of made the case that you can say more concrete things about the existence of objects in mathematics than you can in our physical reality, about which to us human brains, things are obvious or not. So what’s more real, the reality we see with our eyes or the reality we can express in mathematical theorems?
Joel David Hamkins
(01:58:59)
I’m not quite sure. I mean… I live entirely in the platonic realm, and I don’t really understand the physical universe at all. So I don’t have strong views.
Lex Fridman
(01:59:13)
Let’s talk about the platonic realm. Is it… Like, because you live there, is it real? Or-
Joel David Hamkins
(01:59:19)
Oh yeah, totally, yeah. This is the realist position in mathematics: that abstract objects have a real existence. And okay, what’s meant by that is that there’s some sense of existence in which those objects can be regarded as real.
Lex Fridman
(01:59:32)
How should we think about that? How should we try to visualize that? What does it mean to live amongst abstract objects? Because life is finite. We’re all afraid of death. We fall in love with other physical manifestations of objects. And you’re telling me that maybe reality actually exists elsewhere, and this is all just a projection…
Joel David Hamkins
(01:59:59)
Well, I mean-
Lex Fridman
(01:59:59)
…from the abstract realm?
Joel David Hamkins
(02:00:02)
Do abstract objects exist in a place and at a time? That’s very debatable, I think.
Lex Fridman
(02:00:07)
Right. And what does place and time mean?
Joel David Hamkins
(02:00:08)
All time, yeah, so…
Lex Fridman
(02:00:10)
So what’s more real: physics or the mathematical Platonic space?
Joel David Hamkins
(02:00:16)
Well, the mathematical Platonic realm is… I’m not sure I would say it’s more real, but I’m saying we understand the reality of it in a much deeper and more convincing way. I don’t think we understand the nature of physical reality very well at all, and I think most people aren’t even scratching the surface of the question as I intend to be asking it. So, you know, obviously we understand physical reality. I mean, I knock on the table and so on, and we know all about what it’s like to, you know, have a birthday party or to drink a martini or whatever. And so we have a deep understanding of existing in the physical world. But maybe understanding is the wrong word. We have an experience of living in the world—
Lex Fridman
(02:01:01)
Yeah, experience.
Joel David Hamkins
(02:01:01)
…and riding bicycles and all those things, but I don’t think we actually have an understanding at all, I mean, very, very little of the nature of physical existence. I think it’s a profound mystery. Whereas I think that we do have something a little better of an understanding of the nature of mathematical existence and abstract existence. So that’s how I would describe the point.
Lex Fridman
(02:01:26)
Somehow it feels like we’re approaching some deep truth from different directions, and we just haven’t traveled as far in the physics world as we have in the mathematical world.
Joel David Hamkins
(02:01:41)
Maybe I could hope that someone will give, you know, the convincing account, but it seems to be a profound mystery to me. I can’t even imagine what it would be like to give an account of physical existence.
Lex Fridman
(02:01:52)
Yeah, I wonder, like, a thousand years from now as physics progresses, what this same conversation would look like.
Joel David Hamkins
(02:01:59)
Right. That would be quite interesting.
Lex Fridman
(02:02:00)
Do you think there will be breakthroughs a thousand years from now on the mathematics side? ’Cause we’ve just discussed, and we’ll return to, a lot of turmoil a century ago. Do you think there’s more turmoil to be had?
Joel David Hamkins
(02:02:14)
It’s interesting to me because I have my feet in two worlds: mathematics and philosophy, and to compare the differences between these subjects. One of the big cultural differences is towards the idea of progress in the subject, because mathematics has huge progress. We simply understand the mathematical ideas much, much better, continually improving our understanding, and there’s growth in knowledge. We understand the nature of infinity now better than they did 100 years ago. I mean, definitely better. And they understood it better 100 years ago than they did, you know, for the previous thousands of years, and so on.
Joel David Hamkins
(02:02:57)
So, in almost every part of mathematics, there’s improved understanding of the core issues, so much so that the questions at hand become totally different, and the field sort of moves on to more difficult, interesting questions. Whereas in philosophy, it’s only a little bit true that there’s progress. But meanwhile, it’s also true that there are these eternal questions that have been with us for thousands of years, so much so that you can find a lot of philosophers arguing that the important contribution of philosophy is in asking the questions rather than answering them, because it’s hopeless to answer them. I mean, the nature of these deep philosophical questions is so difficult. There’s less of a sense of progress, is what I’m trying to say.
Joel David Hamkins
(02:03:48)
I don’t see any reason to think that the progress in mathematics, in the growth in our mathematical understanding and knowledge, won’t simply continue. And so, a thousand years from now, maybe the mathematics that they will be doing at that time would probably be completely unrecognizable to me. I maybe wouldn’t even begin to understand what they’re talking about, even without sort of witnessing the intervening developments. So if you bring someone from ancient times to today, they maybe wouldn’t even understand what we’re talking about with some of the questions. But I feel that, you know, if Archimedes came and we were able to communicate, I think I would be able to tell him about some of the things that are going on in mathematics now, and maybe, you know…
Joel David Hamkins
(02:04:43)
Or anyone from that time, I mean. So I think it is possible to have this kind of progress, even when the subject kind of shifts away from the earlier concerns as a result of the progress, basically.

MathOverflow

Lex Fridman
(02:04:57)
To take a tangent on a tangent, since you mentioned philosophy, maybe potentially more about the questions, and maybe mathematics is about the answers, I have to say you are a legend on MathOverflow, which is like Stack Overflow but for math. You’re ranked number one all time on there with currently over 246,000 reputation points. How do you approach answering difficult questions on there?
Joel David Hamkins
(02:05:24)
Well, MathOverflow has really been one of the great pleasures of my life. I’ve really enjoyed it. And I’ve learned so much from interacting on MathOverflow. I’ve been on there since 2009, which was shortly after it started. I mean, it wasn’t exactly at the start, but a little bit later. And I think it gives you the stats for how many characters I typed, and I don’t know how many million it is, but this enormous amount of time that I’ve spent thinking about those questions, and it has really just been amazing to me.
Lex Fridman
(02:06:06)
How do you find the questions that grab you and how do you go about- … answering them?
Joel David Hamkins
(02:06:11)
So, I’m interested in any question that I find interesting. And it’s not all questions. Sometimes certain kinds of questions just don’t appeal to me that much.
Lex Fridman
(02:06:21)
So you go outside of set theory as well?
Joel David Hamkins
(02:06:23)
When I first joined MathOverflow, I was basically one of the few people in logic who was answering. I mean, there were other people who know some logic, particularly from category theory and other parts of mathematics that aren’t in the most traditional parts of logic, but they were answering some of the logic questions. So I really found myself able to make a contribution in those very early days by engaging with the logic-related questions. But there weren’t many logic people asking questions either. But what I found was that there was an enormous amount of interest in topics that were logic-adjacent.
Joel David Hamkins
(02:07:04)
So a question would arise, you know, in group theory or analysis or whatever, but it had a logic aspect, and there would be some logic angle on it. And what I found was that I was often able to figure out an answer by learning enough about that other subject matter. That’s what was so rewarding for me: basically, I had to learn enough. My main expertise was logic, but someone would ask a question, you know, that was about, say, the axiom of choice or the continuum hypothesis or something like that in the other subject matter. And I would have to learn enough about that other subject and the context of the question in order to answer, and I was often able to do that. And so I was quite happy to do that.
Joel David Hamkins
(02:07:50)
And also I learned a lot by doing that, because I had to learn about these other problem areas. And so it really allowed me to grow enormously as a mathematician.
Lex Fridman
(02:08:01)
To give some examples of questions you’ve answered: What are some reasonable sounding statements that are independent of ZFC? What are the most misleading alternate definitions in taught mathematics? Is the analysis as taught in universities in fact the analysis of definable numbers? Solutions to the continuum hypothesis? Most unintuitive application of the axiom of choice? Non-trivial theorems with trivial proofs? Reductio ad absurdum or the contrapositive? What is a chess piece mathematically? We should say you worked quite a bit on infinite chess, which we should definitely talk about. It’s awesome. You’ve worked on so many fascinating things. Has philosophy ever clarified mathematics? Why do we have two theorems when one implies the other?

The Continuum Hypothesis

Lex Fridman
(02:08:48)
And, of course, just as an example, you’ve given a really great, almost historical answer on the topic of the continuum hypothesis. Maybe that’s a good place to go. We’ve touched on it a little bit, but it would be nice to lay out what is the continuum hypothesis that Cantor struggled with. And I would love to also speak to the psychology of his own life story, his own struggle with it. The human side of mathematics is also fascinating. So what is the continuum hypothesis?
Joel David Hamkins
(02:09:16)
The continuum hypothesis is the question that arises so naturally whenever you prove that there’s more than one size of infinity. So Cantor proved that the infinity of the real numbers is strictly larger than the infinity of the natural numbers. But immediately when you prove that, one wants to know, “Well, is there anything in between?” I mean, what could be a more natural question to ask immediately after that? And so Cantor did ask it, and he spent his whole life thinking about this question. The continuum hypothesis is the assertion that there is no infinity in between the natural numbers and the real numbers. And, of course, Cantor knew many sets of real numbers. Everything in between…
Joel David Hamkins
(02:10:01)
I mean, everything that’s in that interval would be equinumerous with some set of real numbers. But we know lots of sets of real numbers. I mean, there are all these various closed sets, Cantor sets, and so on. There’s the Vitali set. We have all kinds of sets of real numbers. And so you might think, “Well, if the continuum hypothesis is false, then we’ve probably seen the set already. We just have to prove that it’s strictly in between.” But it turned out that for all the sets that anyone ever could define or pick out or observe, for all the sets of real numbers, it was always the case that they were either countable, meaning equinumerous with the natural numbers, or else finite, or they were fully equinumerous with the whole real line.
Joel David Hamkins
(02:10:44)
And so they were never strictly in between. I mean, you’re in this situation and you have hundreds, thousands of sets that are candidates to be in between, but in every single case, you can prove it’s on one side or the other and not strictly in between. And so in every situation where you’re able to figure out whether it’s in between or not, it’s always never strictly in between.
Lex Fridman
(02:11:09)
Now, Cantor was obsessed with this.
Joel David Hamkins
(02:11:11)
I think he was. Yeah, I’m not a historian, so I don’t know the exact history.
Lex Fridman
(02:11:15)
Well, everything I’ve seen, it seems to be the question that broke him, huh? I mean, just struggling with different opinions on the hypothesis within himself and- …desperately chasing, trying to prove it.
Joel David Hamkins
(02:11:29)
So he had a program for proving it, which has been affirmed in a certain respect. Of course, the continuum hypothesis holds for open sets. That’s easy to see. If you have an open interval, then this is fully equinumerous with the whole real line. Any interval is equinumerous with the whole line because all you would need is a function, you know, like the arctangent function or something that maps the whole real line into an interval. And that’s a one-to-one function. So we know the open sets have the property that the non-trivial open sets are all fully equinumerous with the whole real line. So never strictly in between. But remarkably, Cantor proved it also for the closed sets, and that is using what’s called the Cantor-Bendixson theorem.
Joel David Hamkins
(02:12:15)
So it’s quite a remarkable result. It’s definitely not obvious. And this theorem actually was the origin of the ordinals. Cantor had to invent the ordinals in order to make sense of his Cantor-Bendixson process.
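The arctangent map mentioned here can be checked numerically. This is just an illustrative sketch: atan is a strictly increasing bijection from the whole real line onto the open interval (-π/2, π/2), with tan as its inverse, so the line and the interval are equinumerous:

```python
# The arctangent trick: atan maps the whole real line one-to-one onto
# the open interval (-pi/2, pi/2), with tan as the inverse map.  So
# the line is equinumerous with that interval (and, after rescaling,
# with any nontrivial open interval).

import math

def line_to_interval(x):
    return math.atan(x)      # R -> (-pi/2, pi/2)

def interval_to_line(y):
    return math.tan(y)       # the inverse map back to R

for x in [-50.0, -3.2, 0.0, 1.5, 42.0]:
    y = line_to_interval(x)
    assert -math.pi / 2 < y < math.pi / 2               # lands inside
    assert math.isclose(interval_to_line(y), x,
                        rel_tol=1e-9, abs_tol=1e-9)     # round-trips
```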
Lex Fridman
(02:12:31)
Can you define the open and the closed set in this context?
Joel David Hamkins
(02:12:34)
Oh, yeah. Sure. So a set of reals is open if every point that it contains is surrounded by a little interval of points that is also contained in the set, the whole tiny little interval. But that tiny little interval is already just by itself equinumerous with the whole line. So that’s why that question is sort of easy for open sets. A closed set is a complement of an open set, and there are a lot of closed sets that are really complicated, of varying sizes. So of course, any closed interval is a closed set, but it’s not only those. There are also things like the Cantor set, which you get by omitting middle thirds. Maybe some people have seen this construction. Or you can imagine sort of randomly taking a lot of little tiny open intervals, you know, all over the line and so on.
Joel David Hamkins
(02:13:18)
So that altogether would be an open set, and the complement of it would be a closed set. So you can imagine just kind of tossing down these open intervals, and what’s left over is the closed set. Those sets can be quite complicated, and they can have isolated points, for example, if the two open intervals were just kissing and leaving only the one point between them. But also you could have sequences that are converging to a point that would also be a closed set, or convergent sequences of convergent sequences and so on. That would be a closed set also.
Lex Fridman
(02:13:50)
The Cantor set is constructed by iteratively removing open intervals, middle thirds, like you mentioned, from the interval, and trying to see: can we make something that goes in between?
Joel David Hamkins
(02:14:00)
Right. So the question would be, “Can you produce a set that has an intermediate size, an intermediate cardinality?” And Cantor proved, for the closed sets, “No, it’s impossible. Every closed set is either countable or equinumerous with the whole real line.”
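The middle-thirds process is easy to carry out for a few stages. A small sketch (my own, using exact fractions to avoid rounding), assuming the standard construction starting from [0, 1]:

```python
# Middle-thirds construction: start from [0, 1] and delete the open
# middle third of every remaining closed interval at each stage.  The
# Cantor set is what survives every stage.

from fractions import Fraction

def cantor_stage(intervals):
    """Remove the open middle third of each closed interval (a, b)."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))     # left closed third survives
        out.append((b - third, b))     # right closed third survives
    return out

intervals = [(Fraction(0), Fraction(1))]
for _ in range(4):
    intervals = cantor_stage(intervals)

assert len(intervals) == 2 ** 4                  # 16 pieces remain
# Total remaining length is (2/3)^4; it shrinks to zero in the limit,
# yet the limiting set is equinumerous with the whole real line.
assert sum(b - a for a, b in intervals) == Fraction(16, 81)
```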
Joel David Hamkins
(02:14:18)
And the Cantor program for solving the continuum hypothesis was a sort of working up. So you do it for open sets and for closed sets, and you sort of work up. Maybe one wants to go on to what are called the Borel sets, which are sort of combinations of open and closed sets. And there’s a vast hierarchy of Borel complexity. And it turns out that the continuum hypothesis has been proved also for the Borel sets in this hierarchy. But then one wants to go beyond. What about more complicated sets? So there’s this hierarchy of complexity for sets of real numbers.
Joel David Hamkins
(02:14:53)
And Cantor’s idea was to sort of work your way up the hierarchy by proving the continuum hypothesis for more and more complicated sets, based on our understanding of the earlier cases. And that has been carried out to a remarkable degree. It turns out, though, that one begins to need large cardinal assumptions in order to get to the higher realms, even at the level of the projective hierarchy, whose sets you can define by using quantifiers over the real numbers themselves. So you get this hierarchy on top of the Borel hierarchy, the hierarchy of projectively definable sets.
Joel David Hamkins
(02:15:35)
And it turns out that if you have enough large cardinals, then the projective sets also are always either countable or equinumerous with the whole real line. And then one can try to go beyond this and so on. So I view all of those results, which came, you know, in the past 50 years, the later ones, as fulfilling this Cantor idea that goes back, you know, 120 years to his idea that we would prove the continuum hypothesis by establishing more and more instances for greater and greater complexity of sets. But of course, even with what we know now, it hasn’t fully succeeded and it can’t because the hierarchy of complexity doesn’t include all sets of real numbers. Some of them are sort of transcending this hierarchy completely in a way.
Joel David Hamkins
(02:16:27)
And so the program can’t ever fully be successful, especially in light of the independence result.
Lex Fridman
(02:16:34)
Yeah. Well, spoiler alert, can you go to the independence result?
Joel David Hamkins
(02:16:38)
Sure.
Lex Fridman
(02:16:39)
So what does that mean? So the continuum hypothesis was shown to be independent from the ZFC axioms of mathematics?
Joel David Hamkins
(02:16:46)
Right. So the ZFC axioms were first put forth by Zermelo in 1908 in regard to his proof of the well-order theorem using the axiom of choice. That wasn’t fully ZFC; at that time, it was just Zermelo’s theory, because there were missing axioms: the replacement axiom and the foundation axiom were added later, and that’s what makes the Zermelo-Fraenkel axiomatization, which became sort of standard. Actually, there’s another aspect, which is that Zermelo’s original theory allowed for the existence of ur-elements, or atoms: mathematical objects that are not sets but out of which we build the set-theoretic universe, whereas set theorists today generally don’t use ur-elements at all.
Joel David Hamkins
(02:17:34)
I argue that it’s really the philosophy of structuralism that leads them to omit the ur-elements because it turns out that if you adopt ZFC axioms with ur-elements, ZFCU it’s called, or ZFA, then any structure that exists, any mathematical structure that exists in that set theoretic universe with the atoms is isomorphic to a structure that doesn’t use the atoms at all. And you don’t need the atoms if you’re a structuralist because you only care about the structures up to isomorphism anyway, and the theory is simply more elegant and clear without the atoms. They’re just not needed. And so that’s why today when we talk about set theory, generally we talk about the atom-free version, and ZFC has no ur-elements. Okay. So we formulate the ZFC axioms of set theory.

Hardest problems in mathematics

Joel David Hamkins
(02:18:26)
These are expressing the main principal ideas that we have about the nature of sets and set existence. And Cantor had asked about the continuum hypothesis in the late 19th century, and it remained open, totally open, until 1938.
Lex Fridman
(02:18:50)
We should mention, I apologize, that it was the number one problem in Hilbert’s list of 23 problems formulated at the beginning of the century.
Joel David Hamkins
(02:18:59)
That’s right.
Lex Fridman
(02:18:59)
Maybe you can comment on why he put that as number one.
Joel David Hamkins
(02:19:02)
So, right. So Hilbert had introduced at his famous address at the turn of the century this list of problems that he thought could guide or were important to consider in the coming century of mathematics. I mean, that’s how people talk about it now, although I’m not sure at all… Of course, I can’t really speak for Hilbert, but even for a very prominent mathematician, I find it a little hard to believe that he would have conceived of his list in the same way that we now take it. I mean, having observed the century unfold, we know that that list of 23 problems did in fact guide whole research programs, and it was extremely important and influential.
Joel David Hamkins
(02:19:46)
But at the time, Hilbert would have no reason to think that that would be true, and he was just giving a lecture and had a list of problems that he thought were very important. And so I would find it more reasonable to think that he was just making a list of problems that he thought were extremely interesting and important and fundamental in a way without the heavy burden of guiding this 20th century research. Although it turns out that, in fact, that’s exactly what they did.
Joel David Hamkins
(02:20:18)
And we already discussed Hilbert’s views on the nature of set theory and the fundamental character, that quote where he said, “No one will cast us from the paradise that Cantor has created for us.” So I think Hilbert was convinced by Cantor on the importance and the fundamental nature of the continuum hypothesis for the foundations of mathematics, which was a critically important development for the unity of mathematics. I mean, before set theory emerged as a foundation of mathematics, there were… you know, there are different subjects in mathematics. There’s algebra and there’s analysis, real analysis, and topology and geometry, and so on. There’s all these disparate subjects with their own separate axioms, right?
Joel David Hamkins
(02:21:03)
But sometimes it happens, like when you’re proving, say, the fundamental theorem of algebra, you know, that the complex numbers are an algebraically closed field in which you can solve any polynomial equation. But the proof methods for that theorem come from other parts of mathematics, you know, there are topological proofs and so on. And so how does that work? I mean, if you have totally different axiom systems, but you’re using results from one subject in another subject, it’s somehow incoherent unless there’s one underlying subject. So the unity of mathematics was provided by the existence of a mathematical foundation like set theory. And at the time, it was set theory.
Joel David Hamkins
(02:21:47)
And so it’s critically important to be able to have a single theory in which one views all of mathematics as taking place to resolve that kind of transfer and borrowing phenomenon that was definitely happening. So that must have been part of Hilbert’s thinking about why it’s so important to have a uniform foundation, and set theory was playing that role at the time. Now, of course, we have other possible foundations coming from category theory or type theory, and there’s univalent foundations now. So there are competing foundations now. There’s no need to just use one set theoretic foundation.
Joel David Hamkins
(02:22:25)
Although set theory continues, in my view, to admit an extremely successful metamathematical analysis as a foundation, much more successful, I think, than any of those other foundations, it’s much less amenable to things like computer proof and so on, which is part of the motivation to find these alternative foundations. So, yeah, just to talk about Hilbert, I think he was motivated by the need for a unifying foundation of mathematics, and set theory was playing that role, and the continuum hypothesis is such a core fundamental question to ask, so it seems quite natural that he would put it on the list. There were other logic-related questions, though, like Hilbert’s tenth problem is also related to logic.
Joel David Hamkins
(02:23:08)
This is the question about Diophantine equations, and he asked to provide an algorithm to decide whether a given Diophantine equation has a solution in the integers. So a Diophantine equation is just… I mean, it’s maybe a fancy way of talking about something that’s easy to understand, a polynomial equation, except not in just one variable but in many variables. So you have polynomials in several variables over the integers, and you want to know, can you solve it? So the problem is, as stated by Hilbert, provide an algorithm for answering the question whether a given polynomial equation has a solution in the integers. So he’s sort of presuming that there is an algorithm, but he wants to know what it is. What is the algorithm?
Joel David Hamkins
(02:23:55)
But the problem was solved by proving that there is no algorithm. It’s an undecidable problem, like the halting problem. There is no computable procedure that will correctly decide whether a given polynomial equation has a solution in the integers. So that’s quite a remarkable development, I think. So there were also a few other logic-related questions on the list.
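One half of the problem does remain computable: searching for a solution is a procedure that halts exactly when a solution exists; the undecidability result says this cannot be improved to a procedure that also correctly answers “no solution.” A minimal sketch of that semidecision search in Python (the example polynomial is my own illustration, not one from the conversation):

```python
from itertools import count, product

def search(poly, nvars):
    """Semidecision procedure for Hilbert's tenth problem: halts with an
    integer solution if one exists, runs forever otherwise (no algorithm
    can always correctly report 'no solution')."""
    for bound in count():
        # exhaustively try every integer tuple with entries in [-bound, bound]
        for xs in product(range(-bound, bound + 1), repeat=nvars):
            if poly(*xs) == 0:
                return xs

# x^2 + y^2 - 25 = 0 has integer solutions, so this search halts
sol = search(lambda x, y: x * x + y * y - 25, 2)
print(sol)
```

If the equation has no integer solutions, the loop never terminates; that asymmetry is exactly the gap between semidecidable and decidable.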
Lex Fridman
(02:24:20)
And so eventually, the continuum hypothesis was shown to be independent from ZFC axioms, as we’ve mentioned. So what… How does that make you feel? What is independence and what does that mean?
Joel David Hamkins
(02:24:30)
But once you tell the story, the historical story-
Lex Fridman
(02:24:32)
Yes
Joel David Hamkins
(02:24:32)
… is really quite dramatic-
Lex Fridman
(02:24:34)
Yeah, that’s great
Joel David Hamkins
(02:24:34)
I think. Because Cantor poses the question in the late 19th century. And then it’s totally open. Hilbert asks about it, you know, at the turn of the 20th century. Nobody has any clue. There’s no answer coming. Until 1938, this is four decades later, right? So a long time, and Gödel, Kurt Gödel proved half of it. What he proved is that if the axioms of set theory are consistent, then there is a set theoretic world where both the axiom of choice and the continuum hypothesis are true. So what he’s doing is showing, this is called the constructible universe, Gödel’s L. So he solved this… this is the same result where he answers the safety question of the axiom of choice, but also for the continuum hypothesis. They’re true in the same set theoretic universe we get.
Joel David Hamkins
(02:25:38)
So if ZF, without the axiom of choice, is consistent, then so is ZFC plus the continuum hypothesis, is the result, 1938. It’s such a beautiful argument. It’s just incredible, I think, because he’s building an alternative mathematical reality. That’s the structure of the proof: okay, if there’s any mathematical reality, if there’s any set theoretic world, then we’re going to build another one, a separate one, maybe a different one. Maybe it’s the same as the original one. It could be. If we started already in the one that he built, then it would be the same. But there’s no reason to assume it was the same.
Joel David Hamkins
(02:26:15)
So he has this kind of model construction method to build this alternative set theoretic reality, the constructible universe, and then he proves that the axiom of choice is true there and also the continuum hypothesis is true there, and it’s just amazing. Really beautiful argument. Okay, so then for the other part of the independence, that’s only half of it, because Gödel shows basically that you can’t refute the continuum hypothesis, but that’s not the same thing as proving that it’s true. He showed that if set theory is consistent without the continuum hypothesis, then it’s consistent with the continuum hypothesis. So that’s not the same thing as proving that it’s true. Yeah.
Joel David Hamkins
(02:26:59)
And then it didn’t come until 1963 when Paul Cohen invented the method of forcing and proved that if there’s a model of set theory, then there’s a model of set theory in which the continuum hypothesis is false. So Cohen also is giving us this extremely powerful tool for building alternative mathematical realities, is how I think about it. He’s explained to us how to take any set theoretic world and build another different one in which the continuum hypothesis is false. The forcing extension.
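Put schematically, the two halves of the independence proof fit together like this:

```latex
% Gödel (1938), via the constructible universe L:
\operatorname{Con}(\mathrm{ZF}) \;\Longrightarrow\; \operatorname{Con}(\mathrm{ZFC} + \mathrm{CH})
% Cohen (1963), via forcing:
\operatorname{Con}(\mathrm{ZFC}) \;\Longrightarrow\; \operatorname{Con}(\mathrm{ZFC} + \neg\mathrm{CH})
```

So assuming the axioms are consistent, neither CH nor its negation is provable from ZFC, which is what independence means.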
Lex Fridman
(02:27:37)
It’s such a fascinating technique, a tool of forcing. Maybe I’m anthropomorphizing it, but it seems like a way to escape one mathematical universe into another, or to expand it or to alter it. So you travel between mathematical universes. Can you explain the technique of forcing?
Joel David Hamkins
(02:27:57)
Yeah, exactly. It’s all those things. It’s so wonderful. I mean, that’s exactly how I think about it. I mean…

Mathematical multiverse

Lex Fridman
(02:28:03)
And we should mention, maybe this is a good place to even give a bigger picture. One of your more controversial ideas in mathematics, laid out in the paper “The Set-Theoretic Multiverse,” is that there may not be one true mathematics, but rather multiple mathematical universes, and forcing is one of the techniques that gets you from one to the other, so… Can you explain the whole shebang? The whole…
Joel David Hamkins
(02:28:27)
Yeah, sure, let’s get into it. So there’s the lesson of Cohen’s result and Gödel’s result and so on, producing these alternative set theoretic universes. We’ve observed that the continuum hypothesis is independent and the axiom of choice is independent of the other axioms, but it’s not just those two. We have thousands of independence results. Practically every non-trivial statement of infinite combinatorics is independent of ZFC. I mean, this is the fact. It’s not universally true. There are some extremely difficult prominent results where people proved things in ZFC, but for the most part, if you ask a non-trivial question about infinite cardinalities, then it’s very likely to be independent of ZFC.
Joel David Hamkins
(02:29:15)
And we have these thousands of arguments, these forcing arguments that are used to establish that. And so how should we take that? I mean, on the one hand, if you have a theory and it doesn’t answer any of the questions that you’re interested in, okay, so what does that mean? If you’re following what I call the universe view or the monist view, you might naturally say, “Well, look, ZFC is a weak theory, and there’s the true set theoretic reality out there, and we need a better theory because the current theory isn’t answering the questions. Everything’s independent.” And so that seems like quite a reasonable position to take if you think that there is…
Joel David Hamkins
(02:29:59)
that every set theoretic question has a definite answer, and there’s a unique set theoretic truth or a unique fact of the matter, right? This is the universe view.
Lex Fridman
(02:30:09)
And by the way, to reiterate, independent means it cannot be proved or disproved within this axiomatic system, within this theory.
Joel David Hamkins
(02:30:16)
Right, exactly. So to be independent means you can’t prove it, and also you can’t prove that it’s false. You can’t refute it.
Lex Fridman
(02:30:22)
And you’re saying that’s why the statement is so traumatic or sad, that most of the interesting stuff, as you said, has been shown to be independent. Of ZFC.
Joel David Hamkins
(02:30:31)
But that’s an interesting way to put it, I think, because it reminds me of this… When I was a graduate student in Berkeley, there was another graduate student who was working with a non-logic professor in C*-algebras or something like this. So it’s a part of analysis or functional analysis, and they were looking at a question, and it turned out to be independent of ZFC, right? And the attitude of this other professor was that, “Oh, I guess I asked the wrong question.” But my attitude and the attitude of all the set theorists was when you ask a question that turns out to be independent, then you asked exactly the right question because this is the one… You know, it’s carving nature at its joints.
Joel David Hamkins
(02:31:18)
You’re adjudicating the nature of set theoretic reality by finding these two realms. You find one of these dichotomies. You know, there are the worlds where it’s true and the worlds where it’s false. And so when you ask that question, that’s to be celebrated. It means you asked exactly the right, interesting, fascinating question. So it’s not a kind of bleak thing that you can’t prove it and you can’t refute it, and that’s such a disaster. Rather, it means that you found this cleavage in mathematical reality, and it’s good to know about those when they happen, you know?
Lex Fridman
(02:31:52)
Carving nature at its joints. So what can you do about the things that are shown to be independent from ZFC?
Joel David Hamkins
(02:32:00)
Right. So…
Lex Fridman
(02:32:00)
What are the techniques?
Joel David Hamkins
(02:32:01)
So one thing is that because of the incompleteness theorem, we know that there’s going to be… For any theory that we can write down, there’s going to be true things we can’t prove in it. So those things are going to be independent. And so we’re already aware of the fact that there will always be these independent phenomena for any theory that we write. And furthermore, for some of those theories, we won’t even be able to prove that they’re consistent, you know, since a theory can’t prove its own consistency. So that leads to what’s called the consistency-strength hierarchy.
Joel David Hamkins
(02:32:37)
So it’s a direct consequence of Gödel’s second incompleteness theorem that for any theory we can write down, towering over it is this incredibly tall tower of consistency strength, where the stronger theories aren’t just adding another axiom, but adding an axiom whose consistency wasn’t even provable in the previous layers of the hierarchy. And so how lucky we are to find the large cardinal axioms that instantiate exactly this feature of increasing consistency strength, this unending and extremely tall hierarchy of consistency strength of axioms. And it exactly fulfills the prediction that Gödel’s theorem makes about that kind of thing.
Joel David Hamkins
(02:33:28)
Except, the axioms in the large cardinal hierarchy aren’t, you know, metalogical self-referential statements of the form that sometimes arise in the Gödel analysis, but rather they’re professing existence of big infinities, these large cardinal axioms. And so it’s such a welcome development, and yet it’s also known that the continuum hypothesis is independent of all of the known large cardinal axioms. So none of the large cardinal axioms can settle the continuum hypothesis. So the independence phenomenon is still there for things like the continuum hypothesis and the cardinal combinatorics that I mentioned.
Lex Fridman
(02:34:17)
So you’re building this incredible hierarchy of axiomatic systems that are more powerful than ZFC.
Joel David Hamkins
(02:34:24)
More powerful than ZFC and then more powerful than that, more powerful than that, and so on. It keeps going forever, and it will never be finished.
Lex Fridman
(02:34:32)
And still, to this day, the continuum hypothesis does not…
Joel David Hamkins
(02:34:36)
It’s not settled by any of the large cardinal axioms.
Lex Fridman
(02:34:39)
Wow. Wow. How does that make you feel? Will it ever be settled?
Joel David Hamkins
(02:34:47)
Well, it’s part of my multiverse view, I guess. We started by describing the universe view, which is the view that there are facts about all of these questions, and it will turn out—if you’re a universe view person, which I’m not, but if you are—then you will hold that there is a right answer to the continuum hypothesis question, and there’s a right answer to the large cardinal questions, and so on. And that what we should be aiming to do is figure out this one true set theory. In contrast, I take the developments of set theory over the past half-century or more as evidence that there isn’t such a unique set-theoretic reality.
Joel David Hamkins
(02:35:33)
Rather, what we’ve been doing for decades now is producing more and more alternative set-theoretic universes in which the fundamental truths differ from one to the other. And that is the answer to the continuum hypothesis question: the fact that given any model of set theory, there’s a forcing extension where the continuum hypothesis is true, and another one where it’s false. You can sort of turn it on and off like a light switch. And that’s the fundamental nature of the continuum hypothesis, that you can have it or you can have the negation as you like within a very closely related set-theoretic world. Wherever you happen to be living, there’s a closely related one where CH is true, where the continuum hypothesis is true, and one where it’s false.
Joel David Hamkins
(02:36:23)
And that itself is a kind of answer. It’s not a singularist answer, a universe view answer. It’s a pluralist answer. And this led me to my views on the multiverse view of set theory and pluralist truth, namely the fundamental nature of set-theoretic truth has this plural character in that there isn’t a singular meaning to the fundamental terms, but rather there’s this choice of alternative set-theoretic universes that have different truths.
Lex Fridman
(02:36:55)
So what does the multiverse view of mathematics enable you to do? What does it empower you to do, and what are the limitations? What are the things it breaks about mathematics as a field, as a space of knowledge, and what does it enable?
Joel David Hamkins
(02:37:10)
First of all, I guess one should say that these different philosophical positions that you might take in the philosophy of set theory, like the multiverse view or the universe view, we don’t ever disagree about the mathematics. We’re all agreeing on what the theorems are. It’s a question of philosophical perspective on the underlying meaning or the context, or really what is a philosophy of mathematics for? And I mean, if you look back in history, for example, to the time of calculus with Newton and Leibniz, right?
Joel David Hamkins
(02:37:44)
They famously developed the ideas of calculus using their concepts of infinitesimals, and those foundations were roundly mocked by Bishop Berkeley and so on, who talked about, you know, “And what are these same evanescent increments? May we not call them the ghosts of departed quantities?” But the foundations really were kind of completely suspect, I think, at the time. And the foundations of infinitesimal calculus really only became rigorous in the 1950s or so with the development of non-standard analysis and Robinson’s work. Okay, so the point I’m trying to make is that, do you need a robust, rigorous foundation of mathematics to make enduring insights in mathematics?
Joel David Hamkins
(02:38:34)
And the answer, regrettably, is apparently not, because in calculus, even with that lousy, creaky, not even well-understood foundation of infinitesimals that Newton and Leibniz had, they proved all the fundamental theorems of calculus and, you know, they had all the main insights in those early days with that extremely bad foundation. And so that shows you something about the relevance of foundational views on mathematics and how important they are for mathematical developments and progress and insight. I mean, because I view those early mathematical developments in calculus as genuinely mathematical and extremely important and insightful, even though the foundations weren’t any good by contemporary standards. Okay. So, rather…
Joel David Hamkins
(02:39:32)
So when it comes to the philosophy of set theory and the dispute between the universe view and pluralism, my view is that the choice of the philosophical perspective doesn’t actually have to do with the mathematical developments directly at all. Rather, it tells us, “Where should set theory go? What kind of set theory should we be looking at? What kind of questions should we be asking?” So if you have a universe mentality, the universe view, then you’re going to be pushed to try to find and articulate the nature of the one true set-theoretic universe. And I think that remark is really well borne out by the developments with Hugh Woodin, who’s one of the most prominent mathematicians and philosophers with the universe view and his theory of ultimate L and so on. And he’s really striving.
Lex Fridman
(02:40:26)
Who was also your advisor.
Joel David Hamkins
(02:40:28)
He was also my supervisor, yeah, my graduate supervisor.
Lex Fridman
(02:40:30)
Which is a personal story as well.
Joel David Hamkins
(02:40:32)
This fundamental dispute, yeah, on this question. But his is a very strong and successful research program, sort of trying to give legs to finding the nature of the one true set-theoretic universe. And it’s driving the questions that he’s asking and the mathematical programs that he’s pursuing. Whereas if you have a pluralist view, as I do, then you’re going to be led and attracted to questions that have to do with the interaction of different set-theoretic universes, or maybe you want to understand how the models of set theory are related to their forcing extensions and so on. And so this led to things that I call, say, set-theoretic potentialism, where you think about a set-theoretic universe in a potentialist way.
Joel David Hamkins
(02:41:21)
Not in the sense of potential infinity directly, because all of these universes have infinite sets inside them already, but they’re potentialist in the sense that we could have more sets. The universe could be wider and taller and so on, by forcing or by extending upward. And so we want to understand the nature of this realm of set-theoretic universes. And that’s quite exciting work. And so Benedikt Loewe and I proved some theorems on the modal logic of forcing and set-theoretic potentialism under end extension. I’ve done a bunch of work on this topic. And also I initiated, together with Gunter Fuchs and Jonas Reitz, who was one of my own PhD students, the topic of set-theoretic geology, which is studying…
Joel David Hamkins
(02:42:11)
It’s taking the metaphor of forcing. I mean, in forcing, you have the ground model and the forcing extension. And when I was first working with Jonas, he said, “I want to undo forcing. I want to go backward.” And I at first said, “But Jonas, it doesn’t work that way. You start in the model, in the ground model, and you go out, you go to the bigger one. You know, that’s how forcing works.” And he said, “No, no, I want to go backward.” And so he was quite persistent, actually. And so finally, I said, “Okay, let’s do it. Let’s take it seriously.” And so we sat down and started thinking more precisely and carefully and deeply about the nature of taking a set-theoretic universe and seeing where it came from by forcing, which was a new way of thinking about forcing at the time.
Lex Fridman
(02:42:58)
Like reverse-engineering the forcing?
Joel David Hamkins
(02:43:00)
Yeah, something like that. Forcing is a way of producing a new universe. And so you could start somewhere and go to that new universe, or you could look where you are and say, “Well, look, I got here by doing that already in the past.”
Joel David Hamkins
(02:43:12)
So we defined the notions of the ground models and the bedrock, sort of undoing the forcing. And really, it was quite fruitful. And I view this as part of the pluralist perspective, except the difference is that set-theoretic geology is amenable to the universe view. So even though the work was inspired by this philosophical multiverse view, nevertheless, the central ideas of geology have now been picked up by the people with the research program in the universe view, because it turns out that set-theoretic geology is helping them, or us, to discover how the nature of the one true universe relates to its mantle. There’s this concept of the set-theoretic mantle that I had introduced, in a way that is extremely interesting.
Joel David Hamkins
(02:44:01)
And so it’s historically quite funny, I think, because this research program that grew entirely out of the pluralist point of view ended up being picked up by the universe point of view research program in a way that is quite important.
Lex Fridman
(02:44:20)
Can you prove something in the world that you arrived at through forcing and then take some of that back to the ground model?
Joel David Hamkins
(02:44:30)
Yeah, absolutely. And that’s a really powerful argument method, actually. People often want to do that. Suppose you’re in some set-theoretic context. You know, you could think about it as living in a set-theoretic universe, and you want to prove something in that universe only. But maybe one way to do it is to first construct this forcing extension and then use the features about this forcing extension to realize that certain things must have already been true in the ground model. And then you throw the forcing extensions away and you-
Lex Fridman
(02:45:01)
Oh, cool
Joel David Hamkins
(02:45:01)
Yeah. So this can happen. To pick a more elementary example, think about the early days of people reasoning with the complex numbers before they really understood them. So they would have these algebraic equations that they’re trying to solve. They would have the tools and methods of doing it, but then in the course of solving, you know, they would have to do things to the polynomial and change the factors and so on, and produce other polynomials and solve them and so on. Sometimes, they could produce solutions. In the middle of their construction, they were led to, like, the square root of minus five or something. And they didn’t have any meaning for that, but they would just do it symbolically, you know.
Joel David Hamkins
(02:45:48)
And eventually, it would turn out, because of the methods that they had, they would combine and they would cancel and so on, and all the complex parts would cancel out and they’d end up with this actual answer, you know, three plus square root of 17 or whatever. And they could check it and it worked. It was a solution of the original equation. And so it must have been bewildering to them because they would start with this question purely in the real numbers, an algebraic question, and they would march on their method and proceed through the land of nonsense, you know, with these square roots of negative numbers and then end up with an answer that was real again that they could verify was correct.
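That march through the land of square roots of negative numbers can be replayed numerically. A small sketch using Bombelli’s classic cubic (my choice of example, not one mentioned in the conversation): Cardano’s formula for x³ = 15x + 4 forces a detour through complex intermediates, yet the imaginary parts cancel and the real root 4 comes out.

```python
import cmath

# Bombelli's cubic: x^3 = 15x + 4, whose real root is x = 4.
# Cardano: x = u + v, where u^3, v^3 = q/2 +/- sqrt((q/2)^2 - (p/3)^3).
p, q = 15, 4
disc = (q / 2) ** 2 - (p / 3) ** 3   # 4 - 125 = -121: negative discriminant
s = cmath.sqrt(disc)                 # 11i, the step into the complex numbers
u = (q / 2 + s) ** (1 / 3)           # principal cube root of 2 + 11i (= 2 + i)
v = (q / 2 - s) ** (1 / 3)           # principal cube root of 2 - 11i (= 2 - i)
x = u + v                            # the imaginary parts cancel: x is about 4
print(x.real)
```

The question begins and ends in the real numbers; the complex numbers are only the scaffolding in between, much like the forcing extension in the argument described above.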
Joel David Hamkins
(02:46:29)
And so I view this kind of forcing argument that I was just describing in a similar way. You start in set theory and you go to this land of nonsense in the forcing extension, this imaginary world. And you argue and you come back. I mean, you make a consequence in the ground model, and it’s such a beautiful way of arguing.

Surreal numbers

Lex Fridman
(02:46:47)
So speaking of the land of nonsense, I have to ask you about surreal numbers, but first, I need another bathroom break. All right, we’re back, and there’s this aforementioned wonderful blog post on the surreal numbers, and there’s quite a simple surreal number generation process that can basically construct all numbers. So maybe this is a good spot to ask: what are surreal numbers, and what is the way we can generate all numbers?
Joel David Hamkins
(02:47:20)
So the surreal number system is an amazingly beautiful mathematical system that was introduced by John Conway.
Lex Fridman
(02:47:30)
Rest in peace, one of the great mathematicians ever on this earth.
Joel David Hamkins
(02:47:33)
Yes, absolutely. And I really admire his style of mathematical thinking and working in mathematics, and the surreal number system is a good instance of this. So the way I think about the surreal number system is that what it’s doing is providing us a number system that unifies all the other number systems. So it extends the real numbers. Well, not only does it extend the integers, the natural numbers, the rational numbers, and the real numbers, but also the ordinals and the infinitesimals. So they’re all sitting there inside the surreal numbers, and it’s this colossal system of numbers. It’s not even a set. It’s a proper class, it turns out, because it contains all the ordinal numbers.
Joel David Hamkins
(02:48:19)
But it’s generated from nothing by a single rule, and the rule is, so we’re going to generate the numbers in stages, in a transfinite sequence of stages. And at every stage, we take the numbers that we have so far and in all possible ways, we divide them into two sets, a lower set and an upper set, or a left set and a right set. So we divide them into these two sets so that everything in the left set is less than everything in the right set, and then at that moment, we create a new number that fits in the gap between L and R. Okay? That’s it. That’s all we do. So let me say it again.
Joel David Hamkins
(02:49:05)
The rule is we proceed in stages, and at any stage, in all possible ways, we divide the numbers we have into two collections, the left set and the right set, so that everything in the left set is less than everything in the right set. And we create a new number, a new surreal number that will fit in that gap. Okay. So for example, we could start… Well, at the beginning, we don’t have any numbers. We haven’t created anything yet, and so, we could take nothing and we could divide it into two sets, the empty lower set and the empty upper set. I mean, the two empty sets. And everything in the empty set is less than everything in the empty set because that’s a vacuous statement.
Joel David Hamkins
(02:49:48)
So we satisfy the conditions, and we apply the number generation rule, which says we should create a new number. And this is what I call the big bang of numbers, the surreal genesis, when the number zero is born. Zero is the firstborn number that is bigger than everything in the empty set and less than everything in the empty set. Okay, but now we have this number zero, and so therefore, we now can define new gaps. Because if we put zero into the left set and have an empty right set, then we should create a new number that’s bigger than zero and less than everything in the empty set, and that number is called the number one.
Joel David Hamkins
(02:50:30)
And similarly, at that same stage, we could have put zero into the right set, and so that would be the firstborn number that’s less than zero, which is called minus one. So now we have three numbers, minus one, zero, and one, and they have four gaps because there could be a number below minus one or between minus one and zero or between zero and one or above one, and so we create those four new numbers. The first number above one is called two. The first number between zero and one is called 1/2, and then on the negative side, we have minus 1/2 and minus two and so on. So now we have, what is that, seven numbers. So there’s eight gaps between them.
Joel David Hamkins
(02:51:10)
So at the next birthday, as they call them, at the next stage, all the numbers filling those gaps will be born, and then the numbers between those, and between those, and so on. And as the days progress, we get more and more numbers. But those are just the finite birthdays, because as I said, it’s a transfinite process. So at day omega, the first infinite day, we’re going to create a lot of new surreal numbers. Every real number will be born at that stage, because every real number fills a gap in the previously born rational numbers that we had just talked about. Well, not all the rationals: the rational numbers that are born at the finite stages turn out to be just the rationals whose denominator is a power of two. Those are called the dyadic rationals.
Joel David Hamkins
(02:51:57)
So the real numbers are all born on day omega, but also some other numbers are born on day omega. Namely, the ordinal omega itself is the firstborn number that’s bigger than all those finite numbers, and minus omega is the firstborn number that’s less than all those finite numbers. But also, we have the number epsilon, which is the firstborn number that’s strictly bigger than zero and strictly less than all the positive rational numbers. So that’s going to be an infinitesimal number in that gap, and so on. On day omega plus one, we get more numbers, and then omega plus two and so on. And the numbers just keep coming forever. So, this is how you build the surreal number system.
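The finite days of this construction are easy to sketch in code. Below is a small illustration of the rule as described, using a hypothetical `finite_birthdays` helper of my own: on each new day, one number is born in every gap, the midpoint of two neighbors for interior gaps, and one step past each end.

```python
from fractions import Fraction

def finite_birthdays(days):
    """Generate the surreal numbers born on each finite day.

    On day n+1, one new number is born in each gap of the previously
    born numbers: the midpoint of two neighbors for interior gaps, and
    one past each end (min - 1 and max + 1). The numbers born at
    finite days are exactly the dyadic rationals.
    """
    born = [Fraction(0)]            # day 1: zero, the firstborn number
    days_list = [list(born)]
    for _ in range(days - 1):
        existing = sorted(born)
        new = [existing[0] - 1, existing[-1] + 1]   # the two end gaps
        for a, b in zip(existing, existing[1:]):
            new.append((a + b) / 2)                 # interior gaps: midpoints
        born.extend(new)
        days_list.append(sorted(new))
    return days_list

for day, numbers in enumerate(finite_birthdays(3), start=1):
    print(f"day {day}: {[str(x) for x in numbers]}")
```

Day 1 gives 0; day 2 gives -1 and 1; day 3 gives -2, -1/2, 1/2, 2. So after three days there are the seven numbers mentioned above, with eight gaps awaiting day four.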
Joel David Hamkins
(02:52:39)
And then it turns out you can define the arithmetic operations of addition and multiplication in a natural way that is engaging with this recursive definition. So we have sort of recursive definitions of plus and times for the surreal numbers. And it turns out you can prove that they make the surreal numbers into what’s called an ordered field. So they satisfy the field axioms, which means that you have distributivity and commutativity of addition and multiplication, and also you have reciprocals for every non-zero number. You can divide by the number. So you can add and multiply and divide and subtract. And furthermore, you can take square roots.
Joel David Hamkins
(02:53:21)
And furthermore, every odd degree polynomial has a root, which is true in the real numbers, because if you think about, say, a cubic or a fifth degree polynomial, you know it’s going to cross the axis, because it has opposite behaviors at the two infinities, being an odd degree polynomial. On the positive side, it’s going to positive infinity; on the negative side, it’s going to minus infinity. So it has to cross. So we know in the real numbers, every odd degree polynomial has a root, and that’s also true in the surreal numbers. So that makes it what’s called a real closed field, which is a very nice mathematical theory. So it’s really quite interesting how we can find copies of all these other number systems inside the surreal numbers.
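The sign-change argument for odd-degree polynomials is constructive enough to run: widen a bracket until the polynomial changes sign, which must eventually happen for odd degree, then bisect. A generic numerical sketch, my own illustration rather than anything from the conversation:

```python
def find_root(f, lo=-1.0, hi=1.0, tol=1e-12):
    """Locate a root of a continuous function by bisection.

    For an odd-degree polynomial the signs at the two infinities
    differ, so doubling the bracket must eventually trap a sign
    change, and bisection then closes in on a crossing.
    """
    while f(lo) * f(hi) > 0:       # widen until a sign change is bracketed
        lo, hi = 2 * lo, 2 * hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:    # the crossing lies in [lo, mid]
            hi = mid
        else:                      # otherwise it lies in [mid, hi]
            lo = mid
    return (lo + hi) / 2

# x^5 - 3x - 10 has odd degree, so it must cross the axis somewhere.
root = find_root(lambda x: x**5 - 3 * x - 10)
print(root)
```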
Lex Fridman
(02:54:09)
But the surreal numbers are fundamentally discontinuous, as you’ve worried about. What are the consequences of this?
Joel David Hamkins
(02:54:14)
Right. So the surreal numbers have a property that they form a non-standard model of the real field, which means that they provide a notion of infinitesimality that one can use to develop calculus on the grounds of Robinson’s non-standard theory that I had mentioned earlier. But they don’t have the least upper bound property for subcollections. No non-trivial set of surreal numbers has a least upper bound, and there are no convergent sequences in the surreal numbers. And so the ordinary approach to calculus, based on limits and convergence, does not work in the surreal numbers at all. So that’s what I mean when I say the surreal numbers are fundamentally discontinuous. They have a fundamental discontinuity going on.
Joel David Hamkins
(02:55:07)
But you can still do calculus with them, because you have infinitesimals, if you use these non-standard, infinitesimal-based methods of calculus. And people do that. I once organized a conference in New York, and we had John Conway as a speaker at that conference. And there was a question session, and someone asked him, I mean, it’s a bit of a rude question, I think, but they asked it, and the question was, “What is your greatest disappointment in life?” I mean, I would never ask a question like that at a conference in a very public setting.
Joel David Hamkins
(02:55:41)
But Conway was extremely graceful, and he answered by saying, “The surreal numbers…” Not the numbers themselves, but the reception of the surreal numbers, because he had the ambition that the surreal numbers would become a fundamental number system used throughout mathematics and science, because it was able to do non-standard analysis, it was able to do calculus, it unified the ordinals and so on. And it’s such a unifying, amazing structure, a beautiful structure with elegant proofs and sophisticated ideas all around it. And he was disappointed that it never really achieved that unifying status that he had the ambition for. And this, he mentioned as his greatest disappointment.
Lex Fridman
(02:56:32)
Yeah, Donald Knuth tried to celebrate it, but it never quite took hold.
Joel David Hamkins
(02:56:36)
So I don’t want to give the impression, though, that the surreal numbers are not widely studied, because there are thousands of people who are…
Lex Fridman
(02:56:41)
Sure
Joel David Hamkins
(02:56:42)
…studying it. In fact, Philip Ehrlich, who is one of the world experts on the surreal numbers, mentioned to me once that Conway was his own worst enemy with regard to that very issue, because in the Conway style, everything is a game. And he treated the surreal numbers as a kind of plaything, a toy, and maybe that makes people not take them seriously. Although my view is that they are extremely serious and useful and profound, and I’ve been writing a whole series of essays on the surreal numbers for my Substack at Infinitely More. And I just find the whole subject so fascinating and beautiful. I mean, it’s true, I’m not applying it in engineering, which maybe was part of Conway’s ambition.

Conway’s Game of Life

Lex Fridman
(02:57:30)
And I just wanted to, before I forget, mention Conway turning everything into a game. It’s a fascinating point that I hadn’t quite thought about. I think the Game of Life is just one example of the exploration of cellular automata, and cellular automata are one of the most incredible, complicated, fascinating things. It feels like an open door into a world we have not yet explored, and the Game of Life is such a beautiful illustration of that world. But calling it a game… Maybe “life” balances it, because that’s a powerful word, but it’s not quite a game. It’s a fascinating invitation to an incredibly complicated and fascinating mathematical world.
Lex Fridman
(02:58:09)
I think every time I see cellular automata and the fact that we don’t quite have mathematical tools to make sense of that world, it fills me with awe. Speaking of a thousand years from now, it feels like that is a world we might make some progress on.
Joel David Hamkins
(02:58:23)
The Game of Life is a sort of playground for computably undecidable questions because, in fact, you can prove that the question of whether a given cell will ever become alive is computably undecidable. In other words…
Lex Fridman
(02:58:39)
Yeah
Joel David Hamkins
(02:58:39)
…given a configuration, you ask, “Will this particular cell ever be alive in the evolution?” And you can prove that that question is equivalent to the halting problem. It’s computably undecidable. It’s semi-decidable in the sense that if the cell will become alive, then you will know it at a finite stage, because you can just run the Game of Life algorithm and let it run. If the cell ever did come alive, you could say, “Yes, it was alive.” But if you’ve run it for a thousand years and it hasn’t come alive yet, then you don’t necessarily have any basis for saying, “No, it won’t ever come alive,” if the behavior is very complicated.
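To make the semi-decidability concrete, here is a minimal sketch of that "just run it" procedure; the `life_step` and `ever_alive` helpers are my own illustration, not anything from the conversation. It reports True as soon as the target cell lights up, and if the step budget runs out, the honest answer is "don’t know."

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life on an unbounded grid,
    with the live cells stored as a set of (x, y) coordinates."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def ever_alive(live, target, max_steps):
    """Semi-decision procedure: True if `target` becomes alive within
    max_steps generations; None means 'don't know', which is all we can
    guarantee in general, since the unbounded question is equivalent
    to the halting problem."""
    for _ in range(max_steps):
        if target in live:
            return True
        live = life_step(live)
    return True if target in live else None

# A glider drifts one cell diagonally every four generations,
# so it eventually turns on the cell (5, 5).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
print(ever_alive(glider, (5, 5), 60))
```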
Joel David Hamkins
(02:59:18)
Maybe if you have a complete understanding of the evolution of the behavior, then you can say no, but you can prove you won’t always have that understanding, precisely because the problem is equivalent to the halting problem.
Lex Fridman
(02:59:28)
And nevertheless, when you sit back and look and visualize the thing, some little mini cellular automata civilizations are born and die quickly, and some are very predictable and boring, but some have this rich, incredible complexity. And maybe that speaks to a thing I wanted to ask on the halting problem and decidability. You’ve mentioned this thing where if you understand the program deeply, you might be able to say something. So can we say something interesting about, maybe, how many programs, statistically, we know something about in terms of whether they halt or not? Or what does it mean to understand a program deeply enough—

Computability theory

Joel David Hamkins
(03:00:09)
Right
Lex Fridman
(03:00:09)
…to be able to make a prediction?
Joel David Hamkins
(03:00:11)
The main lesson of computability theory, in my view, is that it’s never the case that you can have a thorough understanding of the behavior of a program by looking at the program, and that the content of what you learn from a program, I mean, in the most general case, is always obtained just by running it and looking at the behavior. And the proof of that is there’s a theorem called Rice’s Theorem, which makes that idea completely robust. But I want to just take a little detour towards another question riffing on something that you just said. Namely, one can ask the question, what is the behavior of a random program? So you have some formal computing language, you know, and you want to look at the collection of all programs of a certain size.
Joel David Hamkins
(03:01:06)
There are only finitely many of a given size. And can you say something about the behavior of a randomly chosen one, like, with a certain likelihood it will have a certain behavior? And the answer turns out to be extremely interesting. Once, years ago, Alexey Myasnikov asked me a question. He had this concept of a decision problem with a black hole, meaning a decision problem which is possibly difficult in the worst case, but whose difficulty is concentrated in a very tiny region, called the black hole. Outside of that black hole, it is very easy.
Joel David Hamkins
(03:01:42)
And so, for example, this kind of problem is a terrible one to use as the basis of an encryption scheme. You don’t want to use a black hole problem, because if someone can rob the bank 95% of the time, then that’s not what you want; even any nontrivial percent of the time is too dangerous. So you don’t want to use problems where almost every case is easily solved as the basis of your encryption. And the question Alexey asked me was, “Does the halting problem have a black hole?”
Joel David Hamkins
(03:02:17)
And so if we take, say, the standard model of Turing machines, a one-way infinite tape with zeros and ones on the tape and so on, the head moving back and forth, stopping when it gets into the halt state, then it turns out, we proved, that there is a black hole. And what that means is there’s a computable procedure that decides correctly almost every instance of the halting problem. Even though the halting problem is not decidable, we can decide almost every instance. More precisely, there’s a collection of Turing machine programs such that we can easily decide whether a program is in that collection or not. And for the programs in the collection, we can decide the halting problem for those programs easily.
Joel David Hamkins
(03:03:03)
And furthermore, almost every program is in the collection, in the sense that as the number of states becomes large, the proportion of programs in the collection goes to 100%. So the asymptotic density of the programs is one. And the proof was quite fascinating, because it’s one of these situations where the theorem sounds really surprising when I first tell it, I think, even to computability experts. It’s intriguing to think that you can solve almost every instance of the halting problem. But then when they hear the proof, it’s a complete letdown. Unfortunately, nobody likes the theorem after the proof.
Joel David Hamkins
(03:03:47)
And so the proof is so simple, though. If you know how a Turing machine operates, there’s this infinite paper tape on which the machine writes zeros and ones, and the head moves back and forth according to rigid instructions. And the instructions are all of the form: if the machine is in such and such a state and it’s reading such and such a symbol on the tape, then it should write this symbol on the tape, it should change to this new state specified, and it should either move left or right as specified. So a program consists of instructions like that. If you look at a program, one of the states is the halt state, and that’s when the program halts.
Joel David Hamkins
(03:04:30)
But you can calculate how many programs don’t have any instruction that transitions to the halt state. You can easily calculate the proportion, and in the limit it goes to 1 over e squared, about 13 and a half percent. That’s the limiting proportion of programs with n states that don’t ever halt because they don’t have any instruction saying halt. Those programs obviously never halt, because they can’t halt. They don’t have any instruction that says halt.
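That limiting proportion can be checked numerically. In a simplified model of my own (the exact count depends on how the instruction set is formalized), each of the 2n (state, symbol) instructions of an n-state machine independently chooses its target from the n ordinary states plus the halt state, so the proportion with no halt transition is (n/(n+1))^(2n), which tends to 1/e² ≈ 0.1353:

```python
import math

def no_halt_proportion(n):
    """Proportion of n-state machines with no instruction that
    transitions to the halt state, in a simplified model where each
    of the 2n (state, symbol) instructions independently chooses its
    target among the n ordinary states plus the halt state."""
    return (n / (n + 1)) ** (2 * n)

# The proportion converges to 1/e^2, about 13.5 percent.
for n in (1, 5, 50, 500, 5000):
    print(n, no_halt_proportion(n))
print("limit 1/e^2 =", math.exp(-2))
```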
Lex Fridman
(03:05:04)
So 13% of programs, you could say—
Joel David Hamkins
(03:05:06)
13%, you can say they don’t halt because you just look at them and you can understand them.
Lex Fridman
(03:05:10)
There’s no halt state.
Joel David Hamkins
(03:05:11)
They never change to the halt state, so they can’t halt.
Lex Fridman
(03:05:13)
I mean, that nevertheless is beautiful to know. To show.
Joel David Hamkins
(03:05:17)
So that’s a kind of trivial reason for non-halting. And when I first made that observation, I thought, “Okay, this is the proof strategy.” Because at first the goal was, “Look, that’s a stupid reason for a program not to halt. I just want to pile up as many stupid reasons as I can think of until it gets to more than 50%, and then I can say most.”
Lex Fridman
(03:05:43)
That was brilliant.
Joel David Hamkins
(03:05:43)
Yeah, that was my goal.
Lex Fridman
(03:05:44)
I love this.
Joel David Hamkins
(03:05:45)
Yeah, so we thought more about it, though, and we hit the jackpot, because we found one gigantic stupid reason that converged to 100% in the limit. And so, the stupid reason for a program not to halt is that, well, if you think about the behavior: the head is sitting on the leftmost cell of the tape at the very beginning. It’s in the start state, and the head is following an instruction. And the instruction says, “When you’re in the start state,” which it is, “and you’re reading something on the tape, then you should write something, you should change to a new state, and you should either move left or right.” Half of them move left. But if you move left when you are already at the leftmost cell, then the head falls off.
Joel David Hamkins
(03:06:32)
And so the computation stops because the head fell off the tape. That’s a pretty stupid reason. Okay, but that’s half of them already, just like that. Then some of them went right and changed to a new state. And amongst those, half are going left and half are going right from that place, and most of those are changing to a new state, because when there are a lot of states, it’s very likely that the next state you transition to is new. And so you get this random walk behavior, if you know what that means, where half go left and half go right at each step.
Joel David Hamkins
(03:07:04)
And there’s a theorem due to Pólya, called the Pólya recurrence theorem, which says that a one-dimensional random walk is very likely to come back to where it started. And when that happens for us, then half of them fall off from that place on the next step. And so you can show, using this kind of analysis, that the probability-one behavior of a random Turing machine is that the head falls off the tape before it repeats a state. And that is the stupid proof that shows how to solve almost every instance of the halting problem.
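The random-walk step can be illustrated with a toy Monte Carlo that tracks only the head position. This is a deliberate simplification of my own: it ignores states and tape contents and assumes each move is an independent fair coin flip.

```python
import random

def falls_off(max_steps, rng):
    """Simulate the head position as a symmetric random walk starting
    on the leftmost cell of a one-way infinite tape. Returns True if
    the head steps off the left end within max_steps moves."""
    pos = 0
    for _ in range(max_steps):
        pos += rng.choice((-1, 1))
        if pos < 0:            # the head fell off the tape
            return True
    return False

rng = random.Random(0)
trials = 10_000
for budget in (10, 100, 1000):
    hits = sum(falls_off(budget, rng) for _ in range(trials))
    print(f"{budget:5d} steps: P(fall off) ~ {hits / trials:.3f}")
```

By the Pólya recurrence theorem the walk returns to its starting point with probability one, so as the step budget grows, the estimated probability of falling off climbs toward 1.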
Joel David Hamkins
(03:07:40)
Because when that happens, we can answer the halting question: “No, the computation stopped because the machine crashed, not because it halted, so it doesn’t count as halting on some accounts.” Or, if you want to define crashing as halting, then the answer is yes. But in any case, however it is that you set up your formalism, you’re going to be able to answer the question for the behavior of the machine when the head falls off.
Lex Fridman
(03:08:03)
So statistically, in the limit, you solve the halting problem.
Joel David Hamkins
(03:08:08)
Yes, exactly. Computably solve it.
Lex Fridman
(03:08:11)
What do we take from that? Because you didn’t solve the halting problem.
Joel David Hamkins
(03:08:15)
No, it’s impossible to fully solve…
Lex Fridman
(03:08:17)
Right
Joel David Hamkins
(03:08:17)
…the halting problem correctly in all cases.
Lex Fridman
(03:08:20)
That’s pretty cool. That’s kind of… I mean, I don’t know. This is…
Joel David Hamkins
(03:08:22)
It’s a probabilistic way… I mean, it’s probabilistic in the sense that we’re solving almost all instances computably. There are versions of this that are maybe more interesting from the point of view of complexity theory, and actually useful. I mean, there’s the whole P versus NP problem and so on. And there’s this genre of NP-complete problems, problems that seem infeasible: the known algorithms take exponential time, and they’re not known to be polynomial-time solvable, although it’s an open question whether there is a polynomial-time, feasible algorithm. And for most of the NP-complete problems, you can prove that there’s a polynomial-time approximation that solves almost all instances…
Joel David Hamkins
(03:09:09)
…in a feasible amount of time. Take the knapsack problem, packing problems, the satisfiability problem, and so on. Depending on how you set up the formalism, you can prove, and I’ve proven many instances of this, though I think it’s widespread among almost all the NP-complete problems, the difficult problems, these important problems for industrial application that we actually want to solve, that we can have feasible algorithms that solve almost every instance of them.
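As one standard illustration of a feasible heuristic (my example, not the specific algorithms behind these results), a greedy rule for the 0/1 knapsack problem runs in O(n log n) and comes close to optimal on many random instances, while rare adversarial instances defeat it:

```python
def greedy_knapsack(items, capacity):
    """Greedy 0/1 knapsack heuristic: take items in decreasing
    value/weight order while they still fit. Fast and near-optimal
    on many random instances, but not optimal in the worst case."""
    chosen, total_value, remaining = [], 0, capacity
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1],
                                reverse=True):
        if weight <= remaining:
            chosen.append((value, weight))
            total_value += value
            remaining -= weight
    return total_value, chosen

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight) pairs
print(greedy_knapsack(items, capacity=50))
```

On this particular instance the greedy answer is 160, while the true optimum is 220 (take the two heavier items). Generic random instances tend to be easy for such heuristics, with the difficulty concentrated in relatively rare cases, which echoes the black-hole picture in miniature.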

P vs NP

Lex Fridman
(03:09:42)
The amount of fields and topics you’ve worked on is truly incredible. I have to ask about P versus NP. This is one of the big open problems in complexity theory. So for people who don’t know, it’s about the relation between computation time and problem complexity. Do you think it will ever be solved? And is there any chance the weird counterintuitive thing might be true, that P equals NP?
Joel David Hamkins
(03:10:06)
Yeah, that’s an interesting question. Sometimes people ask about whether it could be independent, which I think is-
Joel David Hamkins
(03:10:11)
…an interesting question for logicians. And of course, one has to ask: if you’re entertaining the idea of independence, independence over which theory? Because every statement is going to be independent over an extremely weak theory, so it doesn’t make sense to say a statement is independent all by itself. It’s only independent relative to a theory, right? So the way I think about P versus NP is that… Of course, it’s a theoretical question about the asymptotic behavior of these problems. For a problem to be in P means that there is a computable decision procedure that runs in time bounded by some polynomial. But the coefficients on that polynomial could be enormous, and the degree could be incredibly high.
Joel David Hamkins
(03:10:59)
And so for small inputs, it doesn’t make sense to talk about this polynomial time feasibility with respect to, say, the range of problem inputs that we will ever give it in our lifetime, or in the span of human civilization, or whatever. Because it’s an asymptotic property, it’s really only in the limit, as the size of the inputs goes to infinity, that polynomial time or NP becomes relevant. And so maybe it’s important to keep that in mind when… Sometimes you find kind of overblown remarks made about how, if P equals NP, then this will be incredibly important for human civilization, because it means that we’ll have feasible algorithms for solving these incredibly important…
Joel David Hamkins
(03:11:45)
…problems in NP. You know, that it would cause immense wealth for human societies and so on because we would be able to solve these otherwise intractable problems, and that would be the basis of new technology and industry and so forth. I mean, people make these kinds of remarks, but…
Lex Fridman
(03:12:03)
Of course.
Joel David Hamkins
(03:12:04)
…you have to temper those remarks with the realization that the statements P equals NP and P not equal NP are not about these practical things at all, because of the asymptotic nature of the question itself. Okay, that’s on the one hand. But on the second hand, we already have the algorithm, so we could use it already, except it’s a terrible algorithm, because it involves this incredible amount of coding and so on.
Lex Fridman
(03:12:30)
And on the third hand, like you said, we already have approximation algorithms that…
Joel David Hamkins
(03:12:34)
Yes.
Lex Fridman
(03:12:34)
…that from a pragmatic perspective, solve all the actual real engineering problems of human civilization.
Joel David Hamkins
(03:12:42)
Like the SAT solvers work amazingly well, you know, in lots and lots of cases, even though, if P is not equal to NP, there won’t be a polynomial time SAT solver. But actually, the SAT solver approximations are really quite amazing.

Greatest mathematicians in history

Lex Fridman
(03:12:59)
Sorry to ask the ridiculous question, but who is the greatest mathematician of all time? Who are the possible candidates? Euler, Gauss, Newton, Ramanujan, Hilbert. We mentioned Gödel, Turing, if you throw him into the bucket.
Joel David Hamkins
(03:13:14)
So this is, I think, an incredibly difficult question to answer. Personally, I don’t really think this way about ranking mathematicians by greatness. Um…
Lex Fridman
(03:13:28)
So you don’t have, like… You know, some people have a Taylor Swift poster in their dorm room. You don’t have it.
Joel David Hamkins
(03:13:33)
I mean, if you forced me to pick someone, it would probably be Archimedes because…
Lex Fridman
(03:13:37)
Archimedes
Joel David Hamkins
(03:13:37)
…he had such incredible achievements in such an early era, achievements which totally transcended the work of the other people of his era. But I also have the view that I want to learn mathematics and gain mathematical insight from whoever can provide it and wherever I can find it. And it isn’t always coming from the greats. Sometimes the greats are doing things that are just first, and somebody else could have easily been first. So there’s a kind of luck aspect to it when you go back and look at the achievements. And because of this progress issue in mathematics that we talked about earlier, namely that we really do understand things much better now than they used to.
Joel David Hamkins
(03:14:22)
And when you look back at the achievements that had been made, then maybe you can imagine thinking, “Well, somebody else could’ve had that insight also.” And maybe they would have… It’s already a known phenomenon that disparate mathematicians end up proving essentially similar results at approximately the same time. But, okay, the person who did it first is getting the credit and so on.
Lex Fridman
(03:14:48)
What do you make of that? Because I see that sometimes when mathematicians… This also applies in physics and science, where completely separately, discoveries are made…
Joel David Hamkins
(03:14:58)
Right. Yeah.
Lex Fridman
(03:14:58)
…maybe at a very similar time. What does that mean?
Joel David Hamkins
(03:15:01)
It’s relatively common. I mean, I think it’s like certain ideas are in the air and being thought about but not fully articulated, and so this is the nature of growth in knowledge.
Lex Fridman
(03:15:13)
Do you understand where ideas come from?
Joel David Hamkins
(03:15:16)
Not really.
Lex Fridman
(03:15:17)
I mean, what’s your own process when you’re thinking through a problem?
Joel David Hamkins
(03:15:22)
Yeah, that’s another difficult question. I suppose it has to do with my mathematical style. My style as a mathematician is that I don’t really like difficult mathematics. What I love is simple, clear, easy-to-understand arguments that prove a surprising result. That’s my favorite situation. And actually, the question of whether it’s a new result or not is somehow less important to me. And so that has to do with this question of the greats and so on, whoever does it first. Because I think, for example, if you prove a new result with a bad argument or a complicated argument, that’s great, because you proved something new. But I still want to see the beautiful, simple argument, because that’s what I can understand.
Joel David Hamkins
(03:16:16)
Also, I’m kind of naturally skeptical about any complicated argument, because it might be wrong. And if I can’t really understand it fully, like every single step all at once in my head, then I’m just worried maybe it’s wrong. And so there are these different styles. Sometimes mathematicians get involved with these enormous research projects that involve huge numbers of working parts and different technology coming together. I mean, mathematical technology, not physical technology.
Lex Fridman
(03:16:48)
And sometimes it actually involves now more and more something like the Lean programming language where some parts are automated, so you have this gigantic…
Joel David Hamkins
(03:16:54)
Yeah, yeah, I see. Well, that’s another issue because maybe those things are less subject to skepticism when it’s validated…
Lex Fridman
(03:17:02)
Sure
Joel David Hamkins
(03:17:02)
…by Lean. But I’m thinking about the case where the arguments are just extremely complicated, and so I sort of worry whether it’s right or not, whereas you know, I like the simple thing. So I tend to have often worked on things that are a little bit off the beaten path from what other people are working on from that point of view.
Lex Fridman
(03:17:23)
Your curiosity draws you towards simplicity.
Joel David Hamkins
(03:17:25)
Yeah. I want to work on the things that I can understand and that are simple. Luckily, I’ve found that I’ve been able to make contributions that other people seem to like, in this way, in this style. So I’ve been fortunate from that point of view. My process always, though, and I’ve recommended this always to my students, is just a kind of playful curiosity. So whenever I have…
Joel David Hamkins
(03:17:55)
Whenever there’s an idea or a topic, I just play around with it: change little things, or understand a basic case and then make it more complicated, or press things a little bit on this side, or apply the idea to my favorite relevant example and see what happens. You just play around with ideas, and this often leads to insights that lead to more methods, and pretty soon you’re making progress on the problem. So this is basically my method: I just fool around with the ideas until I can see a path through towards something interesting, and then prove that. And that’s worked extremely well for me. So I’m pretty pleased with that method.
Lex Fridman
(03:18:47)
You do like thought experiments where you anthropomorphize like you mentioned?
Joel David Hamkins
(03:18:51)
Yeah, yeah. So this is a basic tool. I mean, I use this all the time. You imagine a set-theoretic model, a model of ZFC, as like a place where you’re living, and you might travel to distant lands by forcing. This is a kind of metaphor for what’s going on. Of course, the actual arguments aren’t anything like that because there’s not land and you’re not traveling and you’re not…
Lex Fridman
(03:19:13)
But you allow your mind to visualize that kind of thing-
Joel David Hamkins
(03:19:15)
Yeah
Lex Fridman
(03:19:15)
… in the natural real world.
Joel David Hamkins
(03:19:16)
And it helps you to understand. Particularly when there are parts of the argument that are in tension with one another, then you can imagine that people are fighting or something, or you imagine it in game-theoretic terms, two players trying to win. So there’s a kind of tension. And those metaphorical ways of understanding a mathematical problem are often extremely helpful in realizing, aha, the enemy is going to pick this thing to be like that, because it makes it more continuous or whatever, and then we should do this other thing in order to… It makes you realize mathematical strategies for finding the answer and proving the theorem you want to prove, because of the ideas that come out of that anthropomorphization.
Lex Fridman
(03:20:01)
What do you think of somebody like Andrew Wiles, who spent seven years grinding at one of the hardest problems in the history of mathematics? And maybe contrasting that a little bit with somebody who’s also brilliant, Terence Tao, who basically says if he hits a wall, he just switches to a different problem and he comes back and so on. So it’s less of a focused grind for many years without any guarantee that you’ll get there, which is what Andrew Wiles went through.
Joel David Hamkins
(03:20:30)
Right.
Lex Fridman
(03:20:30)
Maybe Grigori Perelman did the same.
Joel David Hamkins
(03:20:32)
I mean, Wiles proved an amazing theorem; the Fermat’s Last Theorem result is incredible. But that is a totally different style than my own practice, working in isolation like that. For me, mathematics is often a kind of social activity. I counted, and it’s pushing towards a hundred collaborators, co-authors on various papers and so on. And, you know, if anybody has an idea they want to talk about with me, and I’m interested in it, then I’m going to want to collaborate with them, and we might solve the problem and have a joint paper or whatever. You want to have a joint paper? Let me-
Lex Fridman
(03:21:06)
Yeah, exactly. Let’s go.
Joel David Hamkins
(03:21:08)
So my approach to making mathematical progress tends to involve working with other people quite a lot rather than just working on my…
Joel David Hamkins
(03:21:17)
…own, and I enjoy that aspect very much. So, personally, I couldn’t ever do what Wiles did. Maybe I’m missing out. Maybe if I locked myself in the bedroom and just worked on whatever, then I would solve it. But I tend to think that, no, actually, being on MathOverflow so much, I’ve gotten so many ideas; so many papers have grown out of the MathOverflow conversations, the back and forth. Someone posts a question, I post an answer to part of it, then someone else has an idea and it turns into a full solution, and then we have a three-way paper coming out of that. That’s happened many times. And so for me, I enjoy this kind of social aspect to it. And it’s not just the social part.
Joel David Hamkins
(03:22:01)
Rather, that’s the nature of mathematical investigation as I see it, is putting forth mathematical ideas to other people and they respond to it in a way that helps me learn, helps them learn, and I think that’s a very productive way of undertaking mathematics.
Lex Fridman
(03:22:20)
When you work solo on mathematics, from my outsider perspective, it seems terrifyingly lonely. Because especially if you do stick to a single problem, especially if that problem has broken many brilliant mathematicians in the past, you’re really putting all your chips in. And just the torment… the rollercoaster of day to day. Because I imagine you have these moments of hopeful, mini breakthroughs, and then you have to deal with the occasional realization that, no, it was not a breakthrough, and that disappointment.
Lex Fridman
(03:23:00)
And then you have to go through, like, a weekly, maybe daily disappointment where you hit a wall, and you have no other person to brainstorm with. You have no other avenue to pursue. And it’s, I don’t know, the mental fortitude it takes to go through that. But everybody’s different. Some people are recluses and just really find solace in that lone grind. I have to ask about Grisha, Grigori Perelman. What do you think of him famously declining the Fields Medal and the Millennium Prize? He stated, “I’m not interested in money or fame. The prize is completely irrelevant to me. If the proof is correct, then no other recognition is needed.” What do you think of him turning down the prize?
Joel David Hamkins
(03:23:52)
I guess what I think is that mathematics is full of a lot of different kinds of people. And my attitude is that, hey, it doesn’t matter. Maybe they have a good math idea, and so I want to talk to them and interact with them. And so I think the Perelman case is maybe an instance where, you know, he’s such a brilliant mind and he solved this extremely famous and difficult problem, and that is a huge achievement. But he also had these views about, you know, prizes and somehow, I don’t really fully understand why he would turn it down.
Lex Fridman
(03:24:33)
I do think I have a similar thing, just observing Olympic athletes who, in many cases, don’t get paid very much, and nevertheless dedicate their entire lives to the pursuit… of the gold medal. I think his case is a reminder that some of the greatest mathematicians, some of the greatest scientists and human beings do the thing they do, take on these problems for the love of it, not for the prizes or the money or any of that. Now, as you’re saying, if the money comes, you could use it for stuff. If the prizes come, and the fame, and so on, that might be useful. But the reason fundamentally the greats do it is because of the art itself.
Joel David Hamkins
(03:25:13)
Sure, I totally agree with that. I mean, I share the view. That’s, you know, that’s why I’m a mathematician is because I find the questions so compelling and I’ve spent my whole life thinking about these problems. But, you know, but like if I won an award…
Lex Fridman
(03:25:32)
Yeah, it’s great. It’s great. I mean, I’m pretty sure you don’t contribute to MathOverflow for the wealth and the power that you gain. I mean, it’s, yeah, genuine curiosity.
Joel David Hamkins
(03:25:46)
Well, you asked who the greatest mathematician is, and of course if we want to be truly objective about it, we would need a kind of an objective criteria…
Lex Fridman
(03:25:55)
Criteria, yeah.
Joel David Hamkins
(03:25:55)
…about how to evaluate the relative, you know, strength and the reputation of various mathematicians. And so, of course, we should use MathOverflow score… …Because…
Lex Fridman
(03:26:06)
That you’re definitively… I mean, nobody’s objectively the greatest mathematician of all time.
Joel David Hamkins
(03:26:10)
Yes, that’s true. I’ve also argued that tenure and promotion decisions should be based…
Lex Fridman
(03:26:15)
Based on MathOverflow.
Joel David Hamkins
(03:26:16)
…Yeah. So my daughter introduced me to her boyfriend. …And told me that she had a boyfriend. And I, um…
Lex Fridman
(03:26:25)
Asked him what his MathOverflow…
Joel David Hamkins
(03:26:26)
I wanted to know, first of all, what is his chess rating, and secondly, what is his MathOverflow score?

Infinite chess

Lex Fridman
(03:26:34)
Oh, man. Well, that’s the only way to judge a person, I think. That’s, I think, objectively correct. Yeah. I mean, since you bring up chess, I’ve got to ask you about infinite chess. I can’t let you go. You’ve, I mean, you’ve worked on a million things, but infinite chess is one of them. Somebody asked on MathOverflow for the mathematical definition of chess.
Joel David Hamkins
(03:26:54)
Right.
Lex Fridman
(03:26:54)
So can we talk about the math of chess and the math of infinite chess? What is infinite chess?
Joel David Hamkins
(03:26:59)
Oh, yeah, absolutely. Infinite chess is fantastic. Chess ordinarily is played on this tiny, tiny board. It’s an eight by eight board, right? So when you play chess, normally it’s on the eight by eight board. But we want to play infinite chess, so on the integer board. It’s infinite in all four directions, you know, but it still has the chessboard pattern, and maybe there are pieces on this board, maybe infinitely many pieces we allow. But one difference from finite ordinary chess, in infinite chess, we don’t play from a standard starting position. Rather, you…
Joel David Hamkins
(03:27:36)
The interesting situation is that you present a position where there’s a lot of pieces already on the board in a complicated way, and you say, “What would it be like to start from this position or from that one?” You know, and we want to produce positions that have interesting features, meaning mathematically interesting features. And so I can tell you for example… probably a lot of people are familiar with, say, the mate in two genre of chess problem. You know, you have a chess problem and it’s white to mate in two, which means that white is going to make two moves, but the second move is going to be a checkmate.
Joel David Hamkins
(03:28:15)
Or maybe mate in three or mate in five or whatever. We can have mate-in-N positions for any N. In infinite chess, you can create a position which is not mate in N for any N, but white nevertheless has a winning strategy. In other words, let me say it again: there are positions in infinite chess that white can definitely win. White is going to make checkmate after finitely many moves, but there’s no particular N for which white can guarantee to win in N moves.
Lex Fridman
(03:28:56)
There’s no N?
Joel David Hamkins
(03:28:57)
No N. So it’s not mate in N for any N, but it’s a white win, infinitely many. The way to think about it is white is going to win, but black controls how long it takes.
Lex Fridman
(03:29:09)
Ah, got it.
Joel David Hamkins
(03:29:10)
But it’s doomed. Black can say, “Well, I know you’re going to win, but this time it’s going to… you’re going to take a thousand moves at least.” And… Or maybe in a different way of playing, black can say, “Well, I know you’re going to win, but this time you’re going to have to take a million moves.” For any number, black can say that. So these are really interesting positions. There’s a position in my first infinite chess paper. So it’s black to play in this position, and if black doesn’t move that rook there, then white is going to checkmate pretty quickly.
Lex Fridman
(03:29:41)
By the way, can we describe the rules of infinite chess?
Joel David Hamkins
(03:29:44)
Right. So the rules of infinite chess are the… there’s just the ordinary pieces, and they move on this infinite board, which is just a chessboard, but extended in all directions.
Joel David Hamkins
(03:29:53)
Infinitely, with no edge. So there’s no boundary. But the pieces move just like you’d expect. So the knights move just the same and the rooks move, you know, on the ranks and files, and the bishops move on the same color diagonals and… just like you would expect, except they can move as far as they want, you know, if there’s no intervening piece in the way. The one thing is that… Okay, so the white pawns always move upwards and the black pawns always move downwards, but when they’re capturing, the pawns, you know, capture on the diagonal. So I think the piece movement is pretty clear. There are a couple of differences that you have to pay attention to from ordinary chess.
Joel David Hamkins
(03:30:32)
For example, there’s this threefold repetition rule in ordinary chess, but we just get rid of this for infinite chess, because, of course, threefold repetition is just a proxy for infinite play. The real rule is that infinite play is a draw, not that threefold repetition is a draw. That’s just a kind of convenient approximation to what I view as the actual rule, which is that infinite play is a draw. So the only way to win is to make checkmate on the board at a finite stage of play. And if you play infinitely, you haven’t done that, and so it’s a draw.
Lex Fridman
(03:31:05)
And the pawns can’t be converted into something else?
Joel David Hamkins
(03:31:06)
Right, exactly. There’s no promotion, because there’s no edge. And this position that we were just talking about is a position with game value omega, which means that, because it has an ordinal value, white is going to win, but black can play as though counting down from omega. What is the nature of counting down from omega? If you’re black and you need to count down from omega, then you have to say a finite number.
Joel David Hamkins
(03:31:33)
… and then after that, it’s going to be at most that many numbers afterwards to count down, right? So the nature of counting down from omega is that you take this giant step on the first count, and then after that, you subtract one each time. You can’t subtract one from omega because that’s not an ordinal. So if you count down from omega, you have to go to some finite number, and then if you just subtract one each time, then that’s how many more moves you get. So that’s the sense in which black can make it take as long as he wants because he can pick his initial number to be whatever he wants.
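The countdown described here can be sketched as a toy simulation. This is a hypothetical illustration of the ordinal bookkeeping only, not of actual chess play: from value omega, black’s first “count” drops to some finite number n, and from then on the value decreases by exactly one per move.

```python
# Toy model of the "game value omega" idea (hypothetical, not real chess):
# black's first move amounts to announcing any natural number n; after that
# the "ordinal clock" sits at a finite successor value, so it can only count
# down by one per move until white's guaranteed checkmate.

def moves_until_white_wins(black_first_choice: int) -> int:
    """Return how many moves black can stall once it picks its finite number."""
    clock = black_first_choice  # the one big step down from omega to finite n
    moves = 0
    while clock > 0:
        clock -= 1  # below omega, each move subtracts exactly one
        moves += 1
    return moves

# White wins for every choice, but no single bound N covers all of black's
# options: black controls how long the game takes.
for n in (10, 1_000, 1_000_000):
    assert moves_until_white_wins(n) == n
```

This is the sense in which the position is a white win but not mate in N for any N: the stall length equals whatever black chose, and black may choose any natural number.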
Lex Fridman
(03:32:05)
By the way, I, I just noticed that you were citing a MathOverflow question, which is really cool.
Joel David Hamkins
(03:32:10)
That’s right, yeah. My interest in infinite chess was born on MathOverflow because someone asked this question.
Lex Fridman
(03:32:15)
Noam Elkies asked this question. That’s so cool to see a MathOverflow citation in an arXiv paper. That’s cool. How do you construct the position-
Joel David Hamkins
(03:32:28)
Right
Lex Fridman
(03:32:28)
…that satisfies this? Is there an algorithm for construction?
Joel David Hamkins
(03:32:31)
No. This is an act of mathematical creativity, really, to come up with… I had a co-author, Corey Evans. He’s a U.S. national master chess player. Very strong chess player. He’s also a philosophy professor.
Lex Fridman
(03:32:49)
Your collaborations are wonderful. That’s great.
Joel David Hamkins
(03:32:53)
So I met him because he was a grad student at CUNY, where I was at the time in New York. And also he was my son’s chess coach when my son was… playing chess competitively in elementary school. Corey was the coach. And so we knew him that way. That was right around the time when I was getting interested in infinite chess, and I knew I needed a chess-knowledgeable partner. Corey was invaluable for the paper, because the proofs in infinite chess are extremely finicky: you create these positions, but the details of the argument have to do with chess reasoning, you know? My chess rating wasn’t quite up to it, because I would create the positions… Almost all the positions are ones that I made, but this is like after many generations…
Joel David Hamkins
(03:33:50)
…of being corrected by Corey, because Corey would come and say, “Hey, you know, this pawn is hanging, and it breaks your argument,” or, “You know, this bishop can leak out of the cage,” or whatever. And so the process was, I knew kind of in terms of these ordinals what we needed to create with…
Joel David Hamkins
(03:34:10)
…the position, and I would struggle to do it and create something that sort of had the features that I wanted, and then I would show it to Corey and he would say, “Look, it doesn’t work because of this and that,” and so on. This kind of back and forth was extremely helpful to me, and eventually we, you know, converged on arguments that were correct. So, yeah, it’s quite interesting. Also, maybe another thing to say is the follow-up paper to this one was a three-way paper with Corey, myself, and my PhD student, Norman Perlmutter, in which we improved the bound. So we were aiming to produce more and more chess positions with higher and higher ordinal values.
Joel David Hamkins
(03:34:52)
The initial position was value omega, and then we made omega-squared and omega-cubed in the first paper, and then in this three-way collaboration, we made omega to the fourth.
Lex Fridman
(03:35:03)
The title of the paper is “A Position in Infinite Chess with Game Value Omega to the 4th.”
Joel David Hamkins
(03:35:09)
Right. And so at the time, this was the best-known result, the state of the art, but since that time, it’s been improved now dramatically. And in fact, we know now that every countable ordinal arises as the game value of a position in infinite chess, so it’s a fantastic result.
Lex Fridman
(03:35:28)
Before I forget, let me ask about your views on AI and LLMs that are getting better and better at mathematics. We’ve spoken about collaborators, and you have so many collaborators. Do you see AI as a potential great collaborator to you as a mathematician, and what do you think the future role of those… …Kinds of AI systems is?
Joel David Hamkins
(03:35:52)
I guess I would draw a distinction between what we have currently and what might come in future years. I’ve played around with it and I’ve tried experimenting, but I haven’t found it helpful at all, basically zero. It’s not helpful to me. And, you know, I’ve used various systems and so on, the paid models and so on, and my typical experience interacting with AI on a mathematical question is that it gives me garbage answers that are not mathematically correct. And so I find that not helpful and also frustrating. If I was interacting with a person, the frustrating thing is when you have to argue about whether or not the argument they gave you is right, and you point out exactly the error—
Joel David Hamkins
(03:36:47)
…and the AI says, “Oh, it’s totally fine.” If I were having such an experience with a person, I would simply refuse to talk to that person again. But okay, one has to overlook these kinds of flaws. And so I tend to be a skeptic about the current value of the current AI systems as far as mathematical reasoning is concerned. It seems not reliable. But I know for a fact that there are several prominent mathematicians who I have enormous respect for who are saying that they are using it in a way that’s helpful, and I’m often very surprised to hear that based on my own experience, which is quite the opposite. Maybe my process isn’t any good, although I use it for other things like programming or image generation and so on, and it’s amazingly powerful and helpful.
Joel David Hamkins
(03:37:50)
But for mathematical arguments, I haven’t found it helpful, and maybe I’m not interacting with it in the right way yet. And so maybe I just need to improve my skill. But also I wonder: these examples that are provided by other people maybe involve quite a huge amount of interaction, and so I wonder if maybe the mathematical ideas are really coming from the person, you know, these great mathematicians—
Joel David Hamkins
(03:38:21)
…who are doing it rather than the AI. And so I tend to be skeptical. But also, I’m skeptical for another reason, and that is because of the nature of the large language model approach to AI doing mathematics. I recognize that the AI is trying to give me an argument that sounds like a proof rather than an argument that is a proof. The motivation is misplaced. And so I worry that this is a very dangerous source of error because it often happens in mathematics that… I mean, if I think back to when I was an undergrad, here at Caltech, I was a math major eventually, and at that time, LaTeX was a pretty new thing and I was learning LaTeX, and so I was typing up my homeworks in LaTeX and they looked beautiful. Actually, they looked like garbage. From my current standards—
Joel David Hamkins
(03:39:28)
…I’m sure it was terrible. Except at the time, I didn’t know anything. I was an undergrad and LaTeX was sort of unheard of, and so I was producing these beautifully typeset, you know, problem sets, solutions, and so on. And I would print it up and submit it and so on, and the grades would come back, terrible grades, and I realized what was happening: the copy was so beautiful, mathematically typeset in this way, it looked like the kind of mathematics you find in a book. Because basically, that’s the only time you saw that kind of mathematical typesetting, in a professional, published book. And that mathematics was almost always correct, in a book, right? And so I had somehow lost my…
Joel David Hamkins
(03:40:20)
…because it was so beautiful, and I’m used to only seeing that kind of typesetting when an argument was totally right. I wasn’t critical enough and was making these bonehead mistakes in the proofs. So, okay, I corrected this, of course.
Lex Fridman
(03:40:36)
But this kind of effect is very much real with the modern LLM systems.
Joel David Hamkins
(03:40:39)
Yes.
Lex Fridman
(03:40:40)
That’s right.
Joel David Hamkins
(03:40:40)
And so I think that the chat programs and so on are producing these arguments that look really… That’s what they’re striving to do, that’s what they’re designed to do. They’re not designed to make a logically correct argument. They’re designed to make something that looks like a logically correct argument. And it’s easy to get fooled if you’re not skeptical. And so that’s why I worry a bit when people rely on AI for mathematical arguments. I mean, tying them to Lean and the formal proof verification systems and so on, this is a totally different…
Joel David Hamkins
(03:41:17)
…way of operating. But for the ordinary person sitting down and using chat to come up with a mathematical argument, I think it’s a dangerous source of error if you’re not especially attuned to this very issue that the AI is going to produce something that’s not grounded in mathematical understanding, but rather something that is trying to look like something that is grounded…
Joel David Hamkins
(03:41:41)
…in mathematical understanding. And those are not the same thing at all. And furthermore, I really wonder whether one can make a system that produces genuine mathematical insight without being based in what I would view as mathematical understanding, as opposed to the text-generation systems. The methods that are used don’t seem closely enough grounded in understanding of the underlying mathematical concepts, but rather grounded in the way words appear on a page in arguments about those concepts, which is not the same thing.
Lex Fridman
(03:42:17)
So there’s a couple of things to say there. One, I think there is a real skill in providing the LLM system with enough information to be a good collaborator. Because you really are dealing with a different… It’s not a human being. You really have to load in everything you possibly can from your body of work, from the way you’re thinking, and that’s a real skill. And then the other thing is, for me, if it’s at all anything like programming, because I have a lot of colleagues and friends who are programmers who feel similarly to you. And for me, I’ve gotten better and better at giving as much information as possible to the systems in a really structured way, maybe because I just like natural language as a way to express my thinking.
Lex Fridman
(03:43:08)
And then the benefit comes from the inspiration that the system can provide by its ability to know a lot of things and make connections between disparate fields and between disparate concepts. And in that way, it provides not the answer but the inspiration, the handholding, the camaraderie that helps me get to the answer, because it does know a lot more than me, like knowledge. And if you give it a lot of information and ask the broader questions, it can make some really beautiful connections. But I do find that I have to be extremely patient, like you said. The number of times it’ll do something dumb where I feel like, “You don’t get this at all, do you?” That’s a source of a lot of frustration for us humans. Like, “This…
Lex Fridman
(03:44:04)
…Wait, this thing doesn’t understand at all.” If you can have the patience to look past that, there might be some brilliant little insights that it can provide.
Joel David Hamkins
(03:44:16)
Right.
Lex Fridman
(03:44:16)
At least for me in the realm of programming. I should say programming, there’s just so much training data. There’s so much there. At least I see the light at the end of the tunnel of promising possibilities of it being a good collaborator, versus something that gives you really true genius-level insights.
Joel David Hamkins
(03:44:39)
Right. It’s probably true. I also find it likely that, as far as mathematical training data is concerned… I just have to assume that MathOverflow answers are part of the training data.
Lex Fridman
(03:44:52)
Yes, of course.
Joel David Hamkins
(03:44:53)
It’s so…
Lex Fridman
(03:44:54)
And you’re…
Joel David Hamkins
(03:44:56)
So-
Lex Fridman
(03:44:56)
I mean, you’re talking to yourself, essentially.
Joel David Hamkins
(03:44:57)
Yeah, maybe.

Most beautiful idea in mathematics

Lex Fridman
(03:45:00)
Sorry for the ridiculously big question, but what idea in mathematics is most beautiful to you? We’ve talked about so many.
Joel David Hamkins
(03:45:10)
The most beautiful idea in mathematics is the transfinite ordinals. This is the number system invented by Georg Cantor for counting beyond infinity, just the idea of counting beyond infinity. I mean, you count through the ordinary numbers, the natural numbers, zero, one, two, three, and so on, and then you’re not done, because after that comes omega, and then omega plus one and omega plus two and so on. And you can always add one. And so of course after you count through all those numbers of the form omega plus N, then you get to omega plus omega, the first number after all those. And then comes omega plus omega plus one, and so on. You can always add one. And so you can just keep counting through the ordinals. It never ends.
Joel David Hamkins
(03:45:59)
Eventually, you get to omega times three, omega times four, and so on, and then the limit of those numbers, the first number that comes after all those numbers, will be omega squared. And this one is the first compound limit ordinal. A limit ordinal is one of these numbers, an ordinal, that doesn’t have an immediate predecessor, like omega and omega times two, omega times three. Those are all limit ordinals. But omega squared is a limit ordinal that’s also a limit of limit ordinals, because omega times three, omega times four, and so on, those are all limit ordinals that limit up to omega squared. And then, of course, you form omega squared plus one, and then omega squared plus two, and so on, and it never stops.
Joel David Hamkins
(03:46:43)
And it’s just absolutely beautiful and amazing, and furthermore, it forms the foundation for these transfinite recursive constructions that came later, starting with the Cantor-Bendixson theorem that I mentioned, and continuing with the construction of the V hierarchy. Gödel’s constructible universe is built this way, and Zermelo’s proof of the well-order principle using the axiom of choice is a transfinite recursive construction. And so the idea of just counting past infinity is so simple and elegant, and has led to so much fascinating mathematics.
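As a compact sketch of the counting walked through above, the beginning of the transfinite ordinals can be written out as follows, with each ellipsis marking a limit stage, the first number past everything before it:

```latex
0,\ 1,\ 2,\ \ldots,\ \omega,\ \omega+1,\ \omega+2,\ \ldots,\ \omega\cdot 2,\ \ldots,\ \omega\cdot 3,\ \ldots,\ \omega^2,\ \omega^2+1,\ \ldots,\ \omega^3,\ \ldots,\ \omega^\omega,\ \ldots
```

Successor ordinals come from adding one; limit ordinals like $\omega$ and $\omega^2$ come from taking the first number beyond an infinite increasing sequence.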
Lex Fridman
(03:47:27)
Yeah, the infinity’s not the end. And what about philosophy? What to you is the most beautiful idea in philosophy?
Joel David Hamkins
(03:47:35)
So I have a foot in both fields, philosophy and mathematics, and in some contexts I seem to be required to choose whether I’m a mathematician or a philosopher. I mean, my training is in mathematics. My PhD, all my degrees are mathematics. But somehow I turned myself into a philosopher over the years, because my mathematical work was engaging with these philosophical issues. In New York, I had appointments first in mathematics only, but then eventually I was also joining the philosophy faculty at the graduate center. And when I went to Oxford for the first time, my main appointment was in philosophy, and that’s also true now at Notre Dame, although I’m also a concurrent professor in mathematics.
Joel David Hamkins
(03:48:20)
And I have math PhD students still, and philosophy PhD students. And so I don’t really care to decide whether I’m a mathematician or a philosopher. My work is engaging with mathematics and with philosophical issues in mathematics and with plain philosophy, and there’s this ample region between these two subjects. So it’s not necessary to choose. I remember when I first went to Oxford, I told my daughter that I was going to become a professor of philosophy in Oxford, and she looked at me plaintively and said, “Uh, but, but Papa, you’re not a philosopher.” Because in her mind, you know, her father was the mathematician and her mother was the philosopher, ’cause my wife, Barbara, is a philosopher. Now also at Notre Dame. We’re together there.
Joel David Hamkins
(03:49:13)
And okay, but fortunately I don’t really have to choose between them. So you ask about the most beautiful idea in philosophy, and I would have to say that I think it’s the distinction between truth and proof, the one that we discussed already. It’s so profound and gets at the heart of so many philosophical issues. I mean, of course, this is a distinction that’s maybe born in mathematics or mathematical logic, but that’s already philosophical to a degree, and it’s fundamentally a philosophical distinction. Truth is about the nature of the world and the way things are. It’s about objective reality in a sense, whereas proof is about our understanding of the world and about how we come to know the things that we know about the world.
Joel David Hamkins
(03:50:18)
And so to focus on proof is to focus on the interaction that we have with the objective reality. And okay, I’m talking about the reality of mathematics, not the physical world, because as I said, I live in the platonic realm and I interact with mathematical reality. So proof is about the interaction and how we come to know the facts that are true in this mathematical reality, whereas truth is about what’s really the case, sort of apart from our knowledge of it. And this is, I think, such a core way that I have of understanding the world and the nature of logic and reasoning.
Lex Fridman
(03:51:03)
And the gap between the two is full of fascinating mysteries, both in the platonic realm, but also in the physics realm, and I would even say in human psychology, sociology, politics, geopolitics, all of it, if you think about proof more generally, which is the process of discovery, versus the truth itself. And that’s our journey, whatever field we’re in. Well, I, for one, am grateful for… for how marvelous of a philosopher, mathematician, and human being you are. It’s truly an honor to speak with you today.
Joel David Hamkins
(03:51:43)
Well, thank you so much. It’s such a pleasure to be here, and thank you for inviting me.
Lex Fridman
(03:51:47)
Thanks for listening to this conversation with Joel David Hamkins. To support this podcast, please check out our sponsors in the description where you can also find links to contact me, ask questions, get feedback, and so on. Thank you for listening. As always, happy New Year. I love you all.

Transcript for Irving Finkel: Deciphering Secrets of Ancient Civilizations & Flood Myths | Lex Fridman Podcast #487

This is a transcript of Lex Fridman Podcast #487 with Irving Finkel.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Lex Fridman
(00:00:00)
The following is a conversation with Irving Finkel, a scholar of ancient languages, curator at the British Museum for over 45 years, and a much-admired and respected world expert on cuneiform script. More generally, he’s an expert on the ancient languages of Sumerian, Akkadian, and Babylonian, as well as ancient board games and Mesopotamian magic, medicine, literature, and culture. I should also mention that both on and off the mic, Irving was a super kind and fun person to talk to, with an infectious enthusiasm for ancient history that, of course, I already love but fell in love with even more. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, get feedback, and so on.

Origins of human language

Lex Fridman
(00:00:53)
And now, dear friends, here’s Irving Finkel. Where and when did writing originate in human civilization? Let’s go back a few thousand years.
Irving Finkel
(00:01:05)
The first attempts at writing that we could call writing go back to the middle of the fourth millennium, say around 3500 BC, something like that. There were people in the Middle East, individuals who lived between the Euphrates and Tigris Rivers, who had clay as their operating material for building and all sorts of other purposes, and eventually as a writing support. They somehow developed the idea of the basis of writing, which means that you can make a sign, which people agree on, on a surface, so that another person, when they see it, knows what sound it engenders. That is the essence of writing: that there’s an agreed system of symbols that A can use and B can then play back, either in their heads or literally with their voices, a bit like a gramophone record.
Irving Finkel
(00:01:54)
So when it really began is a terribly, terribly awkward question for us, because the truth of the matter is, we have no idea when anything began. And all we can say is that the oldest evidence we have is around 3500 BC, but whether that was anywhere near the time or the stage when this started off for the first time seems to me very, very unlikely. So, among these Mesopotamians around 3500, they started to do this. They made up signs which everybody understood and they could write simple pictographic messages. Foot is a foot, leg is a leg, and barley is barley.
Irving Finkel
(00:02:36)
And then very, very gradually, they had the idea of how you could represent numerals, and then they had the idea that the pictures could also represent sounds. And once they had the idea that you could write sounds with pictures, that’s the crucial thing: that a picture of a foot not only meant foot but it meant the sound of the word for foot. Once this happened, some probably very, very imaginative and clever persons had a kind of lightbulb moment when they realized that they could develop a whole panoply of signs which could convey sound. And once you had that, you’re liberated from pictographic writing into a position where you can record language. So, language, grammar, and all the rest of it, and before long, proverbs and literature and all the other things that got written down.
Irving Finkel
(00:03:29)
So it was a pretty gigantic step whenever it was taken, but we really have no idea when it was first taken. But the first evidence we have presents a sort of clear-ish picture. It was simple and it got more complicated, then it became magnificent, so that with all the signs a fluent, well-trained scribe could not only write down the Sumerian language, which was one of the native tongues of Iraq, or the Babylonian language, which was the other main language of Iraq, but also any other language he heard. So, if somebody came speaking French ahead of their time and spoke out loud, he could record with these signs the sound of French.
Irving Finkel
(00:04:10)
And we have examples of funny languages around the world in the Bronze Age, which were written in cuneiform purely by ear. And often, sometimes the scribes who recorded by dictation or by something, wrote stuff they couldn’t understand, but somebody else could read and understand it. So, what you have is long before the alphabet, when the alphabet was not even a dream, a complex, bewildering-looking, off-putting writing system, which was actually very beautiful, very flexible, and lasted for well over three millennia, probably closer to four millennia. And it took a long time for the alphabet, which anybody would say was much, much more useful and much more sensible, to displace it.
Irving Finkel
(00:04:52)
So it’s one of the major stages of man’s intellect, because quite soon after the writing first took off, the signs began to proliferate, and someone said, “Hey, we haven’t got a sign for this sound,” or, “We haven’t got a sign for this idea.” And so it began to swell out. And at some extremely remarkable stage, one, probably only one person, suddenly realized that if there was no control, they would grow exponentially and exponentially until it was all nonsense and everybody had their own writing. And the second thing is that no one could remember them unless they were written down in a retrievable way.
Irving Finkel
(00:05:34)
So they invented not only writing, they invented lexicography, which means that early in the third millennium, they put down all the things that were made of wood, and all the things that were made of reeds, and all the names of colors and of countries and all the gods and everything. They made a systematic attempt to make these signs to standardize them and to make them retrievable, and of course, to teach them. And having exercised that rigor from the outset, it meant that the thing became streamlined and stayed more or less as it was all the way through, for three millennia or more. Because the stamp put on it by those early visionaries, not only who came up with the system and how it would work, but to preserve it and to safeguard it, was fantastically effective.
Irving Finkel
(00:06:26)
So, it means that there were scholars in Babylon in the third century or the second century, when Alexander was there, for example. If somebody dug up a tablet in very early writing, they would have a pretty good idea what it meant. They would recognize the signs even though they were so ancient, and they’d see the relationships between them. So, you have a fantastically strong system where the spinal cord was structured in a lexicographic, regular system. So, lexicography and what the signs were was jealously safeguarded and protected, and it lasted fantastically.

Cuneiform

Lex Fridman
(00:07:04)
We should say that the name of that system that lasted for 3,000 years is cuneiform.
Irving Finkel
(00:07:10)
Yeah. So in the 19th century, about 1840, 1850, they started to find these things on excavations in Iraq, the big Assyrian cities and sometimes further south, the Babylonian cities. They found these clay tablets, which in the ground lasted unimaginable lengths of time. And they were all written in what we call cuneiform script. And the cuneiform part of it means wedge-shaped, because “cuneus” in Latin means wedge. And when they first saw these signs, they realized that a cluster of marks broke down into different arrangements of triangular shapes. And it’s most clear on the Assyrian reliefs where the writing is very big and you can easily tell that they were that shape. On a tablet, the wedge is not quite so predominant. So, that was it.
Irving Finkel
(00:08:02)
So, they first called them cuneatic or cuneiform, and the word stuck. And of course, growing up in the British Museum and reading these things for a living becomes a kind of lifetime’s work to make sure that everybody in the country knows what cuneiform means. Because once in a while you meet somebody who never heard of the word at all, and this is appalling. So, people do survive, however. But it’s an important mission because such an achievement by man and so much knowledge was encapsulated in these lumps of clay, because they used it for everyday things like letters and business documents and contracts. This is one thing. And then the kings wrote long, elaborate accounts of their campaigns and their military activities.
Irving Finkel
(00:08:46)
And then there was proper literature and magic and medicine and all other genres of literature that we would naturally list on a sheet of paper in alphabetic writing, what you would use writing for. They basically did. And it had the unexpected quality that most of these clay things lasted in the ground until now. So, however many hundreds of thousands of tablets are in the world’s museums and collections, there must be millions of them in the ground awaiting excavation. So in a way that’s a comforting thought, because they’re safe there and protected.
Lex Fridman
(00:09:26)
You said that the development of cuneiform, of these tablets, of written language is one of the greatest, probably the greatest invention in human history. How hard do you think it was to come up with this? And we should make clear that that very specific element of encoding sound on the tablet, that’s the genius invention. Drawing a picture makes sense. Okay, here’s, you know, barley. Here’s the sun. Here’s whatever, the actual object.
Irving Finkel
(00:10:00)
Exactly.
Lex Fridman
(00:10:00)
But to actually write down sound is a genius invention.
Irving Finkel
(00:10:05)
Well, I think it’s rather paradoxical, because the first generation or so of tablets that we have are written in these pictographic signs where each sign means what it looks like. So, this is a very limited method of recording messages, and it doesn’t lend itself to recording grammar. And then the secondary phase, as we understand it from archaeology, is the perception that you could take these signs, still meaning what they look like but also what the words sounded like. So, then you have all these wonderful ice cubes which express all the sounds of the language from which you can record words and grammar and everything else. Now, the thing is, the received lore from Assyriology is it was that way around, that first we had pictures and secondly we had sound.
Irving Finkel
(00:10:55)
Well, I have to say, I find this very hard to believe, because if you had a group of people in an environment where it was compellingly necessary to make a system that you made marks on a surface which everybody could understand and use, why wouldn’t you start out with signs that made sounds? Because everybody speaks the same language, right? So, they didn’t have A, B, C, D, E, F, G, but they could easily work out all the vowels and consonants without naming them as vowels and consonants, but they’re component parts. So, they could have had signs that started out… Because if you decided you had… We have 26, let’s say they had 50 signs that would create the sound, they could write anything without any further trouble.
Irving Finkel
(00:11:44)
So, I find it very bewildering that they started off with the least flexible and the least adaptable system of pictographs and then they moved on to the sound. I don’t know why they bothered with it. And my hunch is that the archaeological evidence that we have on this score is ultimately misleading, because I think this: that probably for a very, very long time before the Sumerians, people in the world, the world of what we call the Middle East, were in contact. They traded, they probably even had wars, and they had messages between them. And I think there was a long running system of communication between people who didn’t share a language for whom pictures would suffice. So, if merchants come and they have three sheep to sell, so they draw three little sheep.
Irving Finkel
(00:12:35)
You know how much it is and what they are and so forth. And so I think that what happened with the Sumerians, with their pictographic signs, is that those signs are right at the end of a very, very, very long period of time, when somebody thought, “What we can do is take these stupid, inhibited no-smoking signs and write language.” That is what I think happened. That’s what I think happened.
Lex Fridman
(00:13:04)
Is this a controversial statement?
Irving Finkel
(00:13:05)
Highly controversial. Many Assyriologists would leave the room.
Irving Finkel
(00:13:12)
But I’m not scared of controversy because it’s natural. I mean, if you think about it, it’s natural because you don’t have to have an alphabet to divide your word into sounds, see? For example, in Sumerian, you have a funny system, right? You have a root, like “du,” which means “to go.” And then you have prefixes, like E or Mu or Ba, and one’s a passive, one’s an active, and this and this. So when you have a sentence, you have one of the Mu, Ba, or E prefixes, then you have the root, and then you have things at the end. So it is called agglutinative by people who like to make things look more important than they are. So you have the central thing, you slap stuff on the beginning, slap stuff on the end, and each particle creates a bit of meaning.
Irving Finkel
(00:13:56)
So you have a long verb which tells you, “He would’ve done it if he could, but he couldn’t,” kind of thing, in the form of the verb. But the thing is, if you wanted to write it down, you and I decided to write it down, so the first thing we would do is have a sign Mu, and then we’d have Ba, and then we’d have E, because every five minutes people made those noises. You see what I mean?

Controversial theory about Göbekli Tepe

Lex Fridman
(00:14:16)
Yeah, absolutely. Do you think it’s possible we might find much, much older…
Irving Finkel
(00:14:23)
I do
Lex Fridman
(00:14:24)
…cuneiform-type tablets?
Irving Finkel
(00:14:26)
Or pictographic-type tablets, before the cuneiform and its drawing type, and I’ll tell you why. Because there’s this marvelous site in Turkey called Gobekli Tepe.
Lex Fridman
(00:14:36)
Oh yeah?
Irving Finkel
(00:14:36)
Do you know about Gobekli Tepe?
Lex Fridman
(00:14:37)
Yes, of course.
Irving Finkel
(00:14:37)
Well, everybody knows about the buildings and the architecture. Everybody knows about it. If you go all the way through the photographs, which the archaeologists unwisely put online, you will find in the middle of one color plate with lots of other things, a round green stone like a scarab from Egypt. That’s to say, it has an arched back and a flat bottom. And on the flat bottom, there are hieroglyphic signs carved in the stone. No one said anything about it at all, but it’s clear to me, A, that this was a stamp to ratify things, where the carved signs would leave an impression on clay or some other sealing material. It must be that. So this is about 9000 BC.
Irving Finkel
(00:15:26)
Now when I was a boy at university, my professor said to me that the reason writing evolved in Mesopotamia is because they had complex cities with ziggurats and big buildings and lots of people and they had to organize everything, and so they invented writing to cope with it. Well, if they had to cope with that in Sumer in 3000 BC, they sure as hell had to do it at Gobekli Tepe, because they’ve hardly even begun to finish excavating the sites of…
Irving Finkel
(00:15:54)
…Gobekli Tepe. They go on and on like Manchester and Newcastle United. And really, the old rule would be you could not have architecture like that, planned and built according to principle with all the different people. You couldn’t have that without writing in southern Iraq. So how come suddenly then 7,000 years earlier, they do it there? That, and that green stone shows that they had writing. That was an official who sealed this, got the stuff or whatever it was, or it was his dad’s name or whatever it is, got a wiggly snake and a wiggly this. That is pictographic writing. Maybe even as phonetic writing, I don’t know, but it was writing thousands of years before in the south. And that’s what I think it is.
Irving Finkel
(00:16:43)
You know, people came with metal or precious stones from Anatolia. They knew that in the south they had lots and lots of stuff, they wanted to trade, they had to communicate. And it’s basically like having a cigarette with an X through the middle. Everybody in the world knows what that means. They don’t know what the word for cigarette is in this language or cancer or filter or tobacco, it doesn’t matter. It’s pictographic writing. We still use it. And it’s above all kinds of mess. And I think that was the prevailing system because I honestly believe that the people at this time were not stupid. They weren’t gorillas. They weren’t less advanced than we are. They were probably indistinguishable from what we are.
Irving Finkel
(00:17:25)
So you have merchants and wanderers and people who say, “Let’s go down the river and see where we end up.” And people looking for money, looking for women, looking for everything. I mean, that’s surely how it was. But if you look at those Gobekli buildings with a skeptical eye, how it could be. I mean, the finish of it is astonishing, the structure of it, the vision of it. So the workforce and the tools and the organization, you know, what do they do it with? A megaphone? “Your breakfast!” And all that kind of stuff. No way. No way.
Lex Fridman
(00:17:57)
So that’s a really controversial statement that…
Irving Finkel
(00:17:59)
It is really controversial
Lex Fridman
(00:18:00)
…at the time of Gobekli Tepe, there may have already been a writing system.
Irving Finkel
(00:18:04)
There was, because the thing is about it, that it’s a seal to ratify. It’s not just a squiggle on a pot and you can say, “Oh, that’s just a piece of…” This is a finished thing with a flat surface. You press it down, so you have some contract, you have some building arrangement. That we’re paying for these bricks, whatever it was, and the official person had to squash it down and it leaves the impression. I mean, I am a great believer in Sherlock Holmes… as a teaching system for intelligence and rationality and logic in thinking. I read those stories a million times when I was a kid, and one of the things which impresses me most of all was this point quoted by Holmes, not original to him, that it is theoretically possible to infer the Niagara Falls from a raindrop.
Lex Fridman
(00:18:52)
That’s a powerful statement. Yeah.
Irving Finkel
(00:18:54)
It’s a powerful statement. Well, that seal from Gobekli Tepe is a raindrop from which I infer writing, and it’s perfectly possible they all wrote on flat leaves. After all, in many parts of the world, that’s what happened. So, for example, in the Indus Valley, people write the most abject nonsense about the Indus Valley writing system, but all we have is seals, basically. So they are also for ratification purposes, and they have the name of the owner in three or four or maybe five signs, and it’s probably me, son of my dad, or milkman or whatever it is. And it’s obvious, it’s obvious that they had writing on a perishable material. They can’t just have had inscribed stone seals, and many parts of India today write on palm leaf. Why should it be any different?
Irving Finkel
(00:19:45)
So people think, you know, “Oh, well, just because it’s now, it wouldn’t be then.” But actually, that argument is utterly, utterly fallacious, because the process of evolution is stymied left, right, and center by inertia. Inertia is nearly as strong as evolution, and this is something that the people who talk about progress and ideas have no idea about.
Lex Fridman
(00:20:07)
First of all, your whole line of work, you’re making me realize, is a kind of Sherlock Holmes type of process. The deciphering of the language, archaeology, of taking those pieces of evidence and trying to reconstruct a vision of that world. And now you’re making me realize that even all the cuneiform tablets we have are just a raindrop compared to the waterfall of thousands of years of humans.
Irving Finkel
(00:20:39)
Yes, we have a lot, but it’s nothing in comparison with what existed. But not only that, you see, we don’t have to decipher anymore. We can read Akkadian or Babylonian, Sumerian pretty well fluently. That’s not a problem. So the information which you can get from these sources, especially three millennia of sources, is very, very substantial. Very substantial, but it means that Assyriologists have the inbuilt idea that what we have is something like all there ever was, which is absurd. For example, there’s a period called the Ur III Period, where people lived in city-states. They wrote very small account tablets by the thousand, and there were two or three major cities where this is the way they lived.
Irving Finkel
(00:21:25)
People had to bring tithes and offerings, and everything was recorded by what I always refer to, and people sympathize with, is the ancestors of the Inland Revenue, because everything had to be written down so that some schmuck could check it and fill out the ledger, and some other schmuck above him could okay it, so there’s no funny business or no mistakes. Now, the thing is, there are thousands of those tablets written in about 2100 to 2000 BC, thousands of them, about the size of a box of matches.
Irving Finkel
(00:21:55)
So people like to generalize about the Sumerians at this time of the world, but they probably all came out of two rooms, because they were dumped when they were no longer needed in some kind of room, and the archaeologists in the 19th century came down on these, and then all the locals came and they dug them up and they sold them all over the place, and they’ve gone all over the world. Thousands and thousands of them, out of probably two storage rooms, which is not a whole culture or a whole country, or their whole history, or their belief systems. So our view of it is skewed by the nature of the material, and sometimes the material is opulent and benevolent, but not always, and sometimes the people who work with skewed material don’t even realize how skewed it is.
Irving Finkel
(00:22:47)
I mean, you know, it’s quite remarkable.
Lex Fridman
(00:22:50)
So, in all your time of studying cuneiform tablets, do you sometimes late at night get a glimpse of the waterfall? Like, can you imagine?
Irving Finkel
(00:23:01)
Yes, I can imagine. I can imagine easily, because once in a while, a library is discovered. In the 1850s at Nineveh, which was the Assyrian capital, there was a fat king, king of the world, called Ashurbanipal, and he had a fantastic library, and he promoted it. He impounded tablets, he had them brought to Nineveh. He wanted all the prevailing knowledge and all knowledge from before under one roof. It was a kind of Alexandria thing. So he was a trained scholar, and this is what he did, and they found it in the 19th century. They dug it up, Layard and those people. So what did they find? They found the tablets higgledy-piggledy all over the floor of a huge room and in the corridors and everything.
Irving Finkel
(00:23:43)
And lots of them broken and lots of them burnt. So ever since then, until really quite recently, Assyriologists who spent all their… Well, people who work on these Nineveh tablets spent all their time joining the bits together, and you have the story about Gilgamesh and the goddess who falls in love with him in the garden, and she wants to seduce him, and dot, dot, dot, you can’t find the bit… So you look for another bit and you look for another bit. And gradually, they piece together the literature, and the assumption has always been that if you put them all together again, you’ll have the whole library.
Irving Finkel
(00:24:14)
But it’s the absolute opposite. Because what happened was that the Babylonians in the south, in my opinion, they worked hand-in-glove with the Elamites from Iran. They had a pincer movement and they beat Assyria, they conquered Assyria. And they ran through the capital and they set fire to everything. Pinched all the women and took all the jewelry and all the gold. And people say that in a fit of pique, they destroyed the library. But they wouldn’t destroy the library because it was the giant brain from which the Assyrians ran a world empire, and it had all the knowledge in the world. They destroyed that? They spoke the same language, they had the same writing system. They’d have taken them all safely home, cart after cart after cart.
Irving Finkel
(00:24:58)
And I think what’s left there is duplicates and broken things, the things that got dropped and everything, and that’s what everyone thinks it is. So this is also a controversial point.
Lex Fridman
(00:25:09)
You’re just nonstop…
Irving Finkel
(00:25:10)
But it’s common…
Lex Fridman
(00:25:10)
…starting trouble.
Irving Finkel
(00:25:11)
It’s common sense. It’s common…
Lex Fridman
(00:25:12)
You’re gonna get both of us canceled today.
Irving Finkel
(00:25:14)
But you see, the thing is, it’s predicated on the assumption that what we have is only what there was. And this is such a fallacy. It needs to be attacked left, right, and center.

How to write and speak Cuneiform

Lex Fridman
(00:25:29)
So, a lot of the cuneiform language is already deciphered. Can you speak to the deciphering process? How hard is it? Maybe take us to this place for you yourself first learning a language, figuring out the puzzle of it. How does it feel? What does it look like to a brain that doesn’t deeply understand it? And how do you then piece stuff together? Maybe you can go to the early days, sort of the Rosetta Stone of cuneiform.
Irving Finkel
(00:26:01)
That’s important. Well, the first thing is how the cuneiform writing system works, because the crucial point, and once you see it, it makes a lot of things clear, is that they wrote in syllables. So if you take the English alphabet, which of course they didn’t, you have the letters B, G, D, P, H, and so forth. They couldn’t write a consonant on its own. They couldn’t do that. So what they did is they had a vowel before a consonant or one after. Say you have “Ab” and “Ba.” But as they had four vowels, you had to have Ab and Ba, Ib and Bi, Ub and Bu, Eb and Be.
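The vowel-consonant pairing Finkel describes can be enumerated in a few lines. This is only an illustrative sketch: the consonant set below is a stand-in (the letters he happens to name), not the actual Akkadian sign inventory.

```python
# Illustrative sketch of a CV/VC syllabary: four vowels, each paired with
# each consonant in both orders (ab/ba, ib/bi, ub/bu, eb/be, ...).
# The consonant list is a stand-in, not the real Akkadian inventory.
vowels = ["a", "i", "u", "e"]
consonants = ["b", "g", "d", "p", "h"]  # the letters named in the interview

syllabary = [v + c for c in consonants for v in vowels]   # VC signs: ab, ib, ...
syllabary += [c + v for c in consonants for v in vowels]  # CV signs: ba, bi, ...

print(sorted(syllabary))
print(len(syllabary))  # 5 consonants x 4 vowels x 2 orders = 40 signs
```

Even this toy inventory shows why the system stays learnable: the signs cluster systematically around each consonant rather than growing without limit.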
Irving Finkel
(00:26:43)
So you had the range of things clustered around what we call a consonant. So they had all those for all the letters, which gave them a basic system. There was much more to it than that, and it was more complicated than that. We don’t have to really go into it, but basically if you are a Babylonian and you want to write the word “museum,” which of course is one of the most important words in the English language and other languages too. So what you would do is you would write the syllable mu-
Irving Finkel
(00:27:09)
And then the sign “ze” and then the sign “um.” So you split the word up into its component syllables. When you read it in your mind, you squash them together into “museum.” That’s the basic system. They had other signs which gave you a clue as to the meaning and the bits around the edge, but it’s basically syllabic writing. So when you go to university to study cuneiform, what you have to learn is all the signs and all their values, because unfortunately they didn’t just have one for each, they had multiple ones. And the reason is not that they were mad or they wanted to make life hell, but because the syllables derive from the writing of Sumerian words. So the Sumerian vocabulary had a lot of words that were probably differentiated by tone.
Irving Finkel
(00:28:07)
So you might have “Ba” and then a rising “A” and then a lowering… and these signs all retain the “Ba” value even though there were no tones. So it means if you look at a sign list, there’s a lot of signs. You have “Ba” number one, which is the common one. Then there’s “Ba” number two, “Ba” number three, and you have to learn them all. And when you read, you have to learn how to do it. So in the modern world, if you go to university to do Assyriology, which I hope you and all of your disciples will do as soon as possible, you actually have to cope with two languages: the Sumerian and the Babylonian. Now, the first thing is this, that the Babylonian language is a Semitic tongue, which although it’s extinct, is connected to or related to Hebrew, Aramaic, Arabic, Ethiopic, Syriac.
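The two ideas Finkel has just laid out, syllabic spelling and multivalent signs, can be combined in a small sketch. The sign names and their alternative values below are invented for illustration; only the principle (one sign, several possible readings, disambiguated by context) comes from the interview.

```python
from itertools import product

# Toy sign list: each cuneiform sign can carry several sound values
# (multivalence). The sign names and values here are invented examples.
SIGN_VALUES = {
    "SIGN_1": ["mu"],
    "SIGN_2": ["ze", "si"],   # a sign with more than one reading
    "SIGN_3": ["um", "dub"],  # likewise
}

def candidate_readings(signs):
    """Every possible transliteration of a sign sequence; the trained
    reader must pick the one that fits the context."""
    return ["-".join(vals) for vals in
            product(*(SIGN_VALUES[s] for s in signs))]

readings = candidate_readings(["SIGN_1", "SIGN_2", "SIGN_3"])
print(readings)  # only one candidate squashes together into "museum"
```

The combinatorics are the point: even three signs with a couple of values each yield several candidate words, which is why sign lists, training, and context were indispensable.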
Irving Finkel
(00:28:56)
All that family of Semitic languages which are still alive, it’s an early example of one of those. So that when the decipherment came along, it was the Semitic dictionary that they fell back on to identify words, nouns, and roots. The other language, which is Sumerian, the one when you stick bits in the beginning and stick bits at the end, is not only not Semitic, it’s not related to any other known language.
Lex Fridman
(00:29:25)
Oh, no.
Irving Finkel
(00:29:26)
Yeah, this is a bewitching thing. It’s a bewitching thing to me, and this is how to understand it. Because the languages that we study in the world today, linguists study, they more or less all fall into a language group. So you have Indo-European with Spanish, Italian, French, Latin, Hittite, and so forth, that’s one group. And you have Germanic and you have Slavonic. And most languages, even the far-flung ones, fall into what can be seen to be maybe big and airy groups, their family like that. There’s not one for Sumerian. So this means that the truth that languages do not exist in a vacuum but they’re part of a big family must always have been true. So that when writing arrives about 3000 BC, say, to write properly.
Irving Finkel
(00:30:19)
It means that Sumerian was recorded just in time, but the big languages, maybe in China, in Russia, in somewhere else in Asia, that were related to Sumerian-
Lex Fridman
(00:30:33)
Are gone?
Irving Finkel
(00:30:34)
… are all gone. They’re gone forever and ever and ever, unless something amazing happens. So we’ve got the one representative of this bizarre family. Is that-

Primitive human language

Lex Fridman
(00:30:44)
Amazing.
Irving Finkel
(00:30:44)
It is. And it’s a very stimulating thing to imagine. I personally believe that Neanderthals and early Homo sapiens for sure had language, for sure they talked to one another. It’s impossible that they didn’t. The point came when they did, they did. And the Neanderthals, 800,000 years of living in Europe. They had to deal with the Ice Age, they all lived together, they bring up their children. You think they couldn’t speak anything? They have the same apparatus. And if you have a human brain, then it responds to stimulus. And the more stimulus there is for communication, I mean, the idea that you and I are out hunting rhino…
Irving Finkel
(00:31:23)
…and you say, “Legs.” Well, shut up, I’m concentrating. “Legs, legs.” And then I suddenly think, “Oh, I get it. You are legs.” Right? You only have to do that once, then you know who I am. So I know that I’m me and that you are you. So people who say that they couldn’t distinguish ego and all that, it’s absolutely stupid. If you cut your hand with a knife, you sure as hell experience… You sure as hell do. It hurts. It hurts a lot. You might even bleed to death. But it’s not somebody else’s hand, and it’s your hand and it’s your existence and your life that’s threatened. You think people weren’t conscious that they were an entity? I don’t believe it.
Lex Fridman
(00:32:06)
And they probably had a way to express that with sounds.
Irving Finkel
(00:32:10)
Well, eventually, yes. Names. I mean…
Lex Fridman
(00:32:13)
Names
Irving Finkel
(00:32:13)
…names the things, and then you have the idea that a label fixes to something. Then the light bulb has gone on, and next minute you have rhino and you have skin and you have babies. Because I think you have an idea, and the idea then drives the brain, and the brain has another idea. It works like fertility.

Development of writing systems

Lex Fridman
(00:32:31)
So what do you think is the motivation, the primary driver of developing written language? Does it go hand-in-hand with civilization?
Irving Finkel
(00:32:40)
I think that the milieu in which it appears is when there’s a lot of people living in an urban environment. And with rival institutions or the king or the government or all those sorts of… things. And that’s why I think Gobekli Tepe must have been the same thing. I read somewhere that they’re all nomads and they only came to Gobekli Tepe three months a year. I mean, that cannot be true that they were nomads. Cannot be true. To get the stone and someone has to draw on the ground the plan of the building, they have to work out how thick it’s going to be, how high it’s going to be. I mean, you can’t just do it like that, like gorillas.

Decipherment of Cuneiform

Lex Fridman
(00:33:25)
All right. So deciphering, the process of deciphering.
Irving Finkel
(00:33:28)
So when I started, there were grammars and sign lists and dictionaries. Everything was marvelous. It was all basically deciphered; all you had to do was get on with learning it. But at the beginning, when the first tablets and bricks in cuneiform and stone inscriptions came to light, no one could read them. They knew they were writing, but they didn’t know how to read them. And what happened was, like you said before, with the Rosetta Stone, it was something directly comparable, because there was an inscription of one of the Persian kings halfway up a mountain in a place called Bisutun, where this King Darius had written an account of his successful career in Elamite and in Babylonian and in Old Persian, a trilingual version.
Irving Finkel
(00:34:18)
And Old Persian, although it is obviously an archaic form of the language, Persian is still alive; it was still alive in the 19th century. So since the Old Persian was written in a very simple style of cuneiform, they deciphered it, they twigged it was Old Persian, they read it in Persian, and they read the name Darayawush in Old Persian. And then suddenly, somebody realized that the other two columns were about the same length.
Lex Fridman
(00:34:43)
Brilliant.
Irving Finkel
(00:34:44)
What do you know? And the thing is, it said, “I am Darius the great king, king of the world, king of the… son of… diddly, diddly, diddly, diddly… grandson of… diddly, diddly, diddly…” So there’s a whole paragraph with repeated things in the Persian which they could understand. So what do you know? There are reiterated passages in the other two languages. So that was the key, that kind of… the chisel that opened up cuneiform writing proper. And the thing was, they soon twigged that the language of the Babylonian was a Semitic tongue. And this was so important. I think the first word they discovered was the word for river, which is nāru in Akkadian and nahr in Arabic and Aramaic.
Irving Finkel
(00:35:27)
And when they realized that the word that corresponded to the Persian had this form, this was a gift, a gift of gold, because everybody immediately seized their Arabic and Hebrew dictionaries and started leafing through looking for words that would fit in the context. And they basically deciphered this inscription in that sort of way. And of course, all the other inscriptions came in order, and there were lots and lots of difficulties which had to be resolved, but that’s the basic thing. And without that trilingual, I don’t know what would have happened. I mean, I suppose it’s conceivable that in the very modern world, something might have happened. But as it was, it was done by sheer brainpower, by very, very clever persons just doing it. And they cracked it.
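The decipherment lever described here, a royal name that repeats in the known column and must therefore repeat equally often in the unknown columns, can be caricatured in a few lines. Everything below is invented toy data; real decipherment was, as Finkel says, sheer brainpower over far messier material.

```python
from collections import Counter

# Toy sketch: the name "darius" repeats in the known (Old Persian) column,
# so repeated sign-clusters with a matching count in the unknown column are
# candidate spellings of the same name. All data here is invented.
known = "darius king king son-of hystaspes darius built darius ruled".split()
unknown = ["B1", "B2", "B3", "B4", "B5",
           "B1", "B2", "B6", "B1", "B2", "B7"]  # unknown-column sign codes

name_count = known.count("darius")  # how often the name recurs: 3 times

def repeated_bigrams(signs):
    """Count every adjacent pair of signs in the unknown column."""
    return Counter(zip(signs, signs[1:]))

candidates = [pair for pair, n in repeated_bigrams(unknown).items()
              if n == name_count]
print(candidates)  # sign pairs recurring exactly as often as the known name
```

Matching the candidate cluster to the known pronunciation is what hands you your first sound values, after which the Semitic dictionaries take over, exactly the sequence the interview describes.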
Irving Finkel
(00:36:14)
The Elamite language is much more difficult, but they got a lot of it too. So it was a very romantic thing because the inscription was carved on a mountain face far above the plain, and Henry Rawlinson, who was an upstanding young British officer who claimed to decipher cuneiform quite unjustifiably, climbed up there with some miserable kid and made squeezes of the whole thing overlooking the plain, thousands of feet up in the air, and brought those back, and they were used in the decipherment. So it’s very romantic.
Lex Fridman
(00:36:46)
Wait a minute. More controversial statement from today. Henry Rawlinson doesn’t deserve the credit for that?
Irving Finkel
(00:36:52)
No, I don’t think he does. He’s called the Father of Assyriology, but I think he’s the Stepfather of Assyriology because when he first got these inscriptions, he wrote a long book about it, which was almost entirely wrong.
Irving Finkel
(00:37:05)
And there was a clergyman in Northern Ireland called Edward Hincks who lived in a place called Killyleagh and had five daughters and ran this church, who was possibly a card-carrying genius, if not jolly, jolly close. And what happened with him was this: there was an ongoing competition, well, an ongoing challenge to decipher hieroglyphic writing, which Champollion usually gets the credit for, and Hincks was very interested in trying to decipher hieroglyphic ahead of the French. And he ran into a sort of dead end at one stage, and he thought he’d have a look at cuneiform to see if it was helpful.
Irving Finkel
(00:37:46)
And at the same time, he cracked it. He worked out how it worked. He realized that one sign can have more than one value of sound and of meaning because they are multivalent, the signs. I tried to shelter you from the horrible news… But it’s not a walk in the park. It takes about five years to… you’d probably do it in about four, probably.
Lex Fridman
(00:38:11)
That is a compliment. I think you just complimented me. Thank you. Thank you very much. So, you’re saying one sign that looks exactly the same might have different sounds given the context?
Irving Finkel
(00:38:24)
Yeah, and you have to choose the right sound, and also different meaning as well. Because, for example, if you have a sign for the word hot, right? You can’t really have a picture sign for hot. It doesn’t make sense, but what they did is they did a drawing of a kind of complex thing with a brazier inside another sign, which meant hot. So that sign existed, but it also meant other things as well, and you had to choose the right one for the context. It’s… all the context does matter. I mean, it really is quite a matter for despair when you start cuneiform, because on top of everything else, they didn’t leave gaps between the words. And that’s really…
Lex Fridman
(00:39:04)
So they’re all connected?
Irving Finkel
(00:39:05)
That’s really mean. Yeah. So when you read, what you have to do is start with the first sign, and you think of the sign, this, and you go through the values in your mind, and there’s the next sign. And if one is ‘ba’ and the next one is ‘ab’ among other readings, ba-ab sounds like a syllable structure for a word, and you go on like that. So there are two things about it. One is that if you want to, you can master it. The other thing is that the number of variables was restricted. They controlled it so it wasn’t insane. So in other words, if you learn the corpus and you learn how the signs are composed and you learn their different values, then you’ve got it down. And off you go. And it’s very beautiful, I think. It’s marvelous.
Lex Fridman
(00:39:51)
Can you, in all seriousness, take me back to the time when you were learning it? What’s the process of learning it?
Irving Finkel
(00:39:57)
Well, I had a very abnormal upbringing because when I went to university, for about three years beforehand, I’d wanted to be an Egyptologist. So I read the grammar by Gardiner and was looking forward very much to studying ancient Egyptian. What happened was that I went up to the University of Birmingham, where I went to university, and there was a man called Rundle Clark, who was an Egyptologist. Rundle Clark came in on the Monday and gave us one lesson about Egyptian sculpture or something like that, and the next minute, next day, he died. Bang.
Irving Finkel
(00:40:38)
So the professor called me into his room and said, “Look, it’s going to take me a while to get an Egyptologist. They don’t grow on trees, but there’s another person in this department called Lambert who teaches another ancient language, and he teaches cuneiform. So what I suggest is you go and do a bit of cuneiform with Professor Lambert, and then when I get an Egyptologist, you can convert back.” So I went and knocked on the door. “Yes?” So I went in and said, “I want to learn cuneiform.” And Professor Lambert, who was rather a Sherlock Holmes kind of figure—ascetic, bony, sarcastic… Cruel-
Lex Fridman
(00:41:20)
Cruel.
Irving Finkel
(00:41:20)
…cruel, absolutely terrifying. And I said I wanted to learn cuneiform, and he wasn’t at all pleased because this was a time in Britain when professors resented having students to teach because it cut into their research time. It was that sort of arrangement. Anyway, I started it, and after about, I don’t know, maybe one or two lessons, I knew this was going to be my life’s work. So that’s what happened to me. It was an amazing thing. So he gave me a list of basic signs to learn. I did, and in the next couple of days, then we came in and we started reading.
Lex Fridman
(00:42:02)
So given the complexity of the signs, why did cuneiform last 3,000 years? The most successful writing system ever.
Irving Finkel
(00:42:12)
Fair question. There are several factors. One is the famous factor of inertia. The second thing is that people who could read and write and were in charge of archives, and were the clerks in the temple, and the writers for the king and everything, commanded a very great deal of power because most of the public couldn’t. So they reserved to themselves knowledge, understanding, philosophical inquiry. I mean, no doubt it went on in pubs and things, but they were in charge. They had everything under lock and key. And I think the scribal schools were rather cliquey. They were certainly cliquey in the sense of Oxford and Cambridge being rivals, that sort of thing.
Irving Finkel
(00:43:02)
They had that sort of idea. And it was in no one’s interest whatsoever. Nobody would ever concede any interest in the idea of literacy for all. This would be—it would never be thought of, and it would be anathema. And so if you got on a soapbox on a Saturday afternoon and said, “Oh, enough of this, we have to teach the children,” they’d be taken away, I think.
Lex Fridman
(00:43:25)
So we’re getting, in these tablets, the output of the intellectual class, a very small fraction of-
Irving Finkel
(00:43:32)
We are
Lex Fridman
(00:43:32)
…humans.
Irving Finkel
(00:43:33)
We are.
Lex Fridman
(00:43:33)
So we’re getting just the Oxford and the Cambridge.
Irving Finkel
(00:43:35)
We are, except that when you went to scribal school, you had to learn Sumerian and Akkadian, the languages properly, and all the vocabulary and the grammar.
Irving Finkel
(00:43:45)
So some boys probably had a lot of trouble doing this. And, you know, they were okay, but then there ain’t gonna be no geniuses. And I think the situation in the school was that the teachers farmed out the kids who would actually rather have been outside playing football but could read and write, to earning their living doing low-level reading and writing. That’s to say writing contracts, letters, everyday things for people. Because no one could read and write, so you had to get a scribe if you’re gonna marry your daughter off, and you get all the witnesses about the presence and all this, all that thing had to be done for four days. So the writer would come and do… So your medium-level writers would serve that requirement.
Irving Finkel
(00:44:35)
And very talented or clever or intellectual students would be encouraged to go into one of the literary professions, which would be, so to speak, medicine, law, working for the king, working for the church, I mean, the priesthood. All those things which were dependent upon archives and writing, they would find their niveau. And also architecture, because if a big building had to be built, then somebody had to know about load-bearing things and brick measurements. And so some of them went into that kind of work. And also, probably some of them went into running the army. You had to move stores and animals and…
Irving Finkel
(00:45:17)
So they found their niveau, and some of them were intellectually very able indeed, and they went into the disciplines of, on the one hand astrology, but more seriously into astronomy and theoretical grammar. Because they had treatises about the relationship between the two languages and how they worked and different parts of speech, and they wrote learned commentaries as well, what words meant. So there was an intellectual high-level top, and then there were lots of professional scribes, and then the kids who left school as soon as possible and did all that, like today.

Limits of language

Lex Fridman
(00:45:57)
I apologize for being philosophical, but Wittgenstein, the philosopher, said, “The limits of our language is the limits of our world.” To what degree did the languages that were encoded in cuneiform define human civilization, would you say? What were the things that were complicated to express and therefore were not expressed often?
Irving Finkel
(00:46:21)
That’s really an interesting question. So in terms of richness of vocabulary and richness of verbal subtlety, I think Babylonian rivals Arabic and of course English. You know, in other words, you can say whatever you want in English, however subtle it might be, even if people understand the subtlety. You can, because the tools are fantastic. And Arabic has lots of synonyms and lots of devices, and all the same. Same in Babylonian. It was a fully-fledged literary language. The question about whether the language put a stop to further things, which is basically what you’re asking…
Irving Finkel
(00:47:05)
…is immensely complicated. But the one thing that strikes me as relevant is that a huge proportion of scholarly literature in Mesopotamia takes the form of omens, because they believed that events, accidental or deliberately stimulated, had implications for what was going to happen.
Irving Finkel
(00:47:29)
And they took omens from things in the sky and things in the street, every single thing. If you were a well-qualified diviner, it would have this significance, right? Now, there are thousands of lines of omens of all different kinds. And in Akkadian it says, for example, “If a lizard runs across the breakfast table, the queen will die.” So if you translate the Akkadian this way, the word “if,” verb and everything, “If that, then this.” So there are thousands upon thousands of lines translated into many books about omens where, “If this happens, that will happen.” So this is how it’s understood by my colleagues.
Irving Finkel
(00:48:15)
Well, this is absolutely impossible, because if you’re the chief diviner for the king, and you open up a sheep to take a liver out and examine it according to the… And if the queen’s gonna die and the king’s there, you’re not gonna say, “The queen’s gonna die.” I mean, you’re gonna look a fucking idiot if she doesn’t die.
Irving Finkel
(00:48:36)
And if she does die, you’re gonna be responsible. So all you can ever do and ever, ever have been able to do is to say, “There’s a sign here that says that the queen could die,” meaning “could die,” not “will die.” And therefore, the requisite ritual or magic must immediately swing into action to defer the danger. So the point is that A equals B is never true. It means that with A, B could be, might be, ought to be, should be, could be true, all those subtle things. So that the diviner who works for the king must have been a philosopher who looks at the king, and he knows what the king wants him to say. So he has to tell the king what he wants to hear. He has to tell the king if it’s bad news in such a way that he doesn’t mind it or he won’t worry.
Irving Finkel
(00:49:32)
It’s the most beautiful thing. It’s so subtle. It’s like a violin concerto. It can never have been A equals B for a minute. So the medical texts say, “If you do… If a man has this,” doo-dee-dee-doo, you know? “You do this, your drink’s…” He’ll get better, right? He says, “You’ll get better.” So have you ever met a doctor who will say, “You do this, you’ll get better”? No. They say, “All being well, you’ll be back on your feet.” Or, “I’ve seen this kind of condition many times, everything should go fine. You should get better, you should be better soon.” But never, “You will get better,” because what happens if you die? Where are you?
Lex Fridman
(00:50:10)
The lawyers will show up.
Irving Finkel
(00:50:12)
Absolutely. So this means that not expressible in Akkadian grammar are these modal verbs… could, might, should, ought. They can’t be expressed grammatically, but it is impossible that it was such a magnificent literary language where they didn’t have these subtleties. It’s utterly impossible. And if you translate, “He will,” in a literary text, as “He might,” then the whole text is different. The whole text is different.
Lex Fridman
(00:50:45)
Yeah, absolutely.
Irving Finkel
(00:50:46)
And, and they don’t… My colleagues translate… It says in the grammar books, ne-ne-ne-ne-ne, like that, automatically. There’s no self-appraisal of the folly of it.

Art of translation

Lex Fridman
(00:50:57)
You have said the translation is part archeology, part detective work, part poetry. Can we just speak about translation and the art of it a bit more?
Irving Finkel
(00:51:05)
Yes.
Lex Fridman
(00:51:06)
I mean, it’s such a, such an incredible discipline. Just like you said, hinted at, just a subtle variation in a single word can change everything.
Irving Finkel
(00:51:16)
Well, you know, the truth about translation is that you never really have a word in one language which precisely equates another. You never do. They’re always a kind… The best you can do. And sometimes it makes no difference, and sometimes it’s really quite misleading. And so what people do when they learn Akkadian, is they learn the Akkadian word and they learn the English translation, right? You have, “To divide.” So whenever you have the verb, it’s some form of divide or division. But actually, it’s not, because divide is like the primary root, but there’s maybe 10 nuances of what that can mean in English, where the one at the bottom and the one at the top, you’d hardly know they were connected.
Irving Finkel
(00:52:02)
And the Chicago Dictionary, which is such a magnificent thing, when you come to the museum and see me… I’ll show you this. The most salient and important thing that came out of America in all its history is the Chicago Assyrian Dictionary, which is this long. There’s only one rival to it for cultural importance, which is the electric guitar, of course. But the two of them, I think, are your countrymen’s greatest achievement.
Lex Fridman
(00:52:29)
It’s the pride of our nation-
Irving Finkel
(00:52:31)
I think so.
Lex Fridman
(00:52:31)
… those two things.
Irving Finkel
(00:52:32)
The very thing.
Lex Fridman
(00:52:33)
Chicago Dictionary… Can you… I’m sorry to take the attention. What is the Chicago Dictionary?
Irving Finkel
(00:52:37)
It started in the ’20s, and they made a dictionary of the Babylonian language.
Lex Fridman
(00:52:42)
Ah.
Irving Finkel
(00:52:42)
A, A to Z, so to speak. And it’s as long as this table. It’s a magnificent thing, and this big. And the people who worked on it were real translators, so they knew that it wasn’t lexically A means B, but they had… So if you have something in a proverb, the meaning is going to be a little bit different from in a letter. And, you know, so people really, really understand Akkadian, they really do. But this thing about the modal verbs is an interesting conundrum to me, because there’s no way it’s reflected in the writing. So I can only assume that there was some kind of drawing out of the vowel in a verb, meaning could… Now, like you were saying, it might do it, you know, something like that. Anyway, so nowadays we… It’s not the decipherment that’s the job, it’s just reading.
Irving Finkel
(00:53:31)
And if you have lots of tablets to work on, like on a dig, it’s very exciting if they come out of the ground and no one’s looked for them before you, you know, it’s your job. And if you’re a competent assyriologist, you should be able to sight-read more or less. Except most… Say, a letter or something like that, but most documents have some damage, so you have to learn how to interpret stuff. And also, some literature is very difficult because of technical vocabulary, and then they had technical vocabulary and unusual words.
Lex Fridman
(00:54:03)
So you can do all of that. You can kind of figure out the technical complexities.
Irving Finkel
(00:54:09)
Sure.
Lex Fridman
(00:54:09)
You can figure out the noise, meaning missing pieces.
Irving Finkel
(00:54:14)
Yeah. Sometimes you can calculate what it ought to be, make a reasonable suggestion. And this dictionary, which I was talking to you about, is such a fantastic tool because a lot of people worked on it. It was the National Endowment for the Humanities, and it was for decades and decades of work. And most of the world’s best Assyriologists collaborated on it, so the quality of translation and understanding is really extraordinary.
Lex Fridman
(00:54:41)
What are some things you’ve read from that time? Are there some jokes? Are there some love letters?
Irving Finkel
(00:54:48)
There are one or two letters about that from a chap to a woman about, you know, “You are very beautiful and your lips are like radishes and your ears are like walruses,” and things. But I mean, there are some things like that. And then there’s a kind of street drama in Babylon, in 4th century BC, something like that, when there must have been actors who did this in the street. And it’s Marduk and Sarpanitum, his wife, and another woman. Marduk’s having an affair with this other…
Lex Fridman
(00:55:21)
Oh, no
Irving Finkel
(00:55:21)
…goddess. And Sarpanitum is jealous, and these women fight in the street and hurl insults at one another, and, you know, “slop bucket” and all this kind of stuff. It’s hilarious, and it must have been a bit like a sort of Verdi opera without the music, I suppose. I don’t know. But anyway, it starts off when Sarpanitum is in the room and Marduk’s in bed with this other goddess on the roof, and she can hear. You could say it was an eternal human issue.
Lex Fridman
(00:55:50)
Yeah, love, heartbreak, jealousy, all of that.
Irving Finkel
(00:55:53)
And between deities also. Because deities are only modeled on human beings, after all, so…
Lex Fridman
(00:55:58)
Yeah, deities are a grandiose way of expressing human affairs, human behaviors, human ways, yeah.
Irving Finkel
(00:56:05)
Indeed.

Gods

Lex Fridman
(00:56:06)
In the writing, what was their relationship to the divine?
Irving Finkel
(00:56:10)
Relationship with the divine, well, the first thing to say is that they had a large pantheon of gods. So there were three gods at the top, sometimes called Anu, Enlil, and Ea. There were three gods at the top and hundreds of other gods and goddesses. And you have the situation that I think lots of small villages and towns had their old, ancient gods, and eventually they were all worked into a kind of theological system like a phone book. And the lesser, minor gods were amalgamated and then they were given jobs in the households of the big gods. So there was a sort of structure. So you have this in the background, a big, sweeping theology, like you have in the world today in some parts of the world, and this was the main system.
Irving Finkel
(00:56:59)
And the main gods were concerned with the ruler and the fate of the country. Another god was concerned with illness and the dead, and what happens to the dead, and they had other specialties, and they all had their own temples. And when a baby came into the world, probably this was universally true, the baby was put under the tutelage of one or other of the gods. Sometimes, you know, the royal family, they were the big shots, but sometimes not, or the ones that were in the family or something like that. So people had, grew up with the idea that among all of them, there were special ones for the family and they had a special one who was supposed to look after them. That’s the sort of basic idea.
Irving Finkel
(00:57:46)
But the trouble is, since gods are, as you say, human beings on a larger scale, they can be forgetful or uninterested or on holiday, and there are lots of ways that you have to prompt them. Make little sacrifices and little bribes so they do their job and keep an eye on you. So they had that kind of slightly practical view of gods, that they were a bit unpredictable, “great when they were there but not always there” sort of idea. And I also believe this, that a lot of people in the world today who did not have the disadvantage of growing up in a stifling religion, but are just normal people, get a lot more interested when they’re really ill or when they have a big disaster.
Irving Finkel
(00:58:36)
All of a sudden, God or gods seem a lot more important than they do normally. So few people walk about in a state of religious awe, and a good proportion of the clergymen I’ve ever met don’t do that either. It’s a kind of conception that’s not actually based on reality. The individual’s response to religious stimuli fluctuates and is varied and is often a response to need. It doesn’t come from nothing. I mean, people don’t suddenly feel, “I got to thank the Lord for the rainbow,” or something like that. I think it’s probably true today, I mean, when you read the investigations they make of religion today, Christianity in this country is on the decline because people who are supposed to be Christian say they aren’t anymore, they’re atheists.
Irving Finkel
(00:59:27)
And the people who say, “I go to church and I believe in everything,” are now a relatively small number, which is quite remarkable if you think about it. Lord knows what the consequence will be for the human race, whether religion will balance out, whether it will die off. Who knows?
Lex Fridman
(00:59:45)
I think it’s an ancient technology that has proven across millennia to give a set of tools to humans to contend, as you said, with suffering. That’s a part of life. So when those rare moments come when you have to deal with deep pain, loss, suffering, heartbreak, all those things… looking up to the sky and asking questions and trying to figure out the answers in your conversation with the divine.
Irving Finkel
(01:00:15)
I think that’s true, but I think in Mesopotamia it was different in terms of its potency and immediacy because there are no skyscrapers in Iraq. You know, if you live in Southern Iraq and you sleep on the roof, there are no lights at night. You know, you’re under the stars, you can see everything because of no smog and everything like that. And the idea that the gods are there watching, it’s not like a big artifice like it is here. It just doesn’t ring true here. You can’t come to it and really believe in it, whereas these people didn’t have to really believe in it because it was it.
Lex Fridman
(01:00:53)
It’s the obvious practical part of life. They’re right there.
Irving Finkel
(01:00:57)
Yeah. But they didn’t believe in ghosts, they took them for granted. And they didn’t believe in the gods, they took them for granted. This is a different mechanism, because nobody here in the world today takes those things for granted, just the opposite. But I think that’s how it worked. So you didn’t have people wrestling with the idea of whether the gods really exist or whether they really care about me. They gave them a nudge when it was necessary, and they might offer this, they might offer that, but it was the system, it was the prevailing system. And I think it’s an important difference.

Ghosts

Irving Finkel
(01:01:32)
And also that thing about ghosts is that it’s clear from the inscriptions, all of them that I managed to find, that nobody ever asked themselves, “Do these things exist or not?” Or, “Did I really see them or not?” They didn’t. They just took it all for granted.
Lex Fridman
(01:01:50)
What are ghosts? Is it usually ancestors?
Irving Finkel
(01:01:54)
Well, everybody, everybody who died in bed naturally or peacefully, what we call their ghost, went down to the netherworld, and there they were. So they buried people jolly quick for obvious reasons, and like they do in Islam and Judaism today, it’s the same kind of idea. And the spirit of the person went down through the gates of the netherworld and stayed there. So that’s the basic situation. And people in their houses had actual burials under the courtyard, and they had a thing where you pour stuff down a hole, fluid and food, kind of symbolic offerings to them.
Lex Fridman
(01:02:39)
So isn’t that a way to lessen the impact of mortality?
Irving Finkel
(01:02:44)
I don’t know, because you know that everyone’s going to die. I think the real tragedy would be if we weren’t supposed to. That would be the tragedy. But every single person is going to die. So all relationships have this finite clause in them. So if you’re very fond of somebody or you love somebody and they die, it’s kind of infantile to whine about it ever after. Because what did you think was gonna happen? Either you or them. You know, I always see it like that. I don’t feel grief when people die.
Lex Fridman
(01:03:24)
It is infantile, but I gotta tell you something about human beings. We’re all kind of infantile all the way through from, you know, we don’t stop being infantile after we’re infants. It’s one thing to know it, you know, theoretically, and it’s another thing to know it know it. Like this thing ends, this ride ends.
Irving Finkel
(01:03:45)
But that’s the pain, it’s the fact that the whole thing ends. And when people fall off the edge, they fall off the edge.
Lex Fridman
(01:03:54)
So yeah, the knowledge that it ends is the painful thing, not the actual moment of the ending. Yeah. Many times what makes moments really precious is that they’re going to be gone. I think that’s not a trivial thing for us humans to really contend with. I think religion, religious thought, the divine, I think help with that.
Irving Finkel
(01:04:12)
I think the big mistake for mankind was the creation of monotheistic religions, because they brought evil into the world. Because if you believe in a monotheistic religion, that means I’m right and you’re wrong if you don’t. So it’s already on that footing.
Lex Fridman
(01:04:31)
It’s very dogmatic. Yeah.
Irving Finkel
(01:04:32)
Dogmatic, and it’s led to everything, there’s inquisitions and this, you know, all this kind of stuff. It’s all a result of it, that one religion is superior and the others should be stamped out and all that. And in my opinion, monotheistic religion has generated the most fantastic amount of non-religious feeling. Whereas when you have all the different gods and they have different specialties, and the ones you like and the ones that everybody likes, and they have their temples and their offerings. It was very interesting to me to go into a temple in Kolkata when I went there with my wife, Joanna, we went into the temples and saw how they were, and I think they must be very much like the ones in Mesopotamia.
Irving Finkel
(01:05:13)
So there was never anything about them which affronted people’s individuality, or I mean, there’s no religious prejudice or even racial prejudice in antiquity. All these things are modern disadvantageous matters. If you think what’s done in the name of religion, it is absolutely staggering.
Lex Fridman
(01:05:33)
So let’s talk… go to literature, ’cause we didn’t really mention literature much, except you did briefly mention Epic of Gilgamesh. So that was written in cuneiform. It’s one of the earliest works of literature.
Irving Finkel
(01:05:45)
That’s right.
Lex Fridman
(01:05:45)
Can you tell me about this work?
Irving Finkel
(01:05:47)
Yeah. Well, we know it best from this Assyrian library set of tablets. There are 12 of them, it’s a 12-tablet work, so it’s quite long. And Gilgamesh is the hero of it. But the literature, we know it from earlier texts, and we know that Gilgamesh lived. He was a real person, he was a king in Uruk, and he was one of those people who lived on after their death in the world, like Alexander, for example. So there were stories about Gilgamesh when he was alive, there were stories about him afterwards. And firstly, they were oral literature, not written down at all, and then around the 1800s BC, people started to write them down in Sumerian or Babylonian. So there was a corpus, and eventually they were woven into this long 12-tablet Homeric-type thing about the adventures of Gilgamesh.
Irving Finkel
(01:06:38)
So it is certainly literature, and it’s to do with humanity and immortality and man in the hands of the gods, and incorporates lots of interesting and exciting stories. It’s very Hollywoody kind of thing. And you can see within it, even in the sophisticated Nineveh version, its roots are in oral literature. Because when somebody speaks, it says, “Gilgamesh opened his mouth to speak and addressed his friend Enkidu,” and then there’s a speech. “And then Enkidu opened his mouth and addressed his friend Gilgamesh.” Well, when you’re reading a story, you don’t need that, and that must be because of when there was an enacting of an oral thing, a narrator would say and it suddenly got frozen into the text.
Irving Finkel
(01:07:28)
So it’s a very strange thing, because if you’re reading it, it’s obvious that one person speaks and the other person speaks, and they always have this complicated thing stuck in the text. So it must be an echo of, presumably, your protagonists enacting their timeless matter, and the person who’s writing it down says, “And then Gilgamesh said…” you know, like in a script.
Lex Fridman
(01:07:55)
I mean, what can you say about the telling of stories in written form during that time? Do you think that, too, stretched back in time?
Irving Finkel
(01:08:06)
I do. I think the fireside narrative matters. You know, when we were kids it would be twerps with a guitar sitting around a fire on holiday. But that mechanism, when people gather after dark when there is a fire and talk, is the sort of environment where narrative accounts flourish naturally among human beings.
Lex Fridman
(01:08:30)
Stories, telling of stories. It doesn’t have to be pragmatic, it could be… …Literary in a way.
Irving Finkel
(01:08:35)
Yeah. Either a human person like Gilgamesh or stories about the gods, and someone sees the Milky Way and they think, “There’s a god riding a chariot up it,” and then they have a story about… you know, and all those sorts of things. Or whatever it would be. But I think probably you have to allow for a strong creative principle surfacing in Homo sapiens at a quite early age, because of the paintings on cave walls…
Irving Finkel
(01:09:04)
You try drawing a running antelope in color on a wall. I mean, the quality of the workmanship, of the artistic ability, is unsurpassable. It’s not just good. So how is that an explicable thing at this very early date? It means that among all the population, you have imbeciles and Einsteins, and somewhere along the line you have Rembrandts. And I imagine that half the cave paintings in Europe were done by one person. I mean, you get the impression every family had a genius painter. It’s impossible. Probably there was a person who went from place to place doing these paintings because they could draw straight away accurately like that. But they are a distillation of creative artistic ability plus skill.
Irving Finkel
(01:09:57)
So this is right at a pretty early stage, is it not, the cave painting material? So if you consider the human stock which encapsulates such ideas ever after, then you have to reckon with that. You have to reckon with that. Very creative, very creative people. So the function of stories is to tell the young about what happened, about famous battles or when the flood came, or how people learned to make fire, or how we invented the wheel. All those sorts of things everybody puts down, but that’s presumably what absolutely happened. And you have the capacity for people to adore and to respect among their own kind people of astounding ability.
Irving Finkel
(01:10:47)
There must have been hunters who were ferociously quick and, you know, wrestled with polar bears and all that kind of… And all this stuff would be grist to the narrator’s mill. And as things got more complicated and more sophisticated, so lessons might be incorporated, or lessons might come out of them unintentionally. Because if you tell a story without a moral, there is usually a moral if you think about it.

Ancient flood stories

Lex Fridman
(01:11:13)
And many of those stories are sadly lost to time or not yet found. You mentioned floods, and speaking of stories that have been lost and found, you’re well-known for a lot of things. One of them is decoding the so-called Ark Tablet.
Irving Finkel
(01:11:29)
Yeah. That was a challenge, because it’s really hard to read.
Lex Fridman
(01:11:32)
You gotta tell me the story. This ancient Babylonian clay tablet, dating to 1700 BC, contains a flood narrative that predates the biblical story of Noah by a thousand years.
Irving Finkel
(01:11:45)
At least.
Lex Fridman
(01:11:46)
At least. Okay. Well, you gotta tell me the full story.
Irving Finkel
(01:11:50)
So the full story is like this. Visitors used to come to the museum to ask questions of the experts who worked there, and one would be on duty periodically, and sometimes people would bring objects, sometimes they’d ask questions. And somebody once came in with a whole load of objects, including this tablet, which, to cut a long story short, I identified pretty much straight away as being part of the flood story. It was a tablet about eight inches by three. Not the whole flood story, which is a complex narrative which ended up in the Gilgamesh narrative much, much later, but this one was an early narrative in which the point was taken up where the gods in heaven had decided that the population of Mesopotamia needed to be wiped out because they were so noisy.
Irving Finkel
(01:12:40)
This was the expression. And the gods couldn’t sleep after lunch, sort of thing. So they decided they would wipe them out and create something quieter that worked harder. So this was the basic mechanism, and they had different ways of doing it. And the most effective one was they were going to send a flood to wipe them all out. And one of the gods, the number three in the triumvirate, thought this was a deplorable idea. So he took it upon himself to warn this person called Atra-Hasis, who lived in Mesopotamia, to build a boat to rescue life when the waters came. And in it, he told him the shape of it and the materials he would need and how much he would need of the materials so he could do it safely. And in the 60 lines of the tablet, all this stuff was there.
Irving Finkel
(01:13:29)
It was like a blueprint to build this boat. And it was just extraordinary, because the boat was round. And everybody who knew their Bible, the ark’s a coffin-shape kind of affair. And nobody thought of it being a round boat. And the fact is that round boats were used in Mesopotamia on the rivers, coracles that’s to say, for transporting things, and they would never sink. They were very appropriate. And so in this story, it was decided it was going to be a giant coracle, a really, really big one that would never sink, and the male and female animals could go in, and Atra-Hasis’ wife and his three sons and so forth could go in, and everything would be there and it would float on the water.
Irving Finkel
(01:14:23)
And when it came down, they said, “We’ll start all over again.” So it’s got very many points in common with the description of the flood in Genesis. And of course, so did the one in Gilgamesh. So in 1872, there was an Assyriologist in the British Museum called George Smith, and he was a very, very talented reader. And in 1872, he discovered that one of the tablets from the Nineveh library we were talking about before had on it a passage which ran in parallel with Gilgamesh, about the waters coming and the boat and everybody floating.
Irving Finkel
(01:15:02)
And even to the point that when the rain stopped and the ark came to rest on a mountain, the hero of this thing in Gilgamesh, who was called Utnapishtim, released a bird three times to see whether the trees had come up, and the first one came back and the second one and the third one didn’t. So he knew that… So this was not only in the Epic of Gilgamesh, but it was also in the Book of Genesis. So what it meant was that it wasn’t… You couldn’t have two stories… It wasn’t two stories about the same thing. It was literary dependence.
Irving Finkel
(01:15:40)
It was literary dependence. The one was locked into the other. The text of the Hebrew Bible, from whenever it was written down, of course, nobody knows quite when, but whenever it was, it was about the same time as the one from Nineveh, about the 7th century, 6th century, something like that. The time interval between the Gilgamesh version from Nineveh and the Hebrew Bible is not like a big expanse of time. So there was an argument that one goes this way and one goes that way. But then when this tablet came in, a thousand years older, nobody believes the Bible was written in 1700 BC. So the primacy of the Mesopotamian matter was established. And it’s important because you never get floods in Jerusalem. You just don’t. But in Mesopotamia, they had floods.
Irving Finkel
(01:16:31)
The rivers, sometimes there wasn’t enough water, sometimes there was too much, sometimes there was far too much water. So the mechanism that the waters could be used as a destructive force by the powers that be is a plausible Mesopotamian mechanism and is based, in a sort of sense, in my opinion, in reality. I think there must have been some tsunami once, most people were drowned, and those who survived were in boats, obviously. And then afterward, nobody ever forgot it. It went on and on and on.
Lex Fridman
(01:17:01)
So you mean there actually could have been a catastrophic event of a large scale?
Irving Finkel
(01:17:04)
Yeah, not the whole world, ’cause people were…
Lex Fridman
(01:17:07)
But just enough to imagine.
Irving Finkel
(01:17:08)
Yeah, sweeping down to the Persian Gulf, and, you know, the flat plains, everything would be destroyed, all the houses would be destroyed, animals would be drowned, and…
Lex Fridman
(01:17:16)
This is an incredible discovery. Do you think it’s possible that this is the original? There are flood myths in many cultures.
Irving Finkel
(01:17:24)
I believe this. The Mesopotamians had a deep-seated horror of dependency on water when they couldn’t control it. They were fearful of it. And they had a rainbow in Babylonia, like in the Bible, as a proof that the disastrous flood would never happen again. But I think there must have been one episode of this kind, maybe 5,000 years before the tablet, 10,000, it doesn’t matter. Because with the passage of time, nothing happens in that part of the world. So something will be kept alive, grandfather to grandson, before you go to sleep: “And remember, my boy, you know, you only have to be careful because otherwise…” and all that stuff. For sure, bogeyman stuff. It never quite died out in their conscious minds.
Irving Finkel
(01:18:12)
So I think that when the Judeans from Jerusalem, after the destruction of the temple by the Babylonians and the rout of the priesthood and everything, the king and the others went overland to Babylon as refugees, and they had to live there for three generations of time under Nebuchadnezzar’s reign. So I believe that the text of the Bible was written then, because if you read the Bible attentively, which I can’t say I do on a regular basis, but if you do read it dispassionately, you have the mechanism that the early books of the Bible explain to the reader how it is that these people are in such a mess, because they’re supposed to be the chosen people doing all that.
Irving Finkel
(01:18:58)
And, and look, they haven’t got a temple, they haven’t got a country, they’re washed up and everything like that. So I think that what happened was it’s a complex thing, but the Judeans from Jerusalem, they spoke Hebrew, but they also spoke Aramaic, right? The two languages, they’re sister languages. And the Babylonians spoke Babylonian, and they also spoke Aramaic. And they all wore the same kind of clothes, and they all had brown skin. And when all these refugees from Jerusalem were milling around in Babylonia, they would have intermarried and disappeared within no time at all.
Irving Finkel
(01:19:36)
And the authorities who were there prevented this by drawing up a kind of charter of their history, explaining things from the beginning of time up until now, how it happened and what happened, and it was all intentional. So that is, in my opinion, the driving force behind the Hebrew text. And the thing about it is that they didn’t have in Jewish philosophical tradition stuff about creation and the beginning of the world. And they took Babylonian ideas, which they learned when they were there, and they recycled them. So whereas the Babylonians decided that the gods were going to wipe out the noisy persons, when the Jewish philosophers got this narrative to recycle about the vengeful Almighty who was in the Old Testament a very unpleasant and vengeful person, it was because of sin.
Irving Finkel
(01:20:34)
It wasn’t because of racket and playing the radio, it was sin. So they took one narrative and they recycled it for their own purposes.
Lex Fridman
(01:20:43)
The flood is a useful tool to punish people for whatever X is?
Irving Finkel
(01:20:49)
That’s exactly right. And something else is this, right? You have five days to build the ark or whatever it is, or two weeks to build the ark, so the clock goes tick, tick, tick, tick, tick. And about a third of the films that come out of Hollywood are the world’s going to be demolished by aliens and you’ve got 24 hours to think of a cure, tick, tick, tick, tick, tick, tick, tick. So it’s that narrative is irresistible, that one man can save the world, if he’s lucky, in time from disaster. So it starts off with Utnapishtim, and then it goes on to Noah, and then it goes on to Hollywood.

Noah’s Ark

Lex Fridman
(01:21:27)
Do you think this ark in the tablet was ever actually built? You did build a replica, one third the size, and people should check that out; you tell the story of it wonderfully. What did you learn from building this replica? And do you think the actual ark existed?
Irving Finkel
(01:21:46)
No, I don’t think so. I think it’s a literary construction out of the reality that people who did survive were on boats. I mean, they had boats for sure, and you might wake up in the Persian Gulf and never know what happened, but, you know, it’s a literary moral principle teaching narrative. And look, missionaries take it all around the world. That’s the other thing. See, this is the mystery of it, that you have flood stories everywhere, and some of them are from meddlesome missionaries who have all these innocent little kids sitting on benches, and, “I’m going to tell you a story,” like that. So it moves into this consciousness, then it gets recycled, and it gets recycled. So this is one thing.
Irving Finkel
(01:22:29)
And then also, there probably are spontaneous ideas, because it’s not so complicated or so amazing that independently people would have such a narrative. After all, you know, like the great river in China floods and everybody gets… so that’s not at all surprising. But what was so shocking for George Smith, who was such a clever person, is to read for the first time on this tablet from Nineveh, long before the one that I discovered came to light, about the three birds being released one after the other. And that was the clincher that the two stories were locked together. And lots of clergymen got very miserable about it and didn’t know what to make of it.
Lex Fridman
(01:23:10)
So that’s definitive proof that those are literary-
Irving Finkel
(01:23:13)
A literary, I think literary link. I think so.
Lex Fridman
(01:23:18)
And I mean, these puzzles are then connected, but the ark you discovered is 1,000 years older. So that means that story of the flood has been told many, many times across that span, you know… “Do your homework or the flood is gonna come.”
Irving Finkel
(01:23:40)
That’s right.
Lex Fridman
(01:23:40)
Do all that, can… oh.
Irving Finkel
(01:23:42)
That’s right. And every time somebody built a coracle and they didn’t do the waterproofing right—
Irving Finkel
(01:23:49)
…you know what will happen? You’ll be out on the river, and that will be your lot, you know? I think so. I think it was a… there’s a certain amount of evidence that in Mesopotamian society, people talk about the time before the flood and after the flood. And it’s like when I was a boy, people used to talk about, “Before the war, we used to…” And now we do this. It’s a kind of cataclysmic cut across history, which provides a ruler, so things are either before it or after it. Because there’s a king list, for example, where they wrote down the names of all the kings, all the way back to the beginning, including kings before the flood. They knew about that… they have their names and their great regnal years of thousands of years. Fascinating.
Lex Fridman
(01:24:42)
So there’s a guy named Graham Hancock who talks about the Younger Dryas hypothesis, 10,000 BC, that there was an asteroid that hit Earth and melted the ice sheets, and that created a flood in North America. So that means an actual cataclysmic global event, that then as all the different civilizations sprung up, they all carried that knowledge, that memory. That’s his idea. What probability would you assign to that?
Irving Finkel
(01:25:13)
I would say negligible, because I regard it as a literary matter— …which is not predicated on the existence of a flood in people’s minds. But I do believe that the story in Mesopotamia owes its inception to a disastrous flood, but nothing global. Nothing that touched America or China or Birmingham. So I don’t have any sympathy with that. But people have made drilled cores, and then they… I’m not interested in all that stuff. It’s, to my mind—
Lex Fridman
(01:25:42)
It’s a literary device.
Irving Finkel
(01:25:43)
…it’s a literary topos of great potency, of irresistible potency, because everybody identifies with the idea of being in bed and someone knocks on the door, says, “Get up, you’ve got to build a boat and this is what you’re going to need, and you’ve got to get on with it, sunshine, or we’re all sunk.” I mean, what are you going to do? The most interesting thing is this Atra-Hasis in the 1700 BC text, he wasn’t a king and he wasn’t a sailor or a boat builder. So how come this clever god who wants to find someone to build… wouldn’t you go for a look in the Yellow Pages for a boat-building company, say, “Listen, fellow, I’ve got a deal…” No.
Irving Finkel
(01:26:22)
He had to tell him, “This is the blueprint, this is the shape, you need all this, you need all that, you’ve got to measure it and all that.” It’s a very interesting thing.
Lex Fridman
(01:26:30)
I mean, yeah, that’s a great story. You don’t go to the great boat builder, you go—
Irving Finkel
(01:26:35)
Taxi driver or something like that.
Lex Fridman
(01:26:36)
…to the taxi, and then that’s that—that hero’s journey. That’s the stuff of great myths, yeah.
Irving Finkel
(01:26:43)
It is, it is a great myth.
Lex Fridman
(01:26:45)
A little detail about the replica would be really cool, like, uh—
Irving Finkel
(01:26:49)
Of the boat?
Lex Fridman
(01:26:49)
What did you… Of the boat, yeah. One-third replica, of course.
Irving Finkel
(01:26:51)
That was something else. There were three blokes who did it. And they were specialists in reconstructing medieval Arab boats. Because quite often, they’re found in the mud, or bits of them, or they have information and they reconstruct them, so they were at home in it. And we built it on a small lagoon in Kerala. It was just the most unbelievably wonderful thing, because they used the instructions as a blueprint. They made it about a third of the size of the original, a pretty huge thing. But they made it because it had wooden ribs, you see?
Irving Finkel
(01:27:28)
They could get wooden ribs. They worked out by computer the maximum size at which it would work. Beyond it, it would be impossible, because once they built the curved ribs, and then they stuffed woven material all around it, it had to be covered in bitumen, which is also very heavy, to make it waterproof. So they calculated the size and it worked. So they built this thing on rollers and it was pushed out into the… It was just the most unbelievable… I went out there with my dear wife for the last few days and was on the maiden voyage. And they had trouble with the bitumen, because Indian bitumen is really not up to scratch, and they couldn’t get Iraqi bitumen because it’s cultural property, and it’s carcinogenic.
Irving Finkel
(01:28:10)
They wouldn’t export a tanker load of Iraqi, so we had to use the Indian stuff. But the thing is this, the bitumen which they coated it with was okay but it wasn’t perfect, so when it went out into the waters, there was a bit of a leak, water had to be bailed out. So, they said, “Ah,” you see, and I said, I said, “Okay, listen, sunshine,” I said to this producer, “Have you ever been in a rowing boat without water in the bottom? Excuse me?”
Lex Fridman
(01:28:35)
Oh, you’re saying that’s a feature, not a bug.
Irving Finkel
(01:28:37)
That’s the feature of the thing, yeah. That’s the feature. That thing could have gone to ports.
Lex Fridman
(01:28:41)
So it’s authentic.
Irving Finkel
(01:28:43)
Absolutely right. We had such an adventure with that thing. They made a documentary film, in various languages. And you know what they did? You know, I was in it a bit, and they had people saying, “Oh, I don’t think it was this, I don’t think it was that,” you know. And they didn’t let me go back and say, “What the hell are you talking about? I did it. I know what… I can read, you know. Can you?” They didn’t do it. I couldn’t get my own back. I was really annoyed. Really furious.
Lex Fridman
(01:29:10)
So you’re saying that there are some inaccurate things in it.
Irving Finkel
(01:29:13)
I am saying there’s some inaccurate things. Yeah, somebody in Iraq said, “Oh, it couldn’t have been that. They probably had lots of little coracles all tied together.” Did they f-? I mean, I, you know, he couldn’t read the stuff. I mean, it’s really, really, really annoying. I mean, you should have a chance, shouldn’t you? You know, if you’re gonna have a fencing match, you both have to have a rapier, wouldn’t you say?
Lex Fridman
(01:29:32)
Yeah, and you’re the OG. You’re the person that decoded it.
Irving Finkel
(01:29:37)
Well, I can read. Yeah. But the thing is this: the proportions of the material were accurate. This is the crucial thing, that… what had happened was, they took the information about how you make a real coracle, which is usually enough for two people… and a few sheep and goats…
Lex Fridman
(01:29:56)
Got it.
Irving Finkel
(01:29:56)
…and they bumped them up… so that it worked. And I know why that is, because it goes back to your question about oral literature, because there must’ve been times when people went to villages and told them about the flood, and when they got to the question of the boat, they’d say something like this: “And Enki said, ‘You’ve got to build the biggest coracle you’ve ever seen.'” Like that, right? Well, I mean, if you do this in a cinema in Guildford, people will say, “Well, that’s fine,” but if you do it to a whole load of river people who use coracles and make them, they’re not going to take that, they’re not… “So how big was it then? Come on, how big was it?” So what do they do? They go to a coracle place and they work out…
Irving Finkel
(01:30:45)
…the proportions of material, and then they bump it up so that the actor who reads this, for the first few times, he has in his pocket how much it is. But after a while, he knows it by heart, so that none of these people get angry: “You can’t expect us to believe it was big enough for all this.” So then he’d have all the stuff and he’d do it this way, “And you need all this, and you need all this,” and they’d all be hypnotized by it. That, I think, regarding your question, is actually on the cusp between purely oral literature and purely literary literature. It’s actually there, because you can see that it was molded in the environment when people were still talking.
Lex Fridman
(01:31:26)
Yeah, you’ve got to make it authentic to really connect with people.
Irving Finkel
(01:31:29)
Well, you couldn’t pull the wool over their eyes. I mean, you know?
Lex Fridman
(01:31:32)
Yeah, well, I wish many of the films in Hollywood today would have the same level of rigor.
Irving Finkel
(01:31:37)
Rigor is one of the things lacking in the world.
Lex Fridman
(01:31:40)
By the way, I forgot to ask, why was the flood myth focused on noisy people?
Irving Finkel
(01:31:45)
Well, it can’t really have been the noise. I’ll tell you what the explanation is; it’s something quite different. Before the flood, the gods had not created death. So I think the noise was a reflex of the fact there were just too many animals, too many people, and they had to do something about it. So it’s a sort of euphemism, so to speak, because after the flood, at the end of the tablet… not my tablet, but the other ones, where it’s still broken. It says, there’s a tantalizing thing where they create barren women who can’t have children, and men who can’t have children, and princesses who don’t have children, and they institute in society some figures who will not reproduce the species. So it’s actually a rather sophisticated Malthusian kind of philosophical position.
Irving Finkel
(01:32:41)
It’s remarkable. So that the noise means there are so many of them, not that they’re actually so noisy that we can’t hear ourselves think.

The Royal Game of Ur

Lex Fridman
(01:32:50)
You have to tell me about the world of ancient games. Maybe we can start with the ancient Royal Game of Ur. What is it and how were you somehow able to crack the rules of it?
Irving Finkel
(01:33:05)
Well, the Royal Game of Ur is a board of 20 squares in a rather idiosyncratic form. And it was pretty much unknown until the 1920s when Sir Leonard Woolley was digging at the site of Ur, and in the graves of the royal family, Sumerian rulers, they found four or five boards of this pattern, together with dice and pieces, which showed that it was popular among them at this time, and also that wherever they were going in the world to come, they would want to be playing it. And so that was one thing, and we had the number of pieces and some dice. So lots of people had ideas about how it might have been played, and that went on like that for a very long time. And thereafter, boards for this game turned up in most of the countries of the Middle East, sometimes quite a lot of them.
Irving Finkel
(01:34:07)
And the one from Ur dates to about 2600 BC, and from then down to the end of the first millennium, there are examples of boards from Mesopotamia itself and from Egypt, Syria, Lebanon, Jordan, Turkey, Greece, Crete, all over the place. And when you put all the boards together, you realize that you’re dealing with a board game which was extremely widespread and extremely popular.
Lex Fridman
(01:34:44)
Across space and time.
Irving Finkel
(01:34:46)
Across space and time. So, it lasted for nearly 3,000 years and it was played all over the place. So, it’s one of those games that’s like chess or backgammon, which you can say are world conquerors. Because the way I see the issue is that human beings for a very long time have been, shall we say, hungry for things to do. Because all through the Bronze Age and the Iron Age, there was no television, you know? There was nothing, and kids played with pull-along things and adults had board games, and they’re kind of embedded in culture from a very early time.
Irving Finkel
(01:35:38)
And this game was so widespread, you know, Tutankhamun, for example, in his tomb, there were two or three boards for it with the pieces. So it arrived in the middle of the second millennium in Egypt and even the Pharaoh played it. So you have a game, and the interesting point about it is that it spread across the known world without written rules and without people necessarily knowing the same language. So a merchant would go, end up in a bar, you know, come from India or I don’t know where, start seeing these guys playing, have a go himself, and it looks rather interesting. You go home and try and remember what it looked like and try and work out how the pieces, you know, get transported this way and the other.
Irving Finkel
(01:36:29)
And so you can see that the board has 20 squares, so you have a block of four by three and then a bridge of two and then a second three by two thing at the end. So it’s difficult to describe the actual shape. But what happened was after about 2000 BC, the squares at the far end, which were two on one flank and two on the other, were all put at the end of the central avenue. So you end up with 12 squares down the middle. All the boards after the period of Ur have 12 squares down the middle, and then four on each side at one end. So it meant then that when you play the game, you have dice to move the pieces, you have pieces all the same, and you obviously put them on your first corner, and you turn the corner and you go up the middle and off the end.
Irving Finkel
(01:37:22)
And it was a race game of the kind that everybody knows from their own childhood. Some squares, which had rosettes on, were either safe squares or you had another throw, and you could maybe put two on one square, we don’t know. You could try and block people. But anyway, the crucial thing is that the widespread distribution of this idiosyncratic shape, and it’s a lasting thing, shows it must have been a very good game if people more or less played the same thing on it everywhere. I mean, it may be that they were completely different games, but probably not. So this is the thing, it makes you wonder what would be about it that would fit so well with a wide appetite from different persons, different types of person.
Irving Finkel
(01:38:08)
And the thing is that although it’s a race game where you’re at the mercy of dice and lucky squares and unlucky squares, the process of getting your pieces on and off the board as a winner is primarily fortuitous. But it has built within it, the way I understand the game plays, a measurable quota of strategy. It’s a mix of probability and strategy. So most games are either just probability, like Snakes and Ladders (Chutes and Ladders is just a thing like that), or you have a game like chess, which is pure strategy. It’s a mix, and the grown-up game in the modern world where fortuity, or chance, and strategy have a good balance is backgammon, which is a sort of grown-up version of this sort of game.
Irving Finkel
(01:39:03)
where, nevertheless, if you play according to the most rational interpretation, its strategy is a major factor. So what happened was that many people had ideas how it was played, and the route followed, and I did too. And then I discovered this tablet in the British Museum, which was written at a very late period, in the second century BC, so 2,300 years after this object existed, and it had on it the names of the pieces and what the pieces were like and various things about the throws. And it was obvious that the rules were to do with a game which was derived from this simple early game, and that, working backwards from it, you could reconstruct the game, in accordance with its later incarnation, in a way that might be workable.
Irving Finkel
(01:39:54)
And it jolly well turned out to be workable, ’cause people play this all over the world now. And they even play it in Iraq, in cafes. “Wait, now?” They do. “Oh, wow.” Because after it’s come back to life, it’s on the internet, people play, there are different rules. “That’s amazing.” The ones I invented are pretty much the regular ones. So it has a good balance between chance and strategy, it’s a fair game, and it doesn’t take four days to play like modern board games. So you could have a go and, if you’re lucky, you win fast, and then you have another go, maybe best of three or something like that. It works out rather well.
Irving Finkel
(01:40:32)
And once I was in California, at the Getty, and I had to give a talk about this with all the information, because there are lots of things to say about it. And the lady who ran the Friends of the Getty had a brilliant idea: she brought in 20 or so commercial copies of this game, and they had small tables with chairs. And after the lecture, I was supposed to say to everybody, “Okay, this is what you have to do, this is how you play.” Because you can get the rules down in like three minutes. So I said, “Okay, first you have to do this, first you have to do that, so off you go.” So there was silence, and then after a while, someone said, “I hate you!
Irving Finkel
(01:41:13)
I’m never playing this game with you again.” When they’d never played it before, when somebody had escaped at the last minute, cleaned up just when they thought they were going to get it. And it provokes that salutary, benevolent fury and rage in the players, which all good board games do. And they were happily married couples who were, at the end of the afternoon, phoning their respective lawyers to discuss the future, that kind of thing.
Lex Fridman
Beautiful. Do you think games, our desire to play games, a mix of chance, a mix of strategy, is a part of human nature? Do you think that’s always been there?
Irving Finkel
I do. I do, yes. I think…
Irving Finkel
(01:41:59)
I mean, you can say that in communities you have rivalry, hostility, and who’s the best, who’s the fastest, who’s the strongest and things. And if you play a board game like that, all the reality of it is sublimated into a safe terrain, where you can nevertheless get angry, but it’s not like that. That’s one thing. But more significantly, I believe, is the question of what in India people call “time pass,” which is not quite the same as “pastime.” Time pass is the question of what you do when it’s too hot to do anything, which is true a good part of the day and a good part of the year. And grandmothers sit under trees with their grandchildren and they tell stories and they do this and they do that.
Irving Finkel
(01:42:47)
And “time pass” is a very useful catch-all phrase for the existence of board games. And in India, there are many board games. Chess, of course, is the famous one, but there are quite a lot of three-in-a-row type games or fox-against-geese games, and wolves against sheep and all those sorts of things which come out of the landscape in miniature and were played for pleasure. And also in a kind of way, it doesn’t really matter who wins, because you might play and it goes round and round and round, eventually somebody wins and then they have another game. So it’s a sort of that kind of rather graceful, valid function for not wasting time, doing something which is stimulating and beneficial without it being overpowering in either way. So I think it is a human matter.
Lex Fridman
(01:43:42)
Of course, we humans also sometimes mix in gambling into the whole thing, to add some money on top of it, which I’m sure sometimes was- … involved here.
Irving Finkel
(01:43:52)
I think so, but probably only late on, because money as such… of course, that doesn’t appear until quite late. But there are… We know in Mesopotamia, it’s a rather interesting thing, there’s a school tablet with three or four lines quoted from one literary thing and three or four from another literary thing. And one of them has this: “Oh my astragal, oh my astragal, woe is me, woe is me.” And that’s all we have. And I think this is an example of a genre of literature called “the gambler’s lament,” because they used knucklebones, or astragals, as dice. And I’m sure there were people who bet a sack of this or a roomful of that on the throw of the knucklebones.
Irving Finkel
(01:44:48)
And this extract in the school text is probably from a literary tablet in which somebody lost everything, even though there weren’t coins, because I think you’re right that it’s a natural thing for it to accrue. And also maybe men and women played differently, because there are some games which were played in harems among girls, you know, women on a hot afternoon where nobody was going to win anything. But the rules tablet, which gives this kind of backhanded information about it, is couched in such a way that it talks about people in a bar. Because the movement of the pieces is calculated in terms of food and drink and women, what you win. So the landscape in which the rules are couched for credibility is for just exactly that setup.

British Museum

Lex Fridman
(01:45:49)
As you mentioned, you are the curator at the, possibly the greatest place on Earth, the British Museum.
Irving Finkel
(01:45:57)
Oh, yes.
Lex Fridman
(01:45:58)
Can you tell me what are some of the incredible, magical aspects of the British Museum?
Irving Finkel
(01:46:04)
Well, the British Museum is a magical place and it’s a special case because there’s a lot of flurry and dispute now about what museums are and what they’re for and why they exist and whether they should ever have existed, and all these sorts of issues which people go on about. But the British Museum is unlike almost all museums in the world because it’s to do with the achievements of mankind from the beginning onwards. So it’s a kind of celebration of art and more, but it’s not an art museum. It’s to do with the struggle of the human race against all the things that beset it and how it has triumphed, and how marvelous it is and the things that have happened.
Irving Finkel
(01:46:50)
And not turning a blind eye to all the contrasting horrible things that have happened, but it’s the narrative of the human race, as I see it, as discernible in objects. So it means that we serve two very important horizons. One is that we represent, as far as we can, the whole world with no injudicious attention paid to any one or other culture, that they’re all to us one. So there’s no favoring any religious group, any country group, anything of the kind. It’s the human species. We try to tell the narrative of, in its own right, and how it overlaps with its neighbors and what it’s learned from what came before. All those features together is really what the concern of the museum is. And of course, to collect everything we…
Irving Finkel
(01:47:53)
Or has been, to collect everything we can to tell those narratives, and also to look after them according to scientific principle. So, all those things at once are the task for the British Museum. And the second horizon it serves is the unborn. So babies yet to be born, and their children, and their children, and their children. And it seems to me that the task of the museum is of such cultural significance and such, so to speak, sacred validity, that it shouldn’t have to put up with people carping about this or that or saying, “Museums are sinful and wicked and should be demolished,” because the people who say these things don’t really have any idea of actually what it really does stand for.
Irving Finkel
(01:48:44)
And it’s a kind of lighthouse in a universe where we are surrounded by darkness, ignorance, stupidity, uninterest, disinterest, skepticism, ignorance, and so forth about the very issues that we’re interested in. And it’s one of the places in the world where you can talk about truth and beauty and elegance and intelligence without it being an affront to people who have none of those qualities, and without it being the kind of speech that people shudder or they think you’re being naive about it, because those are the crucial things. And also about religion, that we don’t favor a religion and we don’t sponsor a religion. We try to look at them for what they are and to assess their relationships and what they offer.
Irving Finkel
(01:49:40)
Perhaps with less acerbity and less criticism than I would if I was the director. I would try to put them down the wrong end of a microscope and look at them for what they are and what they have done and what’s been done in the name of religion. You probably would never get away with that, but maybe one day that will be an important part because it’s a major contributive factor to what’s happened to the human race, which has never really articulated sharply about what religion has done to us and where we might have been without it. Because not having religion does not mean not having law or morality or sensitivity or consideration or love or any of those things. None of those things depends on religion, and those are the things which are important.
Irving Finkel
(01:50:28)
So I think it’s people say, “Oh, you say this ’cause you work there and you, you know, you’re a curator, you wouldn’t say that the British Museum is a special place.” It’s nothing to do with that. It is actually a special place because you cannot point to another museum in the world with the same task. For example, the Louvre is basically a museum of art, basically a museum of art, not a museum of ideas. And the Met is definitely a museum of art. It’s called the Museum of Art and that’s their priority. Design and color and shape, to my mind, it’s the British Museum. This is one factor among many others. And we’re not an art museum and we’re not a local museum, we’re not a museum of the history of the bicycle, we’re not a celebration of evil.
Irving Finkel
(01:51:16)
We are, as it were, doing, as I see it, the best we could do if, for example, a whole load of Martians arrived in the Great Court and burst through the front door and said to us, “Tell us all about this place. Tell us about the world. Can you do it fast ’cause we got to leave?” And if you took them around and said, “Look at this, look at this, look at this, look at this,” they’d get some picture which wasn’t insane. The only thing they wouldn’t get is a recording of “Johnny B. Goode” by Chuck Berry, but apparently one’s been put into space. Yeah. I’ve heard about it. So this is a very comforting thing.
Lex Fridman
(01:51:58)
But that’s kind of what… The task of the British Museum is to do that, but for the entirety of human history.
Irving Finkel
(01:52:02)
Yeah. It can be done.
Lex Fridman
(01:52:03)
It would be a store of artifacts…
Irving Finkel
(01:52:05)
Yes
Lex Fridman
(01:52:06)
…that are the raindrops from which you can reconstruct the waterfall.
Irving Finkel
(01:52:09)
Precisely so. And it’s not a valid criticism to say to us that most of the stuff is not on exhibition, which is what everybody says. “It should go here, it should go there ’cause it’s not on exhibition.” But we’re not doing it for any other reason than stockpiling for future examination. See, this is the important perspective that nobody considers. Because the thing is, when you have something which is contemporary, if you’re a clever journalist or a clever thinker, you can write essays about it, you can talk about it and you can see it, but you can only see it from the perspective from which you operate. And with the passage of time, the significance of objects, what they stand for, what they meant, and what they can still mean shifts.
Irving Finkel
(01:52:57)
And the further back you go, the sharper you can understand things, especially in terms of their own precedent and their own contemporary parallels. So the benefit of distance, storage, and contemplation is inestimable.

Evolution of human civilization

Lex Fridman
(01:53:13)
There are so many questions I want to ask you. What wisdom do you think the people from whom these artifacts came had that we, the modern-day humans, may have lost or lost in part or in whole? It’s often, as you’ve spoken about, we see the ancient peoples as lesser, dumber, more primitive. And you’ve spoken about how they are basically the same.
Irving Finkel
(01:53:46)
I think if you put them on a bus all wearing the same clothes, you wouldn’t know. That’s my feeling.
Lex Fridman
(01:53:51)
But there is some… I’m sure there’s some greater wisdom they had about certain things, as we have greater wisdom about others. Thanks to Einstein, we’ve figured out the curvature of space-time.
Irving Finkel
(01:54:05)
Yeah.
Lex Fridman
(01:54:05)
Which they didn’t know about. But…
Irving Finkel
(01:54:07)
They knew quite a lot about astronomy, though. Quite a lot about astronomy.
Lex Fridman
(01:54:11)
They stared at the stars.
Irving Finkel
(01:54:12)
Yeah, and they measured them and they made calculations. And when the Greeks went to Babylon, they thought, “Hey man, this is really cool.” And they wrote it all down and went home. Yeah, definitely, definitely. Well, I think it’s a hard question to answer. But one of the things is that they were spared things which have cluttered up the essence of humanity. Because I think that the modern adherence to the electronic universe is disastrous for humans, and because it reduces the vitality of the human component. I think it’s restrictive in a way that people don’t realize until it’s too late. Like drugs, if you take drugs now and again, you think, “Oh, it’s fine, it’s fine.” Then suddenly you realize you’re addicted to heroin. It’s a bit like that.
Irving Finkel
(01:55:06)
People use the electronic world like an addictive drug and they can’t get through without it. And I think this is a very recent thing, but I suppose I’m not a Luddite who says we shouldn’t have railway engines and we shouldn’t have kettles. But I think one of the things about the ancient world was that people never went anywhere unless they were merchants or soldiers. They never went anywhere. Probably people were born and died in a village and then their children were born and died in the village, and they never knew anything about the outside world. Maybe a very little. Sometimes there’d be a message, but in principle, they had no idea about other countries, other languages, or how big they were or any…
Irving Finkel
(01:55:50)
So I don’t think they had wisdom in a way that you could type out following precepts will make life better. Because they told lies and they esteemed the truth, and they fell in love and they committed adultery and they did murder, and they did all the things. I think in a way, the ancient world allowed human beings to behave more naturally than the world in which we live now. I mean, if you live in a rustic environment or by the sea, or you’re a fisherman, or you… I mean all those normal, real kind of things, then it’s probably all right. But most people who live crammed in the cities live a very, very artificial life where the principles which they regard as ineluctably crucial are not ineluctably crucial. They’re not in…
Irving Finkel
(01:56:43)
You know, one example is this ghastly thing on mobiles where you get a short clip from a real program.
Lex Fridman
(01:56:50)
Yeah, yeah.
Irving Finkel
(01:56:50)
I think it’s utterly, utterly wicked. So you have children all over the world who cannot articulate, spell, or make meaning clear using the best, most literary, and most beneficial language that’s ever been created, which is English. They have to save their lives, and they use a word… I’ll give you an example.
Lex Fridman
(01:57:13)
Yeah.
Irving Finkel
(01:57:14)
Right? Like I went.
Lex Fridman
(01:57:16)
Like I went.
Irving Finkel
(01:57:17)
Like I went.
Lex Fridman
(01:57:18)
Yeah.
Irving Finkel
(01:57:18)
So difficult to define that grammatically. Difficult. Like, I should have gone. Where “I went” or “I should have gone” means “to speak.” Now, how would it be if when we see the verb “to go” in Sumerian it actually meant “to speak”? Where would we be? Where would we be?
Lex Fridman
(01:57:39)
I mean, we should probably say that even in that time there was probably slang, right? It just wouldn’t end up written.
Irving Finkel
(01:57:45)
Yes, there were dialects. There were words that sailors used. For sure, all those things.
Lex Fridman
(01:57:50)
But they wouldn’t end up in writing.
Irving Finkel
(01:57:52)
Sometimes they do.
Lex Fridman
(01:57:53)
We have to remember that Cambridge and Oxford speak in a certain way that’s proper and formal and very smart, but most of the people in bars, sailors, have a different way of speaking.
Irving Finkel
(01:58:06)
They do.
Lex Fridman
(01:58:06)
So they would probably say “like I went” and have emojis and…
Irving Finkel
(01:58:13)
But the thing is, you have to moderate your vocabulary. If you talk to people of a certain age, because they don’t know what the fuck you’re talking about if you use language. And the thing that was just so exquisite about English is like with a barrister, you can make a case which is absolutely wonderful because it says exactly what it means and there’s no wiggle room. The conversation should be like that, with no wiggle room. It’s not just a matter of spelling, but the basic vocabulary. You know something very interesting, people say they know English or they speak English. Have you ever in your life opened a full-size volume of the Oxford English Dictionary? It’s about that thick, this fat. I have a whole set. I love them. So this is it.
Irving Finkel
(01:58:58)
You take a volume off the shelf and you open the book, and you run your forefinger down the various columns of writing. You might have to turn several pages before you find a single word you’ve ever heard before, because English is unimaginably rich. I grew up in a house where everybody read literature all the time, and I had three sisters and then a brother, and we all read literature. Went to the library every week, read lots and lots and lots of books, so we all had really good vocabulary, and that’s how you get vocabulary. Otherwise, you don’t, because in conversation, “Do you want more tea?” All this sort of stuff, you don’t learn new vocabulary, you have to get it from reading and listening to proper stuff.
Lex Fridman
(01:59:46)
Which is a very important aspect of vocabulary, why it’s important to know a lot of words and to speak clearly, because those words also define the quality of your thoughts.
Irving Finkel
(01:59:56)
Sure
Lex Fridman
(01:59:56)
… at the end of the day.
Irving Finkel
(01:59:57)
That’s exactly right. I must say, I think it is a pity that, having produced such wonderful languages in the world, their use is so inhibited.
Lex Fridman
(02:00:08)
I think the right way to think about it is the way the British Museum thinks about it. So you’re commenting on the ephemeral, on the thing that is happening in the moment right now. The reality is only a few select things will last a hundred, 200 years from now about this moment in time. And so we have to sort of think with the big picture perspective and the slowness of time. Yes, in the moment there are these catastrophes, there are changing ways of speaking, the technology tearing apart the fabric of society, but when you zoom out… You will think about the grand ideas of Einstein, the battle of ideologies with communism and Nazism of the 20th century. The bad, the triumphant, the rockets. Humans started launching rockets, going to the moon, maybe to Mars.
Lex Fridman
(02:01:05)
Those things. And we won’t be thinking about emojis and any of that. And in some sense, that’s the stuff you’re looking at with cuneiforms, is the things that stand the test of time, that are there.
Irving Finkel
(02:01:23)
That’s true. But I think that language, properly used, is a crucial human tool for communication.
Lex Fridman
(02:01:32)
Absolutely, yes. Speaking of which, I have to ask some more about the cuneiform tablets at the British Museum, when you’re surrounded by so many… And by the way, how many cuneiforms?
Irving Finkel
(02:01:45)
About 130,000.
Lex Fridman
(02:01:47)
That is so cool.
Irving Finkel
(02:01:48)
It is. It’s pretty cool.
Lex Fridman
(02:01:49)
What are some of the most beautiful cuneiforms to you? Maybe ones we don’t know about, that make you smile?
Irving Finkel
(02:01:59)
Well, there are not many jokes. You asked about jokes.
Lex Fridman
(02:02:01)
Yeah, they lost their sense of humor in cuneiform.
Irving Finkel
(02:02:04)
Yeah, I think there are. There’s one that I can remember. A fly or a mosquito lands on the back of an elephant and says, “Am I too heavy for you?” Or something like that. That’s sort of a Babylonian joke. You wouldn’t use it in the pub or anything like that.
Lex Fridman
(02:02:21)
Yeah, yeah. You had to be there.
Irving Finkel
(02:02:24)
But then also, do you like Tom Lehrer?
Lex Fridman
(02:02:26)
Of course.
Irving Finkel
(02:02:27)
Okay, that’s good. That’s good. I once went to America on a lecture tour and I ended up in the town where Dr. Wernher von Braun ended up running the American rocket… … industry.
Lex Fridman
(02:02:48)
It doesn’t matter?
Irving Finkel
(02:02:49)
Once the rockets are up, who cares where they come down?
Lex Fridman
(02:02:51)
Where they come down, yeah.
Irving Finkel
(02:02:52)
“That’s not my department,” says Wernher von Braun.
Lex Fridman
(02:02:55)
That guy, I mean, I could tell where your wit comes from, the fact that you know Tom Lehrer.
Irving Finkel
(02:03:00)
But he’s such a… The way he plays the piano is fantastic. I think my dad recorded them off the radio on a reel-to-reel tape recorder, and I learned them all by heart because they were so fantastic. But I knew a Harvard professor who I stayed with once who was a Sumerologist. And his wife said that she knew Tom Lehrer when he was in the math department… …And they used to have parties and he always played the piano in the corner of the room. He’s just amazing, that man.
Lex Fridman
(02:03:25)
Yeah, I mean, he had a real… You have that. You know, I’ve watched a lot of your stuff. Your whole way of being, the wit. There’s something about that biting wit. It’s a bit of humor, bit of sadness in it. It just kind of feels like it really quickly gets to the complexity of what it means to be human.
Irving Finkel
(02:03:49)
I think so. But the paradoxical thing about Tom Lehrer is that when he’s talking about the bomb and all that, and devices, and international trouble, it’s so unchanged. And same with Dr. Strangelove. It’s just, it’s very remarkable. Anyway, next time you’re here or when you’re here, you should come and see me in the museum.
Lex Fridman
(02:04:16)
I will
Irving Finkel
(02:04:16)
And I’ll show you…
Lex Fridman
(02:04:16)
I will
Irving Finkel
(02:04:16)
…some of these confounded things for yourself, and show you the Chicago Dictionary and give you a grammar book to learn.
Lex Fridman
(02:04:26)
Irving, you’re a remarkable human being. It’s-
Irving Finkel
(02:04:28)
Well, I’m very glad we met.
Lex Fridman
(02:04:29)
It’s truly an honor to meet you, to talk to you.
Irving Finkel
(02:04:31)
Me- me too. It’s been very interesting.
Lex Fridman
(02:04:33)
Irving, thank you so much for talking to me.
Irving Finkel
(02:04:34)
It’s been a big pleasure for me, Lex. Be well.
Lex Fridman
(02:04:38)
Thanks for listening to this conversation with Irving Finkel. To support this podcast, please check out our sponsors in the description where you can also find links to contact me, ask questions, give feedback and so on. And now, let me leave you with some words from Ludwig Wittgenstein, “The limits of my language mean the limits of my world.” Thank you for listening. I hope to see you next time.

Transcript for Michael Levin: Hidden Reality of Alien Intelligence & Biological Life | Lex Fridman Podcast #486

This is a transcript of Lex Fridman Podcast #486 with Michael Levin.
Please note that the transcript is human generated, and may have errors.


Introduction

Lex Fridman
(00:00:00)
The following is a conversation with Michael Levin, his second time on the podcast. He is one of the most fascinating and brilliant biologists and scientists I’ve ever had the pleasure of speaking with. He and his labs at Tufts University study and build biological systems that help us understand the nature of intelligence, agency, memory, consciousness, and life in all of its forms here on Earth and beyond. This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Michael Levin.

Biological intelligence

Lex Fridman
(00:00:45)
You write that the central question at the heart of your work, from biological systems to computational ones, is how do embodied minds arise in the physical world, and what determines the capabilities and properties of those minds? Can you unpack that question for us and maybe begin to answer it?
Michael Levin
(00:01:04)
Well, the fundamental tension is in both the first-person, the second-person, and third-person descriptions of mind. So in third-person, we want to understand how do we recognize them, and how do we know looking out into the world what degree of agency there is, and how best to relate to the different systems that we find. And are our intuitions any good when we look at something and it looks really stupid and mechanical, versus it really looks like there’s something cognitive going on there? How do we get good at recognizing them? Then there’s the second-person, which is the control, and that’s both for engineering but also for regenerative medicine, when you want to tell the system to do something. What kind of tools are you going to use?
Michael Levin
(00:01:45)
And this is a major part of my framework, is that all of these kinds of things are operational claims. Are you going to use the tools of hardware rewiring, of control theory and cybernetics, of behavior science, of psychoanalysis and love and friendship? Like, what are the interaction protocols that you bring? And then in first-person, it’s this notion of having an inner perspective and being a system that has valence and cares about the outcome of things. Makes decisions and has memories and tells a story about itself and the outside world. And how can all of that exist and still be consistent with the laws of physics and chemistry and various other things that we see around us?
Michael Levin
(00:02:20)
So that I find to be maybe the most interesting and the most important mystery for all of us, both on the science and also on the personal level. So that’s what I’m interested in.
Lex Fridman
(00:02:30)
So your work is focused on starting at the physics, going all the way to friendship and love and psychoanalysis.
Michael Levin
(00:02:37)
Yeah, although actually I would turn that upside down. I think that pyramid is backwards, and I think it’s behavior science at the bottom. I think it’s behavior science all the way. I think in certain ways, even math is the behavior of a certain kind of being that lives in a latent space, and physics is what we call systems that at least look to be amenable to a very simple, low-agency kind of model, and so on. But that’s what I’m interested in, is understanding that and developing applications.
Michael Levin
(00:03:05)
Because it’s very important to me that what we do is transition deep ideas and philosophy into actual practical applications that not only make it clear whether we’re making any progress or not, but also allow us to relieve suffering and make life better for all sentient beings, and enable us and others to reach their full potential. So these are very practical things, I think.
Lex Fridman
(00:03:28)
Behavioral science, I suppose, is more subjective, and mathematics and physics is more objective? Would that be the clear difference?
Michael Levin
(00:03:35)
The idea basically is that where something is on that spectrum, and I’ve called it the spectrum of persuadability. You could call it the spectrum of intelligence or agency or something like that. I like the notion of the spectrum of persuadability, because it’s an engineering approach. It means that these are not things you can decide or have feelings about from a philosophical armchair. You have to make a hypothesis about which tools, which interaction protocols you’re going to bring to a given system, and then we all get to find out how that worked out for you, right? So you could be wrong in many ways, in both directions. You can guess too high or too low, or wrong in various ways, and then we can all find out how that’s working out.
Michael Levin
(00:04:14)
And so I do think that the behavior of certain objects is well-described by specific formal rules, and we call those things the subject of mathematics. And then there are some other things whose behavior really requires the kinds of tools that we use in behavioral cognitive neuroscience, and those are other kinds of minds that we think we study in biology or in psychology or other sciences.
Lex Fridman
(00:04:39)
Why are you using the term persuadability? Who are you persuading, and of what?
Michael Levin
(00:04:43)
Well-
Lex Fridman
(00:04:44)
In this context.
Michael Levin
(00:04:45)
Yeah, the beginning of my work is very much in regenerative medicine, in bioengineering, things like that. So for those kinds of systems, the question is always, how do you get the system to do what you want it to do? So there are cells, there are molecular networks, there are materials, there are organs and tissues and synthetic beings and biobots and whatever. And so the idea is, if I want your cells to regrow a limb, for example, if you’re injured and I want your cells to regrow a limb, I have many options. Some of those options are, I’m going to micromanage all of the molecular events that have to happen, right? And there’s an incredible number of those. Or maybe I just have to micromanage the cells and the stem cell kinds of signaling factors.
Michael Levin
(00:05:28)
Or maybe actually I can give the cells a very high-level prompt that says, “You really should build a limb,” and convince them to do it, right? And so which of those is possible? I mean, clearly people have a lot of intuitions about that. If you ask standard people in regenerative medicine and molecular biology, they’re going to say, “Well, that convincing thing is crazy. What we really should be doing is talking to the cells, or better yet, the molecular networks.” And in fact, all the excitement of the biological sciences today is at single molecule approaches and big data and genomics and all of that.
Michael Levin
(00:06:00)
The assumption is that going down is where the action’s going to be, going down in scale. And I think that’s wrong. But the thing that we can say for sure is that you can’t guess that. You have to do experiments and you have to see, because you don’t know where any given system is on that spectrum of persuadability. And it turns out that every time we look and we take tools from behavioral science, so learning different kinds of training, different kinds of models that are used in active inference and surprise minimization and perceptual multi-stability and visual illusions and all these kinds of interesting things, you know, stress perception and active memory reconstruction, all these interesting things.
Michael Levin
(00:06:41)
When we apply them outside the brain to other kinds of living systems, we find novel discoveries and novel capabilities, actually being able to get the material to do new things that nobody had ever found before. And precisely because I think that people didn’t look at it from those perspectives, they assumed that it was a low-level kind of thing. So when I say persuadability, I mean different types of approaches, right? And we all know if you want to persuade your wind-up clock to do something, you’re not going to argue with it or make it feel guilty or anything. You’re going to have to get in there with a wrench and you’re going to have to, you know, tune it up and do whatever.
Michael Levin
(00:07:15)
If you want to do that same thing to a cell or a thermostat or an animal or a human, you’re going to be using other sets of tools that we’ve given other names to. And so that’s… Now, of course, that spectrum, the important thing is that as you get to the right of that spectrum, where the agency of the system goes up, it is no longer just about persuading it to do things. It’s a bidirectional relationship, what Richard Watson would call a mutual vulnerable knowing. So the idea is that on the right side of that spectrum, when systems reach the higher levels of agency, the idea is that you are willing to let that system persuade you of things as well. You know, in molecular biology, you do things, hopefully the system does what you want it to do, but you haven’t changed.
Michael Levin
(00:07:53)
You’re still exactly the way you came in. But on the right side of that spectrum, if you’re having interactions with even cells, but certainly, you know, dogs, other animals, maybe other creatures soon, you’re not the same at the end of that interaction as you were going in. It’s a mutual bidirectional relationship. So it’s not just you persuading something else, it’s not you pushing things. It’s a mutual bidirectional set of persuasions, whether those are purely intellectual or of other kinds.
Lex Fridman
(00:08:20)
So in order to be effective at persuading an intelligent being, you yourself have to be persuadable. So the closer in intelligence you are to the thing you’re trying to persuade, the more persuadable you have to become, hence the mutual vulnerable knowing. What a term.
Michael Levin
(00:08:37)
Yeah. Richard, you should talk to Richard as well. He’s an amazing guy and he’s got some very interesting ideas about the intersection of cognition and evolution. But I think what you bring up is very important because there has to be a kind of impedance match between what you’re looking for and the tools that you’re using. I think the reason physics always sees mechanism and not minds is that physics uses low-agency tools. You’ve got voltmeters and rulers and things like this. And if you use those tools as your interface, all you’re ever going to see is mechanisms and those kinds of things. If you want to see minds, you have to use a mind, right? You have to have, there has to be some degree of resonance between your interface and the thing you’re hoping to find.

Living vs non-living organisms

Lex Fridman
(00:09:18)
You said this about physics before. Can you just linger on that and expand on it, what you mean, why physics is not enough to understand life, to understand mind, to understand intelligence? You make a lot of controversial statements with your work. That’s one of them because there’s a lot of physicists that believe they can understand life, the emergence of life, the origin of life, the origin of intelligence using the tools of physics.
Michael Levin
(00:09:41)
Yeah.
Lex Fridman
(00:09:41)
In fact, all the other tools are a distraction to those folks. If you want to understand fundamentally anything, you have to start at physics to them. And you’re saying, “No, physics is not enough.”
Michael Levin
(00:09:52)
Here’s the issue. Everything here hangs on what it means to understand. For me, understand doesn’t just mean have some sort of pleasing model that seems to capture some important aspect of what’s going on. It also means that you have to be generative and creative in terms of capabilities. And so for me, that means if I tell you this is what I think about cognition in cells and tissues, it means, for example, that I think we’re going to be able to take those ideas and use them to produce new regenerative medicine that actually helps people in various ways, right? It’s just an example.
Michael Levin
(00:10:26)
So suppose you think, as a physicist, you’re going to have a complete understanding of what’s going on from that perspective of fields and particles and, you know, who knows what else is at the bottom there. Does that mean then that when somebody is missing a finger or has a psychological problem or has these other high-level issues, that you have something for them, that you’re going to be able to do something? Because my claim is that you’re not going to. And even if you have some theory of physics that is completely compatible with everything that’s going on, it’s not enough. It’s not specific enough to enable you to solve the problems you need to solve.
Michael Levin
(00:11:04)
In the end, when you need to solve those problems, the person you’re going to go to is not a physicist. It’s going to be either a biologist or a psychiatrist, or who knows, but it’s not going to be a physicist. And the simple example is this: let’s say someone comes in here and tells you a beautiful mathematical proof, okay? It’s just really, you know, deep and beautiful. And there’s a physicist nearby, and he says, “Well, I know exactly what happened. There were some air particles that moved from that guy’s mouth to your ear. I see what goes on.
Michael Levin
(00:11:32)
It moved the cilia in your ear and the electrical signals went up to your brain.” I mean, we have a complete accounting of what happened, done and done. But if you want to understand what’s the more important aspect of that interaction, it’s not going to be found in the physics department. It’s going to be found in the math department. So that’s my only claim is that physics is an amazing lens with which to view the world, but you’re capturing certain things, and if you want to stretch to sort of encompass these other things, we just don’t call that physics anymore, right? We call that something else.
Lex Fridman
(00:12:03)
Okay. But you’re kind of speaking about super complex organisms. Can we go to the simplest possible thing where you first take a step over the line, the Cartesian cut as you’ve called it, from the non-mind to mind, from the non-living to living? The simplest possible thing, isn’t that in the realm of physics to understand? How do we understand that first step where you’re like, that thing is no mind, probably non-living, and here’s a living thing that has a mind. That line. I think that’s a really interesting line. Maybe you can speak to the line as well, and can physics help us understand it?
Michael Levin
(00:12:43)
Yeah, let’s talk about it. Well, first of all, of course it can. I mean, it can help, meaning that I’m not saying physics is not helpful. Of course it’s helpful. It’s a very important lens on one slice of what’s going on in any of these systems. But I think the most important thing I can say about that question is I don’t believe in any such line. I don’t believe any of that exists. I think there is a continuum. I think we as humans like to demarcate areas on that continuum and give them names because it makes life easier, and then we have a lot of battles over, you know, so-called category errors when people transgress those categories.
Michael Levin
(00:13:18)
I think most of those categories may have done some good service at the beginning, when the scientific method was getting started and so on. At this point, I think they mostly hold back science. Many, many categories that we can talk about are at this point very harmful to progress, because what those categories do is prevent you from porting tools. If you think that living things are fundamentally different from non-living things, or if you think that cognitive things are these advanced brainy things that are very different from other kinds of systems, what you’re not going to do is take the tools that are appropriate to these kinds of cognitive systems, right?
Michael Levin
(00:13:55)
So the tools that have been developed in behavioral science and so on, you’re never going to try them in other contexts because you’ve already decided that there’s a categorical difference, that it would be a categorical error to apply them. And people say this to me all the time, that you’re making a category error, as if these categories were given to us, you know, from on high, and we have to obey them forevermore. The categories should change with the science. So yeah, I don’t believe in any such line, and I think a physics story is very often a useful part of the story, but for most interesting things, it’s not the entire story.

Origin of life

Lex Fridman
(00:14:30)
Okay. So if there’s no line, is it still useful to talk about things like the origin of life? That’s one of the big open mysteries before us as a human civilization, as scientifically minded curious homo sapiens. How did this whole thing start? Are you saying there is no start? Is there a point where you could say that invention right there was the start of it all on Earth?
Michael Levin
(00:15:01)
My suggestion is that there’s something much better to do than trying to define any kind of line, okay? Because inevitably… we play this game all the time when I make my continuum claim. People try to come up with counterexamples: “Okay, well, what about this? What about this?” And I haven’t found one yet that really shoots it down, where you can’t zoom in and say, “Yeah, okay, but right before then this happened, and if we really look close, here’s a bunch of steps in between,” right? Pretty much everything ends up being a continuum. But here’s what I think is much more interesting than trying to draw that line.
Michael Levin
(00:15:34)
I think what’s really more useful is trying to understand the transformation process. What is it that happened to scale up? And I’ll give you a really dumb example. And we always get into this because people often really don’t like this continuum view: the word adult, right?
Michael Levin
(00:15:51)
Everybody is going to say, “Look, I know what a baby is. I know what an adult is. You’re crazy to say that there’s no difference.” I’m not saying there’s no difference. What I’m saying is the word adult is really helpful in court, because you just need to move things along, and so we’ve decided that if you’re 18, you’re an adult. However, what it hides, what it completely conceals, is the fact that, first of all, nothing special happens on your 18th birthday, right? Second, if you actually look at the data, the car rental companies have a much better estimate, because they actually look at the accident statistics, and they’ll say about 25 is really what you’re looking for, right?
Michael Levin
(00:16:27)
So theirs is a little better. It’s less arbitrary. But in either case what it’s hiding is the fact that we do not have a good story of what happened from the time that you were an egg to the time that you’re the supposed adult and what is the scaling of personal responsibility, decision-making, judgment. These are deep fundamental questions. Nobody wants to get into that every time somebody, you know, has a traffic ticket. So we’ve just decided that there’s this adult idea. And of course it does come up in court because then somebody has a brain tumor or somebody’s eaten too many Twinkies or something has happened. You say, “Look, that wasn’t me.
Michael Levin
(00:17:01)
Whoever did that, I was on drugs.” “Well, why’d you take the drugs?” “Well, that was, you know, that was yesterday. Me today, this is I’m…” Right? So we get into these very deep questions that are completely glossed over by this idea of an adult. So I think once you start scratching the surface, most of these categories are like that. They’re convenient and they’re good. You know, I get into this with neurons all the time. I’ll ask people, “What’s a neuron? Like, what’s really a neuron?” And yes, if you’re in neurobiology 101, of course you just say like, “These are what neurons look like.
Michael Levin
(00:17:30)
Let’s just study the neuroanatomy and we’re done.” But if you really want to understand what’s going on, well, neurons develop from other types of cells, and that was a slow and gradual process, and most of the cells in your body do the things that neurons do. So what really is a neuron, right? Once you start scratching at this, this happens. And I have some things coming out of our lab and others that I think are very interesting about the origin of life. But I don’t think it’s about finding that one moment, like, this is it. Yeah, there will be innovations, right? There are innovations that allow you to scale in an amazing way, for sure. And there are lots of people who study those, right?
Michael Levin
(00:18:06)
So things like thermodynamic, kind of metabolic things and all kinds of architectures and so on. But I don’t think it’s about finding a line. I think it’s about finding a scaling process.

The search for alien life (on Earth)

Lex Fridman
(00:18:16)
The scaling process, yes, but then there is more rapid scaling and there is slower scaling. So innovation, invention, I think, is useful to understand so you can predict how likely it is on other planets, for example, or to be able to describe the likelihood of these kinds of phenomena happening in certain kinds of environments. Specifically, in answering how many alien civilizations there are.
Lex Fridman
(00:18:44)
It’s useful. But it is also useful on a scientific level to have categories, not just ’cause it makes us feel good and fuzzy inside, but because it makes conversation possible and productive, I think. If everything is a spectrum, it becomes difficult to make concrete statements. Like, we even use the terms biology and physics. Those are categories. Technically, it’s all the same thing, really. Fundamentally, it’s all the same. There’s no difference between biology and physics. But it’s a useful category. If you go to the physics department and the biology department, those people are different in some categorical way. So somehow, I don’t know which is the chicken and which is the egg, but the categories…
Lex Fridman
(00:19:28)
Maybe the categories create themselves because of the way we think about them and use them in language, but it does seem useful.
Michael Levin
(00:19:35)
Let me make the opposite argument. They’re absolutely useful. They’re useful specifically when you want to gloss over certain things. The categories are exactly useful when there’s a whole bunch of stuff to set aside. And this is what’s important about science: the art of being able to say something without first having to say everything, right?
Michael Levin
(00:19:50)
which would make it impossible. So categories are great when you want to say, “Look, I know there’s a bunch of stuff hidden here. I’m going to ignore all that, and let’s get on with this particular thing.” And all of that is great as long as you don’t lose track of the stuff that you glossed over. And that’s what I’m afraid is happening in a lot of different ways. And look, I’m very interested in life beyond Earth and all of these kinds of things, so we should also talk about what I call SUTI, S-U-T-I, the search for unconventional terrestrial intelligences. I think we’ve got much bigger issues than actually recognizing aliens off Earth, but I’ll make this claim.
Michael Levin
(00:20:27)
I think the categorical stuff is actually hurting that search. Because if we try to define categories with the kinds of criteria that we’ve gotten used to, we are going to be very poorly set up to recognize life in novel embodiments. I think we have a kind of mind blindness. I think this is really key. To me, the cognitive spectrum is much more interesting than the spectrum of life. I think really what we’re talking about is the spectrum of cognition. And it is… Well, I know it’s weird as a biologist to say, I don’t think life is all that interesting a category. I think the categories of different types of minds, I think, is extremely interesting.
Michael Levin
(00:21:06)
And to the extent that we think our categories are complete and are cutting nature at its joints, we are going to be very poorly placed to recognize novel systems. So for example, a lot of people will say, “Well, this is intelligent and this isn’t,” right? It’s a binary thing, and that’s useful occasionally for some things. Instead of that, let’s admit that we have a spectrum. But instead of just saying, “Oh, look, everything’s intelligent,” because if you do that, you’re right, you can’t do anything after that, what I’d like to say is, no, you have to be very specific as to what kind and how much. In other words, what problem spaces are they operating in?
Michael Levin
(00:21:43)
What kind of mind does it have? What kind of cognitive capacities does it have? You have to actually be much more specific. And we can even name, right? That’s fine. We can name different types of… I mean, this is doing predictive processing. This can’t do that, but it can form memories. What kind? Well, habituation and sensitization, but not associative conditioning. It’s fine to have categories for specific capabilities, but it actually makes for much more rigorous discussions because it makes you say, what is it that you are claiming this thing does? And it works in both directions. So, some people will say, “Well, that’s a cell. That can’t be intelligent.” And I’ll say, “Well, let’s be very specific.
Michael Levin
(00:22:19)
Here are some claims about… here’s some problem solving that it’s doing. Tell me why that doesn’t… you know, why doesn’t that match?” Or in the opposite direction, somebody comes to me and says, “You’re right, you’re right. You know, the whole solar system, man. It’s just this amazing…” I’m like, “Whoa, okay. Well, what is it doing? Tell me what tools of cognitive and behavioral science you are using to reach that conclusion,” right? And so I think it’s actually much more productive to take this operational stance and say, “Tell me what protocols you think you can deploy with this thing that would lead you to use these terms.”
Lex Fridman
(00:22:49)
To have a bit of a meta conversation about the conversation, I should say that part of what we two intelligent creatures are doing in this persuadability exercise is me playing devil’s advocate every once in a while. And you did the same, which is kind of interesting: taking the opposite view to see what comes out. Because you don’t know the result of the argument until you have the argument, and it seems productive to just take the other side of the argument.
Michael Levin
(00:23:14)
For sure. It’s a very important thinking aid to, first of all, you know, what they call steel manning, right? To try to make the strongest possible case for the other side and to ask yourself, “Okay, what are all the places that I am sort of glossing over because I don’t know exactly what to say? And where are all the holes in the argument, and what would a, you know, a really good critique really look like?” Yeah.
Lex Fridman
(00:23:38)
Sorry to go back there just to linger on the term, because it’s so interesting, persuadability. Did I understand correctly that you mean that it’s kind of synonymous with intelligence? So it’s an engineering-centric view of an intelligence system. Because if it’s persuadable, you’re more focused on, how can I steer the goals of the system, the behaviors of the system? Meaning an intelligence system maybe is a goal-oriented, goal-driven system with agency. And when you call it persuadable, you’re thinking more like, “Okay, here’s an intelligence system that I’m interacting with that I would like to get to accomplish certain things.” But fundamentally, are they synonymous or correlated, persuadability and intelligence?
Michael Levin
(00:24:28)
They’re definitely correlated. But I want to preface this with one thing. When I say it’s an engineering perspective, I don’t mean that the standard tools we use in engineering, this idea of enforced control and steering, are how we should view all of the world. I’m not saying that at all, and I want to be very clear on that, because people do email me and say, “Bah, this engineering thing. You’re going to drain the life and the majesty out of these high-end human conversations.” My whole point is not that at all. It’s that, of course, at the right side of the spectrum, it doesn’t look like engineering anymore, right?
Michael Levin
(00:25:04)
It looks like friendship and love and psychoanalysis and all these other tools that we have. But here’s what I want to do. I want to be very specific with my colleagues in regenerative medicine and everything. Just imagine if I went to a bioengineering department or a genetics department and I started talking about high-level, you know, cognition and psychoanalysis, right? They don’t want to hear that. So I focus on the engineering approach…
Michael Levin
(00:25:28)
…because I want to say, look, this is not a philosophical problem. This is not a linguistics problem. We are not trying to define terms in different ways to make anybody feel fuzzy. What I’m telling you is, if you want to reach certain capabilities, if you want to reprogram cancer, if you want to regrow new organs, you want to defeat aging, you want to do these specific things, you are leaving too much on the table by making an unwarranted assumption that the low-level tools that we have, so the rules of chemistry and the kind of molecular rewiring, are going to be sufficient to get to where you want to go.
Michael Levin
(00:25:59)
It’s an assumption only, and it’s an unwarranted assumption. And actually, we’ve done experiments now, so not philosophy but real experiments, showing that if you take these other tools, you can in fact persuade the system in ways that have never been done before. And we can unpack all that. But it is absolutely correlated with intelligence, so let me flesh that out a little bit. What I think is scaling in all of these things, right, because I keep talking about the scaling, so what is it that’s scaling? What I think is scaling is something I call the cognitive light cone, and the cognitive light cone is the size of the biggest goal state that you can pursue. This doesn’t mean how far your senses reach.
Michael Levin
(00:26:39)
This doesn’t mean how far you can affect things. The James Webb Space Telescope has enormous sensory reach, but that doesn’t mean that’s the size of its cognitive light cone. The size of the cognitive light cone is the scale of the biggest goal you can actively pursue. And I do think it’s a useful concept to enable us to think about very different types of agents of different composition, different provenance, you know, engineered, evolved, hybrid, whatever, all in the same framework. And by the way, the reason I use “light cone” is that it has this idea from physics that you’re putting space and time in the same diagram, which I like here.
Michael Levin
(00:27:11)
So if you tell me that all your goals revolve around maximizing the amount of sugar in this, in this, you know, 10, 20 micron radius of space-time, and that you have, you know, 20 minutes memory going back and maybe five minutes predictive capacity going forward, that tiny little cognitive light cone, I’m going to say, probably a bacterium. And if you say to me that, “Well, I’m able to care about several hundred yards sort of scale, I could never care about what happens three weeks from now, two towns over, just impossible,” I would say you might be a dog. And if you say to me, “Okay, I care about really what happens, you know, the financial markets on Earth, you know, long after I’m dead, and this and that,” I’d say you’re probably a human.
Michael Levin
(00:27:56)
And if you say to me, “I care, in the linear range, and I’m not just saying it, I can actively care in the linear range about all the living beings on this planet,” I’m going to say, “Well, you’re not a standard human. You must be something else,” because standard humans today, I don’t think, can do that. You must be some kind of a bodhisattva or some other thing that has these massive cognitive light cones. So I think what’s scaling from zero, and I do think it goes all the way down, I think we can talk about even particles doing something like this, what scales is the size of the cognitive light cone. And so now, here, I’ll try for a definition of life, for whatever it’s worth.
Michael Levin
(00:28:33)
I’ve spent no time trying to make that stick, but if we wanted to… I think we call things alive to the extent that the cognitive light cone of that thing is bigger than that of its parts. So in other words, rocks aren’t very exciting, because the things they know how to do are the things that their parts already know how to do, which is follow gradients and things like that. But living things are amazing at aligning their competent parts so that the collective has a larger cognitive light cone than the parts. I’ll give you a very simple example that comes up in biology and in our cancer program all the time. Individual cells have little tiny cognitive light cones. What are their goals?
Michael Levin
(00:29:15)
Well, they’re trying to manage pH, metabolic state, some other things. There are some goals in transcriptional space, some goals in metabolic space, some goals in physiological state space, but they’re generally very tiny goals. One thing evolution did was to provide a kind of cognitive glue, which we can also talk about, that ties them together into a multicellular system, and those systems have grandiose goals. They’re making limbs, and if you’re a salamander limb and you chop it off, they will regrow that limb with the right number of fingers. Then they’ll stop when it’s done; the goal has been achieved. No individual cell knows what a finger is or how many fingers you’re supposed to have, but the collective absolutely does.
Michael Levin
(00:29:54)
And then there’s the process of growing that cognitive light cone from a single cell to something much bigger, and of course the failure mode of that process, which is cancer, right? When cells physiologically disconnect from the other cells, their cognitive light cone shrinks. The boundary between self and world, which is what the cognitive light cone defines, shrinks. Now they’re back to being an amoeba. As far as they’re concerned, the rest of the body is just external environment, and they do what amoebas do. They go where life is good. They reproduce as much as they can, right? So that cognitive light cone, that is the thing that I’m talking about that scales. And so when we are looking for life, I don’t think we’re looking for specific materials.
Michael Levin
(00:30:30)
I don’t think we’re looking for specific metabolic states. I think we’re looking for scales of cognitive light cone. We’re looking for alignment of parts towards bigger goals in spaces that the parts could not comprehend.
Lex Fridman
(00:30:42)
And so cognitive light cone, just to make clear, is about goals that you can actively pursue now. You said linear, like we’re within reach immediately.
Michael Levin
(00:30:54)
No, I didn’t… sorry, I didn’t mean that. First of all, the goal necessarily is often removed in time. So, in other words, when you’re pursuing a goal, it means that you have a separation between current state and target state, at minimum. Your thermostat, right? Let’s just think about that. There’s a separation in time because the thing you’re trying to make happen, so that the temperature goes to a certain level, is not true right now. And all your actions are going to be around reducing that error, right? That basic homeostatic loop is all about closing that gap. When I said linear range, this is what I meant.
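The thermostat loop described here, the minimal example of pursuing a goal state separated from the current state in time, can be sketched as a simple error-reducing controller. The function name and constants below are illustrative, not anything from the conversation:

```python
def thermostat_step(current_temp, target_temp, heater_power=0.5):
    """One step of a basic homeostatic loop: act to close the
    gap (the error) between current state and target (goal) state."""
    error = target_temp - current_temp
    if error > 0:
        return current_temp + min(error, heater_power)  # heat toward target
    return current_temp  # at or above target: do nothing

# The goal state is removed in time: it takes many steps to reach.
temp = 15.0
steps = 0
while abs(20.0 - temp) > 1e-9:
    temp = thermostat_step(temp, 20.0)
    steps += 1
print(steps, temp)
```

Everything the controller does is organized around reducing `error`; when the error reaches zero, activity stops, which is the sense in which the goal "has been achieved."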
Michael Levin
(00:31:24)
If I say to you, “This terrible thing happened to, you know, 10 people,” you have some degree of activation about it. And then I say, “No, no, no, actually it was 10,000 people.” You’re not a thousand times more activated about it. You’re somewhat more activated, but it’s not a thousand. And if I say, “Oh my God, it was actually 10 million people,” you’re not a million times more activated. You don’t have that capacity in the linear range. If you think about that curve, we reach a saturation point.
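The saturation just described can be sketched with any bounded response curve; an exponential-saturation function is one illustrative choice. The shape and constants here are assumptions for the sketch, not a model proposed in the conversation:

```python
import math

def activation(victims, capacity=100.0):
    """Illustrative saturating response: roughly linear for small
    inputs, flattening out far below proportionality for large ones."""
    return capacity * (1 - math.exp(-victims / capacity))

# 10 -> 10,000 victims is a 1000x increase in input...
low, high = activation(10), activation(10_000)
# ...but the response grows only about 10x before saturating.
print(round(low, 1), round(high, 1))
```

Growing the “radius of compassion” corresponds, in this toy picture, to raising `capacity` so the linear region of the curve extends over far more beings.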
Michael Levin
(00:31:54)
I have some amazing colleagues in the Buddhist community with whom we’ve written some papers about this. The radius of compassion is like, can you grow your cognitive system to the point that, yeah, it really isn’t just your family group, it really isn’t just the hundred people you know in your, in your, you know, circle? Can you grow your cognitive light cone to the point where, no, no, we care about the whole, whether it’s all of humanity or the whole ecosystem, or the whole, whatever? Can you actually care about that the exact same way that we now care about a much smaller set of people? That’s what I mean by linear range.
Lex Fridman
(00:32:25)
But this is separated by time like a thermostat. But a bacterium… I mean, if you zoom out far enough, a bacterium could be formulated to have a goal state of creating human civilization, because bacteria have a role to play in the whole history of Earth. So, you know, if you anthropomorphize the goals of a bacterium enough, it has a concrete role to play in the history of the evolution of human civilization. So, when you define a cognitive light cone, are you looking directly at short-term behavior?
Michael Levin
(00:33:08)
Well, no. How do you know what the cognitive light cone of something is? Because, as you’ve said, it could be almost anything. The key is you have to do experiments, and the way you do experiments is you put barriers… you have to do interventional experiments. You have to put barriers between it and its goal, and you have to ask what happens. And intelligence is the degree of ingenuity that it has in overcoming barriers between it and its goal. Now, this is, I think, a totally doable but impractical and very expensive experiment: you could imagine setting up a scenario where the bacteria were blocked from becoming more complex, and you can ask whether they would try to find ways around it, or whether their goals are actually metabolic.
Michael Levin
(00:33:51)
And as long as those goals are met, they’re not going to actually get around your barrier. This business of putting barriers between things and their goals is actually extremely powerful because we’ve deployed it in all kinds of… And I’m sure we’ll get to this later, but we’ve deployed it in all kinds of weird systems that you wouldn’t think are goal-driven systems. And what it allows us to do is to get beyond just what you called anthropomorphizing claims of, say, you know, saying, “Oh, yeah, I think this thing is trying to do this or that.” The question is, well, let’s do the experiment. And one other thing I want to say about anthropomorphizing is people say this to me all the time.
Michael Levin
(00:34:27)
I don’t think that exists. And I’ll tell you why: I think it’s like heresy, or like other terms that aren’t really a thing. Because if you unpack it, here’s what anthropomorphism means: humans have a certain magic, and you’re making a category error by attributing that magic somewhere else. My point is we have the same magic that everything has. We have a couple of interesting things besides: the cognitive light cone and some other stuff. And it isn’t that you have to keep the humans separate because there’s some bright line. All I’m arguing for is the scientific method, really. That’s really all this is.
Michael Levin
(00:35:11)
All I’m saying is you can’t just make pronouncements such as “humans are this” and leave it at that. You have to do experiments. After you’ve done your experiments, you can say either, “I’ve done it, and look at that: that thing actually can predict the future for the next, you know, 12 minutes. Amazing.” Or you say, “You know what? I’ve tried all the things in the behaviorist handbook, and they just don’t help me with this. It’s a very low level of intelligence.” Fine, right? Done. So that’s really all I’m arguing for, an empirical approach. And then things like anthropomorphism go away. It’s just a matter of, have you done the experiment, and what did you find?
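The “put a barrier between the system and its goal” protocol can be sketched as a toy: a system counts as goal-directed to the extent it can route around an obstacle placed between it and its goal. This is an illustrative formalization on a grid, with hypothetical names, not Levin’s actual experimental setup:

```python
from collections import deque

def reaches_goal(grid, start, goal):
    """Breadth-first search: can the agent find any route to its goal?
    '#' cells are the experimenter's barrier."""
    rows, cols = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

# A barrier with a way around it: a goal-directed system should detour.
open_barrier = ["....",
                ".##.",
                "....",
                "...."]
sealed = ["....",
          "####",
          "....",
          "...."]
print(reaches_goal(open_barrier, (0, 0), (3, 3)))  # detours around the wall
print(reaches_goal(sealed, (0, 0), (3, 3)))        # goal truly blocked
```

The interventional point is the contrast between the two grids: observing the detour in the first case, and the failure in the second, is what licenses a claim about the system’s goal, rather than an anthropomorphizing guess.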
Lex Fridman
(00:35:45)
And that’s actually one of the things you’re saying: that if you remove the categorization of things, you can use the tools of one discipline on everything.
Michael Levin
(00:35:56)
You could try.
Lex Fridman
(00:35:57)
To try and then see. That’s the underpinning of the criticism of anthropomorphization, because what is that? Like, psychoanalysis of another human could technically be applied to robots, to AI systems, to more primitive biological systems, and so on. Try.
Michael Levin
(00:36:18)
Yeah. We’ve used everything from basic habituation conditioning all the way through anxiolytics, hallucinogens, all kinds of cognitive modification, on a range of things that you wouldn’t believe. And by the way, I’m not the first person to come up with this. There was a guy named Bose well over 100 years ago who was studying how anesthesia affected animals and animal cells, drawing specific curves around electrical excitability. He then went and did it with plants and saw some very similar phenomena. And being the genius that he was, he then said, “Well, how do I know when to stop?” There’s no… you know, everybody thinks we should have stopped long before plants, because people made fun of him for that.
Michael Levin
(00:36:59)
And he’s like, “Yeah, but the science doesn’t tell us where to stop. The tool is working, let’s keep going.” And he showed interesting phenomena on materials, metals, and other kinds of materials, right? And so… …The interesting thing is that there is no generic rule that tells you when you need to stop. We make those up. Those are completely made up. You have to just do the science and find out.
Lex Fridman
(00:37:23)
Yeah, we’ll probably get to it. You’ve been doing recent work on looking at computational systems, even trivial ones like sorting algorithms, and analyzing them in a behavioral kind of way, to see if there are minds inside those sorting algorithms. And, of course, let me ask a pothead question here: you could start to do things like trying to do psychedelics with a sorting algorithm. What does that even look like? It looks like a ridiculous question that’ll get you fired from most academic departments, but maybe, if you take it seriously, you could try and see if it applies.
Lex Fridman
(00:38:00)
If a thing could be shown to have some kind of cognitive complexity, some kind of mind, why not apply to it the same kind of analysis and the same kind of tools, like psychedelics, that you would to a human mind that’s a complex human mind? It at least might be a productive question to ask. You’ve seen spiders on psychedelics, more primitive biological organisms on psychedelics. Why not try to see what an algorithm does on psychedelics? Anyway.
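The behavioral analysis of a sorting algorithm mentioned above can be sketched as a toy perturbation experiment: freeze one position so it refuses to participate, then measure how well the collective still sorts. The function names and the “frozen position” perturbation are illustrative inventions for this sketch, not the published protocol:

```python
def inversions(xs):
    """Count out-of-order pairs: a crude behavioral measure of sortedness."""
    return sum(1 for i in range(len(xs)) for j in range(i + 1, len(xs))
               if xs[i] > xs[j])

def bubble_sort_with_frozen(xs, frozen):
    """Bubble sort in which some positions refuse to swap, a toy
    'defective element' perturbation applied to the algorithm."""
    xs = list(xs)
    for _ in range(len(xs)):
        for i in range(len(xs) - 1):
            if i in frozen or i + 1 in frozen:
                continue  # the frozen position blocks this swap
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

data = [5, 3, 8, 1, 9, 2, 7, 4]
perturbed = bubble_sort_with_frozen(data, frozen={3})
# The collective still reduces disorder despite the blocked position.
print(inversions(data), inversions(perturbed))
```

Comparing the inversion counts before and after is the “behavioral” readout: the algorithm still moves toward its goal state under perturbation, which is the kind of robustness such probes are designed to expose.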
Michael Levin
(00:38:33)
Well, yeah. Because, you see, the thing to remember is we don’t have a magic sense or really good intuition for what the mapping is between the embodiment of something and the degree of intelligence it has. We think we do, because we have an N of one example on Earth, and we kind of know what to expect from cells to snakes to, you know, primates, but we really don’t. We’ll get into more of the stuff on the Platonic space, but our intuitions around that stuff are so bad that to think we know enough not to try things at this point is, I think, really short-sighted.
Lex Fridman
(00:39:09)
Before we talk about the Platonic space, let’s lay out some foundations. I think one useful one comes from the paper “Technological Approach to Mind Everywhere: An Experimentally Grounded Framework for Understanding Diverse Bodies and Minds.” Could you tell me about this framework, and maybe about figure one from this paper, which has a few components? One is the tiers of biological cognition, which go from group, to whole organism, down to tissue/organ, down to neural network, down to cytoskeleton, down to genetic network. And then there are layers of biological systems, from ecosystem down to swarm, down to organism, tissue, and finally cell. So can you explain this figure, and can you explain the so-called TAME framework?
Michael Levin
(00:40:02)
So this is the version 1.0, and there’s a kind of update, a 2.0, that I’m writing at the moment, trying to formalize in a careful way all the things that we’ve been talking about here, and in particular, this notion of having to do experiments to figure out where any given system is on a continuum, and we can… let’s just start with figure two maybe for a second, and then we’ll come back to figure one. And first, just to unpack the acronym, I like the idea that it spells out TAME, because the central focus of this is interactions and how do you interact with a system to have a productive interaction with it, and the idea is that cognitive claims are really protocol claims.
Michael Levin
(00:40:42)
When you tell me that something has some degree of intelligence, what you’re really saying is, “This is the set of tools I’m going to deploy, and we can all find out how that worked out for you.” And so technological, because I wanted to be clear with my colleagues that this was not a project in just philosophy. This had very specific, empirical implications that are going to play out in engineering and regenerative medicine and so on. Technological approach to mind everywhere, this idea that we don’t know yet where different kinds of minds are to be found and we have to empirically figure that out.
Michael Levin
(00:41:15)
And so what you see here in figure two is basically this idea that there is a spectrum, and I’m just showing four waypoints along that spectrum, and as you move to the right of that spectrum, a couple things happen: persuadability goes up, meaning that the systems become more reprogrammable, more plastic, more able to do different things than whatever they’re standardly doing, so you have more ability to get them to do new and interesting things. The effort needed to exert influence goes down; that is, autonomy goes up. And to the extent that you are good at convincing or motivating the system to do things, you don’t have to sweat the details as much, right? And this also has to do with what I call engineering agential materials.
Michael Levin
(00:41:51)
So when you engineer wood, metal, plastic, things like that, you are responsible for absolutely everything because the material is not going to do anything other than hopefully hold its shape. If you’re engineering active matter, or you’re engineering computational materials, or better yet, agential materials like living matter, you can do some very high-level prompting and let the system then do very complicated things that you don’t need to micromanage, and we all know that that increases when you’re starting to work with intelligent systems like animals and humans and so on. And the other thing that goes down as you get to the right is the amount of mechanism, or physics, that you need to exert the influence goes down.
Michael Levin
(00:42:31)
So if you know how your thermostat’s set point is to be set, you really don’t need to know much of anything else, right? You just need to know that it is a homeostatic system and that this is how you change the set point. You don’t need to know how the cooling and heating plant works in order to get it to do complex things.
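Michael’s thermostat point can be sketched in a few lines of code: to steer a homeostatic system, you only need to know that a set point exists and how to change it, never the mechanism underneath. This is a minimal illustrative sketch; the class and function names are mine, not anything from the conversation.

```python
# A homeostatic system: all an outsider needs is the set point.
class Thermostat:
    def __init__(self, setpoint, temp=15.0):
        self.setpoint = setpoint   # the one "knob" an outsider touches
        self.temp = temp

    def step(self):
        # Internal mechanism (the "heating and cooling plant").
        # Whoever steers the system never has to look inside here.
        error = self.setpoint - self.temp
        self.temp += 0.5 * error   # proportional correction

def steer(system, new_setpoint, steps=50):
    """Exert influence knowing only that a set point exists."""
    system.setpoint = new_setpoint
    for _ in range(steps):
        system.step()
    return system.temp

t = Thermostat(setpoint=20.0)
print(round(steer(t, 25.0), 2))  # converges to 25.0
```

Note that `steer` never reads the internals of `step`; that asymmetry, high-level influence with no mechanistic knowledge, is the whole point of the example.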
Lex Fridman
(00:42:46)
By the way, a quick pause just for people who are listening, let me describe what’s in the figure. So there’s four different systems going up the scale of persuadability. So the first system is a mechanical clock, then it’s a thermostat, then it’s a dog that gets rewards and punishments, Pavlov’s dog, and then finally a bunch of very smart-looking humans communicating with each other and arguing, persuading each other using reasons. And then there’s arrows below that showing persuadability going up as you go up these systems from the mechanical clock to a bunch of Greeks arguing, and then going down as the effort needed to exert influence, and once again, going down as mechanism knowledge needed to exert that influence.
Michael Levin
(00:43:30)
Yeah. I’ll give you an example about that, panel C here with the dog. Isn’t it amazing that humans have been training dogs and horses for thousands of years knowing zero neuroscience? Also amazing is that when I’m talking to you right now, I don’t need to worry about manipulating all of the synaptic proteins in your brain to make you understand what I’m saying and hopefully remember it. You’re going to do that all on your own. I’m giving you very thin, in terms of information content, very thin prompts, and I’m counting on you as a multi-scale agential material to take care of the chemistry underneath, all right?
Lex Fridman
(00:44:03)
So you don’t need a wrench to convince me?
Michael Levin
(00:44:05)
Correct. I don’t need a wrench, and I don’t need physics to convince you, and I don’t need to know how you work. I don’t need to understand all of the steps. What I do need to have is trust that you are a multi-scale cognitive system that already does that for yourself, and you do. This is an amazing thing. People don’t think about this enough, I think. When you wake up in the morning and you have social goals, research goals, financial goals, whatever it is that you have, in order for you to act on those goals, sodium and calcium and other ions have to cross your muscle membranes. Those incredibly abstract goal states ultimately have to make the chemistry dance in a very particular way, right? You—
Michael Levin
(00:44:42)
Our entire body is a transducer of very abstract things. And, by the way, not just our brains; our organs have anatomical goals and other things that we can talk about, because all of this plays out in regeneration and development and so on. But the scaling of all of these things, the way you regulate yourself… you don’t have to sit there and think, “Wow, I really have to push some sodiums across this membrane.” All of that happens automatically, and that’s the incredible benefit of these multi-scale materials. So what I was trying to do in this paper is a couple of things.
Michael Levin
(00:45:18)
All of these were, by the way, drawn by Jeremy Guay, who’s this amazing graphic artist who works with me. First of all, what I was trying to point out in panel A, which is the spiral, is that at every level of biological organization… like, we all know we’re sort of nested dolls of organs and tissues and cells and molecules, but what I was trying to point out is that this is not just structural. Every one of those layers is competent and is doing problem-solving in different spaces, spaces that are very hard for us to imagine. We humans, because of our own evolutionary history, are so obsessed with movement in three-dimensional space that even in AI you see this all the time.
Michael Levin
(00:45:53)
They say, “Well, this thing doesn’t have a robotic body, it’s not embodied.” Yeah, it’s not embodied by moving around in 3D space, but biology has embodiments in all kinds of spaces that are hard for us to imagine, right? So your cells and tissues are moving in high-dimensional physiological state spaces, in gene expression state spaces, in anatomical state spaces. They’re doing that perception, decision-making, action loop that we do in 3D space when we think about robots wandering around your kitchen. They’re doing those loops in these other spaces. And so the first thing I was trying to point out is that every layer of your body has its own ability to solve problems in those spaces.
Michael Levin
(00:46:31)
And then on the right, what I was saying is that this distinction between, you know, people say, “Well, there are living beings and then there are engineered machines,” and then they often follow up with all the things machines are never going to be able to do and whatever. And so what I was trying to point out here is that it is very difficult to maintain those kinds of distinctions, because life is incredibly interoperable. Life doesn’t really care if the thing it’s working with was evolved through random trial and error or was engineered with a higher degree of agency, because at every level within the cell, within the tissue, within the organism, within the collective, you can replace and substitute engineered systems with naturally-evolved systems.
Michael Levin
(00:47:12)
And that question of, you know, “Is it real? Is it biology or is it technology?” is not, I think, a useful question anymore. So I was trying to warm people up with this idea that what we’re going to do now is talk about minds in general, regardless of their history or their composition. It doesn’t matter what you’re made of. It doesn’t matter how you got here. Let’s talk about what you’re able to do and what your inner world looks like. That was the goal of that.
Lex Fridman
(00:47:34)
Is it useful, as a thought experiment, as an exercise in radical empathy, to try to put ourselves in the space of the different minds at each stage of the spiral? Like, in what state space is human civilization as a collective embodied? What does it operate in? Humans, individual organisms, operate in 3D space. That’s what we understand. But when there’s a bunch of us together… what are we doing together?
Michael Levin
(00:48:07)
It’s really hard, and you have to do experiments, which at larger scales are really difficult.
Lex Fridman
(00:48:12)
But there is such a thing?
Michael Levin
(00:48:14)
There may well be. We have to do experiments. I don’t know. Here’s an example. Somebody will say to me, “Well, you know, with your kind of panpsychist view, you probably think the weather is agential too.” And it’s like, “Well, I can’t say that; we don’t know. But have you ever tried to see if a hurricane has habituation or sensitization? Maybe. We haven’t done the experiment. It’s hard, but you could, right?” And maybe weather systems can have certain kinds of memories. I have no idea. We have to do experiments.
Michael Levin
(00:48:41)
So I don’t know what the entire human society is doing, but I’ll just give you a simple example of the kinds of tools, and we’re actively trying to build tools now to enable radically different agents to communicate. So we are doing this using AI and other tools to try and get this kind of communication going across very different spaces. I’ll just give you a very dumb example of how that might be. Imagine that you’re playing tic-tac-toe against an alien. So you’re in a room. You don’t see him. You draw the tic-tac-toe thing on the board, on the floor, and you know what you’re doing.
Michael Levin
(00:49:17)
You’re trying to make straight lines with Xs and Os, and you’re having a nice game. It’s obvious that he understands the process. Like, sometimes you win, sometimes you lose. It’s obvious. In that one little segment of activity, you guys are sharing a world. What’s happening in the other room next door? Well, let’s say the alien doesn’t know anything about geometry. He doesn’t understand straight lines. What he’s doing is he’s got a box, and it’s full of basically billiard balls, each one of which has a number on it. And all he’s doing is he’s looking through the box to find billiard balls whose numbers add up to 15. He doesn’t understand geometry at all. All he understands is arithmetic.
Michael Levin
(00:49:55)
You don’t think about arithmetic; you think in geometry. The reason you guys are playing the same game is that there’s this magic square, right? Somebody constructed a three-by-three square of the numbers one through nine where, if you pick the numbers right, every row, column, and diagonal adds up to 15. He has no idea that there’s a geometric interpretation to this. He is solving the problem that he sees, which is totally algebraic. You don’t know anything about that. But if there is an appropriate interface like this magic square, you guys can share that experience. You can have a shared experience. It doesn’t mean you start to think like him. It means that you guys are able to interact in a particular way.
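The magic square Michael describes is the classic Lo Shu square, and the equivalence he sketches can be checked directly: the eight winning lines of tic-tac-toe map exactly onto the eight triples of distinct digits 1 through 9 that sum to 15. A small sketch (the specific square and the variable names are mine):

```python
from itertools import combinations

# One standard 3x3 magic square (the Lo Shu square): every row,
# column, and diagonal sums to 15.
MAGIC = [[2, 7, 6],
         [9, 5, 1],
         [4, 3, 8]]

# The human player's world: the 8 winning lines of tic-tac-toe.
lines = [[(r, c) for c in range(3)] for r in range(3)]            # rows
lines += [[(r, c) for r in range(3)] for c in range(3)]           # columns
lines += [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]

geometric_wins = {frozenset(MAGIC[r][c] for r, c in line) for line in lines}

# The alien's world: triples of distinct numbers 1..9 that sum to 15.
arithmetic_wins = {frozenset(t) for t in combinations(range(1, 10), 3)
                   if sum(t) == 15}

# Same game, two descriptions: the win conditions coincide exactly.
print(geometric_wins == arithmetic_wins)  # True
```

Each player sees only their own description; the magic square is the interface through which the two descriptions turn out to be one game.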
Lex Fridman
(00:50:27)
Okay, so there’s a mapping between the two different ways of seeing the world that allows you to communicate with each other.
Michael Levin
(00:50:34)
Of seeing a thin slice of the world.
Lex Fridman
(00:50:36)
Thin slice of the world. How do you find that mapping? So you’re saying we’re trying to figure out ways of finding that mapping… …For different kinds of systems. What’s the process for doing that?
Michael Levin
(00:50:48)
So the process is twofold. One is to get a better understanding of what space the system is navigating, what goals it has, what level of ingenuity it has to reach those goals. For example, xenobots, right? We make xenobots or anthrobots. These are biological systems that have never existed on Earth before. We have no idea what their cognitive properties are. We’re learning. We’ve found some things. But you can’t predict that from first principles, because they’re not at all what their past history would suggest.

Creating life in the lab

Lex Fridman
(00:51:19)
Can you actually explain briefly what a xenobot is and what an anthrobot is?
Michael Levin
(00:51:24)
So one of the things that we’ve been doing is trying to create novel beings that have never been here before. The reason is that typically when you have a biological system, an animal or a plant, and you say, “Hey, why does it have certain forms of behavior, certain forms of anatomy, certain forms of physiology? Why does it have those?” The answer is always the same: well, there’s a history of evolutionary selection, a long, long history of adaptation, and there were certain environments, and this is what survived, and so that’s why it is the way it is. So what I wanted to do was break out of that mold, and basically force us as a community to dig deeper into where these things come from.
Michael Levin
(00:52:07)
And that means taking away the crutch where you just say, “Well, it’s evolutionary selection that’s why it looks like that.” So in order to do that, we have to make artificial synthetic beings now. To be clear, we are starting with living cells, so it’s not that they had no evolutionary history. The cells do. They had evolutionary history in frogs or humans or whatever. But the creatures they make and the capabilities that these creatures have were never directly selected for. And in fact, they never existed. So you can’t tell the same kind of story. And what I mean is, we can take epithelial cells off of an early frog embryo, and you don’t change the DNA. No synthetic biology circuits, no material scaffolds, no nanomaterials, no weird drugs, none of that.
Michael Levin
(00:52:46)
What we’re mostly doing is liberating them from the instructive influences of the rest of the cells that they were in in their bodies. And so when you do that, normally these cells are bullied by their neighboring cells into having a very boring life. They become a two-dimensional outer covering for the embryo, and they keep out the bacteria, and that’s that. So you might ask, “Well, what are these cells capable of when you take them away from that influence?” So when you do that, they form another little life form we call a xenobot. And it’s this self-motile little thing that has cilia covering its surface. The cilia are coordinated so they row against the water, and then the thing starts to move, and has all kinds of amazing properties.
Michael Levin
(00:53:25)
It has different gene expression, so it has its own novel transcriptome. It’s able to do things like kinematic self-replication, meaning make copies of itself from loose cells that you put in its environment. It has the ability to respond to sound, which normal embryos don’t do. It has these novel capacities. And we did that, and we said, “Look, here are some amazing features of this novel system. Let’s try to understand where they came from.” And some people said, “Well, maybe it’s a frog-specific thing,” you know? Maybe this is just something unique to frog cells. And so we said, “Okay, what’s the furthest you can get from frog embryonic cells?”
Michael Levin
(00:54:00)
How about human adult cells? And so we took cells from adult human patients who were donating tracheal epithelia from biopsies and things like that, and those cells, again, no genetic change, nothing like that, self-organized into something we call anthrobots. Again, a self-motile little creature. 9,000 different gene expressions, so about half the genome is now differentially expressed. And they have interesting abilities. For example, they can heal human neural wounds. In vitro, if you plate some neurons and you put a big scratch through them so you damage them, anthrobots will sit down and, spontaneously, without us having to teach them to do it, try to knit the neurons back across the wound.
Lex Fridman
(00:54:42)
What is this video that we’re looking at here?
Michael Levin
(00:54:44)
So this is an anthrobot. So often when I give talks about this, I show people this video, and I say, “What do you think this is?” And people will say, “Well, it looks like some primitive organism you got from the bottom of a pond somewhere.” And I’ll say, “Well, what do you think the genome would look like?” And they say, “Well, the genome would look like some primitive creature’s.” Right? But if you sequence that thing, you’ll get 100% Homo sapiens. And it doesn’t look like any stage of normal human development. It doesn’t act like any stage of human development. It has the ability to move around. It has, as I said, over 9,000 differential gene expressions. Also interestingly, it is younger than the cells that it comes from.
Michael Levin
(00:55:20)
So it actually has the ability to roll back its age, and we could talk about that and what the implications of that are. But to go back to your original question, what we’re doing with these kind of systems…
Lex Fridman
(00:55:30)
Trying to talk to it.
Michael Levin
(00:55:31)
We’re trying to talk to it. That’s exactly right. And not just to this. We’re trying to talk to molecular networks. So we found a couple years ago that gene regulatory networks, never mind the cells, but the molecular pathways inside of cells can have several different kinds of learning, including Pavlovian conditioning. And what we’re doing now is trying to talk to it. The biomedical applications are obvious. Instead of, “Hey, Siri,” you want, “Hey, liver, why do I feel like crap today?” And you want an answer.
Michael Levin
(00:55:54)
“Well, you know, your potassium levels are this and that, and I don’t feel good for these reasons.” And you should be able to talk to these things, and there should be an interface that allows us to communicate, right? And I think AI is going to be a huge component of that interface, of allowing us to talk to these systems. It’s a tool to combat our mind-blindness, to help us see diverse, very unconventional minds that are all around us.
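The Pavlovian-conditioning claim about molecular pathways can be illustrated with a bare-bones sketch: pair a neutral stimulus (CS) with one that already triggers a response (US), then probe the CS alone. This is purely illustrative, not a model of a real gene regulatory network; all names and numbers are mine.

```python
# Minimal Pavlovian conditioning: CS+US pairings build an association,
# after which the CS alone evokes the response.
class ConditionableSystem:
    def __init__(self, learning_rate=0.3):
        self.lr = learning_rate
        self.association = 0.0      # strength of the CS->response link

    def respond(self, cs=False, us=False):
        # The US triggers the response unconditionally; the CS only
        # does so to the extent the association has been learned.
        response = 1.0 if us else self.association
        if cs and us:               # pairing strengthens the link
            self.association += self.lr * (1.0 - self.association)
        return response

s = ConditionableSystem()
before = s.respond(cs=True)          # CS alone, pre-training: 0.0
for _ in range(10):                  # repeated CS+US pairings
    s.respond(cs=True, us=True)
after = s.respond(cs=True)           # CS alone now evokes a response
print(before, round(after, 2))
```

The point of the test is the before/after contrast on the CS alone: a system that passes it has a rudimentary associative memory, whatever medium it is implemented in.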
Lex Fridman
(00:56:19)
Can you generalize that? So let’s say we meet an alien or an unconventional mind here on Earth. Think of it as a black box. You show up. What’s the procedure for trying to get some hooks into a communication protocol with the thing?
Michael Levin
(00:56:43)
Yeah. That is exactly the mission of my lab. It is to enable us to develop tools to recognize these things, to learn to communicate with them, to ethically relate to them. And in general, to expand our ability to do this in the world around us. I specifically chose these kinds of things because they’re not as alien as proper aliens would be. So we have some hope. I mean, we’re made of them. We have many things in common. There’s some hope of understanding them.
Lex Fridman
(00:57:11)
You’re talking about xenobots and anthrobots?
Michael Levin
(00:57:12)
Xenobots, anthrobots, cells, and everything else. But they’re alien in a couple of important ways. One is that the space they live in is very hard for us to imagine. What space do they live in? Well, your body’s cells, long before we had a brain that was good for navigating three-dimensional space, were navigating the space of anatomical possibilities. You start as an egg, and you have to become, you know, a snake or a giraffe or a human, whatever we’re going to be.
Michael Levin
(00:57:42)
And I specifically am telling you that this general idea, when people model that with cellular automata type of ideas, this open-loop kind of thing where everything just follows local rules and eventually, there’s complexity, and here you go. Now, you’ve got a giraffe or a human. I’m specifically telling you that that model is totally insufficient to grasp what’s actually going on. What’s actually going on, and there have been many, many experiments on this, is that the system is navigating a space. It is navigating a space of anatomical possibilities. If you try to block where it’s going, it will try to get around you.
Michael Levin
(00:58:17)
If you try to challenge it with things it’s never seen before, it will try to come up with a solution. If you really defeat its ability to do that, which you can, you know, they’re not infinitely intelligent, so you can defeat them, you will either get birth defects, or you will get creative problem-solving such as what you’re seeing here with xenobots and anthrobots. If you can’t be a human, you’ll find another way to be. You can be an anthrobot, for example, or you’ll be something else.
Lex Fridman
(00:58:42)
Just to clarify, what’s the difference between cellular automata type of action where you’re just responding to your local environment and creating some kind of complex behavior, and operating in the space of anatomical possibilities?
Michael Levin
(00:58:56)
Sure.
Lex Fridman
(00:58:56)
So there’s a kind of goal, I guess, you’re articulating.
Michael Levin
(00:58:59)
Yes.
Lex Fridman
(00:58:59)
There is some kind…
Michael Levin
(00:59:01)
Yes.
Lex Fridman
(00:59:01)
…of thing. There’s a will to X something.
Michael Levin
(00:59:06)
The will thing, let’s put that aside.
Lex Fridman
(00:59:08)
Okay, sorry.
Michael Levin
(00:59:08)
Because that’s a… Well, it’s fine too.
Lex Fridman
(00:59:10)
There I go, anthropomorphizing. I just always love to quote Nietzsche, so there we go.
Michael Levin
(00:59:13)
Yeah. Yeah, yeah. And I’m not saying that’s wrong. I’m just saying I don’t have data for that one, but I’ll tell you the stuff that I’m quite certain of. There are a couple of different formalisms that we have in control theory. One of those formalisms is open-loop complexity. In other words, I’ve got a bunch of subunits, like a cellular automaton. They follow certain rules, and you turn the crank, time goes forward, and whatever happens, happens. Now, clearly you can get complexity from this. Clearly you can get some very interesting-looking things, right? The Game of Life, all those kinds of cool things. You can get complexity, no problem.
Michael Levin
(00:59:47)
But the idea that that model is going to be sufficient to explain and control things like morphogenesis is a hypothesis. It’s okay to make that hypothesis, but we know it’s false, despite the fact that that is what we learned in basic cell biology and developmental biology classes. The first time you see something like this, inevitably, especially if you’re an engineer, you go, “Hey, how does it know to do that? How does it know to make four fingers instead of seven?” What they tell you is, “It doesn’t know anything.” They make sure that’s very clear. When we learn these things, they all insist that nothing here knows anything.
Michael Levin
(01:00:27)
There are rules of chemistry, they roll forward, and this is what happens. Okay. Now, that model is testable. We can ask, “Does that model explain what happens?” Here’s where that model falls down. If you have that model and the situation changes, either there’s damage or something in the environment has happened, those kinds of open-loop models do not adjust to give you the same goal by different means. This is William James’ definition of intelligence: the same goal by different means. And in particular, they can’t be worked backward. Let’s say you are in regenerative medicine, and you say, “Okay, this is the situation now. I want it to be different. What should the rules be?” It’s not reversible.
Michael Levin
(01:01:09)
So the thing with those kinds of open-loop models is they’re not reversible. You don’t know what to do to make the outcome that you want. All you know how to do is roll them forward, right? Now, in biology, we see the following. If you have a developmental system and you put barriers between… So I’m going to give you two pieces of evidence that suggest that there is a goal. One piece of evidence is that if you try to block these things from the outcome that they normally have, they will do some amazing things. Sometimes very clever things, sometimes not at all the way that they normally do it, right? So this is William James’ definition.
Michael Levin
(01:01:45)
By different means, by following different trajectories, they will go around various local maxima and minima to get to where they need to go. It is navigation of a space. It is not blind, turn the crank, and wherever we end up is where we end up. That is not what we see experimentally. And more importantly, I think, what we’ve shown, and this is something that I’m particularly happy with in our lab, over the last 20 years, we’ve shown the following. We can actually rewrite the goal states because we found them. We have shown through our work on bioelectric imaging and bioelectric reprogramming, we have actually shown how those goal memories are encoded, at least in some cases.
Michael Levin
(01:02:21)
We certainly haven’t got them all, but we have some. If you can find where the goal state is encoded, read it out, and reset it, and the system will now implement a new goal based on what you just reset, that is the ultimate evidence that your goal-directed model is working, because if there were no goal, that shouldn’t be possible. Right? Once you can find it, read it, interpret it, and rewrite it, then by any engineering standard you’re dealing with a homeostatic mechanism.
Lex Fridman
(01:02:53)
How do you find where the goal’s encoded?
Michael Levin
(01:02:55)
So, through lots and lots of hard work.
Lex Fridman
(01:02:58)
The barrier thing is part of that? Creating barriers and observing?
Michael Levin
(01:03:01)
The barrier thing tells you that you should be looking for a goal.
Lex Fridman
(01:03:04)
So step one, when you approach an agentic system, is create a barrier of different kinds until you see how persistent it is at pursuing the thing it seemed to have been pursuing originally.
Michael Levin
(01:03:14)
Yeah.
Lex Fridman
(01:03:14)
And then you know, okay, cool, this is a… This thing has agency, first of all. And then second of all, like, you start to build the intuition about exactly which goal it’s pursuing.
Michael Levin
(01:03:24)
Yes. The first couple of steps are all imagination. You have to ask yourself, “What space is this thing even working in?” And you really have to stretch your mind, because we can’t imagine all the spaces that systems work in, right? So step one is, what space is it? Step two, what do I think the goal is? And let’s not mistake step two, you’re not done. Just because you have made a hypothesis, that doesn’t mean you can say, “Well, I see it doing this, therefore that’s the goal.” You don’t know that. You have to actually do experiments. Now, once you’ve made those hypotheses, now you do the experiments.
Michael Levin
(01:03:50)
You say, “Okay, if I want to block it from reaching its goal, how do I do that?” And this, by the way, is exactly the approach we took with the sorting algorithms and with everything else. You hypothesize the goal, you put a barrier in, and then you get to find out what level of ingenuity it has. Maybe what you see is, “Well, that derailed everything, so probably this thing isn’t very smart.” Or you say, “Oh, wow, it can go around and do these things.” Or you might say, “Wow, it’s taking a completely different approach using its affordances in novel ways, like that’s a high level of intelligence.” You will find out what the answer is.
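The barrier methodology above can be caricatured in code: an open-loop recipe derails the moment you block its usual route, while a goal-directed (closed-loop) agent reaches the same goal by different means, in William James’ sense. This is an illustrative toy on a small grid, not Levin’s actual experimental setup; all names are mine.

```python
from collections import deque

def open_loop(start, moves, walls):
    """Replay a fixed move sequence; stop if a wall blocks it."""
    x, y = start
    for dx, dy in moves:
        if (x + dx, y + dy) in walls:
            return (x, y)            # derailed: no adjustment possible
        x, y = x + dx, y + dy
    return (x, y)

def goal_directed(start, goal, walls, size=5):
    """Breadth-first search: reach the same goal by different means."""
    frontier, seen = deque([start]), {start}
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == goal:
            return True
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                frontier.append(nxt)
    return False

start, goal = (0, 0), (4, 0)
recipe = [(1, 0)] * 4                  # works fine on an empty grid
barrier = {(2, 0), (2, 1), (2, 2)}     # now block the usual route

print(open_loop(start, recipe, barrier))    # stuck at (1, 0)
print(goal_directed(start, goal, barrier))  # True: it goes around
```

The open-loop recipe and the closed-loop agent behave identically on the unobstructed grid; only the barrier reveals which one actually encodes the goal, which is exactly why the barrier experiment is step one.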

Memories and ideas are living organisms

Lex Fridman
(01:04:21)
Another pothead question. Speaking of unconventional organisms, and going to Richard Dawkins, for example, with memes: is it possible to think of things like ideas as organisms? Like, how weird can we get? Can we look at ideas as organisms, create barriers for those ideas, and see… if you take individual ideas, and try to empathize and visualize what kind of space they might be operating in, can they be seen as organisms that have a mind?
Michael Levin
(01:04:58)
Yeah. Okay, if you want to get really weird, we can get really weird here. Think about the caterpillar-butterfly transition, okay? So, you’ve got a caterpillar, soft-bodied kind of creature, has a particular controller that’s suitable for running a soft body, you know, kind of robot. It has a brain for that task, and then it has to become this butterfly, hard-bodied creature, flies around. Okay. During the process of metamorphosis, its brain is basically ripped up and rebuilt from scratch, right? Now, what’s been found is that if you train the caterpillar, so you give it a new memory, meaning that if the caterpillar sees this color disc, then it crawls over and eats some leaves. Turns out, the butterfly retains that memory.
Michael Levin
(01:05:39)
Now, the obvious question is, how the hell do you retain memories when the medium is being refactored like that? But let’s put that aside, because I’m going to get somewhere even weirder. There’s something else that’s even more interesting. It’s not just that you have to retain the memory. You have to remap that memory onto a completely new context, because guess what? The butterfly doesn’t move the way the caterpillar moves, and it doesn’t care about leaves. It wants nectar from flowers. And so if that memory is going to survive, it can’t just persist. It has to…
Lex Fridman
(01:06:10)
Be remapped
Michael Levin
(01:06:11)
…be remapped into a novel context. Now, here’s where things get weird. We can take a couple of different perspectives here. We can take the perspective of the caterpillar facing some sort of crazy singularity and say, “My God, I’m going to cease to exist, but, you know, I’ll sort of be reborn in this new higher-dimensional world where I’ll fly.” Okay, so that’s one thing. We can take the perspective of the butterfly and say that, “Well, here I am, but, you know, I seem to be saddled with some tendencies and some memories, and I don’t know where the hell they came from, and I don’t remember exactly how I got them, and they seem to be a core part of my psychological makeup, and, you know, they’re…
Michael Levin
(01:06:49)
They come from somewhere; I don’t know where.” Right? So you can take that perspective. But there’s a third perspective that I think is really interesting and useful: the perspective of the memory itself. So what is a memory? It is a pattern, an informational pattern that was continuously reinforced within one cognitive system. And now here I am, this memory. What do I need to do to persist into the future? Well, now I’m facing the paradox of change. If I try to remain the same, I’m gone. There’s no way the butterfly is going to retain me in the original form that I’m in now. What I need to do is change, adapt, and morph. Now, you might say, “Well, that’s kind of crazy.
Michael Levin
(01:07:31)
Well, how are you taking the perspective of a pattern within an excitable medium?” Right? Agents are physical things. You’re talking about information, right? So let me tell you another quick science fiction story. Imagine that some creatures come out from the center of the earth. They live down in the core. They’re super dense, okay? They’re incredibly dense because they live down in the core. They have gamma ray vision, you know, for… And so on. So they come out to the surface. What do they see? Well, all of this stuff that we’re seeing here, this is like a thin plasma to them. They are so dense. None of this is solid to them.
Michael Levin
(01:08:06)
They don’t see any of this stuff. So they’re walking around, you know, because the planet is sort of, you know, covered in this like thin gas, you know. And one of them is a scientist and he’s taking measurements of the gas, and he says to the others, “You know, I’ve been watching this gas, and there are like little whirlpools in this gas, and they almost look like agents. They almost look like they’re doing things. They’re moving around, they kind of hold themselves together for a little bit, and they’re trying to make stuff happen.” And the others say, “Well, that’s crazy. Patterns in a gas can’t be agents. We’re agents. We’re solid. This is just patterns in an excitable medium.”
Michael Levin
(01:08:38)
And by the way, how long do they hold together? He says, “Well, about 100 years.” “Well, that’s crazy. No real agent can exist if it dissipates that fast.” Okay. We are all metabolic patterns, among other things, right? So you see what I’m warming up to here. One of the things that we’ve been trying to dissolve, and this is some work that I’ve done with Chris Fields and others, is this distinction between thoughts and thinkers. All agents are patterns within some excitable medium, we could talk about what that is, and they can spawn off others. And now you can have a really interesting spectrum. Here’s the spectrum.
Michael Levin
(01:09:15)
You can have fleeting thoughts, which are like waves in the ocean when you throw a rock in. You know, they sort of go through the excitable medium and then they’re gone. They pass through and they’re gone, right? So those are kind of fleeting thoughts. Then you can have patterns that have a degree of persistence, so they might be hurricanes or solitons or persistent thoughts or earworms or depressive thoughts. Those are harder to get rid of. They stick around for a little while. They often do a little bit of niche construction, so they change the actual brain to make it easier to have more of those thoughts, right? Like, that’s a thing. And so they stay around longer. Now what’s further than that?
Michael Levin
(01:10:00)
Well, personality fragments in dissociative identity disorder are more stable, and they’re not just on autopilot. They have goals and they can do things, and then past that is a full-blown human personality. And who the hell knows what’s past that? Maybe some sort of trans-human, trans-personal something, I don’t know, right? But this idea, again, I’m back to this notion of a spectrum: there is not a sharp distinction between “we are real agents” and “we merely have these thoughts.” Patterns can be agents too, but again, you don’t know until you do the experiment. So if you want to know whether a soliton or a hurricane or a thought within a cognitive system is its own agent, do the experiment. See what it can do.
Michael Levin
(01:10:43)
Can it learn from experience? Does it have memories? Does it have goal states? Does it have language? What can it do? So coming back to your original question: yeah, we can definitely apply this methodology to ideas and concepts and social whatevers, but you’ve got to do the experiment.
Lex Fridman
(01:11:04)
That’s such a challenging thought experiment, thinking about memories, from the caterpillar to the butterfly, as organisms. I think at the very basic level, intuitively, we think of organisms as hardware, and of software as not possibly being able to be organisms, but…
Lex Fridman
(01:11:26)
…what you’re saying is that it’s all just patterns in an excitable medium, and it doesn’t really matter what the pattern is or what the excitable medium is. We need to do the testing: how persistent is it? How goal-oriented is it? And there are certain kinds of tests to do that, and you can apply them to memories. You can apply them to ideas. You can apply them to anything, really. You could probably think about consciousness. There’s really no boundary to what you can imagine. Really, really wild things could be minds.
Michael Levin
(01:12:08)
Yeah. Stay tuned. I mean, this is exactly what we’re doing. We’re getting progressively more and more unconventional. I mean, so this whole distinction between software and hardware, I think it’s a super important concept to think about. And yet, the way we’ve mapped it onto the world, I would like to blow that up in the following way. And again, I want to point out what the practical consequences are, because this is not just, you know, fun stories that we tell each other. These have really important research implications. Think about a Turing machine. So one thing you can say is the machine’s the agent.
Michael Levin
(01:12:47)
It has passive data, and it operates on the data, and that’s it. The story of agency is the story of whatever that machine can and can’t do. The data is passive, and the machine moves it around. But you can tell the opposite story. You can say, “Look, the patterns in the data are the agent. The machine is a stigmergic scratch pad, and what it does is just the consequence of the data doing what data does, working itself out.” And both of those stories make sense depending on what you’re trying to do. Here’s the biomedical side of things: our program in bioelectricity and aging.
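The dual reading can be made concrete with even a toy Turing machine. The sketch below is the editor’s illustration, not anything from Levin’s work: one tiny machine that appends a 1 to a run of 1s, whose single trace supports both stories.

```python
# A minimal Turing machine: unary increment (append a 1 to a run of 1s).
# The same trace can be read two ways: the head ("machine as agent")
# rewriting passive tape, or the tape pattern using the head as its
# stigmergic scratch pad. The physics is identical; only the story changes.

def run_tm(tape, rules, state="scan", head=0, max_steps=100):
    tape = dict(enumerate(tape))          # sparse tape; "_" means blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape.get(i, "_") for i in range(min(tape), max(tape) + 1)]
    return "".join(cells)

# Rules: scan right over 1s; on the first blank, write a 1 and halt.
rules = {
    ("scan", "1"): ("1", +1, "scan"),
    ("scan", "_"): ("1",  0, "halt"),
}

print(run_tm("111", rules))   # prints 1111
```

Nothing in the code itself privileges head over tape; "agent" is a label we attach when narrating the run.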
Michael Levin
(01:13:19)
One model you could have is the physical organism is the agent and the cellular collective has pattern memories, specifically what I was saying before, goals, anatomical goals. If you want to persist for 100 plus years, your cells better remember what your correct shape is and where the new cells go, right? So there are these pattern memories. They exist during embryogenesis, during regeneration, during resistance to aging. We can see them. We can visualize them. One thing you can imagine is, fine, the physical body, the cells, are the agent. The electrical pattern memories are just data, and what might happen during aging is that the data might get degraded. They might get fuzzy.
Michael Levin
(01:14:01)
And so what we need to do is reinforce the memories, reinforce the pattern memories. That’s one specific research program, and we’re doing that. But that’s not the only research program, because the other thing you might imagine is that, what if the patterns are the agent in exactly the same sense as we think in our brains? It’s the patterns of electrophysiological computations, whatever else, that is the agent, right?
Michael Levin
(01:14:30)
And what they’re doing in the brain, the side effects of the patterns working themselves out, might be to fire off some muscles and some glands and some other things. From that perspective, maybe what’s actually happening in aging is that the agent finds it harder and harder to be embodied in the physical world. Why? Because the cells might get less responsive. In other words, the cells are sluggish. The patterns are fine; they’re just having a harder time making the cells do what they need to do. And then maybe what you need to do is not reinforce the memories but make the cells more responsive to them, and that is a different research agenda, which we are also doing.
Michael Levin
(01:15:07)
We have evidence for that as well, actually now, and we published it recently. And so my point here is, when we tell these crazy sci-fi stories, the only worth to them and the only reason I’m talking about them now, and I hadn’t been… you know, a year ago I wasn’t talking about this stuff, is because these are now actionable in terms of specific experimental research agendas that are heading to the clinic, I hope, in some of these biomedical approaches. And so now here we can go beyond this and we can say, okay, so up until now we’ve considered… What are disease states? Well, we know there’s organic disease, something that’s physically broken. We can see the tissues breaking down.
Michael Levin
(01:15:40)
There’s damage in the joint, or the liver is doing whatever; we can see these things. But what about disease states that are not physical states but physiological states or informational states or cognitive problems? In other words, in all of these other spaces, you can start to ask: what’s a barrier in gene expression space? What’s a local minimum that traps you in physiological state space? What is a stress pattern that keeps itself together, moves around the body, causes damage, tries to keep itself going? What level of agency does it have?
Michael Levin
(01:16:15)
This suggests an entirely different set of approaches to biomedicine, and anybody who’s, let’s say, in the alternative medicine community is probably yelling at the screen right now saying, “We’ve been saying this for hundreds of years.” And I’m well aware these ideas are not new. What’s new is being able to take this and make it actionable and say, “Yeah, but we can image this now. I can now actually see the bioelectric patterns and why they go here and not there,” and we have the tools that now hopefully will get us to therapeutics. So this is very actionable stuff, and it all leans on not assuming we know minds when we see them, because we don’t, and we have to do experiments.
Lex Fridman
(01:16:57)
To return to the software-hardware distinction, you’re saying that we can see the software as the organism and the hardware as just the scratchpad, or we can see the hardware as the organism and the software as the thing the hardware generates. And in so doing, we can decrease the importance we assign to something like the human brain: it could be the activations, the electrical signals, that are the organisms, and the brain is the scratchpad.
Michael Levin
(01:17:30)
And by saying scratchpad, I don’t mean it’s not important. When we get to talking about the Platonic space, we have to talk about how important the interface actually is. The scratchpad isn’t unimportant; the scratchpad is critical. My only point is that when we have these formalisms of software, of hardware, of other things, the way we map those formalisms onto the world is not obvious. It’s not given to us. We get used to certain things, right? But who’s the hardware, who’s the software, who’s the agent and who’s the excitable medium is to be determined.

Reality is an illusion: The brain is an interface to a hidden reality

Lex Fridman
(01:18:02)
So this is a good place to talk about the increasingly radical, weird ideas that you’ve been writing about. You’ve mentioned it a few times: the Platonic space. There’s this “Ingressing Minds” paper where you describe the Platonic space, and you mentioned there’s an asynchronous conference happening, which is a fascinating concept: people are just contributing asynchronously.
Michael Levin
(01:18:30)
So what happened was this crazy notion, which I’ll describe momentarily; I had given a couple of talks on it. I then found a couple of papers in the machine learning community on the Platonic Representation Hypothesis, and I said, “That’s pretty cool. These guys are climbing up to the same point. I’m getting at it from biology and philosophy; they’re getting there from computer science and machine learning. We’ll take a couple of hours, I’ll give a talk, they’ll give a talk, we’ll talk about it.” I thought there were going to be three talks at this thing.
Michael Levin
(01:18:57)
Once I started reaching out to people for this, everybody sort of said, “You know, I know somebody who’s really into this stuff, but they never talk about it because there’s no audience for this,” so I reached out to them. And then they said, “Yeah. Oh, yeah, I know this mathematician,” or, “I know this, you know, economist, whatever, who has these ideas and there’s nowhere we can have her talk about them.” So I got this whole list and it became completely obvious that we can’t do this in a normal… You know, we are now booked up through December, so every week in our center, somebody gives a talk. We kind of discuss it. It all goes on this thing.
Michael Levin
(01:19:29)
I’ll give you a link to it, and then there’s a huge running discussion after that, and in the end, we’re all going to get together for an actual real-time discussion section and talk about it. There are going to be probably 15 or so talks about this from all kinds of disciplines. It’s blown up in a way I didn’t expect; I didn’t realize how much of an undercurrent of these ideas already existed, ready for this moment. I’ve been thinking about these things for, I don’t know, 30-plus years. I never talked about them before because they weren’t actionable before; there wasn’t a way to actually make empirical progress with them.
Michael Levin
(01:20:07)
You know, this is something that Pythagoras and Plato and probably many people before them talked about, but now we’re to the point where we can actually do experiments and they’re making a difference in our research program.
Lex Fridman
(01:20:19)
You can just look it up: “Platonic Space Conference.” There are a bunch of different fascinating talks: yours first, on patterns of forms and behavior beyond emergence; then “Radical Platonism and Radical Empiricism” from Joel Dietz; “Patterns and Explanatory Gaps in Psychotherapy: Does God Play Dice?” from Alexey Tolchinsky; and so on. So, let’s talk about it. What is it? And it’s fascinating that the origins of some of these ideas are connected to ML people thinking about representation space.
Michael Levin
(01:20:58)
Yeah. The first thing I want to say is that while I’m currently calling it the Platonic space, I am in no way trying to stick close to the things that Plato actually thought about. In fact, to whatever extent we even know what that is, I think I depart from it in some ways, and I’m going to have to change the name at some point. The reason I’m using the name now is that I wanted to be clear about a particular connection to mathematics: a lot of mathematicians would call themselves Platonists, because what they think they’re doing is discovering, not inventing as a human construction, a structured, ordered space of truths. Let’s put it this way.
Michael Levin
(01:21:38)
In biology, as in physics, something very curious happens if you keep asking why. I’ll give you two examples. First, cicadas. The cicadas come out at 13 years and 17 years, okay? And if you’re a biologist and you say, “So why is that?”, you get this explanation: well, it’s because they’re trying to be off-cycle from their predators. If it were 12 years, then a predator on a two-year, three-year, four-year, or six-year cycle would eat you when you come out, right? And you say, “Okay, cool. That makes sense. What’s special about 13 and 17?” Oh, they’re prime. Uh-huh. And why are they prime?
Michael Levin
(01:22:18)
Well, now you’re in the math department. You’re no longer in the biology department or the physics department; you’re in the math department, trying to understand why the distribution of primes is what it is. Another example, and I’m not a physicist, but what I see is that every time you talk to a physicist and ask, “Hey, why do the leptons do this or that, or why are the fermions doing whatever?”, eventually the answer is: oh, because there’s this mathematical group, SU(8) or whatever the heck it is, and it has certain symmetries and certain structures. Great. Once again, you’re in the math department. So something interesting happens: there are facts that you come across, and many of them are very surprising.
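The cicada argument can be sketched numerically. This is a toy model with made-up predator cycles, not real ecology: a cicada on an n-year cycle meets an m-year predator every lcm(n, m) years, and prime cycles push those collisions furthest apart.

```python
# Toy version of the cicada argument: a cicada emerging every n years
# collides with a predator on an m-year cycle once every lcm(n, m) years.
# Prime cycles like 13 and 17 maximize the average time between collisions.
from math import lcm

predator_cycles = range(2, 10)   # hypothetical predator periods, 2-9 years

def mean_years_between_collisions(n):
    return sum(lcm(n, m) for m in predator_cycles) / len(predator_cycles)

for n in range(12, 18):
    print(n, mean_years_between_collisions(n))
```

Running it, 13 beats every composite neighbor (12, 14, 15, 16), and 17 does even better: the math department, not the biology department, is deciding which cycle lengths win.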
Michael Levin
(01:22:55)
You don’t get to design them. You get more out than you put in, in a certain way, because you make very minimal assumptions, and then certain facts are thrust upon you. For example, the value of Feigenbaum’s constant, or the value of e, the base of the natural logarithm. These things you discover, right? And the salient fact is this: if those facts were different, then biology and physics would be different. So they matter; they impact the physical world instructively, functionally. If the distribution of primes were something else, the cicadas would be coming out at different times. But the reverse isn’t true. What I mean is, there is nothing you can do in the physical world, as far as I know, to change e or to change Feigenbaum’s constant.
Michael Levin
(01:23:40)
You could have swapped out all the physical constants at the Big Bang; you can change all those different things, but you are not going to change these. So this, I think, Plato and Pythagoras understood very clearly: there is a set of truths which impact the physical world but are not themselves defined or determined by what happens in the physical world. You can’t change them by things you do in the physical world. And so I’ll make a couple of claims about that. One claim is, I think we call “physics” the things that are constrained by those patterns. When you say, “Hey, why is this the way it is?”, ah, it’s because of how symmetries or topology or whatever work. Biology, by contrast, is the things that are enabled by them. They’re free lunches. They’re…
Michael Levin
(01:24:24)
Biology exploits these kinds of truths, and that really enables biology and evolution to do amazing things without having to pay for them. I think there are a lot of free lunches going on here. So I show you a xenobot or an anthrobot, and I say, “Hey, look, here are some amazing things they’re doing” that tissue has never done before in its history. You ask, first of all, where did that come from? And when did we pay the computational cost for it? Because we know when we paid the computational cost to design a frog or a human: it was the eons that the genome spent bashing against the environment, getting selected, right? But there have never been any anthrobots. There have never been any xenobots.
Michael Levin
(01:25:02)
When did we pay the computational cost for designing kinematic self-replication and all these other things that they’re able to do? There are two things people say. One is, “Well, you got it at the same time that they were being selected to be good humans and good frogs.” Now, the problem with that is that it kind of undermines the point of evolution. The point of evolutionary theory was to have a very tight specificity between how you are now and the history of selection, the history of environments, that got you here. If you say, “Yeah, okay, this is what your environmental history was, and by the way, you got something completely different…”
Michael Levin
(01:25:37)
…you got these other skills that nobody knew about, that’s really strange, right? And so then what people say is, “Well, it’s emergent.” And I say, “What does that mean, besides the fact that you got surprised?” Emergence often just means I didn’t see it coming: something happened, and I didn’t know it was going to happen. So what does it mean that it’s emergent? And there are many emergent things like this. For example, the fact that gene regulatory networks can do associative learning. That’s amazing, and you don’t need evolution for that; even random gene regulatory networks can do associative learning.
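For readers unfamiliar with associative learning, here is a deliberately generic caricature of Pavlovian conditioning. It is NOT the gene regulatory network models Levin alludes to; it is just a minimal Hebbian toy showing what "associative learning" means operationally: after pairing, a previously neutral stimulus evokes the response on its own.

```python
# A toy of associative conditioning (not the actual gene-network models):
# a response node R is driven hard by an unconditioned stimulus US and,
# initially, only weakly by a conditioned stimulus CS. Pairing CS with US
# strengthens the CS->R coupling (Hebbian update), after which CS alone
# evokes the response.

w_cs = 0.1            # initial CS -> R coupling (weak)
W_US = 1.0            # fixed US -> R coupling (strong)
LEARNING_RATE = 0.2
THRESHOLD = 0.5       # R "fires" above this

def response(cs, us, w):
    return cs * w + us * W_US

# Before training: CS alone does nothing.
assert response(cs=1, us=0, w=w_cs) < THRESHOLD

# Training: present CS and US together; Hebbian strengthening of w_cs.
for _ in range(10):
    r = response(cs=1, us=1, w=w_cs)
    w_cs += LEARNING_RATE * 1 * r     # dw proportional to pre * post activity

# After training: CS alone now evokes the response.
print(response(cs=1, us=0, w=w_cs) > THRESHOLD)   # prints True
```

The surprising empirical claim in the conversation is that dynamics with this flavor fall out of gene regulatory networks that were never selected for learning at all.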
Michael Levin
(01:26:07)
I ask, “Why does that happen?” And they say, “Well, it’s just a fact that holds in the world.” So now you have an option, and you can go one of two ways. You can say, “Okay, look, I like my sparse ontology. I don’t want to think about weird Platonic spaces. I’m a physicalist. I want the physical world, nothing more.” So what we’re going to do is, when we come across these crazy things that are very specific, like the fact that anthrobots have four specific behaviors that they switch around. Why four? Why not 12? Why not 100?
Michael Levin
(01:26:36)
When we come across these things, just like when we come across the value of e or Feigenbaum’s number or whatever, we’re going to write them down in our big book of emergence. And that’s it; we’re just going to have to live with it. There are some cool surprises, and when we come across them, we write them down. It’s a random grab bag of stuff. That’s one way to go: the upside is you get to be a physicalist, and you get to keep your sparse ontology.
Michael Levin
(01:27:02)
The downside is I find it incredibly pessimistic and mysterian because you’re basically then just willing to make a catalog of these amazing patterns. Why not, instead, and this is why I started with this Platonic terminology, why not do what the mathematicians already do? A huge number of them say, “We are going to make the same optimistic assumption that science makes, that there’s an underlying structure to that latent space.” It’s not, like, a random grab bag of stuff. There’s a space to it where these patterns come from, and by studying them systematically, we can get from one to another. We can map out the space. We can find out the relationships between them.
Michael Levin
(01:27:43)
We can get an idea of what’s in that space, and we’re not going to assume that it’s just random; we’re going to assume there’s some kind of structure to it. And you’ll see all kinds of people, well-known mathematicians, who talk about this stuff: Penrose and lots of others who will say, “Yeah, there’s another space besides the physical one, and it has structure. It has components to it, and we can traverse that space in various ways. And then there’s the physical space.” So I find that much more appealing because it suggests a research program, which we are now undertaking in our lab.
Michael Levin
(01:28:14)
The research program is this: everything that we make, cells, embryos, robots, biobots, language models, simple machines, all of it, they are interfaces. All physical things are interfaces to these patterns. You build an interface, and some of those patterns are going to come through that interface; depending on what you build, some patterns rather than others come through. The research program is mapping out the relationship between the physical pointers that we make and the patterns that come through them: understanding what the structure of that space is, what exists in that space, and what I need to make physically to make certain patterns come through.
Michael Levin
(01:28:50)
Now, when I say patterns, we have to ask, “What kinds of things live in that space?” Well, the mathematicians will tell you, “We already know. We have a whole list of objects. You know, the amplituhedrons and all this crazy stuff that lives in that space.” Yeah, I think that’s one layer of stuff that lives in that space, but I think those patterns are the lower agency kinds of things that are basically studied by mathematicians. What also lives in that space are much more active, more complex, higher agency patterns that we recognize as kinds of minds, that behavioral scientists would look at that pattern and say, “Well, I know what that is. That’s the competency for delayed gratification or problem-solving of certain kinds,” or whatever.
Michael Levin
(01:29:29)
And so, what I end up with right now is a model in which that latent space contains things that come through physical objects. Simple interfaces pull through simple patterns: facts about triangles and Fibonacci patterns and fractals and things like that. But if you make more complex interfaces, such as biologicals, let’s say cells and embryos and tissues, what you will then pull down are much more complex patterns, the ones we point to and say, “Ah, that’s a mind. That’s a human mind,” or “That’s a snake mind,” or whatever.
Michael Levin
(01:30:02)
So I think the mind-brain relationship is exactly the kind of thing that the math-physics relationship is, that in some very interesting way, there are truths of mathematics that become embodied, and they kind of haunt physical objects, right, in a very specific functional way. And in the exact same way, there are other patterns that are much more complex, higher agency patterns that basically inform living things that we see as obvious embodied minds.
Lex Fridman
(01:30:35)
Okay, given how weird and complicated what you’re describing is, we’ll talk about it more, but you’ve got to ELI5 the basics for a person who’s never seen this. You mentioned things like pointers: the physical objects themselves, or the brain, are pointers to that Platonic space. What is in that Platonic space? What is the Platonic space? What is the embodiment? What is the pointer?
Michael Levin
(01:31:05)
Yeah, okay. Let’s try it this way. There are certain facts of mathematics. So the distribution of prime numbers, right, that if you map them out, they make these nice spirals. And there’s an image that I often show, which is a very particular kind of fractal.
Michael Levin
(01:31:22)
And that fractal is the Halley map, and it’s pretty awesome that it actually looks very organic. It looks very biological. So if you look at that image, which has very specific, complex structure, it’s a map of a very compact mathematical object. The formula is something like z cubed plus seven; that’s it. So now you look at that structure and you ask, where does that actually come from? It’s definitely not packed into the z cubed plus seven; there are not enough bits in that to give you all of it. There’s no fact of physics that determines this. There’s no evolutionary history; it’s not like we selected this from a larger set over time. Where does this come from?
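The "Halley map" fractal can be sketched directly. Levin recalls the formula only roughly ("z cubed plus seven, something like that"), so the polynomial f(z) = z³ + 7 below is an assumption; the sketch colors each point of the complex plane by which root Halley's root-finding method converges to, and the basin boundaries are the organic-looking fractal structure.

```python
# Sketch of a "Halley map": basins of attraction of Halley's root-finding
# method applied to f(z) = z**3 + 7 (the exact polynomial Levin shows is
# an assumption here). Each grid point is iterated and labeled by which
# cube root of -7 it converges to; basin boundaries form the fractal.

def halley_step(z):
    f, df, ddf = z**3 + 7, 3 * z**2, 6 * z
    return z - (2 * f * df) / (2 * df**2 - f * ddf)

# The three cube roots of -7.
r = 7 ** (1 / 3)
roots = [r * complex(0.5, 3 ** 0.5 / 2),
         complex(-r, 0),
         r * complex(0.5, -(3 ** 0.5) / 2)]

def basin(z, iters=50):
    for _ in range(iters):
        try:
            z = halley_step(z)
        except ZeroDivisionError:   # e.g. the critical point z = 0
            return "?"
        for k, root in enumerate(roots):
            if abs(z - root) < 1e-6:
                return str(k)
    return "?"

# Coarse ASCII rendering of the basins near the origin.
for y in range(20):
    im = 2 - y * 0.2
    print("".join(basin(complex(-2 + x * 0.1, im)) for x in range(40)))
```

Even at this crude resolution, the interleaved 0/1/2 regions show far more structure than the handful of symbols in the formula, which is exactly the "more out than you put in" point.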
Michael Levin
(01:32:01)
Or the fact that… Think about the way that biology exploits these things. Imagine a world in which the highest fitness belonged to a certain kind of triangle, right? So evolution cranks a bunch of generations and it gets the first angle right, then cranks a bunch more generations, gets a second angle right. Now there’s something amazing that happens. It doesn’t need to look for the third angle because you already know. If you know two, you get this magical free gift from geometry that says, “Well, I already know what the third one should be.” You don’t have to go look for it.
Michael Levin
(01:32:29)
Or, as evolution, if you invent a voltage-gated ion channel, which is basically a transistor, and you can make a logic gate, then all the truth tables, and the fact that NAND is universal, and all these other things, you don’t have to evolve. You get those for free. You inherit them. Where do all those things live, these mathematical truths that you come across and have no choice about? Once you’ve committed to certain axioms, there’s a whole bunch of other stuff that is now just what it is. And so what I’m saying, and this is what Pythagoras was saying, I think, is that there is a whole space of these kinds of truths.
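The "truth tables for free" point is a standard fact of logic that can be demonstrated in a few lines (the editor's sketch, not anything specific to ion channels): once you have one universal gate, every other gate is just a wiring pattern that never had to be searched for.

```python
# Once evolution has one universal gate, the rest of Boolean logic is a
# free lunch: every other gate is just a wiring pattern of NANDs, a fact
# of logic that never had to be searched for or paid for.

def NAND(a, b):
    return 1 - (a & b)

def NOT(a):       return NAND(a, a)
def AND(a, b):    return NOT(NAND(a, b))
def OR(a, b):     return NAND(NOT(a), NOT(b))
def XOR(a, b):    # the classic four-NAND construction
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

# The derived truth tables come out right "for free":
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a & b)
        assert OR(a, b)  == (a | b)
        assert XOR(a, b) == (a ^ b)
print("all gates recovered from NAND alone")
```

Nothing about the four truth tables was stored in the NAND definition; they were inherited from the structure of Boolean logic the moment one universal gate existed.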
Michael Levin
(01:33:04)
Now, he was focused on mathematical ones, but he was embodying them in music and in geometry and in things like that. There is a space of these patterns, and they make a difference in the physical world: to machines, to sound, to things like that. I’m extending it: so far we’ve only been looking at the low-agency inhabitants of that world. There are other patterns that we would recognize as kinds of minds, and you don’t see them until there’s an interface, until there’s a way for them to come through into the physical world. You need that interface in the same way that you have to make a triangular object before you can actually see what you’re going to gain, right?
Michael Levin
(01:33:45)
Out of the rules of geometry and whatever. Or you have to actually do the computation on the fractal before you actually see that pattern. If you want to see some of those minds, you have to build an interface, at least if you’re going to interact with them in the physical world, the way we normally do science. As Darwin said, mathematicians seem to have an extra sense, a different sense than the rest of us. Maybe mathematicians can interact with these patterns directly in that space. But for the rest of us, we have to make interfaces.
Michael Levin
(01:34:13)
And when we make interfaces, which might be cells or robots, embryos or whatever, what we are pulling down are minds that are fundamentally not produced by physics. I don’t know if we’re going to get into the whole consciousness thing, but I don’t believe that we create consciousness, whether we make babies or whether we make robots. Nobody’s creating consciousness. What you create is an interface, a physical interface through which specific patterns, which we call kinds of minds, are going to ingress. And consciousness is what it looks like from that direction, looking out into the world: the view from the perspective of the Platonic patterns.
Lex Fridman
(01:34:53)
Just to clarify, you’re saying a pretty radical idea here. If there’s a mapping from mathematics to physics, okay, that’s understandable, intuitive, as you’ve described. But what you’re suggesting is that there’s a mapping from some kind of abstract mind object to an embodied brain, the thing we think of as a mind as fellow humans. What is that, exactly? Because you said interface, and you’ve also said pointer. So the brain, and I think you said somewhere, is a thin interface.
Michael Levin
(01:35:37)
A thin client. Yeah, a brain is a thin client.
Lex Fridman
(01:35:40)
Thin client. Okay. So a brain is a thin client to this other world. Can you lay out very clearly how radical the idea is? Because you’re kind of dancing around it. I think you could also point to Donald Hoffman, who speaks of an interface to a world: we only interact with the, quote unquote, real world through an interface. What is the connection here?
Michael Levin
(01:36:11)
Okay, a couple of things. First of all, when you said it makes sense for physics, I want to show that it’s not as simple as it sounds. What it means is that even in Newton’s boring, classical universe, long before quantum anything, physicalism was already dead. Think about what that means. This is nuts. Pythagoras and Plato already knew perfectly well that even in a totally classical, deterministic world, you have the ingression of information, from a space that is itself not physical, that determines what happens and what’s possible and what’s not possible in that world. In other words, take something like e, the base of the natural logarithm.
Michael Levin
(01:36:57)
Nothing in Newton’s world sets the value of e. There is nothing you could do to change the value of e in that world. And yet, the fact that it was that and not something else governed all sorts of properties of things that happened. That classical world was already haunted by patterns from outside that world. This is wild. This is not saying, “Okay, everything was cool. Physicalism was great up until, you know, maybe we got quantum interfaces or we got consciousness or whatever. But originally it was fine.” No, this is saying that that worldview was already impossible, really, from a very long time ago. We already knew that there are non-physical properties that matter in the physical world.
Lex Fridman
(01:37:42)
This is the chicken-or-the-egg question. You’re saying Newton’s laws are creating the physical world?
Michael Levin
(01:37:51)
That is a very deep follow-on question that we will come back to in a minute. All I was saying about Newton is that you don’t need quantum anything. You don’t need to think about consciousness. You already, long before you get to any of that, as Pythagoras, I think, knew, already we have the idea that this physical world is being strongly impacted by truths that do not live in the physical world. And when I say…
Lex Fridman
(01:38:18)
Wait. Which truths are we referring to? Are we talking about Newton’s laws, like mathematical equations, or…
Michael Levin
(01:38:23)
No, no. Mathematical facts. So, for example, the actual value of e or…
Lex Fridman
(01:38:27)
Oh, like very primitive mathematical facts.
Michael Levin
(01:38:29)
Yeah, yeah. I mean, some of them are… I mean, if you ask Don Hoffman, there’s this amplituhedron thing that is a set of mathematical objects that determines all the scattering amplitudes of the particles and whatever. They don’t have to be simple. I mean, the old ones were simple. Now they’re like crazy. I can’t imagine this amplituhedron thing, but maybe they can. But all of these are mathematical structures that explain and determine facts about the physical world, right? If you ask physicists, “Hey, why this many of this type of particle?” “Ah, because this mathematical thing has these symmetries.” That’s why.
Lex Fridman
(01:39:00)
So Newton is discovering these things. They’re not… He’s not inventing.
Michael Levin
(01:39:04)
This is very controversial, right? And there are, of course, physicists and mathematicians who disagree with what I’m saying, for sure. But what I’m leaning on is simply this: I don’t know of anything you can do in the physical world. You’re around at the Big Bang, you get to set all the constants. Set physics however you want. Can you change e? Can you change Feigenbaum’s constant? I don’t think you can.
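The claim here, that mathematical constants offer no knob a physicist could turn, can be illustrated numerically. The sketch below (not from the conversation) computes e by two unrelated constructions; neither setup contains a parameter that would let you "set" e to anything else, which is the sense in which the value is simply given:

```python
import math

# Two independent constructions converge on the same constant e.
# Nothing in either setup is a free parameter controlling the
# limiting value; it is what comes out, not what we put in.
limit_e = (1 + 1 / 1_000_000) ** 1_000_000                # compound-interest limit
series_e = sum(1 / math.factorial(k) for k in range(20))  # Taylor series of exp(1)

print(limit_e)   # ≈ 2.71828
print(series_e)  # ≈ 2.718281828459045
```

The same exercise works for Feigenbaum’s constant via the logistic map: the bifurcation ratios converge to ≈ 4.669 no matter how you parameterize the iteration.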
Lex Fridman
(01:39:28)
Is that an obvious statement? I don’t even know what it means to change the parameters at the start of the Big Bang.
Michael Levin
(01:39:34)
So physicists do this. They’ll say, “Okay, you know, if we made the ratio between gravitation and the electromagnetic force different, would we have matter? How many dimensions would we have? Would there be inflation? Would there be this or that?” Right? You can imagine playing with it. There are however many unitless constants of physics. These are the kind of knobs on the universe that could, in theory, be different, and then you’d have different physics, you’d have different physical properties.
Lex Fridman
(01:40:05)
You’re saying that’s not gonna change the axiomatic systems that mathematics has?
Michael Levin
(01:40:10)
What I’m not saying is that every alien everywhere is going to have the exact same math that we have. That’s not what I’m claiming, although, maybe. But that’s not what I’m claiming. What I’m saying is, you get more out than you put in. Once you’ve made a choice… And maybe some alien somewhere made a different choice of how they’re going to do their math. But once you’ve made your choice, then you get saddled with a whole bunch of new truths that you discover that you can’t do anything about. They are given to you from somewhere. And you can say they’re random, or you can say, “No, there’s this space of these facts that they’re pulled from. There’s a latent space of options that they come from.”
Michael Levin
(01:40:41)
So when e is exactly 2.718 and so on, there is nothing you can do in physics to change it.
Lex Fridman
(01:40:46)
And you’re saying that space is immutable? It’s-
Michael Levin
(01:40:50)
I’m not saying it’s immutable. So I think Plato may or may not have thought that these forms are eternal and unchanging. That’s one place we differ. I actually think that space has some action to it, maybe even some computation to it.
Lex Fridman
(01:41:01)
But we’re just pointers. Can this-
Michael Levin
(01:41:05)
Well, so I’ll circle back around to that whole thing. So the only thing I was trying to do is blow up the idea that we’re cool with how it works in physics. No problem there. I think that’s a much bigger deal than people normally think it is. I think already there, you have this weird haunting of the physical world by patterns that are not coming from the physical world.
Michael Levin
(01:41:28)
The reason I emphasize this is because now what I’m going to… when I amplify this into biology, I don’t think it sort of jumps as a new thing. I think it’s just a much more… I think what we call biology are systems that exploit the hell out of it. I think physics is so constrained by it, but we call biology those things that make use of those kinds of things and run with it. And so, again, I just think it’s a scaling. I don’t think it’s a brand new thing that happens. I think it’s a scaling, right? So what I’m saying is we already know from physics that there are non-physical patterns, and these are generally patterns of form, which is why I call them low agency, because they’re like fractals that stand still, and they’re like prime number distributions.
Michael Levin
(01:42:10)
Although there’s a mathematician that’s talking in our symposium that’s telling me that actually I’m too chauvinistic even there. That actually, even those things have more oomph than even I gave them credit for, which I love. So what I’m saying is those kinds of static patterns are things that we typically see in physics, but they’re not the full extent of what lives in that space. That space is also home to some patterns that are very high agency. And if we give them a body, if we build a body that they can inhabit, then we get to see different behavioral competencies that the behavior scientists say, “Oh, I know what that looks like.” That’s this kind of behavioral, you know, this kind of mind or that kind of mind.
Michael Levin
(01:42:49)
In a certain sense, I mean, yes, what I’m saying is extremely radical, but it is a very old idea. It’s an old idea of a dualistic worldview, right? Where the mind was not in the physical body and in some way interacted with the physical brain. So, I just want to be clear. I’m not claiming that this is fundamentally a new idea. This has been around forever. However, it’s mostly been discredited, and it’s a very unpopular view nowadays. There are very few people in the, for example, cognitive science community or anywhere else in science that like this kind of view. Primarily because of the interaction problem; already Descartes was getting crap for this when he first tried it out, right?
Michael Levin
(01:43:30)
So the idea was, okay, well, if you have this non-physical mind and then you have this brain that presumably obeys conservation of mass-energy and things like that, how is the one supposed to interact with the other? And there are many other problems there. So what I’m trying to point out is that, first of all, physics already had this problem. You didn’t have to wait till you had biology and cognitive science to ask about it. And what I think is happening, and the way we need to think about this, comes back to my point that I think the mind-brain relationship is basically of the same kind as the math-physics relationship.
Michael Levin
(01:44:06)
The same way that non-physical facts of physics haunt physical objects is basically how I think different kinds of patterns that we call kinds of minds are manifesting through our… through interfaces like brains.
Lex Fridman
(01:44:19)
How do we prove or disprove the existence of that world? ‘Cause it’s a pretty radical one. ‘Cause this physical world, we can poke. It’s there. It feels like all the incredible things like consciousness and cognition and all the goal-oriented behavior and agency all seem to come from this 3D entity.
Michael Levin
(01:44:43)
Yeah, I mean-
Lex Fridman
(01:44:44)
And so, we can test it. We can poke it. We can hit it with a stick.
Michael Levin
(01:44:48)
Yeah, sort of.
Lex Fridman
(01:44:48)
Makes noises.
Michael Levin
(01:44:50)
Sort of. I mean, Descartes got some stuff wrong, I think. But one thing that he did get right is the fact that you actually don’t know what you can poke and what you can’t poke. The only thing you actually know are the contents of your mind, and everything else might be… And, in fact, what we know from Anil Seth and Don Hoffman and various other people, it’s definitely a construct. You might be on drugs, and you might wake up tomorrow and say, “My God, I had the craziest dream of being Lex Fridman.” Amazing.
Lex Fridman
(01:45:16)
It’s a nightmare.
Michael Levin
(01:45:17)
Yeah, well… Yeah, that, that… Who knows? But-
Lex Fridman
(01:45:19)
It’s a ride.
Michael Levin
(01:45:20)
Right? But you see, it’s not clear at all that the physical poking is your primary reality. That’s not clear to me at all.
Lex Fridman
(01:45:30)
I don’t know that that’s an obvious thing a lot of people can show is true. To take the step to Descartes’ “I think, therefore I am,” that that’s the only thing you know for sure and everything else could be an illusion or a dream, that’s already a leap. I think from a basic caveman-science perspective, the repeatable experiment is where most of our intelligence comes from. The reality is exactly as it is. To take a step towards the Donald Hoffman worldview takes a lot of guts and imagination, and a stripping away of the ego and all these kinds of processes.
Michael Levin
(01:46:11)
I think you can get there more easily by synthetic bioengineering in the following sense. Do you feel a lack of X-ray perception? Do you feel blind in the X-ray spectrum or in the ultraviolet? I mean, you don’t. You have absolutely no clue that stuff is there, and all of your reality as you see it is shaped by your evolutionary history. It’s shaped by the cognitive structure that you have, right? There are tons of stuff going on around us right now that we are completely oblivious to. There’s equally all kinds of other stuff which we construct, and this is just modern cognitive science that says that a lot of what we think is going on is a total fabrication constructed by us.
Michael Levin
(01:46:54)
So I don’t think this is a philosophical point. I mean, Descartes got there from a philosophical point; that’s not the leap I’m asking us to make. I’m saying that depending on your embodiment, depending on your interface, and this is increasingly going to be more relevant as we make the first augmented humans that have sensory substitution. You’re going to be walking around. Your friend’s going to be like, “Oh, man. I have this primary perception of the solar weather and the stock market because I got those implants.” “And what do you see?” “Well, I see, you know, the traffic on the internet through the trans-Pacific cable.” We’re all going to be living in somewhat different worlds. That’s the first thing.
Michael Levin
(01:47:29)
The second thing is we’re going to become better attuned to other beings, whether they be cells, tissues. You know, what’s it like to be a cell living in a 20,000-dimensional transcriptional space? To novel beings that have never been here before, that have all kinds of crazy spaces that they live in, and that might be AIs, it might be cyborgs, it might be hybrids, it might be all sorts of things. So this idea that we have a consensus reality here that’s independent of some very specifically chosen aspects of our brain and our interaction, we’re going to have to give that up no matter what to relate to these other beings.
Lex Fridman
(01:48:07)
I think the tension is, and absolutely, and this idea that you’re talking about, I think you’ve termed it cognitive prosthetics, which is different ways of perceiving and interacting with the world. But I guess the question is: is our human experience, the direct human experience, just a slice of the real world, or is it a pointer to a different world? That’s what I’m trying to figure out, because the claim you’re making is a really fascinating one, a compelling one, a pretty strong one: there’s another world into which our brain is an interface, which means you could theoretically map that world systematically.
Michael Levin
(01:48:52)
Yeah, which is exactly what we’re trying to do. I mean, we’re…
Lex Fridman
(01:48:54)
Right, right, but it’s not clear that that world exists.
Michael Levin
(01:48:59)
Yeah, yeah, okay. So, that’s the beautiful part about this, and this is why I’m talking about this now, whereas I wasn’t, you know, about a year ago. Up until a year ago, I was never talking about this because I think this is now actionable. So there’s this diagram that’s called the Map of Mathematics, and they basically try to show how all the different pieces of math link together, and there’s a bunch of different versions of it. So there are two features to this. One is that, what is it a map of? Well, it’s a map of various truths. It’s a map of facts that are thrust on you. You don’t have a choice. Once you’ve picked some axioms, you just, you know, get some surprising facts that are just going to be given to you.
Michael Levin
(01:49:38)
But the other key thing about this is that it has a metric. It’s not just a random heap of facts. They’re all connected to each other in a particular way. They literally make a space, and so when I say it’s a space of patterns, what I mean is it is not just a random bag of patterns such that when you have one pattern, you are no closer to finding any other pattern. I’m saying that there’s some kind of a metric to it so that when you find one, others are closer to it, and then you can get there. So that’s the claim. And obviously, this is… Now, not everybody buys this, and so on. This is one idea. Now, how do we know that this exists? Well, I’ll say a couple of things. If that didn’t exist, what is that a map of?
Michael Levin
(01:50:18)
If there is no space, if you don’t want to call it a space, that’s okay, but you can’t get away from the fact that as a matter of research, there are patterns that relate to each other in a particular way. You know, the final step of calling it a space is minimal. The bigger issue is what the hell is it a map of, then, if it’s not a space? So that’s the first thing. Now, that’s how it plays out, I think, in math and physics. Now, in biology, here’s how we’re going to know if this makes any sense. What we are doing now is trying to map out that space by saying, “Look, we took…
Michael Levin
(01:50:53)
We know that the frog genome maps to one thing, and that’s a frog. It turns out that exact same genome, if you just take the slightest step with the exact same genome but you just take some cells out of their environment, they can also make xenobots with very specific different transcriptomes, very specific behaviors, very specific shapes. It’s not just, “Oh, well, you know, they do whatever,” and that they have very specific behaviors, just like the frog had very specific properties. We can start to map out what all those are, right, and make that…
Michael Levin
(01:51:25)
And basically try to draw the latent space from which those things are pulled, and one of two things is going to happen in the future. So this is, you know, come back in 20 years, and we’ll see how this worked out. One thing that could happen is that we’re going to see, “Oh, yeah, just like the map of mathematics, we made a map of the space.” And we know now that if I want a system that acts like this and this, here’s the kind of body I need to make for it, because those are the patterns that exist. The Anthrobots have four different behaviors, not seven and not one. And so, that’s what I can pull from. These are the options I have.
Lex Fridman
(01:51:59)
Is it possible that there are varying degrees of grandeur to the space that you’re thinking about mapping? Meaning, it could strictly be just the space of biology, or is this a space of, like, minds, which feels like it could encompass a lot more than just biology?
Michael Levin
(01:52:25)
Yeah, except that… and I don’t see how it would be separate because I’m not just talking about an anatomical shape and transcriptional profile. I’m also talking about behavioral competencies. So when we make something and we find out that, okay, it does habituation, sensitization. It does not do Pavlovian conditioning, and it does do delayed gratification, and it doesn’t have language, that is a very specific cognitive profile. That’s a region of that space, and there’s another region that looks different, because I don’t make a sharp distinction between biology and cognition. If you want to explain behaviors, they are drawn from some distribution as well. So I think in 20 years, or however long it’s going to take, one of two things will happen.
Michael Levin
(01:53:08)
Either we and other people who are working on this are going to actually produce a map of that space and say, “Here’s why you’ve gotten systems that work like this and like this and like this, but you’ve never seen any that work like that,” right? Or, we’re going to find out that I’m wrong, and that basically it’s not worth calling it a space because it is so random and so jumbled up that we’ve been able to make zero progress in linking the embodiments that we make to the patterns that come through.
Lex Fridman
(01:53:40)
Just to be clear, I mean, from your blog post on this, from the paper, we’re talking about a space that includes a lot of stuff.
Michael Levin
(01:53:48)
Yeah, yeah.
Lex Fridman
(01:53:48)
It includes human, what is it, meditating? Steve. “Hello, my name is Steve.” AI systems, so all those basic computational systems, objects, biological systems, concepts. It includes everything.
Michael Levin
(01:54:04)
Well, it includes specific patterns that we have given names to.
Lex Fridman
(01:54:08)
Right.
Michael Levin
(01:54:08)
Some of those patterns we’ve named mathematical objects. Some of those patterns we’ve named anatomical outcomes. Some of those patterns we’ve named psychological types.
Lex Fridman
(01:54:17)
So every entry in an encyclopedia, old-school Britannica, is a pointer to this space.
Michael Levin
(01:54:27)
There is a set of things that I feel very strongly about because the research is telling us that’s what’s going on, and then there’s a bunch of other stuff that I see as hypotheses for next steps that guide experiment.
Michael Levin
(01:54:40)
So what I’m about to tell you, I don’t, you know, these are things I don’t actually know. These are just guesses that, you know, you need to make some guesses to make progress. I don’t think that there are specific, or I don’t know, but it doesn’t mean that there are going to be specific Platonic patterns for, “This is the Titanic, and this is the sister of the Titanic, and this is some other kind of boat.” This is not what I’m saying. What I’m saying is, in some way that we absolutely need to work out, when we make minimal interfaces, we get more than we put in. We get behaviors. We get shapes. We get mathematical truths, and we get all kinds of patterns that we did not have to create. We didn’t micromanage them. We didn’t know they were coming.
Michael Levin
(01:55:22)
We didn’t have to put any effort into making them. They come from some distribution that seems to exist that we don’t have to create. And exactly whether that space is sparse or dense, I don’t know. So, for example, if there is, you know, some kind of a Platonic form for the movie, The Godfather, if it’s surrounded by a bunch of crappy versions and then crappier versions still, I have no idea, right? I don’t know if the space is sparse or not. I, you know, I don’t know if it’s finite or infinite. These are all things I don’t know. What I do know is that it seems like physics, and for sure biology and cognition, are beneficiaries of ingressions that are free lunches in some sense. We did not make them.
Michael Levin
(01:56:04)
Calling them emergent does nothing for a research program, okay? That just means you got surprised. I think it’s much better if you make the optimistic assumption that they come from a structured space that we have a prayer in hell of actually exploring. And if in some decades I’m wrong, and we say, “You know what? We tried. It looks like it really is random. Too bad.” Fine.
Lex Fridman
(01:56:24)
Is there a difference? Like, can we one day prove the existence of this world? And is there a difference between it being a really effective model for connecting things, explaining things, versus an actual place where the information about these distributions that we’re sampling actually exists, that we can hit with a stick?
Michael Levin
(01:56:52)
Yeah, you can try to make that distinction.
Lex Fridman
(01:56:55)
Yeah.
Michael Levin
(01:56:56)
But I think modern cognitive neuroscience will tell you that whatever you think this is, at most, it is a very effective model for predicting the future experiences you’re going to have.
Lex Fridman
(01:57:09)
So all of this that we think about as physical reality is just a convenient model.
Michael Levin
(01:57:13)
I mean, that’s not me. That’s predictive processing and active inference; that’s modern neuroscience telling you this. This isn’t anything that I’m particularly coming up with. All I’m saying is the distinction you’re trying to make, which is an old-school, realist kind of view: is it metaphorical or is it real? All we have in science are metaphors, I think, and the only question is how good your metaphors are. And I think as agents living in a world, all we have are models of what we are and what the outside world is. That’s it. And the question is, how good a model is it?
Michael Levin
(01:57:49)
And my claim about this is in some small number of decades, this will either give rise to a very enabling mapping of the space for AI, for bioengineering, for, you know, biology, whatever. Or we are going to find out that it really sucks, because it really is a random grab bag of stuff, and we tried the optimistic research program, it failed, and we’re just going to have to live with surprise. I mean, I doubt that’s going to happen, but it’s a possible outcome.
Lex Fridman
(01:58:16)
But do you think there is some place where the information is stored about these distributions that are being sampled through the thin interfaces? Like an actual place?
Michael Levin
(01:58:28)
Place is weird because it isn’t the same as our physical space-time, okay? I don’t think it’s that. So calling it a place is a little weird.
Lex Fridman
(01:58:35)
No, but like physics, general relativity describes a space-time.
Michael Levin
(01:58:40)
Okay.
Lex Fridman
(01:58:40)
Could other physics theories be able to describe this other space where information is stored that we can apply, maybe different, but in the same spirit, laws about—
Michael Levin
(01:58:52)
Yes
Lex Fridman
(01:58:52)
information?
Michael Levin
(01:58:53)
I definitely think there are going to be systematic laws. I don’t think they’re going to look anything like physics. You can call it physics if you want, but I think it’s going to be so different that that probably just, you know, cracks the word. And whether information is going to survive that, I’m not sure. But I definitely think that there are going to be laws. But I think they’re going to look a lot more like aspects of psychology and cognitive science than they’re going to look like physics. That’s my guess.
Lex Fridman
(01:59:23)
So what does it look like to prove that world exists?
Michael Levin
(01:59:26)
What it looks like is a successful research program that explains how you pull particular patterns when you need them, and why some patterns come and others don’t, and show that they come from an ordered space.
Lex Fridman
(01:59:40)
Across a large number of organisms?
Michael Levin
(01:59:43)
Well, it’s not just organisms. I mean, and you can talk to the machine learning people about how they got to this point, because this is not just me. There are a bunch of different disciplines that are converging on this now simultaneously. You’re going to find, again, just like in mathematics, that from different directions everybody is sort of looking at different things and saying, “Oh my God, this is one underlying structure that seems to inform all of this.” So in physics, in mathematics, in computer science, machine learning, possibly in economics, certainly in biology, possibly in cognitive science, we’re going to find these structures.
Michael Levin
(02:00:20)
It was already obvious in Pythagoras’ time that there are these patterns. The only remaining question is, are they part of an ordered, structured space, and are we up to the task of mapping out the relationship between what we build and the patterns that come through it?
Lex Fridman
(02:00:39)
So from the machine learning perspective, is it then the case that even something as simple as LLMs are sneaking up onto this world, that the representations that they form are sneaking up to it?
Michael Levin
(02:00:54)
When… I’ve given this talk to some audiences, especially in the organicist community. People like the first part where it’s like, okay, now there’s an idea for what the magic, quote unquote, is that’s special about living things and so on. Now, if we could just stop there, we would have dumb machines that just do what the algorithm says, and we have these magical living interfaces that can be the recipient for these ingressions. Cool, right? We can cut up the world in this way. Unfortunately or fortunately, I think, that’s not the case. And I think that even simple minimal computational models are to some extent beneficiaries of these free lunches.
Michael Levin
(02:01:41)
I think that the theories we have, and this goes back to the thin client interface kind of idea. The theories we have of both physics and computation, so theory of algorithms, you know, Turing machines, all that good stuff. Those are all good theories of the front-end interface, and they’re not complete theories of the whole thing. They capture the front end, which is why these things are surprising when they happen. I think that when we see embryos of different species, we are pulling from well-trodden, familiar regions of that space, and we know what to expect: frog, you know, snake, whatever.
Michael Levin
(02:02:21)
When we make cyborgs and hybrids and biobots, we are pulling from new regions of that space that look a little weird and they’re unexpected, but you know, we can still kind of get our mind around them. When we start making AIs, proper AIs, we are now fishing in a region of that space that may never have had bodies before. It may have never been embodied before. And what we get from that is going to be extremely surprising. And the final thing just to mention on that is that because of this, because of the inputs from this Platonic space, some of the really interesting things that artificial constructs can do are not because of the algorithm; they’re in spite of the algorithm. They are filling up the spaces in between.
Michael Levin
(02:03:09)
There’s what the algorithm is forcing you to do, and then there’s the other cool stuff it’s doing which is nowhere in the algorithm. And if that’s true, and we think it’s true even of very minimal systems, then this whole business of language models and AIs in general, watching the language part may be a total red herring, because the language is what we force them to do. The question is, what else are they doing that we are not good at noticing? And this is, I think, an existential step for humanity: to become better at this, because we are not good at recognizing these things now.

Unexpected intelligence of sorting algorithms

Lex Fridman
(02:03:49)
You’ve got to tell me more about this behavior that is observable, that is unrelated to the explicitly stated goal of a particular algorithm. So you looked at a simple algorithm of sorting. Can you explain what was done?
Michael Levin
(02:04:04)
Sure. First, just the goal of this study: there are two things that people generally assume. One is that we have a pretty good intuition about what kind of systems are going to have competencies. So from observing biologicals, we’re not terribly surprised when biology does interesting things. Everybody always says, “Well, it’s biology, you know, of course it does all this cool stuff.” And yeah, but then we have these machines, and the whole point of having machines and algorithms and so on is that they do exactly what you tell them to do, right? And people feel pretty strongly that that’s a binary distinction, and that we can carve up the world that way. So I wanted to do two things.
Michael Levin
(02:04:42)
I wanted to first of all explore that and hopefully break the assumption that we’re good at seeing this, because I think we’re not. And I think it’s extremely important that we understand very soon that we need to get much better at knowing when to expect these things. And the other thing I wanted to do was to find out… you know, mostly people assume that you need a lot of complexity for this. So when somebody says, “Well, the capabilities of my mind are not properly encompassed by the rules of biochemistry,” everybody’s like, “Yeah, that makes sense.” Because, you know, you’re very complex, and okay, your mind does things that you couldn’t…
Michael Levin
(02:05:22)
You didn’t see that coming from the rules of biochemistry, right? Like, we know that. So mostly people think that has to do with complexity, and what I would like to find out as part of understanding what kind of interfaces give rise to what kind of ingressions, is it really about complexity? How much complexity do you actually need? Is there some threshold after which this happens? Is it really specific materials? Is it biologicals? Is it something about evolution? Like, what is it about these kinds of things that allows this surprise, right? Allows this idea that we are more than the sum of our parts. And I had a strong intuition that none of those things are actually required, that this is…
Michael Levin
(02:05:58)
This kind of magic, so to speak, seeps into pretty much everything. And so to look at that, I wanted also to have an example that had significant shock value. Because the thing with biology is there’s always more mechanism to be discovered, right? Like, there’s infinite depth of what the materials are doing. You know, somebody will always say, “Well, there’s a mechanism for that, you just haven’t found it yet.” So I wanted an example that was simple, transparent, so you could see all the stuff. There was nowhere to hide. I wanted it to be deterministic, because I don’t want it to be something around unpredictability or stochasticity, and I want it to be something familiar to people, minimal.
Michael Levin
(02:06:35)
And I wanted to use it as a model system for honing our abilities to take a new system and look at it with fresh eyes, and that’s because these sorting algorithms have been studied for over 60 years. We all think we know what they do and what their properties are. The algorithm itself is just a few lines of code, you know? You can see exactly what’s there, it’s deterministic. So that’s why. That’s why, right? I wanted the most shock value out of a system like that, if we were to find anything, and to use it as an example of taking something minimal and seeing what can be gotten out of it.
Michael Levin
(02:07:08)
So I’ll describe two interesting things about it, and then we have lots of other work coming in the next year about even simpler systems. I mean, it’s actually crazy. So the very first thing is this: the standard sorting… so let’s say bubble sort, right? And all these sorting algorithms, you know, what you’re starting out with is an array of jumbled up digits, okay, so integers. It’s an array of mixed up integers, and what the algorithm is designed to do is to eventually arrange them all into order, and what it does, generally, is compare some pieces of that array and based on which one is larger than which, it swaps them around.
Michael Levin
(02:07:46)
And you can imagine that if you just keep doing that and you just keep comparing and swapping, then eventually you can get all the digits in order. So, the first thing I decided to do, and this is the work of my student Kaining Zhang and then Adam Goldstein on this paper, this goes back to our original discussion about putting a barrier between it and its goals. And the first thing I said, “Okay, how do we put a barrier in?” Well, how about this? The traditional algorithm assumes that the hardware is working correctly. So if you have a seven and then a five, and you tell them to swap, the algorithm runs the lines that swap the five and the seven, and then you go on. You never check: did it swap?
Michael Levin
(02:08:24)
Because you assume that it’s reliable hardware, okay? So what we decided to do was to break one of the digits so that it doesn’t move. When you tell it to move, it doesn’t move. We don’t change the algorithm. That’s really key. We do not put anything new in the algorithm that says, “What do you do if the damn thing didn’t move?” Okay? Just run it exactly the same way. What happens? Turns out, something very interesting happens. It still works, so it still sorts, but it eventually sorts by moving all the stuff around the broken number, okay? And that makes sense, but here’s something interesting. Suppose we plot, at any given moment, the degree of sortedness of the string as a function of time.
Michael Levin
(02:09:09)
If you run the normal algorithm, it’s running and it’s guaranteed to get where it’s going. That’s the, you know, it’s got to sort, and it will always reach the end. But when it encounters one of the broken digits, what happens is, the actual sortedness goes down in order to then recoup and get better order later. What it’s able to do is go against the thing that it’s trying to do.
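For readers who want to try this, here is a minimal Python sketch of the broken-digit setup. It is my own toy, not the code from the paper: I use a selection-sort-style pass (so values can jump past the frozen position), a `sortedness` metric I define as the fraction of adjacent pairs in order, and a `frozen` index whose swaps silently fail, matching the "broken hardware" idea Levin describes.

```python
def sortedness(a):
    """Fraction of adjacent pairs already in non-decreasing order."""
    return sum(a[i] <= a[i + 1] for i in range(len(a) - 1)) / (len(a) - 1)

def sort_with_broken_digit(arr, frozen):
    """Selection-sort passes over a copy of arr, except that any swap
    touching index `frozen` silently fails (simulated broken hardware).
    The algorithm itself is unchanged: it never checks whether a swap
    actually happened. Returns the final array and the sortedness trace."""
    a = list(arr)
    trace = [sortedness(a)]
    for i in range(len(a)):
        m = min(range(i, len(a)), key=lambda j: a[j])  # the algorithm requests a swap
        if a[m] < a[i] and frozen not in (i, m):       # the hardware may refuse it
            a[i], a[m] = a[m], a[i]
        trace.append(sortedness(a))
    return a, trace

final, trace = sort_with_broken_digit([5, 3, 7, 1, 4], frozen=2)
# the 7 at index 2 never moves; the other values sort around it
```

In this tiny example the sortedness trace happens to be monotone; the transient dip Levin describes shows up in the paper's cell-view algorithms. So treat this only as an illustration of the experimental setup: the algorithm is unchanged, one swap silently fails, and the rest of the array still sorts around the broken digit.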
Michael Levin
(02:09:34)
To go around in order to meet its goal later on. Now, if I showed this to a behavior scientist and I didn’t tell him what system was doing, they would say, “Well, we know what this is. This is delayed gratification.” This is the ability of a system to go against its gradient and get what it needs to do. Now, imagine two magnets. Imagine you take two magnets and you put a piece of wood between them, and they’re like this. What the magnet is not going to do is to go around the barrier and get to its goal. The two… they’re not smart enough to go against their gradient. They’re just going to keep doing this. Some animals are smart enough, right? They’ll go around, and the sorting algorithm is smart enough to do that.
Michael Levin
(02:10:13)
But the trick is there are no steps in the algorithm for doing that. You could stare at the algorithm all day long. You would not see that this thing can do delayed gratification. It isn’t there. Now, there are two ways to look at this. On the one hand, the reductionist physics approach, you could say, “Did it follow all the steps in the algorithm?” You say, “Yeah, it did.” Well then, there’s nothing to see here. There’s no magic. This is, you know, it does what it does. It didn’t disobey the algorithm, right? I’m not claiming that this is a miracle. I’m not saying it disobeys the algorithm or that it fails to sort. I’m not saying it’s doing some sort of, you know, crazy quantum thing.
Michael Levin
(02:10:49)
Not saying any of that. What I’m saying is other people might call it emergent. What it has are properties that are not complexity, not unpredictability, not perverse instantiation, as is sometimes discussed in ALife. What it has are unexpected competencies recognizable by behavioral scientists, meaning different types of cognition. Primitive? Well, we wanted primitive, so there you go. It’s a simple competency that you didn’t have to code into the algorithm. That’s very important. You get more than you started with, more than you put in. You didn’t have to do that. You get these surprising behavioral competencies, not just complexity. That’s the first thing. The second thing is also crazy, but it requires a little bit of explanation.
Michael Levin
(02:11:32)
The second thing that we said is, “Okay, in the typical sorting algorithm, you have a single controller, top down.” I’m sort of godlike, looking down at the numbers, and I’m swapping them according to the algorithm. What if… and this goes back to, actually, the title of the paper talks about agential data, self-sorting algorithms. This is back to, like, who’s the pattern and who’s the agent, right?
Michael Levin
(02:11:52)
You say, “What if we give the numbers a little bit of agency?” Here’s what we’re going to do: we’re not going to have any kind of top-down sort. Every single number knows the algorithm, and it’s just going to do whatever the algorithm says. So if I’m a five, I’m just going to execute the algorithm, and the algorithm will try to make sure that to my right is the six and to my left is a four. That’s it. So every digit is… it’s like a distributed… you know, it’s like an ant colony. There is no central planner. Everybody just does their own algorithm, okay? We’re just going to do that.
Michael Levin
(02:12:20)
Once you’ve done that, and by the way, one of the values of doing that is that you can simulate biological processes, because in biology, you know, if I have, like, a frog face and I scramble it with all the different organs, every tissue is going to rearrange itself so that ultimately you have, you know, nose, eyes, head. You’re going to have an order, right? So you can do that. But okay, fine, once you’ve done that, you can do something else cool, something you can’t do with a standard algorithm. You can make a chimeric algorithm. What I mean is not all the cells have to follow the same algorithm. Some of them might follow bubble sort, some of them might follow selection sort.
Michael Levin
(02:12:52)
It’s like in biology what we do when we make chimeras, we make frogolottles. So frogolottles have some frog cells, they have some axolotl cells. What is that going to look like? Does anybody know what a frogolottle is going to look like? It’s actually really interesting that despite all the genetics and the developmental biology, you have the genomes, you have the frog genome, you have the axolotl genome, nobody can tell you what a frogolottle is going to look like, even though you have all of that. This is back to your question about physics and chemistry. Like, yeah, you can know everything there is to know about how the physics and the genetics work, but the decision-making, right? Like, baby axolotls have legs.
Michael Levin
(02:13:27)
Tadpoles don’t have legs. Is a frogolottle going to have legs, right? Can you predict that from understanding the physics of transcription and all of that? Anyway, so we made some… So you see this as, like, an intersection of biology, physics, cognition. So we made chimeric algorithms, and we said, “Okay, half the digits randomly.” We assigned them randomly. So half the digits are randomly doing bubble sort, half the digits are randomly doing, I don’t know, selection sort or something.
Lex Fridman
(02:13:51)
But once you choose bubble sort, that digit is sticking with bubble sort.
Michael Levin
(02:13:55)
It’s sticking. We haven’t done the thing where they can swap between… no. But they’re sticking to it, right? You label them and they’re sticking to it. The first thing we learned is that distributed sorting still works. It’s amazing. You don’t need a central planner; when every number is doing its own thing, it still gets sorted. That’s cool. The second thing we found is that when you make a chimeric algorithm, where the algorithms are not even matching, that works too. The thing still gets sorted. That’s cool. But the most amazing thing is when we looked at something that had nothing to do with sorting, and that is we asked the following question. We defined…
Michael Levin
(02:14:30)
Adam Goldstein actually named this property, and I think it’s well-named. We define the algotype of a single cell. It’s not the genotype, it’s not the phenotype, it’s the algotype. The algotype is simply this: What algorithm are you following? Which one are you? Are you a selection sort or a bubble sort, right? That’s it. There are two algotypes. And we simply ask the following question, “During that process of sorting, what are the odds that whatever algotype you are, the guys next to you are your same type?” It’s not the same as asking how the numbers are sorted because it’s got nothing to do with the numbers. It’s actually just whatever type you are.
Lex Fridman
(02:15:03)
It’s more about clustering than sorting.
Michael Levin
(02:15:05)
Clustering. Well, that’s exactly what we call it. We call it clustering. So now think of what happens, and you can see this on that graph; it’s the red line. You start off, the clustering is at 50%, because, as I told you, we assign the algotypes randomly. So the odds that the guy next to you is the same as you is half, 50%, right? Because there are only two algotypes. In the end, it is also 50%, because the thing that dominates is actually the sorting algorithm, and the sorting algorithm doesn’t care what type you are. You’ve got to get the numbers in order. So by the time you’re done, you’re back to random algotypes, because you have to get the numbers sorted.
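The distributed, chimeric setup and the clustering measurement can be sketched as follows. Everything here is an illustrative assumption of mine rather than the paper's actual code: two toy algotypes (`"A"` does bubble-like adjacent swaps, `"B"` does selection-like swaps with the suffix minimum), asynchronous updates in random order, and clustering defined as the fraction of adjacent cells sharing an algotype.

```python
import random

def clustering(tags):
    """Fraction of adjacent cells that share the same algotype."""
    return sum(tags[i] == tags[i + 1] for i in range(len(tags) - 1)) / (len(tags) - 1)

def sweep(values, tags, rng):
    """Every cell, in random order, applies its own local rule; there is
    no central controller. A cell's algotype tag travels with its value."""
    n = len(values)
    for i in rng.sample(range(n), n):
        if tags[i] == "A":                     # bubble-like: swap with right neighbor
            j = i + 1
            if j < n and values[i] > values[j]:
                values[i], values[j] = values[j], values[i]
                tags[i], tags[j] = tags[j], tags[i]
        else:                                  # selection-like: swap with suffix minimum
            j = min(range(i, n), key=lambda k: values[k])
            if values[j] < values[i]:
                values[i], values[j] = values[j], values[i]
                tags[i], tags[j] = tags[j], tags[i]

def run(n=16, sweeps=200, seed=1):
    rng = random.Random(seed)
    values = rng.sample(range(n), n)           # jumbled-up integers
    tags = [rng.choice("AB") for _ in range(n)]  # 50/50 random algotypes
    trace = [clustering(tags)]
    for _ in range(sweeps):
        sweep(values, tags, rng)
        trace.append(clustering(tags))
    return values, tags, trace
```

The safe claims here are the ones Levin lists first: sorting still completes with no central planner, even when the algotypes are mixed, because both local rules only ever swap out-of-order pairs. Whether this particular toy also reproduces the mid-run clustering bump is something to check by plotting `trace`; clustering starts near 50% because the labels are assigned randomly.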
Michael Levin
(02:15:39)
But in between, in between you get some amount of increased clustering… very significant, because look at the control, it’s in the middle, the pink is in the middle. In between you get significant amounts of clustering, meaning that certain algotypes like to hang out with their buddies for as long as they can. Now, here’s one more thing, and then I’ll kind of give the philosophical significance of this. And so we saw this and I said, “That’s nuts, because the algorithm doesn’t have any provisions for asking what algotype am I, what algotype is my neighbor? If we’re not the same, I’m going to move to be next to…” Like, if you wanted to implement this, you would have to write a whole bunch of extra steps.
Michael Levin
(02:16:17)
There would have to be a whole bunch of observations that you would have to take of your neighbor to see how he’s acting. Then you would infer what algotype he is. Then you would go stand next to the one that seems to have the same algotype as you. You would have to take a bunch of measurements to say, “Wait, is that guy doing bubble sort or is he doing selection sort,” right? Like if you wanted to implement this, it’s a whole bunch of algorithmic steps. None of that exists in our algorithm. You don’t have any way of knowing what algotype you are or what anybody else is. Okay. We didn’t have to pay for that at all. So notice a couple of interesting things. The first interesting thing is that this was not at all obvious from the algorithm itself.
Michael Levin
(02:16:50)
The algorithm doesn’t say anything about algotypes. Second thing is we paid computationally for all the steps needed to have the numbers sorted, right? Because we know, you pay a certain computation cost. The clustering was free. We didn’t pay for that at all. There were no extra steps. So this gets back to your other question of how do we know there’s a platonic space, and this is kind of like one of the craziest things that we’re doing. I actually suspect we can get free compute out of it. I suspect that one of the things that we can do here is use these ingressions in a useful way that doesn’t require you to pay physical costs, right? Because we know every bit has an energy cost that you have to pay. The clustering was free. Nothing extra was done.
Lex Fridman
(02:17:31)
Yeah, just this plot, for people who are just listening: on the X-axis is the percentage of completion of the sorting process, and the Y-axis is the sortedness of the listed numbers, and then also the red line is basically the degree to which they’re clustered. And you’re saying that there’s this unexpected competence of clustering. And I should comment that I’m sure there’s a theoretical computer scientist listening to this saying, “I can model exactly what is happening here and prove that the clustering increases and decreases.” So they could take the specific instantiation of the thing you’ve experimented with and prove certain properties of it.
Lex Fridman
(02:18:19)
But the point is that there’s a more general pattern here of probably other unexpected competencies that you haven’t discovered that emerge from this, that you could get free computation out of this thing.
Michael Levin
(02:18:32)
So this goes back to the very first thing you said about physicists thinking that physics is enough. You’re 100% correct that somebody could look at this and say, “Well, I see exactly why this is happening. We can track through the algorithm.” Yeah, you can. There’s no miracle going on here, right? I mean, the hardware isn’t doing some crazy thing that it wasn’t supposed to do. The point is that despite following the algorithm to do one thing, it is also at the same time doing other things that are neither prescribed nor forbidden by the algorithm. It’s the space between chance and necessity, which is how a lot of people, you know, see these things.
Michael Levin
(02:19:07)
It’s that free space, we don’t really have a good vocabulary for it, where the interesting things happen. And to whatever extent it’s doing other things that are useful, that stuff is computationally without extra cost. Now, there’s one other cool thing about this. And this is the beginning of a lot of thinking that I’ve done about this—this relates to AI and stuff like that: intrinsic motivations.
Michael Levin
(02:19:29)
The sorting of the digits is what we forced it to do. The clustering is an intrinsic motivation. We didn’t ask for it. We didn’t expect it to happen. We didn’t explicitly forbid it, but we didn’t, you know, we didn’t know. This is a great definition of the intrinsic motivation of a system. So when people say, “Oh, that’s a machine, it only does what you programmed it to do. I, as a human, have intrinsic motivation, you know, I’m creative and I have intrinsic motivation. Machines don’t do that.” Even this minimal thing has a minimal kind of intrinsic motivation, which is something that is not forbidden by the algorithm, but isn’t prescribed by the algorithm either.
Michael Levin
(02:20:08)
And I think that’s an important, you know, third thing besides chance and necessity. Something else that’s fun about this is when you think about intrinsic motivations, think about a child. If you make him sit in math class all day, you’re never going to know what other intrinsic motivations he might have, right? Like, who knows what else he might be interested in. So I wanted to ask this question. I want to say, if we let off the pressure on the sorting, what would happen? Now, that’s hard, because if you mess with the algorithm, now it’s no longer the same algorithm, so you don’t want to do that. So we did something that I think was kind of clever. We allowed repeat digits.
Michael Levin
(02:20:48)
So if you allow repeat digits in your array, all the fives can still be after all the fours and before all the sixes, but you can keep them as clustered as you want. So this thing at the end, where they have to get de-clustered in order for the sorting to happen, we thought maybe we could let off the pressure a little bit. If you do that, all you do is allow some extra repeat digits, the clustering gets bigger. It will cluster as much as you let it. The clustering is what it wants to do. The sorting is what we’re forcing it to do. And my only point is, if bubble sort, which has been gone over and gone over so many times, has these kinds of things that we didn’t see coming, what about the AIs, the language models, everything else?
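The repeat-digit point can be made with a tiny enumeration (again my own illustration, not the paper's code): once values repeat, more than one arrangement counts as fully sorted, and that slack is exactly the room in which algotype clustering can survive into the final state.

```python
from itertools import permutations

def sorted_by_value(cells):
    """All distinct orderings of (value, algotype) cells whose values are
    in non-decreasing order. Cells are (value, tag) pairs."""
    out = set()
    for p in permutations(cells):
        values = [v for v, _ in p]
        if values == sorted(values):
            out.add(p)
    return out

# two repeated values, two algotypes
cells = [(1, "A"), (1, "B"), (2, "A"), (2, "B")]
arrangements = sorted_by_value(cells)
# within each run of equal values the tags can go either way, so several
# arrangements are "fully sorted" and the sort leaves the tag pattern free
```

With four distinct values there is exactly one sorted arrangement, so the sort fully pins down the tag pattern; with the duplicated values above there are four, and the de-clustering pressure at the end is relaxed.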
Michael Levin
(02:21:29)
Not because they talk, not because they say that they, you know, have an inner perspective or any of that, but just from the fact that even the most minimal system surprises us with what happens. And frankly, when I see this, tell me if this doesn’t sound like all of our existential story. For the brief time that we’re here, the universe is going to grind us into dust eventually, but until then, we get to do some cool stuff that is intrinsically motivating to us, that is neither forbidden by the laws of physics nor determined by the laws of physics, but eventually, it kind of comes to an end. So I think that aspect of it, right, that there are spaces…
Michael Levin
(02:22:13)
Even in algorithms, there are spaces in which you can do other new things, not just random stuff, not just complex stuff, but things that are easily recognizable to a behavior scientist. You see, that’s the point here. And I think that kind of intrinsic motivation is what’s telling us that this idea that we can carve up the world, we can say, “Okay, look, biology is complex. Cognition, who knows what’s responsible for that, but at least we can take a chunk of the world aside and we can cut it off and we can say, these are the dumb machines.” These are just these algorithms…
Michael Levin
(02:22:46)
Whereas we know the rules of biochemistry don’t explain everything we want to know about how psychology is going to go, but at least the rules of algorithms tell us exactly what the machines are going to do, right? We have some hope that we’ve carved off a little part of the world and everything is nice and simple, and it is exactly what we said it was going to be. I think that failed. I think it was a good try. I think we have good theories of interfaces, but even the simplest algorithms have these kinds of things going on. And so that’s why I think something like this is significant.
Lex Fridman
(02:23:17)
Do you think that there is going to be in all kinds of systems of varying complexity things that the system wants to do and things that it’s forced to do? So, are there these unexpected competencies to be discovered in basically all algorithms and all systems?
Michael Levin
(02:23:38)
That’s my suspicion, and I think that it is extremely important for us as humans to have a research program to learn to recognize and predict. We make things… Never mind something as simple as this. We make, you know, social structures, financial structures, Internet of Things, robotics, AI, so we make all this stuff, and we think that the thing we make it do is the main show. And I think it is very important for us to learn to recognize the kind of stuff that sneaks into the spaces.
Lex Fridman
(02:24:06)
What if, what… It’s a very counterintuitive notion. By the way, I like the word emergent. I hear your criticism, and it’s a really strong one, that emergent is, like, you toss your hands up: “I don’t know the process.” But it’s just a beautiful word, because it is… I guess it’s a synonym for surprising. And I mean, this is very surprising, but just because it’s surprising doesn’t mean there’s not a mechanism that explains it.
Michael Levin
(02:24:34)
Mechanism and explanation are both not all they’re cracked up to be in the sense that, you know, anything you and I do, we could come up with the most beautiful theory. We paint a painting, anything we do. Somebody could say, “Well, I was watching the biochemistry and the Schrodinger equation playing out, and it totally described everything that was happening. You didn’t break even a single law of biochemistry. Nothing to see here, nothing to see, right?” Like, okay, you know, consistent with the low-level rules, you can do the same thing here. You can look at the machine code and say, “Yeah, this thing is just executing machine code.” You can go further and say, “Oh, it’s quantum foam.” It’s just doing the thing that quantum foam does.
Lex Fridman
(02:25:17)
That, that you’re saying that’s what physicists miss.
Michael Levin
(02:25:20)
Well, and I’m not saying they’re unaware of that. I mean, they’re generally a pretty sophisticated bunch. I just think they’ve picked a level and they’re going to discover what is to be seen at that level, which is a lot. And my point is, the stuff that the behavior scientists are interested in shows up at a much lower level than you think.
Lex Fridman
(02:25:39)
How often do you think there’s a misalignment of this kind between the thing that a system is forced to do and what it wants to do? And it’s particularly… I’m thinking about various levels of complexity of AI systems.
Michael Levin
(02:25:53)
So right now, we’ve looked at, like, five other systems. That’s a small N, okay? But just looking at that, I would find it very surprising if bubble sort was able to do this, and then there was some sort of valley of death where nothing showed up, and then living things. Like, I can’t imagine that. And we actually have a system that’s even simpler than this, which is a 1D cellular automaton that’s doing some weird stuff. If these things are to be found in this kind of simple system, I mean, they just have to be showing up in these other more complex AIs and things like that. The only thing we don’t know, but we’re going to find out, is to what extent there is interaction between these.
Michael Levin
(02:26:37)
So I call these things side quests, you know. It’s like in a game, you know, where there’s the main thing you’re supposed to do. And as long as… The thing about this is you have to sort. You have to sort. There’s no miracle. You’re going to sort. But as long as you can do other stuff while you’re sorting, it’s not forbidden. And what we don’t know is, to what extent are the two things linked? So if you do have a system that’s very good at language, are the side quests that it’s capable of, do they have anything to do with language whatsoever? We don’t know the answer to that.
Michael Levin
(02:27:08)
The answer might be no, in which case all of the stuff that we’ve been saying about language models because of what they’re saying, all of that could be a total red herring and not really important, and the really exciting stuff is what we never looked for. Or in complex systems, maybe those things become linked. In biology, they’re linked. In biology, evolution makes sure that the things you’re capable of have a lot to do with what you’ve actually been selected for. In these things, I don’t know, and so we might find out that they actually do give the language some sort of leg up, or we might find that the language is just… You know, that’s not the interesting part.
Lex Fridman
(02:27:43)
Also, it is an interesting question of this intrinsic motivation of clustering. Is this a property of the particular sorting algorithms? Is this a property of all sorting algorithms? Is this a property of all algorithms operating on lists, on numbers? How big is this? So, for example, with LLMs, is it a property of any algorithm that’s trying to model language, or is it very specific to transformers and that’s all to be discovered?
Michael Levin
(02:28:14)
We’re doing all that. We’re testing this stuff in other algorithms. We’re looking for… We’re developing suites of code to look for other properties. We… You know, to some extent, it’s very hard because we don’t know what to look for, but we do have a behaviorist handbook which tells you all kinds of things to look for. The delayed gratification, the problem-solving, like, we have all that. I’ll tell you an N of one of an interesting biological intrinsic motivation, because people… So, in the alignment community and stuff, there’s a lot of discussion about what the intrinsic motivations of AIs are going to be. What are their goals going to be, right?
Michael Levin
(02:28:48)
What are they going to want to do? Just as an N of one observation, anthrobots, the very first thing we checked for… So this is not experiment number 972 out of a thousand things. This is the very first thing we checked for. We put them on a plate of neurons with a big wound through them, a big scratch. First thing they did was heal the wound, okay? So it’s an N of one, but I like the fact that the first intrinsic motivation that we noticed out of that system was benevolent and healing. Like, I thought that was pretty cool. And we don’t know. Maybe the, you know, maybe the next 20 things we find are going to be some sort of, you know, damaging effects. I can’t tell you that. But the first thing that we saw was kind of a positive one. And, I don’t know, that makes me feel better.

Can aging be reversed?

Lex Fridman
(02:29:27)
What was the thing you mentioned with the anthrobots that they can reverse aging?
Michael Levin
(02:29:31)
There’s a procedure called an epigenetic clock, where what you can do is look at particular epigenetic states of cells and compare them to a curve that was built from humans of known age. You can guess what the age is, okay? So we can now… And this is Steve Horvath’s work, and many other people’s: when you take a set of cells, you can guess what their biological age is, okay? So we make the anthrobots from cells that we get from human tracheal epithelium. We collaborated with Steve’s group, the Clock Foundation. We sent them a bunch of cells, and we saw that if you check the anthrobots themselves, they are roughly 20% younger than the cells they come from.
Michael Levin
(02:30:17)
And so that’s amazing, and I can give you a theory of why that happens, although we’re still investigating. And then I could tell you the implications for longevity and things like that. My theory for why it happens, I call age evidencing. And I think that what’s happening here, like with a lot of biology, is that cells have to update their priors based on experience. And so I think that they come from an old body. They have a lot of priors about how many years they’ve been around and all that, but their new environment screams, “I’m an embryo,” basically. There are no other cells around. You’re being bent into a pretzel. They actually express some embryonic genes.
Michael Levin
(02:30:58)
They say, “You’re an embryo.” And I think it’s not enough new evidence to roll them all the way back, but it’s enough to update them to about 28% back.
Lex Fridman
(02:31:08)
Yeah, so it’s similar to, like, when an older adult gives birth to a child. So you’re saying you could just fake it till you make it with age? Like, the environment convinces the cell that it’s young?
Michael Levin
(02:31:27)
Well, first of all, yeah, yes. And that’s my hypothesis.
Lex Fridman
(02:31:32)
That’s nice.
Michael Levin
(02:31:32)
And we have a whole bunch of research being done on this. There was a study where they went into an old age home and they redid the decor, like, ’60s style, from when all these folks were really young. And they found all kinds of improvements in blood chemistry and stuff like that, because, they say, it was sort of mentally taking them back to when… you know, when they were the way they were at that time. I think this is a basal version of that: basically, if you’re finding yourself in an embryonic environment, what’s more plausible than that you’re young? You know, like, I think this is the basic feature of biology, to update priors based on experience.
Lex Fridman
(02:32:10)
Do you think that’s actually actionable for longevity? Like, you can convince cells that they’re younger and thereby extend their lifespan?
Michael Levin
(02:32:21)
This is what we’re trying to do, yeah.
Lex Fridman
(02:32:23)
Could it be as simple as that?
Michael Levin
(02:32:25)
Well, I’m not claiming it’s simple. That is in no way simple. Because, again, all of the regenerative medicine stuff that we do balances on one key thing, which is learning to communicate with the system. We have to… If you’re going to convince that system, you know, so when we make gut tissue into an eye, you have to convince those cells that their priors about “We are gut precursors” are wrong, and you should adopt this new worldview that you’re going to be an eye.
Michael Levin
(02:32:53)
So being convincing and figuring out what kind of messages are convincing to cells, and how to speak the language, and how to make them take on new beliefs, literally, is at the root of all of these future advances in birth defects and regenerative medicine and cancer. That’s what’s going on here. So I’m not saying it’s simple, but I can see the path.

Mind uploading

Lex Fridman
(02:33:17)
Going back to the Platonic space, I have to ask if our brains are indeed thin client interfaces to that space, what does that mean for our mind? Can we upload the mind? Can we copy it? Can we ship it over to other planets? What does that mean for exactly where the mind is stored?
Michael Levin
(02:33:49)
Yeah. Couple of things. So we are now beyond anything that I can say with any certainty. This is total conjecture, okay? Because we don’t know yet. The whole point of this is we actually don’t really understand very well the relationship between the interface and the thing.
Lex Fridman
(02:34:02)
And the thing you’re currently working on is to map-
Michael Levin
(02:34:06)
Correct.
Lex Fridman
(02:34:06)
… this space?
Michael Levin
(02:34:07)
Correct. And we are beginning to map it, but, you know, this is a massive effort. So I’ll give a couple of conjectures here. One is that I strongly suspect that the majority of what we think of as the mind is the pattern in that space, okay? And one of the interesting predictions from that model, which is not a prediction of modern neuroscience, is that there should be cases where there is very minimal brain, and yet normal IQ function. This has been seen clinically. Corina Kofman and I reviewed this in a paper recently, a bunch of cases of humans where there’s very little brain tissue, and they have normal or sometimes above normal intelligence.
Michael Levin
(02:34:54)
Now, things are not simple because that obviously doesn’t happen all the time, right? Most of the time it doesn’t happen. So what’s going on? We don’t understand. But it is a very curious thing that is not a prediction of… I’m not saying it can’t… You know, you can take modern neuroscience and sort of bend it into a pretzel to accommodate it. You can say, “Well, there are these, you know, kind of redundancies and things like this,” right? So you can accommodate it, but it doesn’t predict this. So there are these incredibly curious cases. Now, do I think you can copy it? No, I don’t think you can, because what you’re going to be copying is the interface, the front end, the brain or, you know, whatever.
Michael Levin
(02:35:35)
The action is actually the pattern in the Platonic space. Are you going to be able to copy that? I doubt it. But what you could do is produce another interface through which that particular pattern is going to come through. I think that’s probably possible. I can’t say anything at this point about what that would take, but my guess is that that’s possible.
Lex Fridman
(02:35:55)
Is your guess, your gut is that that process, if possible, is different than copying? Like, it looks more like creating a new thing versus copying.
Michael Levin
(02:36:08)
For the interface. So if you could… So here’s my prediction for a Star Trek transporter: For whatever reason, right now, your brain and body are very attuned and attractive to a particular pattern, which is your set of psychological propensities. If we could rebuild that exact same thing somewhere else, I don’t see any reason why that same pattern wouldn’t come through it the same way it comes through this one. That would be a guess, you know? So I think what you will be copying is the physical interface, and hoping to maintain whatever it is about that interface that was appropriate for that pattern. We don’t really know what that is at this point.
Lex Fridman
(02:36:48)
So when we’ve been talking about mind, in this particular case, it’s the most important to me because I’m a human. Does self come along with that? Does the feeling, like, this mind belongs to me? Does that come along with all minds? The subjective… Not the subjective experience. The subjective experience is important too, consciousness, but like the ownership.
Michael Levin
(02:37:19)
I suspect so, and I think so because of the way we come into being. So one of the things that I should be working on is this paper called “Booting Up the Agent,” and it talks about the very earliest steps of becoming a being in this world. Kind of like you can do this for a computer, right? Before you switch the power on, it belongs to the domain of physics, right? It obeys the laws of physics. You switch the power on, some number of nanoseconds, microseconds, I don’t know, later, you have a thing that, oh look, it’s taking instructions off the stack and doing them, right? So now it’s executing an algorithm.
Michael Levin
(02:37:56)
How did you get from physics to executing an algorithm? Like, what was happening during the boot-up exactly before it starts to run code or whatever, right? And so we can ask that same question in biology. What are the earliest steps of becoming a being?
Lex Fridman
(02:38:12)
Yeah, that’s a fascinating question. Through embryogenesis, at which point are you booting up? Do you have a hope of an answer to that?
Michael Levin
(02:38:21)
Well, I think so. I think so in two ways. The first thing is just physically what happens. So I think that your first task as a being, and again, I don’t think this is a binary thing. I think this is a positive feedback loop that sort of cranks up and up. Your first task as a being coming into this world is to tell a very compelling story to your parts. As a biological being, you are made of agential parts.
Michael Levin
(02:38:52)
Those parts need to be aligned, literally, toward a goal they have no comprehension of. If you’re going to move through anatomical space by means of a bunch of cells which only know physiological and metabolic spaces and things like that, you are going to have to develop a model and bend their action space. You’re going to have to deform their option space with signals, with behavior-shaping cues, with rewards and punishments, whatever you got. Your job as an agent is ownership of your parts, is alignment of your parts. I think that fundamentally is going to give rise to this ability. Now, that also means having a boundary saying, “Okay, this is the stuff I control. This is me.”
Michael Levin
(02:39:35)
This other stuff over here is outside world. I have to figure out… You don’t know where that is, by the way. You have to figure it out. And in embryogenesis, it’s really cool. As a grad student, I used to do this experiment with duck embryos, which develop as a flat blastodisc. You can take a needle and put some scratches into it, and every island you make, for a while until they heal up, thinks it’s the only embryo. There’s nothing else around, so it becomes an embryo. And eventually you get twins and triplets and quadruplets and things like that. But each one of them at the border, you know, they’re joined. Well, where do I end and where does he begin? You have to know where your borders are.
Michael Levin
(02:40:10)
So that action of aligning your parts and coming to be this, this emergence. I mean, I’m even going to say it. This emergence. We just don’t have a good vocabulary for it. This emergence of a model that aligns all the parts is really critical to keep that thing going. There’s something else that’s really interesting, and I was thinking about this in the context of this question of these beautiful ideas. There’s this amazing thing that we found, and this is largely the work of Federico Pagosi in my group. So a couple of years ago, we saw that networks of chemicals can learn. They have five or six different kinds of learning that they can do.
Michael Levin
(02:40:51)
And so what I asked them to do was to calculate the causal emergence of those networks while they’re learning. And what I mean by that is this: If you’re a rat, and you learn to press a lever and get a reward, there’s no individual cell that had both experiences, right? The cells at your paw had touched the lever. The cells in your gut got the delicious reward. No individual cell has both experiences. Who owns that associative memory? Well, the rat. So that means you have to be integrated, right? If you’re going to learn associative memories from different parts, you have to be an integrated agent that can do that. And so we can measure that now with metrics of causal emergence like phi and things like that.
Michael Levin
(02:41:33)
So we know that in order to learn, you have to have significant phi. But I wanted to ask the opposite question: What does learning do for your phi level? Does it do anything for your degree of being an agent that is more than the sum of its parts? So we trained the networks, and sure enough, some of them, not all of them, but some of them, as you train them, their phi goes up, okay? And so basically what we were able to find is that there is this positive feedback loop between every time you learn something… …You become more of an integrated agent. And every time you do that, it becomes easier to learn. And so, it’s this…
Lex Fridman
(02:42:13)
It’s a virtuous cycle.
Michael Levin
(02:42:14)
It’s a virtuous cycle. It’s an asymmetry that points upwards for agency and intelligence. And now back to our platonic space stuff, where does that come from? It doesn’t come from evolution. You don’t need to have any evolution for this. Evolution will optimize the crap out of it, for sure. But you don’t need evolution to have this. It doesn’t come from physics. It comes from the rules of information, the causal information theory, and the behavior of networks. They’re mathematical objects. This is not anything that was given to you by physics or by a history of selection. It’s a free gift from math.
Michael Levin
(02:42:47)
And those two free gifts from math lock together into a spiral that I think causes simultaneously a rise in intelligence and a rise in collective agency. And I think that’s just amazing to think about.
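The loop Levin describes, learning raises integration and integration eases learning, can be sketched as a toy numerical model. Everything here (the update rules, the constants, treating "phi" as a single scalar) is invented for illustration; it is not a real integrated-information calculation, just the shape of the feedback.

```python
# Toy model of the feedback loop: each learning step nudges up an
# "integration" score (a scalar stand-in for phi, NOT a real IIT
# computation), and higher integration raises the next learning rate.
def run_feedback_loop(steps=20, base_lr=0.05, phi0=0.1):
    phi, skill = phi0, 0.0
    history = []
    for _ in range(steps):
        lr = base_lr * (1.0 + phi)     # assumption: integration eases learning
        skill += lr * (1.0 - skill)    # learn toward a target skill of 1.0
        phi += 0.1 * lr * (1.0 - phi)  # assumption: learning raises integration
        history.append((phi, skill))
    return history

history = run_feedback_loop()
# Both quantities only rise: the "asymmetry that points upwards."
```

Because each update is strictly positive while `skill` and `phi` stay below 1, both curves climb monotonically, which is the "virtuous cycle" shape of the argument, not a claim about the real dynamics of chemical networks.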
Lex Fridman
(02:43:04)
Well, that free gift from math, I think, is extremely useful in biology, when you have small entities forming networks, hierarchies that build more and more complex organisms. That’s obvious. I mean, this speaks to embryogenesis, which I think is one of the coolest things in the universe. In fact, you acknowledge its coolness in the “Ingressing Minds” paper, writing, quote, “Most of the big questions of philosophy are raised by the process of embryogenesis. Right in front of our eyes, a single cell multiplies and self-assembles into a complex organism, with order on every scale of organization and adaptive behavior. Each of us takes the same journey across the Cartesian cut, starting off as a quiescent human oocyte, a little blob thought to be well-described by chemistry and physics.”
Lex Fridman
(02:44:00)
“Gradually, it undergoes metamorphosis and eventually becomes a mature human with hopes, dreams, and a self-reflective metacognition that can enable it to describe itself as not a machine, that’s more than its brain, body, and underlying molecular mechanisms,” and so on. What, in all of our discussion, can we say as the clear intuition of how it’s possible to take a leap from a single cell to a fully functioning organism full of dreams and hopes and friends and love and all that kind of stuff? In everything we’ve been talking about, which has been a little bit technical, how do we understand it? ‘Cause that’s one of the most magical things the universe is able to create, perhaps the most magical. From simple physics and chemistry, create this, us two talking about ourselves.
Michael Levin
(02:45:03)
I think we have to keep in mind that physics and chemistry are not real things. They are lenses that we put on the world; they are perspectives where we say, “We are, for the time being, for the duration of this chemistry class or career or whatever, we are going to put aside all the other levels, and we’re going to focus on this one level.” And what is fundamentally going on during that process is an amazing positive feedback loop of collective intelligence for the interface. It’s the physical interface scaling the cognitive light cone it can support, so it’s going from a molecular network… The molecular network can already do things like Pavlovian conditioning. You don’t start with zero.
Michael Levin
(02:45:47)
When you have a simple molecular network, you are already hosting some patterns from the platonic space that look like Pavlovian conditioning. You’ve already got that starting out. That’s just the molecular network. Then you become a cell, and then you’re many cells. And now you’re navigating anatomical morphospace, and you’re hosting all kinds of other patterns. And eventually you… And again, I think this is what all the stuff we’re trying to work out now is about.
Michael Levin
(02:46:14)
There’s a consistent feedback between the ingressions you get and the ability to have new ones, which again I think is this positive feedback cycle, where the more of these free gifts you pull down, they allow you physically to develop in ways where, “Oh, look, now we’re suitable for more and higher ones.” And this continuously goes and goes and goes until you’re able to pull down a full human set of behavioral capacities.
Lex Fridman
(02:46:39)
What is the mechanism of such radical scaling of the cognitive cone? Is it just this kind of… The same thing that you were talking about with the network of chemicals being able to learn?
Michael Levin
(02:46:49)
I’ll give you two mechanisms that we found. But again, just to be clear, these are mechanisms of the physical interface. What we haven’t gotten is a mature theory of how they map onto the space; that’s just beginning. But I will tell you what the physical side of things looks like. The first one has to do with stress propagation. So imagine that you’ve got a bunch of cells, and there’s a cell down here that needs to be up there. Okay. All of these cells are exactly where they need to go, so they’re happy, their stress is low. This cell… Now, let’s imagine stress is basically a physical implementation of the error function.
Michael Levin
(02:47:32)
The amount of stress is basically the delta between where you are now and where you need to be. Not necessarily in physical position, this could be in anatomical space, and physiological space, and in transcriptional space, whatever, right? It’s just the delta from your set point. So, you’re stressed out, but these guys are happy, they’re not moving. You can’t get past them. Now imagine if what you could do, is you could leak your stress, whatever your stress molecule is, and the cool thing is that evolution has actually conserved these highly, so these are all… And we’re studying all of these things, they’re actually highly conserved.
Michael Levin
(02:48:01)
If you start leaking your stress molecules, then all of this stuff around here is starting to get stressed out. When things start to get stressed out, their temperature, not physical temperature, but in the sense of, like, simulated annealing or something, right, goes up; their plasticity goes up. Because they’re feeling stress, they need to relieve that stress, and because all the stress molecules are the same, they don’t know it’s not their stress. They are equally irritated by them as if it was their own stress, so they become a little more plastic, they become ready to kind of, you know, adopt different fates. You get up to where you’re going, and then everybody’s stress can drop.
Michael Levin
(02:48:35)
So notice what can happen by a very simple mechanism: just be leaky for your own stress. My problems become your problems, not because you’re altruistic, not because you actually care about my problems. There’s no mechanism for you to actually care about my problems, but just that simple mechanism means that faraway regions are now responsive to the needs of other regions, such that complex rearrangements and things like that can happen. It’s an alignment of everybody to the same goal through this very dumb, simple stress-sharing thing.
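The leaky-stress mechanism just described can be sketched as a toy 1-D grid of cells. All the numbers, the `leak` rate, and the linear `plasticity` function are invented for illustration; the only point carried over from the explanation is that stress molecules diffuse without provenance, so faraway cells become plastic in response to someone else's error signal.

```python
def diffuse_stress(stress, leak=0.2, steps=10):
    """One misplaced cell leaks stress molecules; grid neighbors pick them up."""
    for _ in range(steps):
        new = stress[:]
        for i, s in enumerate(stress):
            for j in (i - 1, i + 1):        # leak to left and right neighbors
                if 0 <= j < len(stress):
                    new[j] += leak * s / 2  # identical molecules, no provenance
        stress = [min(s, 1.0) for s in new]  # saturate at a max stress level
    return stress

def plasticity(s):
    # plasticity ~ annealing "temperature": rises with the stress a cell
    # feels locally, whether or not that stress is its own
    return s

field = [0.0] * 7
field[3] = 1.0                  # the one cell far from its set point
field = diffuse_stress(field)
# formerly "happy" cells at the edges now feel stress and become plastic
```

After a few steps, even the edge cells carry nonzero stress and hence nonzero plasticity: "my problems become your problems" with no altruism anywhere in the update rule.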
Lex Fridman
(02:49:04)
Via leaky stress.
Michael Levin
(02:49:05)
Leaky stress, right? So there’s another one, there’s another one… …Which I call memory anonymization. So imagine here are two cells, and imagine something happens to this cell, and it sends a signal over to this cell. Traditionally, you send a signal over, this cell receives it. It’s very clear that it came from outside, so this cell can do many things. It could ignore it, it could take on the information, it could reinterpret it, it could do whatever, but it’s very clear that it came from outside. Now imagine the kind of thing that we study, which is called gap junctions. These are electrical synapses that directly link the internal milieus of two cells. If something happens to this cell, it gets…
Michael Levin
(02:49:45)
Let’s say it gets poked, and there’s a calcium spike or something that propagates through the gap junction here. This cell now has the same information, but this cell has no idea, “Wait a minute, was that… Is that my memory or is that his memory?” Because it’s the same, right? It’s the same components, and so what you’re able to do now is to have a mind meld. You can have a mind meld between the two cells where nobody’s quite sure whose memory it is, and when you share memories like this, it’s harder to say that I’m separate from you. If we share the same memories, we are kind of a…
Michael Levin
(02:50:16)
And I don’t mean every single memory, right? So they still have some identity, but to a large extent, they have a little bit of a mind meld, and there are many complexities you can layer on top of it. But what it means is that if you have a large group of cells, they now have joint memories of what happened to us, as opposed to, you know, you know what happened to you and I know what happened to me. And that enables a higher cognitive light cone, because you have greater computational capacity, you have a greater area of concern, of things you want to manage. I don’t just want to manage my tiny, little memory states because I’m getting your memories. Now I know I’ve got to manage this whole thing.
Michael Levin
(02:50:50)
So both of these things end up scaling the size of things you care about, and that is a major ladder for cognition. It scales the degree, you know, the size of concern that you have.
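The "memory anonymization" contrast can be sketched in a few lines: a conventional signal arrives tagged as external, while a gap-junction event lands in both cells with no provenance. The classes, field names, and tags here are all invented for illustration; the only idea taken from the explanation is whether the receiver can tell whose memory it is.

```python
# Sketch contrasting conventional signaling (provenance preserved) with a
# gap junction (provenance erased). Hypothetical classes, not a model
# from the episode.
class Cell:
    def __init__(self, name):
        self.name = name
        self.memories = []

    def receive_signal(self, event, sender):
        # conventional channel: the receiver knows this came from outside,
        # so it can ignore it, reinterpret it, or take it on board
        self.memories.append((event, f"from:{sender}"))

    def gap_junction(self, event, other):
        # the spike propagates directly into the partner's internal milieu:
        # both cells store the same untagged memory, a small "mind meld"
        self.memories.append((event, "mine?"))
        other.memories.append((event, "mine?"))

a, b = Cell("a"), Cell("b")
a.receive_signal("poke", sender="b")   # clearly b's news, not a's
a.gap_junction("calcium-spike", b)     # whose memory is it now?
shared = set(a.memories) & set(b.memories)
```

Only the gap-junction event ends up in `shared`: with identical, untagged records in both cells, the boundary between "what happened to you" and "what happened to me" blurs into "what happened to us."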
Lex Fridman
(02:51:02)
It’d be fascinating to be able to engineer that scaling. Probably applicable to AI systems, how do you rapidly scale the cognitive cone?
Michael Levin
(02:51:12)
Yeah, yeah. We have some collaborators…
Lex Fridman
(02:51:14)
Light cone.
Michael Levin
(02:51:14)
…in a company called Softmax that we’re working with to do some of that stuff. In biology, that’s our cancer therapeutic, which is that what you see in cancer, literally, is cells electrically disconnecting from their neighbors. They were part of a giant memory that was working on making a nice organ. Well, now they can’t remember any of that, now they’re just amoebas, and the rest of the body is just external environment. And what we found is if you then physically reconnect them to the network, you don’t have to fix the DNA, you don’t have to kill the cells with chemo, you can just reconnect them and they go back to… Because they’re now part of this larger collective, they go back to what they were working on. And so, yeah, I think we can intervene at that scale.
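The reconnection idea can be sketched with an invented representation: gap junctions as edges in a graph, and a cell's "area of concern" as the set of cells it can reach through them. This is not the actual therapeutic, just the structural point that severing one edge shrinks a cell's collective to itself, and restoring it rejoins the cell to the larger memory.

```python
# A cell's "area of concern" here = the connected component it belongs to,
# taking gap junctions as edges. Names and structure are illustrative only.
def concern(cell, junctions):
    """Return all cells reachable from `cell` through gap junctions."""
    seen, stack = {cell}, [cell]
    while stack:
        c = stack.pop()
        for a, b in junctions:
            for nxt in ((b,) if a == c else (a,) if b == c else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return seen

tissue = [("c0", "c1"), ("c1", "c2"), ("c2", "c3")]
# "Cancerous" c3: its junction to the network is lost...
disconnected = tissue[:-1]
# ...so its collective is just itself; the rest of the body is environment.
# Reconnect the edge (no DNA fix, no chemo) and it rejoins the collective.
```

With the edge removed, `concern("c3", disconnected)` is just `{"c3"}`, an amoeba; with the full `tissue` list it spans all four cells again.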

Alien intelligence

Lex Fridman
(02:51:57)
Let me ask you more explicitly on the SETI, the Search for Unconventional Terrestrial Intelligence, what do you hope to do there? How do you actually try to find unconventional intelligence all around us? First of all, do you think on Earth there is all kinds of incredible intelligence we haven’t yet discovered?
Michael Levin
(02:52:21)
I mean, guaranteed. We’ve already seen it in our own bodies, and I don’t just mean that we are host to a bunch of microbiomes or any of that. I mean your cells, and we have all kinds of footwork on this, every day they traverse these alien spaces, 20,000-dimensional spaces and other spaces. They solve problems. I think they suffer when they fail to meet their goals, they have stress reduction when they meet their goals. These things are inside of us, they are all around us. I think that we have an incredible degree of mind blindness to all of the very alien kinds of minds around us, and I think that, you know, looking for aliens off the Earth is awesome and whatever.
Michael Levin
(02:53:02)
But if we can’t recognize the ones that are inside our own bodies, what chance do we have to really recognize the ones that are out there?
Lex Fridman
(02:53:12)
Do you think there could be a measure like IQ for mind? What would it be? Not mindedness, but intelligence that’s broadly applicable to the unconventional minds, that’s generalizable to unconventional minds, where we could even quantify like, “Holy shit, this discovery is incredible because it has this IQ”?
Michael Levin
(02:53:41)
Yeah, yes and no. The yes part is that as we have shown, you can take existing IQ metrics… I mean, literally existing kinds of ways that people use to measure intelligence of animals and humans and whatever, and you can apply them to very weird things. If you have the imagination to make the interface, you can do it. And we’ve done it, and we’ve shown creative-
Michael Levin
(02:54:03)
…problem-solving and all this kind of stuff. So, yes. However, we have to be humble about these things and recognize that all of those IQ metrics that we’ve come up with so far were derived from an N of one example of the evolutionary lineage here on Earth, and so we are probably missing a lot of them. So I would say we have plenty to start with. We have so much to start with. We could keep tens of thousands of people busy just testing things now, but we have to be aware that we’re probably missing a lot of important ones.
Lex Fridman
(02:54:32)
What do you think has more interesting intelligent unconventional minds inside our body, the human body, or like we were talking off-mic, the Amazon jungle, like nature, natural systems outside of the sophisticated biological systems we’re aware of?
Michael Levin
(02:54:55)
Yeah. We don’t know because it’s really hard to do experiments on larger systems. It’s a lot easier to go down than it is to go up. But my suspicion is, you know, like the Buddhists say, innumerable sentient beings. I think by the time you get to that degree of infinity, it kind of doesn’t matter to compare. I suspect there are just massive numbers of them.
Lex Fridman
(02:55:16)
Yeah, I think it really matters which kind of systems are amenable to our current methods of scientific inquiry. I mean, I spent quite a lot of hours just staring at ants when I was in the Amazon, and it’s such a mysterious, wonderful collective intelligence. I don’t know how amenable it is to research. I’ve seen some folks try. You can simulate, you can… But I feel like we’re missing a lot.
Michael Levin
(02:55:41)
I’m sure we are. But one of my favorite things about that kind of work: have you seen there’s at least three or four papers showing that ant colonies fall for the same visual illusions that we fall for?
Lex Fridman
(02:55:51)
Okay.
Michael Levin
(02:55:51)
Not the ants, the colonies. So if you-
Lex Fridman
(02:55:53)
The colony together. Yeah.
Michael Levin
(02:55:54)
The colonies. So if you lay out food in particular patterns, they’ll do things like complete lines that aren’t there. And all the same stuff that we fall for, they fall for. So I don’t think it’s hopeless, but I do think that we need a lot of work to develop tools.
Lex Fridman
(02:56:08)
Do you think all the tooling that we develop and the mapping that we’ve been discussing will help us do the study part, finding aliens out there?
Michael Levin
(02:56:17)
I think it’s essential. I think it’s essential. We are so parochial in what we expect to find in terms of life that we are going to be just completely missing a lot of stuff. If we can’t even agree on, never mind definitions of life, but what’s actually important… I wrote a paper recently where I asked, whatever, 65 or so modern working scientists for a definition of life. And we had so many different definitions across so many different dimensions. We had to use AI to make a morphospace out of it. And there was zero consensus about what actually is important, you know? And if we’re not good at recognizing it here, I just don’t see how we’re going to be good at recognizing it somewhere else.
Lex Fridman
(02:57:08)
So given how miraculous life is here on Earth, it’s clear to me that we have so much more work to do. That said, would that be exciting to you if we find life on other planets in the solar system? Like, what would you do with that information, or is that just another life form that we don’t understand?
Michael Levin
(02:57:32)
I would be very excited about it because it would give us some more unconventional embodiments to think about- Right? A data point that’s pretty far away from our existing data points, at least in this solar system. So that would be cool. I’d be very excited about it. But I must admit that my level of, my set point for surprise has been pushed so high at this point that it would have to… you know, it would have to be something really weird to make me shocked. I mean, the things that we see every day is just, yeah.
Lex Fridman
(02:58:04)
I think you’ve mentioned in a few places, like you wrote, that the “Ingressing Minds” paper is not the weirdest thing you plan to write. How weird are you going to get? Maybe a better question is, in which direction of weirdness do you think you will go in your life? In which direction of the weird Overton window are you going to expand?
Michael Levin
(02:58:34)
Yeah, well, the background to this is simply that I’ve had a lot of weird ideas for many, many decades, and my general policy is not to talk about stuff until it becomes actionable. And the amazing thing, I mean, I’m really just kind of shocked, is how far the empirical work has come in my lifetime; I really didn’t think we would get this far. And the knob, I have this mental knob of what percentage of the weird things I think do I actually say in public, right?
Michael Levin
(02:59:08)
And every few years when the empirical work moves forward, I sort of turn that knob a little, right, as we keep going. So I have no idea if we’ll continue to be that fortunate or how long I can keep doing this or whatever. But just to give you a direction for it, it’s going to be in the direction of what kinds of things do we need to take seriously as other beings with which to relate to. So I’ve already pushed it, you know, so like, we knew brainy things, and then we said, “Well, it’s not just brains.” And then we said, “Well, it’s not just…” So, you know, it’s not just in physical space, and it’s not just biologicals, and it’s not just complexity.
Michael Levin
(02:59:52)
There are a couple of other steps to take that I’m pretty sure are there, but we’re going to have to do the actual work to make it actionable before, you know, before we really talk about it. So that direction.
Lex Fridman
(03:00:07)
I think it’s fair to say you’re one of the more unconventional humans, scientists out there. So the interesting question is, what’s your process of idea generation? What’s your process of discovery? You’ve pursued a lot of really interesting, like you said, actionable but out-there ideas that you’ve actually engineered, with Xenobots and Anthrobots, these kinds of things. When you go home tonight, go to the lab, what’s the process? Empty sheet of paper, when you’re thinking through it?
Michael Levin
(03:00:50)
Well, the mental part is, a lot of it, funny enough, much like making Xenobots. You know, we make Xenobots by releasing constraints, right? We don’t do anything to them. We just release them from the constraints they already have, and then we see-
Michael Levin
(03:01:05)
So a lot of it is releasing the constraints that mentally have been placed on us. And part of it is my education has been a little weird because I was a computer scientist first, and only later biology. And so by the time I heard all the biology things that we typically just take on board, I was already a little skeptical and thinking a little differently, but a lot of it comes from releasing constraints. And I very specifically think about, okay, this is what we know. What would things look like if we were wrong? Or what would it look like if I was wrong? What are we missing? What is our worldview specifically not able to see, right? Whatever model I have.
Michael Levin
(03:01:41)
Or another way I often think is I’ll take two things that are considered to be very different things, and I’ll say, “Let’s just imagine those as two points on a continuum.” What does that look like? What does the middle of that continuum look like? What’s the symmetry there? What’s the parameter that I can turn to get from here to there? So those kinds of… I look for symmetries a lot. I’m like, okay, this thing is like that way, in what way? What’s the fewest number of things I would have to move to make this map onto that? Right? So those are, you know, those are kind of mental tools.
Michael Levin
(03:02:12)
The physical process for me is basically, I mean, obviously, I’m fortunate to have a lot of discussions with very smart people. So in my group, you know, I’ve hired some amazing people, so we of course have a lot of discussions, and some stuff comes out of that. My process is, pretty much every morning, I’m outside for sunrise, and I walk around in nature. There’s just not really anything better as inspiration than nature. I do photography, and I find that it’s a good meditative tool because it keeps your hands and brain just busy enough.
Michael Levin
(03:02:50)
You don’t have to think too much, but, you know, you’re sort of twiddling and looking and doing some stuff, and it keeps your brain off of the linear, logical, careful train of thought enough to release it so that you can ideate a little more while your hands are busy.
Lex Fridman
(03:03:04)
So it’s not even the thing you’re photographing, it’s the mechanical process of doing the photography?
Michael Levin
(03:03:09)
And mentally, right? Because I’m not walking around thinking, “Okay, let’s see, so for this experiment we’ve got to, you know, I’ve got to get this piece of equipment and this…” Like, that goes away, and it’s like, okay, what’s the lighting and what am I looking at? And during that time when you’re not thinking about that other stuff, then I say, “Well, yeah, I’ve got to get a notebook,” and I’m like, “Look, this is what we need to do.” So that kind of stuff.
Lex Fridman
(03:03:30)
And the actual idea of writing down stuff, is it a notebook? Is it a computer? Are you super organized in your thinking, or is it just like random words here and there with drawings, and… What is the space of thoughts you have in your head? Is this sort of amorphous, things that aren’t very clear? Are you visualizing stuff? Is there something you can articulate there?
Michael Levin
(03:04:01)
I tend to leave myself a lot of voicemails. Because as I’m walking around, I’m like, “Oh man, this idea,” so I’ll just call my office and leave myself a voicemail for later to transcribe.
Michael Levin
(03:04:12)
I don’t have a good enough memory to remember any of these things, so what I keep is a mind map. So I have an enormous mind map. One piece of it hangs in my lab so that people can see, like, these are the ideas, this is how they link together. Here’s everybody’s project. I’m working on this. How the hell does this attach to everybody else’s so they can track it? The thing that hangs in the lab is about nine feet wide. It’s a silk sheet, and it’s out of date within a couple of weeks of my printing it, because new stuff keeps moving around. And then there’s more that isn’t, you know, isn’t for anybody else’s view. But yeah, I try to be very organized because otherwise I forget.
Michael Levin
(03:04:49)
So everything is in the mind map, things are in manuscripts. I have something like, right now, probably 162 or 163 open manuscripts that are in process of being written at various stages. And when things come up I stick ’em in the right manuscript, in the right place, so that when I’m finally ready to finalize, then I’ll put words around it and whatever. But there’s like outlines of everything. So I try…
Lex Fridman
(03:05:13)
So there’s a wide front of manuscripts of work that’s being done, and it’s continuously pushing towards completion, but you’re not clear what’s going to be finished when and how.
Michael Levin
(03:05:27)
That’s…
Lex Fridman
(03:05:27)
When is the actual…
Michael Levin
(03:05:27)
That’s… I mean, that’s… Yes, but that’s just the theoretical, philosophical stuff. The empirical work that we’re doing in the lab, I mean, those are… We know exactly, you know…
Lex Fridman
(03:05:36)
It’s more focused. There’s a specific set of questions.
Michael Levin
(03:05:37)
Like, we know: this is, you know, anthrobot aging. This is limb regeneration. This is the new cancer paper. This is whatever. Yeah, those things are very linear.
Lex Fridman
(03:05:45)
Where do you think ideas come from when you’re taking a walk that eventually materialize in a voicemail? Where’s that? What … Is that from you? Is that … You know, a lot of really … Some of the most interesting people feel like they’re channeling from somewhere else.
Michael Levin
(03:06:02)
I mean, I hate to bring up the Platonic space again, but I mean, if you talk to any creative, that’s basically what they’ll tell you, right? And certainly that’s been my experience, so I feel like it’s a collaboration. The collaboration is: I need to bust my ass and be prepped, A, to work hard, to be able to recognize the idea when it comes, and B, to actually have an outlet for it so that when it does come, we have a lab and we have people who can help me do it, and then we can actually get it out, right? So that’s my part, is, you know, be up at 4:30 AM doing your thing and be ready for it.
Michael Levin
(03:06:40)
But the other side of the collaboration is that, yeah, when you do that, amazing ideas come, and, you know, to say that it’s me, I don’t think would be right. I, you know, I think it’s definitely coming from other places.

Advice for young people

Lex Fridman
(03:06:53)
What advice would you give to scientists, PhD students, grad students, young scientists that are trying to explore the space of ideas given the very unconventional, non-standard, unique set of ideas you’ve explored in your life and career?
Michael Levin
(03:07:09)
Let’s see. Well, the first and most important thing I’ve learned is not to take too much advice, so I don’t like to give too much advice. But I do have one technique that I’ve found very useful. This isn’t for everybody, but there’s a specific demographic: a lot of unconventional people reach out to me, and I try to respond and help them and so on. This is a technique that I think is useful for some people. How do I describe it? It’s the act of bifurcating your mind: you need to have two different regions. One region is the practical region of impact. In other words, how do I get my idea out into the world so that other people recognize it? What should I say?
Michael Levin
(03:07:57)
What are people hearing? What are they able to hear? How do I pivot it? What parts do I not talk about? Which journal am I going to publish this in? Is it time now? Do I wait two years for this? All the practical stuff that is about how it looks from the outside, right? All the stuff like I can’t say this, or I should say this differently, or this is going to freak people out, or this is odd. You know, this community wants to hear this, so I can pivot it this way. All that practical stuff has to be there; otherwise, you’re not going to be in a position to follow up any of your ideas. You’re not going to have a career. You’re not going to have resources to do anything. But it’s very important that that can’t be the only thing.
Michael Levin
(03:08:30)
You need another part of your mind that ignores all that shit completely, because this other part of your mind has to be pure. It has to be: I don’t care what anybody else thinks about this. I don’t care whether this is publishable or describable. I don’t care if anybody gets it. I don’t care if anybody thinks it’s stupid. This is what I think, and why, and you give it space to sort of grow, right? And if you try to mush them together, I found that impossible, because the practical stuff poisons the other stuff.
Michael Levin
(03:08:58)
If you’re too much on the creative end, you can be an amazing thinker; it’s just that nothing ever materializes. But if you’re very practical, it tends to poison the other stuff, because the more you think about how to present things so that other people get it, the more it constrains and bends how you start to think. And what I tell my students and others is that there are two kinds of advice. There are very practical, specific things, like somebody says, “Well, you forgot this control,” or, “This isn’t the right method,” or, “You shouldn’t be…” That stuff is gold, and you should take it very seriously and use it to improve your craft, right? That’s super important.
Michael Levin
(03:09:37)
But then there’s the meta advice, where people are like, “That’s not a good way to think about it. Don’t work on this. This isn’t…” That stuff is garbage. And even very successful people often give very constraining, terrible advice. Like, one of my reviewers on a paper years ago said… I love this, the Freudian slip. He said he was going to give me constrictive criticism, right? And that’s exactly what he gave me.
Lex Fridman
(03:09:59)
That’s funny.
Michael Levin
(03:09:59)
It was constrictive criticism. I was like, “That’s awesome. That’s a great typo.”
Lex Fridman
(03:10:03)
Well, it’s very true. I mean, that second part, the bifurcation of the mind, is beautifully put. I do think some of the most interesting people I’ve met sometimes fall short on the normie side, on the practical side: having the emotional intelligence of how do I communicate this with people that have a very different worldview, that are more conservative, more conventional, more kind of fit into the norm. You have to have the skill to fit in. And then you have to, again, beautifully put, be able to shut that off when you go off on your own and think. Having both skills is very important.
Lex Fridman
(03:10:46)
I think a lot of radical thinkers think that they’re sacrificing something by learning the skill of fitting in, but if you want to have impact, you have to have that skill. First of all, to be able to build great teams that help bring your ideas to life. And second of all, for your ideas to have impact, and to scale, and to resonate with a large number of people. Those are very different skills. Let me ask a ridiculous question. You already spoke about it, but what to you is one of the most beautiful ideas that you’ve encountered in your various explorations? Maybe not just beautiful, but one that makes you happy to be a scientist, to be able to be a curious human exploring ideas.
Michael Levin
(03:11:39)
I mean, I must say that I sometimes think about these ingressions from this space as a kind of steganography. Steganography is when you hide data and messages within the bits of another pattern that don’t matter, right?
Michael Levin
(03:11:56)
And the rule of steganography is you can’t mess up the main thing, you know? So if it’s a picture of a cat or whatever, you’ve got to keep the cat. But if there are bits that don’t matter, you can kind of stick stuff in there. So I feel like all these ingressions are a kind of universal steganography: these patterns seep into everything, everywhere they can. And they’re kind of shy, meaning that they’re very subtle, not invisible. If you work hard, you can catch them, but they’re hard to see. And the fact that, I think, they also affect “machines” as much as they certainly affect living organisms, I think is incredibly beautiful.
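[As an aside, the classic form of the image steganography described here is least-significant-bit (LSB) embedding: a message is hidden in the lowest bit of each pixel byte, the bits that “don’t matter” visually, so the cat stays a cat. A minimal illustrative sketch; the function names and sample data are invented for the example:

```python
def hide(carrier: list[int], message_bits: list[int]) -> list[int]:
    """Embed message bits into the least-significant bit of each carrier byte."""
    out = list(carrier)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def reveal(carrier: list[int], n: int) -> list[int]:
    """Read the first n hidden bits back out of the carrier."""
    return [byte & 1 for byte in carrier[:n]]

# A toy "picture" of pixel bytes; flipping the lowest bit shifts
# brightness by at most 1/255, which is visually imperceptible.
pixels = [200, 131, 54, 77, 90, 18, 240, 163]
secret = [1, 0, 1, 1, 0, 0, 1, 0]

stego = hide(pixels, secret)
```

Decoding `stego` with `reveal` recovers `secret` exactly, while no pixel changes by more than 1: the main pattern is preserved, and the message rides along in the bits that don’t matter.]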
Michael Levin
(03:12:35)
And I personally am happy to be part of that same spectrum, and the fact that that magic is sort of applicable to everything. A lot of people find that extremely disturbing; some of the hate mail I get is like, “Yeah, we were with you on the majesty of life thing, until you got to the fact that machines get it too.” And now it’s terrible, right? You’re kind of devaluing the majesty of life. I don’t know. The idea that we’re now catching these patterns and we’re able to do meaningful research on the interfaces and all that is, to me, absolutely beautiful. And that it’s all one spectrum, I think, is amazing. I’m enriched by it.

Questions for AGI

Lex Fridman
(03:13:16)
I agree with you. I think it’s incredibly beautiful. I lied, there’s an even more ridiculous question. So it seems like we are progressing towards possibly creating a superintelligent system, an AGI, an ASI. If I had one, gave it to you, put you in the room with it, what would be the first question you’d ask it? Maybe the first set of questions? There are so many topics that you’ve worked on and are interested in. Is there a first question that you’d really want a solid answer to? What would it be?
Michael Levin
(03:13:53)
I mean, the first thing I would ask is: how much should I even be talking to you? For sure. Because it’s not clear to me at all that getting somebody to tell you an answer is, in the long run, optimal. It’s the difference between, when you’re a kid learning math, having an older sibling that’ll just—
Lex Fridman
(03:14:14)
Oh, yeah
Michael Levin
(03:14:14)
—tell you the answers, right? Sometimes it’s just like, “Come on, just give me the answer. Let’s move on with this cancer protocol or whatever.” Great. But in the long run, the process of discovering it yourself… how much of that are we willing to give up? And by getting a final answer, how much have we missed of the stuff we might have found along the way? Now, the thing is, I don’t think it’s correct to say, “Don’t do that at all; take the time and all the blind alleys.” That may not be optimal either, but we don’t know what the optimum is. We don’t know how much we should be stumbling around versus having somebody tell us the answer.
Lex Fridman
(03:14:52)
That’s actually a brilliant question to ask AGI then.
Michael Levin
(03:14:55)
It, I mean— …if it’s really—
Lex Fridman
(03:14:56)
That’s a really—
Michael Levin
(03:14:56)
If it’s really an AGI—
Lex Fridman
(03:14:58)
I mean, that’s a good first question.
Michael Levin
(03:14:58)
Yeah, if it’s really an AGI, I’m like, “Tell me what the balance is. How much should I be talking to you versus stumbling around in the lab and making all my own mistakes?” Is it 70/30? 10/90? I don’t know. So that would be the first—
Lex Fridman
(03:15:09)
And then the AGI will say, “You shouldn’t be talking to me.”
Michael Levin
(03:15:12)
It may well be. It may say, “What the hell did you make me for in the first place? You guys are screwed.” That’s possible. You know, the second question I would ask is: what’s the question I should be asking you that I’m probably not smart enough to ask? That’s the other thing I would say.
Lex Fridman
(03:15:30)
This is really complicated. It’s a really, really strong question. But again, there the answer might be… you wouldn’t understand the question it proposes, most likely. So I think for me, assuming you can ask a lot of questions, I would probably go for questions where I would understand the answer, where it would uncover some small mystery that I’m super curious about. Because if you ask big questions like you did, which are really strong questions, I just feel like I wouldn’t understand the answer. If you ask it, “What question should I be asking you?” it would probably say something like, “What is the shape of the universe?” And you’re like, “What? Why is that important?” Right? You would be very confused by the question it proposes.
Lex Fridman
(03:16:20)
It would just be nice for me to know, straight up, first question: how many living intelligent alien civilizations are in the observable universe? That would just be nice, to know if it’s zero or if it’s a lot. I just want to know that. And then… unfortunately, it might answer. It might be a Michael Levin answer.
Michael Levin
(03:16:45)
That’s what I was about to say. My guess is it’s going to be exactly the problem you said, which is, it’s going to say, “Oh my God. I mean, right in this room you’ve got…” You know, and like, “Oh, man.”
Lex Fridman
(03:16:56)
Yeah, yeah, yeah. Everything you need to know about alien civilizations is right here in this room. In fact, it’s inside your own body.
Michael Levin
(03:17:06)
Just for…
Lex Fridman
(03:17:06)
Thank you-
Michael Levin
(03:17:07)
… for starters
Lex Fridman
(03:17:07)
AGI. Thank you. All right. Michael, one of my favorite scientists, one of my favorite humans. Thank you for everything you do in this world.
Michael Levin
(03:17:16)
Thank you so much.
Lex Fridman
(03:17:17)
Truly, truly fascinating work, and keep going for all of us.
Michael Levin
(03:17:20)
Thank you…
Lex Fridman
(03:17:21)
You’re an inspiration
Michael Levin
(03:17:21)
So much. Thank you so much. Yeah, it’s great to see you. Always a good discussion. Thank you so much, I appreciate it.
Lex Fridman
(03:17:27)
Thank you for this.
Michael Levin
(03:17:27)
Thank you.
Lex Fridman
(03:17:28)
Thanks for listening to this conversation with Michael Levin. To support this podcast, please check out our sponsors in the description where you can also find links to contact me, ask questions, get feedback, and so on. And now, let me leave you with some words from Albert Einstein. “The most beautiful thing we can experience is the mysterious. It is the source of all true art and science.” Thank you for listening. I hope to see you next time.