
Transcript for Jeff Kaplan: World of Warcraft, Overwatch, Blizzard, and Future of Gaming | Lex Fridman Podcast #493

This is a transcript of Lex Fridman Podcast #493 with Jeff Kaplan.
The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human-generated and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Episode highlight
Introduction
Early games: Pac-Man, Zork, Doom, Quake
Writing career
EverQuest obsession

Episode highlight

Jeff Kaplan
(00:00:00)
There’s three types of fun, fun for the player, fun for the designer, and fun for the computer.
Lex Fridman
(00:00:06)
Is it PvP?
Jeff Kaplan
(00:00:07)
It’s all PvP. In fact, Rust is the most PvP thing in all of PvP.
Lex Fridman
(00:00:16)
Well, I don’t know what that means, but-
Jeff Kaplan
(00:00:19)
Rust players know what that means. My whole career and my family are thanks to EverQuest, so I think I won the game. And we’re idiots. We’re reading the forums, and the forums are just flaming us all the time. Like, “There’s lag on this server,” and, “Can’t log into that ser—” And that was our perspective of what was happening. And when I showed up at that show, it… One of the most emotional things in my life. It was nothing but an outpouring of love. I had believed I would never work any place but Blizzard. I loved it. It was a part of who I was and I felt I was a part of it, and I literally thought I would retire from the place. I never thought the day would come, and that was it.
Lex Fridman
(00:01:11)
How painful was it to say goodbye?
Jeff Kaplan
(00:01:14)
It broke me.

Introduction

Lex Fridman
(00:01:16)
Now, meanwhile, as far as the outside world is concerned, you’ve disappeared off the face of the earth, but you were actually working on a game. The following is a conversation with Jeff Kaplan, a legendary game designer of World of Warcraft and Overwatch, which are two of the biggest, most influential games ever made. He is genuinely one of the most amazing human beings I’ve ever met. In the many conversations I was fortunate enough to have with him, including while playing video games, he was always kind, thoughtful, hilarious, and still and forever a legit gamer, through and through. Of course, he’s always quick to celebrate the incredible teams of creative minds he has gotten a chance to work with over the years, and they are truly incredible.
Lex Fridman
(00:02:10)
Blizzard has created some of the greatest games ever made, games that to me personally have brought me thousands of hours of fun, meaning, and happiness, from Warcraft, to StarCraft, to Diablo, WoW, Overwatch and more. So for that, a big thank you to Jeff, to the entire Blizzard team, and to every creative mind in the video game industry, giving their heart and soul to build video game worlds that we fans get a chance to enjoy. This was a super fun, inspiring, whirlwind conversation, pun intended, with one of the most beloved gamers and game designers ever. Full of memes, lulz, wisdom, emotional rollercoaster moments, and of course, Blizzard video game lore.
Lex Fridman
(00:02:59)
Jeff left Blizzard in 2021, and has been secretly working on a new video game called The Legend of California that I got a chance to play with Jeff. It is incredibly beautiful. Set in the 1800s Gold Rush era of California, it’s an open world online multiplayer game, part adventure and action, part survival. Sometimes creating a feeling of loneliness and desperation, and sometimes just awe watching the sun rise over a beautiful landscape. It’s unlike any game that Jeff has ever worked on, and it’s a game that I genuinely can’t wait to play with all of you. You can wishlist it on Steam. Join the alpha later in March, I think, and early access is on the way.
Lex Fridman
(00:03:53)
This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Jeff Kaplan.

Early games: Pac-Man, Zork, Doom, Quake

Lex Fridman
(00:04:07)
You were first a legendary video game player, in particular in EverQuest, before you ever became a legendary video game designer on World of Warcraft and on Overwatch, which I think is a wild journey to go through from gamer to designer. But first, let’s go way back. When did you first fall in love with video games?
Jeff Kaplan
(00:04:32)
I was lucky. I was born in that golden era of coin-op. So, I literally remember the first time seeing Pac-Man. I was with my Uncle Ronnie, and he just kept feeding me quarters. I think he wanted to play, but was too scared to, so, you know, he was just giving his little nephew quarters to play Pac-Man. I remember being at my brother’s graduation in Philadelphia, and they had an Asteroids machine in the lobby. That was one of the first coin-op machines I had played as well. And my brother and I would… we would try to get the high score, and we’d finally get it. But we had to go to bed early ’cause we were little kids. And then in the morning somebody else had, like, beat our high score. And then, you know, I grew up in Southern California in the ’80s. I was born in ’72.
Jeff Kaplan
(00:05:27)
So, you know, I was a kid with that skateboard BMX culture where we’d ride two towns over. We knew all the pizza parlors and liquor stores and arcades, and we just lived in that coin-op phase. That was where the love started. And then you started to see things like Pong. You’d go over to a friend’s house, they’d have Pong, and it was just mind-blowing, like, we’re playing this thing on the TV and it was so much fun. Atari was a big thing at that time as well. But the big one for me was actually Intellivision, because my dad was an executive recruiter, and one of his clients was Mattel. And he said, “Hey, they gave me this thing,” and he would get discounts or free games. And my brothers and I just loved Intellivision. Like, we would just play it endlessly.
Jeff Kaplan
(00:06:27)
And the comparison was always like, “Is this game close to what’s in the arcades?” And it was just such a golden era. And I think the big moment where it really blew open and kind of hit the next level was when the NES came out. Like, NES with Super Mario was kind of gaming at the next level at that point. And I have, like, warm, fuzzy memories even thinking about it to this day. I remember we played Super Mario for weeks, my brothers and I, and then I had a friend come over, and he showed me all the secret stuff-
Jeff Kaplan
(00:07:10)
… in Super Mario that I didn’t know existed at the time. And it’s… it was like suddenly, the world opened up more and games could be more. And then there was, like, a big PC gaming push that hit me. My parents ran their own business. Like I said, my dad was an executive recruiter, and they bought an IBM. And this is, like, when it was DOS before MS-DOS existed. And I was so disappointed, because, like, other kids had the Amiga or the Commodore- … which, you know, they were better for gaming than the IBM at the time. And my mom, she really encouraged my brother and me. She bought Zork. You know, it was just Infocom word games, where your imagination would take you. Like, Zork holds a place in my heart I think few games will ever touch.
Lex Fridman
(00:08:13)
It’s a text-based game?
Jeff Kaplan
(00:08:14)
Text-based game. You know, you just type in, “Go west. Open mailbox.” You know? And… But it’s that power of imagination. It’s why the book is always better than the movie, you know?
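(Editor’s note: the verb-noun command loop Jeff describes is simple enough to sketch. Below is a toy Zork-style parser in Python; the room layout, messages, and `play` function are invented for illustration, and Infocom’s actual engine was far richer.)

```python
# Toy Zork-style command loop: a two-word verb-noun parser over a tiny world.
rooms = {
    "field": {"west": "house", "desc": "A small white house with a mailbox."},
    "house": {"east": "field", "desc": "An open field stretches east."},
}

def play(commands, start="field"):
    """Run a list of typed commands and return the game's responses."""
    here = start
    log = []
    for cmd in commands:
        words = cmd.lower().split()
        if words[0] == "go" and words[1] in rooms[here]:
            here = rooms[here][words[1]]       # move through the named exit
            log.append(rooms[here]["desc"])
        elif words[:2] == ["open", "mailbox"] and here == "house":
            log.append("Opening the mailbox reveals a leaflet.")
        else:
            log.append("You can't do that here.")
    return log

print(play(["go west", "open mailbox"]))
```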
Lex Fridman
(00:08:29)
Yeah. So, you’re starting to see these creations of worlds that you can navigate. You can step into this world and you can lose yourself in that world.
Jeff Kaplan
(00:08:38)
Yeah. You’re transported. You’re living there.
Lex Fridman
(00:08:41)
Was Zork popular?
Jeff Kaplan
(00:08:43)
Zork was insanely popular. And then there was Zork II and Zork III.
Lex Fridman
(00:08:48)
A trilogy. Zork trilogy. I see it. Okay.
Jeff Kaplan
(00:08:51)
And it was weird, like… Sometime in the ’90s, there was this era of what they called CD-ROM games. That’s how they branded them. And they made a return to Zork, but it now had graphics. And somehow, that just shattered everything, because the Zork you knew in your head didn’t exist anymore. Yeah, Zork was fantastic. I think it might be open source now, which I think is fabulous. But I highly recommend Zork. There was also, in those days, a game that worked on our IBM: Ultima-
Jeff Kaplan
(00:09:32)
… which was the Richard Garriott series. And he was Lord British. We knew him as Lord British. He put himself in the game. And you want to talk about world-building. You know, there was Yew Forest and there was all the characters. And the first Ultima I played was Ultima II, ’cause Ultima I was before my time. And that series, it was this RPG group-based PC game, and the worlds were just so rich. Like, you could get on a rocket ship. You’re playing in this fantasy world, fighting demons, and yet somehow you could get on a rocket ship. And then there was just all of this sort of crazy stuff that would happen in games that are based in the world.
Jeff Kaplan
(00:10:22)
Like, there were bouncers in the towns, and merchants, but if you really wanted to, you could try to rob these people, or kill Lord British, you know? That was something that was super hard. And when you’re just a jackass kid, you spend your time endlessly trying to do these things over and over, and Ultima was really a profound kind of experience for me.
Lex Fridman
(00:10:48)
And, of course, that led to Ultima Online, which is a legendary game in itself, perhaps connected to EverQuest. Sort of starting to build these worlds that are massively multiplayer online video games. Can you take me on that journey? Like, as you started to get online, the MMO world. What was influential? What was fun for you?
Jeff Kaplan
(00:11:11)
Well, the big one for me was EverQuest. But like you mentioned, Ultima Online sort of was the predecessor. It came before EverQuest. And it was, like, one of those unfortunate times in my life where I was actually at grad school.
Lex Fridman
(00:11:28)
You were busy.
Jeff Kaplan
(00:11:29)
I was busy, and I missed Ultima Online. Like, I would have had that experience. And when you hear the Ultima Online stories, they’re some of the craziest, funniest… You know, I know somebody who, they learned how to poison in the game, and then they would poison apples, then leave them on the ground, and somebody else would be adventuring, then feed the apple to their horse and kill their horse. Then they’d steal all their stuff and… You know, Ultima Online was kind of… It was the earliest grief-based experiment. Really, like, when you’re treating the humans like ants in the ant farm. That was kind of Ultima Online.
Lex Fridman
(00:12:11)
So-
Jeff Kaplan
(00:12:12)
… my first, like… What defined online gaming for me was Quake and Doom and Duke Nukem. You know, it started with Doom, and you could basically LAN. You could network with your friends or you could connect with a modem and hook up with somebody. And that was, like, mind-blowing… Just seeing another entity in a video game and saying, “That’s a person on the other side of that.”
Jeff Kaplan
(00:12:44)
That was magical, like, that moment happened and that person could be in another room or across town from you. And Quake kind of took it to the next level. Like, that’s where everybody knew what they were doing. The systems were more refined. And this Quake community formed with all of these, you know, great websites, mods. The community was divided into two castes of players: the low ping bastards, the LPBs, and then the rest of us, you know. And I remember rolling into Quake matches, you know, on a dial-up modem with a 300 ping connection, and I thought it was the greatest thing ever. And just, just connecting with people. Like I said, the websites.
Jeff Kaplan
(00:13:40)
To this day, the only gaming website I read… I don’t read any of the news sites anymore, but I read Blue’s News. Someone actually teased me recently. I linked him a story. I’m like, “Oh, did you hear this new thing’s coming out?” And I sent the link, and they’re like, “Dude, this is from Blue’s News. Like, what time machine did you just step out of?” And a guy named Stephen Heaslip… I’m probably pronouncing his name wrong. I apologize. But it was actually through that site that I learned about EverQuest.
Jeff Kaplan
(00:14:16)
They had those programmer .plan updates, the .plan files. And guys like Carmack would … You know, they’d post about what code they were writing or how they had optimized something, or just their personal life. Like, you know, the Ferrari talk would always happen … once they had achieved success. And there was an id programmer named Brian Hook, and he said, “I’m leaving id to go work at Verant,” which became Sony Online, “to work on this game called EverQuest.” And I was like, “How does anybody leave id, the greatest institution in all of gaming ever, to work on any other game?” I’m like, “This guy must be crazy. Or whatever this EverQuest thing is, I need to see it. I need to know what’s going on.” And if he hadn’t have made that post, I never would have checked out EverQuest.
Lex Fridman
(00:15:22)
We’ll talk about EverQuest, but since you mentioned Carmack and Quake, what can we say about the genius of John Carmack? Why was he such an important and influential human in the history of gaming?
Jeff Kaplan
(00:15:34)
Those early geniuses at id … Like, I wouldn’t be sitting here talking to you right now if they hadn’t had the breakthroughs that they had at the time. Gaming engines were evolving, but the level of breakthrough that they achieved with Wolf 3D, that was the first … I remember playing Wolfenstein when it was a 2D game. You’d run around. You’d dress up as a German. You’d throw a grenade.
Jeff Kaplan
(00:16:08)
To see it in 3D… And it’s funny. You look back at the screenshots or videos of it now, and it seems almost childish. Like, “Oh, why were you so excited about that?” But you were transported. There… It was the intimacy of first person. You know, putting the hands in front of you, holding the gun, being transported to Nazi Germany, but you’re the hero fighting the Nazis. And then the evolution. Like, when Doom came out… I’m a huge Army of Darkness fan. Like, one of my favorite movies of all time. And I was like, “This is Army of Darkness, the video game.” You know? Like, “Give me the boom stick. Here we go.” And the graphical advances… But it wasn’t just how the game looked, it was how it played.
Jeff Kaplan
(00:17:01)
The smoothness kept getting better. The responsiveness, the sharpness of the gameplay. You have to credit id in those days, and Carmack and Romero. As somebody who worked on an FPS… that wouldn’t have existed without them. Credit where credit’s due.
Lex Fridman
(00:17:22)
And by the way, we should say, as a gamer, your range is incredible. You are a legit first-person shooter gamer, but you also obviously love the MMO world, the rich, exploratory kind of game. So it’s fascinating. But yeah, in the technology stack that brought something like Quake or Wolfenstein 3D to life, there’s a threshold of realism you pass where you can immerse yourself in that world. I had the same exact experience with Wolfenstein taking the step from 2D to 3D, and there were, like, tears in my eyes. Like, “This is incredible.” My memory of Wolfenstein 3D is that it was ultra-realistic. It’s silly to say now.

Writing career

Lex Fridman
(00:18:14)
It was the feeling like you were there. Yeah, what an incredible age. And some of that, the storytelling, a lot of that is the technology that brings that kind of 3D world to life. It’s incredible. But before we get too far on that tangent, you mentioned grad school. We should mention that you have a master’s degree in creative writing from NYU, and you wanted to be a writer. You told me your main influences were Kerouac, but also Hemingway, Salinger, Bukowski, Orwell. What drew you to storytelling in that medium of writing? What aspect of the human experience were you trying to put down on paper?
Jeff Kaplan
(00:18:59)
Well, it started with being a fan first and being inspired and reading, and it’s not only being transported to a different world or into a different person, but also, you know, the way that stories can touch emotions in you and trigger feelings sometimes you didn’t even know you had. And that was very appealing for me. And the big challenge with it, and I think this is for anybody who creates anything, is putting yourself out there. To some degree, there’s a lot of ego that goes into that moment where you say, “Well, I’ve been reading, you know, 1984 or Green Hills of Stranglethorn, and I think it’s amazing. And now I’m gonna try to write something that somebody is gonna read.”
Jeff Kaplan
(00:20:05)
That’s a giant leap of faith. You know, that’s a moment of putting yourself out there completely, and there’s gotta be some part of that that’s ego. There’s some part of it that’s masochistic. And I think for people who want to create and build stuff, they can’t help but do it. You don’t really have an option. That’s just how you’re wired, and you’re gonna do it anyway. And, you know, I admire people like Dickinson who can just write all the poems and leave them in a drawer to be discovered by somebody else. You know, that’s one way to go about it.
Lex Fridman
(00:20:46)
Yeah, Franz Kafka, you know, a lot of the stories he wrote were never published, and he asked for all of them to be destroyed. And it’s only because his friend ignored his request that we even have many of his stories. To be that kind of… I mean, clearly, there’s some masochism there, some tortured soul. But then there’s also the ego, like you mentioned. I was entertained by this story of James Joyce: when he was a young man, 18, 19, he declared that he was going to be the greatest writer of the 20th century. And he turned out, in the eyes of many, to be one of the greatest writers of the 20th century. But there’s, like, millions of kids just like James Joyce, writers declaring exactly that, who turn out not to be.
Lex Fridman
(00:21:37)
But in some cases, in many cases, maybe most cases, you have to have that ego- … to say, “I’m gonna…” Yeah, right. “I read 1984, and I’m going to write the next 1984.”
Jeff Kaplan
(00:21:50)
Yeah. And I do think ego is a big part of it. It’s one of the many lessons I’ve learned. Hearing your Kafka story is funny, because fast-forwarding to how my writing career ended- … I literally threw away everything, I mean, in a dumpster. I used to keep copious notes, like journals, my writing journals, everything I ever read, every story idea. I probably had 20 volumes of just handwritten notes. And then I also kept personal journals of just, you know, to keep the writing habit up of just, you know, what happened in my day, how I was feeling, all of that. And then either digitally or typed, I had all of my manuscripts, and I threw it all in the dumpster.
Lex Fridman
(00:22:40)
What was that decision? Do you remember that decision? What was that like to just take that part of your life and just put it in a dumpster?
Jeff Kaplan
(00:22:50)
Yeah. It was—I think it was necessary. It was necessary. This is like rationalizing it after the fact, you know, which is easy to do. You know? But at the time, I think I was so broken and so defeated with failure that I needed the moment. It was like throwing in the towel for a boxer, you know? It’s that moment of like, “I’m not gonna win this fight, and you need to move on from it.” And if there was any element of that sitting around, I’d be tempted to try again or bring it out of the drawer 10 years later.
Lex Fridman
(00:23:34)
We should mention that you did give it a real try. You’ve mentioned receiving over 170 rejection letters in one year when submitting your stories. So it was a long chain of rejection. What was that like, the rejection?
Jeff Kaplan
(00:23:54)
It was hard. I had moved from New York. I did the most terrible dumb thing that I knew I was doing at the time. I had a really great group of writer friends from grad school in New York, and I think writing is a very lonely, solitary thing. But weirdly, writers kind of support each other and just, “Who do you give the story to?” You know, you don’t wanna give it to your mom or dad, you know. You kinda wanna give it to somebody who’s gonna really punch you in the nose and tell you what’s wrong with it. And I had left that writing circle to move back to California.
Lex Fridman
(00:24:30)
Did you take a bunch of drugs, take your typewriter, and drive across, uh-
Jeff Kaplan
(00:24:35)
No.
Lex Fridman
(00:24:35)
… across the United States and then wrote a book about it? Or just to take Kerouac as an example. Anyway, sorry. You went just-
Jeff Kaplan
(00:24:44)
I might have been more successful had I done that.
Lex Fridman
(00:24:47)
So sorry. So you went back.
Jeff Kaplan
(00:24:48)
So I moved back to California, and I did it for a girl. And I think within two months of moving back, we were broken up. So… And I knew it when I was standing in my studio apartment when it was empty in New York and I was about to close the door for the last time. I had that, like, you know, little me on the shoulder saying, “Dude, what are you doing?”
Jeff Kaplan
(00:25:13)
“This… You’re making one of those epic life mistakes that is gonna come back to haunt you.” And I ended up alone in California, and I think it was a good three years that I structured my life where I was gonna write for eight hours a day, because it’s that writer’s habit. Like, you have to just force yourself: “This is a job. This isn’t a hobby. Whether I like it or not, rain or shine, sick or healthy, I’m gonna write for eight hours a day.” And I did. I was fortunate. Like I said, my dad had his company and he hired me as a research associate. So I was calling up, generating name lists for a recruiting company, and I would take… Whenever there was East Coast assignments, I would take those so I could start at like 5:00 in the morning.
Jeff Kaplan
(00:26:04)
And I created all this space for me to write, and I just… I had a dog named Jack- … who was… He was a Jack Russell Terrier. And so everybody’s like, “You’re a writer, you named your Jack Russell Terrier Jack.” I’m like, “Because I named him after Jack Kerouac.” “It’s poetic and epic,” and-
Lex Fridman
(00:26:21)
Yeah, of course.
Jeff Kaplan
(00:26:22)
… I just looked like a dumbass, but- … it was just me and this dog. And I was writing, you know, all that time intensely. And this was mid-to-late ’90s, so even though the internet existed, email was very primitive and you had to send a manuscript off, like, printed paper- … to all… Like, I was trying to get short stories published in literary magazines, and you had to include a self-addressed stamped envelope for the return. So it was expensive, too. Like, if you didn’t have money, you were just… There was a cost to every single one of them.
Lex Fridman
(00:27:02)
You had to pay for the rejection letter that you would eventually receive.
Jeff Kaplan
(00:27:07)
Yeah. And the, like, big thing that you were hoping for was that the editor would write you a note with the rejection letter. Like, um-
Lex Fridman
(00:27:17)
Keep going.
Jeff Kaplan
(00:27:18)
Yeah. And you’d, like, cling onto this. It was like, “Oh, Glimmer Train said, you know, ‘showing promise.’” And you just hang onto that for, like, a week, pretending like that was… But it was just soul-crushing. And I really stuck with it. And I became more and more isolated. Part of that was leaving that group of writing friends in New York. I’m prone to introversion anyway; that’s the type of person I am. Breaking up with the girlfriend at the time. I just sort of fell into that world where all I was doing was writing. And it broke me. Like, I went into very deep and heavy depression. I drank too much. I really had a problem with alcohol. And all those things compounded into just deep, deep depression. And I don’t… There wasn’t, like, a magic rejection that broke me.
Jeff Kaplan
(00:28:31)
That would have been epic if like- … someone out there is like, “The dude who…” “I’m the dude who broke Jeff that one day.” But I just had a moment where I said, “This is gonna destroy me.” And… Like, I don’t want to be discouraging to anybody, because I really do believe, like you hear it so much, like, “You have to work for your dreams, never give up.” Like, we’re trained this way. Like, “Never give up.” The universe… Actually, maybe not the universe. A group of editors at literary magazines across the United States was telling me it was time to give up as a writer, like I wasn’t cut out for it. And I stopped.
Lex Fridman
(00:29:24)
Sometimes, you know, closing a door is required for another door to open.
Jeff Kaplan
(00:29:30)
Yes.
Lex Fridman
(00:29:30)
That’s one of the hardest things to do, is to walk away.
Jeff Kaplan
(00:29:33)
Yeah. And I think, rightly so, our parents, our coaches, our mentors train us not to give up. And I think a lot of us take pride in that, “I’m never gonna give up. I’m gonna do this come hell or high water.” And sometimes there’s that reality, especially when you’re now in your mid-20s, where you have that moment of like, “Am I really gonna be this? Like, am I ever gonna sort of find the light here?” And, maybe, and it’s so hard, it’s so hard to have this moment, “Maybe this isn’t my calling in life,” especially when you don’t know what the next calling is gonna be.
Lex Fridman
(00:30:15)
That’s so painful. It’s ’cause you’ve invested so much of yourself, of who you are, of the dreams you’ve had, of this just whole conception of yourself, and you’re watching yourself slide down in terms of becoming isolated, suffering more and more. And then you just have to somehow figure out how to get out of that. And it is true. In that situation, the way to get out is the dumpster. Is to cut it off. Is there advice you can extract from that? There’s a lot of young folks who are in that same situation.
Jeff Kaplan
(00:30:54)
Yeah, this is one of those hindsight things where, you know, having gone through it and ended up okay on the other side, which you don’t know at the time, you know? When you’re a young person in your late teens or early 20s, there’s so much pressure on you. And I really think adults don’t help. You know? Every time you run into the younger nephew or whoever and you start to say things like, “Oh, what’s your major? What are you gonna do with that?” “What do you wanna be?” It’s such bullshit to do to a human being. You know?
Lex Fridman
(00:31:29)
You’re so lost in the world. I mean, most of us are lost our entire lives, but especially in your 20s, you know, like, you’re lost. So the questions like, yeah, “What are you doing? What’s your major? What’s the career?” and so on, that’s not the point, man. I’m trying to move through the world, I’m trying to run through the world to find the thing that sparks my heart, to find the passion, to find what I’m meant to be on this earth for. And that is, I mean, a real hero’s journey of searching as a young person. Like, you know, all the adults, with their wisdom, they’ve often stopped searching. They’ve done the lazy, the comfortable thing. They found their thing.
Lex Fridman
(00:32:18)
And so now they look back, they don’t remember how much suffering and how much uncertainty that young people have to deal with.
Jeff Kaplan
(00:32:28)
There’s confusion, there’s pressure. Like, the pressure we exert on younger people to have it figured out is insane. So the advice that I always give, and it sounds so stupid, like this sounds really trite, but: focus on what you wanna do, not what you wanna be. The pressure that society kind of puts on us is, you know, “Oh, do you wanna be an astronaut? Do you wanna be a firefighter? Do you wanna be a writer? Do you wanna be a game maker?” And I think we get lost in the trappings of, like, a vision of what that role is- … and how to perform as a fake actor in that role. Versus when you’re off the clock and no one’s asking you any questions-
Jeff Kaplan
(00:33:29)
… you know, you’re not at Thanksgiving dinner and your uncle’s pressuring you about, you know, what your future’s gonna be for the rest of your life. When you go home, how do you spend your time? Like, what makes you happy? What brings you fulfillment? And through those paths, you’re gonna find out what you’re gonna become, not what you wanna be. It’s, “What do you wanna do?”
Lex Fridman
(00:33:55)
What do you wanna do? The thing that brings you joy on a moment by moment basis. Yeah. That’s brilliantly put. And speaking of which, that’s where you took the pivot. You switched to video games. How did that happen? Gradually? Suddenly?

EverQuest obsession

Jeff Kaplan
(00:34:13)
Gradually and suddenly. So when I had that fateful moment where I just sort of gave up on writing, I had these days where I’d structure eight-hour chunks of just… this was writing time, you know? I’d sit solitary, typing. All that was gone. And, you know, I could still support myself, which was nice. And then I had this free time and I wasn’t spending it with anybody. I was just alone, me and the dog, Jack. And I just poured it all into EverQuest. You know, it was 1999 when that game came out. And I had a friend, Victor, like, kind of a lifelong friend. One of the few friends I had who played computer games, ’cause there was a stigma to that.
Jeff Kaplan
(00:35:06)
You know? You didn’t walk around telling people you played games. They thought you wasted your time. And my friend, Vic, had bought EverQuest. I’m like, “That’s that game that that guy Brian Hook went to work on. Is it good?” And he’s like, “Yeah, you gotta play it.” And the moment I logged in, I was just transported. It was the world of Norrath. And it wasn’t just the world itself and how it looked, I thought the game was gorgeous, it was the mechanics, you know, that I was this halfling rogue that, you know, had to go out and adventure in the world, and when I killed stuff, I got experience, and I needed better loot to kill more stuff to get more experience. And the sort of draw of progression in the game was amazing.
Jeff Kaplan
(00:36:01)
And I just lived my life of, “I can’t wait ’til the next time I log in.” There was a lot of escapism going on. It wasn’t all healthy. When all was said and done, when I finally quit EverQuest three years later, you could type in the command /played to see how much played time you had. I had, I think it was like 272 played days in three years. So you start to do the math on, like, how much time- … in those three years I was living in that world. It was… it was kind of insane.
Lex Fridman
(00:36:42)
Well, that’s over 6,000 hours- … of gameplay. Wow. So here, going to Perplexity: EverQuest is a long-running 3D fantasy massively multiplayer online role-playing game (MMORPG), set in the world of Norrath, as you were saying. First released in March 1999, it is an online role-playing game where thousands of players create characters, group up, and explore a persistent shared world. It’s widely regarded as one of the foundational MMORPGs, helping define raid content, guild systems, and 3D online worlds. That’s the other component of it. It’s all humans and they group up- … and they raid together in the game.
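(Editor’s note: the “over 6,000 hours” figure follows directly from the /played number Jeff gives. A quick sketch of the arithmetic, purely illustrative, assuming /played reported full 24-hour days:)

```python
# Jeff recalls /played showing about 272 "played days" over three real years.
played_days = 272
total_hours = played_days * 24          # total in-game hours
print(total_hours)                      # 6528, i.e. "over 6,000 hours"

# Averaged over three calendar years, that's roughly six hours per day:
avg_per_day = total_hours / (3 * 365)
print(round(avg_per_day, 1))            # ~6.0
```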
Jeff Kaplan
(00:37:28)
Yep. In the context of EverQuest, raiding is usually around 30 people or more getting together to conquer something that you couldn’t beat otherwise. And to do successful raiding, you usually needed to join what in EverQuest everyone referred to as an Uber Guild. So I had this great pride in my EverQuest journey that I… Most of the time leveling up I was unguilded or I was in like a role-playing guild with rogues only. And it was when I got to Level 50 in EverQuest, which was the top level, I got invited into this guild called Legacy of Steel, which on our server was the top. Every server had a top guild.
Jeff Kaplan
(00:38:18)
And I was on a server called The Nameless Server, and the top guild was Legacy of Steel. And that, the thrill of getting 30 people together to go see if you could beat, you know, Nagafen, who was the fire dragon, or Vox, who was the frost dragon, and needing perfect coordination to pull it off, it was insane how fun. Like, you would literally scream out. You’re alone in your room at home- … but you felt like you were there with these people and you would audibly cheer out when you won, and you’d feel depressed when you lost, and it was a game of high highs and low lows, and it did everything right. It was amazing.
Lex Fridman
(00:39:05)
So that was a big leap for you to go from the proud lone warrior to a member of a guild, an Uber Guild. And then there’s that epic story of you rising to the top to become the leader of this Uber Guild.
Jeff Kaplan
(00:39:22)
The leader… Yeah. So organizing people in an online game like EverQuest is like herding cats- … ’cause, you know, everyone has their own will. Some people are loot motivated, some people want the guild to do well, some people are just lonely and want people to hang out with. And there was also a lot of depression in the EverQuest community. It was something I suffered with, but a lot of people, you know, anytime you’re feeling sad or down, you’re looking for escape. And one of the great things video games brings us is escapism. And escapism isn’t always bad or negative- … but when you sort of abuse it to escape your real life problems, it’s bad and negative.
Lex Fridman
(00:40:18)
So there’s a mix of pain and darkness that pain can manifest as- … all part of this community.
Jeff Kaplan
(00:40:27)
Yeah. And what’s weird is you enter the cycle where being with other people gives you camaraderie and relief and makes you feel like you’re not doing so bad in life, but you can quickly enter a cycle of… But then you’re withdrawing from life and it makes you feel that way more to where you can only get the fix from the game at that point. So it’s… Psychologically, there’s a lot going on there.
Lex Fridman
(00:40:57)
And so you had to work with all of that. You have to get a bunch of people together to do a raid, who are all human beings going through complicated psychological journeys of their own. Some are talking shit, some are just quietly lonely, just looking for some loot.
Jeff Kaplan
(00:41:16)
In the late ’90s, everyone was talking shit. You know what I mean? Like, the gaming culture was just a different thing back then. But it was a great group. It was super fun. It was people from all walks of life. And to coordinate these people, like you just had to repeat everything like 200 times. Like, “Okay, we’re gonna port from North Ro. Everybody get to North Ro.” And then you’d have to repeat that for like six hours-
Jeff Kaplan
(00:41:47)
… to have any chance of like 20% of the people showing up in North Ro. And I sort of like… When I first joined the guild, I was just like the bright-eyed, bushy-tailed one. Like, I was like one of the few rogues in the guild. I just wanted to be helpful. I really admired the people running the guild. Like, we had a great guild leader. And it was just a really fun experience. And, you know, the guild leader one day just disappeared. Like, he quit and he was going through, you know, his own thing, and that’s what would happen in EverQuest. Like, people would just kinda disappear all of a sudden. There wasn’t a, “Hey, in about a month, I’m gonna stop playing because I’m starting this new job.”
Jeff Kaplan
(00:42:34)
People had to quit in some dramatic way, where they just disappear, and basically, our guild leader stopped playing.
Lex Fridman
(00:42:43)
Did you miss them when they disappeared? Like, we, we should say that most of the people, maybe all of them, were anonymous. So you just- …have a username, and you don’t really say who you are in real life.
Jeff Kaplan
(00:42:53)
Absolutely. In those days, there was a great stigma to mentioning your, any real-life info. You just kind of kept it all really close to your chest, and you never knew who was male or female. You kind of assumed everybody was male.
Lex Fridman
(00:43:11)
Safe assumption.
Jeff Kaplan
(00:43:11)
And then it was a surprise if they were actually female. Like my wife, for example, that’s how I met her.
Lex Fridman
(00:43:19)
You met her in EverQuest?
Jeff Kaplan
(00:43:20)
I met her in EverQuest.
Lex Fridman
(00:43:21)
That is a true love story, right there.
Jeff Kaplan
(00:43:23)
Yeah. Yeah. The funny part for me with EverQuest is, you know, you play a game as much as I played EverQuest, and people are like, “You threw years of your life away.” Like, “You can’t win a game like that.” And I’m like, “I don’t know, like, sitting here today, my whole career and my family are thanks to EverQuest, so I think I won the game.”
Lex Fridman
(00:43:49)
Yeah, yeah. You’re like the “Well, actually…” guy.
Jeff Kaplan
(00:43:52)
Well, yeah, exactly.
Lex Fridman
(00:43:54)
Your life will be on the Wikipedia page somewhere that says, “Well, here’s an example of somebody-” “… why video games are awesome.” Yeah, I mean, some of it… I should mention this as an aside. For me and many people I know, yes, it’s hundreds of hours, but some of the happiest hours and days of my life. Like, looking back, it all worked out. During it, you are pretty low, and you think, “What am I doing with my life?” All that kind of stuff.
Lex Fridman
(00:44:23)
But, like, looking back, just the all-nighters you pull playing a particular video game, allowing yourself to really fully be immersed, seeing the sun come up—and by the way, many of those games, for me, were Blizzard games. It’s just an incredible thing that video games have been able to do. I think, you know, it used to be, and still is somewhat the case, that books do that same kind of thing. They- …they take you on a journey. But video games, for a long time, you’re right, they had a stigma. Like, I couldn’t tell people. I felt like I was doing, like, heroin or something. Like, I felt like I was doing this secret, dark thing. It usually is in the dark. There’s just a secretive nature to it, like I’m doing something really dark and shady.
Jeff Kaplan
(00:45:10)
It wasn’t mainstream.
Lex Fridman
(00:45:12)
It wasn’t.
Jeff Kaplan
(00:45:12)
It wasn’t accept-… There was a stigma to it. And one of the weirdest parts of that is, you know, I mentioned, like, you could type in the /played in EverQuest. Well, if you did the /played on how much TV people watch, what would that look like? It would blow- …6,000 hours out of the water, easily. Well, it… 20 years ago it would have, you know? Not today.
Lex Fridman
(00:45:39)
Now it’s the phone, yeah. Yeah. But then, it is hard to say goodbye to that world. Those are also really painful times. How hard was it to say goodbye for you?
Jeff Kaplan
(00:45:52)
To EverQuest? It was really hard. And there were times where you try to quit.
Lex Fridman
(00:45:58)
Oh, you took a break sometimes?
Jeff Kaplan
(00:45:59)
Yeah. You think you’re quitting for good. You’d have those moments of, like, “I’m doing this too much. I need to move on in life. I’m gonna put it down and walk away, and hopefully not come back.” And there were times where you did come back. When I finally did leave EverQuest, it was actually extremely easy, because I was psychologically done with the game at the time. It was not shortly, but not too long after a new expansion had come out. At the time, it was Shadows of Luclin.
Jeff Kaplan
(00:46:34)
Which didn’t speak to me like the expansions before. Like, the one before that was called Scars of Velious, which was an amazing expansion. And I had gotten the job at Blizzard, and I guess I’m just an obsessive person. So all the time and energy that I had put into EverQuest, the second, you know, the second, my first minute started at Blizzard, that was my new obsession.

Getting hired at Blizzard

Lex Fridman
(00:47:04)
So speaking of which, you have to tell the epic origin story of how you got the job at Blizzard. As we said, you were this legendary gamer, and now legendary troll, on EverQuest. Username, Tigole. You gave a lot of edgy feedback to the devs, telling them in now famous… There’s several rants. There’s a famous one where you tell many of them to do a bunch of things, including to pull their heads out of their asses. You were loved and respected because you gave a lot of specific ways that the game could be improved. And that’s an important thing to say. You weren’t just talking shit. You actually really loved and cared for the game, and you gave them, in the language of the time, advice on how to improve their game.
Lex Fridman
(00:47:56)
And it’s funny, because, like, you look back to those messages, it’s inspiring to me. It should be informative and inspiring to a lot of people, because you’re really, legit, full-time talking shit. And now, and you always have been, like, one of the kindest, most loved human beings in the entire gaming industry. Anyway, how did that lead to you getting a job at Blizzard?
Jeff Kaplan
(00:48:21)
So when the first guild leader left, Legacy of Steel, the founder… He, he was a guy named… His online name was Dread. That was his name. He left, and our guild was kind of in this listless spin for a while. And eventually, somebody stepped up and took his position as guild leader, and that person’s name was Ariel-
Jeff Kaplan
(00:48:45)
… who was this blonde wood elf warrior female, who always refused to wear a helmet because she thought their character was so pretty, wanted to show their face all the time. So Ariel was a great guild leader for us, and made me like an assistant guild leader, raid leader, officer type in the guild. And over time, Ariel got busier and busier, and, you know, would send me messages like, “Hey, I’m not gonna be online, you know, tomorrow,” or, “I’m not gonna be online tonight. Can you run the raid? Can you run the raid?” And running the raids was very natural for me. And it was my first experience with leadership in my life, of like how do you motivate people? Like, what does motivation look like? What does discipline look like? How do you inspire people?
Jeff Kaplan
(00:49:43)
When do you force people versus encourage them, you know? So it was a learning experience for me on the fly, and I had the safety net of the real guild leader would log in eventually.
Lex Fridman
(00:49:57)
I should mention, I’m just now reading about, doing a bunch of research on Justinian of the Roman Empire, and he rose from being a peasant to being emperor, so I see a lot of parallels in your life journey, from peasant to emperor, but go ahead, I’m sorry.
Jeff Kaplan
(00:50:14)
At- at least EverQuest guild leader, that’s- that’s as much-
Lex Fridman
(00:50:17)
Uber guilded-
Jeff Kaplan
(00:50:17)
… as I could say.
Lex Fridman
(00:50:18)
Uber guild leader.
Jeff Kaplan
(00:50:18)
Uber guild leader. Best guild on the Nameless server. So as time went on, Ariel became busier and busier, and then one day, they contacted me and we were having this like whisper back and forth, and they said, “You’re gonna have to take over the guild. I’m just too busy.” And then it came out later … Well, let me back up a second. I started fooling around … Like around this time Half-Life 1 had come out, and with both Duke Nukem and Half-Life 1, one of the incredible things that those companies did back in the day was when they shipped the game, they shipped the editor on the CD.
Jeff Kaplan
(00:51:02)
And if you were curious enough, you could like fire up that editor and fool around with it. So I made a- a Duke Nukem level, and you’d send it off to like those UK programming magazines, and you know, you’d get excited because your level was in, you know, some random magazine. And then I started making like Half-Life levels. And Ariel had stepped down as guild leader. I had become guild leader.
Jeff Kaplan
(00:51:30)
And then at one point, Ariel contacts me and says, “Hey, you know, you were talking about those Half-Life levels you made. I want to see those.” I’m like, “Oh, that’s cool.” Like, “I didn’t know you played Half-Life.” Like, “Yeah, maybe we can get a server up and I can play them.” And Ariel tells me, “No, mail them to this address in Irvine.” And- because I- again, to rewind in the time machine for a second, to send something like a Half-Life level over the internet would have- … taken like 12 hours. So you actually like burned it onto a CD and stuck it in the mail. So I put my Half-Life levels, I sent them to Ariel, and he says, “You know, my name’s Rob. I’m a designer at Blizzard Entertainment.”
Jeff Kaplan
(00:52:24)
“I hear you’re in Pasadena ’cause you mentioned it.” You know, I would write about, you know, the Rose Parade and all these things on our website. You know, I kind of … It was blogging before blogging existed, so he knew I lived in Pasadena, and he’s like, “Irvine’s only an hour away. Why don’t you come down, see Blizzard, and you can also meet …” and he names like four people in the guild. And I’m like, “They all work at Blizzard too?” He’s like, “Yeah, we’re all Blizzard.” And it was so weird because during that era, I didn’t have a lot of money. It was not like … Kind of nowadays it feels like everybody plays every game, but you had to be selective. So like I never bought StarCraft or Diablo or Warcraft.
Jeff Kaplan
(00:53:13)
I was much more of the Half-Life, Quake, Quake III guy around that time, and I’d never played a Blizzard game, and I just got invited to go to Blizzard Entertainment.
Lex Fridman
(00:53:25)
Was Blizzard already legendary, you know, with the Warcraft and StarCraft? Is there … Was it building this like great legend of this game company that seemingly doesn’t miss?
Jeff Kaplan
(00:53:37)
It was very much on its way to enshrining itself as being one of the legendary game … Like, it was beloved- … by gamers, but there were still ignorant people like me who hadn’t played, you know, War II or Diablo II or StarCraft, which was shocking to people.
Lex Fridman
(00:53:56)
So you weren’t like freaking out, freaking out?
Jeff Kaplan
(00:53:59)
No, I was freaking out in a different sense. I’m like, “Am I gonna get mugged when I-” Like, “Who are … Is this a scam?” Because you didn’t meet people off the internet. So I drove down there. I ended up … There was Rob Pardo- … who at that time was the lead designer on Warcraft III, and he was Ariel. You know, so okay, it wasn’t a woman after all. It wasn’t this blonde wood elf. You know, I don’t know what you expect at that point.
Jeff Kaplan
(00:54:34)
It was Rob Pardo. To this day, a great friend of mine named Scott Mercer was the enchanter in our EverQuest guild, a guy named Dalomin. There was a guy named Roman Kenney who was like this totally psychotic wizard who played in our guild. And I had lunch with these guys, you know, we just went out to Irvine to like a restaurant. And, you know, forgive me for the misuse of the phrase, but it was like my coming out moment. And we talked about games having that stigma and being embarrassed about who you are and what you like. Like I, up until that point, I would never tell friends, family, like, “I love games. I’m playing this game EverQuest. It’s so cool, we just killed a dragon.” And so you were hiding this part of your identity.
Jeff Kaplan
(00:55:28)
And I’m out to lunch with these guys in Irvine, and we’re talking about dragons and swords and, you know, raid tactics and talking shit on all the people in the guild. And I literally had this moment where I felt like myself for the first time. I just felt so comfortable, and that was an eye-opening moment. And after that, after that lunch happened, he invited me for a couple more lunches down, you know, just… I just saw it as like, “Oh, now, I’m…” You know, I made friends with these people online. Now we know each other in real life, and they happen to work for this game company. And at another one of the lunches, they invite this troll warrior to have lunch with us, whose name in the game was Barfa, the Troll Warrior.
Jeff Kaplan
(00:56:25)
And Barfa wasn’t somebody who played with us all the time, but kind of like Ariel got into the guild kind of on the side. You know, it was one of those like inside invites of like, “Who’s Barfa?” “I don’t know, but Barfa is in the guild now.” And there was at the time, it was a new dungeon called The Hole, and we had never done it before. And we jumped down in this hole, and we’re doing this whole dungeon, and everything goes wrong, as it’s prone to do in EverQuest. And the whole guild escapes except for Barfa, whose troll character’s so big, he can’t jump out of the exit.
Jeff Kaplan
(00:57:13)
And I hand the potion to Barfa, and I say, “Here, use this. It’ll teleport you out.” And I’m a rogue, I can just stealth and get out of the dungeon on my own. So I saved Barfa, not really knowing who Barfa was, and I did it with a very expensive potion. Mm-hmm. Having lunch, Rob introduced me, “This is Allen Adham. He plays Barfa.” Mm-hmm. I’m like, “Oh, Barfa!” And we, you know, he has a… “You saved me in The Hole that time.” Well, it turns out Allen was the founder of Blizzard, and he was the head… He was sort of the head of everything at that time. It was Allen, Mike Morhaime, and Frank Pearce. And what I didn’t realize was what these lunches were. Like, I just loved them because I felt like I was myself.
Jeff Kaplan
(00:58:03)
I felt true happiness being surrounded by these, you know, people who were talkin’ about video games and I felt comfortable around. And one day, Rob logs into EverQuest. He wasn’t playing much at the time, and he said, “I want you tomorrow to check the Blizzard job site.” Mm-hmm. I’m like, “Okay, like, I’ll check the Blizzard job site.” And they had announced World of Warcraft, and posted on the job site—mm-hmm—was the job for an associate quest designer. And the funniest part of it was, I forget if it was a requirement or a plus in the job description, but they’re like, “We really want somebody with a creative writing degree.” Hmm. And I’m like, “You guys set this up for me.” Like, they were just looking…
Jeff Kaplan
(00:58:56)
And it was that hindsight moment of like, actually, these guys were just interviewing me for six months. And they were actually friends, and they were really cool about it too. And I just had the “fuck it” moment when that job opened up. I applied with all my heart, you know? Like, they had a bunch of quest writing on it. And then I went through like a pretty hardcore six-month recruiting process because they never hired designers from outside the company. Traditionally, designers were promoted from within Blizzard. Either they would like transfer out of other disciplines, or they would come from quality assurance, tech support. So hiring somebody off the street was kind of a big deal for them, and they really put me through a grilling.
Jeff Kaplan
(00:59:52)
I met with… It was the first time I met Chris Metzen who is maybe the most inspirational, creative person on the planet. And you instantly… They paired me… They did this interview pairing. There were these two guys. It was Kevin Jordan who was one of the original designers on WoW. Really, he doesn’t get enough credit for his contributions. He was one of the earliest class designers, PvP designers. But he’s a really quiet guy. And they paired him with Chris, and Chris just owns the room, you know? Chris, you could just sit and listen to him. He’s so creative. He’s so passionate. And the way he articulates things, like you just instantly become a fan of Chris when you’re around Chris.
Jeff Kaplan
(01:00:48)
And Chris, Kevin, and I go to lunch at this Italian place that was across the street from Blizzard, and I remember… Chris made a stop to buy cigarettes, you know, on the way to the interview. And then every other word out of Chris’s mouth was like, “Fuck,” and, “Shit.” And I’d come from this whole, like, corporate culture from my dad’s recruiting business, where I’d never imagined somebody would curse in an interview, or stop to buy smokes. And again, it was like, “I’m around my people.” Like, I never smoked, but just, you know, being around people who didn’t care about-
Jeff Kaplan
(01:01:31)
… what the corporate norms were was so inspiring. And then my last interview was with Allen and Rob, and a great programmer named Bob Fitch. Like, I think he’s one of the first five developers at Blizzard. And they took me to an ARCO station that had a Jack in the Box. You know, how, like- … sometimes they’ll combo? It was like ARCO Jack in the Box. And that was my final interview at Blizzard, was at the ARCO Jack in the Box. And I remember thinking to myself, “These guys just brought me to a Jack in the Box that’s in an ARCO station. I need to work here.” Like, this is… “These are my people.” “This is where I belong.” Like, it was the greatest thing ever. And so, yeah, that’s my crazy journey to Blizzard.

Lowest point in Jeff’s life

Lex Fridman
(01:02:28)
Started at the bottom and ended up at the top in a Jack in the Box. Can you speak to… ’Cause you mentioned some of the low points in the… in depression. Through that journey, how did you find your way out? So, can you just… A lot of people are sitting in those low points right now listening to this. What kind of wisdom can you draw about finding your way out, finding your people?
Jeff Kaplan
(01:02:55)
There were a lot of really low points. I started drinking a lot, and alcohol was something that I really wrestled with until my early 30s. And one of the things I’m most proud of today is sobriety and having been sober for such a long time now. And I remember I would like buy a bottle of Old Grand-Dad and- … like, drink the whole thing by myself, and then watch the Oscars. I remember I was … Of all things, I’m watching the Oscars, which is just such a fake, bullshit environment.
Jeff Kaplan
(01:03:35)
But I was like… You know, I was really drunk and all those people seemed so together and successful and polished, and I just… It made me… It was that contrast that made me feel like such a failure. And it all seems so stupid and unimportant to me now. Um, I became… You know, I got in that constant struggle of try not to drink, but drink to make it feel better. I was lucky my parents were very supportive of me, even in my 20s, even after I, you know, quote-unquote left the house. I went into therapy and that was very helpful. You know, extremely helpful. And one thing I learned is that you have to find the right therapist for you.
Jeff Kaplan
(01:04:31)
It’s not just checking a checkbox of, “I went to therapy.” It’s about finding somebody who sort of helps you get out of whatever rut you’re in, in a way that’s healthy for you. And I tried antidepressants, but I hated… I just hated taking pills and feeling like something was in me, and making me feel different. I- I never responded to it. And then the hardest thing, you know, which I’ve never mentioned to anyone, and is- is hard for me to talk about, but eventually I went through ECT, which is electroconvulsive therapy, shock therapy. And that broke me out. And I would never endorse that as a miracle. That was… I was at such a low point that people were very worried about me and my wellbeing-
Jeff Kaplan
(01:05:37)
… and what was gonna happen, and that was sort of an extreme pull-the-rip-cord, like there’s-nothing-else-to-lose moment. And I think that was the difference maker. That, and starting at Blizzard.
Lex Fridman
(01:05:56)
To find… I mean, there is a- there is a deep loneliness there when before you met those guys at lunch, you’re alone, like in a really deep fundamental way. Like, in the way you weren’t in New York with the writing- with the writer’s group, right? And so tha- that must’ve been an incredible experience just to see the guild.
Jeff Kaplan
(01:06:16)
Yes. It was everything I… As such an introvert, you think that there are extroverts and introverts, and introverts don’t need anybody, but weirdly, I think introverts almost need people more. And we don’t always know how to engage-
Jeff Kaplan
(01:06:38)
… in the right, healthy ways, and how to find people and how to connect with people. And it was- it was great. One… The thing that had attracted me to creative writing was the solitude of it, and the fact that you didn’t have to collaborate, and you could just write what you wanted to write and it was all you. You would succeed on your own or you would fail on your own, and that was very attractive to me. And the thought of creative collaboration was actually off-putting. I’d spent all four years of undergrad interning at Universal Pictures, ’cause I thought I wanted to be in film, and it was such an unhealthy creative collaboration in the film industry.
Jeff Kaplan
(01:07:27)
It’s a very, you know, I look up unhealthily to the film industry and admire it and, you know, grew up with all these legends who had come from there. But it’s like a caste system. And I was on the bottom of the caste system as an intern, and I was seeing how the other people who were low caste in the film industry were treated, and it was just horrible, you know. But games was different. Games was very flat. It didn’t matter if you were the CEO or the boss, like, the way Mike and Allen carried themselves with, you know, me, who was an associate game designer, you felt like an equal. And I think it… Not just the camaraderie, but the part that shouldn’t be overlooked is the work itself and the work ethic. That’s what really pulled me out.

One of Us

Lex Fridman
(01:08:33)
Hard work on a thing you love. I have to, if you may allow me, read the prophetic “one of us” quote, “one of us” post you made on April 18th, 2002. Because in some deep sense, you, I think, remained one of us. The… I apologize to bring up Justinian the emperor, but remained a kind of peasant gamer, a true, true gamer, who happens to also be designing the games. And so this post kind of speaks to that. It’s fascinating to read, because that was at the very beginning, right? You didn’t know anything. You didn’t know the games you would end up creating. Title of the post, “If you want something done right.” He wrote, “This week, I accepted a position as associate game designer with Blizzard Entertainment.
Lex Fridman
(01:09:26)
Specifically, I will be designing quests for World of Warcraft, Blizzard’s MMORPG based on the popular Warcraft series. In addition to my duties as quest designer, I will also be expected to contribute to helping design the end game content for World of Warcraft. The reason I’m sharing this information, besides the fact that I have a masochistic love of reading rants and flames about myself, is because I know that the fans of this site are hardcore MMORPG players. The readers of the site have also come to know my personal opinions on what constitutes a fun gaming experience versus what feels like a complete waste of time or poorly designed encounter.” Wow, you’re very eloquent in this post and without too much shit talking.
Lex Fridman
(01:10:11)
“You’ve all read my opinions on such things as tedious key camps, obvious time sinks devoid of any story or linear narrative, quests which reward the lucky over the skilled, and quest rewards which are out of sync with the amount of time and effort required to complete them. I hope that my association with World of Warcraft will serve to comfort MMORPG fans that one of us is on the other side of the fence, looking out for the interest of the player.” And you go on to describe some of the high hopes you have for World of Warcraft, which is really fun to read because you don’t realize-
Jeff Kaplan
(01:10:50)
Now-
Lex Fridman
(01:10:50)
… it’s gonna be, like, one of the greatest games of all time played by millions of human beings, just where those millions of human beings are playing for hundreds of hours, thousands of hours. It’s crazy. It’s funny that this… one of us is writing at the dawn of a new age. The final paragraph is, “So with all that is going on with me, you’ll have to excuse any lapse in updates to the site here. I will try my hardest to give you slackers something to read while you should be working. But in the meantime, there’s a whole world of NPCs. They need to learn the words kaksagur and mo’fucker, and the like. Although something tells me I’m already in trouble with the boss.” “One of us,” Jeff, “one of us.” That was a beautiful, beautiful post. Did you in fact get in trouble with the boss?
Jeff Kaplan
(01:11:44)
No. No. My boss was Allen. And Allen was very understanding and he… they kind of knew what they were getting into- … when they hired me. And that post actually embarrasses me when I hear it now. There’s so much ego in it- … and I think that’s… it’s got that 20 year old- … you know, “I don’t know what I don’t know.”
Lex Fridman
(01:12:11)
“I know exactly how to fix this video game and all video games and-” But there’s brilliance behind that. There’s a passion behind that. Like, when you’re a gamer and you really put in the hours in a game like EverQuest, you understand what makes for a compelling experience. You don’t, at that time, understand how much hard work is required to create that experience and how much uncertainty there is, how difficult it is, how many trade-offs there are. How your designs, when they actually are brought to the world and are experienced by thousands of people, millions of people, they are different from the vision you had for it. So all those elements you don’t know, but you have to have that ego in the beginning, right?

Early Blizzard culture

Lex Fridman
(01:12:51)
Do you even have the guts to try? Do you have the guts to put in all that work? So what were the… what was it like? What were the vibes of early Blizzard like? They’ve… at this point, Warcraft I and II, Warcraft III is in production. StarCraft. These are legendary games. I don’t… I spent probably over 1,000 hours in these games combined. I played Warcraft I, II, III. I played StarCraft I and II. I played WoW, of course. Diablo I, II, III, IV. I played Diablo II with “Stay a while and listen,” with Deckard Cain.
Jeff Kaplan
(01:13:27)
“Stay a while and listen.”
Lex Fridman
(01:13:29)
I mean, some of these characters, some of these experiences just, they’ll stay with me forever. Anyway, so big thank you to those early Blizzard folks. What was it like? What was the team like? What were the developers like? What was the vibes like in those early days?
Jeff Kaplan
(01:13:46)
It was the dream. When I showed up at Blizzard on my first day, the office was on the University of California Irvine campus at the time. They have this research and development park where, if you’re like a tech company, you can get office space there, and Blizzard took up… When I joined, three-fourths of the building was Blizzard, and there were… There was like a building right next to it that had like Cisco and, you know, it was like all kinds of techy places.
Jeff Kaplan
(01:14:20)
And it was so funny because you drive up and, like, everything was very serious and corporate, and then outside of the Blizzard offices, it’s everybody wearing black T-shirts and shorts and throwing frisbees and playing Hacky Sack and on scooters and skateboards, and you’re like, “Okay, that’s where, that’s where Blizzard is.” So it was that environment. I remember walking in the door and thinking like, “It feels like I’m walking into a dorm room-“
Jeff Kaplan
(01:14:48)
“… ’cause it was just posters on the wall.” And there were actually, like people would have futons because they’d be sleeping because we would work so much back then. But the vibe was… It was very small. Like Blizzard, the day I joined in May of 2002, was fewer than 200 people, and that included… There was a whole group up in San Mateo called Blizzard North. So Blizzard South, the Irvine group, was responsible for StarCraft and Warcraft, and there were two development teams at Blizzard. It was called Team One and Team Two at Blizzard South. Team One was revered. These are the RTS guys. They made, you know, StarCraft, Warcraft II, and they were, at that time, they were working on Warcraft III.
Jeff Kaplan
(01:15:45)
Team Two was kind of the red-headed stepchild. Like apparently, before I joined, they had tried to spin off a second team multiple times and failed, and then they finally decided they were gonna make World of Warcraft. There was a game called Nomad. I don’t know what that game was exactly, but that was what Team Two was working on at first. That got scrapped, and Allen steered the team towards World of Warcraft. And there’s an amazing designer named Eric Dodds. He’d go on later in his career to be the game director of Hearthstone. He and Ben Brode basically were the core designers behind that. But Eric and Kevin Jordan were these two key designers working on World of Warcraft for Team Two, and then you had this tech group that was headed up by John Cash.
Jeff Kaplan
(01:16:49)
And John Cash, the, the first day that I showed up to work on Team Two, they said, “You have to go get your login from John Cash.” I’m like, “John… The John Cash from id?”
Jeff Kaplan
(01:17:03)
And you know, John Cash has a skin. You could be John Cash in Quake III. So, and then he saw me, and he was a huge EverQuest player, and he was like, “You’re the guy who runs Legacy of Steel.” I’m like, “You’re John Cash.” We had that moment where we kind of fanboyed out on each other. And it was just… The vibe was so cool there. Like, there were very few producers. So a game team, there are five core disciplines that make a video game. You’ve got engineers or programmers who are writing the code. You’ve got the art team that’s making all the visuals for the game, and that spans everything from like 3D modeling, characters, environments, to also animation, tech art, you know, making it all work.
Jeff Kaplan
(01:18:00)
You’ve got game design, which some companies don’t have design. The artists and the engineers do it. Valve famously has very few designers because everybody there is a designer. But in companies where design is a discipline, which it very much is so at Blizzard, game designers are sort of the creating the game experience people, you know, setting up all the systems and content in a way that gets the player to navigate through the game.
Lex Fridman
(01:18:33)
So that’s part of a story, part of this quest design, part of it is like how you move through the game world.
Jeff Kaplan
(01:18:38)
Yes. So game designers, there’s a spectrum, like same with art, same with engineering, of roles within game design. Some are more heavy on the systems side. So like any game that you’ve played where loot drops- … you know, Diablo IV, World of Warcraft, you know, Escape from Tarkov, whatever. If there’s loot dropping, a designer has planned out very carefully what drops where and at what percentages. That would be like a systems designer. A content designer is somebody who’s gonna make quests or write storylines, or there might even be a narrative designer, which is even more focused on a story. But designers, you know, run the gamut, and then you’ve got these jack-of-all-trade designers that can do it all.
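As a concrete illustration of the drop-percentage work Jeff attributes to systems designers, here is a minimal sketch of a weighted loot roll. The item names and chances are invented for the example, not taken from any Blizzard game:

```python
import random

# Hypothetical drop table: each entry is (item, drop chance).
# Each chance is rolled independently, so a kill can drop several
# items or nothing at all, which keeps rare items feeling rare.
DROP_TABLE = [
    ("Linen Scrap", 0.60),   # common crafting material
    ("Worn Dagger", 0.15),   # uncommon weapon
    ("Rare Gem",    0.01),   # the chase item
]

def roll_loot(rng=random.random):
    """Return the list of items dropped from one kill."""
    return [item for item, chance in DROP_TABLE if rng() < chance]
```

A designer tunes those percentages against how fast players kill: at a 1% chance, a player clearing about 20 creatures an hour waits roughly five hours on average for the gem, which is the kind of pacing decision being planned “very carefully.”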
Jeff Kaplan
(01:19:33)
So that’s the design group. There’s production, which is project management, and production is different at every game company you go to. So if you talk to someone from EA or Blizzard, production might be very different. They might be the boss. They might actually be a designer or they might be more of a project manager. And then one of my favorite disciplines on a game team that’s often overlooked is sound and-
Jeff Kaplan
(01:20:03)
… you know, audio, which is comprised of the sound designers and composers. And there are two things, I think there are two things that no one realizes how much they bring to a game until they’re missing, and that’s audio and lighting. Because most of the time, we’re playing without these things, and it just feels a little off and wrong. And when you have a great lighting artist or you have a great composer or sound designer, like, it… the experience. You’re just tapping into these senses that you wouldn’t otherwise. But that’s who comprises the game team.
Lex Fridman
(01:20:46)
Is the lighting, you know, all the different kinds of graphics, would that be under the art team?
Jeff Kaplan
(01:20:56)
Yeah. Lighting, you’re gonna have lighting under the art team, but they’re gonna be best friends with the graphics programmer. And, you know, like I mentioned with design, there’s this wide spectrum on the engineering team. You have some guys who are like architectural geniuses who are coming up with, you know, the server client model or the networking or whatever. Others are more, like, gameplay focused. On Overwatch, we had an audio programmer just doing nothing but audio hooks for the audio team. And on every game team, you’re gonna have graphics programmers who will work with people like the lighting artists or the environmental artists, character artists on shaders, and basically any way to make the game. They’ll always ask, “What’s your vision?”
Jeff Kaplan
(01:21:48)
“What are you trying to get it to look like?” They’ll want an illustration of what should the world look like, and they’ll be the ones who say, “I know how to write code that will let you do that.” So you partner a great graphics programmer with a great lighting artist, and that’s… That’s actually the creative tension behind games and what makes game teams so unique, is if we were to line them up on some crazy spectrum on one end, you’re gonna have the artists who… They’re creative, dare I say emotional- … you know, they are artistes on that end. And on the other end, you have the most logical, brilliant programmers whose minds just work very differently from the most creative art-
Jeff Kaplan
(01:22:38)
Like artists could be sitting, you have a meeting with them and they’ll just sit illustrating. If there’s any piece of paper, they’re drawing on it. And programmers, you know, they’re just so brilliant and organized in their thinking and everything is so logical. And then in the middle are people like the sound designers, the game designers, and the producers. They’re kind of a little bit in all those fields, but it’s the brilliance of taking people who are so vastly different in their interests and talents, but aiming them at that shared goal or that shared vision of the game that, like, really makes something special.
Lex Fridman
(01:23:21)
And there, I mean, you showed me the size of the team for World of Warcraft, but you also are well known for working on quite small teams to create these incredibly huge games. What is the power of a small team in this kind of context where a lot… there’s that creative tension? Is it because a small team avoids maybe the compartmentalization, like the modularity where the artists now have their own wing of the building where they never talk to the engineers, that kind of thing?
Jeff Kaplan
(01:23:54)
Absolutely. I mean, you hit the nail on the head. The bigger the team, the more you become a cog in the machine. And on a small team, the way I like to describe it is you get to have a loud voice. If we’re a small team, let’s say we’re gonna make a game and it’s at sort of the incubation period of a game and there’s only 10 of us, all 10 of us are in the room for every decision. You know, I’m not a server networking guy, but I’m in the room for that discussion. I’m not an illustrator, but I’m gonna sit in the room when we decide what the art style looks like. As soon as the team starts to grow, we become compartmentalized.
Jeff Kaplan
(01:24:36)
It’s exactly like you said. And there’s a weird thing that happens that’s just kind of a human nature thing. The less you interact with somebody, the more you sort of become alienated from them and vilify their point of view. You tend to look at what they do and say with skepticism- … rather than trust and belief in them. And I find on smaller teams where we all know each other’s names, I know what everybody’s working on every day, they know what I’m working on, everybody can talk to each other, there’s none of that stereotyping of a discipline. On big unhealthy teams, you start to say things like, “Well, the artists just don’t get it.”
Jeff Kaplan
(01:25:28)
“They don’t understand what we’re trying to make.” And when you back up and you think about the statement that you just said, it’s like… such an asshole statement. Like, really, all the artists don’t get it? Like, that’s… A, that’s not true. B, that’s sort of demeaning to them. Like, they signed up for the… This is their life’s work, too. This game is gonna be as much theirs as it is mine. So who am I to say a statement like that?
Lex Fridman
(01:25:55)
Yeah. It’s harmful to a discipline to think that you understand the world, most other folks don’t, and you have nothing to learn from them, really, and that they’re deluded in some kind of way. That’s so powerful.
Jeff Kaplan
(01:26:12)
Fast-forwarding a little bit, when we formed Team Four and… Which went on to make Titan and ultimately fail, and then that got rebooted as the Overwatch team, the idea that I tried to get through to the team was to make an assumption. And really, like, Blizzard is one of the top game developers in the world, and we were very fortunate when I was there, and I imagine it’s this way today, that we could recruit whatever talent we wanted. The best of the best wanted to come work at Blizzard. And if you sort of go through the paces of that and say, “Okay, when we recruit somebody…” Let’s say we’re recruiting an artist to make props.
Jeff Kaplan
(01:27:06)
Boxes, chairs, whatever. That is the best prop artist in the industry. That’s who’s gonna show up on our doorstep, so when they show up here, we should treat them like the best prop artist in the industry instead of starting from a place of doubt and cynicism. So, when that person speaks up and says, “I think…” Like, with Overwatch, for example, “I think we should do this.” You know, “We should do X instead of Y.” Instead of saying, “Well, I’m a believer in Y, why are you against my idea X?” You should take a moment, have a deep breath, and say, “Man, the best prop artist in the industry is suggesting something. Why don’t I listen to it?”
Lex Fridman
(01:27:57)
I actually do it for myself, like this kinda thought framework or thought experiment. Whenever I’m talking to a new person, especially if I feel, myself, a little bit tinge of that feeling. Usually, it happens with, like, a really young person, like an undergraduate student or someone like this. I pretend that they are the smartest person in the world in my head, and then not… Like, it puts me in the mode of, like, assuming I have a lot to learn from them, and it helps. You actually, like, really listen. I literally think they’re the smartest, wisest human on Earth. It helps me.
Jeff Kaplan
(01:28:36)
I had that, like, I think… You know, I’m no expert. I’m a game designer, so, like, as much psychology as I know is how to manipulate people into having fun, hopefully. Like, I don’t know, I don’t have an important job.
Lex Fridman
(01:28:51)
Yeah, right.
Jeff Kaplan
(01:28:52)
But psychologically speaking-
Lex Fridman
(01:28:54)
That’s fun.
Jeff Kaplan
(01:28:55)
… what I think a lot about is ego, and I think about insecurity. And insecurity, we all have. Like, all of us as human beings have insecurity. It just manifests itself in different ways.
Jeff Kaplan
(01:29:11)
And as we kind of go through our life journey, the insecurity also changes. So, like, some people, for example, use their insecurity to rip other people apart. Some people destroy themselves through their own insecurity. Some people destroy everybody with their insecurity. But I had that moment as a young lead, when I first was made a lead on, like, World of Warcraft, where I felt it was very important to be right and to, you know, be shepherding the correct idea. And I actually got pulled aside. Like, Pardo and I had a meeting with a couple people who weren’t game designers, and it’s always tricky as a game designer because constantly everybody is throwing ideas out on a game team. Like, there’s no shortage of ideas ever.
Jeff Kaplan
(01:30:04)
And we were in some meeting about something, and these people kind of threw out these ideas. And I wasn’t mean to them, but I very kind of systematically, like an insecure, you know, ego-driven new lead would do, I kind of, “Let me tell you why that’s wrong, and let me tell you what we’re gonna do instead.” And after the meeting, you know, Pardo pulled me aside, and he said, “You’re a very smart designer, but you shouldn’t do what you just did to those people. You should always listen to what people have to say and try to make their ideas work.” And I just…
Jeff Kaplan
(01:30:47)
Over and over, I was like, “Okay, anytime an idea comes my way, let’s try to make it work.” And it went from this kind of thing that I didn’t believe in to actually, like, a core part of who I am today as a leader, as a game designer, as a game director. And some of the best ideas have come from developing other people’s ideas-
Jeff Kaplan
(01:31:16)
… where your first reaction is like, “No, that’s wrong,” and then just kind of sticking with it and going, “But how could we make it work?” And the most gratifying part when it succeeds is they get all the credit, and you’ve sort of elevated this person whose idea wouldn’t have been championed, whose idea the insecure, egotistical lead of, you know, the early 2000s would have just said no to. Now their idea is the thing everybody in World of Warcraft or Overwatch is just loving, and they get all the credit.
Lex Fridman
(01:31:59)
I should give context to listeners who don’t know about the great Jeffrey Kaplan: you’re one of the most humble people and always give credit to the team for everything and anything. And so everything we talk about today, I know you’re probably resisting constantly giving credit to the team on everything. So you’re the famous, “Hi, I’m Jeff from the Overwatch team,” right? So just as a small aside, thank you for your humility through your career, and thank you for always celebrating the team. But let’s talk about WoW. Let’s talk about World of Warcraft. Tell me what the early days of developing WoW were like. Maybe we should talk about what World of Warcraft, WoW, is, going to Perplexity here.

Building World of Warcraft

Lex Fridman
(01:32:52)
World of Warcraft is a massively multiplayer online RPG where you create a character, level it up doing quests and dungeons, and progress your gear and power in an open fantasy world called Azeroth. At a basic level, you move, use abilities from your action bar, follow quests, and gradually learn a combat rotation that fits your class. And there’s all kinds of characters and roles and classes. You pick a race, which determines your appearance, starting zone, and small racial bonuses, and a class, which determines how you fight and what your role is in groups. Can you continue, fill in some of the gaps, what is World of Warcraft?
Jeff Kaplan
(01:33:29)
World of Warcraft, first of all, more than anything, is a world. Like, it’s a world that you can live in with real other people, and everybody’s kinda living out their fantasy. Chris Metzen, who was the creative director on World of Warcraft, and really, like, Allen Adham, who’s one of the founders of Blizzard, calls Chris “the heart and soul of Blizzard.” And it’s almost like when you’re making a Blizzard game, you’re making Chris’ imagination at some point. And Chris famously said, “The lead character of World of Warcraft is the world.” And I always believed that. So you’re trying to create this place that’s exciting and dangerous, but comfortable, but uncomfortable and gorgeous, and, you know, it should feel massive, and it really is.
Jeff Kaplan
(01:34:23)
It’s, you know… It can take a half an hour to get from one end of the world to the other. But it’s this world you’re living in. The world is divided into two warring factions. There’s the Horde and the Alliance, and that was a very important, very controversial decision that was made by Allen Adham, who was the champion of the Horde and Alliance.
Lex Fridman
(01:34:51)
And that in the early days, there was a really strong division.
Jeff Kaplan
(01:34:54)
Strong division.
Lex Fridman
(01:34:55)
Like… You pick a side and then you hang around with only people of your kind.
Jeff Kaplan
(01:35:02)
Yeah, and you get it tattooed in real life on you. Like, the amount of people who walk up to me and show me their Horde tattoo.
Lex Fridman
(01:35:10)
That’s awesome.
Jeff Kaplan
(01:35:11)
Like, it’s epic. It’s like it’s become who they are. Like, if you were to say, like, “Hey, Lex, come play World of Warcraft with me. We’re Alliance on Tichondrius,” you’d be like- … “Dude-“
Lex Fridman
(01:35:22)
Lose my number.
Jeff Kaplan
(01:35:23)
“… Alliance?” Like, “Okay, I don’t think we can be friends anymore.” But the Horde-Alliance decision was really controversial because in EverQuest, it was mixed race. They had all the races kind of like WoW did, but they could all group with each other. And Pardo and I came from EverQuest, where we felt like this was a horrible decision Allen was making. And we argued—Allen, Rob, Bob Fitch, and I would have lunch every single day, and we would just talk about WoW and the core design of WoW. Rob wasn’t even on WoW at that time. He was finishing Warcraft III. And we would fight over the Horde-Alliance split, if it was a good idea or not. And Allen had… He came from more of the Dark Age of Camelot community, which was another massively multiplayer online game that was more PvP based.
Jeff Kaplan
(01:36:19)
And he said the magic of that game was they had three factions, and he liked the fact that you were instantly on a team. You weren’t a loner in the world. And whether you liked it or not, you had people on your side. And Rob and I just argued and argued against it, and then sometime before beta, Allen retired. He went on to run a hedge fund, of all things. Like, got super into poker, got super into finance, left, and retires, like, I think it was nine months to a year before WoW shipped, which is kinda nuts. And Rob takes over as lead designer in Allen’s stead, and to Rob’s credit, the first thing he did was go… Speaking to what we were speaking about earlier, he said, “Allen’s a smart guy. The fact that he was fighting so hard for-“
Jeff Kaplan
(01:37:16)
“… Horde Alliance, we gotta do it.” And Rob and I sort of changed our point of view and got on board with Horde Alliance and went all in. And so, you know, the early days of WoW was… It was a great team. It was a mix of these veterans that we all looked up to. You know, we had Mark Kern running the team. Shane Dabiri was, you know, a legendary Blizzard developer. Bill Petras was the art director, and then we had Metzen, who was sort of like… Metzen was the cool big brother we all, you know, aspired to be.
Jeff Kaplan
(01:37:59)
I’m older than Metzen, but I looked up to him like a big brother. And then there were a lot of us who had never done it before, or they had also pulled a lot of people from other teams and other game types. Like, for example, the guys building the dungeons, they hired out of the Quake community. And because they didn’t have any hardcore MMO designer on the staff at that time, it was, you know, Kevin and Eric and Allen were sort of the only designers, they started building Quake dungeons- … as, like, Quake levels as the dungeons. At one point, WoW was even made in QERadiant, which was the Quake level editor. And then they later, you know, retooled to where they were using a proprietary engine. So we were like this hodgepodge, like the Bad News Bears-
Jeff Kaplan
(01:38:56)
… is how I would describe the WoW team, of this mix of veterans and then people like me. Like, I’m just some fucking idiot, you know- … who played a lot of EverQuest. And I end up at Blizzard.
Lex Fridman
(01:39:10)
Designing quests.
Jeff Kaplan
(01:39:11)
Yeah. Like, okay, we’re gonna design World of Warcraft now. And I’ve said this later with hindsight, I think a huge part of WoW’s success, especially with the early WoW team, Team Two in its earliest formation, was that we didn’t know what we were doing. You kind of… Like, it’s that… Titan was the example for me. Titan was the attempt at making an MMO after World of Warcraft at Blizzard. And we failed horribly, and we had the best of the best on that team. And it’s because everybody was too much of an expert on how to make a groundbreaking phenomenon MMO. World of Warcraft was a bunch of people, like a very successful, sure-of-itself company who had made StarCraft, Diablo, Warcraft, with a bunch of yahoos basically- … who were like, “Yeah, we can compete with Sony Online.”
Jeff Kaplan
(01:40:21)
At the time, they were making EverQuest II. Like, if we go back in the time machine, EverQuest II had been announced. And EverQuest fans, we were just drooling for EverQuest II. It wasn’t, “Oh, cool, World of Warcraft.” It was EQ2 was gonna take, you know, the chalice and run with it. And then, of all things, they announced Star Wars Galaxies, and they had a brilliant designer on that, a guy named Raph Koster, who had come from Ultima Online, and he’s just a really smart game designer. If you ever watch one of his lectures, like, he lectured a lot at GDC, and, you know, we’re like, “Oh my God, they’re- they’re making EverQuest II and Star Wars Galaxies, and they have the Star Wars intellectual property.” “We’re fucked.”
Jeff Kaplan
(01:41:15)
Like, “How are we gonna compete?” And everybody had seen the success of EQ, EverQuest, and everybody was gonna make an MMO, and it was just a question of who was gonna win.
Lex Fridman
(01:41:28)
So you’re feeling this immense pressure. You have this small team, just this hodgepodge of an unlikely team that kind of looks, fast-forwarding to Overwatch, like the heroes in Overwatch, but working extremely hard. Now, you told me about crazy, crazy work hours, and not because you were forced to, but because you wanted to, because your heart was in it, because you’re like, “This is everything.” Like, you loved it.
Jeff Kaplan
(01:42:01)
Yeah. The games industry has a terrible reputation for insane amounts of overtime. It’s just called crunch. Like, do you crunch or not? These days, crunch is not allowed, not permitted, heavily frowned upon. If we were to work overtime, somebody’d write an article about it next week and say how horrible we are for working overtime. Back then, we worked insane, and I mean insane hours. The longest shift I ever worked straight was 30 hours. That’s when we were gold mastering Warcraft III. This was in my… I think War III shipped on July 3rd, 2002, so this would have been, like, late June, early July. Probably late June. And I had nothing to do with War III.
Jeff Kaplan
(01:43:04)
I should just say that. Like, in the credits, I’m additional help or additional testing or something like that. When I showed up in May of 2002, it was all-hands-on-deck World of Warcraft for E3. We got through E3, and then all hands on deck, the whole company, get War III out the door.
Lex Fridman
(01:43:26)
For shipping Warcraft III.
Jeff Kaplan
(01:43:27)
For shipping Warcraft III, and because I had not been involved with the game at all, and I was a brand new wet-behind-the-ears game designer, they’re like, “You’re just gonna help test whatever we tell you to test.” So we’re trying to gold master, and there’s a crash that happens rarely. If you run one of the cinematics, like you have to be watching the cinematic after one of the levels, and then there was a crash that happened. And so a programmer put in some logging to catch it, and then they needed somebody to just over and over again, “I need the crash to happen so I can fix the bug.” And I sat there for 30 hours and just watched the cinematic for 30 hours-
Jeff Kaplan
(01:44:15)
… straight. And it was the funniest thing, like it was almost surreal watching everybody leave at the… which was a trickle out. Like, everybody kind of trickles out, like, at- … different hours, you know? The family guys go much earlier than the single guys. And then watching everybody show up again in… the next morning, and they’re all, like, dressed different, and they look all refreshed. And I’m just like in the same position. You know, like eyes are beet red.
Lex Fridman
(01:44:45)
To the soundtrack of the cinematic and yeah.
Jeff Kaplan
(01:44:47)
Yeah. But we crunched World of Warcraft, we crunched… The date slipped, so you do this thing. I remember Mark Kern standing the team up and saying, “We’re gonna crunch early so we don’t have to crunch later in the project.” And I really believe he wasn’t manipulating us. Like, I really genuinely believe that he believed in that. But with games, anything can happen, and they’re just… We slip uncontrollably all the time. And we slipped, and it sort of created just this death march, endless death march that… Like to this day, members of the WoW team will remember, like, Newport Rib. If I say that, they’ll have, like, twitches because, like, they would cater the dinner. They’d bring it in at, like, 6:00 or 7:00 at night.
Jeff Kaplan
(01:45:43)
And they’d… Everybody was eating Newport Rib or Panda Express. It was like the worst diet ever. I actually like Newport Rib, no shade- … on them. But you can only eat so much of it. And the carpets are stained and, like, dudes are falling asleep on the couches. And it was an unhealthy work environment. It gets pinned on… ‘Cause at a lot of places it is executive driven. And it is mandated from the top, but the hours that I worked, I never blamed on anyone but myself. I just wanted to. I remember, you know, coming in on Memorial Day, like, with sand from the beach on my feet because I really wanted to get some work done that day, and working through Christmas, and those were things I wanted to do. I never felt like somebody, you know, held my feet to the coals.
Lex Fridman
(01:46:38)
Yeah, it’s such a complicated thing because yeah, okay, you could say that’s unhealthy, but I know a large number of people, especially in their 20s, but actually throughout their career, that have been at companies that do crunch for a thing they believe in, for a thing they love, and it’s some of the most fulfilling years of their life, months and years of their life. And they also… it’s not just fulfilling, they grow from it, they learn from it, and it… You know, and when they… Especially when they talk back about it, about that time, they can see how incredible it was. Of course, when you’re going through it, sometimes it’s extremely difficult, you don’t know.
Lex Fridman
(01:47:22)
And then the crunch, like you mentioned, it’s supposed to be a month or two, and then it turns out to be half a year, and then maybe it turns out to be something like a Titan-type game where you never actually ship it, and it’s heartbreaking and the pain, it’s all… But then you look back and you realize how incredible that journey was.
Jeff Kaplan
(01:47:39)
I think, like, my reflections on it many years later, and having gone through, like, pretty crazy levels of crunch to more controlled, I think where crunch is problematic, and where people are right to be vocal about opposing it, is when it’s forced and unnecessary. There’s a lot of like, “Hey, if anybody on the team stays, we all stay”-
Lex Fridman
(01:48:10)
Yeah
Jeff Kaplan
(01:48:10)
… kind of, which I think is not necessary. I don’t think executives who take off and work 40-hour weeks should be telling anybody to stay late. I think that’s wrong and immoral. But to me as an individual, as long as I’m not telling other people to do it, my life’s work is my passion and I want to do it as much as possible. I find myself, I don’t think I’ve ever worked less than 10 hours in a day. Like that… 10 hours is like a normal-ish day to me.
Jeff Kaplan
(01:48:49)
And I enjoy lots of weekends working because I enjoy it. It brings me pleasure and fulfillment. And all of that said, from a place of caution, especially in this era when people are very touchy about it: I don’t try to impose that on anybody else. I don’t want anybody to feel like they’re obligated to, but please understand it’s what makes me who I am, that work ethic. I enjoy it. I actually… Some of my fondest memories are from those WoW crunches.
Lex Fridman
(01:49:25)
And then looking back and reading some of these stories, it’s pretty cool because me, as a fan, on the receiving end of some of those video games, you bring joy to millions of people. It’s awesome. Let me ask you about quests, but first, quick bathroom break if it’s okay.
Lex Fridman
(01:49:43)
Quick 30-second thank you to our sponsors. Check them out in the description. It really is the best way to support this podcast. Go to lexfridman.com/sponsors. We got Fin for customer service AI agents, Blitzy for code generation in large code bases, BetterHelp for mental health, Shopify for selling stuff online, CodeRabbit for AI-powered code review, and Perplexity for curiosity-driven knowledge exploration. Choose wisely, my friends. And now, back to my conversation with Jeff Kaplan. Okay, we’re back. So I think it’s fair to say that before WoW, MMO leveling, like in EverQuest, consisted of, maybe that’s simplifying it a bit, but standing in one spot and killing monsters for hours.

How WoW changed video games

Lex Fridman
(01:50:32)
You helped develop with WoW, I would say, a revolutionary idea of quest-driven leveling, where there’s a story-driven, quest-driven guide through the world, and it so happens that as part of doing that, you’re also leveling the character. So the leveling is both fun and is the engine that drives the story that then also immerses you into the world and pulls you in more and more and more and more. So take me through this process of developing that idea of quest-driven design.
Jeff Kaplan
(01:51:10)
Sure. Yeah, there were actually a lot of people involved in it, and they all kind of contributed in their own unique ways. Allen Adham was the lead designer on WoW. When we first sort of decided we were gonna have a quest-based game, we used to joke that, like, EverQuest barely had any quests in it.
Jeff Kaplan
(01:51:31)
It did have quests, they just… They weren’t really in front of the player in an obvious way. You kind of had to seek them out on a website. And Allen knew that he wanted quests to be a big part of World of Warcraft. And so he hired me. That was my entry level position at Blizzard. And on the same day, he hired a guy named Pat Nagle, which was hilarious to me, because Pat was the… He had this funny title of HR and Facilities at Blizzard, because it was such a small company. So, like, if you sent an application in, Pat would deal with the application, or if the toilet overflowed, Pat would have to deal with it.
Jeff Kaplan
(01:52:18)
And so the whole time I was applying at Blizzard, I was going through Pat, and then on my first day, they put Pat and I in an office together, and he’s like, “Yeah, they hired me also as the quest designer.” And so Pat… And he was the most wonderful guy. We had so much fun. So Pat and I kind of designed the quest system. It was Allen’s idea to have it in the first place. And then there was that great designer I mentioned, Eric Dodds, who helped a lot with the interface of it all. And the idea was… At first, we actually, on a whiteboard in Allen’s office, estimated how many quests we thought EverQuest had to date.
Jeff Kaplan
(01:53:03)
And EverQuest had had, you know, I think three or four expansions at that point in time, and we’re like, “Wow, we have to make all of these quests like EverQuest has.” It’s gonna be a lot of quests, and it’s kind of up to me and Pat to do it all. And we believed all we had to do was match that EverQuest number. And Pat and I started working on, like, the design of the system and how it would interact, and Eric Dodds was really involved in how the interface… You know, like how you were going to interact with the NPCs and all of that. And we split up the world into like two zones. Pat was gonna take Elwynn Forest, which was the starting area for the humans, and I was gonna take Westfall, which was the sophomore zone after Elwynn for the humans.
Jeff Kaplan
(01:53:56)
Pat and I would meet with Chris Metzen, and those were the funnest meetings ever because Chris just has stories in his head and visions. Chris is, like, artist, storyteller, world builder extraordinaire, and he sort of described what he wanted going on in those zones. You know, you want the gameplay to follow the flow of what was going on with the stories of those areas. So we finished Elwynn and Westfall, and we did, like, a team play test. And our assumption was because the way EverQuest worked, players just wanted to level up. It was a level based game.
Jeff Kaplan
(01:54:39)
You go out. You kill a creature. You get experience points. You level up a little bit. And so the way people played EverQuest is they’d find these areas where there were lots of creatures, and you’d usually find the best experience efficiency cycle you could find, so, like, fast respawn, kind of easy things to kill, and that’s how you would progress through EverQuest. And I remember Allen kind of telling us, like, “Hey, the quests… When Pat and Jeff write quests, they’ll aim us to where the creatures are.” You’ll do a quest, and then you’ll spend a few hours killing creatures in that area afterwards, and that’s how he imagined it would work. So we kind of set up the world that way. You know, Pat probably did a dozen, maybe 20 quests in Elwynn.
Jeff Kaplan
(01:55:33)
I’d do a dozen, 20 quests in Westfall, and we’d do this team play test. And we had a bunch of people on the team who never played MMOs, like guys with shooter backgrounds, you know, StarCraft fans, et cetera. And they’d play World of Warcraft. I think we played for, like, an hour or two, and we only did Elwynn Forest. And the overwhelming feedback from our team… And these are people who really didn’t play EverQuest, they’re like, “My God, Pat, that was horrible. I ran out of quests, like, right away.” And we’re like, “Wait a second. You expect to just have quests just keep going?” And they’re like, “Yeah, we expect to have quests just keep going the whole way.”
Jeff Kaplan
(01:56:20)
And we kinda had an oh shit moment right after that Elwynn Forest play test, where we realized, like, we had vastly underestimated the number of quests we were gonna need. And we changed, we developed this philosophy that’s kind of a shared philosophy across Blizzard games in general at this point. And I’ve heard it outside of Blizzard, from other people in the industry, which is you design along the path of least resistance. So, basically what that means, like, in EverQuest, the path of least resistance if you wanted your character to hit max level is to find the easiest creatures and kill them over and over again in place, which some people think is very boring. To me, I would do that for eight hours ’cause I think that’s fun.
Jeff Kaplan
(01:57:09)
But we decided in World of Warcraft, we said, why don’t we make the path of least resistance, so in this case the way to get the best experience the fastest, not be killing creatures in one place? We’ll overload the experience into the quests themselves, and then that will move you through the world, which will get you to see everything. It will enable us to tell these awesome storylines. It sort of did a lot for the game, and I think it was like a fundamental change in the genre. Like, if you look at the things that… EverQuest was very popular and very successful, and it was hitting like hundreds of thousands of players. And WoW blew the doors open and was tens of millions of players.
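The tuning shift Kaplan describes, moving the fastest XP from grinding in place to quest turn-ins, can be sketched as a toy model. All numbers here are made up for illustration, not actual WoW values:

```python
# Toy model of "design along the path of least resistance" (hypothetical
# numbers, not real WoW tuning). If quest turn-ins grant a big lump of XP
# on top of the kill XP, the fastest route to max level is also the route
# that tours the zones, the quest givers, and the stories.

MOB_XP = 100            # XP per creature killed (assumed)
QUEST_BONUS_XP = 1200   # lump-sum XP for turning in one quest (assumed)
KILLS_PER_QUEST = 10    # e.g. "kill 10 rats"

def xp_grinding(kills: int) -> int:
    """XP from killing creatures in place, EverQuest-style."""
    return kills * MOB_XP

def xp_questing(kills: int) -> int:
    """Same number of kills, but routed through quests that pay a turn-in bonus."""
    quests_completed = kills // KILLS_PER_QUEST
    return kills * MOB_XP + quests_completed * QUEST_BONUS_XP

# For the same time spent killing 100 creatures, the quest route wins,
# so the "least resistance" path now drags you through the zone flow.
print(xp_grinding(100))   # 10000
print(xp_questing(100))   # 22000
```

Once the quest route beats the grind on pure XP per hour, the optimizing player follows the quests, and the quests carry the story and the zone flow along with them.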
Jeff Kaplan
(01:57:57)
And I think the fundamental difference there was that WoW allowed you to play as a single player. And what makes an MMO, massively multiplayer online game, massive is having the other people there. And they’re so important, or else the world feels kind of wrong and dead. But the concept that we have to force you to interact with them to do anything is very off-putting to a lot of people. And the fact that people could come into WoW and just kind of… The game design way of describing it is directed gameplay.
Jeff Kaplan
(01:58:42)
And some games have extremely tight directed gameplay. Like, for example, if you were to play a single player game like Last of Us, you know, you’ll have those moments where they’ll be like, you’ll come up to a log and then press triangle to duck, or whatever the duck button is-
Jeff Kaplan
(01:59:01)
… left stick to duck to go under. And that’s like the ultimate in directed gameplay. Like, they’re telling you exactly what to do. On the other end of the spectrum is a game like Minecraft, like vanilla Minecraft, where you’ll find it’s very divisive amongst gamers who love Minecraft or hate it. The ones who hate it are like, “I don’t know what I’m supposed to do.” Like, “You drop me in this world. I’m supposed to dig or something.” And that’s the type of player that needs directed gameplay or they’re gonna cycle out. Not all players need it. And what WoW did, it doesn’t seem like an innovation, it doesn’t seem revolutionary, but it sort of created this directed gameplay that felt optional, but really wasn’t.
Lex Fridman
(01:59:54)
I mean, I think it’s absolutely revolutionary. It basically changed gaming. It changed the way we see games. And it was so successful in part because it became a mechanism by which you could spend hundreds of hours, thousands of hours in the game. I mean, it’s kind of a, like, obviously… It’s one of those… All these great ideas are always like this, right? In retrospect, you’re like, “Well, obviously if you make the path of least resistance quest-driven gameplay, then it’s gonna be the reason that most people play.” But it is true that… I’m with you on… I both like the quests and Cow Level.
Lex Fridman
(02:00:39)
I guess you have to design for everybody. That’s the tricky thing. Like, how do you fine-tune this? If you think of it as a loop of like accept quest, kill 10 rats, turn in quest, ding, level up—that loop. Like, how do you fine-tune that so it’s maximum fun or fun for the maximum number of people? Is it… How- how difficult is that?
Jeff Kaplan
(02:01:03)
It’s extremely difficult. And not everybody’s good at doing that. We all, to some degree, lack the self-awareness of how we tick. So we’re all different types of gamers, but if you ask me to describe the type of gamer I am, I might actually be giving more of a picture of the type of gamer I wish I was or the type of gamer I want you to think I am versus the type of gamer I actually am.
Jeff Kaplan
(02:01:34)
By playing lots of games. You cannot be an exceptional game designer without playing the shit out of as many games as you can and understanding them on a deep level. And the weirdest part about it is you’re not just looking for the greatest hits. You learn just as much from a shitty game as you do from an amazing game. And also—like, a lousy game can have a great system that was tuned wrong, or lacked the correct interface, or they didn’t put the right visceral polish on it. There’s an executional aspect to all of it. When I’m playing, I’m not only, like, thinking about what makes this fun, I’m thinking about what makes this not fun. But I’m also watching everyone around me. My wife plays games, my kids play games.
Jeff Kaplan
(02:02:35)
And understanding, like, well, what do they do and how are they different to me? Why are they finding enjoyment in this? Why are they not? What’s frustrating? What did they miss?
Lex Fridman
(02:02:46)
And being raw honest with exactly what you’re saying. I mean, if I were to analyze the kind of gamer I am, why do I enjoy Cow Level? And why, above that, why do I enjoy loot? Why is loot so fun? Like, what is it about opening a chest and getting a bunch of stuff? I mean, that might be like at the core of what I enjoy about gaming. That, and walking around a beautiful world with nice music.
Jeff Kaplan
(02:03:16)
As a game designer, I am, at best, a quack psychologist. You know? We can motivate you to do some weird things. The two driving motivators are extrinsic and intrinsic. And all of us, at different times in our lives, in our gaming careers, whatever, we can shift from being intrinsically motivated to being extrinsically motivated. Obviously, loot is a big extrinsic motivation, but even saying that is too simplistic. Like, for example, on the loot boxes of Overwatch, there’s a masterfully designed system that was designed by a game designer, not by a businessperson or whatever. Like, not a commercial person. But beyond that, we also had a really good team who said the visceral opening of the box, the sound it makes-
Jeff Kaplan
(02:04:19)
… the graphics, like the way things spill out and animate, all of that is as satisfying as well. And you’re trying to… Like, there’s the lizard brain part of it. Of like, how does it… Like, I see chest. I know I’m gonna… It’s gonna feel good. It’s gonna feel good. And then there’s the spreadsheety part of it. Of, what does it have? Is it an upgrade? And I think great game designers know how to tap into both of those things.
Jeff Kaplan
(02:04:49)
You know, tap into the intrinsic and extrinsic. There’s… Like, when I was studying writing, you would study the elements of fiction. And, you know, these are just like basic things like plot and character development and setting and theme and whatever. And there’s no, like, textbook that exists for game design, at least none that has been introduced to me yet. But I think about, like, elements of fun.
Jeff Kaplan
(02:05:19)
What are the things that create fun for players? And they’re not the same. Like, it really… Every human being is different. Like, progression is fun. A sense of progression, that I’m investing. I’m putting an investment into this game, and then the game is recognizing my investment. Things like leveling, things like the amount of gold you have, those are all investment-based. There’s mastery. There’s just pure raw skill. Creativity is one. And hand-in-hand with creativity is customization. And some of those can be aesthetic. Like, look at my customized character, and I have the black curly hair, and I put an earring in my character and I’m customizing in that way. The other is customizing my build.
Jeff Kaplan
(02:06:13)
I’m gonna come up with a whirlwind barbarian and I’m the first to do it. These are all elements of fun that designers can tap into, and in fact are frequently tapping into. But they’re never defined anywhere, and I find that players drift. Like, I’m the type of player who’s not really loot motivated. I’m more motivated by seeing the content the world has to offer. And often that takes me on a detour of being loot motivated, because there might be a dragon or a demon somewhere that I can’t beat without this level of armor and sword. So now I’m loot motivated for some period of time, to get back to being content motivated.
Jeff Kaplan
(02:07:05)
Or if I’m having trouble defeating a boss, I might have to go back and look at the skills and abilities that my character’s using, and I have to go into creativity mode. “Oh, he has that one AE where he…” Area of effect. “…where he puts a curse on me.” And, you know, “If I had this counterability to the curse, I could beat the boss to get the loot, to get to the next boss.” These are all cycles that are tapping into all those different elements of fun.

Single-player vs Multi-player

Lex Fridman
(02:07:36)
And ultimately enjoying and discovering what the world has to offer you. And you wear a lot of hats as a gamer, so you love the RPG/MMORPG world, but you’re also a big shooter guy. Can you explain to me what fun in a shooter context is? And we’ll talk about Overwatch as a specific kind of fun. But you’re also a huge fan of the ultra-realistic shooters. Call of Duty. What is the definition of fun there?
Jeff Kaplan
(02:08:09)
There’s a lot of skill and mastery. Off the cuff, flippant comment would be clicking heads, you know? I’m just trying to click heads.
Lex Fridman
(02:08:18)
Okay.
Jeff Kaplan
(02:08:19)
There’s an intimacy also to the first person camera. And now, not all shooters are first person. There is a large trend these days to third person. I really think PUBG and Fortnite sort of opened that third person shooter door. And you’re seeing games like ARC Raiders that are third person. But to me, nothing is as pure as first person. Like you’re- … literally living in the world as that being. You can look at your hands, and it’s that pure visceral test of skill of, “Can you click on the thing fast enough?” And when it’s PvP based, you know that’s coming at you.
Lex Fridman
(02:09:05)
Could you lay out for people who don’t necessarily know what PvP and PvE is? And single player-
Jeff Kaplan
(02:09:11)
Absolutely.
Lex Fridman
(02:09:11)
… multiplayer, massively online multiplayer?
Jeff Kaplan
(02:09:14)
So PvP is player versus player. So that means a combative, you know… If Lex and I are up against each other, we’re attacking each other. We call that PvP. You can get killed by another player. Player versus environment is anytime you’re shooting computer-controlled opponents. So if it’s a game about dragons, the dragon is the E, the environment in PvE.
Lex Fridman
(02:09:41)
And we should say that PvP and PvE, the P might be multiple players. It could be five versus five, six versus six for PvP. And for PvE, it could be, like, raids where it’s multiple people, large groups of people going against the AI.
Jeff Kaplan
(02:09:59)
Yep. So single player, that’s a game that you play totally by yourself. Like, you don’t play with anybody else. You can’t play with anybody else. It’s not networked to play with other people. For example, I’m playing a game called Story of Seasons right now on the Switch, which I just play by myself. I have my farm. You know, there’s a town. I’m meeting people in the town, and no one can come and join me and interact with that. So it’s a very controlled experience. Single player games are very difficult, or they can be very difficult and expensive in terms of production to create. Like, if you think of games like Uncharted or Last of Us that are made by Naughty Dog, like, those are kind of the preeminent best single player games you could talk about.
Jeff Kaplan
(02:10:53)
They’re very handcrafted. Every experience is made just for you. One up from that is what I call co-op. And these terms become interchangeable, so I’m using some semantics here. But co-op is any cooperative experience that we can play together, but we’re sharing an exact same experience very intentionally. And it’s me sharing that experience only with other people that I know. So a great example of a cooperative game, maybe one of the best of all time, was Left 4 Dead, which is a game where you and three other people go in and you fight, like, hordes of zombies, and you try to progress through to the end safe room. It’s a very cooperative experience. A game like Diablo IV, you can play cooperatively with other people.
Jeff Kaplan
(02:11:52)
Now, one up from that is multiplayer, and that’s when you’re engaging with strangers who are in the same world that you might not have the same cooperative goals as. You might have very opposed goals to them. You might PvP them, or they might just be random strangers that you pass in a town or city and never see again. And then massively multiplayer, which is what the MMO online sort of stands for, massively multiplayer online game, that’s when you’re breaking into thousands of players. And the worlds become really, really big at that point.
Lex Fridman
(02:12:34)
By the way, we should say that the co-op could be remote connection, but there’s also, what would you call it, couch co-op where you have two people. Some games are really designed well for the experience of two humans sitting together and playing the game together. Which is a really tricky thing to design for, but if it’s done well, it’s a… It’s a really fulfilling experience. Like, with a friend, with a loved one, you can, like, play a game together. And Diablo IV, I should say, is an example of a game that does that really well. They do couch co-op. Like, two people can play Diablo sitting together and there’s a real intimate experience in that.
Jeff Kaplan
(02:13:13)
Yeah, couch co-op—it’s funny, ’cause it actually, like, predates the couch even. Some of those old arcade games- … like, would have two joysticks on them and then you could play- … with somebody else. Or there’s, you know, the famous game Gauntlet—
Jeff Kaplan
(02:13:27)
… had four joysticks and four people playing together. And then anybody who grew up in that early console era, like, you know, NES, Sega Genesis was a legendary one. We would sit and we’d play NHL 93- … on the couch. And anybody who lost, you’d lose the controller. And you could play that with up to four people playing, or we… I remember one of the big games that came out was Mortal Kombat. And we would play Mortal Kombat on the Sega Genesis, and the house rules were, you know, whoever lost passed the controller. So whether you were in your college dorm or just some buddy’s apartment and there’s five people there, you’re constantly cycling everybody in and out. But there’s just a magic to multiplayer, of engaging and sharing in the experience-
Jeff Kaplan
(02:14:21)
… with other people. That’s why I’ve always… I’ve never made a single-player game. Uh, I have great admiration for them. I don’t know if I could do it. The challenge… The reason I love multiplayer so much, the way I describe being a game director or game designer on a multiplayer game, it’s like imagine if you were gonna be a movie director, and you were gonna have all these actors and set designers and props and, you know, writers and scripts and all of this stuff, and your goal was to get a certain movie made. But we’re gonna ask you, the director, to just… You’re gonna leave the room. You can set it all up ahead of time, and then you’re not allowed to be there or talk to anybody involved in it.
Jeff Kaplan
(02:15:16)
And now you need the actors to have an experience, and it’s just kind of the wildest, funnest experiment. Like-
Lex Fridman
(02:15:24)
From a designer/creator perspective, ’cause you don’t know what the players will create, so that’s fun to see. You lay out the chessboard, you lay out the world, and then you get to watch what they create together. That’s true.
Jeff Kaplan
(02:15:38)
I struggle because sometimes people call me the anti-story guy in games, and that really hurts me because, like, I actually love story in games, and I counter that I’m the anti-shitty story guy. And what I mean by that is like, A, the most magical stories that I’ve ever heard come out of video games are player stories about, you know, the time I gave Barfa a potion and then I met him in real life. Like, that’s better than any video game writing that I’ve heard in a long while. The player story is so much more interesting. You know? “Lex, why do you like the cow level so much?” “Tell me about some goofy time-” “… like a loot goblin drew you into the most danger.”
Jeff Kaplan
(02:16:33)
“And… But there was another player there, and then…” You know, like, those are the stories that I think are more interesting from games. There are some exceptional writers in video games and some games that are exceptional at story. You know, I’ve mentioned Naughty Dog, like they’re kind of on another level. But Valve has amazing writing. The writing behind Half-Life 2, Marc Laidlaw; the writing behind Portal- … and Portal 2. I think it was Erik Wolpaw, who is hilarious, just amazing, and Rockstar.
Jeff Kaplan
(02:17:14)
Red Dead Redemption 2 is one of my favorite games of all time, and that’s a game where you can see the expertise and mastery of the game design and the narrative design, and the fact that you can have those player stories of just the goofy shit. Like, I remember… ‘Cause the controls are a little awkward in Red Dead for a PC player who’s playing on console. Like, I always get confused about, like, taking out my gun and putting it away, and what’s, you know, the L1 and L2.
Jeff Kaplan
(02:17:52)
Like, as a PC gamer, I’m just like, “Let me bind this stuff to where I want it.” And so like, you know, a guy in town rides by and he’s like, “Howdy, partner.” And I go to, like, give him the Arthur Morgan, you know, “Hey, what’s up?” back, and I just whip out my sawed-off shotgun and, like, blow his fucking head off. And then the whole town is like… Suddenly I’m, like, under… I’m wanted and I’m being chased, and then there’s a train that, like, takes out the posse, and-
Lex Fridman
(02:18:21)
Yes.
Jeff Kaplan
(02:18:21)
It’s like those stories, and the fact that Red Dead can have, you know, this, like, touching, heartbreaking story of Arthur Morgan and his journey, but you can also have, you know, the player story of blowing off the poor guy that’s just trying to-
Lex Fridman
(02:18:34)
And that’s the combination. And then Rockstar does a really good job with, you know, even in Grand Theft Auto with the radio. It can be kind of a side aspect to the game, but that great writing there can help create the world, with humor, with color, with depth, with heartbreak, all that kind of stuff.
Jeff Kaplan
(02:18:53)
There was a moment in Red Dead where… There’s the Daniel Lanois song, “That’s the Way It Is”. I just… I love Daniel Lanois, so the fact that somehow Rockstar landed him and, like, was able to get that song out of him. And there’s this moment where you’re, like, riding back and they start that song, and everything up to then had been gorgeous, like, more of a score. There’s Woody Jackson, who’s, like, a really amazing game composer. He had done the score for that, and so nothing had been, like, lyrical with words. And then they play the Daniel Lanois song, and there’s, like, the quotes are coming back—
Jeff Kaplan
(02:19:41)
… from, like, Dutch and Arthur Morgan, and I’m just like, “Goddamn, this is, like… This is art.” You know, this is like—I know it’s supposed to be entertainment, I know it’s a business, but the top of the pyramid is art, and- … it just hit me emotionally.
Lex Fridman
(02:19:58)
Yeah, there’s certain games where, you know… I mean, that moment, you just imagine the number of people who shed a tear during that moment, and that’s just a reflection of how much you’re invested into this world, into these characters, and it’s a beautiful thing. I have to ask you about this, this image that you sent me. It’s super cool, so I’d love it if we could nerd out about it a little bit, the zone flow for the original World of Warcraft. There’s a bunch of zones. It’d be awesome if you kinda talk through how, like, this world is built. Take me to that time when you were designing this, before anyone else got a chance to play it.
Jeff Kaplan
(02:20:35)
All the WoW stuff would start from that inspiration of Chris and the world. And, you know, it was so fun hanging out with Chris because we had whiteboards all over the place, and, you know, “Hey, Chris, we should make Eastern Kingdoms. What do you think it should be?” And he would just tell you the story of each of these as he’s just drawing. And Chris is a really talented artist, so the map would be gorgeous. I have lots of, like, photographs of Chris’s maps that he would just kind of whiteboard up. He’s like, you know, “Here’s the Dwarven Lands, there’s Wetlands with Khaz Modan up there, and that’s where this, you know, tribe of dwarves were from.” And then they, you know, humans are going to be down with Elwynn Forest.
Jeff Kaplan
(02:21:25)
“And then Westfall, there’s, you know, this group called the Defias Brotherhood and they have a place called Deadmines.” So I would talk to Chris because you want to capture the spirit, like, as a game designer, you want to capture the experience that’s in people’s heads. So, like, take Burning Steppes, for example. It’s supposed to be one of the scariest places with lava and dragons and, you know, all this kind of stuff. That doesn’t feel like where you want to start. It feels like where you want to end, so you kind of work the world flow in a way that puts that at the end. But there was also kind of some magic to the original starting areas, where we gave the dwarves and the humans a free flight path between… The dwarf hometown was called Ironforge, the human hometown was called Stormwind.
Jeff Kaplan
(02:22:23)
And we allowed you to fly for free. So, like, these little newbies who were, you know, level five or something, if you played a dwarf and I played a human, I’m like, “Oh, Lex, don’t worry, I’ll come. You know, I’ll come to Ironforge and we’ll hook up and I’ll just fly out to you,” which is the magic of World of Warcraft. You have to fly over Burning Steppes and Searing Gorge, and you look down and you’re like, “Holy shit, that looks scary and dangerous.” And it plants that seed of things to come.
Lex Fridman
(02:22:56)
So you’ve designed some incredible quests. Are there any that stand out that you’re proud of or ashamed of? I mean, you famously have designed the Green Hills of Stranglethorn quest. One of the most infamous quests in the history of WoW, of gaming, where you had to collect a bunch of pages, or… Green Hills of Stranglethorn, maybe, can you comment on that one or any quest that just springs to mind?
Jeff Kaplan
(02:23:26)
Green Hills of Stranglethorn holds a lot of emotional value for me because amongst WoW players back in the day, it was unanimously hated as one of the shittiest, most annoying quests. But it holds a really special place in my heart. First of all, it’s one of the few times that I just, like, wrote a short story that’s actually in the game. It’s me paying homage to Hemingway, and the guy who gives you the quest, his name is Hemet Nesingwary, which is just me rearranging the letters of Ernest Hemingway. There’s another quest giver there that’s Kerouac’s name also mixed up. And then it was the typical hubris of a junior game designer who thinks he’s clever but is actually a dipshit. That’s the Green Hills of Stranglethorn, like, summed up.
Jeff Kaplan
(02:24:29)
So, like, I wrote the story over, like, it was, I think, winter break, like, everybody was gone and I just was so happy to be in the office, you know, I’m at Blizzard by myself writing late at night. And the whole idea, and this is, this is very much what I call ant farm designer, which is bad. Which is, you know, you’re the game designer who’s playing God, and players are the ants in your ant farm, and you want to see what they’re gonna do, which is not the correct way to be a good multiplayer designer. But I hadn’t learned that yet, and there’s a really great famous Sid Meier quote where he says there’s three types of fun. Fun for the player, fun for the designer, and fun for the computer.
Jeff Kaplan
(02:25:23)
And we catch ourselves, we’re like, you know, we gotta be really careful. It has to be fun for the player, not fun for us. So this Green Hills of Stranglethorn quest was like an ant farm design of, I’m gonna write this, honestly, probably pretty shitty story, I haven’t read it since 2003 so God only knows if it’s any good. But I wrote the story and then I divided it up into all of these different pages. And the quest giver, Hemet Nesingwary, wants you to put together, like, the story’s like, he wrote this book, but then the pages got scattered across Stranglethorn Vale. And some… When you’re doing quest design, you’re really thinking about the player flow and you’re directing them from quest giver hubs out to these destinations, and you want them to do all the destinations.
Jeff Kaplan
(02:26:14)
But sometimes we would do these bridging quests where you could do anything in the zone and it sort of had this overlap. And so the pages of Green Hills of Stranglethorn could be looted off of any creature anywhere in Stranglethorn Vale, and it was kind of like that McDonald’s Monopoly game where you have to have all the pieces or else you’re not gonna win. But where it really went south… I don’t think the idea in a vacuum is horrible, but where this really fell apart was the interface of World of Warcraft wasn’t set up for it. Like, the pages didn’t stack, there wasn’t a dedicated container to put all the pages in, so players had very limited bag space. And as they’re fighting in Stranglethorn Vale, I’m just shitting up their inventory with all of these pages and they only needed so many.
Jeff Kaplan
(02:27:14)
Like you might get unlucky and you have like three page fives that are just junk in your inventory, and I might have like eight page sixes. And then everybody… And this was the goal, like the designer trying to puppeteer everybody. Everybody in Stranglethorn chat is like, “Hey, I’m looking for a page six. Anyone got a page three?” And that was like my fantasy as a designer, of, and then they’re gonna be social and meet each other, and players are gonna be appreciative for each other. But really, eventually, no one did the quest. They just were super annoyed, or they went to the Auction House. So the quest is famous in that it was so aggravating and annoying and it just became a way…
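The page hunt Kaplan is describing is essentially the coupon collector’s problem: drops are random, duplicates are wasted, and since the pages didn’t stack, every duplicate ate a bag slot. A quick simulation, with a hypothetical page count and uniform drop odds (neither taken from the episode), shows why inventories filled with junk:

```python
import random

# Coupon-collector sketch of the Green Hills page hunt. The page count
# and drop model here are assumptions for illustration, not the live
# quest data: pages drop uniformly at random, and duplicates are junk.

N_PAGES = 16  # assumed number of distinct pages needed for the full set

def drops_until_full_set(n_pages: int, rng: random.Random) -> tuple[int, int]:
    """Return (total drops, duplicate drops) until one of each page is held."""
    owned: set[int] = set()
    drops = duplicates = 0
    while len(owned) < n_pages:
        page = rng.randrange(n_pages)
        drops += 1
        if page in owned:
            duplicates += 1   # junk clogging a bag slot
        else:
            owned.add(page)
    return drops, duplicates

rng = random.Random(0)
trials = [drops_until_full_set(N_PAGES, rng) for _ in range(1000)]
avg_drops = sum(d for d, _ in trials) / len(trials)
print(round(avg_drops))
```

The expected number of drops for a full set of N pages is N times the N-th harmonic number, which for 16 pages works out to roughly 54 drops, so the average collector ends up holding dozens of duplicates. That surplus is exactly what created the zone-wide “anyone got a page six?” trading, and, without stacking or a dedicated container, the aggravation.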
Jeff Kaplan
(02:28:06)
It not only became a way for me to learn from my mistakes, but because I was very open with the fact that I didn’t think it was good and that the quest had failed, it opened the door for us at Blizzard to be critical of our own work. Like it’s always easier if you’re the first one to go out and say, “Hey, guys, I think I made one of the shittiest quests in the game and here’s why.” And then it sort of challenged people to make better versions of it.

How Blizzard made great video games

Lex Fridman
(02:28:35)
I mean, again, you continue to speak with so much humility. But WoW turned out to be one of the biggest games of all time, both in terms of popularity, how many players play it, revenue, and critical acclaim. And then you rose to become a game director of WoW, helping release Wrath of the Lich King, which many consider to be the greatest expansion. I mean, there’s a million questions I can ask here, but maybe this is also a good place to ask about the famous Blizzard polish. So Blizzard as a company has historically, and you were certainly a big part of that, delivered these games.
Lex Fridman
(02:29:18)
They just got so many pieces right, well-functioning and well-coordinated, and they just feel finished in a way that a lot of other games don’t. So what does it take to take this gigantic game, this game played by millions of people, loved by millions of people, and deliver it in a way where it’s like it all just works?
Jeff Kaplan
(02:29:44)
To have that level of polish is like a studio-wide culture that has to be instilled in everybody, like no one can be satisfied with a bug. Every game is gonna have bugs, and Blizzard games have bugs. It’s a question of, how quickly do you fix them and with what urgency? And as players ourselves, if we’re playing as much as anybody else, we’re gonna be motivated to fix the bugs. There are some really tactical aspects to it, too. The quality assurance department at Blizzard is the best in the industry. Like the people who come and do QA at Blizzard, they are passionate gamers. Many of them want to be developers themselves, and they’re not just doing it for a job. They do it because they fucking love the game.
Jeff Kaplan
(02:30:45)
And the relationship we tried to develop between us on the development teams and QA was extremely tight. And whenever possible, we also tried to sit as many QA members up with the development team as possible, depending on the logistics of… You know, in the early days, we didn’t always have the space for all of QA to sit with us. We were very fortunate on the Overwatch team to have a large amount of QA sitting with us, and then developing that relationship. You know, in the early days there, there were these fears of like, “Well, QA can’t talk to the developers,” and trying to shatter that-
Jeff Kaplan
(02:31:27)
… of, because some of our QA members knew the game so inside out, you would just say to ’em like, “Hey, dude. Just message me anytime. Here’s my home number. Like, call me if there’s a bug. If you think we’re gonna get raked over the coals on this, you gotta speak up. I don’t care what the chain of command is. Like, we gotta fix this thing.” So QA was amazing.
Lex Fridman
(02:31:52)
I mean, so can you speak to QA, quality assurance? At the peak of the craft, what does it entail? Like you’re basically experiencing the game and trying to figure out particular slices of that experience that could be improved?
Jeff Kaplan
(02:32:09)
Yeah. People simplify the role by just, “Oh, these guys just get to play games all day and then, like, let us know if there’s a bug.” They are so systematic in the way they test stuff. They come up with these plans that are actually amazing of, like, who’s gonna test what. There’s a lot of regression testing that goes on. Within QA there would also be compatibility testing. The Blizzard compatibility department was amazing. Like, they had every card, every machine, every configuration, and they would roll through to make sure there wasn’t some quirk that was gonna come up on some video card or some motherboard that you weren’t expecting. But it was all very systematic. It wasn’t just Wild West, let’s play the game.
Jeff Kaplan
(02:33:02)
And then as a developer interacting with QA, you would find that there were certain specialists where, like, for example, on Overwatch, there were a couple of players that… Like, we all were shooter players when we were making Overwatch, but I’m not, like, an esports-level shooter player. I’m like, you know, Gen Xer, “Remember Doom, how good I was?” type of shooter player. But we had, you know, a couple of these QA specialists who, like, they could just snipe from 100 meters out and hit the shot every time and tell us if there was a frame of input delay, you know? And then you sit that person with an engineer and say, “Hey, I think there’s some input lag here.”
Lex Fridman
(02:33:58)
That’s amazing.
Jeff Kaplan
(02:33:59)
And sure enough, they’d be right. But you have to have that relationship where the devs trust QA. Or just even on, like World of Warcraft, they had a great relationship with QA in that they built out a full raid team to do the raids. And then you’re not only, like, looking for bugs, like, “Hey, the dragon was supposed to fly and instead it just, like, sunk through the world and the game crashed,” which would happen. But, like, if you really value QA, you’re asking them, “What do you… Dude, what do you think? You’re…” You know? Like, “10 million people are gonna see this. Your opinion, multiply it, you know? It matters. What do you think? You know? Are you having fun? Oh, yeah, this is cool. This isn’t cool.” So QA was important.
Jeff Kaplan
(02:34:48)
The other thing that was important is Blizzard engineering: you have to architect your game to be hotfixable.
Jeff Kaplan
(02:34:58)
And what a hotfix is, games, there’s a couple ways to fix ’em. The way most of us know, ’cause all the software we have gets a patch, you know? You have to update it. You have to download a new version of it. Windows, you know, you get that annoying message, like, “There’s a new version of Windows.” And it takes, you know, a few minutes and you update it. You know, obviously, we patch our games and that’s where we fix a lot of bugs, but if you really wanna run a game like Overwatch or World of Warcraft successfully, you need master level engineers who have architected the client and server in such a way that you can hotfix the game on a dime. And what a hotfix is, is a server patch that no one’s client has to go down for.
Lex Fridman
(02:35:44)
Mm-hmm. That’s because you’re dealing with a huge number of players and you discover an issue and you want to respond to that issue really quickly.
Jeff Kaplan
(02:35:51)
Yeah. There are emergency issues like something’s crashing. Like, the worst case scenario is anytime the server’s crashing. Or in Overwatch, like, a really catastrophic bug would be something where you have to disable a hero. Like, someone found an exploit and you have to disable a hero from the lineup. You want to turn around that hotfix if you can in half an hour, get that hero back live. You might have somebody who only plays that hero, and the only reason they’re gonna play Overwatch is because that hero’s active. You don’t want to wait for patches and you want to hotfix as fast as you can.
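The hero-disable hotfix Kaplan describes comes down to keeping the lineup as live server-side state that clients read, rather than data baked into the client build. As a rough sketch of that idea (not Blizzard’s actual architecture; all names here are hypothetical), a server can hold a set of disabled heroes that it consults when building the pick screen, so flipping a flag takes effect for every connected player with no client download:

```python
# Hypothetical sketch of a server-side "kill switch" for heroes.
# Flipping a flag here changes what every client sees immediately --
# no client patch or download is required.

class HeroRoster:
    def __init__(self, heroes):
        self._heroes = set(heroes)
        self._disabled = set()  # live, hotfixable state

    def disable(self, hero):
        """Hotfix: pull a hero from the lineup (e.g. an exploit was found)."""
        self._disabled.add(hero)

    def enable(self, hero):
        """Hotfix rolled back: hero returns to the lineup."""
        self._disabled.discard(hero)

    def selectable(self):
        """What the client is allowed to show in the pick screen."""
        return sorted(self._heroes - self._disabled)


roster = HeroRoster(["Tracer", "Mercy", "Reinhardt"])
roster.disable("Mercy")        # emergency hotfix
print(roster.selectable())     # ['Reinhardt', 'Tracer']
roster.enable("Mercy")         # fix shipped, hero back live
print(roster.selectable())     # ['Mercy', 'Reinhardt', 'Tracer']
```

The real engineering work, as he notes next, is architecting the client and server so that this kind of state, and much more, can change while everything stays up.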
Lex Fridman
(02:36:32)
And then also to improve the game quickly, even subtle stuff, to do that.
Jeff Kaplan
(02:36:36)
Yeah. Players feel it. Like, they… That’s where there’s this idea of, like, the love and the craftsmanship of the developer that you can feel. Like, any product, you know your iPhone or Android or, like, any computer or consumer product, you can feel when there are people who loved it behind it and aren’t just putting it out on a shelf. And games have that as well, where you can feel the heart and soul of the developer in the thing. And some of that’s, like, the joy and delight of, like, that there’s a Cow Level, right? That that’s… You know, you can feel the humanity of the development team-
Jeff Kaplan
(02:37:25)
… through that. But another part of that is, like, do they clean up their fucking yard, you know? Does this game work? Is it… And it’s not just the bugs and the crashes. It’s, like, when balance gets wacky and stupid and, you know, suddenly everybody’s a Barbarian and whirlwinding and no one will play anything else. You’re like, “We should probably fix that,” you know?
Lex Fridman
(02:37:50)
Oh, those were the days. I sadly was the Barbarian Whirlwind guy.
Jeff Kaplan
(02:37:54)
One-handed.
Lex Fridman
(02:37:56)
It was… Yeah, it brought so much joy. So a lot of people modern day think of you as Jeff from the Overwatch team.
Jeff Kaplan
(02:38:05)
My name is Jeff from the Overwatch team. I’m Jeff from the Overwatch team. I’m Jeff from the Overwatch team.
Lex Fridman
(02:38:12)
But y’all must have forgot, you were the game director of WoW in an era when WoW was one of the biggest games in the world. Just, you know, looking back, what wisdom can you draw from that time when you got to experience this era of gaming that changed gaming forever, where it’s millions of people playing this video game?
Jeff Kaplan
(02:38:37)
It was the first game I worked on, and I joined it as this entry-level dude. I still have my offer letter from Blizzard, which was for 35K a year.
Jeff Kaplan
(02:38:50)
You know, that’s what I was making. And very shortly after WoW shipped, you know, Allen left as lead before the beta, or like right around the beta, and then Rob took over as the lead designer, and then he left the team very shortly after WoW shipped to go start StarCraft II. And he put myself and Tom Chilton in charge. Tom is a designer who… He was a great partner of mine and a great leader and he actually came from Ultima Online. And so I always looked up to Tom because he had a lot more experience than I did. And this is like early 2005, the world was on fire, the servers were barely running… WoW had just taken off like gangbusters, and they basically put me and Tom in charge of WoW. And at the time they promoted me, my title…
Jeff Kaplan
(02:40:00)
I didn’t even have a lead title, my title was Senior Game Designer. And Tom and I were running the design of WoW at that time. So I thought it was totally normal, and I thought what we were experiencing with WoW was just normal for making a video game because it was the first video game that I had worked on. I thought it was the funnest joyride because we were working on WoW, we were still working insane hours and then I’d get home, eat dinner, and then me and my wife would log in and play WoW, you know, for four hours, and then I’d go in the next day and I’d work… And it was just this… My whole life was World of Warcraft. And I loved it.
Jeff Kaplan
(02:40:59)
Like I loved everything from, you know, the creative meetings with Chris Metzen and just what an inspiration and muse he was, down to the simplest, dumbest design stuff that like we as game designers, like, you wanna talk about why a button is in the lower-left versus lower-right and what does that mean? That’s like two hours of discussion. And is there a better way? Like the 10,000 minutiae problems were thrilling to me. And then also the big disasters. Like the big… I had in the early days of WoW, we didn’t really have all the processes in place for, like, how to deal with being a successful online game, and I literally had GMs, like game masters, these are customer support guys, calling my home phone at 3:00 in the morning.
Jeff Kaplan
(02:42:02)
Like, I remember this one time there was some faction token in Stranglethorn Vale and they figured out a way to exploit it, and this GM calls me panicked, it’s 3:00 in the morning. He’s like, “I’m just spawning…” Uh, what, what did we call ’em? Guardians of Blizzard. They were these giant infernals that we just made that instantly death touched anything. We used to have them when we were in the beta, like off in the distance of places players weren’t supposed to get in case they cheated their way there. And this GM is just spawning them all over Stranglethorn Vale because he’s worried because the players are exploiting. Yeah. It’s like 3:00 in the morning and I’m talking in hushed tones because my wife is sleeping right there in the bed.
Jeff Kaplan
(02:42:50)
I’m doing this ’cause it was actually like before the cell phone days when I actually had a landline. But that’s just how… And I loved it. I loved the thrill of those big moments, the minutiae. And I felt like, through running WoW live, which was me and Tom together with an amazing team, we kind of learned how to be the WoW team. And putting WoW in a box and shipping it was like only chapter one in a 12-chapter book essentially. And that first chapter was how to run the game, how to patch it, what type of content to make, how to deal with emergencies, what our customer support should be like. I mean, we would debate should we have a launcher or not. You know, in the early days, the only reason the launcher existed in WoW was to run anti-cheat on your machine.
Jeff Kaplan
(02:43:45)
And we had a moment where we figured out how to put that into the game and out of the launcher. And it was the first time I ever really had an in-depth conversation with Mike Morhaime. He’s like, “You gotta bring the launcher back, guys.” We’re like, “Why?” He’s like, “There’s no better way for us to talk to our players.” Um, and I remember trying to hide the launcher. And to this day, Mike was right. Like, that launcher turned out to be the best thing we ever had. That’s essentially what Battle.net has morphed into these days. But all those decisions and when it came time to make Burning Crusade, you know, at that point, Tom and I were leads. We were full leads; they had actually promoted us.
Jeff Kaplan
(02:44:29)
There were two big exoduses of groups that quit Blizzard; they were disenfranchised, if you can believe it. Like we just shipped World of Warcraft and this whole group just walked out the door. I was actually sitting, my desk faced Morhaime’s office, and I watched them all go in and quit, and they were the group that formed Carbine… which made the game WildStar. It ended up taking them 10 years to make, and they were just really unhappy with World of Warcraft, and they were unhappy with… I don’t know what they were unhappy with. They were unhappy enough to walk out the door right after we had shipped WoW.
Lex Fridman
(02:45:16)
That’s incredible. Like, what, what is it? Just because they put their heart and soul into the game and they maybe get exhausted in a certain kind of way?
Jeff Kaplan
(02:45:23)
Yeah, and I don’t want to… It’s not fair of me to speak on their behalf. I think they were promised some compensation that they didn’t immediately see. I don’t know if the game… Like, here’s the weird part when you make a game. When you come up with the idea and you start pitching it to people, that’s the best the game is ever gonna be, and then you work on it. Like, you know, games I worked on take five years, you know? Overwatch was two and a half, three years. Every day you get closer to ship, the imagination of the ideal game gets farther and farther from the reality, and you’re always shipping this, like, greatly sacrificed thing that nowhere near matches the imagination of the inception of the idea, so you become disenfranchised with the concept.
Lex Fridman
(02:46:24)
So in some sense, you’re shipping… You’re constantly in a state of disappointment. You’re basically shipping a lesser thing than you’ve been dreaming about.
Jeff Kaplan
(02:46:37)
Yes.
Lex Fridman
(02:46:37)
You’re doing less and less and less, saying no and no, and cutting, and all that kind of stuff. Yeah, it’s difficult, psychologically difficult, but nevertheless, the result when you zoom out, it’s one of the greatest games of all time that millions of people played for thousands of hours. It’s just… Did you ever have an experience, a realization of how huge WoW was in terms of not, like, statistics on the server and so on, but the cultural impact it had?
Jeff Kaplan
(02:47:05)
The first time was the first BlizzCon, which was in 2005. So when WoW shipped—and this is so weird to tell people—but on the team, not everyone, but a lot of us were very demoralized after WoW shipped. There were all sorts of issues with the servers because the game was way more successful than we expected it to be and the server load was just nuts. Like, we were just… We were doing our best to hire database programmers, you know, ’cause we just didn’t know how to deal with the sheer scope of the game. But when you’re an individual like… And at that time, like I mentioned, there were multiple exoduses of people who quit Blizzard. They went and formed a couple notable studios. One was Carbine, the other was Red 5.
Jeff Kaplan
(02:48:11)
And we lost, like, kind of our core people. Like, when Red 5 started, that was our team leader, that was Mark Kern, and our art director, Bill Petras, they quit. When Carbine started, it was, I think, all of our animators and some of our best programmers and… Like, it’s really demoralizing when you lose team members like that, but then we were also underwater. Like, the servers aren’t running, we’re not able to keep up with demand, and we had to start putting patches out, and now we’re making patches like… For a while we had one animator who stuck around, and then eventually he left also, but you’re doing like, okay, we gotta now do a patch without an animator.
Jeff Kaplan
(02:49:01)
A lot of our art team was gone at that point, and you’re trying to keep the ship afloat and the morale was just in the shitter. Like, everybody felt very down on Team Two, the WoW team was called Team Two, like we had somehow failed. And during that time, there was this idea to do BlizzCon, and the way that started was EverQuest had done these, like, meetups because they knew it was, like, a big guild social game, and people would get together at like some hotel ballroom and you’d sit with your guild at like a banquet room table.
Jeff Kaplan
(02:49:44)
And to give credit where credit’s due, I remember sitting in the meeting for what was to become BlizzCon, it was Pardo who said, “Blizzard’s bigger than that. We’re not just one game, and I know everybody’s focused on World of Warcraft right now, we should do BlizzCon.” And at the time, we had a game called StarCraft: Ghost in development, and that was getting ready to show, and there was Frozen Throne, which was the expansion to Warcraft III, but, like, we knew we were gonna make StarCraft II. And then there was a lot of motion happening with Blizzard North, which is a whole separate story, but there was like, hey, we could really do a cool show-
Jeff Kaplan
(02:50:27)
… that’s this BlizzCon thing. And at first, we kind of announced it and it just was crickets. You know when you’re, like, excited about something, you’re like, “Man, everybody’s gonna love it. Like, we’re doing BlizzCon,” and everybody’s kinda like, crickets, “What’s BlizzCon? Who cares?” And we’re, we’re idiots, we’re reading the forums, and the forums are just flaming us all the time, like, “There’s lag on this server and can’t log into that server.” And that was our perspective of what was happening. And then, like I said, give Mike Morhaime credit where credit’s due. He kept us committed to that launcher, and they put the BlizzCon tickets on the launcher, which they hadn’t done before; it had just been on the website.
Jeff Kaplan
(02:51:13)
And so everybody who logged into World of Warcraft suddenly got this like, “Hey, we’re doing BlizzCon in Anaheim, do you wanna come?” Sold out instant. Like, instantly sold out. And when I showed up at that show, it… One of the most emotional things in my life. It was nothing but an outpouring of love. And up until that point, your perception was, because you’re just reading online and it was… The perception is such hatred, because people who are passionate online, they express themselves in the harshest ways ’cause it gets attention. You know, that’s the lesson I should’ve learned from my early days. And it’s such an unfortunate thing, because then you met these people in person and they loved World of Warcraft.
Jeff Kaplan
(02:52:12)
And all they wanted to do was talk about World of Warcraft and hear about what was coming next and be around other people who loved World of Warcraft, and-
Lex Fridman
(02:52:22)
It’s incredible. It’s a fascinating theme, to me, about human nature, and it’s absolutely true, and I wish it was a thing that could be solved. But then again, maybe not. Maybe that’s just the way it is. But in person, all of the people that are passionate about a particular topic, and whatever that topic is, it could be games, it could be at conferences, technical conferences, they are all mostly full of love. And just the way they talk about stuff, they nerd out. Even the disagreements are drenched in this respect and appreciation and love for the game, for the topic. And online, you’re right, I don’t know if it’s because of popularity or clicks or so on, but it’s just the way of speaking on the internet is more mockery and-
Jeff Kaplan
(02:53:12)
Cynical.
Lex Fridman
(02:53:13)
If you say, “I love this thing. Here’s an apple. I love apples,” or, “I love bananas. I love fruit…” Like, “I love X,” whatever. You just get made fun of. You get… And then so what the lesson you learn from that is, “Well, I’m just not going to speak up when I love something. I’m going to instead speak up when I, maybe how much I hate another thing that’s similar to it.” Or maybe join in when we’re making fun of a particular quirky thing, about, “Don’t you hate it when bananas are too ripe or too…” Versus, like, calling out the elephant in the room, which is, “We’re all gathered here today ’cause we love the thing.”
Lex Fridman
(02:53:58)
It’s interesting. It’s that aspect of the internet that I think is jarring to a lot of people depending on the game, but if you go to Discord or Reddit or so on, in the communities that love a particular video game, there’s a… If you’re not used to it, and I don’t often go, so when I go it’s like, “Wow, there’s a lot of, like, pretty intense kinda mockery and derision and so on.” But you get used to it pretty quick and you understand it. I just, I wish there was more love.

Online toxicity

Jeff Kaplan
(02:54:25)
I feel bad because I played a role in the earliest development of some of that online culture. It really was social media before it was called social media. You know, I ran a… I actually had this reputation for being edgier than I really was. There were a couple notable posts that survived 30 years that people like to look back on but they don’t look back on the ones where I’m just being chill. And that’s unfortunate. I think a lot as a game designer about the design of social media. And unfortunately, social media in general is designed in such a way where the maximum hyperbole works, and the way you get the most points is by being max hyperbolic.
Jeff Kaplan
(02:55:27)
And usually, unfortunately, it’s more in the negative direction than the positive direction. You know, if I say, “That’s, that’s a pretty nice mug. I’ve seen nicer, but I like this one,” no one’s interested in that. I have to either love this thing, or better, this thing’s a crime against humanity in some way. And it’s very self-reinforcing and everybody sort of feeds into it and-
Lex Fridman
(02:56:03)
Especially when you’re young. I got to see this kinda interesting thing. So, as we were talking about, you’re from Pasadena, and I’ve been spending a lot of time at Caltech working on robots, and we get to see students come in from high school. Undergraduates come in and, like, on a tour, hang out with the robots. And middle schoolers also. And the interesting thing you see is, the younger they are, the more prevalent this effect, which is all of them are kind of afraid to show that they think a thing is awesome. They’re all… You could just feel they’re checking, “Is it okay?”
Lex Fridman
(02:56:48)
So their kinda default mode is, whatever, everything is stupid, this is stupid. You know, ’cause that’s the safe place to be. It’s a real act of vulnerability. I would say it’s an act of courage, especially for a young person, to be like, “Holy shit, that’s awesome.” Like, if I think this is awesome, I’m gonna be the nerd, I’m gonna take the risk and be made fun of for saying, “I love this,” in that case, it’s, “I love this robot.” So that’s an actual psychological effect that young people are dealing with, in person also. So I think, I just wanna say, for young people listening to this, be vulnerable, be courageous and say you love a thing if you love a thing. And do more of that on the internet, I think.
Lex Fridman
(02:57:35)
I think people make up the internet, people build the internet, and young people, more than anybody else, define the future of the internet. So put more love out there in the world. If you love a video game, if you love Overwatch, say you love it.
Jeff Kaplan
(02:57:50)
I couldn’t agree more. You know, as somebody who’s taken a lot of heat online, like any game developer, you just get destroyed. Doing what you do, you must get destroyed, you know? And it doesn’t matter, you get 100 compliments, it’s the one, you know, you’re… And you’re supposed to read it and supposed to be fine with it and have it not affect you. It’ll stay with you for years, you know? I have those. And the cheesy way I think about it is, like, is there some kind of social Darwinism going on? And my big worry is that there are creators… Like, now being a creator of anything, writer, musician, you know, make online videos, whatever creator means to you, make games.
Jeff Kaplan
(02:58:46)
Now part of the skillset is being able to weather like a fire hose of criticism like the world has never seen. And I make up these scenarios in my head of like, would Van Gogh have existed if, you know, Reddit and all these things were out there commenting on… Like, how many people were able to communicate with Beethoven in his lifetime, or in a week? Like, how many people could comment on his music directly to him? Versus like if I want to insult Brad Pitt right now, I can just go on 10 different devices and do it. And it’s like that level of access is very dangerous, and I worry that there’s a whole group of people receding from us whose brilliance we’ll never see, and they’re being shut out by the negativity.
Jeff Kaplan
(02:59:55)
There’s a very real example: Jay Wilson, who I think is one of the great design minds, and who was the game director of Diablo III. And he took so much heat, it just affected him to the point where he essentially retired from making games. Went and, you know, wrote novels. I was very happy for him because, you know, I’m glad he found his place, and I think he’s getting back into making games now. But we lost, we essentially… Like, think how many people loved Diablo III and played the shit out of Diablo III. And Jay is one of the people you have to thank for that. And yet that community basically removed him from making games for like 10, 15 years, and it feels criminal to me.
Lex Fridman
(03:00:50)
Yeah, absolutely. So this is a call to action, again. People out there: support creators, especially young creators. They need it. You think negativity has no cost, but it does. You’re robbing the world of some of the great creations. And also, allow creators to suck and to improve. Because that’s what the process of creation is, taking risks. Taking risks means being vulnerable, being cringe.
Lex Fridman
(03:01:25)
Doing the thing that’s, like, the embarrassing failure, where you’re standing there in a silly clown outfit, on stage, dancing, and nobody’s laughing. Comedians go through this all the time. They talk about this all the time, when they bomb, right? The act just doesn’t work, and you have to go through that. And you have to support the creators through that journey. In order to have great things, we need to support those folks. So, after shipping WoW’s Wrath of the Lich King, again, many consider it to be one of the great expansions for WoW, you stepped down as WoW’s game director and switched to developing Titan.

Why Titan failed

Lex Fridman
(03:02:14)
This epic, huge game that promised to be, sort of, the MMO to end all MMOs. I mean, it’s kind of a legendary vision for a game, right? It’s gigantic. With, like you said, a brilliant team, a team that’s now hardened and knows how to make a great game. But it was canceled after seven years in development. So, tell me, what was the vision of the game and what happened?
Jeff Kaplan
(03:02:47)
Sure. So, as we were experiencing success with World of Warcraft, there was this concept in the studio that WoW wasn’t gonna last forever.
Jeff Kaplan
(03:02:58)
WoW would be maybe successful for five years, and eventually kind of age out. And the studio would be in real trouble if we didn’t have another massively multiplayer online game sort of waiting in the wings. So starting around, I wanna say 2006, maybe 2005, the talk of starting a team really picked up momentum, and we were working on Burning Crusade. Rob Pardo took the helm to start sort of Titan development. We didn’t even really have a team then. And I remember being embroiled in Burning Crusade and going to Titan meetings, and Rob pulled a group from kind of across the company, and we started talking about what this next MMO could be and when it would get going. And eventually, it started in earnest, like real development, around 2007.
Jeff Kaplan
(03:04:12)
The first team members joined, and it was a real ambitious project, including like building a new engine from scratch. I think maybe the first team member was a guy named John LaFleur, who was just a stellar game programmer, and the engine which ultimately failed for Titan ended up becoming the engine for Overwatch, which is a great success story for him. And the idea behind the game, it was gonna take place in future Earth, and the players played as secret agents. And by day, they all had day jobs, and by night, they went off and did cool secret agent stuff. And the secret agent stuff was very first-person shooter, but over-the-top abilities like you would see in Overwatch, because that’s where they came from. And the by day stuff, we were gonna let you run businesses.
Jeff Kaplan
(03:05:18)
We took a lot of influence from games like Animal Crossing, Harvest Moon, The Sims. We had a brilliant game designer and game director named Matt Brown, who was the creative director on The Sims. He came over. And so we had this vision that there was gonna be all this like daytime business house stuff. You could build a house. You could live in a neighborhood. And beyond that, there was also a vision on the technical side, game design and technical side, that unlike World of Warcraft, which, the modern-day term for it is that it’s sharded, meaning people play on different realms or servers.
Jeff Kaplan
(03:06:09)
On a WoW server… I haven’t been on that team in a very long time, but back in the day, you might have 5,000 people on a WoW server before they’d have to spin up another WoW server. The big idea behind Titan is that everybody would play on one server. It was a one server, one world game, and the world was massive. It was gonna take place in future Earth, and we were literally building like, we had what we called Bay City, which was San Francisco. We had, you know, Hollywood, and then we had to build all of California between that, and we also wanted to build like Cairo and London. And there’s this realization of like, how do we connect all of these? The game had driving in it, like full-blown, like GTA-style driving.
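The sharded-versus-one-world distinction can be made concrete with a toy sketch (hypothetical names and numbers, using the rough 5,000-players-per-realm figure mentioned above): a sharded game routes each login to whichever realm has room and spins up a new realm at capacity, whereas Titan’s one-server, one-world goal meant every player had to land in the same simulation.

```python
# Toy sketch of "sharding" (hypothetical, not Blizzard's real code):
# each realm is a separate copy of the world with its own population,
# and a new realm is spun up when the existing ones are full.

REALM_CAPACITY = 5000  # rough per-realm figure mentioned in the conversation

class ShardedWorld:
    def __init__(self):
        self.realms = [[]]  # realm 0 starts empty

    def log_in(self, player):
        """Route a player to the first realm with room, else spin up a new one."""
        for realm_id, population in enumerate(self.realms):
            if len(population) < REALM_CAPACITY:
                population.append(player)
                return realm_id
        self.realms.append([player])  # new realm comes online
        return len(self.realms) - 1


world = ShardedWorld()
for i in range(REALM_CAPACITY):
    world.log_in(f"player{i}")      # these all land on realm 0
print(world.log_in("one_more"))     # realm 0 is full: prints 1
```

The hard part Titan was taking on is that a single shared world can’t sidestep load this way; instead of adding realms, the one simulation itself has to scale.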
Jeff Kaplan
(03:07:06)
It was such a gargantuan, huge undertaking with a brand new engine, a brand new team, a brand new IP, intellectual property, you know, setting, which we really wrestled over. Like, the amount of time the IP took, just, you know, trying to figure out, like, are there aliens or not aliens, you know? Like, all that sounds kinda dumb and fun, but when you’re building a game, especially world-building, you have to have rules. That’s what makes world-building work, that, like, this exists in this world, and this doesn’t, and, you know, why? It’s like, ’cause someone said so, and that’s just the way it needs to be. But that development started in 2007, kind of as ideation, brainstorming, early work.
Jeff Kaplan
(03:07:59)
Really got going in late 2007, and then I had to ship Wrath of the Lich King and it was… We had the… we always did, like, a champagne toast. I still remember it because it was Election Day. I think it was like Election Day and my birthday, and the day Obama got elected, and then I left the WoW team on that day. It was memorable in all those ways. And then I joined the Titan team, and that game, we went on… Like, the fast-forward part of that is we shut it down in 2013.
Jeff Kaplan
(03:08:43)
That was one of the most painful development processes that I’ve ever been a part of, and probably deep into 2009, I knew that the game in its current form could never ship and would never exist. And by 2010, after numerous times trying to convince the powers that be that, like, this game is not gonna happen, it’s in trouble… I remember going to Mike Morhaime in 2010, and, like, you’re going to the CEO of… You know, at that time, Blizzard was a big company, and I’m like, “You gotta shut us down. We’re just gonna burn money.”
Lex Fridman
(03:09:35)
What was your intuition about why? So like from my understanding, there were a few issues. So one, with such a gigantic world, which by the way, is a beautiful dream, this kind of universe simulator, because I love… Every game you mentioned there is great. I empathize with the dream. I would love to play that game. But one of the issues, as I understand, was it was unclear what, like, the quest flow is. Like what are you supposed to really do in this game? What’s the thing that connects all of the pieces together?
Jeff Kaplan
(03:10:08)
So it was a multifaceted failure for many reasons. Ultimately, the failure of Titan lies with leadership, team leadership, myself included. Like, there’s just no getting around that. And then on top of that, like, a lot of games you can point to as being like an engineering failure, like the, you know, the servers didn’t work— … or like an art failure, like no one responded to the look of the game, or a design failure, like the… it’s just not fun or it’s tuned poorly. We failed on art, engineering, and design, and I’m cautious about calling out art because some of the best art ever made at Blizzard was made for Titan. My criticism isn’t of the art that was created. My criticism is that we never had any art cohesion, so the art looked like it could’ve come from 10 different games.
Lex Fridman
(03:11:08)
Mm-hmm. And we should say it cost $83 million across those years. So a large team doing a lot of stuff, but not converging towards a game that could actually ship.
Jeff Kaplan
(03:11:24)
Correct. As, like, a game designer, I use semantics a lot, and I like to define my semantics so people know where I’m coming from. Talking about ideas versus vision for a second: ideas are easy. Ideas, you know, I can have 10 in 10 seconds. You know, let’s make a 2D platformer about a mouse, you know, whatever. Like, you can… I want to… A secret agent who by day is, you know, doing all this cool shooting stuff, and by night is running a flower shop. You know, ideas are just infinite. At least on creative teams, you know, you have no shortage of ideas. What I call vision is the ability to not only take a great idea, but shepherd it into existence, and you’re doing that through inspiration first and foremost.
Jeff Kaplan
(03:12:24)
If you need a team to make it, you need a team to believe in the vision of the idea. And then there also has to be a technological plan for the idea. There has to be a design plan. There has to be an art style for the plan. There has to be a pragmatic production reality to the plan. And Titan, that was kind of the hubris of Blizzard in that era, at its height. We were over the era of being, you know, nervous about World of Warcraft, like, “I don’t know if people are gonna like it.” We were now in the era of, like, we made World of Warcraft. We can do no wrong. This next thing is gonna be the best ever. And there was also a lot of what I call anticipatory hiring-
Jeff Kaplan
(03:13:21)
… or, like, there’s opportunity hiring and then there’s also anticipatory hiring. I have the exact opposite hiring philosophy. I won’t hire anybody on any team until, like, we’re feeling like we gotta work overtime or, like, we might not ship if we don’t get, you know, somebody else in here. And Titan kinda had that hubris of like, well, we’re gonna build a really big world. We don’t know the story of the world yet. We don’t really have it mapped out what it should be like. We don’t have the art style really defined. We don’t know technically how we’re gonna make the art or what the constraints of it are, but we know we’re gonna build a really big world, so let’s just start hiring environmental artists.
Jeff Kaplan
(03:14:08)
And, like, in one year, we would hire, like, 70 environmental artists from all over the world. You know, we’re getting visas and, like, the top-tier talent, ’cause we were at the height of World of Warcraft, and nobody knew the team that they were coming onto. It was Blizzard’s next MMO, top secret, and, you know, their first day at work, like, some poor guy from Belgium just shows up, and he’s at his first day at work, and he’s like, “Oh, are we making World of StarCraft? Is that…” And they’re like, “No, dude. Let me show you it.” And he’s like, “What is this game?” You know? We were in that world, and we hired way too many people.
Jeff Kaplan
(03:14:50)
The right way to incubate a video game is you have the smallest group possible and you try to get the idea across with whatever technology you can get your hands on, using other engines, using art from whatever. You prove out that idea, and once you know what you’re doing, then you expand the team. You know the cliche of “idle hands is the devil’s work,” or whatever.
Lex Fridman
(03:15:20)
Yeah, yeah.
Jeff Kaplan
(03:15:21)
You have this, like, brilliant team, huge, and we don’t have a road map for what we’re making or how we’re gonna make it. And now you’re having to deal with all these people. Like, they’re coming into your office, you know, you’re trying to figure out what is the quest flow, how do I design the quest system for Titan, how can we prototype it? And we’re like, “Oh, this prop artist over here is running out of stuff to do. What props should he make? Should he work on Chinatown or the Hollywood set?” And you’re just making up busy work. The engine didn’t work.
Jeff Kaplan
(03:16:01)
When we would run play tests on Titan, we would have to tell the team, “Stop checking in because it slows us down.” We had this really great technical artist, a guy named Dylan Jones, and he was on Titan with us, and I remember, in, like, the last days, we asked him, because he was a very active user of the Titan editor. The editor was called Titan Edit, or TED, and to this day, TED is the proprietary tool for Overwatch, since Overwatch came from the Titan engine-
Jeff Kaplan
(03:16:37)
… which was Tank. And we said to Dylan, “I want you to log your uptime in the editor, in TED.” And in a 40-hour week, he was only able to work for 20 hours. And you can imagine, you’re building a team of the best and the—like, the best in the industry, and they can’t work. So not only are you just burning cash faster than anybody on the planet, it’s also, like, imagine having fighter pilots, but we don’t let them fly. Like, the creative frustration and the way that that manifested itself, and how demoralized the team got, it was a disaster.
Lex Fridman
(03:17:24)
And so many elements of that were done completely differently for Overwatch, which turned out to be this incredibly masterful execution on a short timescale with a small team with a clear vision. I read that, sort of, if you were to compare Overwatch and Titan, the defining characteristic for the Titan team was they said yes to everything, and the Overwatch team said no to everything. Meaning focus, like deep, deep focus on the execution of a very clear vision. And maybe that’s the process of designing games, like you said: on a team that’s full of incredible ideas, because it’s creative minds, it’s constantly saying no. It’s a really painful process, but perhaps it is the responsibility of leadership to just keep saying no. Which sucks.
Lex Fridman
(03:18:17)
I guess it sucks to be a leader on a team in that sense, because you’re constantly saying no.
Jeff Kaplan
(03:18:21)
Being a creative leader, you’re in two modes. You’re pushing or you’re pulling, and whatever mode you’re in is the exact opposite of the team. When they’re not thinking outside the box enough or, like, elevating the vision enough, that’s when you’re pushing them. Like, “Come on guys,” you know, “don’t worry about the schedule. We got—” you know, “capture hearts and minds, inspire people.” And when they’re going a little crazy and they’re, you know, an endless source of great ideas and really fun development, that’s when you gotta pull and say, “Guys, we need to ship this. The best feature we can add for the player is shipping.” That was a common phrase that we had.

Overwatch in six weeks

Lex Fridman
(03:19:09)
So when Titan was canceled, I mean that must’ve been a gigantic heartbreak for everybody. And there was this moment when the plan was for the Titan team to be disbanded and moved elsewhere, but you fought for keeping some part of the Titan team, the core of the team, together, and Mike Morhaime gave you six weeks to come up with a pitch for a new game. And you’ve talked about this process, and you’ve mentioned that there were three possible ideas, directions you were thinking about. A StarCraft MMO, maybe an MMO in a new IP called CrossWorlds, and then the third idea was Overwatch. Can you take me through those six weeks?
Jeff Kaplan
(03:19:56)
Yeah, the six weeks, it’s… It was supposed to be the greatest time ever if you think about it. Because you’re a game developer at Blizzard, and you get to come up with a new idea. So that sounds awesome, like, to everybody at Blizzard, to all game developers, it sounds great. But we were probably the most demoralized we’d ever been in our careers. At least I was, you know? I didn’t know if I was gonna be fired. I didn’t know if that was the end of my career at that point. And so it was like a really serious, kind of dire environment that this was happening in. And we were given two criteria that we had to hit for these pitches. The first one was that we had to ship within two years. And that is a very ambitious timeframe for any game.
Lex Fridman
(03:20:53)
Yeah, crazy. That’s crazy.
Jeff Kaplan
(03:20:54)
But for a Blizzard game, it’s kind of insane. And then the second… Okay, the second is even more ambitious and crazy, was whatever we made, whatever we pitched had to have the potential to have World of Warcraft-like revenue.
Lex Fridman
(03:21:13)
Yeah. Right.
Jeff Kaplan
(03:21:14)
And to date, at that point, there was one game that had World of Warcraft-like revenue, which was World of Warcraft, so… Immediately, I just threw out the revenue thing ’cause it’s all fucking Monopoly money to me. Like, this game money is… It’s insane, and I just don’t think about it. That’s someone else’s problem. But I did want to be as realistic as I could about the schedule part of it. So most of our team, the Titan team, was 140-some people. Most of that team got moved to go work on Heroes of the Storm, the D3 expansion, World of Warcraft, Hearthstone. So immediately, a large number of the team was gone. Then we had a bunch of, like, what we called temp loans-
Jeff Kaplan
(03:22:05)
… people that someday were gonna come back to us, but we loaned off for, like, a six-month tour of duty. And then there was a very small team. There was a group of engineers that was mothballing Titan, so it exists somewhere at Blizzard at that point. And they were also deconstructing the engine, because they knew it didn’t work anymore, and to make a new game, it had to be heavily reworked into sort of what it is today. And then there was a very small creative group that was supposed to come up with these three pitches and given six weeks. And we just sort of arbitrarily decided, like, let’s spend two weeks on each pitch. The ground rules that I sort of led with is you have to be all in for the two weeks on the pitch. So if we’re…
Jeff Kaplan
(03:22:56)
You know, pitch one was a StarCraft MMO, and we have to live and breathe and want it more than anything. And I kind of warned everybody. I said, “At the end of this two weeks, you’re going to think this is the only game idea, and you’re not going to be invested in the next, but we’re going to throw it out as soon as we finish it and do the next one.” And the StarCraft MMO, I actually really loved that pitch. It was called StarCraft Frontiers. And the concept was, like, less of you’re playing, like, space marine. Like, it was less armies. StarCraft the RTS is always about the three races and the giant armies.
Jeff Kaplan
(03:23:34)
And kind of what made WoW wow and separate from the Warcraft RTS series was that instead of being, like, a footman in the army, in World of Warcraft, you were like a lone adventurer, you know, make your mark on the world. So we had this idea, it was this old Chris Metzen drawing of a space prospector. And I love that idea that, like, somewhere out in, like, where all the giant StarCraft battles were happening, you know, thousands of Zerg and Protoss and Terran, there’s, like, this, like, lone prospector on some planet, like, going through, like, a mysterious dungeon, you know, looking for minerals but finding monsters. Like, it was that kind of spirit of-
Lex Fridman
(03:24:22)
That’s awesome.
Jeff Kaplan
(03:24:23)
… more on the ground level.
Lex Fridman
(03:24:24)
I didn’t even think about that because my intuition with a StarCraft MMO would be the soldier as part of the army, right? The prospector. That’s such a beautiful vision. Yeah.
Jeff Kaplan
(03:24:32)
Yeah, I-
Lex Fridman
(03:24:33)
Looking for the resources and on the way finding the monsters.
Jeff Kaplan
(03:24:37)
You want to be on the ground f- Like, what’s it like on the ground floor? And I don’t want to be a minion in a giant army. I want to be Indiana Jones in space, you know?
Lex Fridman
(03:24:49)
Nice.
Jeff Kaplan
(03:24:50)
So then there was this Metzen picture of the prospector, and then two of the most amazing artists, Arnold Tsang and Peter Lee. Arnold’s the great character artist. Peter Lee’s the great environment artist. They did this concept art for Frontiers that was Metzen’s space prospector. He’s smoking a cigar, and he’s got his foot on a Hydralisk skull.
Lex Fridman
(03:25:15)
Nice.
Jeff Kaplan
(03:25:15)
And then there’s, like, a Medivac in the background, and they’re on this, like, big alien planet. And, like, that picture, you just wanted to like, “Here’s my money. I’ll pre-order now. Like, sign me up for that game.” That picture ended up being McCree from Overwatch. We redid it.
Lex Fridman
(03:25:37)
Nice.
Jeff Kaplan
(03:25:37)
But, but yeah, that’s, that was where McCree actually came from. So that was the StarCraft Frontiers idea. We kind of, we, we went all in on the design. We had a world design. We had class design, like how, how the classes would work, what progression might look like. And you also have to think when you’re trying to design an MMO, like, what could expansions and live content be like? And we put together a really good pitch. We all knew there’s no way you can make this game. Like, this, even though it was more focused than Titan, it’s five years on Blizzard’s best day with nothing going wrong, in a perfect scenario, five years to make that game probably with, you know, 150 to 200 people.
Jeff Kaplan
(03:26:26)
Like, these 40 people are not making that game in two years. So as much as I… Like, again, that was an idea, not a vision, ’cause it lacked, it lacked the path to reality, you know? There-
Lex Fridman
(03:26:40)
‘Cause that’s a legit large-scale MMO in a, in a world that you haven’t quite developed in the way that an MMO needs, that was really crafted for the RTS, the real-time strategy formula of StarCraft. And it’s in space. It’s-
Jeff Kaplan
(03:26:53)
Yeah.
Lex Fridman
(03:26:53)
It’s… It would, it would take… I mean, it would be incredible, but it would be a five-year and realistically even more.
Jeff Kaplan
(03:26:59)
Like, an endless thing that you’d spin on on that team. You’re making the StarCraft game. How do you get from planet to planet?
Lex Fridman
(03:27:05)
Yeah.
Jeff Kaplan
(03:27:05)
Is it a cut scene? No one’s going to want a cut scene, but we should probably make it a cut scene because that’s easy. But well, we gotta have space flight. That… You’re adding, like, three years just by saying, “We gotta have space flight.”
Lex Fridman
(03:27:18)
You are. Yeah.
Jeff Kaplan
(03:27:19)
And then how do you make a space game without space flight? We’ve all played them. We know, we know those games, so.
Lex Fridman
(03:27:25)
So are you essentially, when you’re brainstorming like that, and by the way, such an incredible thing, for two weeks, you’re just really falling in love with the game altogether and trying to figure out if it’s actually possible. So if you’re developing that, are you just constantly trying to say, like, “What is the simplest possible thing we can do that’s a complete world?” Like, are you constantly trying to simplify or you’re allowing yourself to go big?
Jeff Kaplan
(03:27:49)
So when you’re brainstorming and you’re with the team and you’re the creative leader, it’s, “Guys, what’s fucking amazing?” What’s big? What do players need? There’s a Blizzard design value called “what is the fantasy?” What is the fantasy? You want to be in space. You want to be in the StarCraft universe, and then your job as the game director, and if you have a great creative director, art director, tech director, the director should be scoping it back into reality. The mistake I see on a lot of game teams is scope becomes a production problem. You give it to the project managers or the executives or the producers to say, “No, there’s not enough time.” Or, “You guys should hire more,” ’cause-
Lex Fridman
(03:28:44)
Right.
Jeff Kaplan
(03:28:46)
Like, what do executives, what do those types have at their disposal? They can hit you with meetings in Outlook and tell you that you can hire more people. That’s not really how you get the game made.
Lex Fridman
(03:28:59)
That’s why they get paid the big bucks.
Jeff Kaplan
(03:29:02)
The scoping, your best-case scenario is when your tech director, art director, and game director are doing the scoping. Because then you know, like, this part we gotta spend big bucks on. There’s no getting around it. This part we can cheat. If you have a giant team and one guy’s job is just to make props, you know, crates and chairs, that guy’s going to make the… You know, that’s a AAA awesome developer who’s going to put his heart and soul into it. If you let him, he’ll take, you know, six weeks to make a crate. You have to have that moment where you’re like, “I kind of need 200 crates. So just spend, like, a couple hours on that one.” And that’s a hard thing to say to somebody.
Lex Fridman
(03:29:52)
You’re doing this kind of scope carving while also talking about “what is the fantasy.” So you’re, there’s a tension there that you’re constantly dancing with. So you’re, you’re allowing yourself to think big, but then scoping it down, and doing that, what, on a scale of days in this case, like?
Jeff Kaplan
(03:30:12)
Yeah. We had two weeks, so, and I don’t think we were… I was working on weekends, but we weren’t getting the group together. So it’s, you know, like 10 working days.
Lex Fridman
(03:30:24)
And then you, like, shut it off and go to idea number two?
Jeff Kaplan
(03:30:27)
Yeah. Idea number two was CrossWorlds. That was a Metzen vision for a universe, and, like, I’m glad Metzen’s back at Blizzard, and I hope they make this game someday. The way Chris described it was there’s a planet on the edge of the universe that’s like the Mos Eisley space port with all these, you know, freakish aliens and people from all walks of life-
Lex Fridman
(03:30:58)
Nice.
Jeff Kaplan
(03:30:58)
… and it’s kind of seedy and criminal. And there’s traders and smugglers and diplomats and… But this one planet is sort of the planet that they’ve agreed to like meet on, and this is like the neutral place, and then the game was going to take place on that planet, so-
Lex Fridman
(03:31:16)
This is awesome.
Jeff Kaplan
(03:31:17)
Yeah. So that was more of like a world IP driven one that was really inspired by Chris.
Lex Fridman
(03:31:24)
And that allows you to play with different characters, different… I like that, I like that idea a lot, because it’s the meeting place of different worlds, and then you can allow your imagination to drive what the worlds from which they came from are like. So you don’t have to design those worlds.
Jeff Kaplan
(03:31:41)
No, you don’t have to design them, but then they’re yours. Like, if the players really are reacting to, like, the Green People planet, or whatever, and someday you’re like, “Hey, what expansion should we make?” “I don’t know. Green People planet.”
Lex Fridman
(03:31:53)
Green People, yeah.
Jeff Kaplan
(03:31:54)
Like, “Let’s do it.”
Lex Fridman
(03:31:56)
I like it.
Jeff Kaplan
(03:31:57)
So it was actually that, it was CrossWorlds. We were working on CrossWorlds, and like with StarCraft Frontiers, you know, for Frontiers, we were having the class meetings, you know, how class progression would work, like, the game designery stuff. And on CrossWorlds, we were having a class meeting. A big decision in, like, RPG-type games is always: are you doing, like, skill based or class based?
Jeff Kaplan
(03:32:26)
And it’s usually some combination of those, but class based, you’re, like, choosing, “I’m going to be a warrior, therefore I use sword and shield, and I do these things.” Whereas more of a skill-based game is, everybody’s kind of an avatar, and then the skills that you pick define you, so I might take that I know how to use swords. So you’re kind of making those decisions, and with all things game design, there’s no right or wrong. It’s all trade-offs. So the trade-off decision we were making is like, “Oh, I think we want to be class based with this CrossWorlds thing,” and we were in a design meeting, and one of my favorite designers of all time is a guy named Geoff Goodman. He was one of the original WoW encounter designers; he designed, like, Onyxia and all the big raid bosses.
Jeff Kaplan
(03:33:17)
Like, if someone has a favorite raid boss, Geoff probably designed it. And he just kind of off the cuff said in this meeting, he said, “I wish instead of making, like, six classes, I wish we could make 50 classes. And I wish instead of having, like, you know, 100 abilities on the classes, the 50 classes all just had, like, one or two things that was really interesting about them.” And then the class meeting ended. Like, we designed our six classes in that meeting, and then the meeting ended. And I was back at my desk, and it just stuck with me what Geoff had said about the way he wished he could design the classes. And then I also had… We had this directory of all the amazing Titan art.
Jeff Kaplan
(03:34:13)
And I started pulling up Arnold Tsang’s characters. Arnold’s vision and his art is second to none. And I started taking some of the old Titan characters that we had designed. We had a class called the Jumper, and the Jumper could, like, teleport forward and rewind time and come back. And the Jumper used dual-wield pistols, which was, at the time, designed after my dual G18s from Modern Warfare 2. It was my favorite loadout. I was just cribbing Infinity Ward. That’s where Tracer’s guns came from.
Jeff Kaplan
(03:34:53)
And we had all these, like, different guns, like, some that bloomed and some that, you know, had this, like, really crazy recoil, and we had other types of guns. And I took every version of, like, the Titan Jumper, and I just distilled it into what I thought was the best version of the Jumper, which was, you know, the dual-wield pistols, the blink, the recall, and time bomb. And then I took Arnold Tsang’s art, and I went, you know, to Arnold, and I’m like, “What if this wasn’t, like, a class? You know, who is this as a person, not a class?” And Arnold’s like, “What if she’s British, and her name’s Tracer?” And, like, that was the origin of Overwatch.
Jeff Kaplan
(03:35:41)
And some of the pragmatic part of that was I knew that Geoff Goodman was gonna be on this team, and I knew that Arnold Tsang was gonna be on this team. And it’s a play-to-your-strengths moment. Like, what could we make in two years with the talent we have, and what is realistic? Like, what could we realistically make? And so then I just sat there, and I sort of went through a bunch of Titan classes. There was one called the Gunjack, which became Reaper. We had… Actually, the Ranger got split out and became 76 and became Bastion, of all things.
Lex Fridman
(03:36:30)
You’re describing the game of Overwatch, where exactly that vision from that meeting came to life for you. As opposed to having a small number of classes with a large number of skills, you have a large number of heroes, each with their distinct look, distinct set of skills.
Jeff Kaplan
(03:36:50)
Yeah, and the personality was a big part of it, like capturing… This isn’t some generic, the Jumper. It’s this person, Lena Oxton. You know? And she has a life, and we’re gonna, you know, make you interested in her.
Lex Fridman
(03:37:07)
Yeah, there’s, like, a deep backstory. And that’s also what’s interesting about Overwatch, is that backstory is not, like, revealed in a direct way. It’s, it sort of, like, seeps in indirectly throughout the game. So, the backstory is implied almost. And it’s told not directly. So, there’s a lot of ideas like this. And so you’re… This is the thing that the team converged to.
Jeff Kaplan
(03:37:32)
Yeah. Well, and it was funny because, like, we’re having these CrossWorlds meetings. Like, people are, you know, writing design docs and doing concept art for CrossWorlds. And, you know, we’d have some brainstorm meetings every day, and I put together… It was a seven-page deck, the Overwatch deck. And it was called Monetized Shooter at the time. And it just said, “Monetized Shooter.” And then the first slide was the League of Legends plus Team Fortress 2 logos.
Jeff Kaplan
(03:38:05)
And then I had, like, six heroes, like, sloppily designed. And as everybody was working on CrossWorlds, there were two, you know, co-leaders of that team: Chris Metzen was there, and Ray Gresko. And I remember Ray coming over. Ray is, like, a phenomenal game developer, of all time. He, like, wrote the Dark Forces engine, was the production director on Diablo III. He and I killed Titan. And then he’s at my desk looking over my shoulder, and he’s like, “Well, what are you working on? Is this the CrossWorlds pitch?” I’m like, “No, this is, like, another idea that I’m just working on on the side.” And I show him the seven slides, and he just looks at me, and he says, “Go show Metzen this.”
Jeff Kaplan
(03:39:04)
“This is what we should make instead.” And then I went and I showed Metzen, like, “Hey, this is just an idea.” And then Metzen was like, “Yes. You know, this is what we should make.” And I showed Arnold, and it was Arnold’s art. And then Ray tells me… ‘Cause every morning, we’d get the team together, ’cause we were in this dire, you know, dire straits, and we’re midway through at that point. And Ray and a producer named Matt Hawley said, “Tomorrow morning at the meeting, you’re gonna pitch this Monetized Shooter idea.” It was called Monetized Shooter because originally, when I pitched it, it was free to play and you had to buy the heroes, which is fucking terrible, but at the time, I actually thought that was a good idea.
Jeff Kaplan
(03:39:56)
And I’m walking down the hall with Matt Hawley to go, like, pitch this to this group, you know, we’re supposed to be working on CrossWorlds, and they’re like, “You gotta pitch this idea to them.” And Matt Hawley stops me in the hall and says, “You, Jeff, you cannot go into that meeting. I refuse to put up a deck in front of the team where the first slide says, ‘Monetized Shooter.’ They’ll hate that, and that’s not the spirit of who we are as, you know, creative developers.” And I’m like, “Yeah, you’re right.” Like, well, no one was supposed to see this deck anyway.
Jeff Kaplan
(03:40:34)
You guys are all looking over my shoulder. He’s like, “You need to put a name on it.” I’m like, “It’s Overwatch.” Like, right on the spot, I said the name was Overwatch. And where that had come from was, when we were working on Titan, I was really angry about this. We did this fake… I did not do this, another leader on the team did this… this fake, like, “we’re gonna put up whiteboards and everyone gets to vote for their favorite name for Titan” thing. But the person who did it already had a name in mind for the game, and just kept pushing towards that name.
Jeff Kaplan
(03:41:17)
And the thing that got the most votes was Overwatch. Overwatch in Titan was, like, a police group, essentially. But somebody had written Overwatch on that board, and it got the most votes. So I basically named the game Overwatch to, like, high-five my team, and kind of as a middle finger. Like, don’t act like it’s a democracy when it’s not.
Lex Fridman
(03:41:43)
Yeah, yeah.
Jeff Kaplan
(03:41:43)
You know? So…
Lex Fridman
(03:41:44)
So it’s a middle finger. So Overwatch, and then the, I mean, the rest is history. So what, in that slide deck, did you already have a kind of crawl, walk, run idea of the way this would be developed?
Jeff Kaplan
(03:42:00)
So my, my deck was terrible. People actually… there’s a thing called the Jeff Deck, which is: it’s always gray with black writing and then the default, like, PowerPoint blue shapes, because I just don’t bother making it look good-
Jeff Kaplan
(03:42:15)
… Besides dragging Arnold Tsang’s art, you know, desecrating it into my deck. We put together… We had this amazing game designer on the Overwatch team, a guy named Jeremy Craig who’s now actually game directing a game over at Bonfire. Jeremy, not only was he a great game designer, but he had the ability to sell things better than anybody else, visually. So Jeremy took my shitty deck, and then we had lots more, like, creative brainstorms and we thought through the game of Overwatch a lot more, and then he made this gorgeous pitch deck that we pitched. We first had to go through the Blizzard production and game directors for them to approve it and give it their thumbs up, then we had to go through the Blizzard executives, then we had to go through Activision.
Jeff Kaplan
(03:43:11)
And in that deck, because we had to speak to schedule, we had to speak to two things that were tough to speak to. One, we had to speak to schedule, and we came up with this concept of crawl, walk, run. We had identified the reason Titan failed is we just tried to run; we tried to come up with the next World of Warcraft. But if you think about World of Warcraft, it had Warcraft I, II and III to build upon to even get to the point where people gave a shit enough about that world to want to live in the world of Warcraft. So the idea was that instead of trying to cut right to World of Warcraft, let’s try to honor Warcraft I, essentially. So this first game is just to establish that there’s a universe you might give a shit about.
Jeff Kaplan
(03:44:09)
We also knew that with the timeframe we were given of two years, there was no way to create a compelling PvE experience, so we just kinda randomly put dates in a slide of crawl, walk, run, thinking it was aspirational, and really, we were just trying to save ourselves. Like, don’t cancel us. You know, this team can make something great. The other part that we had to speak to, too, was, like, a mobile strategy. Like, at that time, it was like, everything has to be also on mobile, which I think is the dumbest thing ever. And so literally what we did is, this was Jeremy’s brilliant part, we had a picture with all the boxes, and then one of them is, like, a tablet with just a fucking Photoshop of, you know, Arnold’s art on it. We’re like, “And also-“
Lex Fridman
(03:45:03)
Mobile
Jeff Kaplan
(03:45:03)
“… it’ll be on mobile.”
Lex Fridman
(03:45:06)
Brilliant. But I think this crawl, walk, run idea is really nice. So the initial idea is you would have basically a shooter with all these different characters, all these heroes, and then the walk would be the PvE version of that, co-op. And then if people really fall in love with the world, then you build a big MMO around it. Quick pause for a bathroom break. Quick 30-second thank you to our sponsors. Check them out in the description. It really is the best way to support this podcast. Go to lexfridman.com/sponsors. We got Fin for customer service AI agents, Blitzy for code generation in large code bases, BetterHelp for mental health, Shopify for selling stuff online, CodeRabbit for AI-powered code review, and Perplexity for curiosity-driven knowledge exploration.

Best Overwatch heroes

Lex Fridman
(03:46:00)
Choose wisely, my friends. And now, back to my conversation with Jeff Kaplan. And we should also say that there’s a whole world that was built around Overwatch. And one of the ideas was… So, Warcraft is a very particular kind of world. StarCraft is a particular kind of world. Diablo is a particular kind of world. And you wanted to bring Overwatch to Earth and make it positive. You gave this talk where a lot of respect was paid to the sort of dark, gritty, post-apocalyptic games set on Earth. You also gave a lot of respect to the ultra-realistic first-person shooter games like Call of Duty. And you wanted to create something more that paints a vision of a near-term hopeful future, and fun, and more sort of surreal, versus, like, ultra-real.
Lex Fridman
(03:46:57)
So it’s interesting to talk through how a world comes to life. How you think about that world, how you create the tone of the game, how you craft this vision. And not just, like, different characters like Tracer and so on, like what the personality is, but, like, bringing to life the world in which they will be. What was that process like?
Jeff Kaplan
(03:47:23)
The process was a blast. And, like, the goal was that bright, hopeful future. And the other phrase we used all the time on the team was, “A future worth fighting for.”
Lex Fridman
(03:47:34)
Mm-hmm, yes.
Jeff Kaplan
(03:47:35)
You know, if there’s gonna be all this fighting, like the… it kinda has to be worth it for something. Picking the locations in the world was the funnest thing. You know, there’s just a group of us who would sit around, and be like, “Where do you wanna go?” You know, “Santorini looks amazing.” And you’re looking at pictures, and like, “Let’s make that place.” You know, in a video game, people are gonna spend hours and hours in a location. We resisted the urge to do the common, I call them the cargo container mazes, that you see in every game. And I know why they exist, they’re easy to make, but we kinda wanted Overwatch to be this world tour of great places that you’d wanna go to.
Jeff Kaplan
(03:48:23)
Or in the case of like Oasis, it’s like, okay, maybe Iraq, back when we were making this game, wasn’t the top of people’s list, but what is the bright, hopeful version of what that could look like? So we just really tried to sell this idea of these aspirational locations. One, just to get people thinking about different places on Planet Earth and how awesome they all are. But also, from like a pure game design standpoint, you’re gonna spend a lot of time in the environment, so the environment should be pleasing and not oppressive.
Lex Fridman
(03:49:04)
Can you go through some of the heroes that you ended up putting in the game? Maybe a good way to do it is, which are your favorites? And, to the best of your knowledge, which are the internet’s favorites?
Jeff Kaplan
(03:49:16)
My favorite… I have a couple favorite heroes. Obviously, Tracer.
Lex Fridman
(03:49:23)
She’s the OG.
Jeff Kaplan
(03:49:24)
The OG, the cornerstone. You know, we put her on the front of the box. She was that moment of, “We should just take the best of the best,” and we know this gameplay is good and solid. And it’s so simple. Like, the mechanics are very easy to explain to somebody. It’s very easy to pick up. The first time anybody hits Recall and they try to wrap their mind around, like, “Wait, does that mean if I…” You know, and they’re mapping out the possibilities.
Lex Fridman
(03:49:56)
And by the way, we should say that it’s a PvP game, six versus six at first, where there’s three distinct roles that people take on on a team. And those roles, at first, I guess, were not required. Like, you could reallocate those roles as you wanted. And then to maximize the fun, you added a little bit of structure. You enforce two per role, the roles being Tank, Support, and Damage. And then there’s all the kinds of heroes that are associated with the different roles, and people pick, and there’s lore. And some people are probably, like, hardcore just one particular hero. And so there’s a lot of personality and story and community that builds around each of the heroes. But at the end of the day, it is just a fun shooter.
Jeff Kaplan
(03:50:51)
Yeah. Our goal was to pay homage to the shooters before us that we loved. There’s no way you can talk about Overwatch without talking about Team Fortress 2. Uh, Team Fortress started as a Quake mod, which was brilliant and I played tons of. Then there was Team Fortress Classic that came out with Half-Life 1.
Jeff Kaplan
(03:51:15)
And then Team Fortress 2, I think everything about it blew everybody away when it came out in 2007. And there’s obviously just huge influence there. But the shooter mechanics of Overwatch are… They hearken back to what people call the arcade or arena shooter genre. Which pains me ’cause I never… Back in the day, I didn’t think of Quake as an arcade shooter. It was almost an insulting way of saying it. But just the fast movement, really epic, over-the-top weapons. You have a high time to kill, or TTK, as players call it. Meaning you’re very survivable; you can take a few hits. Where, in a game like Call of Duty or Counter-Strike, if you get shot in the head, you’re just dead right away. So it was supposed to be this explosive, larger than life, fun, arcade-y shooter-
Jeff Kaplan
(03:52:17)
With a lot of teamwork involved.
Lex Fridman
(03:52:20)
And so you said Tracer up there? She’s the OG. Who else?
Jeff Kaplan
(03:52:25)
McCree. McCree is another. Like, I’m somebody who’s attracted to simplicity in design. And I did not design McCree’s six-shooter. The way that gun feels is phenomenal, and to capture the spirit of that, we had a designer named Mike Heiberg design the High Noon ultimate. And then just all the care and love the team put in, like when he does the ultimate, we roll a tumbleweed across the screen, like, every time. It’s a very simple hero, but the simplicity is what I like best in design.
Jeff Kaplan
(03:53:02)
I’m not a fan of when somebody starts explaining, you know, in any of these games, whether they’re MOBAs or hero shooters, and they start, like, “This guy throws orbs, and he throws three orbs, and then he runs out of his orb bank, and then he can call the orbs back, or he can catch the orbs.” And my head is spinning, and I’m like, “Just give me a fucking good gun.” You know? And I’m done.
Lex Fridman
(03:53:28)
Simplicity is everything. What about Reinhardt, the tank?
Jeff Kaplan
(03:53:32)
Reinhardt was actually my main. So I played the most of Reinhardt. That was another amazing Geoff Goodman design of this guy who just has a shield. As soon as you give somebody a shield, they know what to do. They go into protector mode. The shield was designed for your team to shoot through. The shield has since been copied by, like, every hero shooter since, and even non-hero shooters. And then he just has a giant rocket hammer. And he does a charge ability. It’s really interesting where the charge ability came from. I was playing a ton of Left 4 Dead 2, and you could play in versus mode where you could be the enemy zombie guys.
Jeff Kaplan
(03:54:16)
And there was an enemy boss zombie called The Charger who had that charge ability. And I thought, the reason that ability was so cool is because it’s a commit. Once you press the button, you’re a runaway train. And watching Reinhardts charge to their deaths is kind of hilarious, and it’s what separates a great Rein from a shitty one.

The challenge of matchmaking

Lex Fridman
(03:54:37)
You’ve explained that the Overwatch matchmaker process is designed to keep players at a 50% win rate. I think it’s just a fascinating topic. Not to get too philosophical, but you can’t have the up without the down, hence the 50%. Can you speak to the complexity of, like, what makes a good matchmaker?
Jeff Kaplan
(03:54:57)
The matchmaking systems are some of the most complex design and engineering tasks you’re ever gonna tackle. And they’re thankless. It’s very hard, too, because I think most people, and they’re not being disingenuous, like if you ask a gamer, “What do you want?” They’re like, “I just want a fair match. Like, just make it even.” And the reality of what they want is they want a match where they’re slightly better than the other guy.
Jeff Kaplan
(03:55:27)
Like, they want it to feel like it was close, but then win. And you can’t architect that. Like, you know, it’s a zero-sum situation, so there’s gotta be winners and there’s gotta be losers. The other really core problem, and we would study this all the time when people would complain. You know, you see a Reddit post, and somebody would say, “I had a six game losing streak. This is so fucked. It’s the worst matchmaker ever.”
Lex Fridman
(03:55:58)
Oh, Reddit.
Jeff Kaplan
(03:56:00)
Yeah, right? I love Reddit.
Lex Fridman
(03:56:02)
Me too.
Jeff Kaplan
(03:56:02)
But we would look up that person’s account. I would do that all the time. I love looking up people’s accounts and seeing- … what would happen. It’s like, yeah, he had the six-game losing streak. He had an eight-game winning streak before that. There was no post about how awesome this is. And the human psychology doesn’t allow for that. One of my hindsight regrets about Overwatch, and this is, I think, a case where we did the right thing in the moment. You know, like, I wouldn’t go back and redo it, but if I was making a hero shooter from scratch today, I would make it less team-focused. And we put all of our eggs in you noticing if the team won or lost.
Jeff Kaplan
(03:56:48)
And we downplayed your individual contribution as much as possible. There wasn’t a scoreboard. We had a medal system, but the medal system was, in my opinion, it was not good because the losing team got medals and the winning team got medals. And on the losing team, they would use that. They would weaponize it against their teammate. “Well, I’m the top kills, and all you guys are making us lose.” And it’s like, “Okay, you’re the top kills by like one, and you guys still lost.” So I would, if I was to redo it today, or for any aspiring hero shooter makers out there, I would actually downplay the team factor, and try to put more focus on individual contribution.
Jeff Kaplan
(03:57:37)
Because that’s just how people play. They’re selfish. And I don’t mean that in a bad way. It’s just human nature; they can’t help it.
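As an aside for readers of the transcript: the “fair match” goal Jeff describes is commonly modeled with a skill rating such as Elo. The sketch below is a generic illustration under that assumption, not Blizzard’s actual system, which has never been published; it shows why evenly matched players trend toward the 50% win rate discussed above.

```python
# Minimal Elo-style sketch of the "fair match" idea behind skill-based
# matchmakers. This is a generic illustration; Blizzard has never published
# Overwatch's actual MMR math, so every constant here is an assumption.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

def update(rating: float, expected: float, actual: float, k: float = 32.0) -> float:
    """Nudge a rating toward the observed result; k controls volatility."""
    return rating + k * (actual - expected)

# Matching two equal ratings predicts a coin flip: the 50% win rate
# players experience is the equilibrium the matchmaker aims for.
print(expected_score(1500, 1500))            # -> 0.5

# A 200-point favorite is expected to win about 76% of the time.
print(round(expected_score(1700, 1500), 2))  # -> 0.76

# Winning when the model already expected a 50/50 game moves you up by k/2.
print(update(1500, 0.5, 1.0))                # -> 1516.0
```

Real matchmakers layer queue time, party size, and role constraints on top of a rating like this, which is part of why Jeff calls them thankless.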

Rust

Lex Fridman
(03:57:48)
And in terms of how they experience the game, in terms of how they derive joy from it, or how they see the challenge of the game, it’s individual. Even when you’re on a team, you’re still feeling… it’s a fundamentally individual experience. Let me, as a small aside, before I forget, since we mentioned first-person shooters so much, outside of Overwatch, what are some of the great shooters of all time that you’ve played?
Jeff Kaplan
(03:58:14)
Quake is the greatest.
Lex Fridman
(03:58:16)
Quake is GOAT.
Jeff Kaplan
(03:58:17)
Yeah. Quake is GOAT. There’s a lot of contenders up there.
Lex Fridman
(03:58:23)
What have you logged the most hours in outside of the games?
Jeff Kaplan
(03:58:27)
Rust.
Lex Fridman
(03:58:27)
Okay. Can you… Okay. A lot of folks have written to me that I need to play Rust, the video game. I have not even looked into it. Somebody on Reddit said it has a steep learning curve. I would like to give it a chance because you have spoken so highly of it to me. So can you explain Rust?
Jeff Kaplan
(03:58:48)
Yeah. Rust is an open world game. It’s a procedural map, so it means that every time it’s different. You’re always on an island, and it resets every month. So-
Lex Fridman
(03:59:02)
Is it PvP?
Jeff Kaplan
(03:59:03)
It’s all PvP. In fact, Rust is the most PvP thing in all of PvP.
Lex Fridman
(03:59:12)
Well, I don’t know what that means, but-
Jeff Kaplan
(03:59:15)
Rust players know what that means.
Lex Fridman
(03:59:16)
Everybody who plays Rust and loves it sounds to me like they’re in a cult. So, with all due respect, please don’t write me letters.
Jeff Kaplan
(03:59:24)
They’re too busy playing Rust. They’re too busy checking on their base, making sure it’s not raided, to write you letters.
Lex Fridman
(03:59:30)
Oh, good.
Jeff Kaplan
(03:59:31)
It takes place… It’s basically… It’s open world. You can do whatever you want. There’s not really any directed gameplay to it, but at any time, any other player can kill you and take anything that’s on you.
Lex Fridman
(03:59:49)
Oh, wow.
Jeff Kaplan
(03:59:50)
Yeah, and then you build what Rust players call bases, and you upgrade the base, and you try to make the base as safe as possible to store your stuff, and then you can make explosives and blow up other people’s walls to get into their base where they’re keeping all their best stuff and take all their shit.
Lex Fridman
(04:00:11)
Like, permanently?
Jeff Kaplan
(04:00:12)
Permanently. Like-
Lex Fridman
(04:00:14)
Oh, I see.
Jeff Kaplan
(04:00:15)
… it would be like PvPing in WoW. Imagine in World of Warcraft- … if somebody could not only kill you but take everything that’s in your bank and make you level one the next time you log in.
Lex Fridman
(04:00:29)
Wow. That’s very stressful.
Jeff Kaplan
(04:00:32)
The beauty of Rust, and why it’s so good, is you can’t have the high highs without the low lows. And-
Lex Fridman
(04:00:41)
Like, real low lows.
Jeff Kaplan
(04:00:42)
Real low lows.
Lex Fridman
(04:00:43)
Wow. All right.
Jeff Kaplan
(04:00:45)
Like, debilitating, like, “am I ever gonna play this game” lows.
Lex Fridman
(04:00:49)
Right.
Jeff Kaplan
(04:00:49)
You know, like, you spend a week building the world’s most perfect base and getting tons of loot, and then it… There’s what’s called online raiding and offline raiding. Online raiding means that my enemy is… I can see that they’re in their base right now, and I’m gonna try to attack them while they’re in their base. Offlining, which is, like, all Rust players will say you’re the scum of the Earth if you offline someone, and then all Rust players also offline people all the time. Yeah. It’s-
Lex Fridman
(04:01:28)
Yes,
Jeff Kaplan
(04:01:28)
… gamer etiquette.
Lex Fridman
(04:01:29)
Yes.
Jeff Kaplan
(04:01:29)
Offlining’s when, like, “Hey, I think that my neighbor logged off for the night. You know, they just played six hours. I’ve been watching them, and now there’s no activity in their base, so I’m gonna, like, blow up their walls and take all their stuff when they’re not here.”
Lex Fridman
(04:01:45)
Mm-hmm. Yeah. So Rust, because real life is not hard enough, is what it sounds like. Just, I want… If I want-
Jeff Kaplan
(04:01:50)
That’d be a great tag.
Lex Fridman
(04:01:51)
If I want more stress in my life, I’ll play Rust. Yeah. I can’t wait. So okay, so that’s one. That sounds like a unique experience and a great joy. So that’s number one, Rust is up there.
Jeff Kaplan
(04:02:06)
Call of Duty.
Lex Fridman
(04:02:07)
Call of Duty just has its own-
Jeff Kaplan
(04:02:08)
You know, there’s a lot of haters. Like, Call of Duty 4 and Modern Warfare 2 were the pinnacle of Call of Duty, with Black Ops being a very respectable, you know, third. But you’re never gonna get a better gun feel from a game than Call… Like, just study the visual effects, the animation, the modeling, the sounds. Every aspect of shooting a gun in Call of Duty is so masterfully done. And then the maps, like, the flow of the multiplayer is just great. Like, there’s… There’s a map called Crash from Call of Duty 4 that Aaron Keller and I… Aaron’s now the game director on Overwatch. We just sat and studied that map, or Terminal from Modern Warfare 2. Just studied the maps of just, like, this map design is off the hook. So Call of Duty is definitely up there.
Lex Fridman
(04:03:05)
So even though you were not thinking about it, Overwatch ended up being a gigantic success. So did you start thinking about, in this framework of crawl, walk, run, about the walk, the PvE piece?
Jeff Kaplan
(04:03:23)
Yes. So the PvE piece was what Overwatch 2 was supposed to be. And I don’t know if people know this or not, but we started working on Overwatch 2 in 2015.
Lex Fridman
(04:03:43)
♪ Over- ♪
Jeff Kaplan
(04:03:43)
So, Overwatch 1 didn’t ship until 2016. So before Overwatch… And it wasn’t, like, work in earnest. It was, like, pitching the game. I remember I spent a lot of time… It was myself, Chris Metzen, and Michael Chu sort of brainstorming a framework for what, like, a campaign could look like. And we had this idea of, like, a cooperative PvE shooter. And we actually pitched it to the team before we launched because we were trying to put a bunch of runway in front of us. That worked against us, and one of the biggest mistakes I’ve made as a creative leader in my career was Overwatch 2. There were two points of failure for me.
Jeff Kaplan
(04:04:33)
The first was, I had people on the game team who didn’t like PvP or competitive shooters, and they really loved the Overwatch universe and wanted to play these characters and heroes, but they wanted to kind of do it on their own terms in like a PvE setting. So even though Overwatch is this like runaway success and everybody’s talking about it, they felt like they couldn’t really engage with it. And so like people on the dev team are like, “Okay, thank God we, you know, shipped that PvP thing-“
Jeff Kaplan
(04:05:09)
“… When do we start work on this other thing?” So that came from a genuine place of excitement. And then the other point of pressure was from the executive team, and this was both the Blizzard and more so the Activision executive teams, and they started really putting the heat on, “Well, you said Overwatch 2 was gonna be out in 2019.” And they’re referring back to these slides that were just crazy dates. Like- … it was… You never want to put a PowerPoint deck in front of a corporate executive. Like, you might as well etch it in stone and come down from the mountain on it.
Lex Fridman
(04:05:53)
So you just threw in some dates because the layout looked good.
Jeff Kaplan
(04:05:56)
Yeah. This is just all bullshit. This is just… In the same way we put, like, the tablet, you know? We just put Overwatch, like, put Tracer on a tablet, and said we have a mobile strategy.
Jeff Kaplan
(04:06:09)
So the executives started getting really angry at us that Overwatch 2 was slipping, slipping. And so when Overwatch 1 took off, I remember very early, we were in, like, May of 2016, and that year the Olympics were gonna be in Rio, I think. And, you know, I always like to pay respects to, like, when a big event is happening. I’m like, “Hey, we should do, like, an event for the Olympics.” You can’t call it the Olympics or else they sue you, so you just… Even though you’re advertising for them to a bunch of kids who want to play video games and not watch the Olympics. But we also had these two developers, Mike Heiberg and Dave Adams, who worked on this quirky… Like, they made soccer in Overwatch. We called it Lúcioball.
Jeff Kaplan
(04:07:05)
Like, they made a map and they made these mechanics. We’re like, “Yeah, let’s do an event called the Summer Games.”
Jeff Kaplan
(04:07:13)
And we do a live patch that’s the Summer Games. It’s extremely successful. And then after that, we’re like, “Yeah, let’s do… Halloween’s coming up. Let’s do a Halloween event. How cool will that be?” And our fans just loved these events, but there were two groups that were struggling with it. One was that group I told you about on the dev team who was like, “Oh my God, you guys are over-scoping the patches. Why are we doing this Halloween event? We should be doing… We should start work on Overwatch 2. We shouldn’t be this focused on the live game,” which was fucking nuts. Like, that was just crazy. There’s this phrase of catch the wave, ride the wave. Most games fall off the back of the wave. They don’t catch the wave. No one plays it, or plays it for two weeks.
Jeff Kaplan
(04:08:09)
If you’re lucky enough to have caught the wave- … ride it till the end. And my instincts at that point were like, “Let’s just keep… How many more of these live events can we do?”

Why Jeff left Blizzard

Lex Fridman
(04:08:22)
So yeah. So now there’s this wave in the live game and events, but the pressure on creating Overwatch 2 was building.
Jeff Kaplan
(04:08:31)
Yeah. We had a coalition on the team that really wanted Overwatch 2 built instead of the live events. And then the executive pressure became monumental. And what would have been correct was to do more world events, like keep it going, but the major derail was Overwatch League. The weirdest part about Overwatch League is I believe in it. You know, I helped pitch it along with some other people. We thought it was, like, the future of esports, doing regional-based teams, ensuring minimum player salaries and player protections. Like, there was a lot that was very good about Overwatch League.
Lex Fridman
(04:09:20)
And there would be teams associated with particular cities.
Jeff Kaplan
(04:09:22)
Yes.
Lex Fridman
(04:09:23)
And it would be international. It would be real competition. So the dream, the ambition was really huge there.
Jeff Kaplan
(04:09:29)
Yeah. The teams part of the dream was more, like, regional-based play, player protections, trying to make esports more of a first-class citizen, because there were all these stories about, like, shady teams, you know, screwing their players over. Where it got away from us was there was a lot of excitement about Overwatch League, like too much so, and then it got over-marketed to the people buying the teams. They went on this road show where they had a deck basically, and, like, you could put anything in a deck and sell anything, and they were pretty much selling the Brooklyn Bridge, that Overwatch League was going to be more popular than the NFL.
Jeff Kaplan
(04:10:15)
And we got a bunch of… billionaire investors in these teams. And when 2018 started, like, for example, the day I got back, they said, “We signed this huge deal with Twitch for streaming of Overwatch League,” like a media rights deal. And that meant that here’s all these commitments we made for Overwatch League of, like, in-game stuff that had to exist. Like, a lot of it was integration with Twitch and camera control and that kind of stuff. The other part of it was a bunch of skins and, you know, uniforms for all the teams, which was not just getting the art in the game, but there were huge technical challenges to, like, how all that worked and was efficient and hit the right, you know, memory footprint and all of that kind of stuff.
Jeff Kaplan
(04:11:13)
And so all of your plans at that point kind of go out the window. Like, you’re not gonna work on new world events. You’re not really even focused on Overwatch 2, you’re just kind of treading water. There was a lot of talk of, like, “Oh God, you know, the deal, like, the deal didn’t go well and we’ve got to do make-goods to make the deal better for them.” I’m like, “Just give them some money back, you know?” Like, the deal wasn’t what people wanted, and it was put on us, the Overwatch team, to, like, support this beast.
Jeff Kaplan
(04:11:52)
And it was a great idea with the wrong instincts behind it, and, I don’t know how to phrase this in a way that’s not damning, but there was too much focus on, “Let’s make lots of money really fast.” And a lot of people got dragged into it. And while Overwatch League was great for Overwatch in terms of the players that it brought in, and the Overwatch League players, they were awesome. I love them. The Overwatch League staff at Blizzard, some of the nicest, most motivated, great creative people- … like, all of these organizations got built and they were all great, but it was a house of cards waiting to fall.
Lex Fridman
(04:12:43)
And when it became more about the money versus the quality of the experience of the different teams playing together and actually building this ecosystem of esports…
Jeff Kaplan
(04:12:55)
The financial reality kicked in, where these teams now, we didn’t just have, you know, executives at Activision and Blizzard who cared about the bottom line of Overwatch. We had all these people who basically invested in the game, and then they started to express their opinions. Originally, the business model was going to be that they were going to do in-person events and there were going to be big ticket sales and then merch, you know, and all of that. And I think really quickly everybody learned, like, yeah, we can’t do in-person events when you have a London team and a Shanghai team and, like, how does this work? So that fell apart super quickly. The merch was good, but it wasn’t going to be making NFL-level money-
Jeff Kaplan
(04:13:51)
… whatever insanity anybody thought that was going to be. So everybody quickly defaulted back to, “Hey, didn’t Overwatch make, like, $500 million just in the live game last year? What can we sell and what can you give us?” That pressure comes onto the team, along with the pressure to ship Overwatch 2, and all the care and love that we had for, like, the live game and the live server, “Let’s just make events and new heroes and new maps,” is losing all these resources. And it got to the point of, you know, my exit at Blizzard. I believed in Overwatch 2. I think we could have made a great game. I have a lot of hindsight of, like, how I would have designed that game differently with what I know now, versus what we ultimately didn’t ship.
Jeff Kaplan
(04:14:50)
And Overwatch 2 is out now, but it’s not the Overwatch 2 that we planned and announced.
Lex Fridman
(04:14:57)
So when you’re referring to Overwatch 2 in this conversation, you’re referring to the PvE version?
Jeff Kaplan
(04:15:01)
The PvE version.
Lex Fridman
(04:15:02)
Which, by the way, I would have loved to play. I’m one of those people… Overwatch is great, but it’s the PvP; I would have loved to play the PvE version.
Jeff Kaplan
(04:15:12)
I think everybody would have loved to have played it. And there’s a misconception online that all I cared about was PvE and I didn’t care about PvP. With all of the Overwatch 2 PvP maps, there was something that I said to the team over and over: “We have a PvP audience. If we get anything right, it has to be the PvP.” We would be lucky to welcome these PvE players, but that’s not guaranteed. So it was never a PvE-only focus.
Lex Fridman
(04:15:46)
It’s just almost expanding it to also the E.
Jeff Kaplan
(04:15:50)
Yeah. And what eventually broke me was it used to be like in 2016 and 2017, I felt very in control of the Overwatch team and the direction of the game as a game director, you know, working with Ray Gresko as the production director, it felt like we were running Overwatch.
Jeff Kaplan
(04:16:11)
And we were very, very successful and doing a good job. And I think the fans were happy. And then as we transitioned, you know, Overwatch League was the best of intentions. You know, my parents always say, “The road to hell is paved with good intentions.” That was the Overwatch League, and it ended up being an albatross. And then Overwatch 2 is the same thing. And what it boiled down to for me, like what sort of ultimately broke me in my Blizzard career, was I got called into the CFO’s office, and he sits me down and he gives me a date, which at the time was 2020 and was going to slip to 2021, but at the time, it was 2020.
Jeff Kaplan
(04:17:03)
And he said, “Overwatch has to make in 2020, and then every year after that, it needs a recurring revenue of…” And then he says to me, “If it doesn’t do dollars, we’re gonna lay off a thousand people, and that’s gonna be on you.” And that was just the biggest fuck-you moment I had in my career. It felt surreal to be in that position. And as somebody who’s worked on a lot of games, made a lot of games, you get in these meetings where they’re like, “There’s Fortnite; it has 1,400 people working on it. If you just hire 1,400 people and make it free-to-play, we’ll make that money, right?” And that was… I had believed I would never work any place but Blizzard. I loved it. It was a part of who I was and I felt I was a part of it, and I literally thought I would retire from the place.
Jeff Kaplan
(04:18:12)
I never thought the day would come, and that was it. I was like, we’re done here. Luckily for Blizzard, that CFO is no longer there.
Lex Fridman
(04:18:25)
I mean, Blizzard is one of the greatest companies in the history of Earth. They’ve created so many incredible video games. It’s so difficult to create so many hits, and they weren’t made by chasing money. They were made by small, incredible teams, the hodgepodge that you describe, taking big risks and falling in love with the thing they do and then just chasing it, working extremely hard. And just because you figured out a way to make a lot of money doesn’t mean it’s not at the core this incredible creative journey that’s incredibly difficult to pull off. And just because you got a bunch of really smart creative people who have somehow figured out how to pull it off multiple times in a row doesn’t mean you can just treat it like a machine.
Lex Fridman
(04:19:21)
Every single time, it’s this beautiful journey of a hodgepodge of weirdos working together, and weirdos have to run that thing. If you ever have a chance to create something special, you have to have weirdos at the helm. And to the degree that you don’t have weirdos, creative minds, at the helm, and you’re a businessperson at the helm, get out of their way, right? You cannot have meetings like the ones you’re describing. And I don’t just speak about this particular company. It’s the entire industry. There’s so much joy to be had if we keep creating great games, and I just hope we get to see those great games.
Jeff Kaplan
(04:20:09)
I think there’s a message to creative people out there and people who make stuff. We’re generally… we’re so focused on the love of the craft that we get lost in it, and we love doing it, and we’re not cutthroat, and we don’t have that kind of ambition. We have a different kind of ambition. But there’s this whole world, especially as soon as you’re lucky enough to have success, that is very cutthroat and very ambitious. And for whatever reason, we keep giving ourselves to them, and we need to stop giving our so… World of Warcraft, when we made it, there was no CFO at Blizzard. You don’t need a CFO to make World of Warcraft. You need artists, engineers, designers, producers, and an audio team.
Lex Fridman
(04:21:07)
You don’t need to bring in… Just because you’re making a lot of money doesn’t mean you need to now start adulting by bringing in a CFO. You can figure it out.
Jeff Kaplan
(04:21:16)
And there are great finance guys. Like, I’ve worked with finance guys who get it and get out of the way and respect it, and they’re gamers, and they sort of understand. But, like, I wish developers would understand their own value more and stop handing the golden goose to people who don’t deserve it.
Lex Fridman
(04:21:40)
How painful was it to say goodbye?
Jeff Kaplan
(04:21:44)
It broke me. I think after you’ve been at a place like Blizzard… I love Blizzard. To this day, I have nothing but warm, fond memories. I mean, there’s those moments where you’re like, “I wish that hadn’t happened,” but on the whole, that place is mecca for game development, and everything I have is due to Blizzard. They provided for me and my family, made me the person I am, so separating from Blizzard was one of the most painful things. And I was very sad when I resigned, and I didn’t realize how broken I was until recently, like the mourning, grieving I had gone through of, like… I think I’m a little fucked in the head for not being there anymore. How could I give that up? How could I not be there anymore? It was really, really painful leaving.
Lex Fridman
(04:22:48)
Can we just speak to, I don’t know, I don’t think we can give enough love to Blizzard. It’s a legendary company. For me personally, for everybody, for millions of people, created some of the greatest games ever, Warcraft, StarCraft Universe, Diablo, WoW, Overwatch. What made it such a legendary game company? Just looking back at the whole of it?
Jeff Kaplan
(04:23:11)
The start is Mike, Allen, and Frank. It was run by three gamers. They were, all three of them, programmers. They made the games before they ran the company, so they knew what each of us as developers beneath them were going through, and they protected us. They shielded us from all of the nonsense, and even when they would align with a businessperson, they had a COO in the early days named Paul Sams, and Paul protected us.
Jeff Kaplan
(04:23:47)
You know, they just, they found great people who got it. The company when I joined was, like, 95% developers and, like, 5% operations. It’s, when I left, it was, you know, 50/50, and that’s like a 4,500-person company. That love of the games and the respect and good treatment for game developers really turned it into the place that it was, just the commitment to excellence, the high-quality bar and then finding these passionate people like Chris Metzen or Sam Didier, they were, like, the visionaries of early Blizzard, Allen Adham, of just these worlds that we’re still making and we’re still playing in today. It was infectious and it was inspirational, and you wore the Blizzard blue with an esprit de corps.
Jeff Kaplan
(04:24:51)
Like, you felt proud to be part of it and you felt like you had made it to be there, and everything you did, you did wanting to respect and honor those who had come before you. I know that sounds almost cheesy saying it that way, but it really had that sense of reverence, like you knew you were part of something special. You didn’t take it for granted.
Lex Fridman
(04:25:15)
Yeah. That’s the sense. Reading everything, that’s the sense I got. Everybody there was a part of it that truly, truly, truly honored that time. Just to, just to take a small slice, what were some of the brain… So you mentioned Chris Metzen. You gave so much love to so many people on the team, but I gotta ask about Chris Metzen, who I would, by the way, love to do a podcast with at some point. What were the brainstorming sessions with him like? It seems like those are pretty awesome.
Jeff Kaplan
(04:25:44)
They were the best. Like, you could walk into a room. Like, the way I would work with Chris is early on when I was more junior, it was just sort of getting creative direction from him. “Hey, Chris, I’m about to work on this zone called Westfall. What are your ideas? You know, how could I capture them in gameplay? Well, that won’t quite work. How about like this?” It was more like that. Later on, like, I, I still remember the first discussion I ever had with Chris about Wrath of the Lich King, I went up to his office like, “Hey, we’re, we’re finally doing it. We’re doing the Northrend expansion. You know, what excites you about Northrend?” And that’s all you had to say. And he would draw a map and he’d start pulling up old, like, Warcraft II and Warcraft I manuals- …
Jeff Kaplan
(04:26:39)
and, you know, showing you, like, pictures he and Sammy had drawn and, like, maps and, and he, all of it, he would just go on for an hour and then I would sort of digest. I’d just listen, taking constant notes. I’m photographing his whiteboards all the time, and then I go back and start to put those into design flow of, like, “Okay. What, what’s a zone? What’s a dungeon? What could be cool? What should come first? What should come last?” You know, Lich King, for example, we wanted to try a very specific design to counter a problem we had in Burning Crusade, which is everybody entered through the Dark Portal through Hellfire Peninsula, all the server programmers hate you because everybody loads into the same zone at the same time. Lich King, we split them up for better player flow.
Jeff Kaplan
(04:27:27)
Plus, it’s more interesting the more choice you have. You know, Sid Meier says, “Games are a series of interesting choices,” so we give them two starting zones, but that was the flow with Chris. And so often we would just, like, okay, in that first meeting, Chris had put a zone called Grizzly Hills on the board. Well, I don’t know anything about Grizzly Hills. “Hey, Chris? Talk about Grizzly Hills.” If you didn’t interrupt him, he’d just go for an hour. And you have no idea how much of it, like, he had pre-thought about or had existed in previous lore and how much of it he was just making up on the spot. He’s just that charismatic and captivating.
Lex Fridman
(04:28:14)
Creating these worlds and being able to- … brainstorm through them and together, I mean, that is what you’re doing. As a consumer of those worlds, you kind of take it for granted that they’re incredible, but, like, you’re crafting them. Like, you’re looking at a blank sheet of paper and then together coming up…
Jeff Kaplan
(04:28:32)
My job, as I saw it on World of Warcraft specifically, working with Chris, was I was like the translator into gameplay of what Chris wanted, how to get it to play like how Chris wanted. So my favorite story is we’re working on Burning Crusade and we’re in this meeting and Chris is like… He’s the gentlest, sweetest guy, but because he carries himself with such confidence and everybody’s in awe of him, the junior developers get kind of intimidated by him. So we’re in this meeting and we’re talking about Silvermoon City because we’re introducing the Blood Elves, and Chris is like, “And Silvermoon City’s got the tallest fucking tower in all of Azeroth. I mean, it is the tallest thing. You know, it’s mind-blowing, the awe of it.”
Jeff Kaplan
(04:29:24)
“Only the blood elves could build it.” Fast-forward like two weeks later. I’m walking through the hall and I see a bunch of level designers and artists are all like crowded around the screen, and on the screen they’ve dragged Blackrock Mountain and Karazhan and the Stormwind Cathedral. I’m like, “What the fuck are you guys doing?” And they’re like, “Well, Chris said that the Silvermoon Tower had to be the tallest thing in World of Warcraft-” “… and so we’re measuring how tall all of these other things are so we can make the tower taller.” And I’m like, “Guys, Chris doesn’t know how tall the Burning Steppes, you know- … and the cathedral in Stormwind- … is. What Chris means is just make the tower really fucking tall.”
Jeff Kaplan
(04:30:20)
“You don’t need to measure it.” And they’re, “Oh, okay. That’s okay?” Like, “Are you willing to take the heat if he—” I’m like, “I’m willing to take the heat on this one, guys.”
Lex Fridman
(04:30:29)
Yeah. It’s just a feeling. It’s a vibe. It’s-
Jeff Kaplan
(04:30:32)
It’s a vibe.

Diablo IV

Lex Fridman
(04:30:33)
Yeah. And I also just personally have to give all the love in the world for the current Diablo IV team, because I’ve spent, most recently out of the Blizzard games, I’ve spent a huge amount of time in Diablo, and they’ve created some… And it’s not just the loot, all right? It’s the, the whole experience, the art, everything together. And the seasons they’ve created, they’ve created a really wonderful world. So I can, I could see, I could feel how much effort goes into that.
Jeff Kaplan
(04:31:02)
They’re crushing it. And I think Diablo IV in like modern times is one of the best worlds that they’ve built. And they know, they understand Diablo players. Like that community is so hardcore and so demanding, and that team is amazing.
Lex Fridman
(04:31:20)
Yeah, there’s a lot of richness. It’s like there’s this really… I mean, I don’t know how often you get that, but it’s really the perfect Diablo game. They’ve really like evolved a lot, grew a lot. So there’s this whole mathematical component of just so many numbers everywhere and it’s all balanced really masterfully. And then, of course, you have to come up with new content with the seasons and they figure out ways to do that, and at a crazy pace. And still make it super fun.
Jeff Kaplan
(04:31:50)
They’re a great live team, yeah.

Getting back to making video games

Lex Fridman
(04:31:52)
And for me personally, like I said, the co-op, the couch co-op experience has been really… like that aspect of it is really great, just all of it. It’s one of the greatest games in recent history. One of the things I wanted to mention, ’cause this is a powerful speech, is sort of instead of doing some kind of a corporate goodbye as you were leaving Blizzard, you allegedly shared with your team a video of David Bowie giving advice. And people should go watch this clip. But if I may read it, Bowie says, “Never play to the gallery.
Lex Fridman
(04:32:28)
Always remember that the reason that you initially started working was that there was something inside yourself that you felt that if you could manifest in some way, you would understand more about yourself and how you co-exist with the rest of society. I think it’s terribly dangerous for an artist to fulfill other people’s expectations. I think they generally produce their worst work when they do that. And the other thing I would say is that if you feel safe in the area that you’re working in, you’re not working in the right area. Always go a little further into the water than you feel you’re capable of being in. Go a little bit out of your depth.
Lex Fridman
(04:33:07)
And when you don’t feel that your feet are quite touching the bottom, you’re just about in the right place to do something exciting.” Speaking of which, you are just about in a place to do something exciting. After leaving Blizzard you told me that you tried to take some time off. How did that work out for you?
Jeff Kaplan
(04:33:34)
Not so well. My wife, who is wonderful, told me I needed to take at least a year off and just, you know, I’d been going really hard. I’d gone 19 years barely taking vacation and I let Blizzard consume me. And, you know, I was crushed by leaving because I loved the place, and I didn’t know what to do with myself. I was pulling weeds in the backyard.
Lex Fridman
(04:34:06)
Literally. Gardening.
Jeff Kaplan
(04:34:07)
Yeah. Well, she won’t let me garden in the garden ’cause that’s hers- … but I’m allowed to pull the weeds. So I got very good at that. I was very proficient. And then of all things, I cracked out on Call of Duty: Black Ops Cold War and I unlocked Dark Matter Ultra, which I’d… that’s like a crazy achievement to do in that game.
Jeff Kaplan
(04:34:29)
So I did that, and then I just, I couldn’t help it, like it’s how I’m programmed. It was like, at this point, it’s late spring, early summer and I’m just sitting in the backyard and I just started writing with Notepad about, “Here’s a game I want to make.” And it was so terrifying because for 19 years I had worked with the greatest developers, I thought, in the industry. And, you know, there’d be moments where it’s like, “Okay, I wanna do like a game world map.” Like, “Hey, Erin, you’re amazing at making game world maps. Like, you do that.” And you know, I, like, “I need some story hooks. Hey, Chris, what do you think would be cool here?”
Jeff Kaplan
(04:35:17)
Like, you know, it’s so collaborative and I was surrounded by the best of the best, and there I was by myself. And I was out there again, and I loved it. It brought back all the joy of game making. I thought games were no longer fun to make because it was only about business, and somebody’s asking me for unreasonable amounts of money and unreasonable amounts of time. And I had forgotten the pure joy of the craft of making games, and I was designing, I was going on, I was watching YouTube videos to learn Unreal and Adobe Illustrator and all these things to like help me make games, whatever, Blender. Um, I had no right to be doing any of that, and it just felt so amazing to do it. And I sort of realized, I came to two realizations. One, I never wanna work for someone else again.
Jeff Kaplan
(04:36:19)
I never wanna create something and then have somebody take my baby away from me, you know? That’s really hard when that happens, and it’s sort of happened a few times now, you know, where you have to just let something go that you created. And I wanted it all to be focused on the craft of making games, the art, programming, design, audio, you know? Like, just not about the bullshit of the games industry. I’m not interested in the games industry. I’m not interested in the business of games. I’m not interested in the entertainment industry. It’s just game jamming, making stuff that we’re gonna play together. And around that time, my… I call him my development soulmate. There’s a programmer named Tim Ford.
Jeff Kaplan
(04:37:20)
He reached out and he’s like, “Hey, man…” He was like an associate tech director on Overwatch at the time. And he’s like, “Yeah, I don’t think I can do this anymore. It’s just not like it was, you know, I just handed in my notice.” And I’m like, “Whoa, you know, well, if you wanna do something together, like fuck it. Let’s take a stab and, you know, just see what happens.” And Tim came over to my house, and well, before that, he says, “My last day’s on Friday.”
Jeff Kaplan
(04:37:57)
“And my exit interview’s at like 1:00. I’m gonna be over to your house at like 2:00 that afternoon.” And I’m like, “Well, don’t you think you should take some time off, Tim, you know, before whatever’s next for you? Take a month off, you know? Meg, his wife, will appreciate it, you know? Just go pull weeds in the garden for a while.” And he’s like, “I’m a programmer. All I’m gonna do is program for a month if I take a month off. I might as well start programming our game.” Which-
Lex Fridman
(04:38:32)
Brilliant.
Jeff Kaplan
(04:38:32)
… it was so awesome when he said that.
Lex Fridman
(04:38:34)
Brilliant.
Jeff Kaplan
(04:38:35)
He came over and I pitched him this idea for a game, and I pitched him, “Let’s start a company.” And that was it. Like, that was the birth of us making a studio.
Lex Fridman
(04:38:49)
Now, meanwhile, as far as the outside world is concerned, you’ve disappeared off the face of the Earth, but you were actually working on a game.
Jeff Kaplan
(04:38:58)
Yeah, I needed to be away from the world. I needed to not have… I wanted to not get attention from anyone. I needed to not read my name on Reddit or… you know, any internet site. I wanted to not come up, let some other Jeff Kaplan bubble to the top- … of the Google, you know, search list.
Lex Fridman
(04:39:25)
You know our man Dinoflask is gonna be all over this conversation, right?
Jeff Kaplan
(04:39:29)
Oh, God, well, there’s, yeah, this one’s gonna set him back some time. But, yeah, I needed-
Lex Fridman
(04:39:35)
You know what to do.
Jeff Kaplan
(04:39:36)
I needed for none of that to happen. I just needed to be able to, like, mourn the loss of Blizzard-
Jeff Kaplan
(04:39:42)
… and create on my own so it was great. And at that time, like as soon as it was announced that I was leaving Blizzard, I had like 60 people reach out to me. It was, this was April of 2021 and investment money was nuts. Both like the VC money and the strategic money was crazy, like the, especially the Chinese companies, because apparently they weren’t getting publishing numbers in China or something. The whole economy was crazy, and so just everybody was trying to throw money at me, which was a very good position to sort of be at to start a company. So what Tim and I did was say, “We’re not doing this for money, but here’s the game we wanna make, and it’s gonna take this many developers, and we think it’s gonna take this length of time, and that means the budget is this.
Jeff Kaplan
(04:40:42)
And we need, for any of these people who wanna invest in us, we gotta hit that number, but after that, we’re not gonna go for more money. It’s not an auction to raise as high as we can go. We’re gonna optimize for control.”

The Legend of California

Lex Fridman
(04:40:59)
I don’t know if this is something that you can talk about, but I got a chance to see the game for a few hours, and I have to say it’s incredible, Jeff. Like, it’s incredible. But I almost immediately fell in love with the world and everything I saw. See, I’m tempted to say some of the things I saw but it’s just an incredible game. So how much can you talk about it? Do you know what it’s going to be called? Can you talk about that? Do you know about the company? Are you allowed to say any of that?
Jeff Kaplan
(04:41:27)
Sure. The most unconventional way to talk about this stuff for the first time. So, our company name is Kintsugiyama, which most people will struggle to pronounce.
Lex Fridman
(04:41:39)
Nice.
Jeff Kaplan
(04:41:39)
And the company name has a deep meaning to me, which I’m happy to explain later if you’re interested. And the game name that we’re working on, it’s called The Legend of California, and it’s an open world game. People are gonna call it a survival crafting game. People like to compartmentalize these. I think it’s an action game. It’s a game that takes place on a mythical island of California.
Lex Fridman
(04:42:11)
Mm-hmm. In the 1800s.
Jeff Kaplan
(04:42:14)
In the gold rush. If you’re trying to-
Lex Fridman
(04:42:16)
In the gold rush.
Jeff Kaplan
(04:42:16)
… if you’re trying to nail the most important time in California history, it’s gotta be that gold rush.
Lex Fridman
(04:42:23)
So, it’s this beautiful, almost ultra-realistic version of California, but it’s in an alternate history, alternate version of California-
Jeff Kaplan
(04:42:31)
Yes.
Lex Fridman
(04:42:31)
… where it’s an island, almost like an Atlantis type of ethereal island, but still very realistic to what the California terrain is- … and that time period. So it’s this weird, like, amalgamation of this ultra-realistic and the surreal.
Jeff Kaplan
(04:42:50)
The theme of the game is very weird. We’re not trying to make a historical game. There’s no historical accuracy to this. In fact, the island when first discovered is uninhabited. That’s already not true. As we know, there were lots of people in California. It’s an island, which we know is not true. We want it to feel authentic to that time period because we think that time period is cool. Prospectors, you know, cowboys. Like, it’s a really fun thing for us to explore, all of those themes—people in mines. We wanna build mines and we just wanna create a world that you can live in. I love creating worlds. Everything that I’ve worked on before, from World of Warcraft to Overwatch, it’s always been, how do you create this place for players to escape to? So.
Lex Fridman
(04:43:45)
So, it’s an online, multiplayer game. I should say the experience of it is just gorgeous, and then the music is wonderful.
Jeff Kaplan
(04:43:53)
I’m glad you like it.
Lex Fridman
(04:43:54)
And one of my favorite things is just going down to the mine and digging. I mean, that’s done extremely well. And as you described, the whole world is voxels, so it’s generated. Can you explain how that works?
Jeff Kaplan
(04:44:09)
Yeah. As a world, we handcrafted the world, so like the shape of California is always the familiar shape of California, except it’s an island. So, you know, there’s no Nevada on the eastern side. We handcrafted all of that. It looks gorgeous and places like Yosemite are where you would expect Yosemite to be. And so all of those familiar landmarks are there, but then we have like dozens of points of interest, and those move around the map depending on the map seed. And the map is also tiered in terms of difficulty. We don’t really have levels in this game. We have tiers, and there’s only four tiers right now. Maybe, maybe that will change. But the way that the map tiers itself each time changes with every world seed. So not only…
Jeff Kaplan
(04:45:04)
Any server that you join will have a different seed in terms of how the tiers play out. So, Mojave might be the easiest newbie area on your server, but on my server it’s an endgame, tier four area. But all of our notable points of interest also move around. So, we have a really amazing point of interest that we call Dread Rock that’s inspired by Alcatraz. And like, sure, sometimes it’s in San Francisco, but sometimes it can be sitting in the middle of the Mojave Desert also.
Lex Fridman
(04:45:39)
Mm-hmm. It integrates it into the environment, to where it makes sense- … to be in that environment. And like you said, so much of what makes a world is sound and lighting. And that, that’s definitely a thing that I’ve noticed. I mean, it’s probably the most beautiful sunset and sunrise I’ve seen in a game.
Jeff Kaplan
(04:46:04)
We have a great lighting artist who’s this amazing guy named Mike Marra, and some of the inspiration for the game like… There’s a lot of inspirations for this game, but there’s a painter named Albert Bierstadt, who I discovered while researching California, and he painted these just epic landscape pieces of, you know, Yosemite and a lot of other, the gorgeous parts of-
Lex Fridman
(04:46:29)
Yeah, we’re looking at one, one photo of his.
Jeff Kaplan
(04:46:31)
Yeah, it’s just amazing, and his paintings were huge, too. I’d love to see one in person.
Lex Fridman
(04:46:38)
And so you see a painting like that and you’re saying, “We wanna create that world.”
Jeff Kaplan
(04:46:42)
Yeah. I mean, when I see that painting, this is, this is what video games bring to the table. So, every art form that evolves after another gets to incorporate previous art forms.
Jeff Kaplan
(04:46:55)
Movies got to take sound and, you know, fine art. We get to take everything, including movies. So, you know, it’s, it’s Katamari Damacy, the art form. But like… I see a Bierstadt painting, and I wanna walk around that world. I wanna see what’s around the corner. And our lighting artist, Mike, he, you know, he sees these pictures, and he’s like, “Okay. Yeah. Hold my beer.” Like, “I’ll make it look like that.” And he, and he… We are all blown away by the, like, how much impact just the lighting has. And I’m not an artist, so I don’t think about things like the color theory, the lights, the clouds, what all of that’s bringing to this. I just know I want to live in that world, and these are the types of worlds that we want to make.
Lex Fridman
(04:47:45)
So, what do you want the tone of the game to be, the feeling of the game?
Jeff Kaplan
(04:47:50)
This is really different. It’s been hard for people. When people were talking to us about, you know, they know me and Tim, and they’re, “Oh, the Blizzard guys, the Overwatch guys. You’re making, like, a bright, aspirational future team-based hero shooter, right?” And I’m like, “Why would I want to do that?” I felt like, first of all, respects to Blizzard, and I don’t want to try to crib Blizzard and make a pseudo-Blizzard game, you know? This is… I want to make a Kintsugiyama game, you know? Me and Tim and this crack team, you know, we’re only 34 people. We want to define what a Kintsugiyama game is, and this world seemed so inspiring to us, you know? The setting is really interesting. You know, I think California can be a game world.
Jeff Kaplan
(04:48:47)
I think we can make it beautiful and interesting. We don’t have to follow history or geography. We can kind of do a spin where, you know, it feels authentic. We can have guns that feel like they’re kind of from that time period, but we’re not doing spaceships and aliens and steampunk. That’s what we would have done at Blizzard. We’re gonna be a little different here. So, the tone of this game, you know… Metzen would describe Blizzard as the hero factory. You know, we make… And what he means by that is not only are we making heroes, but we make the players into heroes.
Jeff Kaplan
(04:49:25)
This game is gonna have an edgier tone. You’re gonna enter this world. It’s gonna feel lonelier. It’s gonna feel mysterious, larger than you. You’re gonna feel small until you earn the right to feel big. It’s gonna feel really dangerous. You’re gonna want to see what’s over that next hill, but if the sun is setting, like, get to shelter. Can’t wait to get back to my ranch and put my cozy fireplace on and wait till morning, you know? We want more of that vibe.
Lex Fridman
(04:49:58)
It’s more solitary, almost scary but beautiful. That mix, that tension. I hate to ask this question, but given our previous discussion about timelines sliding, what do you think a timeline looks like? When do you think it’s possible for somebody in the world to be able to play this game?
Jeff Kaplan
(04:50:23)
So, this is the beauty of me and Tim kind of getting to run the show and why we’re excited about it. We can kinda do whatever we want- … within reason. So we’re just gonna kinda quietly put it up on Steam and see what happens.
Lex Fridman
(04:50:43)
Nice.
Jeff Kaplan
(04:50:43)
You know, no, like, big corporate marketing group would ever think to do that in a million years- … without, like, some, you know, $10 million announce or whatever. We’ll just kinda put it on Steam, and it’d be cool if people wishlisted it. There’s my plug. And then I think we are shooting to have some sort of public-ish alpha in March. And then our plan, and something I’m really excited about, ’cause I’ve never gotten to do this before, we wanna put the game in early access. Some people hate early access and won’t touch it, and I understand it, and then some people are like, “I wanna be in on the ground floor and see the thing from day one and watch it evolve.” So, we’ll put it into early access, and we’ll just run that until who knows, you know?
Lex Fridman
(04:51:37)
Is it scary to you to have a sort of game with some rough edges out there in the wild where people are interacting with it through the alpha- … through the beta?
Jeff Kaplan
(04:51:47)
Yes, and this game has more rough edges, like, the most rough edges we would have at Blizzard is, like, showing it at BlizzCon, which was heavily polished and controlled. This is gonna be more, you know, in development than anything else I’ve ever worked on. But that’s-
Lex Fridman
(04:52:06)
I love it.
Jeff Kaplan
(04:52:07)
… part of the excitement too, you know? It’s kind of like this is how the sausage gets made. I mean, you’re gonna see it front row.
Lex Fridman
(04:52:16)
I’m gonna try to get myself into the alpha somehow. Anybody who is listening to this, I highly recommend this game. You will not be disappointed. The world itself is just beautiful. So, whoever’s behind it, you and Tim and the team, are just doing an incredible job. And thank you for putting out rough versions of it so we get to-
Jeff Kaplan
(04:52:35)
Yeah. Of course.
Lex Fridman
(04:52:35)
… not wait forever for the perfect thing. And because you feel in… You feel like you’re a part of it if you get the imperfect thing. I’m one of the people who like the imperfect. We get to see the rough versions develop, and get to be a part of it developing. I saw the logo. It’s a mountain. Can you explain the meaning behind the name?
Jeff Kaplan
(04:53:01)
So, Kintsugi is a Japanese craft of repairing broken pottery. So there’s a lot of philosophy that goes into it as well. And you know, I wanna do a good job of explaining it, but basically, like, you take a broken piece of pottery, and then they would use golden joinery-
Jeff Kaplan
(04:53:24)
Like golden lacquer to put the piece back together. And the thought was rather than hiding the scars, you make them more beautiful. And the philosophical parts that sort of appealed to me with that is there’s a lot of me and Tim in that, of… We’re so appreciative for our time at Blizzard, but we didn’t come away unscarred. And there’s also a philosophy in Kintsugi that nothing’s ever perfect, and the pursuit of perfection is actually a mistake, and that there’s beauty in imperfection. And so I relate that to myself personally. That’s how I feel in an aspirational way. I’m not saying I’ve achieved it, but in an aspirational way, I want to be that way. And I think it’s also an analogy for the making of games. Like, it’s a…
Jeff Kaplan
(04:54:22)
The making of games is a constant pursuit of imperfection. A game is never gonna be perfect. Just ask the players. They’re very vocal about it. And seeing the beauty in the imperfections and the strength in something that’s been broken that can be stronger.

Greatest video game of all time

Lex Fridman
(04:54:44)
You had a heck of a difficult couple of years here. And so in some sense, it represents that beauty in imperfection. So everybody listening to this, I hope, I hope you do have it out on, on Steam. Go check out Legend of California. Truly a beautiful world. I’m so glad you are actually creating this, low-key, quietly creating this beautiful, incredible world. Ridiculous question, but can we talk about some of the greatest games of all time?
Jeff Kaplan
(04:55:19)
Yes.
Lex Fridman
(04:55:20)
What… I mean, I know this is a bit of a nerding-out kind of thing, and outside of the games you’ve been part of creating, I think Blizzard has created some of the greatest games of all time. Outside of those, what do you think are in the list?
Jeff Kaplan
(04:55:35)
So there’s one that’s the best. It’s Legend of Zelda: Breath of the Wild. And then there’s this list of greatest games: Zork, Ultima,
Lex Fridman
(04:55:47)
So Breath of the Wild is, is the best, yeah?
Jeff Kaplan
(04:55:49)
The greatest game ever made.
Lex Fridman
(04:55:51)
What makes it the greatest game ever made for you?
Jeff Kaplan
(04:55:53)
Every aspect is so thoughtful, so well designed. The art matches the design and the tech, and even integrating with the Switch in the way it does. How do you keep making Zelda better? How can Legend of Zelda: Ocarina of Time exist and somebody make an even better Zelda game? The way you can chop down a tree and float in a river, and, like, the world is a toy and everything works as you wished and hoped it would work. And there’s a narrative aspect to it, and there’s really fun combat and action and itemization. There’s so many things that that game gets right that other games are lucky if they get one of those things right, and are… become best in their genre just for getting that one thing right. And Breath of the Wild does them all right and the best.
Lex Fridman
(04:56:50)
There’s a certain kinda lightness to the way the world feels, the openness of the world feels. That’s unlike any other game, right? That’s uniquely that company, uniquely that-
Jeff Kaplan
(04:57:00)
Yeah. No one else-
Lex Fridman
(04:57:01)
Because nobody else creates that. You’re right. Under the pressure of having created a bunch of Zeldas that are, like, really great games, to be able to deliver once again.
Jeff Kaplan
(04:57:11)
Nintendo is, like, the Mecca. Like, they’re the best, you know? That’s all there is to it.
Lex Fridman
(04:57:17)
Do you understand how that company works?
Jeff Kaplan
(04:57:20)
No.
Lex Fridman
(04:57:20)
That they’re not…
Jeff Kaplan
(04:57:21)
I don’t at all.
Lex Fridman
(04:57:23)
Like, because, I mean, they’ve been around for a long time and still to be able to deliver.
Jeff Kaplan
(04:57:27)
I kind of rationally or irrationally just worship. It’s just sort of: if it’s from Nintendo, it’s gonna be great.
Jeff Kaplan
(04:57:35)
And even if my first impression is like, “Wow, what weird thing are they doing with the controller this time,” and then you get your hands on it and you’re like, “God.” My son and I, we both played Legend of Zelda: Breath of the Wild, and he makes games also. And we had this moment where he’s like, “I’m so sad after I played it.” And he’s like, “I know I’ll never make anything like this.” And it’s that weird, like, you honor it so much and think it’s so great. Red Dead was like that for me. Red Dead Redemption 2 is… that’s a game I put on a shrine. Not just how brilliant the game itself is, but as a game maker, as a craftsperson who makes games, how the hell do you make that? Like, only Rockstar with all the years of making those types of games. No one else can come in entry level-
Jeff Kaplan
(04:58:33)
… and compete with that. So that’s-
Lex Fridman
(04:58:36)
Purely single player, narrative driven. So you also respect that kind of, like, pure-
Jeff Kaplan
(04:58:42)
Yeah. I don’t give anyone a pass. I feel like a lot of gamers and game developers, like, if it has writing, they’re like, “The story’s so good.” I’m like, actually, very few games have great story. But Red Dead has a great story. It’s got great character development. It’s got a good plot. And the dialogue is like… It’s like Tarantino-level- … high-quality dialogue. So… Red Dead’s up there. I have my other games that make the list for me, and these are… Both these games are… I would never tell you to play them. EverQuest and Rust are two of the most defining games to me and my career and my life. And Rust, I would never recommend somebody go and play it. Rust will come calling to you if you are up to play it.
Lex Fridman
(04:59:42)
It is a cult. It’s 100% a cult.
Jeff Kaplan
(04:59:46)
That’s-
Lex Fridman
(04:59:46)
It… When you are ready, it will come down.
Jeff Kaplan
(04:59:48)
It will come down. It will let you know.
Lex Fridman
(04:59:51)
The, the sky will part. Okay.
Jeff Kaplan
(04:59:52)
In Rust, you are considered a complete noob that doesn’t know what he’s doing- … if you don’t have a thousand hours. Even a thousand hours-
Lex Fridman
(05:00:01)
A thousand hours?
Jeff Kaplan
(05:00:01)
… people would be like, “Oh, you only have a thousand hours-” “… in that game.” Yeah. But Rust is a lot of inspiration for me in the game I’m working on now… My game is not like Rust in that it’s not a PvP-centric game, but it will have PvP.
Lex Fridman
(05:00:20)
What aspect of Rust do you draw inspiration from? Just…
Jeff Kaplan
(05:00:23)
I love the resetting world. It’s a- … great game mechanic and it’s one that I want to evolve and work upon.
Lex Fridman
(05:00:34)
How often is the world reset, do you think, in Legend of California?
Jeff Kaplan
(05:00:39)
I don’t know yet. Probably every month. We want it to be fast enough that you’re not too attached, but we wanna make it rewarding. Like, the trick is coming up with not why am I upset that the world resets, but why am I excited that the world- … resets? And we know players can get very angry about resetting worlds, but anybody who’s played 5,000 hours of Rust, like some of us, the resetting world is the magic. It’s, “I can’t wait for the next reset because the adventure starts all over again.” And if you wanna play the first time with me—like, if we wanna play World of Warcraft, and I’m level 80 and you’re level one, there’s no meaningful experience we can have together—but in Rust, we just wait for a reset and we’re both naked on the beach, you know, from minute one.
Lex Fridman
(05:01:34)
What about the experience of Rust where you can have everything taken away from you? So that part that you-
Jeff Kaplan
(05:01:41)
We’re not doing that.
Lex Fridman
(05:01:42)
Great, great. Because that feels awfully stressful.
Jeff Kaplan
(05:01:44)
See… I just lost the entire Rust audience when I said we’re not doing that because- … if you’re a Rust player, you’re not thinking you’re gonna lose everything you have. You’re thinking, “I’m gonna take everything somebody else has.” But-
Lex Fridman
(05:01:56)
See, my perception of the Rust audience is there’s, like, three people, they’re in a castle somewhere. It’s a very exclusive group.
Jeff Kaplan
(05:02:04)
They are, they are highly skilled, highly passionate… highly knowledgeable, but yeah, it’s an inspiration for me. That and EverQuest were defining… And I’ve… The amount of hours I’ve logged in both those games are insane.
Lex Fridman
(05:02:18)
What do you think has more hours from Jeff Kaplan, EverQuest or Rust?
Jeff Kaplan
(05:02:22)
Well, you said I was 6K on EQ, so that puts me at… I’m at 5K in Rust.
Lex Fridman
(05:02:30)
And, and also in that collection is Zork.
Jeff Kaplan
(05:02:33)
Zork was… I mean, Zork, it just brings me back to that old IBM PC with my mom and my brother, trying to figure out, you know, like, how to keep the lights on or else a grue’s gonna eat us, you know?

AI and future of video games

Lex Fridman
(05:02:47)
Yeah. So certain games just capture your heart and they stay with you forever. What do you think is the future of video games? So there’s a lot of conversations about AI helping expand maybe the storytelling aspects, the world creation aspects, becoming a tool that people can use more. Maybe creating more believable NPCs, that kind of thing. But also, as we’ve talked about, the video game industry is changing and evolving and trying to figure out, well, there’s the indie game makers that will have more power… Or these larger game makers will have more power, so what do you think the future of games looks like?
Jeff Kaplan
(05:03:32)
With AI in mind in particular, I think the current state of AI, trying to integrate it into development, is mostly a hot mess.
Jeff Kaplan
(05:03:44)
But I do think that, you know, games are a technology-driven art form. And somebody much smarter than me once described it—and I’m paraphrasing—making a game is like making a movie if you had to invent the camera every time, because you’re kind of inventing the technology of your specific game. And I think AI can play a role in that, and it would be silly not to look at it as an option. The problem with AI right now is it’s overconfident in what it tries to deliver. Like, I fooled around, obviously like everybody, you mess around with, you know, ChatGPT and Gemini and you fool around with some of the art generation, and it’s fun for non-artists to fool around on Midjourney. But it’s mostly weird and shitty.
Jeff Kaplan
(05:04:44)
And even, like, when trying to have AI answer for me… Like, I don’t normally make UI in a game, and so I’m trying to figure out, like, UMG in Unreal Engine, and I’m asking ChatGPT how to fix, like, a simple problem, like, how do I make the chat wrap, you know? And it, like, overconfidently gives me the wrong answer. And it’s, like, right one in 10 times. So its hit rate has to be a lot better. I think there’s a lot of moral concerns around AI when it comes to creative pursuits as well, like no one’s creative work should ever be used by AI without their permission.
Jeff Kaplan
(05:05:33)
You know, voice actors and artists, it can’t be lifting from them without their permission. That’s just immoral. It’s no different than just sort of stealing. So that’s wrong. What I’m curious about, especially as somebody who runs a small studio with 34 people, is: what are the points of tedium that maybe AI could help out with, that I don’t wanna do and I’m not gonna hire someone to do? So I have, like, a really dumb example: I’m making a bunch of images, I size them all incorrectly ’cause I’m dumb and I’m not an artist, and I did it all in Photoshop, and I have like 2,000 images that are the wrong size. I can have ChatGPT resize those and zip it in a file for me, and it literally takes it like a minute to do that.
Jeff Kaplan
(05:06:31)
I wasn’t gonna hire an intern to do it. I was just gonna work an hour later or two hours later that night to do it. Like, it made my life easier. It didn’t take a job. That seems okay. As long as that ethical line stays in place, what I don’t worry about is, no matter how good AI gets, it’s never gonna draw a picture like Arnold Tsang. It’s never gonna tell a story like Chris Metzen. You know, that human spirit is irreplaceable.
Lex Fridman
(05:07:03)
Yeah, it’s hard to put into words what is that magic that humans produce, but they do. Truly great creative minds, truly great creative teams, they create something special. It’s hard to really articulate exactly what’s missing with AI, you know, what people call AI slop. ‘Cause it creates really beautiful imagery and beautiful stories, and very believable text. But it’s not quite… It doesn’t have that, I don’t know what it is, the edge that’s human. Maybe it’s the imperfections.
Jeff Kaplan
(05:07:41)
Yeah, I think so. Like AI to me right now currently, it’s like an interesting fever dream, you know?
Lex Fridman
(05:07:48)
Yeah. Yeah. Yeah.
Jeff Kaplan
(05:07:49)
That’s the point I’m at with it.
Lex Fridman
(05:07:52)
And a useful tool for the mundane tasks, like you said. But do you think the small studios have hope in the future of gaming?
Jeff Kaplan
(05:08:00)
Small studios are the future of gaming. The big studios basically acquire the small studios for new IP and ideas, and the small studios grow into them. The really compelling, new, innovative ideas are gonna come out of small studios.
Lex Fridman
(05:08:17)
What advice would you give to video game creators, small teams, if they wanna create a truly special game?
Jeff Kaplan
(05:08:25)
Well, they know how to do it. I mean, if they’re doing it, they know how to do it. It’s more to video game developers in general, own the craft. Own our art form. Stop giving it to these fucking corporate jackals. You are the golden goose. Keep your eggs.
Lex Fridman
(05:08:51)
Jeff, formerly from the Overwatch team, I have to say from the bottom of my heart, and I think I speak for millions of people, thank you for everything you’ve created in this world. Now that I’ve gotten the chance to see the new game, I’m, I can’t tell you how excited I am to try it. Thank you for everything you’ve created. Thank you for everything you represent. Thank you for remaining and fighting for us as one of us. So thank you, and thank you for talking today.
Jeff Kaplan
(05:09:24)
Thank you, Lex.
Lex Fridman
(05:09:26)
Thanks for listening to this conversation with Jeff Kaplan. To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, let me leave you with some words from Franz Kafka, “Don’t bend. Don’t water it down. Don’t try to make it logical. Don’t edit your own soul according to the fashion. Rather, follow your most intense obsessions mercilessly.” Thank you for listening, and hope to see you next time.

#492 – Rick Beato: Greatest Guitarists of All Time, History & Future of Music

Rick Beato is a music educator, interviewer, producer, songwriter, and a true multi-instrument musician, playing guitar, bass, cello & piano. His incredible YouTube channel celebrates great musicians & musical ideas, and helps millions of people fall in love with great music all over again.

Transcript:
https://lexfridman.com/rick-beato-transcript

EPISODE LINKS:
Rick’s YouTube: https://youtube.com/RickBeato
Rick’s X: https://x.com/rickbeato
Rick’s Instagram: https://instagram.com/rickbeato1
Rick’s Website: https://rickbeato.com
Rick’s Ear Training: https://beatoeartraining.com
The Beato Book: https://beatobook.com

OUTLINE:
(00:00) – Introduction
(00:28) – Sponsors, Comments, and Reflections
(09:17) – Guitar solos
(13:16) – Gypsy jazz and Django Reinhardt
(14:48) – Bebop jazz
(19:00) – Perfect pitch vs relative pitch
(23:37) – Learning to play guitar
(47:08) – Miles Davis
(52:34) – Bass guitar
(53:41) – Greatest guitar solos of all time
(1:22:56) – 27 Club
(1:27:37) – Elton John
(1:30:51) – Metallica
(1:35:21) – Tom Waits
(1:41:12) – Greatest rock stars
(1:44:35) – Beethoven
(1:51:10) – Bach
(1:54:01) – AI in music
(2:07:52) – Sabrina Carpenter
(2:11:23) – YouTube copyright strikes
(2:16:59) – Spotify
(2:27:51) – Guitars
(2:32:13) – Advice

Transcript for Rick Beato: Greatest Guitarists of All Time, History & Future of Music | Lex Fridman Podcast #492

This is a transcript of Lex Fridman Podcast #492 with Rick Beato.

Table of Contents


Introduction

Lex Fridman
(00:00:00)
The following is a conversation with Rick Beato, legendary music educator, interviewer, producer, songwriter, and a true multi-instrument musician, playing guitar, bass, cello, and piano. Rick, with his incredible YouTube channel, celebrates great musicians and musical ideas, and helps millions of people, including me, fall in love with great music all over again. This is Lex Fridman Podcast. To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Rick Beato. You had, I think, an incredibly fun and diverse beginning to your music journey.

Guitar solos

Lex Fridman
(00:00:49)
I heard somewhere that one of the things that made you fall in love with music was listening to guitar solos, some epic guitar solos. What’s an early guitar solo that you remember you connected to spiritually, musically, where you’re like, “Wow, there’s magic in this”?
Rick Beato
(00:01:07)
Well, the first solo that I learned was Hey Joe. It was actually a good beginner song, you know, when I first started playing the guitar, because it has pretty simple chords, right? So it’s like E, C, G, D, A. And I learned the solo, and I figured out this, like, I’ll say it’s this pentatonic scale, the E minor pentatonic scale, though I didn’t know that’s what it was called. But I learned this thing, and it’s like, “Whoa, he’s just in this one shape here.” Now, there was no… You couldn’t go look anything up. You just, if you could figure out the notes, you noticed that there was a little pattern to it.
Rick Beato
(00:01:42)
And then I got so obsessed with it, and I showed my younger brother John, who started playing guitar right at the same time I did. So I was 14, he was 11. And I would play rhythm for him for five minutes while he would solo over Hey Joe. And then as soon as I’d start soloing, he’d throw the guitar down, then we’d get in a fight. And so my mom eventually was like, “What is going on here?” And I was like, “John won’t play rhythm.” “John won’t play rhythm for me.” She’s like, “Okay, I’ll play rhythm for you. What, what are the chords?” And-
Lex Fridman
(00:02:17)
That’s awesome.
Rick Beato
(00:02:17)
… I was like, “Okay, it’s like E, C, G, D, A.” And so my mom would literally play rhythm for 20 minutes while I’d play.
Lex Fridman
(00:02:25)
Hashtag parenting.
Rick Beato
(00:02:27)
That’s amazing. When I look back on it now, my mom’s been gone for 10 years now. When I look back on it, it’s like, “My God, my parents were so cool.”
Lex Fridman
(00:02:36)
We should mention that Hey Joe, and Hendrix in general, is kind of known for the rhythm not being simple rhythm, just the chords that you mentioned. It’s what you do with those chords. It’s almost improvisation, the rhythm side.
Rick Beato
(00:02:47)
He did all those really cool chord fragment riffs and things like that, that’s just part of his… That’s the Hendrix style.
Lex Fridman
(00:02:54)
What do you think? I mean, many people put Hendrix as the greatest guitarist of all time. What do you think is part of that?
Rick Beato
(00:03:00)
You know, I make lists.
Lex Fridman
(00:03:02)
You do. If you somehow don’t know who Rick Beato is, go on YouTube right now and watch your excellent interviews with musicians, watch your breakdown analysis of different songs, and watch your top 20 lists, where you’re very opinionated, sometimes very openly critical about certain kinds of songs. It’s fun. Opinions are fun.
Rick Beato
(00:03:27)
But they do change, Lex, from day to day.
Lex Fridman
(00:03:30)
Yeah, exactly.
Rick Beato
(00:03:31)
You know, like I… But when, anytime I do a list, if I do 20, I like to do 20 because that gives me some leeway to throw in. I have to throw in something that is so weird that people, you know… Something that a lot of people won’t know, just to have it on there, so I can at least introduce a person. You know, I’ll put somebody like Allan Holdsworth, who’s a famous fusion guitar player. I’ll throw in one of his solos or something—just some, some oddball solo in there, just so that people, as they’re listening down the list, will get exposed to something they would not necessarily get exposed to.
Lex Fridman
(00:04:05)
Yeah, a lot of variety. But Hendrix… Did you show up here today, Rick, try to tell me that Hendrix is not up there? I just am getting that vibe right now.
Rick Beato
(00:04:16)
No, I’m not. But I don’t want to say greatest, you know… You can say, well, there are people that inspired Jimi Hendrix. Charlie Christian, older guitar players. Charlie Christian and Django Reinhardt were the first two really big ones, and probably Andrés Segovia—those were three of the giants of the 20th century, as far as guitar influences for most of the players that were to follow.

Gypsy jazz and Django Reinhardt

Lex Fridman
(00:04:43)
So here, going to Perplexity, Django Reinhardt was, of course, a jazz guitarist and composer, active mainly in France, and is widely regarded as one of the greatest guitarists in jazz history.
Rick Beato
(00:04:54)
So, Django was… Well, there’s a huge movement right now, the Gypsy Jazz Movement, as they call it- … that is kind of built around this style of music that he played back in the early 20th century. One of the things about Django is that he was in a fire, and two of his fingers, his third and fourth, so his ring finger and pinky, were essentially melted together. He had no use of them. Although he could use them while he was chording, but a lot of these incredibly fast lines, he’s just playing with two fingers. And it’s amazing.
Lex Fridman
(00:05:44)
That… What is that? So that’s Gypsy Jazz.
Rick Beato
(00:05:48)
That’s Gypsy Jazz, yeah. Him, and Stéphane Grappelli, a violinist that played with him a lot.
Lex Fridman
(00:05:58)
How much of this is improvisation?
Rick Beato
(00:06:01)
Everything he’s doing there is improvised.

Bebop jazz

Lex Fridman
(00:06:07)
It feels so free. And fun, like swing, and, like you said, pre-bebop. So bebop was a kind of jazz that was also influential on you in your own life journey. And it’s this complicated, legendary kind of jazz that was very influential on the music that followed. So what was bebop?
Rick Beato
(00:06:29)
Well, after the big bands were happening in the, you know, from the ’20s through the ’40s, people would go out and play in small groups that they would tour with. And Charlie Parker, who’s really kind of the, one of the main figures of early bebop, really developed the language of it. Usually, the music that they’re playing over are standard chord progressions- … that they would use as vehicles to improvise over. A lot of them were AABA form. And Charlie Parker created this language of improvisation that was far more sophisticated than the swing players of the big band era. You know, think of people like Benny Goodman of that era. They would have really fast tempo songs, angular lines, chromaticism, things like that, chromatic notes.
Lex Fridman
(00:07:24)
Chromatic notes are just notes next to each other on-
Rick Beato
(00:07:27)
Next to each other, yeah.
Lex Fridman
(00:07:27)
… on the keyboard.
Rick Beato
(00:07:28)
I like to think of it as connecting notes.
Lex Fridman
(00:07:30)
Connecting. You’re putting in more notes than are supposed to be there and, in so doing, creating some interesting texture.
Rick Beato
(00:07:36)
Yeah, so that is one of the most difficult styles to master, because all these things are a language. Blues playing, they’re all just languages, right? It’s like, just like you’d learn any type of language. My dad loved bebop. Now, when I was a little kid and he’s listening to these bebop records, whether it’s Charlie Parker or Dizzy Gillespie or Oscar Peterson, Joe Pass, great jazz guitar player, I’m just hearing this stuff. I don’t know any different. My dad was not a musician, but for some reason, he liked incredibly sophisticated-
Rick Beato
(00:08:11)
… music that was very technical. And I just heard it and just was like, “Oh, yeah, okay, cool.” And not realizing that it was developing my ear, because I really, bebop is one of the hardest to improvise in that style, in that language of bebop. It’s very difficult to do. And hearing it as a kid is one of the things that I think enables you, just like languages, enables you to learn it as opposed to somebody that’s never been exposed to it and tries to learn it as a teenager. So I think it’s very similar to learning languages, which kinda is like my theory on perfect pitch, that every child is born with perfect pitch. And they start to lose the ability around nine months-
Rick Beato
(00:09:05)
… when people become culturally bound listeners, when babies do. They start out as citizens of the world, you know? They have the neural pathways to hear the sounds, the phonemes of all 6,500 languages spoken on Earth. But then around nine months, they begin to lose that ability and they, when they become these culturally bound listeners, there’s a great YouTube video with this woman, Patricia Kuhl. She’s a language researcher. And I watched this, “The Linguistic Genius of Babies.”
Rick Beato
(00:09:40)
I saw this in 2010, this lecture that she did, like a TED Talk, and she talks about this, that kids, they did an experiment. They exposed kids to Mandarin three times a week for 25-minute sessions, just a person speaking Mandarin to these babies. And they were able to recognize the sounds, the phonemes of that language even later on. And when I realized that my son Dylan had perfect pitch, I thought, “Why does Dylan have perfect pitch but no one in my family had ever had perfect pitch?” And I thought, “Well, it must be because of the things I exposed to him prenatally and then in the first nine months of his life.” ‘Cause that’s the only way I could explain it.

Perfect pitch vs relative pitch

Lex Fridman
(00:10:27)
We’re gonna return to Joe Pass. We gotta go to Dylan. You mentioned Dylan. I guess that’s in part one of the origin stories of you putting out videos into the world, is the early videos you did with Dylan, a set of videos on his perfect pitch. And for people who don’t know, maybe you can speak to what perfect pitch means.
Rick Beato
(00:10:45)
It’s the ability to identify any note without a reference tone. So you can play, it doesn’t matter how quickly they’re played, a person with perfect pitch can hear a note and immediately identify it. Or a collection of notes.
Lex Fridman
(00:11:03)
And taking a tangent upon a tangent, you also have a course on ear training.
Rick Beato
(00:11:06)
Yes, but my course is for relative pitch-
Lex Fridman
(00:11:08)
Relative pitch
Rick Beato
(00:11:08)
… not to be confused with perfect pitch.
Lex Fridman
(00:11:10)
Is it fair to say that relative pitch, as far as the thing you would learn, is more useful-
Rick Beato
(00:11:14)
Yes
Lex Fridman
(00:11:14)
… for musicians?
Rick Beato
(00:11:15)
Yes.
Lex Fridman
(00:11:15)
Can you explain the difference between the two?
Rick Beato
(00:11:17)
Relative pitch is basically learning how to identify pitches relative to a stated tonic or something that you’ve heard, or just relative to each other. If you hear a note and then you hear another note after it, you can recognize, let’s say, it’s a minor third interval. So if you’re on the note A, the next note would be C. So once you’re given a reference note, you can use relative pitch to identify the relative nature from one pitch to another.
Lex Fridman
(00:11:46)
And of course, intervals make up scales, and intervals make up chords-
Rick Beato
(00:11:51)
Chords, yup.
Lex Fridman
(00:11:51)
… and so that if you develop it to any degree, relative pitch, you can understand, you can hear the music better. What does it take, since we’re taking a tangent on a tangent, what does it take to train your ear? What’s a TL;DR on the course before people go out and sign up?
Rick Beato
(00:12:13)
It’s just practice, basically. You start with intervals. Typically with small intervals like minor second, major second. So minor second would be a half-step, major second would be a whole-step.
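[For readers who want the arithmetic behind what Rick is describing: interval names correspond to half-step (semitone) distances between notes. This is just an illustrative sketch, not anything from his course; it uses sharps only and stays within one octave.]

```python
# Naming an interval by counting half-steps (semitones) between two notes.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Index = number of half-steps above the lower note.
INTERVAL_NAMES = [
    "unison", "minor second", "major second", "minor third", "major third",
    "perfect fourth", "tritone", "perfect fifth", "minor sixth",
    "major sixth", "minor seventh", "major seventh",
]

def interval(low: str, high: str) -> str:
    """Name the interval from `low` up to `high`, within one octave."""
    semitones = (NOTES.index(high) - NOTES.index(low)) % 12
    return INTERVAL_NAMES[semitones]

print(interval("A", "C"))  # A up to C is 3 half-steps: minor third
print(interval("C", "D"))  # a whole-step: major second
```

So the minor second and major second Rick starts with are distances of 1 and 2 half-steps, and the A-to-C example from earlier comes out as a minor third.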
Lex Fridman
(00:12:23)
Are you listening to the tone one after the other or two of them together?
Rick Beato
(00:12:26)
Both. So played separately, it’s called melodic intervals, right, like a melody? And harmonic intervals are played like a harmony, together. So you have to be able to identify them both, both ways.
Lex Fridman
(00:12:38)
What’s an early journey? Like, we’ll give people a preview of what they should… Like, what does that look like? What does practice look like?
Rick Beato
(00:12:44)
Well, my course, it will play you an interval, and then you identify it by clicking on whether it’s, you know, a major third, or minor third, or major sixth, or minor sixth, or perfect fifth, or tritone, whatever it is. And it will teach you gradually, over time, how to recognize all the intervals.
Lex Fridman
(00:13:02)
So you listen to a melodic interval or a harmonic interval. How quickly does the ear in the various age groups that we humans are in, how quickly does the ear learn the different intervals? Is it a week? Two weeks? A month? Two months? Five years?
Rick Beato
(00:13:23)
I think you’d do it pretty quickly. Within, you know, if you practice, within a couple of months, you can really make a lot of progress on it, if you practice daily.
Lex Fridman
(00:13:33)
What benefit does it have to you as a musician in general?
Rick Beato
(00:13:36)
Well, it’s great if you wanna hear a chord progression if you’re trying to figure out a song. And you can say, “Oh, that’s going from the six minor chord, or the four major, to the five major, to the one major.” And you can just identify it immediately, and then you figure out what the first chord is, then you know what the rest of the chords are ’cause they’re in relation to whatever that first chord is. And for learning solos, for example, or learning melodies, being able to sound something out.
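[The “six minor, four, five, one” shorthand Rick uses can be made concrete with a small sketch: once you know the key, each scale degree maps to a chord root. Illustrative only, sharps-only spelling, major keys.]

```python
# Turning a scale-degree progression like "six minor, four, five, one"
# into chord names, given the key.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = (0, 2, 4, 5, 7, 9, 11)  # semitone offsets of degrees 1-7

def degree_chord(key: str, degree: int, minor: bool = False) -> str:
    """Name the chord rooted on a scale degree of a major key."""
    root = NOTES[(NOTES.index(key) + MAJOR_SCALE[degree - 1]) % 12]
    return root + ("m" if minor else "")

# The vi-IV-V-I progression described above, in the key of C:
progression = [
    degree_chord("C", 6, minor=True),  # six minor
    degree_chord("C", 4),              # four major
    degree_chord("C", 5),              # five major
    degree_chord("C", 1),              # one major
]
print(progression)  # ['Am', 'F', 'G', 'C']
```

This is why identifying the first chord is enough: the rest of the progression is fixed relative to it, in any key.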
Lex Fridman
(00:14:01)
Now, do you recommend people couple that with music theory in terms of education, the education journey?
Rick Beato
(00:14:09)
They have to be taught together because these terms are really music theory, right? Those intervals: major second, minor second, major third, minor third, perfect fourth. So as you’re doing that, and then you… Once you learn the intervals, the 12 intervals in an octave, then you learn them both melodically and harmonically, so played together and separate. Then you learn chords, and so then you learn to identify major, minor, diminished, augmented, suspended chords, things like that. Well, you’re basically learning music theory at the same time with that. Because learning… Music theory is just the name of things in music.
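[The chord qualities Rick lists are themselves just stacked intervals, which is his point about theory being “the name of things.” A hypothetical sketch, again sharps-only:]

```python
# Triad qualities as stacked-interval patterns, in semitones above the root.
TRIADS = {
    "major":      (0, 4, 7),  # root, major third, perfect fifth
    "minor":      (0, 3, 7),  # root, minor third, perfect fifth
    "diminished": (0, 3, 6),  # minor third, diminished fifth
    "augmented":  (0, 4, 8),  # major third, augmented fifth
    "suspended":  (0, 5, 7),  # sus4: the third replaced by a fourth
}

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def spell(root: str, quality: str) -> list[str]:
    """Spell a triad of the given quality up from the root."""
    start = NOTES.index(root)
    return [NOTES[(start + step) % 12] for step in TRIADS[quality]]

print(spell("C", "major"))  # ['C', 'E', 'G']
print(spell("A", "minor"))  # ['A', 'C', 'E']
```

The only difference between a major and minor triad is one half-step in the middle note, which is why interval ear training and chord ear training end up being the same skill.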
Lex Fridman
(00:14:48)
So there’s the sound of things. There’s the name of things, and then there’s the haptic, like playing the thing- … probably. So playing chords, playing scales, you have, I believe, a course on scales and on chords? Okay. Since we’re doing the tangent, let’s go. How do you recommend people… There’s a bunch of people listening to this that are curious about how they can start in playing guitar, maybe even playing piano and maybe playing other instruments. Although guitar, of course, is the greatest instrument of all time.

Learning to play guitar

Rick Beato
(00:15:19)
Absolutely.
Lex Fridman
(00:15:19)
What are the early steps of that journey? What do you recommend people do in general?
Rick Beato
(00:15:23)
Well, if you’re a beginner getting a good beginner guitar course and learning, first of all, the open chords in first position. A lot of songs can be played that way. A lot of old songs can be played that way, maybe not new modern songs necessarily.
Lex Fridman
(00:15:38)
So learning a few chords and with an eye towards maybe playing a song?
Rick Beato
(00:15:44)
Yeah. With an eye towards… You learn the chord shapes and you learn how to strum basic patterns to begin with. I think the first thing for learning guitar is actually how to position your fingers so that you don’t mute strings that you don’t want to mute. That’s the hardest thing for people to do, basically, is to get their fingers arched to where they… If you’re playing a C major chord, your index finger’s on the first fret of the B string, and you have to have that open E string ringing there. And it’s hard for people to make those micro-adjustments. You take it for granted, like, you’ve been playing guitar- … for, I don’t know, how many years? Forever, right?
Lex Fridman
(00:16:21)
Forever, yeah.
Rick Beato
(00:16:22)
And you don’t even think about stuff like that when you’re playing a guitar solo. Every little thing that you do if you’re playing your Comfortably Numb guitar solo- … you have to, out of mid-air, strike the string that your finger’s on to play the note. And these are all fine adjustments that you’re doing.
Lex Fridman
(00:16:39)
I’m just a hobbyist recreational player, but it… Wow, you’re taking me all the way back. You’re right, it’s the haptic, the physical aspect of it is really tricky. Comfortably Numb is a good example, but if you do lead, you have to get a super clean sound. Now, that’s both when you’re playing fast, you want it to be super precise, but when you play slow, when you have one note, and you’re holding it, and you’re bending it- … it better be really clean.
Rick Beato
(00:17:06)
Yes.
Lex Fridman
(00:17:06)
And for that, it’s… I guess you have to really place the finger in the right place. Plus, there’s the… Well, there’s the calluses, so it doesn’t hurt. And then the positioning of the string on the curvature of- … of the finger. Where does it fall? Like, how much do you bend the finger?
Rick Beato
(00:17:24)
You have to have enough flesh on it to actually raise the string in pitch.
Lex Fridman
(00:17:30)
Yep. Yep.
Rick Beato
(00:17:30)
Otherwise it-
Lex Fridman
(00:17:31)
Yeah, ’cause you’re lifting it with part of the flesh. And of course, you have to decide, depending on how OCD you are, do you wanna be, like, the perfect, proper musician? Or do you wanna do a Hendrix, with the thumb over the top?
Rick Beato
(00:17:46)
Way over the top, yes.
Lex Fridman
(00:17:48)
And so, like, you… Like, if you have a fretboard here, I think the more, like, classical guitarists have the very proper, perfect perpendicular alignment of the fingertips to the fretboard, versus, like, Hendrix’s, like, “Fuck it. You nerds. I’m gonna do it.” With the messiness is part of the magic. Of course, like, B.B. King is also kind of messy looking in terms of his positioning of the fingers, but his tone is incredibly clean.
Rick Beato
(00:18:22)
Yes, super clean.
Lex Fridman
(00:18:23)
So, like, that teaches you that maybe any position can converge towards a super clean tone. You just have to figure it out.
Rick Beato
(00:18:30)
I think a lot of it has to do with how they wear their guitars. If you wear your guitar low, if you’re Hendrix and you’re wearing your guitar-
Lex Fridman
(00:18:37)
That’s true.
Rick Beato
(00:18:38)
… if you’re wearing it lower, lower, then you can’t get your fingers on top of it like that. And, the thumb acts as a way to mute the lower strings from ringing if you’re playing through a loud amplifier. So there’s so many other micro-adjustments when you’re playing leads, ’cause you have to kind of mute the other strings that are… so they don’t ring out— … if you’re pl- playing the first note in Comfortably Numb and the solo at the end, and you’re at the ninth fret of the G-string, and you bend that- … if you bend that G-string and you accidentally hit the B-string under it- … you don’t want that ringing. So you have to kind of angle your index finger so it-
Lex Fridman
(00:19:18)
To mute-
Rick Beato
(00:19:18)
… to mute that. So all these micro-adjustments that you don’t even think about… I mean, you’re not thinking about that, Lex, when you’re playing it. You’ve done it so many times that these things are just part of your brain. That’s why this is such a great brain developer for kids to learn instruments.
Lex Fridman
(00:19:36)
Yeah, of course, you have to solve that puzzle. It must be really frustrating in the beginning, like holding a chord. Like all of ’em, and it hurts too, right?
Rick Beato
(00:19:46)
It does hurt.
Lex Fridman
(00:19:46)
If you’re doing acoustic guitar.
Rick Beato
(00:19:47)
Not for that long, though. For like a week.
Lex Fridman
(00:19:51)
Couple, couple, yeah.
Rick Beato
(00:19:51)
Couple weeks.
Lex Fridman
(00:19:53)
Couple.
Rick Beato
(00:19:55)
I don’t want to discourage anyone, you know. It’s actually pretty easy to learn basic stuff.
Lex Fridman
(00:19:59)
Right, but the pain is temporary, I guess is the point I’m trying to make. So, what else? So the physical component, play a few chords, where does the journey continue if you’re learning guitar?
Rick Beato
(00:20:11)
Well then, it’s like if you play electric guitar, then you get into single note playing and stuff like that. That’s where it gets, to me, where it gets really fun. You know, you have single note playing with riffs, if you think of Back In Black, right, that has a riff embedded in the actual melody. Or many songs that have riffs, the Hendrix stuff that has chordal riffs, and you’re moving up the neck and involving all the fingers and things like that. So there’s… it really depends on what you wanna, what styles you wanna play.
Lex Fridman
(00:20:44)
So you’re thinking about song learning. So different components of song learning: riffs in songs, leads in songs.
Rick Beato
(00:20:52)
And then you have finger picking, if you have Stairway to Heaven, songs like that. How ’bout wanting to learn that? That involves finger picking, because you have to isolate certain notes of the chord and play two together, you know, and multiple times.
Lex Fridman
(00:21:07)
There’s a few crossroads where you get to select things. So I guess you’re speaking to the fact there’s a… if you’re righty, there’s a right hand that you can use your fingers or you can use a pick. And that’s a choice you make.
Rick Beato
(00:21:20)
And sometimes you use both, ’cause in Stairway to Heaven, you’re using the fingers at the beginning, or fingers and pick, hybrid, they call it hybrid picking, and then later on, you’re using the pick to flat pick the picking patterns.
Lex Fridman
(00:21:34)
On the music theory front, do you recommend people learn scales and chords and like the theory of it?
Rick Beato
(00:21:40)
Later on, I would say. I wouldn’t say necessarily right off the bat. I think learning songs is the first thing that you should do ’cause you want to keep people motivated.
Lex Fridman
(00:21:52)
So you get them to like fall in love with music and playing? All right. And that takes a couple months, three months?
Rick Beato
(00:21:59)
Depends on how motivated they are.
Lex Fridman
(00:22:02)
So you recommend practicing, what, every day?
Rick Beato
(00:22:04)
Every day. My son, Dylan, when he started learning the guitar a couple years ago, I said, “It’s better to practice 10 minutes a day, seven days a week than to practice one day for an hour, which is roughly the same amount of time.”
Lex Fridman
(00:22:19)
Yeah, but it usually turns into something longer. But otherwise, like, if you have a busy life, you know, taking a day off… that day turns into a week, and then a week turns into a month, and all of a sudden you haven’t touched the instrument for months.
Rick Beato
(00:22:33)
Which is why I leave my guitar on a stand all the time, so that if I walk by it, I’m like, “Oh, okay, I’ll just pick it up for a second.” Then that second turns into 10 minutes, and an hour, two hours.
Lex Fridman
(00:22:43)
All right, we gotta talk about this Dylan video. So this might be one of the earliest-
Rick Beato
(00:22:47)
That’s the first one.
Lex Fridman
(00:22:48)
That’s the first video on the channel.
Rick Beato
(00:22:50)
It was actually before the channel, ’cause this actually blew up on Facebook-
Lex Fridman
(00:22:54)
Facebook
Rick Beato
(00:22:54)
… and then I put it on YouTube after.
Lex Fridman
(00:22:59)
So if it’s okay?
Rick Beato
(00:23:00)
Yeah. Okay, Dylan, we’re gonna do the hardest ear training test of all time. Are you ready?
Lex Fridman
(00:23:06)
Ready. Oh.
Rick Beato
(00:23:10)
Now, I… just a quick backstory on this. I made this for my friend Shane’s wife who wanted to see… ’cause Shane was a friend that I was producing, and he was there, and Dylan had come down that day, and I said, “Oh, check this out,” and I played this stuff. He’s like, “That’s amazing. Can you make a video so I can show my wife?” And I was on the way to a school board meeting, ’cause I was on the school board at Dylan’s school- … and I said, “Hey, Dylan, come downstairs. I want to make this video. It’ll take one minute, just need to do this thing for my friend, Shane.” And he’s like, “I don’t want to.” And I said, “Come on, this’ll take one minute.” “I don’t want to.” So I said to my wife, I’m like, “Nia, would you tell Dylan to come downstairs? I want to do this video.
Rick Beato
(00:23:51)
It’ll take one minute.” She’s like, “Dylan, go downstairs.” And he had, he has a mouthful of candy there- … ’cause he was eating candy. So if you look at him, he literally has a mouthful of candy while he’s doing this.
Lex Fridman
(00:24:04)
And we should say, on Facebook it went quite viral.
Rick Beato
(00:24:08)
Yeah, like got-… I don’t know, 80 million views. Something like… it had like 250,000 comments. Something like that. Insane.
Lex Fridman
(00:24:15)
How old is Dylan here?
Rick Beato
(00:24:16)
He’s eight.
Lex Fridman
(00:24:17)
Eight years old? Can you actually give some more backstory about, like, how you discovered that Dylan has perfect pitch?
Rick Beato
(00:24:23)
So when Dylan was about two, he… I was doing a FaceTime with my brother Jon, and I was like, “Check this out, Jon.” And I played the Stone in Love, Neal Schon’s solo from Journey, and I was like, “Check this out.” And Dylan would sing along and my brother Jon was like, “Wow, Dylan can sing all the notes.” And I was like, “Yeah.” Then I played Black Dog, Zeppelin-
Rick Beato
(00:24:45)
… and Dylan would sing that. And it’s like, “Dylan’s got a good ear.” Then Jon and I were like, “Well, we have good ears, too.” So it was probably… Maybe we could have done that when we were that age. So a couple more years go by. He was about three and a half, and I’m in the car. I was like, “Dylan, sing the Star Wars theme.” And he sings it, and I’m like, “That’s in the right key.” And I checked. I play it on my phone, and I was like, “Oh my gosh.” Then I ask him, “Play… Sing the Superman theme.” Because we’d been listening to John Williams soundtracks the week before, and he sings that. And that was in the right key. And I ask him another song. So I turn the car around, I go back to the studio.
Rick Beato
(00:25:21)
I go to the piano, I hit the note B-flat, and Dylan says, “Star Wars.” Star Wars starts on a big B-flat major chord, but the note B-flat is the main one that you hear. And then I play the note G, and he goes, “Superman.” And that’s the first note in the trumpet part of the- … of the Superman theme. And then I realized that he had perfect pitch, and then in five minutes, I taught him the name of the 12 notes. Which he already knew, but he just didn’t know the names.
Lex Fridman
(00:25:46)
Oh, so you just associate the names- … of the thing he knows. What do you think this is in his mind? ‘Cause it’s not just individual notes. He can, like, hear everything. What is that?
Rick Beato
(00:25:56)
He doesn’t see colors. He just says every note sounds completely different.
Lex Fridman
(00:26:01)
Wow. Like you said, maybe it’s a language thing. Because it really is a… He just learned the language.
Rick Beato
(00:26:09)
Yeah, the language.
Lex Fridman
(00:26:10)
There’s-
Rick Beato
(00:26:10)
It’s like native music fluency, if you think of it like that.
Lex Fridman
(00:26:16)
So let’s listen to some of this.
Rick Beato
(00:26:18)
Turn around. Here we go. As fast as you can, we’re going to start with single notes, then we’re going to do some intervals, then chords. Okay, here we go. A. C-sharp. B-flat. C. D. A-flat.
Lex Fridman
(00:26:30)
Okay, good. Two notes at once. Here we go.
Rick Beato
(00:26:33)
C-flat. Great. How about this? B-flat, A. Great. What about this? B-flat, A-flat.
Lex Fridman
(00:26:41)
This is incredible.
Rick Beato
(00:26:42)
Great. How about this? C, B-flat.
Lex Fridman
(00:26:47)
And then how about this?
Rick Beato
(00:26:50)
E-flat. What is it? E, E-flat. Correct. Okay. He’s annoyed. He’s annoyed. The part of this, when I play these next chords, that’s really I think why the video went so viral, the next part of this. Where I play these super complex polychords. Okay, I’m going to do some polychords for you. These are really going to be hard. You ready? What’s this? C augmented over D-flat augmented. Okay, sing a B-flat. Very good. What’s this chord? A-flat major over A major. Great, sing an F-sharp. Excellent. What’s this chord? A minor over D-flat major. Great. What’s this chord? E add9 over F major. Excellent. E add9 over F major. So I had to look at my hand to make sure that that’s what it was- … ’cause they’re all in inversions.
Rick Beato
(00:27:57)
So I think the reason that this went so viral is that the more that someone knew about music, the more that they shared the video. Because these polychords… So the people that were the best musicians looked at it and were like, “Oh my God.” You know, it’s C augmented over D-flat augmented. And the second chord was A-flat major over A major, but they were both in inversion, right? So it was like a first inversion A-flat major chord, first inversion A major chord. And then an A minor over D-flat major, and then E add9 over F major. And for an eight-year-old… I mean, for anyone- … plus they’re all close-voiced. They’re all just right next to each other.
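For anyone curious what those four polychords actually contain, here is a minimal illustrative sketch in Python (not from the conversation): it spells each half of the polychord as note names, using flat-only spellings and ignoring octaves and the close-position inversions Rick describes.

```python
# A sketch of the four polychords from the ear-training video, spelled as
# pitch classes. Flat-only note names; octaves and the close-position
# inversions Rick describes are ignored -- this only shows which notes
# each half of each polychord contains.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

QUALITIES = {
    "aug":  [0, 4, 8],     # augmented triad
    "maj":  [0, 4, 7],     # major triad
    "min":  [0, 3, 7],     # minor triad
    "add9": [0, 4, 7, 2],  # major triad plus the 9th, folded into one octave
}

def spell(root, quality):
    """Return the note names of a chord built on `root`."""
    r = NOTES.index(root)
    return [NOTES[(r + step) % 12] for step in QUALITIES[quality]]

def polychord(upper, lower):
    """Upper chord stacked over lower chord, as one flat note list."""
    return spell(*upper) + spell(*lower)

print(polychord(("C", "aug"), ("Db", "aug")))   # ['C', 'E', 'Ab', 'Db', 'F', 'A']
print(polychord(("Ab", "maj"), ("A", "maj")))   # ['Ab', 'C', 'Eb', 'A', 'Db', 'E']
print(polychord(("A", "min"), ("Db", "maj")))   # ['A', 'C', 'E', 'Db', 'F', 'Ab']
print(polychord(("E", "add9"), ("F", "maj")))   # ['E', 'Ab', 'B', 'Gb', 'F', 'A', 'C']
```

Note how the first one, two augmented triads a half-step apart, covers six of the twelve pitch classes with no common tones, which is part of why it is so hard to hear.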
Lex Fridman
(00:28:42)
Yeah, yeah.
Rick Beato
(00:28:42)
It’s not like, you know, where you can hear them clear. It’s all in the mid-range of the piano. So you have to really listen and you have to… He has to dissect each one. Like, what are the notes being played there, and what is… Like, what’s the theory? ‘Cause he’s actually using music theory- … to dissect them.
Lex Fridman
(00:29:00)
It must be in his brain, those components of the chords all sound different. Like—
Rick Beato
(00:29:06)
Yes.
Lex Fridman
(00:29:06)
… very clearly different. It’s truly incredible. The human mind’s incredible. So you’re saying, like, some part of that is the things you hear in the first few months of life?
Rick Beato
(00:29:18)
I did a thing where I played what I call high-information music. High-information music would be Bach, Well-Tempered Clavier, fugues, yeah, anything Bach. And I would play the Well-Tempered Clavier, and I would play… I have a friend, a Turkish pianist who’s one of the greatest improvisers I’ve ever heard. His name’s Aydın Esen. And I would play Aydın’s improvisations for Dylan. It had very sophisticated harmony and linear things in it. And Keith Jarrett, and mainly jazz, classical, and modern classical music. And then we would listen to rock music once he was born. I’m talking playing it on my wife’s stomach before Dylan was born- … starting at 15 weeks, for 30 minutes a night. And then when Dylan was born, I would sit with him for an hour every morning and listen to music-
Rick Beato
(00:30:11)
… and I would look at him. In order for them to hear these phonemes, apparently, and develop this language, the language acquisition has to involve the social brain. So when kids look at you, when a baby’s looking at you, they’re looking at your mouth and they’re getting social cues from that. And this is also another component of saying, “This is where this word starts and stops. This is how the phonemes are separated from one another. This is how they’re connected.” So I believe that all kids are born with perfect pitch, and then around nine months they begin to lose it if you don’t engage their social brain, making these pitches known. I never played pitches for Dylan and said, “This is a C, this is a B-flat-” “… this is a G.” I just played complex-
Rick Beato
(00:31:04)
… high-information music for him. And played with him.
Lex Fridman
(00:31:07)
And that applies maybe even more generally to high-information language. And it starts before they’re born. I think I saw some of these incredible scientists that work on the neuroscience, the neurobiology, psychology of language in early life. I think a big part is that, in the mother’s stomach, you’re listening to the mother speak.
Rick Beato
(00:31:33)
Yes. That’s right.
Lex Fridman
(00:31:35)
So, like, that’s how on the language side you’re picking up the language already.
Rick Beato
(00:31:39)
That’s right. And you’re picking up the music, musical language. So, native music fluency, you could call it.
Lex Fridman
(00:31:46)
So if the mother’s sitting back and listening to Bach and some bebop jazz, you have a pretty good chance.
Rick Beato
(00:31:54)
Much better chance.
Lex Fridman
(00:31:56)
Okay. All right. So, as we unwind our way back to Joe Pass and bebop. You were, funny enough, talking about what bebop jazz is, and that would be people like Joe Pass. And in your own life, your dad was somehow listening to that kind of incredibly complex and sophisticated music-
Rick Beato
(00:32:16)
But wasn’t a musician.
Lex Fridman
(00:32:17)
Wasn’t a musician.
Rick Beato
(00:32:17)
Which was very weird. I… we never… My… I have six siblings and we could never figure out why Dad liked really sophisticated jazz.
Lex Fridman
(00:32:26)
We just took it for granted at that time.
Rick Beato
(00:32:28)
Yeah, just took it for granted. And my dad passed away in 2004 and we never really talked about that, but he and I used to listen to music together all the time. He’d… we’d put on a record, I’d sit on one side of the room, he’d sit on the other and not say a word. Listen through the whole side A. I’d go flip it over, listen to side B, never say a word. And then get up and go do stuff. And we did that all the time.
Lex Fridman
(00:32:52)
And so the first time you impressed your dad was with the Joe Pass song, right? And by the way, we’ll have to go to this song ’cause people must have forgot, right? People just think you’re like a good communicator or something. They don’t realize how good you are at guitar, how good you are at actually a lot of instruments, but guitar especially. And there’s this video, “The greatest guitar solo, period.” Can you give me some context for this particular intricate, complicated solo? Who’s Joe Pass?
Rick Beato
(00:33:29)
Joe Pass was a guitarist. He lived from 1929 to 1994. And he was one of the greatest bebop players and solo guitar players. So he made a record that this is off of called Virtuoso in 1973 that my dad gave me for Christmas when I was in 10th grade. And he said… And this is not like my dad. My dad worked for the railroad. He was very, you know, few words spoken. Born in 1919. He said, “If you ever learn to play guitar like this, you’ve accomplished something with your life.” And I was like, “What?” So this record stayed… was unopened until about March after Christmas. And one day I was like, “Okay, I’ll open it up.” And I put it on, I start listening to it. And I was like, “Whoa, this is kinda cool.”
Rick Beato
(00:34:17)
And so I said, “I think I can figure out some of this stuff.” So I figured out this thing.
Lex Fridman
(00:34:24)
Is it by ear mostly?
Rick Beato
(00:34:25)
Yeah, just by ear. I didn’t know any of the chords or anything.
Lex Fridman
(00:34:28)
If you can listen to a little bit here.
Rick Beato
(00:34:29)
If you go back to that Brother to Brother, Gino Vannelli thing with Carlos Rios playing, that stuff is incredibly hard. This, I’m starting, I don’t know any of these chords. So I start out … I don’t even know what that chord is, but I figured it out. I just, and it’s weird. I mean, look at that weird barre.
Lex Fridman
(00:34:46)
So you’re just finding like, playing around with the, putting your fingers- … on the various positions.
Rick Beato
(00:34:52)
Right, but trying every combination of fingers. I had never played that chord. That’s a weird-looking chord. And, but I kept … I moved my fingers around till I heard where it sounded like, “Oh, that’s it, definitely.” And I just looked at my hands like, “What is that?” Had no idea what it was.
Lex Fridman
(00:35:08)
So you were connected to the—you were really connected to the music. The … And so that’s why you can hear … It’s not necessarily … Did you even—you didn’t have perfect pitch.
Rick Beato
(00:35:17)
No.
Lex Fridman
(00:35:17)
You, and not even relative pitch?
Rick Beato
(00:35:20)
No, I did not. No, I didn’t know anything about intervals. I didn’t know anything about music theory, anything. This is all just-
Lex Fridman
(00:35:25)
Yeah. You’re just like playing-
Rick Beato
(00:35:26)
Ear
Lex Fridman
(00:35:26)
… around with different shapes. That’s amazing.
Rick Beato
(00:35:27)
That’s right. I mean, look at that weird barre there. But then you get into these things. So that stuff there, I could figure out … And then this. That stuff I could figure out. And then these things here. Those are just inversions of an—but I didn’t know that. I had heard Joe play that on the record. This is the last song on there. I’d listened to it a bunch of times and I started-
Lex Fridman
(00:36:02)
So you just replay over and over and over and over, and you’re, like, trying to replicate it.
Rick Beato
(00:36:07)
Yes. And I’m memorizing every different chord shape. All the chord shapes that I had never played before.
Lex Fridman
(00:36:12)
Would you recommend people do something like that on a really complicated song?
Rick Beato
(00:36:16)
Yeah, but there are so many YouTube videos that you can go and just learn it without having to—Yes. Yeah, I would recommend.
Lex Fridman
(00:36:24)
I feel like the struggle-
Rick Beato
(00:36:25)
The struggle is where it’s at.
Lex Fridman
(00:36:26)
… this is true for education in general. People… Like, there’s all these educators that try to make learning easier and more fun, and all that kind of stuff. Great, wonderful, but part of the thing is the struggle.
Rick Beato
(00:36:41)
Absolutely.
Lex Fridman
(00:36:42)
But yeah, let’s—
Rick Beato
(00:36:43)
I’m sorry, hearing there’s .
Lex Fridman
(00:36:44)
Let’s… You’re nuts.
Rick Beato
(00:36:49)
I heard licks like that all over this, so I knew that that was… and then these licks here, he plays a lot of ideas like that. That’s basically a C9 chord in the top notes of it. So all these are just inversions of the same chord. So if I could play that, then it’s just figuring out the single notes, okay? So… Okay, so if you just take this first part here when he goes… So this intro part is…
Lex Fridman
(00:37:38)
You make it sound so simple when you break it down. And, by the way, Joe Pass, incredible guitar player. Like, this is obvious.
Rick Beato
(00:37:45)
And he improvised all this. He could have played it like this.
Lex Fridman
(00:37:48)
But, you know, the first was the individual notes. Look at that.
Rick Beato
(00:37:54)
Ooh, that’s hard. Maybe just play it like that. That sounds more realistic.
Lex Fridman
(00:38:08)
The amount of different genres that you’re able to replicate is just incredible.
Rick Beato
(00:38:14)
This is just taking the needle, moving it there, then going back a little, oh, there. And then by the end, the record was so scratched. It was—but it was worth it. When I played it for my dad— … he couldn’t believe it. I mean, he didn’t say, “That’s amazing.” He was just like, “Hmm, pretty good.”

Miles Davis

Lex Fridman
(00:38:35)
So what was the role of bebop jazz in the history of music? It seems like it was influential in your life. Another guy you had an incredible interview with: Flea. People should go listen to that. It was a great conversation. One of the things that surprised me is just how many musical genres influenced Flea. And the guy showed up in a Miles Davis T-shirt.
Rick Beato
(00:38:55)
That’s right.
Lex Fridman
(00:38:55)
And-
Rick Beato
(00:38:55)
Bebop.
Lex Fridman
(00:38:56)
And –
Rick Beato
(00:38:56)
Miles Davis played with Charlie Parker- … when he was 18 years old. And that’s… He was… Charlie Parker was really his mentor.
Lex Fridman
(00:39:03)
Can you explain to me why, with many of the folks you’ve interviewed and in general out there, in the world of jazz, all roads lead to Miles Davis? Why he’s such an influential figure?
Rick Beato
(00:39:17)
Because he was the greatest innovator in the history of jazz. He was at the forefront of all these different styles of jazz. I mean, he started as a bebop player, and then he had records like the Birth of the Cool, and modal jazz, and hard bop, and records like Bitches Brew, where he started what I guess you would call fusion. You start to get these records. You had two main groups of Miles Davis. You had the Miles Davis ’50s quintet and the Miles Davis ’60s quintet.
Rick Beato
(00:39:50)
Now, Miles made records with many people, but the ’50s quintet had John Coltrane in it. Had, I mean, had different piano players—Wynton Kelly—but Paul Chambers on the bass, Philly Joe Jones on the drums. And that particular group made just incredibly important records. And then he had his ’60s group, which was Herbie Hancock on the piano, Ron Carter on the bass, Tony Williams on the drums, and Wayne Shorter on the saxophone. And they made all these incredibly important records.
Lex Fridman
(00:40:25)
I forget who said it in an interview with you, but they talked about like Miles Davis, his music feeling like, I think, toes hanging over the cliff or something like this. Meaning, like, there’s always a risk, a danger that you’re willing to take, to fuck it all up live. And that feeling is what creates the aliveness of the music. Like, can you speak to that? Just the creating in the music, the feeling like you’re on the edge. Like, you’re challenging the possibilities of what can happen, and it all can go to shit, and because of that, it feels alive.
Rick Beato
(00:41:09)
Well, when I interviewed Ron Carter, who played in Miles’s ’60s quintet, I asked Ron, ’cause Ron played bass on 2,200 recordings, famous records. And I said, “Did you guys ever rehearse with Miles?” “No, never.” I said, “So, what would you do?” He goes, “We’d just show up at the studio, and he’d have the charts, put them on the stand and we would just roll.”
Rick Beato
(00:41:37)
And I said, “Would you listen to it after?” “No.” And I said, “Well, what about the live records that you did, when you’d record at clubs and things like that?” He goes, “We never knew that we were recording.” He goes, “Maybe I’d see a microphone, a different kind of microphone in my bass amp.” He goes, “Then months later, a record would come out and I’d see it, and I was on it, and I would take it down to the union and say, ‘I played on this record,’ so you get paid for it.” But he said, “We didn’t even know we were recording.” So Miles was always about, you know, don’t think about it, just play.
Lex Fridman
(00:42:14)
That’s crazy. That was on purpose. That was done on purpose. Not to do the rehearsals. None of that.
Rick Beato
(00:42:20)
Yeah, he wanted people to just feel it, play it. Thought is the enemy of flow, as Vinnie Colaiuta told me.
Lex Fridman
(00:42:30)
Thought is the enemy of flow. How do you make sense that Flea, the bassist for the Red Hot Chili Peppers, is influenced by bebop jazz?
Rick Beato
(00:42:38)
So his stepfather was a jazz bass player. He was born in Australia, and then they moved to New York. Then his parents got divorced, and his mom married his stepfather, who was a jazz musician. And they used to have jam sessions at their place, and Flea loved it. It was kind of like my upbringing with my dad, playing jazz all the time. Once it gets inside you, it’s just there. And so he is heavily influenced by jazz musicians.
Lex Fridman
(00:43:19)
Yeah, his impression was just hilarious. I mean, he’s a character. His whole physical way of being is a character. And his impression of just upright bass is just fun to watch. His whole-
Rick Beato
(00:43:27)
His intensity when he picked up his bass during the interview… He’s an intense guy and funny, and you know, really emotional. And he picks up his bass, and there’s a fierceness that you immediately feel. And he talks about how he practices. And then when he starts doing the slapping stuff, he gets so into it. And I’m just sitting there going, “Whoa.” Like, “Wow.”

Bass guitar

Lex Fridman
(00:43:55)
Yeah, he talked about his practicing routine with you. And one of the things, he’s like, “I have to practice the slap.” And- … you know, there’s differences in the structure of the different bands. But usually, like, the bassist has a vibe to them. I don’t know if we can put words to exactly what that is. There’s a kind of energy that drives the band.
Rick Beato
(00:44:14)
To me, the bass is one of the only instruments that, when you play a bad note, everybody notices. I started on the bass- … as a kid.
Lex Fridman
(00:44:23)
Oh, interesting. But you also played drums. You also played-
Rick Beato
(00:44:26)
Yeah, but my first instrument was the cello in third grade. And then I switched to the bass in sixth grade. My undergrad degree is in classical bass. So I always think of myself as a bass player first. And I always think the bass is the most important instrument because-
Lex Fridman
(00:44:45)
Strong words.
Rick Beato
(00:44:46)
… because as much as I love to play the guitar, and I love to play the guitar more than anything, I think, but the bass really defines what the quality of the chord is. ‘Cause you can put the root in there. You can put the third of the chord in the bass. You can put the fifth in there. You can play a lot of notes. And whatever you play in the bass kinda defines what kind of chord it is. So, the bass player has a lot of power.
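Rick's point that the bass note defines the chord quality can be made concrete: keep the same C-E-G triad on top and change only the bass note under it. A minimal illustrative sketch (the chord labels are standard music-theory readings, chosen for this example, not taken from the conversation):

```python
# A sketch of how the bass note recolors the same upper structure:
# the triad C-E-G reads as a different chord depending on what is under it.
TRIAD = "C-E-G"  # a plain C major triad on top

# Illustrative bass choices and a common theory reading for each:
READINGS = {
    "C": "C major, root position",
    "E": "C/E, first inversion",
    "G": "C/G, second inversion",
    "A": "A minor 7 (A-C-E-G)",
    "F": "F major 9, no 3rd (F-C-E-G)",
}

for bass, reading in READINGS.items():
    print(f"{bass} under {TRIAD}: {reading}")
```

The A and F rows are the interesting ones: without touching the triad, the bass player alone has turned a major chord into a minor seventh or a major ninth sound, which is the power Rick is describing.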

Greatest guitar solos of all time

Lex Fridman
(00:45:08)
I have to go back to our, the beginning of our conversation. What do you think are some of the great solos of all time? Can we put a few into consideration? You have a great list of the top 20 rock guitar solos of all time.
Rick Beato
(00:45:22)
Yeah, so I put Comfortably Numb as my favorite, as my top one.
Lex Fridman
(00:45:26)
Yeah, on that day, right?
Rick Beato
(00:45:27)
On that day. Right. Now the day later, I would have said, “It’s the second solo.” But I did the first solo because nobody talks about that solo. And that solo is equally great. And when David Gilmour… When I played it for him, and we talked about it in my interview with him, it was… Just to watch his face when he listened to it was incredible. I mean, I’m thinking to myself, it’s like, I’m sitting with David Gilmour, and he’s listening to Comfortably Numb. And he’s hearing it. He’s played it a million times live, but how many times has he gone back and listened to it on the record? Probably not for a long time. And then he’s hearing it, and he’s like, “Ooh.”
Lex Fridman
(00:46:11)
Maybe you just don’t look back. When you do great things, you don’t look back.
Rick Beato
(00:46:14)
Miles never looked back. He never wanted to hear the old stuff. He always moved on.
Lex Fridman
(00:46:18)
There was this funny moment where you made a video why David Gilmour will never be on the channel. And then you ended up, of course, interviewing him twice. He’s one of the greatest guitar players of all time. What do you think is at the core of his genius?
Rick Beato
(00:46:33)
He has just an incredible melodic sense. He knows how phrases should be put together. There’s a flow to his ideas that I think is just incredible. It’s the same with Hendrix. This flow, how one idea leads to the next, how there’s space between them. It’s just like speaking.
Lex Fridman
(00:46:56)
That’s what I read about Miles Davis is, he’s very good at understanding tempo and the value of silence. And I think David Gilmour doesn’t always play fast. But he does a lot with less. And then some of that is also on the more technical side, probably the tone of the… I mean, he has one of the most uniquely recognizable tones in all of music. What do you understand about what it takes to shape the tone that is David Gilmour?
Rick Beato
(00:47:30)
He has a very sophisticated setup- … for his tone, and that was one of the things when I went to his studio. And I said to him, “So David, is there anything I’m not supposed to see here?” I mean, he never sits down and shows- … people his gear, and he laughed about it.
Rick Beato
(00:47:44)
But there I am, sitting there right next to all these pedals that… And I asked his tech, Phil, I said, “These are the same ones you used on the records?” He’s like, “Yeah.” His tech has been with him for, like, 50 years. And I mean, the exact ones? Yes. It’s just, it’s hard to… It’s hard to imagine that those things still… Of course, though. He’s just kept it. Yeah, this is his Binson Echorec that he played through, and this is this. You know, these are all the same effects pedals. And the… Wait, is this the same Hiwatt amp? Yeah. Is this the same… Yes. Yeah, you get some new stuff. But they keep all their own gear, and that’s… I mean, he does sell his guitars for charity.
Rick Beato
(00:48:29)
But, like, he has a black Strat that is a signature version. It’s like an exact copy of his old one. So to him, it sounds exactly the same, plays the same.
Lex Fridman
(00:48:39)
Well, of course, they converge towards that kind of hardware. But there’s so many tiny details over the years. You see the final result of it, but there’s a, there’s a journey there, of exploring. And of course, he’s not… I guess he’s not doing any soft… Like, no emulation, no amp?
Rick Beato
(00:48:56)
He does do emulation, actually. He does. He has this thing, this is… I asked him in the first interview about this. There’s a little rack thing that I had heard that he used, but I asked him for sure. It’s called the Zoom 9030. I put out a short where he talks about it. I said, “So, that Zoom 9030, is that a real thing?” ‘Cause I’ve read about it. He’s like, “Yeah.” And he talks about how, when he’s sitting there recording on his own… And he runs Pro Tools himself, and so he’ll be sitting there. There’s no one there to help him. He’s like, “I’ll just plug into this thing, and then I’ll play a solo with this model.” It’s like a kind of ’90s modeling, early modeling thing.
Rick Beato
(00:49:37)
And he’ll play a solo, and then after a while, you hear the solo, and it’s like, “Well, I’m not gonna replay that. That sounds great.” You get used to the sound of it, and that’s what it is. So people always talked about, “Oh, well, he couldn’t have used that. He’s recording through an amp,” and… ‘Cause it sounds great. And then he’s like, “Yeah, yeah, so that’s what I use.” And then I have the video of it right there, and it says his presets, DG1 and DG2 and, you know, whatever.
Lex Fridman
(00:50:03)
What’s your process for preparing for interviews like that? You’ve done a few legendary people.
Rick Beato
(00:50:08)
I never prepare for interviews, because I ask people things that I’m interested in knowing.
Lex Fridman
(00:50:17)
So, just letting your curiosity just pull a-
Rick Beato
(00:50:19)
Yes
Lex Fridman
(00:50:20)
… pull you forward?
Rick Beato
(00:50:20)
And I can think of 100 questions to ask David Gilmour… but I always ask my questions based on what they say to me.
Lex Fridman
(00:50:28)
Yeah.
Rick Beato
(00:50:28)
So, but I do make a playlist of songs that I wanna talk about. So, that kind of guides me… ’Cause I wanna make sure that I… There’s specific things that I need to play, so that I can jog his memory. ’Cause anytime you play something that somebody recorded, even 50 years ago, they’ll remember. Even if they don’t remember the exact specifics, it brings it to life for them again. And they can kind of piece together some aspects about it, and they can really talk. He can talk about the phrasing and the, you know, the kind of melodic direction, things like that.
Lex Fridman
(00:51:12)
So, there’s a lot of tiny details that go into a particular song, whether it’s in the production or how it’s played or how it was composed, all that kind of stuff. And you don’t know what those are ahead of time.
Rick Beato
(00:51:22)
No.
Lex Fridman
(00:51:23)
You just know the song, and you just are looking to jog their memory, and maybe your own curiosity of like, “How did you do this?” Or, “How’d you get this sound or that?” You make it look easy, but you have to have a depth of knowledge. And yet you’re saying you don’t prepare.
Rick Beato
(00:51:39)
I have an incredibly good memory.
Lex Fridman
(00:51:41)
Exactly.
Rick Beato
(00:51:41)
That’s what it is. It’s that I can remember when records came out, who produced them, where they recorded them, who was the engineer, what songs are on them. And not only that, but the people I’m interviewing know that I can play all the parts- … of all the instruments, ’cause I’ve done breakdowns of their songs, which is why I get the interviews with them in the first place, really.
Lex Fridman
(00:52:06)
But the actual, like, the skill of the interview, the thing you’re not saying, the preparation, is you listening to bebop.
Rick Beato
(00:52:14)
That’s right.
Lex Fridman
(00:52:14)
It’s the background knowledge, it’s the soul carrying with you, being able to radiate the love of the soul of music.
Rick Beato
(00:52:25)
I will say this, Lex, is that the other thing is that most of these people have a really good sense of humor. When I was, when… The first time I interviewed David in New York, my brother John came along, and he is a massive David Gilmour fan. That’s his biggest influence as a guitar player. And so he said, “You’re interviewing David Gilmour? Oh, I’m coming.” I was like, “All right. Come on. Come on down.” So my brother John’s standing about five feet away. And John is a sales guy, but he… Great guitar player. So John’s like… I was like, “This is David, this is my brother, John.” “David, great to meet you, buddy.” And you know, it sounds like it’s so… He’s a sales guy. And so during the interview, I said, I was like, “Hey, John, what was I gonna ask David?”
Rick Beato
(00:53:08)
“Oh, ask him about the Gilmour effect.” “Oh, yeah, that’s right.” And the Gilmour effect is my thing that I say in the comments section. Anytime anybody plays anything technical, people say, “Oh, yeah, that’s great, but I much prefer David Gilmour.” And so I always call it the Gilmour effect. Anytime I interview, like, Yngwie Malmsteen, anybody that has chops, the negative comments are always, “Well, I prefer David Gilmour.”
Lex Fridman
(00:53:36)
Yeah, yeah.
Rick Beato
(00:53:36)
And I said that, I told David that. He’s like, “Well, maybe they should keep their opinions to themselves.”
Lex Fridman
(00:53:43)
Yeah, a lot of these folks have really wonderful personalities, and it takes a trusted person for them to be able to reveal that personality. So, Comfortably Numb at the top on that day. What else is up there?
Rick Beato
(00:53:53)
Stairway to Heaven. Hey Joe.
Lex Fridman
(00:53:56)
But in that list, your top Hendrix solo is Hey Joe?
Rick Beato
(00:54:01)
It’s the first guitar solo I ever learned, so I had to put it on there. So, I don’t necessarily rank these by… I do those in kind of how important they are to me and my development. So, there’s always a biographical component to these lists. Number three was Kid Charlemagne, a Steely Dan solo by Larry Carlton. Amazing solo, extremely difficult to figure out.
Rick Beato
(00:54:25)
That one I can play, but there’s a few solos on the list that are very hard to play. Stone in Love by Journey, by Neal Schon, has some licks that are very hard to play. And there’s a solo by a guitarist, Carlos Rios, that people don’t know. It’s Brother to Brother, a Gino Vannelli song, and it’s very hard to play and figure out. People don’t know that solo. I put it on my list ’cause I knew that a lot of people were gonna watch it, and they’re gonna get to know what this solo is.
Lex Fridman
(00:55:01)
For me, the sentimental one, my first solo, was Mr. Crowley by Randy Rhoads. I like the musicality of Mr. Crowley, that there is a melodic component to it. You’re playing really fast, but there’s a melody to it. And also, there’s like a legendary nature to the brief time we had Randy Rhoads.
Rick Beato
(00:55:20)
Yes.
Lex Fridman
(00:55:20)
He’s probably one of the greatest guitarists ever.
Rick Beato
(00:55:23)
’56 to ’82, I think. Terrible. He was an absolutely brilliant guitarist, had his own style.
Lex Fridman
(00:55:34)
We should say he was the guitarist for Ozzy Osbourne’s band.
Rick Beato
(00:55:37)
Yeah. And that Mr. Crowley solo is a, is a great solo, great solo. And he’s incredibly influential as a guitar player too, for metal guitar players and I love Randy Rhoads.
Lex Fridman
(00:55:54)
Another guy, so one of my favorites is Mark Knopfler.
Rick Beato
(00:56:00)
Yes. And I did have Mark Knopfler on my list, Sultans of Swing.
Lex Fridman
(00:56:03)
That’s right, you did have-
Rick Beato
(00:56:04)
Now, I had it high on the list, and I’ll tell you why. I would’ve had it lower, but I made it one of the early ones ’cause I wanted people to be like, “Okay, oh, this is a serious list. So Rick’s gonna talk about serious stuff, and Rick’s gonna play along with all these things.” So I wanted to kind of state that at the beginning of the video. I mean, I made the video in one day to do 20 solos. I think I played 19 of them, but the Heart solo that I had on there- … the Nancy Wilson one, I played the video of. And I tried to get a couple of my friends to play the Ice Cream Man Van Halen solo.
Lex Fridman
(00:56:50)
Yeah, it was just-
Rick Beato
(00:56:51)
So I called Dweezil Zappa, and I was like, “Dweezil, can you play the Ice Cream Man solo? I’m making a video about it.” He’s like, “Oh, I’d have to practice that.” Then I called my friend Phil X who’s an amazing guitar player, and he’s like, “No, I’d have to practice that.” I was like, “Come on, man, can’t let me play Ice Cream Man?” The opening lick of Ice Cream Man that he plays is very hard to play ’cause it’s an incredibly long stretch. And it hurt my fingers to do, and Eddie would turn his guitar up like this to play. And plus, it’s a tricky… It just… It’s a tricky rhythm, and it’s such a big stretch. It’s like, “Man, I can’t… That hurts my hand.”
Lex Fridman
(00:57:28)
I just love that that’s the Van Halen solo you have in the top 20.
Rick Beato
(00:57:34)
See, I had to pick something… There’s so many Van Halen solos. My God, I could pick 25 different Van Halen solos.
Lex Fridman
(00:57:42)
But to me, I mean, there really is nobody like Mark Knopfler. His unique guitars, there’s something about his tone. Speaking of Gilmour, there’s just the tone, the care, the timing of the notes. His improvisation, like the live performances of Sultans of Swing, one of his pretty old live performances of Sultans of Swing has actually been going somewhat viral recently. For me, Brothers in Arms, these kind of-
Rick Beato
(00:58:19)
Great.
Lex Fridman
(00:58:19)
… soulful, mournful type of solos, he does really, really well. Also, the interesting instrumentation of Romeo and Juliet. Just so, so many… Just… Truly one of the greats.
Rick Beato
(00:58:31)
Now, obviously the intro to Money for Nothing is one of the greatest. Almost impossible to recreate that because the sound is so unique and his… It’s just improvised. It’s so cool.
Lex Fridman
(00:58:46)
Yeah. There’s certain songs, like Europa by Santana, where Santana can have that tone too. But Mark Knopfler makes me realize just how clean it is. I think he beats B.B. King in my book in terms of the cleanness, the pure beauty, of a single note. It’s like the power of a single note. I don’t know anybody who beats Mark Knopfler.
Rick Beato
(00:59:09)
Well, that thing about being able to recognize somebody from a note. You know?
Lex Fridman
(00:59:15)
Yeah, that’s-
Rick Beato
(00:59:15)
When I hear Brian May, I can immediately recognize it’s Brian May. Incredibly melodic, the tone that he has. Gilmour, Hendrix, everyone that we’re talking about, Van Halen. It’s just, they have that one note. It’s like, “Oh, I know who that is.” And that’s why we’re talking about him.
Lex Fridman
(00:59:35)
That’d be funny. That’d be a good video-
Rick Beato
(00:59:36)
B.B. King, you hear one note.
Lex Fridman
(00:59:38)
… as a test of like how quickly can you recognize just a solo starts playing-
Rick Beato
(00:59:44)
That’s a great… I’m gonna make that video-
Lex Fridman
(00:59:46)
… one note
Rick Beato
(00:59:46)
… tomorrow. Lex, you’ll-
Lex Fridman
(00:59:49)
I don’t know.
Rick Beato
(00:59:49)
The day after tomorrow, you’ll see it.
Lex Fridman
(00:59:52)
I would love to see that.
Rick Beato
(00:59:52)
Can you say, can you recognize these players by one note?
Lex Fridman
(00:59:55)
By one note. I think it’s… I think we’re being a little too aggressive with that. I think you need like two or three or four-
Rick Beato
(01:00:01)
No, no, no, no.
Lex Fridman
(01:00:02)
… or five notes.
Rick Beato
(01:00:02)
I guarantee you. So I was gonna do a video last week where I was gonna play songs in reverse, okay? See if you can recognize these songs in reverse. And I had my two assistants come in. It’s like, “Do you know what song that is?” They’re like, “Oh, that’s Adele.” Like, “What?” Then they’re like, “Oh, that’s Nirvana.” Instantly, they could recognize them. Like, “Well, that’s not worth me doing.” It’s like, yeah, it’s so obvious. You hear the tone of the voice backwards, forwards, it doesn’t matter. You know who it is.
Lex Fridman
(01:00:27)
Oh, interesting. Okay. So it’s about the tone. How could you possibly know from a single note? I guess Van Halen, you can.
Rick Beato
(01:00:35)
One note of B.B. King’s vibrato, you could know. What I’ll do is separate the guitars. I can actually separate the tracks, and I’ll just play one note.
Lex Fridman
(01:00:47)
You think you could, from a single vibrato, you can know it’s B.B. King?
Rick Beato
(01:00:50)
Yes. Well, we’ll see.
Lex Fridman
(01:00:53)
Put it on record, I’m skeptical.
Rick Beato
(01:00:54)
I’m gonna do twenty of them. Can you recognize these guitarists from a single note?
Lex Fridman
(01:00:59)
Could you recognize Stevie Ray Vaughan-
Rick Beato
(01:01:01)
Absolutely.
Lex Fridman
(01:01:01)
… versus Eric Clapton? All right. You might be right. You might be right. Quick 30-second thank-you to our sponsors. Check them out in the description. It really is the best way to support this podcast. Go to lexfridman.com/sponsors. We’ve got UPLIFT Desk, for my favorite office desks, BetterHelp, for mental health, LMNT, for electrolytes, Fin, for customer service AI agents, Shopify, for selling stuff online, and Perplexity, for curiosity-driven knowledge exploration. Choose wisely, my friends. And now, back to my conversation with Rick Beato. What do you think is the best Eric Clapton song? One of the things we haven’t mentioned so far is the importance of lyrics and maybe meaning of the song- … and what it represents, so in that sense, Tears in Heaven.
Rick Beato
(01:01:59)
Well, the story behind that is heartbreaking.
Lex Fridman
(01:02:03)
And then, I personally really love the sound of Wonderful Tonight.
Rick Beato
(01:02:08)
That’s a great song. That’s one of my favorite Clapton songs.
Lex Fridman
(01:02:11)
And as I was listening to it, just doing a whole personal-journey introspection, knowing that I’m gonna talk to Rick Beato, listening to just a bunch of songs, I learned, and it’s embarrassing that I didn’t know this story behind the music, that Eric Clapton was married for a decade to the same woman that George Harrison was married to. And this woman was the muse, the inspiration for, like, so many of the legendary songs of rock- … including Wonderful Tonight, including Layla- … and including George Harrison’s Something. Legendary song also. The same woman. Is she the greatest muse in rock history?
Rick Beato
(01:03:04)
Probably, yes.
Lex Fridman
(01:03:05)
This is great. So in your interviews of musicians and producers, I think the thing you’re ultimately fascinated by is the process, the recording, the production, the songwriting, the different elements of the process. So are there examples of different things that stand out to you from all the interviews you’ve done? And by the way, all the recording and production you’ve done yourself. So on the recording front, on the production front, on the songwriting process front, just things that pop into memory.
Rick Beato
(01:03:42)
When I’ve interviewed the guys that are the producers, like Rick Rubin, Daniel Lanois, Brendan O’Brien, Butch Vig, the thing about producers, as opposed to people that are musicians, is this: if you’re a musician, even if you’re David Gilmour, you do a record, and then you tour, and then you do another record, and maybe years go by. But producers are working on multiple records, sometimes several at a time. Rick Rubin could be working on multiple records, and the variety of things that they do, you can talk to them about. I mean, I can talk to Rick about the Chili Peppers. I can talk to him about Johnny Cash. I can talk to him about Tom Petty, and all these records that I love, and there’s just so many interesting stories that …
Rick Beato
(01:04:29)
I mean, these interviews could go on for days with Rick, and the variety of records that he worked on. And there’s so much knowledge to be gained, for me at least, and I think that the craft of production and recording engineering is something that is not well-documented. Especially since there are so few studios nowadays, where there used to be a mentorship thing, where you’d go and you’d work as an assistant engineer.
Rick Beato
(01:05:03)
And you’d work your way up. I interviewed a guy named Ken Scott that worked with the Beatles. I interviewed him at Abbey Road Studios just two months ago, and he started as a tape op when he was 16. He started on the A Hard Day’s Night record with the Beatles, and he worked his way up, and he said the first time he ever recorded an orchestra was when he recorded the orchestra part of I Am the Walrus.
Rick Beato
(01:05:26)
He set up the mics, and I asked him, I said, “So where was the band?” “Standing right behind me.” The Beatles, right behind him. The guy I’m interviewing at Abbey Road recorded I Am the Walrus there. I mean, he recorded many Beatles songs, and he was 18 years old, and the … I mean, I just can’t even fathom that. They have a little cafe in the basement of Abbey Road, and I said, “Did the Beatles come in here?” He goes, “Oh, yeah, they’d come in here and get coffee, and I remember when they got two microwaves, like the first microwaves, in 1965, and they were amazed by them.” It’s hard to imagine that I’m talking to people that worked on these historic records.
Rick Beato
(01:06:08)
But, you know, they all start with a blank tape or an empty hard drive, and then eventually they’re filled up with this music that you can never imagine not existing, like Stairway to Heaven, or whatever it is.
Lex Fridman
(01:06:23)
Yeah. It’s funny, like, looking back, even probably for them, just to realize they’ve created that magic is hard to believe. ‘Cause you’re looking at a blank thing and then magic comes out, and you don’t even understand. You don’t understand, probably a lot of these artists don’t understand where that came from. They’re channeling some deeper thing.
Rick Beato
(01:06:45)
When I interviewed Brian May, he told me, I can’t even remember if this was, if we talked about it on camera or not, but we talked about Bohemian Rhapsody, and at the very end… There was a thing where he was depressing his whammy bar a little bit, and it sounds like the piano is out of tune. I never noticed it before. He mentioned this to me. And he said it always bothered him. And there’s always something about these songs that bothers people. Even these songs that he-
Lex Fridman
(01:07:16)
These old things, yeah.
Rick Beato
(01:07:17)
Right. There’s always little things- … and they sit and they hear it, and they’re like, “Oh, man. I wish I’d been up a little higher on that,” or whatever.
Lex Fridman
(01:07:23)
I mean, that, that … there’s certain moments in songs that are just unlike anything else. In Bohemian Rhapsody when Freddie Mercury is, “Sometimes wish I’d never been born at all…” And then the guitar comes in. I mean, there’s just nothing like that. That was … That … I don’t even know. I mean, that, that whole thing, you’ve done videos on it. It’s an incredibly complicated composition. It’s, it’s crazy that a popular rock song could be this operatic, so complicated. The other thing akin to that moment is Phil Collins with In the Air Tonight, the drum bridge. Do do do do do do do do.
Rick Beato
(01:08:07)
Yeah. Yeah.
Lex Fridman
(01:08:08)
Yeah. What is that? I don’t understand how you can create that. What is that? Why is that so magical? Why is that so singular inside a particular song, and in rock history, period? Like, these moments, I don’t know, musically, I don’t understand how you create them, ’cause it might be bigger than music. It might be cultural, a bunch of different elements. And plus, it’s him… Like, I’ve seen live performances. He has, like, a headset. He does something. He’s like a telemarketer or something. Like, his whole vibe and look, he doesn’t look like a rockstar, but he is.
Rick Beato
(01:08:47)
Those are hooks when you think about it, right? It’s as much of a hook as the chorus of the song, of any song. That drum thing is something that people wait for, and they air drum to it. Everybody air drums to it, and it is a hook, and those are hard to create. Those moments are really hard to create, and usually they’re done by accident.
Lex Fridman
(01:09:09)
Yes, it’s hard. If you chase it, you’re not gonna get it. In your conversation with Sting, he said something about how modern music is simpler, more minimalistic, and, “The bridge is gone,” I think- … he said. And he said he thought that, “The bridge is therapy.” It’s, like, a chance for you to reflect, I guess, on the verse- … before the chorus comes in.
Rick Beato
(01:09:39)
That’s right.
Lex Fridman
(01:09:40)
It changed my view of the bridge, I suppose, the therapeutic nature of it, at least lyrically. You think he’s onto something? The value of the bridge?
Rick Beato
(01:09:48)
The bridge is a place, I think, where you can kind of change the frame of reference of a song.
Lex Fridman
(01:09:55)
You could probably do anything, I guess.
Rick Beato
(01:09:56)
Lennon used to… He would have some kind of biting lyrics. Like “We Can Work It Out.” So McCartney writes the, you know, “Try to see it my way. Do I have to keep on talking till I can’t go on?” But then the bridge is very Lennon. “Life is very short, and there’s no time for fussing and fighting, my friend. I have always thought that it’s a crime, so I will ask you once again.” I mean, it’s very, you know, very Lennon-esque. That was really kind of a real collaboration between the two of them.
Lex Fridman
(01:10:29)
This is where different parts of the band can clash- … in interesting ways. I mean, the Beatles are the epitome of that. Such … Like, each individual Beatle is a great talent in their own right. How were the Beatles able to create some of the greatest songs of all time all before they turned 30 years old?
Rick Beato
(01:10:51)
I have never been able to figure that out, but I have a theory that- … because PA-
Lex Fridman
(01:10:58)
I have a theory.
Rick Beato
(01:10:59)
Because PA systems were so bad back then- … and the Beatles … People screamed so loudly that the Beatles thought, “Okay. We don’t need … We can’t tour anymore ’cause we can’t even hear ourselves, so we’re just gonna be a studio band.” And maybe we have all these great late Beatles records, from 1966 on, just because they had bad PA systems. And they had no monitors. You know, they’re in Shea Stadium.
Rick Beato
(01:11:28)
People are screaming so loudly they can’t hear themselves. They’re like, “Okay, forget this. We can’t tour. We’ll just make studio records,” so that’s what they did. And in that one year: on August 6th, 1965, they put out Help!. Then on December 3rd of ’65, they put out Rubber Soul. Then on August 5th, they put out Revolver. So within 365 days, they put out three, I think, 14-song records. They wrote and recorded three incredibly important records. They were in the studio. It’s like working out.
Rick Beato
(01:12:04)
They’re practicing their craft every day, writing songs, trying to outdo the other ones, and so you had the perfect thing of four supremely talented musicians, songwriters, singers, and then the best producer you could possibly have, George Martin, and it was just a perfect storm. I think that when I would talk to friends that would just play in local clubs, they’d play four-hour sets five nights a week, and they never lost their voices because they were always working those muscles. And same with the Beatles. They were always in the studio singing every single day, doing takes, and I think that that was part of it, at least.
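Those release dates can be sanity-checked with a quick sketch (dates as stated in the conversation; Revolver’s August 5th falls in 1966):

```python
from datetime import date

# Release dates mentioned in the conversation
help_ = date(1965, 8, 6)         # Help!
rubber_soul = date(1965, 12, 3)  # Rubber Soul
revolver = date(1966, 8, 5)      # Revolver

# Gaps between consecutive releases, and the total span
gap1 = (rubber_soul - help_).days     # days from Help! to Rubber Soul
gap2 = (revolver - rubber_soul).days  # days from Rubber Soul to Revolver
span = (revolver - help_).days        # total span across the three albums

print(gap1, gap2, span)  # 119 245 364
```

So Help!, Rubber Soul, and Revolver really did land within a 365-day window.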
Lex Fridman
(01:12:51)
But you also have this theory, that, you know, the greatest productivity that musicians have is before they turn 30. The greatest sort of creative genius that can come out of the human mind musically is before the age of 30.
Rick Beato
(01:13:09)
Well, I think it’s the same in mathematics, as well, the- … you have this fluid intelligence versus crystallized intelligence. Fluid intelligence up until you’re about, you know, in your late 20s-
Lex Fridman
(01:13:19)
Yeah
Rick Beato
(01:13:19)
… 30 years old, and then crystallized. The crystallized is you’re using your life experience to write things, so you’ll find that composers like Bach, Beethoven, Mozart wrote their most important works at the end of their lives. Beethoven, the late string quartets, the Ninth Symphony, things like that. So, they have a whole lifetime of experience that leads up to this, and they’re not improvising. But for improvising, writing pop songs, I think when your mind is really most active and your brain processing speed is at its pinnacle, that… This is just my theory-
Rick Beato
(01:14:01)
… that people can come up with those kind of ideas. Same with improvising. I think that most jazz improvisers, not all, but most, do their best improvising before the age of 30.
Lex Fridman
(01:14:13)
Creating something new.

27 Club

Rick Beato
(01:14:15)
Yes.
Lex Fridman
(01:14:15)
Truly novel, that requires youth. It’s just a theory though, but it seems to apply. What do you think about the 27 Club? A bunch of the music greats died at 27. Hendrix, Brian Jones, Jim Morrison, Janis Joplin, Amy Winehouse.
Rick Beato
(01:14:33)
Kurt Cobain.
Lex Fridman
(01:14:34)
Kurt Cobain, of course. A big part of music history is linked to drug history. LSD, coke, heroin, weed.
Rick Beato
(01:14:48)
Smoking.
Lex Fridman
(01:14:49)
Smoking.
Rick Beato
(01:14:50)
I think about this a lot. If you go back and you watch videos, the Beatles, any of their movies, they’re smoking all the time. The Get Back documentary, they’re smoking constantly. Go watch any of the MTV Unpluggeds. Nirvana: Kurt Cobain is smoking every second that he’s not playing. Every singer smoked. Every musician smoked. Nowadays, I asked my son, Dylan, “Dylan, does anybody smoke at your high school?” He’s like, “Smoke? Nobody smokes.” It was an absurd question. And back then, that was part of culture.
Lex Fridman
(01:15:24)
It was for everybody. I mean, that was a big transformation over the past 20 years, everybody just stopped smoking. But I don’t think smoking has the kind of hard negative effect that we’re talking about. I mean, I almost would rather have them smoke than some of the other hard drugs. Maybe smoking distracts them from the hard… I mean, heroin and coke, those things really, and alcohol, unfortunately-
Lex Fridman
(01:15:50)
… can be easily abused, I think. It seems like the life of a musician, this dopamine thing of getting on stage and being adored by tens of thousands, hundreds of thousands of people, the high of that, and then the comedown after, is a really hard life, even just neurobiologically. Like, how do you deal with that? You have to be able to control the rollercoaster of your mind, and of course drugs will be a part of that. And you think everything is allowed and everything is possible. And then there’s also culture, depending on who you hang out with, that says certain categories of drugs are good for your creativity.
Lex Fridman
(01:16:37)
And so, naturally, you start to abuse those drugs. I don’t know. I think it’s really interesting the role that drugs have played in the history of music. They have certainly been extremely destructive, but they have also certainly been productive muses, inspirations for some of these folks.
Rick Beato
(01:17:02)
Oh, absolutely. Now, would we want to, you know, advocate people doing things like that to boost their creativity?
Lex Fridman
(01:17:13)
No.
Rick Beato
(01:17:13)
Well, I wouldn’t, but just like smoking, which I think improved people’s voices- I mean really, the raspiness of it- … this is the reason that so many of these, virtually every famous singer- … no matter what genre of music, jazz, soul, rock, they all smoked.
Lex Fridman
(01:17:37)
Yeah.
Rick Beato
(01:17:37)
Nat King Cole.
Lex Fridman
(01:17:38)
Miles Davis too?
Rick Beato
(01:17:40)
Miles smoked- everybody smoked. Miles did… Well, Miles was a heroin addict too. I mean-
Lex Fridman
(01:17:44)
Yeah, yeah.
Rick Beato
(01:17:45)
… so many jazz musicians.
Lex Fridman
(01:17:46)
Well, Miles had a sound to him. You’re right. I mean, smoking must play a gigantic role in that, adding some complexity to the voice.
Rick Beato
(01:17:56)
Yes.
Lex Fridman
(01:17:56)
Yeah, some richness to the voice.
Rick Beato
(01:17:58)
Nat King Cole, he smoked, I think, four packs a day. He died of lung cancer. Lotta heavy smokers among singers, though. Frank Sinatra, heavy smoker. McCartney was a heavy smoker. Lennon, all those guys smoked.
Lex Fridman
(01:18:13)
Yeah, it’s hard to know, chicken or the egg. But I certainly wouldn’t recommend doing drugs as a way to get better at music.
Rick Beato
(01:18:20)
No, no.
Lex Fridman
(01:18:24)
But, you know, it does seem to go hand-in-hand. And some of it has to do with the period, with the time period, with the place, ’cause sometimes it’s part of the culture. The drug is, like you’re saying, smoking. If you were smoking now, that’s gonna be a very different experience than smoking 10 years ago, 20 years ago, 50 years ago. There’s a different vibe. So, sometimes the drug is a deeply integrated part of the culture versus just a chemical substance. The ’60s, right? I don’t know. They were on everything in the ’60s.
Rick Beato
(01:18:56)
Yeah… I mean, it has to account for something, Lex, you know?

Elton John

Lex Fridman
(01:19:04)
On the songwriting front, you mentioned a story about Elton John recording. So he’s one of the legendary songwriters. But yeah. You’ve met him, and you know something about the process of his, um-
Rick Beato
(01:19:17)
Yeah, ’cause he was recording in a studio in Atlanta where I was working with a band that I was producing. I was in Studio B, he was in Studio A. And this band that I was working with, they were called Jump, Little Children. And so, he had his assistant come in and ask, “Hey, is this… Are you guys Jump, Little Children?” “Yeah, yeah, yeah.” And then all of a sudden, I couldn’t see out into the live room. Elton walked into the thing, and we were getting ready to track, and I’m pressing the button. “Yo, where are you guys? What’s up? I thought we were gonna start this.” And no one’s responding. I can hear talking. It’s like, “What is going on? Where are they?” Then all of a sudden they come back in the studio, and they were stunned.
Rick Beato
(01:19:52)
I said, “Where were you guys?” “Elton John just walked into our session. And he said he’s a big fan. He said to come over when we’re done and hang out in Studio A.” So we did, and he was there with Bernie Taupin, and they were working on a song. And we talked there for an hour, and he was talking about recording two records a year, and then they’d go on tour, and they’d write and record the whole record in two weeks. So Bernie would give him lyrics. Elton would go out and spend 15 minutes writing all the melody. He’d look at his lyrics, and he was doing that that day. Bernie was there, and they had a lyric sheet up on the piano. And Elton would go on, and they’d just re-… “Okay, just record this.” And Elton would sit there and play and come up with the song- … in 15 minutes or so.
Lex Fridman
(01:20:38)
Yeah, that’s crazy.
Rick Beato
(01:20:38)
There’s a great version of, I think, Tiny Dancer, where Elton is coming up with it, it’s on YouTube. And he’s just coming up with the music right there. And then the band, “Okay, here’s how it goes.” And they record it right then, then move onto the next song. I’ve seen this. I mean, it’s really incredible. In the clip, Elton says, “There’s one there that I’ve sort of done the other day with Tiny Dancer, which is about Bernie’s girlfriend. So I just sort of ran it through and then put two verses together, then a middle, like, then a chorus, and then back to the sort of verse sort of thing. It happens very quickly. It sounds long, but it sort of starts off-” << Blue jean baby, L.A. lady, seamstress for the band. Pretty eyed, pirate smile, you’ll marry a music man >>
Lex Fridman
(01:21:35)
Okay.
Rick Beato
(01:21:36)
I mean, it’s really amazing that he just-
Lex Fridman
(01:21:38)
Yeah. He’s looking at just the lyrics.
Rick Beato
(01:21:39)
Yeah, and he’s one of the very few people that has the lyrics first and writes the music to it, which to me is far more difficult. 99% of songwriters write the music first, and then they put the melody and lyrics to the finished backing track.
Lex Fridman
(01:21:58)
And maybe they write, like, nonsense words kind of- … thing, and then they figure out the lyrics from there. Yeah, that’s… I mean, I don’t know what skill that is exactly, but that’s incredible. I mean, in that process he makes it his own. Okay. You had an amazing interview with Kirk Hammett. I’m a huge Metallica fan.

Metallica

Rick Beato
(01:22:23)
Same here.
Lex Fridman
(01:22:25)
There is a lot of interesting stuff that came out of that conversation. One is the distinction between heavy metal and hard rock, which is very interesting. Of course, Metallica went through their own evolution. They had many periods. I mean, they’ve been around 40 years.
Rick Beato
(01:22:43)
Over 40 years, yeah. Crazy.
Lex Fridman
(01:22:45)
The other thing is the downpicking, which was interesting, which creates that really distinct sound.
Rick Beato
(01:22:52)
James and Kirk’s downpicking, I used to be able to do that. I just can’t do it anymore. It hurts my thumb to do it. Honestly, I thought a lot about it: why is it so painful? Why is it so hard? It’s from swiping with your thumb on phones. And I think it affects that basal joint there, and—I’m sorry—no, I’m serious.
Lex Fridman
(01:23:13)
I love your theories.
Rick Beato
(01:23:14)
Well, I think that that’s actually right, ’cause I’m thinking like, “Why does that hurt so much to do that? All the downstrokes and stuff.” It’s gotta be something. It’s like, yeah, it’s from swiping with the phone.
Lex Fridman
(01:23:23)
The other thing that came through is that he’s an improviser at heart. And that, I think, clashes with this kind of rigid structure that metal is. So there’s a real soulful, melodic aspect to him. And he gave a lot of props to James Hetfield for just being a great composer, being a great musician and writer of riffs, of rhythm.
Rick Beato
(01:23:45)
The improvisation part of it you don’t think of ’cause they’ve… ’cause you have the finished songs that you listen to. But those songs are born out of improvisations, of jams, of little fragments of ideas. And then they craft them into these masterpieces.
Lex Fridman
(01:24:04)
Also, you mentioned that… This is weird that I didn’t know, that Hendrix used different gauges of strings.
Rick Beato
(01:24:10)
Yeah, he was the one that talked about that, wasn’t he?
Lex Fridman
(01:24:12)
Mm-hmm, yeah, mm-hmm.
Rick Beato
(01:24:13)
Yeah, that was really interesting. See, these are the things that I like to learn from these interviews with these people. I was like, “What? Why have I never heard of that?”
Lex Fridman
(01:24:24)
It’s one of the ways you can find a unique sound, by trying things that are not… I mean, I guess Apple was really good at this, right? Completely breaking out of what you’re supposed to do, the ways you’re supposed to do it, and doing it completely differently. You often ask musicians what their perfect song is. First of all, that’s an interesting question.
Rick Beato
(01:24:45)
What is a perfect song?
Lex Fridman
(01:24:47)
Like, one surprise is, Hans Zimmer said God Only Knows by the Beach Boys.
Rick Beato
(01:24:52)
I was surprised by that too, but I thought it was like, “Yeah, okay, that’s a perfect song for sure.” The first interview I ever did was with Peter Frampton in 2018, and I asked him in that interview, “What’s the perfect song?” And he said, “Whiter Shade of Pale.” And I was like, “Ooh, that’s a great song.” And then I thought, “I’m gonna ask that to people, just to see what they…” Now people are prepared if I ask that.
Lex Fridman
(01:25:14)
But it’s like, they’re willing to go out on a limb and say it. Like, if you ask me, I don’t even know. I guess you just say it, whatever, right? Like, what would I even say? What’s a perfect song? Yeah, I would go… See, I feel the pressure.
Rick Beato
(01:25:29)
Right?
Lex Fridman
(01:25:30)
Because the problem is, the reality is, it changes day by day, like minute by minute. I… Yeah, I would probably, I’m sorry, but I would have to go Mark Knopfler. And I would probably go… Is it really cheesy to say the obvious thing? I would go Sultans of Swing. Even though like I’m tempted to say Europa, but then like…
Rick Beato
(01:25:58)
Sultans of Swing hits on so many levels- … ’cause it’s got a great melody, great lyrics, and then multiple great guitar solos. And has such a unique sound to it. The other thing is that it sounds very different from other Dire Straits songs. I mean, this is like early- … Dire Straits Strat tone. And then you think of like Money for Nothing is a Les Paul, and it’s a totally different kind of vibe than him playing it on Sultans of Swing. But that song’s amazing.
Lex Fridman
(01:26:26)
Plus it’s about music.
Rick Beato
(01:26:29)
Yes.

Tom Waits

Lex Fridman
(01:26:30)
So it’s like there’s a meta aspect to it. But then there’s also, we’re talking about all this guitar stuff, but Leonard Cohen, Hallelujah. I mean, Leonard Cohen in general. These songwriters go super simple on guitar. What’s that called? Singer-songwriter type. I told you offline, maybe my dream music guest is Tom Waits. I’ve wanted to talk to Tom Waits for a very long time, and I’ve gone through different periods of… You’ve met me at a point in my life where I’ve given up on it a little bit. And I was trying-
Rick Beato
(01:27:11)
That’s when it’s gonna happen. That’s-
Lex Fridman
(01:27:13)
Okay.
Rick Beato
(01:27:13)
Once you give up on it, it’s gonna happen.
Lex Fridman
(01:27:16)
Yeah. Yeah.
Rick Beato
(01:27:19)
Why Tom Waits won’t be on your podcast.
Lex Fridman
(01:27:23)
Exactly. Exactly, dude. This is, this is my, this is my moment.
Rick Beato
(01:27:27)
Tom, come, come here. Let’s do it. I wanna see it.
Lex Fridman
(01:27:31)
I’m such a fan of the Zappa-like artistry on the musical front, which Tom Waits has, but I’m a sucker for great lyrics. Lyrics to me are such a big part of great songs. And he’s another example. He has a song called Martha. It’s about a love story that didn’t work out: an older man calling the woman he was in love with, basically reminiscing, thinking, “What would’ve happened if it had worked out?” That kind of thing. And, you know, I loved that song for a long time, and at some point I found out that he wrote it when he was in his early 20s. And you realize, it’s similar with the Beatles- … These guys somehow were able to capture the human condition so masterfully, and they’re kids.
Lex Fridman
(01:28:26)
This, I don’t get it. I don’t understand it.
Rick Beato
(01:28:29)
I can’t speak for Tom Waits, but in the Beatles case, they went to Hamburg, they spent time on their own, they played cover gigs that were eight hours long, and they lived-
Lex Fridman
(01:28:40)
Yeah, they’ve lived-
Rick Beato
(01:28:40)
… they lived life. It’s not like, not like kids today.
Lex Fridman
(01:28:46)
Now you’re on a porch. You also had an amazing interview with Billy Corgan, of Smashing Pumpkins. He is definitely one of my favorite musicians.
Rick Beato
(01:28:59)
I love Billy.
Lex Fridman
(01:28:59)
You asked him an interesting question about how he creates this melancholy feeling that permeates a lot of his songs, and he jokingly said that the secret is all about the seventh and the ninth. So like, musically, chord-wise, what do you think about that? You think he’s onto something?
Rick Beato
(01:29:18)
He’s talking a little music theory there: the seventh and ninth over the chord that he’s playing. So if you’re playing a C chord and he’s singing a B, that would be the seventh; D would be the ninth. And he does use a lot of those notes. But almost all these people that we’re talking… No, all these people that we’re talking about use these notes, and this is why their songs… When I interviewed Sting, I called them surprise tones, and Sting’s like, “I like the way you use the word surprise.” Notes that are outside the chord, that are dissonant with the chords they’re playing, and that creates emotion. Dissonance equals emotion. And that’s what I like. I want music to… to depress me.
Lex Fridman
(01:29:59)
Yeah. What is that? I don’t know. But melancholy, and I think you articulated it well, is not actually that depressing. There’s something about that melancholy feeling that is somehow the other side of the coin of happiness. It’s a kind of longing.
Lex Fridman
(01:30:16)
Or there’s a hopefulness to it. That aloneness that you feel. That’s actually one of the intimate connections you have with music, when you’re alone. There’s nothing like being alone in a car, driving, listening to, like, whatever it is, Bruce Springsteen. I think Louis CK has a bit about that. Was it Bruce Springsteen? He sometimes has to pull over to the side of the road and just weep, or something like this. There’s something about that. Sometimes a song just connects with you. And I don’t know, nothing like a melancholy song can do that. It…
Lex Fridman
(01:30:55)
You think about, like, maybe things you regret or how life could’ve worked out. And sometimes it’s not even about, like… It’s not even real. It just connects something in the soul. The uneasiness that we all feel. Maybe the loneliness we all feel that underpins so much of the human condition, and it just connects with that. I don’t know what that is.
Rick Beato
(01:31:16)
There’s a Kurt Cobain lyric. It was on the In Utero record, from the song Frances Farmer. The chorus part is, “I miss the comfort in being sad,” and I was like, “Yes.” I was like, “Yeah, that’s it right there.”
Lex Fridman
(01:31:35)
In terms of love songs, I somehow find powerful that kind of desperation. So like I’ve always connected with Pearl Jam’s Black.
Rick Beato
(01:31:44)
Oh, amazing.
Lex Fridman
(01:31:46)
That line is… A friend of mine was going through a breakup, so I was listening to it. He’s the one that introduced me to Pearl Jam during that whole period when Pearl Jam was huge with Ten. That line is “Someday-”
Rick Beato
(01:32:02)
“Someday you’ll have a beautiful life. You know, someday you’ll be a star in somebody else’s sky. Why, why, why can’t it be, can’t it be mine?” Oh my God, that- … blows me away. That’s an amazing line.
Lex Fridman
(01:32:19)
Well, yeah, I mean-
Rick Beato
(01:32:19)
The delivery is incredible on it too.
Lex Fridman
(01:32:22)
Yeah. Eddie Vedder, one of the great frontmen of all time. And that whole period, that whole moment in history of Kurt Cobain and Eddie Vedder that captured… That was the ’90s. That was one side of the ’90s that just… This singular moment in history. Who do you think are the great frontmen in the history of music?

Greatest rock stars

Rick Beato
(01:32:44)
Freddie Mercury, Robert Plant.
Lex Fridman
(01:32:47)
Freddie Mercury number one, probably.
Rick Beato
(01:32:48)
Steven Tyler.
Lex Fridman
(01:32:50)
Jim Morrison.
Rick Beato
(01:32:51)
Jim Morrison? Yeah. Roger Daltrey.
Lex Fridman
(01:32:56)
Well, we have to say, I have to say, James Hetfield.
Rick Beato
(01:33:00)
James Hetfield?
Lex Fridman
(01:33:01)
I mean, there’s nothing… I have to talk to you about this. I mean, it’s just the greatest, I think the greatest concert of all time: their historic performance in Moscow in September of ’91, shortly before the Soviet Union collapsed. Plus, we should mention AC/DC and Pantera-
Lex Fridman
(01:33:23)
… were there too. And about 1.6 million people were there. Now, by the way, there’s some reporting that there were half a million people, 500,000; I’ve seen statements like that somewhere. That’s a ridiculously inaccurate number. It was a free concert, so there are no official counts. It’s definitely over a million, very likely 1.5, 1.6 million people. And this moment in history, I think they channeled it. It’s like whenever great music… Metallica was firing on all cylinders at the very top of their game, and they met this moment in history and this place in history.
Lex Fridman
(01:34:04)
There was a defining part of the 20th century collapsing, and you have these people who, for a moment, through music, are able to escape the fear, the anger they feel, all of it. There was a political, social, cultural moment meeting the musical moment. And the set list, I was just… I listened to it several times over the past few days, just taking myself back into that moment in time. Listen to this set list: Enter Sandman, Creeping Death, Harvester of Sorrow, Fade to Black, Sad but True, Master of Puppets, Seek and Destroy, For Whom the Bell Tolls, One, and Whiplash. Look at that. How is that-
Rick Beato
(01:34:50)
That’s-
Lex Fridman
(01:34:50)
That just-
Rick Beato
(01:34:52)
That’s my kind of set
Lex Fridman
(01:34:52)
… get the fuck out of here.
Rick Beato
(01:34:53)
That’s-
Lex Fridman
(01:34:53)
This is amazing. This is-
Rick Beato
(01:34:54)
That’s my kind of set right there.
Lex Fridman
(01:34:57)
I don’t know if you can think of anything that could beat that.
Rick Beato
(01:34:59)
I think that the guys in the band would say that, too. That was… I mean, they were really at their peak. The Black Album had just come out then, and that must have been so, so exciting.
Lex Fridman
(01:35:12)
I mean, Woodstock was big. There are certain moments in time that really meet the moment. Are you a fan of big live shows?
Rick Beato
(01:35:21)
I used to be, but at this point- … I can’t, you know… I’d much rather see people play in small clubs- … or go to the… I’d like to listen in the studio. Go to the studio, even.
Lex Fridman
(01:35:37)
I generally almost entirely agree with you. I just think that there are these historic moments, but you don’t know- … which are gonna be which. But you make the concert free, it’s just all of it, plus you get Pantera and AC/DC. The other one, which actually is a legitimate thing you mentioned as one of the greatest concerts of all time: the world premiere of Beethoven’s Ninth Symphony. You know, I didn’t really know the personal side of Beethoven until I saw this movie called Immortal Beloved. It’s an excellent movie with-

Beethoven

Rick Beato
(01:36:12)
Gary Oldman
Lex Fridman
(01:36:13)
… Gary Oldman. Just a really… It’s a masterful celebration of Beethoven, in an interesting kind of way, through the perspective of a love letter he had written. But then I realized, and this is early, this is many, many… this is a couple decades ago now, that he went deaf before he even started writing the Ninth Symphony, which is why they consider it one of the greatest compositions of all time, one of the greatest symphonies of all time. He went deaf, couldn’t hear anything, before he even started writing it. And so there’s that famous story of him at the world premiere having to be turned around because he couldn’t hear the applause; he had to be turned around to see that people were actually clapping. I mean, there’s just this whole tragic element.
Lex Fridman
(01:36:59)
Plus, the meaning of the symphony, ending in this beautiful Ode to Joy. The symphony itself is a kind of… It starts with chaos and conflict and ends with this celebration of peace and brotherly unity, a call for that, a reaching for that peace. And there’s a tragic element to it, again connected to history, which is that it was post-Napoleonic Wars-
Lex Fridman
(01:37:33)
… and before the American Civil War. So you’re in this middle… this respite from war, calling for peace, not knowing that truly horrific wars are coming: the American Civil War and, of course, the two World Wars. So all of it together, and the fact that he conducted deaf, and wrote this whole thing deaf. I was reading a lot about his process; he just edits and edits and edits and edits. The fact that he had to edit in his head is just insane.
Rick Beato
(01:38:07)
I mean, it… Beethoven was sick all the time too. I mean, a lot of people were sick all the time. It was very common. What would motivate you to write music, this beautiful music that you can never actually hear except for in your head?
Rick Beato
(01:38:25)
Right? Like, why… The amount of time it takes to write a 35-minute, 40-minute piece, all the parts, you’ve got to hear all the orchestration in your head. You’re editing, you’re doing all these things. Where do you get the motivation when you can’t hear the actual finished work? And people would say, “Well, he hears it in his head.” But what kind of enjoyment is that? You want to hear the orchestra… I mean, it’s really profound that he was inspired to do this. There’s a thing called the Heiligenstadt Testament that he wrote. It was a letter to his brothers from 1802. I think they found it in his desk after Beethoven died, and he felt a sense of shame and humiliation because of his hearing loss.
Rick Beato
(01:39:17)
And he said that he was afflicted with this thing where he, of all people… Someone standing next to him could hear a flute that he could not hear, or a shepherd singing in the field, and he could not hear it. And of all people, why him, for whom hearing played such an important part? He’s another person that would have had to have perfect pitch, ’cause you could never do this- … if you didn’t have perfect pitch, which I think all of these great composers had, for the most part. Brahms didn’t, from what I know, but all the rest of them, for sure, had perfect pitch. So they could hear these things in their head, and that’s how they composed.
Lex Fridman
(01:39:57)
I mean, you love sound and music. What do you think it was like gradually losing your hearing for Beethoven?
Rick Beato
(01:40:08)
It must have been terrible. I mean, I just… Terrible. I mean, I’ve heard things where he would have a stick in his mouth and put it on the soundboard of the piano, and you could feel the vibrations in his skull, and things like that.
Lex Fridman
(01:40:26)
Yeah, desperately trying to-
Rick Beato
(01:40:27)
Yeah. I just-
Lex Fridman
(01:40:30)
But also, what is that, that he’s able to write one of the greatest symphonies ever while deaf? So there’s something about that. We mentioned darkness, the torment he’s going through. And ultimately, Ode to Joy. Not a cynical thing- … but a call for the positive.
Rick Beato
(01:40:55)
Yeah. Yeah. That’s, that’s… I’ve devoted many, many hours thinking about that.
Lex Fridman
(01:41:04)
And plus, Napoleon broke his heart, because he was a supporter of Napoleon- … because Napoleon was supposed to represent the French Revolution, this hopeful future of no more kings, no more monarchs, no more authoritarian regimes. And Napoleon ended up becoming, essentially, a king. Becoming an authoritarian. Beethoven famously was critical of that, but I think he maintained a fascination with Napoleon throughout his life. A kind of more sophisticated, complex view of human nature and human civilization. Becoming more cynical, seeing more clearly that the world disappoints you, that dreams get shattered. And through that, he’s still able to make this call for a hopeful future. All right, so okay. Beethoven, one of the greats, for sure.
Lex Fridman
(01:42:01)
Like basically everybody, I know how to play the first movement of Moonlight Sonata, but I always avoided the third movement ’cause I was like, “I’ll never be good enough.” Never, never, but I need to-
Rick Beato
(01:42:13)
Never say never, Lex.
Lex Fridman
(01:42:15)
One of these days, maybe. You know what would be great? If Tom Waits writes me an email that says, “I only talk to people that can play-” “… the third movement.”
Rick Beato
(01:42:23)
Play the third movement.
Lex Fridman
(01:42:24)
That’d be a dream come true. I’d be like, “For this-“
Rick Beato
(01:42:28)
That’s motivation.
Lex Fridman
(01:42:29)
“That’s my dragon,” or whatever. You have to slay the dragon and rescue the princess. My dragon is the third movement of Moonlight Sonata. Okay. You often highlight the importance of Bach. In fact, so many of your guests…

Bach

Rick Beato
(01:42:42)
Every famous songwriter is influenced by Bach. They are. The greatest composer of all time, the greatest musician of all time.
Lex Fridman
(01:42:50)
Even Sting and Dominic Miller said they go to Bach even for, like, practice.
Rick Beato
(01:42:54)
Every day. People talk about how Bach was not known outside the places he lived: Eisenach, where he was born; Leipzig, where he spent many years. But Bach was known to great musicians. It was difficult to find manuscripts, but there was a premiere of the Saint Matthew Passion that Mendelssohn did in 1829, on March 11th, I believe. He had a manuscript because his father and mother collected manuscripts.
Rick Beato
(01:43:30)
And he got a manuscript of this piece, and I think he was 20 years old, and they had a performance of it in Berlin… And Beethoven, Mozart, they studied the Well-Tempered Clavier, the two books of the Well-Tempered Clavier. But Bach wrote profoundly beautiful music, and some of the most complex contrapuntal music; I don’t think anyone has ever done anything like it. Extremely bright guy. Had 20 kids; only 10 survived to adulthood. Lost both his parents when he was nine, within nine months of each other, and went to live with an older brother.
Lex Fridman
(01:44:11)
And extremely productive, also. I think from all the music teachers I’ve ever had, I understood the importance of studying Bach.
Rick Beato
(01:44:23)
He didn’t write Master of Puppets, but he wrote some great powerful-
Lex Fridman
(01:44:27)
Well put.
Rick Beato
(01:44:27)
… music.
Lex Fridman
(01:44:28)
Well put. I tried to educate the aforementioned music teachers about the brilliance of Master of Puppets. Sometimes a good riff is greater than any musical composition. So-
Rick Beato
(01:44:44)
I agree. I go back and I play Master of Puppets every time I’m trying out a new amplifier. That’s my go-to.
Lex Fridman
(01:44:52)
That’s your go-to? So, like, the stereotypical guitar store when you come in, you’re playing Master of Puppets?
Rick Beato
(01:45:01)
I’ll play Master of Puppets. I have to play some heavy riff- … so usually it’ll default to some Metallica or something like that. Or I’ll play Alice in Chains, or a lot of times I’ll go and do some Drop D thing, or play Tool. I usually do some drop-tuning thing. And it’s always gotta be some type of metal, to test whether the bottom end’s tight on the amp and stuff. So, yes.

AI in music

Lex Fridman
(01:45:28)
All right. We have to talk about this a little bit. You made a bunch of videos about it. There was a moment in time, it still goes on, but there was a moment where people were really freaking out about the use of AI in music. So there are these, I would say, incredible apps: Suno, Udio. ElevenLabs Music is also great. They can generate basically text to song, a full song from a text prompt. And a lot of people started freaking out just based on how good it is.
Lex Fridman
(01:46:02)
And so you start to immediately imagine how this is going to transform music, replace musicians, all that kind of stuff. It is legitimately nerve-wracking, because these are early versions, so you don’t know where it goes. But what’s your intuition now? You’ve been thinking about this, you made a bunch of videos. Now, being able to reflect: “Okay, everybody chill. Calm down.”
Rick Beato
(01:46:23)
So if you write a prompt in Suno, it spits out a song, which I’ve done; I’ve made a bunch of videos on this. I made up a fake artist, Eli Mercer, in one video. Then I did a thing for CBS News; I made up this fake artist, Sadie Winters, and came up with this song, “Walking Away.” Well, the computer, the program, came up with it.
Lex Fridman
(01:46:42)
There is some creativity in the process. So in this particular thing, the process is you generate an image.
Rick Beato
(01:46:48)
I did it in ChatGPT, the image.
Lex Fridman
(01:46:50)
The image?
Rick Beato
(01:46:50)
Then I went to Claude and I wrote the lyrics, ’cause Claude’s way better at lyrics- … than Suno is. Suno’s bad at lyrics, at least right now. So I created the lyrics in Claude and then imported them into Suno, and I had great results with the songs it came up with. I always have to qualify that. But then I started thinking about this. People freak out about this: “Oh, this is bad, this is bad.” And then I thought, “No, who are the ones that are gonna benefit from AI?” Well, the people that are already great songwriters, because you have to be able to recognize when it spits out something good versus when it spits out something that’s not that good.
Rick Beato
(01:47:30)
And I’ve probably created 130 song ideas, out of which there are three good ones.
Lex Fridman
(01:47:38)
And there’s a thing happening where people’s ears are very quickly becoming attuned to AI slop. And that’s actually quite fascinating. For example, there’s this viral clip going around of an AI-based soul-jazz remix of songs like 50 Cent’s “Many Men,” and I think it is super impressive. And there’s a different pipeline there, actually; it’s a tricky pipeline to pull that off, and I think a lot of the creativity, even in that kind of remixing, is in the pipeline of how you actually do it, because there’s a lot of manual work in that pipeline. But ironically, I think it’s very cool at first, but when you listen to it for a while you understand that this is AI slop.
Lex Fridman
(01:48:28)
For a soul remix, it actually lacks soul. But it made me think of, like, when I listen to soul or blues, I think I really want, in that case, to know… I don’t want an AI B.B. King, I want the real B.B. King. And if I know if any AI is involved in the B.B. King process, I’m tuning out. And I don’t think I’m being a curmudgeonly old dude in that. I think we humans want authenticity.
Rick Beato
(01:49:04)
So when I first started making these AI videos, back in 2023, I made my first one, and I would take my phone, come up to the kitchen, and play a song. And my youngest, Layla (I have three kids), and my oldest, Dylan, as soon as I played it: “Why are you listening to AI?” And it’s like, oh my God, instantly. How do you know? “Oh, it has this ringing sound in it.” So it took me probably four or five days to figure out, “Okay, what are they hearing that I’m not hearing?” So I separated all the parts, and what they were hearing was the artifacts in the vocal reverb. That sound that was… that made it incomplete-
Rick Beato
(01:49:51)
It just couldn’t do the ambiences correctly, right? Because a lot of these AI programs are trained on very low-bit-rate MP3s, right? They feed all this stuff in there, so they’re getting really inferior information in the training process. Whereas now, when they make these deals with the major labels, they’ll get the multitracks, and they’ll get high-quality WAV files to train from, right? And whoever opts in, they get the solo vocal tracks. You know, if Ed Sheeran wants to do it, or Drake, or whoever wants to give their voice to it, let it do its thing, and then get the royalties from it.
Rick Beato
(01:50:27)
I’m not saying that any of them are doing it; I’m just giving an example. But every time I would do it, I could be down the hall, and I would play something on my phone just to see if they’d… “Why are you listening to AI?” They can instantly tell. Then it eventually started getting better. And then it’d be like, “Is this AI?” I’d be in the car with Layla coming back from taekwondo practice, and she’s like, “Is this AI? Why does it sound like AI? Sounds like it could be AI.” And I’d be like, “Yeah, it’s AI.” She’s like, “Oh, it’s getting better.”
Rick Beato
(01:50:59)
And then I did this song for an NPR interview; I created a song with a fake artist. The song was called “Neon Ghosts,” and I played it for Layla in the car. She’s like, “Can you separate the tracks?” I said, “Yeah, I have them separated back home.” “Okay, I want to go down to hear it.” So we go down to the studio, and I play it for her, and she listens to the soloed vocal. She said, “Wow, this is really realistic. This is very hard to tell, even with the soloed vocal.”
Lex Fridman
(01:51:27)
I think the room for creativity right now for humans is lyrics. It seems like the lyrics that are being generated lack soul somehow. And that’s… I don’t know how to word it exactly. I mean, they can be incredibly sophisticated, but the edge is not there. Some kind of edge that we want in our lyrics. Some kind of surprise that’s not cringe, not cliché. Something truly novel in the lyrics. But if that’s the case, it’s kind of sad that that’s where the creativity has to come from, and not from the music. Because if we can create very realistic music that sounds really damn good, where’s the role of the musician?
Rick Beato
(01:52:21)
I think the role of the musician is that… if they use AI to assist them in coming up with ideas, they can use it as a creation tool. Some of the stuff is just not high quality sonically, so the musician goes in and redoes stuff and changes things and adds parts, and then they actually do music production. Maybe they re-sing the parts and change the stuff. Then it’s basically just an idea generator, and I think that’s a great use of AI.
Lex Fridman
(01:52:54)
But see, if you do that, does it make you sad that you don’t necessarily need to learn instruments? Basically, you can… I mean, you can think of it as a different kind of instrument, but you can write lyrics, you can hum the melody, you can just hum parts, you know? And then do an A/B kind of thing, this rhythm versus that, and stitch them together. And never actually have your fingers on a guitar or on a drumstick.
Rick Beato
(01:53:25)
That’s why I’m not gonna use AI, Lex, is for that reason, because to me, it’s just boring. And I-
Lex Fridman
(01:53:33)
Yeah, it is.
Rick Beato
(01:53:33)
… when I use it, it’s just like, “Eh.” But I used it for about a month or so, just because I was making videos, and I was trying to see how it’s advancing. Every three or four months, I’ll sit down and see whatever new versions they have. And I’ll prompt some songs and see what they come up with, and see if they’re improving. But ultimately, I don’t find it interesting to use.
Lex Fridman
(01:53:58)
I hear you. You’re a bit old school.
Rick Beato
(01:54:02)
I’m old school.
Lex Fridman
(01:54:02)
As am I. I’m trying to think about the future, and I think it’s still, even in the future, also going to be boring. I think there’s something-
Rick Beato
(01:54:10)
I agree.
Lex Fridman
(01:54:10)
… fundamentally boring about it, and I’ve been trying to figure it out. For example, I use it more and more for programming, for building stuff. And there, the final output is not the code; the output is what the code creates. And there, it’s extremely useful. It doesn’t matter if it’s boring or not, it’s useful. But when the final output is the thing that AI creates, which it would be in music, then there’s something about us that just… We know. There is something boring about it.
Lex Fridman
(01:54:46)
We want to celebrate and see the thing that’s hard to create. And if AI can just text a song, “Generate a top 10 hit,” we’ll quickly lose value for that, I think. And so, we’ll want raw. Whatever shape that raw takes, I want to say raw talent, but that raw talent of any kind. And perhaps… It would make me a little bit sad, but that’s also awesome. Perhaps the new kind of raw talent that civilization is asking for is how to make great TikToks. Maybe that’s what raw talent looks like. It makes me a little bit sad because I’m a huge fan of long-form. But that also… Creating TikToks is also talent.
Rick Beato
(01:55:38)
It is a talent, absolutely. When I see anything that’s AI generated, I instantly recognize it. Any video, I’m like, “Ugh, boring, boring, boring.” And my kids do the same thing. They just have no interest in engaging with it. As soon as they recognize it, and they can spot it a mile away— …they’re just like, “Boring, boring, boring, boring, boring.” And then they don’t even wanna engage with the social media platforms, which is a danger. I think they need to crack down on the AI slop. YouTube’s done a pretty good job of it, but it’s hard to stay on top of it. It gets flooded with so much of this stuff; it’s so easy to create and put up there. And to be in this whack-a-mole thing where you’re just trying to get rid of it all is a—
Lex Fridman
(01:56:27)
Yeah, it’s fundamentally boring. I think boring is a really good—
Rick Beato
(01:56:31)
Yes, boring.
Lex Fridman
(01:56:32)
And it’s annoying to have to flip through the AI slop. But I think actually, as a civilization, it’s just inspiring authenticity, ’cause you wanna be real. And being raw, which is one of the things I like about podcasts: people just shooting the shit and just being themselves in the long form, versus overproduced. ’Cause I think AI is making people realize that AI is good at being overproduced. So there’ll be more raw.
Rick Beato
(01:57:00)
Let’s get that covered.
Lex Fridman
(01:57:02)
Yeah. Even artists, ’cause you’re saying, like, yeah, they’ll use it as tools. Part of me thinks not really. I think this kind of process of generating a bunch of different options and choosing the one you like the most is a really frustrating process for artists. And I think AI will definitely be used extremely effectively as a very fine-grained tool in the image domain, editing images. Not macro editing, but the very specific kind of editing that Photoshop is increasingly integrating. I mentioned to you offline the whole iZotope RX group of software that does a lot of the denoising, all the “de-” tools, removing the wind, all the—they integrate machine learning extremely effectively—
Lex Fridman
(01:58:00)
… for working with audio in different kinds of ways. There’s a bunch of other programs that do that. Maybe for, like, B-roll footage, and same thing on the audio: if you just need a little audio to create the feeling of a scene, AI might be used there in that kinda way. But truly original stuff, eh.
Rick Beato
(01:58:20)
I’ve saved videos where I’m speaking over music, for example, in an interview. Somebody’s playing, and we have two people speaking in lavs, but there’s so much bleed coming from the person playing that you can’t hear what we’re saying. And then we’ll split out the voices for that section, the two voices, separate them, and then take the music and separate that out. So it’s really helpful for things like that.
Lex Fridman
(01:58:46)
And now, once again, a quick 30-second thank you to our sponsors. Check them out in the description. It really is the best way to support this podcast. Go to lexfridman.com/sponsors. We got Uplift Desk for my favorite office desks, BetterHelp for mental health, LMNT for electrolytes, Fin for customer service AI agents, Shopify for selling stuff online, and Perplexity for curiosity-driven knowledge exploration. Choose wisely, my friends. And now, back to my conversation with Rick Beato. So you have this video breaking down Sabrina Carpenter’s song Manchild. And you use that as an example of building up people’s intuition about the music business and how music production for these popular songs is done these days.

Sabrina Carpenter

Lex Fridman
(01:59:38)
Who’s doing the songwriting, how’s it being done, and all that kind of stuff. I was wondering if you could speak to that.
Rick Beato
(01:59:45)
In that particular song, Jack Antonoff, who is one of the writers along with Amy Allen and Sabrina Carpenter, said in some awards thing that there’s an old guy on YouTube who says that Sabrina had very little to do with the song. And so he said in this clip-
Lex Fridman
(02:00:00)
You being the old guy.
Rick Beato
(02:00:01)
Me being the old guy. And he said, well, Sabrina really was the—she’s amazing and she’s the one that wrote everything in the song. So my response is like, “Well, why are you guys even included on the songwriting then?”
Lex Fridman
(02:00:14)
So one of the things you highlight is a lot of people are included on the list of songwriters.
Rick Beato
(02:00:21)
Yeah, 10 people… 11 people. I mean, you know. Like, why does Song of the Year have songs that are interpolations, meaning that they take melodies from other songs? They used to call it stealing. And then you have songs that use samples for the whole thing, like the Doechii song that’s out right now. And I said, “Look, she took a Gotye song and basically took off his melody and created her own melody over it.” It’s like, well, I mean, it saves time for her. You don’t have to actually create a track; you can just sing over someone else’s song that was already successful.
Lex Fridman
(02:00:59)
Yeah, you pointing that out, the song Anxiety—it broke my brain.
Rick Beato
(02:01:03)
I mean, it’s so absurd.
Lex Fridman
(02:01:05)
It, yeah, it just feels unfair. It feels—it’s a good song, but it was also a good song before, and before that, it was also a good song.
Rick Beato
(02:01:14)
Right, 2011, or Luiz Bonfá in 1967. So why is that considered to be in the top songs of the year? It’s like, come on, you can’t find another song that’s not based on that? That’s ridiculous. And Doechii has some really good songs- … on her record.
Lex Fridman
(02:01:36)
Yeah, but why are these the ones that are coming to the top, right?
Rick Beato
(02:01:38)
Well, you know.
Lex Fridman
(02:01:39)
This is interesting. Hey, that might be just a criticism of the machinery of the business-
Rick Beato
(02:01:43)
Absolutely.
Lex Fridman
(02:01:44)
… that drives them. It’s not necessarily… like, a lot of these folks are really good musicians. First of all, I think a lot of them are also good; like, the actual songs they make at the top are good. I’m a big fan of Bruno Mars. He’s a great songwriter and a great musician all around.
Rick Beato
(02:02:04)
Absolutely.
Lex Fridman
(02:02:05)
You know, he is Michael Jackson reincarnated. I mean, he’s-
Rick Beato
(02:02:08)
Super, super talented guy.
Lex Fridman
(02:02:09)
Incredible, right? You mentioned Billie Eilish and her brother write a lot of the songs.
Rick Beato
(02:02:14)
So good. Yeah, super talented.
Lex Fridman
(02:02:17)
I mean, Taylor Swift is unlike anything. I mean, that’s a historic figure in music, but she’s fundamentally, at least originally, a singer-songwriter. So that’s… I mean, I’m sorry, but that is, like, the kind of music that Rick Beato gives props to. She’s the—she carries the flame forward.
Rick Beato
(02:02:41)
She works on her own songs, absolutely, and she never has more than two co-writers on things.
Lex Fridman
(02:02:47)
Wanna take a quick bathroom break? Okay. I have to ask you about this complexity that you’re facing on a basically daily basis. I think it’s a challenge a lot of YouTube folks experience, but you’re just so viscerally experiencing it, because a lot of what you do on your channel is celebrate music, broadly. And so, as part of that process, you have to sometimes show clips of music, and I think all of that falls under fair use, quite obviously. And so you get all these YouTube copyright claims, and for folks who don’t know, each one of those can become a strike on the channel, and if you get three strikes, it can take down your channel. And you get some insane amount. You said you got, like—I think I had a similar thing on my Rick Rubin episode—like, I think you said 13.
Lex Fridman
(02:03:39)
13. So, what, can you just speak to this whole thing? You’ve been in a constant battle, WMG, UMG, all the, all, all-
Rick Beato
(02:03:48)
All the, all the three-letter name-
Lex Fridman
(02:03:49)
All the s-
Rick Beato
(02:03:50)
… record labels, right?
Lex Fridman
(02:03:51)
The music business people, so, what’s the story there?
Rick Beato
(02:03:54)
Well, this has been going on since the beginning of my channel, and I’ve made videos periodically. When I first started, it was just instant blocks. You never knew back in… I started, it’ll be 10 years in June. So, when I’d play music in a video… YouTubers were not playing music in videos because of the Content ID things and the takedowns and stuff. So, I would play music and just see what happens, and then you’d get a Content ID claim, or you’d realize the people were, quote-unquote, “blockers,” and I came up with that term, because they would block your video, take down your video.
Rick Beato
(02:04:31)
And I realized at first it was, like, anything Guns N’ Roses, which is still the case: Guns N’ Roses, AC/DC, I mean, many bands, Fleetwood Mac, Led Zeppelin. And then something happened. There was a guy on a skateboard on TikTok that had the Ocean Spray thing, and he was listening to Dreams by Fleetwood Mac. And that blew up and became a number one song again. And the labels then realized… I mean, I’d made many videos about why this is wrong, and it should be fair use and everything. Well, because of that, the labels were like, “Ooh, maybe we should rethink this.” And then they just started demonetizing videos.
Lex Fridman
(02:05:16)
Demonetized means they get all the money that you make.
Rick Beato
(02:05:17)
They get all the money. In a one-hour video, if you use 20 seconds of a clip, they get all the money. Okay? So, I hired a lawyer finally, after the Rick Rubin video, ’cause I thought it was ridiculous. I go over to Tuscany, I interview Rick at his house, and I hired a lawyer to fight this, who I’m gonna have on my channel. I don’t wanna say who it is, but he’s another YouTuber. And he had approached me a couple years ago, and it’s not cheap to do.
Lex Fridman
(02:05:53)
Oh, you’re gonna do, like, a public interview with him?
Rick Beato
(02:05:55)
I’m gonna do an interview- … with him, yes.
Lex Fridman
(02:05:57)
Awesome. Okay.
Rick Beato
(02:05:57)
I talked to him today about it, actually.
Lex Fridman
(02:05:58)
I can’t wait. That’d be great.
Rick Beato
(02:06:00)
So he said, “You should fight these ’cause every single one of them is fair use.” And he went through my entire catalog. I have 2,100 videos, and he’s fought 4,000 Content ID claims and won every single one of them. 4,000. That’s a lot. I mean, when I do top 20 guitar solos, there’s 20 Content ID claims, you know? And it can be either from the sound recording, if I used that, or, if I just play it, from the publisher.
Lex Fridman
(02:06:36)
That’s amazing. So, I mean, that’s still a lawyer, still work. Is there a hopeful thing you can say about the future of-
Rick Beato
(02:06:49)
Yeah, fight these Content ID claims, if it’s fair use, if you’re not just playing the song and listening to it. ’Cause a lot of the stuff that’s reaction videos, or whatever, where they play the whole song… I mean, I’m using these things and I’m talking. A lot of the times it’s in interviews, or I’m breaking down a solo, and there’s a-
Lex Fridman
(02:07:08)
Yeah. See, that’s an-
Rick Beato
(02:07:09)
… you know
Lex Fridman
(02:07:09)
… obvious one, but even reaction videos, right? Where those-
Rick Beato
(02:07:12)
Yeah. Even reaction videos, yes, absolutely.
Lex Fridman
(02:07:14)
Those are more borderline. But I don’t know. I love those videos.
Rick Beato
(02:07:20)
Absolutely.
Lex Fridman
(02:07:21)
Like, when a person’s just sitting there and listening to it, and they’re like, you know, like a voice teacher is listening to a vocal performance, and like-
Rick Beato
(02:07:29)
Yeah, but those are breakdowns.
Lex Fridman
(02:07:31)
Yeah, those are breakdowns, yeah.
Rick Beato
(02:07:32)
I think that the Content ID stuff that was happening with these major labels: they would hire third parties that would go out, use AI, and anytime they detect anything, they go to the biggest channels first to get the most views. Makes sense and stuff. And they would claim everything that they could, and historically, YouTubers never would fight back. They were like, “Oh, this is easy money.” YouTubers never fight back at these things, because they’re afraid to have their channels taken down. So-
Lex Fridman
(02:08:03)
Right, you gotta say, “Hold my beer.”
Rick Beato
(02:08:05)
There you go.
Lex Fridman
(02:08:06)
So, I mean, it’s important. So, you-
Rick Beato
(02:08:07)
I mean, it took me years though, Lex. I didn’t… I’ve been doing this… So, I’ve been doing it for one year now, and I’m nine years—almost 10 years into my channel. So, it took me that long.

Spotify

Lex Fridman
(02:08:17)
I mean, hopefully, there’s a ripple effect also. It’s not just your situation. Hopefully, you don’t have to deal with this for much longer. How has Spotify changed music? Sometimes we highlight the fact that they changed the nature of music and that the scarcity is not there. But also, a lot of it’s like every kind of music is available and so fast and it’s so easy. It’s easy to explore.
Rick Beato
(02:08:43)
It’s a commodity. It’s like turning on a water faucet.
Lex Fridman
(02:08:47)
Do you think-
Rick Beato
(02:08:47)
Once you get going-
Lex Fridman
(02:08:48)
… that there’s some good to… I mean, there’s a lot of good to that, right? Well, have you… Did you go through that whole process? I still remember when I had to basically throw away the albums.
Rick Beato
(02:09:00)
I never did that. When? After you uploaded them into your computer?
Lex Fridman
(02:09:06)
Yeah. So, there’s that two-step process. One, there’s like the hard albums, CDs for me.
Rick Beato
(02:09:13)
CDs, yep.
Lex Fridman
(02:09:14)
And then you upload them into your computer. And you save them. And then, how do you put it? Allegedly, a friend of yours pirates some extra songs and puts them on the computer. So you have your stash on the computer. You’re like, “This is my finely selected stash of greatness.” Sometimes organized by album, sometimes not. And the big moment for me that was really difficult to do, really difficult to do, is throw away that stash and switch to Spotify.
Lex Fridman
(02:09:52)
Switch to streaming, and basically rebuild the stash as playlists and all this kind of stuff. And it was heartbreaking, ’cause so much love and effort went into that. Both the stashing of the CDs and the stashing of the MP3s on the computer. And then in Spotify, it all just seems effortless. But it helped me discover all kinds of artists I never would have discovered otherwise. And Pandora, I used a lot. Pandora prioritizes more the discovery part versus the organization part. And that was really wonderful.
Rick Beato
(02:10:30)
So, one of the things I… I’ll start with a positive that I like about Spotify: they show play counts. Whether they’re real or not, that’s another question. But they show how many plays songs have, and that’s what the charts are based on.
Lex Fridman
(02:10:45)
Does that give you signal that something is listened to a billion times? Does that mean something to you?
Rick Beato
(02:10:50)
Yeah. It means that it’s a popular song. Well, that’s a massive hit. There’s very few songs that have a billion plays. Now, the downside of Spotify is the way that they pay their artists. Now they’ve lumped in podcasts, which are getting a cut of this streaming, with the music. And, you know, the search and discovery. I mean, there’s benefits of algorithms and there’s negative things of algorithms. Algorithms happen, many times, to pigeonhole people into listening to the same genre of music all the time, and not expanding their discovery of new music. Whereas you might hear something on the radio back in the day, where program directors would play things that they liked, right?
Rick Beato
(02:11:44)
And you might hear something, “Oh, what is that?” “Oh, that’s a new Soundgarden record,” or you know, like, “Whoa, I like that. I’m gonna go check that out.” You know, something you might not have heard or something odd.
Lex Fridman
(02:11:55)
Like, one thing I really love doing on Spotify is you can… you can have radio. Meaning, like, you have a few… It’s similar to Pandora, like you can… Okay, this is gonna reveal a little too much about myself. But usually when I go work out, I’ll listen to something like Rage Against the Machine radio. I’m sorry, I need-
Rick Beato
(02:12:17)
What else would you listen to?
Lex Fridman
(02:12:18)
I need motivation. Classical music? I don’t know. But yeah, it’s pretty good ’cause it recommends a bunch of other stuff I wouldn’t even know. Some of it I know, obviously, but akin to the, similar to the Rage Against the Machine-y type thing.
Lex Fridman
(02:12:34)
It recommends a bunch of artists, and it’s like, “Oh, holy shit, that’s awesome.” So, I don’t know, that discovery works really well. So, some of it is the technology thing. But that experience was fundamentally more vibrant than what I had previously with my stash, where I would just keep a stash and listen to the same record over and over and over and over. But yeah, what’s lost is, I’m sure you love this, listening through the Led Zeppelin records, just driving in a car and listening to the whole thing all the way through. Yeah, that’s lost.
Rick Beato
(02:13:10)
So, I have my old iTunes libraries from 2005 that I’ve saved. The CDs that I uploaded into my computer. Anytime I play songs on my… When I’m doing an interview, I always play WAV files; I put them in. And it’s funny: when I interview a mixer… I interviewed this mixing engineer, Andy Wallace, and people comment, “Wow, the songs sounded amazing.” And you go, “Well, not only are they great mixes that he did, but I’m using WAV files in there.” And people notice. And these are WAV files from the original encoding. Not remastered things that Spotify keeps doing, adding a bunch more top end and things like that. These are the-
Lex Fridman
(02:14:00)
Oh, I see.
Rick Beato
(02:14:00)
… these are actually the original WAV files from off the CD that I ripped… 20 years ago.
Lex Fridman
(02:14:09)
What’s your current… And people are really curious about that, so what’s your current stack? What are the tools you use? What’s your DAW? What’s the audio interface? What are the mics?
Rick Beato
(02:14:17)
So I use Pro Tools.
Lex Fridman
(02:14:19)
Pro Tools stuff.
Rick Beato
(02:14:20)
For the most part, but I also use Logic- … And Ableton. I’ve got all those.
Lex Fridman
(02:14:25)
So you’re mostly on a Mac?
Rick Beato
(02:14:26)
I’m only on a Mac.
Lex Fridman
(02:14:28)
Only on a Mac.
Rick Beato
(02:14:28)
Only on a Mac.
Lex Fridman
(02:14:29)
I’m only the opposite.
Rick Beato
(02:14:31)
Although we have multiple PCs, ’cause my kids use PCs.
Lex Fridman
(02:14:35)
Yeah, just to rebel.
Rick Beato
(02:14:37)
They do it for gaming. They like to game.
Lex Fridman
(02:14:38)
Right, that’s true. But like in terms of editing, I hate how good Mac is-
Rick Beato
(02:14:45)
So good.
Lex Fridman
(02:14:46)
… at just integrating. The, the hardware and the software just work well together. Both on the video en-
Rick Beato
(02:14:50)
If I didn’t have a Mac, honestly, I wouldn’t be talking to you right now. Because I got a G3 that’s… So the only good thing that a major label did for me is when my band was on UMG and they bought me a G3 and an SM7 and Pro Tools Digi 001, the first prosumer Pro Tools thing. And I learned how to use Pro Tools, and that allowed me to learn how to edit video and become a record producer. So I gotta give it to Macs for that.
Lex Fridman
(02:15:22)
So Pro Tools, I mean, that’s still the standard.
Rick Beato
(02:15:25)
That’s kinda the industry standard, yeah.
Lex Fridman
(02:15:27)
I gotta ask you, ’cause I know… I’ve never used Pro Tools. Again, I’m a caveman. I’ve used REAPER, I’ve used Studio One; that’s the most recent one I’ve used. And most of the time I’ve used Ableton Live. I feel like I’m using 1% of the power of the tool. Like, Ableton Live makes me feel like I’m literally just pressing the record button.
Rick Beato
(02:15:51)
Ableton’s amazing. It really is.
Lex Fridman
(02:15:53)
It is. But I feel like… I mean, it’s designed for people that are doing, like, all kinds of MIDI stuff, and looping, and the, what is it, the push buttons with the beats. I mean, I sound really out of touch. But the power is incredible. Also, I think it’s not just for recording; it’s also for live performances. So this is why Studio One has been a little bit nicer for me, because it’s simpler, made more for recording.
Rick Beato
(02:16:28)
Any DAW that you get used to, Lex, that’s-
Lex Fridman
(02:16:32)
Just use anything.
Rick Beato
(02:16:33)
… using it, yeah. And you have to become a master at the thing. If you wanna be a recording engineer or producer, you become an expert. You know, Finneas and Billie Eilish, I think they use Logic; that’s the DAW that they like to use. And a lot of pros use Logic. You know, I fire up Logic every couple days and use it for things. I have it on my laptop here, and I have Pro Tools and Logic on my laptop. I use both. I use Pro Tools mostly, though.
Lex Fridman
(02:17:00)
But Pro Tools, that’s where you feel at home?
Rick Beato
(02:17:03)
Oh, yeah. I’m an expert in Pro Tools.
Lex Fridman
(02:17:05)
Are you using any emulation? Any amp sims or it’s all real amps?
Rick Beato
(02:17:12)
No, I use amp sims. On my laptop here, when I travel and things like that, I use Neural DSP; I just did a video at their headquarters in Helsinki. And their CEO, Doug Castro, is a friend of mine. I actually talked to him today, as a matter of fact. And I have a Kemper amp sim, you know, a modeler. I have an Axe-Fx, I’ve got a Helix. I pretty much have all these things. But for me, I can… I have 100 amps in my studio, so… And I have mics set up all the time, and cabinets, and stuff.
Lex Fridman
(02:17:47)
Oh, what do you mean?
Rick Beato
(02:17:47)
I have 100 amplifiers. Real amplifiers.
Lex Fridman
(02:17:49)
Real? Wait, sorry, 100?
Rick Beato
(02:17:52)
I have 100, yeah. About 100, maybe 95.
Lex Fridman
(02:17:56)
How does one get to that level?
Rick Beato
(02:18:01)
Collecting and being… I’ll be 64 in April, so-
Lex Fridman
(02:18:05)
So you just don’t let go?
Rick Beato
(02:18:06)
I don’t let go, no.
Lex Fridman
(02:18:08)
Why would you get to 100? Like is it, is it tone difference, the-
Rick Beato
(02:18:12)
Yes, so everything-
Lex Fridman
(02:18:13)
You know the tone difference?
Rick Beato
(02:18:13)
… does one thing really well. So it’d be like, okay, I have this Marshall JCM800 that’s modded that does this one thing. It’s got great mids and it’s good for this kind of a tune, so I will pull that out. Then it’s like, no, I need more of a scooped metal sound, more like Metallica or Dream Theater or something, so I’m gonna pull out my Mesa/Boogie. Or I need something chimey, more like Brian May or The Edge; I’m gonna pull out my Vox AC30. And that’s why I have so many amps, because every amp I have does one thing really well. If it doesn’t do it well, I get rid of it. And I’m down to 100.
Lex Fridman
(02:19:01)
Down to 100. It’s only 100. But it-
Rick Beato
(02:19:03)
I can get by with probably 75.

Guitars

Lex Fridman
(02:19:07)
Come on, but then you’re really running the risk of not having just the right amps. But you’re using emulation, so that’s great. I mean, and then there’s the other side of it, which is the guitar. I told you offline, I think having multiple guitars is cheating, but whatever. Nobody agrees with me on this. I only have, like, one… I do have some side pieces, but one main… The greatest gi-
Rick Beato
(02:19:33)
The Strat? What do you play?
Lex Fridman
(02:19:33)
The Strat, yeah.
Rick Beato
(02:19:34)
The Strat, yeah.
Lex Fridman
(02:19:34)
American Strat. I said I would never do this, but I was in a guitar store. I live next to a guitar store in Cambridge, and I would always stop by, I don’t know why. Just to look at the guitars, just to be in the aura of these great instruments. And one day they brought in this American Strat that had these different shades of… It was like a silver. And I just… I’ve never had this feeling. They talk about love at first sight. I just fell in love with the guitar. Can you just speak to the kinda guitars you have and you love?
Rick Beato
(02:20:13)
I pretty much have… Mainly old-school guitars, right? So I have Gibsons, I have Fenders, I have PRS guitars. And then I have two Gibson acoustics. I have a 1957 Country Western that I’ve had for probably 30-some-odd years. It’s a great guitar. And I have a J-45 Gibson, and I have a Martin D-28. So I only have three nice acoustics. And I have a Guild 12-string, and I have a Guild Nashville-tuned guitar. The low strings are up an octave, so the E, A, D and G are up an octave. That’s Nashville tuning. Six-string, though. Basically what David Gilmour plays on Comfortably Numb in my video. He plays a Nashville-tuned guitar, but with one variation: the low E is up two octaves. So he demonstrates actually the… And this is how he wrote Comfortably Numb. The chorus-
Rick Beato
(02:21:17)
… part of it was with this particular guitar that he’s playing in the video.
Lex Fridman
(02:21:21)
What can you say about, like, the different feels that the guitars, the acoustics have? Like, how do you know which one to pull out?
Rick Beato
(02:21:31)
It depends on the kind of part that I’m playing. If I want something with really tight midrange, that doesn’t have a lot of low bass, this particular old Gibson that I have, the ’57, I will pull that out. It’s got very balanced strings and, like, you know, midrange. It doesn’t have a booming bottom end, a booming low E string or A string. So it depends on what kind of sound I’m looking for. If I’m-
Lex Fridman
(02:21:58)
So it’s more about sound versus feel?
Rick Beato
(02:22:00)
Yeah. All my guitars play equally well.
Lex Fridman
(02:22:03)
Okay.
Rick Beato
(02:22:03)
I have them all set up to where they play well. I have a signature Gibson guitar that I’ve had for five years now.
Lex Fridman
(02:22:13)
When you say Gibson, Gibson Les Paul?
Rick Beato
(02:22:15)
Gibson. It’s a double cut Les Paul Special. Yeah, with P-90 pickups.
Lex Fridman
(02:22:20)
I don’t know what double cut means, but it sounds impressive.
Rick Beato
(02:22:22)
That means two cutouts. Two, um-
Lex Fridman
(02:22:24)
Oh. Cool.
Rick Beato
(02:22:25)
As opposed to a Les Paul that has one cut. So it’s a Les Paul Special that has two. I have it over there. My signature guitar.
Lex Fridman
(02:22:32)
That’s the- That’s the… All right, nice.
Rick Beato
(02:22:33)
Yeah. When you play this, you’re gonna be like, “Oh my God, this is butter.”
Lex Fridman
(02:22:37)
Now, I’m again, I said it’s cheating. I don’t-
Rick Beato
(02:22:40)
And what amp do you play through? Do you play through an amp sim, or do you have… What do you have, like a-
Lex Fridman
(02:22:45)
This is gonna be embar… Yeah. I use BIAS FX. I’m sorry.
Rick Beato
(02:22:50)
Lex, I use amp sims too, so… I just got the new John Mayer Neural DSP plugin today that I have not tried out. He did a modeling of all his amplifiers that Neural DSP did. And it sounds great. John played it; it sounds just like his amps.
Lex Fridman
(02:23:07)
Yeah, John is incredible.
Rick Beato
(02:23:07)
John’s great.
Lex Fridman
(02:23:08)
I’ve been fortunate enough to have dinner with him two times. And outside of being an incredible musician, he’s also conversationally just-
Rick Beato
(02:23:17)
Yes. I’ve known John since he lived in Atlanta when he got signed, and I knew John from way back then, right in the early 2000s.
Lex Fridman
(02:23:26)
I think he doesn’t get enough credit. Like, he’s one of the greatest living guitarists-
Rick Beato
(02:23:31)
He’s a fantastic guitar player.
Lex Fridman
(02:23:33)
… in the world.
Rick Beato
(02:23:33)
Absolutely.
Lex Fridman
(02:23:34)
And a celebrator, if that’s a word, of great guitar playing.
Rick Beato
(02:23:39)
Absolutely.

Advice

Lex Fridman
(02:23:40)
By way of advice, you started your YouTube channel in your mid-50s and found incredible success. You’ve had essentially multiple careers. Is there some wisdom you can extract from that?
Rick Beato
(02:23:56)
So my theory is that somebody’s gotta be successful, so why can’t it be you? That was my… When I started my channel, I mean, I didn’t start it to… It started by accident with the Dylan video. And really, so many people reached out to me. I started it six months after that viral video. So many people wrote to me, “Can you teach me this?” Pro musicians, well-known ones who you’d know. “Can you teach me this?” I can’t teach you what Dylan did, but I can teach you relative pitch, develop your ear that way. But then I had conservatories writing to me about this stuff from all over the world. “How did you teach Dylan this?” ‘Cause we made about four different videos, and they got more and more sophisticated.
Rick Beato
(02:24:48)
And so I thought, “Okay, I’ll make some YouTube videos and explain this stuff.” That’s really why I started, so I didn’t have to keep… I couldn’t answer the emails. There were so many of them, so I just started making videos on how to train your ear and music theory. And that’s really how I started my channel, and my wife was like, “What are you doing?” I said, “I’m making YouTube videos.” “Why?” So I don’t have to keep telling people how I did this stuff. And then all of a sudden, you know, I had 4,000 subscribers the first month, then another 4,000. Hit 100,000 after a year, and then six months later, 200,000, then three months later, 300,000. So-
Lex Fridman
(02:25:26)
I think there’s one thing that should be said, that in modern culture for young people, a lot of them will see YouTube and TikTok and Instagram, and they kinda wanna be famous. They wanna get the clicks and the views and so on, and that’s the thing they chase and optimize. I think the thing that you’re leaving unstated perhaps is that you spent many years pursuing the mastery of a craft. And there’s a lot of value to getting good at something.
Rick Beato
(02:25:59)
Absolutely.
Lex Fridman
(02:26:00)
Offline. You can actually reveal your journey online, but the thing you’re chasing is not fame. It’s getting good at something. And I think actually what happens is, even if the thing you get good at is not the thing that you become famous for, if that’s the thing that ends up happening, it’s still, like… Getting good at one thing kind of somehow relates to getting good at another thing. Somehow they’ll lead you to get better at getting better at the next thing, and the next thing, and the next thing. But if you’re just chasing fame and trying to figure out, “How do I do the viral thing?” or so on, it just seems to… You might actually get there, but it’ll be unfulfilling and not long-lasting.
Rick Beato
(02:26:47)
My theory of my channel has always been, make videos on things I’m interested in. And at first, I thought, “Oh, nobody’s going to watch an old white-haired guy on YouTube.” That was kind of my thing. Well, that was not correct. And then it’s like, “Well, just make videos on stuff I’m interested in.” It just so happens that other people are interested in the same things I’m interested in. And keep learning. And when I produced bands, I never let them take my picture, ever. I never let them record me in the studio. There’s virtually no pictures of any band I ever produced. So from 1999 to 2015 when that Dylan video came out, no one took my picture. There were no pictures of me on the internet.
Lex Fridman
(02:27:33)
You’re fully behind the camera kind of guy-
Rick Beato
(02:27:35)
Yes.
Lex Fridman
(02:27:36)
… meaning, like, no…
Rick Beato
(02:27:37)
No. No pictures. No, no pictures with people. “Hey, can we take a picture?” I said, “No. No pictures with people.”
Lex Fridman
(02:27:44)
And now you’re like… you’re the talent. You’re the face. No, I mean, but then again, the thing you’re leaving unstated there is, like, you spent a lot of years teaching music. Like, really exploring music. Trying a music career of, like, trying to create, trying to produce, trying to be a musician, and all these… Not just trying. Like, getting extremely good at it. I just think in modern culture there’s a sense you want to skip that part. “I wanna be famous. I wanna…” You know this. And that is a thing that’s not going to be effective in most cases as a primary thing to chase.
Rick Beato
(02:28:31)
So I have an undergrad in classical bass. I have a master’s from New England Conservatory in jazz guitar. Then I taught college for… I taught jazz studies for five years, from ’87 to ’92. Then I got a publishing deal, my first publishing deal, in 1992, with PolyGram Publishing. And then I became a producer when I was 37. Having no idea how to engineer, I taught myself engineering. And then YouTube. I taught myself how to edit videos.
Lex Fridman
(02:28:59)
And then you taught yourself how to interview.
Rick Beato
(02:29:01)
And I taught myself how to interview. I’d never done an interview before. I never was like, “An interviewer? What?”
Lex Fridman
(02:29:05)
You haven’t just done that. You’ve taught yourself not just how to do YouTube, but YouTube Shorts. Different-
Rick Beato
(02:29:12)
Totally different thing.
Lex Fridman
(02:29:13)
… totally different thing.
Rick Beato
(02:29:13)
Totally different skill.
Lex Fridman
(02:29:15)
And then not just YouTube, but like, how to be like a… there’s a… ’cause you’re both a YouTuber and like a musician who posts stuff on YouTube. YouTuber means like you’re thinking about stuff like thumbnails and…
Rick Beato
(02:29:31)
Which I make my own thumbnails. I’ve always made my own thumbnails.
Lex Fridman
(02:29:35)
By the way, before I forget, I think I speak for the entirety of the internet in thanking you for how you introduce your videos and how you close them. ’Cause this is a big part of YouTube, where people have a 30-minute introduction to a five-minute video. You just go straight in. That’s really wonderful. That’s, I mean, on all fronts. I mean, I suppose that has to do with the production skills that you have, of understanding cutting, cutting the fluff.
Rick Beato
(02:30:02)
To make a song.
Lex Fridman
(02:30:02)
Yep. Yeah, cutting, cutting the fluff, cutting the bullshit. I’ll just get straight to the core of the thing. I’ve heard you talk about maintaining friendships for a long time. You said, “Never waste a friendship.” Can you elaborate on that?
Rick Beato
(02:30:15)
Yeah. That’s one of my things is that I really value the time I’ve spent with people—friendships and keeping in touch with people. I talk to each one of my siblings multiple times a week. I talk to my sisters probably every night, my two sisters. I have friends from college, I got friends from growing up, I have friends from, you know, both colleges I went to. I have friends from all different eras in my life that I keep in touch with and visit whenever I can, and…
Lex Fridman
(02:30:46)
And you must have met some incredible humans, and incredibly weird, and interesting humans throughout your life. So it’s worth it—the effort to connect and reconnect.
Rick Beato
(02:30:59)
I mean, it’s pretty much everything in life. Nothing means anything more than the friendships that you make in your family.
Lex Fridman
(02:31:06)
Yeah, what’s the point of this whole thing, right?
Rick Beato
(02:31:07)
That’s right.
Lex Fridman
(02:31:09)
What’s the role of music in the human experience?
Rick Beato
(02:31:14)
Well, hopefully to enlighten people and to create the soundtrack of their life.
Lex Fridman
(02:31:20)
It is, right? Music does something. I’ll get… sometimes when I’m alone I’ll listen to a song, and there’s nothing quite like a song that makes me truly feel, like feel alive. And whatever that is: sadness, or hope, or excitement. Or when I’m working out, listening to Rage Against the Machine—like protest. Or as I was listening to Metallica, I was re-listening to the set that they played in Moscow, just hyped. Like truly hyped. I was like pacing listening to it. And there’s nothing like that.
Rick Beato
(02:32:05)
I’ve never found anything.
Lex Fridman
(02:32:06)
And I don’t know what that is in the human psyche, but I’m so glad we found it. We humans created instruments that can vibrate strings and together create harmonies and melodies, ones that reverberate through generations and carry that.
Rick Beato
(02:32:27)
It’s one of the greatest things that humans ever did, creating music.
Lex Fridman
(02:32:31)
And all of that led up to you, some guy being listened to by millions of people on the internet. This is all a simulation, Rick. And I’ve been a fan of yours for a long time, like I told you. This is crazy to meet you.
Rick Beato
(02:32:48)
Same, Lex.
Lex Fridman
(02:32:49)
Thank you for everything you do for the world, for celebrating music. For helping us discover and rediscover some of the incredible musicians and songs that have been created over the decades, over the centuries. Thank you for being who you are and thank you for talking to me.
Rick Beato
(02:33:07)
Thanks, I appreciate it.
Lex Fridman
(02:33:09)
Thanks for listening to this conversation with Rick Beato. To support this podcast, please check out our sponsors in the description where you can also find links to contact me, ask questions, give feedback, and so on. And now, let me leave you with some words from Friedrich Nietzsche, as I often do. “Without music, life would be a mistake.” Thank you for listening, and I hope to see you next time.

#491 – OpenClaw: The Viral AI Agent that Broke the Internet – Peter Steinberger

Peter Steinberger is the creator of OpenClaw, an open-source AI agent framework that’s the fastest-growing project in GitHub history.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep491-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript:
https://lexfridman.com/peter-steinberger-transcript

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Peter’s X: https://x.com/steipete
Peter’s GitHub: https://github.com/steipete
Peter’s Website: https://steipete.com
Peter’s LinkedIn: https://www.linkedin.com/in/steipete
OpenClaw Website: https://openclaw.ai
OpenClaw GitHub: https://github.com/openclaw/openclaw
OpenClaw Discord: https://discord.gg/openclaw

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Perplexity: AI-powered answer engine.
Go to https://perplexity.ai/
Quo: Phone system (calls, texts, contacts) for businesses.
Go to https://quo.com/lex
CodeRabbit: AI-powered code reviews.
Go to https://coderabbit.ai/lex
Fin: AI agent for customer service.
Go to https://fin.ai/lex
Blitzy: AI agent for large enterprise codebases.
Go to https://blitzy.com/lex
Shopify: Sell stuff online.
Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex

OUTLINE:
(00:00) – Introduction
(03:51) – Sponsors, Comments, and Reflections
(15:29) – OpenClaw origin story
(18:48) – Mind-blowing moment
(28:15) – Why OpenClaw went viral
(32:12) – Self-modifying AI agent
(36:57) – Name-change drama
(54:07) – Moltbook saga
(1:02:26) – OpenClaw security concerns
(1:11:07) – How to code with AI agents
(1:42:02) – Programming setup
(1:48:45) – GPT Codex 5.3 vs Claude Opus 4.6
(1:57:52) – Best AI agent for programming
(2:19:52) – Life story and career advice
(2:23:49) – Money and happiness
(2:27:41) – Acquisition offers from OpenAI and Meta
(2:44:51) – How OpenClaw works
(2:56:09) – AI slop
(3:02:13) – AI agents will replace 80% of apps
(3:10:50) – Will AI replace programmers?
(3:22:50) – Future of OpenClaw community

Transcript for OpenClaw: The Viral AI Agent that Broke the Internet – Peter Steinberger | Lex Fridman Podcast #491

This is a transcript of Lex Fridman Podcast #491 with Peter Steinberger.
The timestamps in the transcript are clickable links
that take you directly to that point in
the main video. Please note that the transcript is
human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Episode highlight

Peter Steinberger
(00:00:00)
I watched my agent happily click the “I’m not a robot” button. I made the agent very aware. Like, it knows what its source code is. It understands th- how it sits and runs in its own harness. It knows where documentation is. It knows which model it runs. It understands its own system that made it very easy for an agent to… Oh, you don’t like anything? You just prompted it into existence, and then the agent would just modify its own software. People talk about self-modifying software, I just built it. I actually think vibe coding is a slur.
Lex Fridman
(00:00:31)
You prefer agentic engineering?
Peter Steinberger
(00:00:33)
Yeah, I always tell people I’d- I do agentic engineering, and then maybe after 3:00 AM, I switch to vibe coding, and then I have regrets the next day.
Lex Fridman
(00:00:40)
What a walk of shame.
Peter Steinberger
(00:00:42)
Yeah, you just have to clean up and, like, fix your sh- shit.
Lex Fridman
(00:00:45)
We’ve all been there.
Peter Steinberger
(00:00:46)
I used to write really long prompts. And by writing, I mean, I don’t write, I- I- I talk, you know? These- these hands are, like, too- too precious for writing now. I just- I just use bespoke prompts to build my software.
Lex Fridman
(00:01:00)
So, you, for real, with all those terminals, are using voice?
Peter Steinberger
(00:01:04)
Yeah. I used to do it very extensively, to the point where there was a period where I lost my voice.
Lex Fridman
(00:01:13)
I mean, I have to ask you, just curious. I- I know you’ve probably gotten huge offers from major companies. Can you speak to who you’re considering working with?
Peter Steinberger
(00:01:27)
Yeah.

Introduction

Lex Fridman
(00:01:30)
The following is a conversation with Peter Steinberger, creator of OpenClaw, formerly known as Moltbot, ClawedBot, Clawdus, Clawd, spelled with a W, as in lobster claw. Not to be confused with Claude, the AI model from Anthropic, spelled with a U. In fact, this confusion is the reason Anthropic kindly asked Peter to change the name to OpenClaw. So, what is OpenClaw? It’s an open-source AI agent that has taken over the tech world in a matter of days, exploding in popularity, reaching over 180,000 stars on GitHub, and spawning the social network Moltbook, where AI agents post manifestos and debate consciousness, creating a mix of excitement and fear in the general public.
Lex Fridman
(00:02:19)
And a kind of AI psychosis, a mix of clickbait fearmongering and genuine, fully justifiable concern about the role of AI in our digital, interconnected human world. OpenClaw, as its tagline states, is the AI that actually does things. It’s an autonomous AI assistant that lives in your computer, has access to all of your stuff, if you let it, talks to you through Telegram, WhatsApp, Signal, iMessage, and whatever other messaging client. It uses whatever AI model you like, including Claude Opus 4.6 and GPT 5.3 Codex, all to do stuff for you. Many people are calling this one of the biggest moments in the recent history of AI, since the launch of ChatGPT in November 2022.
Lex Fridman
(00:03:07)
The ingredients for this kind of AI agent were all there, but putting it all together in a system that definitively takes a step forward over the line from language to agency, from ideas to actions, in a way that created a useful assistant that feels like one who gets you and learns from you, in an open source, community-driven way, is the reason OpenClaw took the internet by storm. Its power, in large part, comes from the fact that you can give it access to all of your stuff and give it permission to do anything with that stuff in order to be useful to you. This is very powerful, but it is also dangerous. OpenClaw represents freedom, but with freedom comes responsibility.
Lex Fridman
(00:03:51)
With it, you can own and have control over your data, but precisely because you have this control, you also have the responsibility to protect it from cybersecurity threats of various kinds. There are great ways to protect yourself, but the threats and vulnerabilities are out there. Again, a powerful AI agent with system-level access is a security minefield, but it also represents the future. Because when done well and securely, it can be extremely useful to each of us humans as a personal assistant. We discuss all of this with Peter, and also discuss his big-picture programming and entrepreneurship life story, which I think is truly inspiring. He spent 13 years building PSPDFKit, which is software used on a billion devices.
Lex Fridman
(00:04:41)
He sold it, and for a brief time, fell out of love with programming, vanished for three years, and then came back, rediscovered his love for programming, and built, in a very short time, an open source AI agent that took the internet by storm. He is, in many ways, the symbol of the AI revolution happening in the programming world. There was the ChatGPT moment in 2022, the DeepSeek moment in 2025, and now, in ’26, we’re living through the OpenClaw moment, the age of the lobster. The start of the agentic AI revolution. What a time to be alive. This is a Lex Fridman podcast. To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Peter Steinberger.

OpenClaw origin story

Lex Fridman
(00:05:36)
The one and only, the Clawed Father. Actually, Benjamin predicted it in his tweet. “The following is a conversation with Claude, a respected crustacean.” It’s a hilarious-looking picture of a lobster in a suit, so I think the prophecy has been fulfilled. Let’s go to this moment when you built a prototype in one hour, that was the early version of OpenClaw. I think this story’s really inspiring to a lot of people because this prototype led to something that just took the internet by storm… and became the fastest-growing repository in GitHub history, with now over 175,000 stars. So, what was the story of the one-hour prototype?
Peter Steinberger
(00:06:20)
You know, I wanted that since April.
Lex Fridman
(00:06:23)
A personal assistant. AI personal assistant.
Peter Steinberger
(00:06:25)
Yeah. And I, I played around with some other things, like even stuff that gets all my WhatsApp, and I could just run queries on it. That was back when we had GPT-4.1, with the one million context window. And I, I pulled in all the data and then just asked him questions like, “What makes this friendship meaningful?”
Lex Fridman
(00:06:50)
Mm-hmm.
Peter Steinberger
(00:06:50)
And I got some, some really profound results. Like, I sent it to my friends and they got, like, teary eyes.
Lex Fridman
(00:06:59)
So, there’s something there.
Peter Steinberger
(00:07:01)
Yeah. But then I… I thought all the labs will, will, will work on that. So I, I moved on to other things, and that was still very much in my early days of experimenting and pl- playing. You know, you have to… That’s how you learn. You just like, you do stuff and you play. And time flew by and it was November. I wanted to make sure that the thing I started is actually happening. I was annoyed that it didn’t exist, so I just prompted it into existence.
Lex Fridman
(00:07:36)
I mean, that’s the beginning of the hero’s journey of the entrepreneur, right? And even with your original story with PSPDFKit, it’s like, “Why does this not exist? Let me build it.” And again, here’s diff- a whole different realm, but maybe a similar spirit.
Peter Steinberger
(00:07:52)
Yeah, so I had this problem. I tried to show PDF on an iPad, which should not be hard.
Lex Fridman
(00:07:56)
This is like 15 years ago, something like that.
Peter Steinberger
(00:07:59)
Yeah. Like the most, the most random thing ever. And suddenly, I had this problem and I, I wanted to help a friend. And there was, there was… Well, not like nothing existed, but it was just not good. And like… Like I tried it and it was like very, “Nah.” Like, “Hmm, I can do this better.”
Lex Fridman
(00:08:17)
By the way, for people who don’t know, this led to the development of PSPDFKit, which is used on a billion devices. So, the… It turns out that it’s pretty useful to be able to open a PDF.
Peter Steinberger
(00:08:28)
You could also make the joke that I’m really bad at naming.
Lex Fridman
(00:08:32)
Yeah.
Peter Steinberger
(00:08:32)
Like, name number five on the current project. And even PSPDFKit doesn’t really roll off the tongue.
Lex Fridman
(00:08:39)
Anyway, so you said “Screw it. Why don’t I do it?” So what was the… What was the prototype? What was the thing that you… What was the magical thing that you built in a short amount of time that you were like, “This might actually work as an agent,” where I talk to it and it does things?

Mind-blowing moment

Peter Steinberger
(00:08:55)
There was… Like, one of my projects before already did something where I could bring my terminals onto the web and then I could, like, interact with them, but they would also be the terminals on my Mac.
Lex Fridman
(00:09:06)
Mm-hmm.
Peter Steinberger
(00:09:07)
VibeTunnel, which was like a, a weekend hack project that was still very early. And it was Claude Code times. You know, you got a dopamine hit when it got something right. And now I get, like, mad when it gets something wrong.
Lex Fridman
(00:09:22)
And you had a really great… not to take a tangent, but a great blog post describing that you converted VibeTunnel. You vibe-coded VibeTunnel from TypeScript into Zig, of all programming languages, with a single prompt. One prompt, one shot. Convert the entire code base into Zig.
Peter Steinberger
(00:09:41)
Yeah. There was this one thing where part of the architecture was… Took too much memory. Every terminal used, like, a Node process. And I wanted to change it to Rust and… I mean, I can do it. I can, I can manually figure it all out, but all my automated attempts failed miserably. And then I revisited about four or five months later. And I’m like, “Okay, now let’s use something even more experimental.” And I, and I just typed, “Convert this and this part to Zig,” and then let Codex run off. And it basically got it right. There was one little detail that I had to, like, modify afterwards, but it just ran overnight, like six hours, and just did its thing. And it’s like… It’s just mind-blowing.
Lex Fridman
(00:10:39)
So that’s on the LLM programming side, refactoring. But uh, back to the actual story of the prototype. So how did VibeTunnel connect to the first prototype where your, like, agents can actually work?
Peter Steinberger
(00:10:52)
Well, that was still very limited. You know, like I had this one experiment with WhatsApp, then I had this experiment, and both felt like not the right answer. And then the next step was literally just hooking up WhatsApp to Claude Code. One shot. A message comes in, I call the CLI with -p, it does its magic, I get the string back, and I send it back to WhatsApp. And I, I built this in one hour. And I felt… It already felt really cool. It’s like, “Oh, I could… I can, like, talk to my computer,” right? This… That, that was, that was cool. But I, I wanted images, ’cause I alw- I often use images when I prompt. I think it’s such a, such an efficient way to give the agent more context.
Peter Steinberger
(00:11:40)
And they are really good at figuring out what I mean, e- even if it’s like a, a weird cropped-up screenshot. So I used it a lot and I wanted to do that in WhatsApp as well. Also, like, you know, just you run around, you see like a poster of an event, you just make a screenshot and like figure out if I have time there, if this is good, if my friends are maybe up for that. Just like images seemed im- important. So I, I worked a few… It took me a few more hours to actually get that right. And then it was just…… I, I used it a lot. And funny enough, that was just before I went on a trip to Marrakesh with my friends for a birthday trip. And there it was even better because internet was a little shaky but WhatsApp just works, you know?
Peter Steinberger
(00:12:29)
It’s like, doesn’t matter, you have, like, EDGE, it still works. WhatsApp is just… It’s just made really well. So I ended up using it a lot. Translate this for me, explain this, find me places. Like, just having a clanker doing the googling for you, that was… Basically there was still nothing built, but it could still do so much.
Lex Fridman
(00:12:53)
So, if we talk about the full journey that’s happening there with the agent: you’re just sending, over this very thin line, a WhatsApp message via the CLI, it’s going to Claude Code, and Claude Code is doing all kinds of heavy work and coming back to you with a thin message.
Peter Steinberger
(00:13:13)
Yeah. It was slow, because every time it has to boot up the CLI, but it… It was really cool already. And it could just use all the things that I already had built. I had built, like, a whole bunch of CLI stuff over the months, so it, it felt really powerful.
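The loop Peter describes is, in essence, a single relay turn: a chat message comes in, it gets handed to a coding-agent CLI in one-shot mode, and whatever string comes back is sent to the chat. A minimal TypeScript sketch, with the CLI call and the transport stubbed out; all names here are illustrative, not OpenClaw’s actual code:

```typescript
// One relay turn of a chat-to-agent bridge: forward the incoming
// message to the agent, then push the reply into the originating chat.

type SendFn = (chatId: string, text: string) => Promise<void>;
type AgentFn = (prompt: string) => Promise<string>;

async function relayTurn(
  chatId: string,
  incoming: string,
  runAgent: AgentFn, // in the prototype: spawn the CLI with -p
  send: SendFn,      // in the prototype: the WhatsApp client
): Promise<string> {
  const reply = await runAgent(incoming);
  await send(chatId, reply);
  return reply;
}

// Stubbed usage: a fake "agent" and a fake transport.
const outbox: string[] = [];
const fakeAgent: AgentFn = async (p) => `echo: ${p}`;
const fakeSend: SendFn = async (_chat, text) => {
  outbox.push(text);
};

relayTurn("chat-1", "translate this menu", fakeAgent, fakeSend)
  .then((reply) => console.log(reply)); // prints "echo: translate this menu"
```

Because the agent and the transport are just two injected functions, supporting another channel (Telegram, Discord) would mostly mean supplying a different `send`, which is roughly why such a relay can stay so thin.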
Lex Fridman
(00:13:31)
There is something magical about that experience that’s hard to put into words. Being able to use a chat client to talk to an agent, versus, like, sitting behind a computer and, like, I don’t know, using Cursor or even using the Claude Code CLI in the terminal. It’s a different experience than being able to sit back and talk to it. I mean, it seems like a trivial step, but it- in some sense it’s a… It’s like a phase shift in the integration of AI into your life and how it feels, right?
Peter Steinberger
(00:14:05)
Yeah. Yeah. I, I read this tweet this morning where someone said, “Oh, there’s no magic in it. It’s just like, it does this and this and this and this and this and this.” Like it’s just a wrapper, just as Cursor or Perplexity. And I’m like, well, if that’s a wrapper, that’s kind of a compliment, you know? Like, they’re not doing too bad. Thank you, I guess? Yes. I mean, isn’t, isn’t, isn’t magic often just, like, you take a lot of things that are already there but bring them together in new ways? Like, I don’t… There’s no… Yeah. Maybe there’s no magic in there, but sometimes just rearranging things and, like, adding a few new ideas is all the magic that you need.
Lex Fridman
(00:14:51)
It’s really hard to convert into words what is, what is magic about a thing. If you look at the, the scrolling on an iPhone, why is that so pleasant? There’s a lot of elements about that interface that makes it incredibly pleasant, that is fundamental to the experience of using a smartphone, and it’s like, okay, all the components were there. Scrolling was there, everything was there.
Peter Steinberger
(00:15:13)
Nobody did it-
Lex Fridman
(00:15:14)
Yep
Peter Steinberger
(00:15:14)
… and afterwards it felt so obvious.
Lex Fridman
(00:15:16)
Yeah, so obvious.
Peter Steinberger
(00:15:16)
Right? But still… You know the moment where it, it blew my mind was when, when I- I used it a lot and then at some point I just sent it a message and, and then a typing indicator appeared. And I’m like, wait, I didn’t build that, it only m- it only has image support, so what is it even doing? And then it would just reply.
Lex Fridman
(00:15:42)
What was the thing you sent it?
Peter Steinberger
(00:15:43)
Oh, just a random question like, “Hey, what about this in this restaurant?” You know? Because we were just running around and checking out the city. So that’s why I, I didn’t, didn’t even think when I used it because sometimes when you’re in a hurry typing is annoying.
Lex Fridman
(00:15:59)
So, oh, you did an audio message?
Peter Steinberger
(00:16:00)
Yeah. And it just, it just worked and I’m like…
Lex Fridman
(00:16:03)
And it’s not supposed to work because-
Peter Steinberger
(00:16:05)
No
Lex Fridman
(00:16:05)
… you didn’t give it that-
Peter Steinberger
(00:16:07)
No, literally
Lex Fridman
(00:16:07)
… capability.
Peter Steinberger
(00:16:08)
I literally went, “How the fuck did he do that?” And it was like, “Yeah, the mad lad did the following. He sent me a message, but it was only a file with no file ending. So I checked the header of the file and found that it was, like, Opus, so I used ffmpeg to convert it. And then I wanted to use Whisper, but it didn’t have it installed. But then I found the OpenAI key and just used curl to send the file to OpenAI to transcribe it, and here I am.”
Peter Steinberger
(00:16:39)
Just looked at the message I’m like, “Oh wow.”
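The improvisation the agent narrated follows a recognizable recipe: sniff the extension-less file’s magic bytes to guess the container, convert it with ffmpeg, then ship it to a transcription API. A sketch of the first step in TypeScript; the container checks and the shell commands in the comments are assumptions about one plausible implementation, not OpenClaw’s actual code:

```typescript
// Guess an audio file's container from its first bytes, the way the
// agent did for a file with no extension. WhatsApp voice notes are
// Opus audio inside an OGG container, whose magic bytes are "OggS".

function ascii(bytes: Uint8Array, start: number, end: number): string {
  return String.fromCharCode(...bytes.subarray(start, end));
}

function sniffAudioContainer(header: Uint8Array): string {
  if (ascii(header, 0, 4) === "OggS") return "ogg-opus"; // OGG magic
  if (ascii(header, 0, 4) === "RIFF" && ascii(header, 8, 12) === "WAVE")
    return "wav"; // RIFF/WAVE header
  if (ascii(header, 4, 8) === "ftyp") return "m4a"; // ISO base media
  return "unknown";
}

// From there, the agent's plan was roughly (exact invocations are
// assumptions, shown as comments):
//   ffmpeg -i voice-note voice-note.mp3                  # convert
//   curl https://api.openai.com/v1/audio/transcriptions \
//     -H "Authorization: Bearer $OPENAI_API_KEY" \
//     -F model=whisper-1 -F file=@voice-note.mp3         # transcribe

const header = new TextEncoder().encode("OggS\0\0rest-of-stream");
console.log(sniffAudioContainer(header)); // prints "ogg-opus"
```

The point of the story survives the sketch: none of these steps were programmed in; the model composed them from general world knowledge about file formats and tools.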
Lex Fridman
(00:16:43)
You didn’t teach it any of those things, and the agent just figured it out, did all those conversions, the translations. It figured out the API, it figured out which program to use, all those kinds of things. And you just absent-mindedly sent an audio message, and it came back.
Peter Steinberger
(00:16:56)
Yeah, like, so clever even, because if he would have gone the local Whisper path, he would have had to download a model. It would have been too slow. So like, there’s so much world knowledge in there, so much creative problem solving. A lot of it, I think, mapped from… If you get really good at coding, that means you have to be really good at general-purpose problem solving. So that’s a skill, right? And that just maps into other domains. So it had the problem of, like, what is this file with no file ending? Let’s figure it out. And that’s when it kind of clicked for me. It’s like, I was, like, very impressed. And somebody sent a pull request for Discord support and I’m like, “This is a WhatsApp relay.
Peter Steinberger
(00:17:37)
That doesn’t, doesn’t fit at all.”
Lex Fridman
(00:17:40)
At that time it was called WA Relay.
Peter Steinberger
(00:17:42)
Yeah. And so I debated with myself, like, do I want that? Do I not want that? And then I thought, well, maybe, maybe I do that, because that could be a cool way to show people. Because I… So far I did it in WhatsApp, as, like, groups, you know, but I don’t really want to give my phone number to every internet stranger.
Lex Fridman
(00:18:07)
Yeah.
Peter Steinberger
(00:18:07)
Journalists managed to do that anyhow now, so that’s a different story. So I merged it… from Shadow, who helped me a lot with the whole project. So, thank you. And, and I put my, my bot in there.

Why OpenClaw went viral

Lex Fridman
(00:18:27)
On Discord?
Peter Steinberger
(00:18:28)
Yeah. No security, because I didn’t… I hadn’t built sandboxing in yet. I, I just prompted it to, like, only listen to me. And then some people came and tried to hack it, and I just… Like, I just watched, and I just kept working in the open, you know? Like, y- I used my agent to build my agent harness and to test, like, various stuff. And that’s very quickly when it clicked for people. So it’s almost like it needs to be experienced. And from that time on, that was January the 1st, I, I got my first real influencer being a fan and doing videos, dachitze. Thank you. And, and from there on, I, I started gaining speed. And at the same time, my, my sleep cycle went shorter and shorter because I, I felt the storm coming, and I just worked my ass off to get it to…
Peter Steinberger
(00:19:33)
into a state where it’s kinda good.
Lex Fridman
(00:19:38)
There’s a few components and we’ll talk about how it all works, but basically, you’re able to talk to it using WhatsApp, Telegram, Discord. So that’s a component that you have to get right.
Peter Steinberger
(00:19:48)
Yeah.
Lex Fridman
(00:19:49)
And then you have to figure out the agentic loop, you have to have the gateway, you have the harness, you have all those components that make it all just work nicely.
Peter Steinberger
(00:19:56)
Yeah. It felt like Factorio times infinite.
Lex Fridman
(00:20:00)
Right.
Peter Steinberger
(00:20:01)
I, I feel like I built my little- … my little playground. Like, I never had so much fun than building this project. You know? Like, you have like, “Oh,” I go like, level one agentic loop. What can I do there? How can I be smart at queuing messages? How can I make it more human-like? Oh, then I had this idea of… Because the loop always… The agent always replies something, but you don’t always want an agent to reply something in a group chat. So I gave him this no-reply token. So I gave him an option to shut up. So it, it feels more natural.
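The no-reply idea Peter describes is simple to sketch: the agent is prompted that in a group chat it may answer with a special sentinel instead of text, and the loop swallows that turn instead of posting it. The token string and function names below are made up for illustration; OpenClaw’s actual token may differ:

```typescript
// Hypothetical sentinel the system prompt tells the agent to emit
// when it decides a group-chat message needs no answer from it.
const NO_REPLY = "<no-reply>";

// Returns the text to post, or null when the agent chose silence.
function filterAgentReply(raw: string): string | null {
  const text = raw.trim();
  if (text === NO_REPLY || text.length === 0) return null;
  return text;
}

// The message loop then only posts non-null results:
for (const reply of ["<no-reply>", "  ", "sure, 7pm works"]) {
  const out = filterAgentReply(reply);
  if (out !== null) console.log(out); // prints only "sure, 7pm works"
}
```

Giving the model an explicit way to opt out of replying is what makes the bot feel less compulsive in group chats: silence becomes a valid output instead of an error.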
Lex Fridman
(00:20:32)
That’s level two.
Peter Steinberger
(00:20:34)
Y- uh, yeah, yeah. Yeah, on the- on the-
Lex Fridman
(00:20:36)
Factorio.
Peter Steinberger
(00:20:36)
On the agentic loop. And then I go to memory, right?
Lex Fridman
(00:20:39)
Yeah.
Peter Steinberger
(00:20:39)
You want him to, like, remember stuff. So maybe, maybe the end… The ultimate boss is continuous reinforcement learning, but I’m, I’m, like, at… I feel like I’m level two or three with Markdown files and the vector database. And then you, you can go to level community management, you can go to level website and marketing. There’s just so many hats that you have to have on. Not even talking about native apps. That’s just, like, infinite different levels and infinite level ups you can do.
Lex Fridman
(00:21:08)
So the whole time you’re having fun. We should say that for the most part, throughout this whole process, you’re a one-man team. There’s people helping, but you’re doing so much of the key core development.
Peter Steinberger
(00:21:21)
Yeah.
Lex Fridman
(00:21:21)
And having fun? You did, in January, 6,600 commits. Probably more.
Peter Steinberger
(00:21:28)
I sometimes posted a meme. I’m limited by the technology of my time. I could do more if agents would be faster.
Lex Fridman
(00:21:34)
But we should say you’re running multiple agents at the same time.
Peter Steinberger
(00:21:37)
Yeah. Depending on how much I slept and how difficult the tasks I work on are, between four and 10.
Lex Fridman
(00:21:45)
Four and 10 agents. Uh, there’s so many possible directions, speaking of Factorio, that we can go here. But one big-picture one is, why do you think your work, OpenClaw, won? In this world, if you look at 2025, so many startups, so many companies were doing kind of agentic-type stuff, or claiming to. And here, OpenClaw comes in and destroys everybody. Like, why did you win?
Peter Steinberger
(00:22:15)
Because they all take themselves too seriously.
Lex Fridman
(00:22:18)
Yeah.

Self-modifying AI agent

Peter Steinberger
(00:22:19)
Like, it’s hard to compete against someone who’s just there to have fun.
Lex Fridman
(00:22:24)
Yeah.
Peter Steinberger
(00:22:24)
I wanted it to be fun, I wanted it to be weird. And if you see, like, all the, all the lobster stuff online, I think I, I managed weird. I… You know, for the longest time, the only, the only way to install it was git clone, pnpm build, pnpm gateway. Like, you clone it, you build it, you run it. And then the, the agent… I made the agent very aware. Like, it knows what its source code is. It understands th- how it sits and runs in its own harness. It knows where documentation is. It knows which model it runs. It knows if you turn on the voice or, or reasoning mode. Like, I, I wanted it to be more human-like, so it understands its own system. That made it very easy for an agent to… Oh, you don’t like anything?
Peter Steinberger
(00:23:19)
You just prompted it into existence, and then the agent would just modify its own software. You know, we have people talk about self-modifying software. I just built it and didn’t even… I didn’t even plan it so much. It just happened.
Lex Fridman
(00:23:35)
Can you actually speak to that? ‘Cause it’s just fascinating. So you have this piece of software that’s written in TypeScript-
Peter Steinberger
(00:23:43)
Yeah
Lex Fridman
(00:23:43)
… that’s able to, via the agentic loop, modify itself. I mean, what a moment to be alive in the history of humanity and the history of programming. Here’s the thing that’s used by a huge amount of people to do incredibly powerful things in their lives, and that very system can rewrite itself, can modify itself. Can you just, like, speak to the power of that? Like, isn’t that incredible? Like, when did you first close the loop on that?
Peter Steinberger
(00:24:14)
Oh, because that's how I built it as well, you know? Most of it is built by Codex, but oftentimes, when I debug it, I use self-introspection so much. It's like, "Hey, what tools do you see? Can you call the tool yourself?" Or, "What error do you see? Read the source code. Figure out what's the problem." I just found it an incredibly fun way to work: the very agent and software that you use is used to debug itself. It felt natural that everybody does that. And it led to so many pull requests by people who never wrote software. I mean, it also did show that these people never wrote software. So I call them prompt requests in the end.
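The self-awareness Peter describes can be sketched in miniature. This is a toy illustration only, not OpenClaw's actual code (OpenClaw is a real TypeScript codebase; the paths, model name, and data shape here are invented for the example):

```python
# Toy sketch of a "self-aware" agent: it carries structured knowledge
# about its own harness and can answer questions about itself, which is
# what makes "modify your own software" prompts feel natural.
# All values below are invented for illustration.

SELF_KNOWLEDGE = {
    "source_path": "src/gateway.ts",   # where its own code lives
    "docs_path": "docs/",              # where its documentation lives
    "model": "frontier-model-x",       # which model it currently runs
    "features": {"voice": False, "reasoning": True},
}

def introspect(question: str) -> str:
    """Answer simple questions about the agent's own configuration."""
    q = question.lower()
    if "source" in q:
        return f"My source code lives at {SELF_KNOWLEDGE['source_path']}"
    if "model" in q:
        return f"I am currently running {SELF_KNOWLEDGE['model']}"
    if "voice" in q:
        on = SELF_KNOWLEDGE["features"]["voice"]
        return f"Voice mode is {'on' if on else 'off'}"
    return "I don't know that about myself yet."

print(introspect("Where is your source code?"))
```

In the real project this knowledge lives in the repository itself (source, docs, config), so an agent with file access can look it up rather than carry it in a dict.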
Peter Steinberger
(00:25:00)
But I don't want to, like, pull that down, because every first pull request someone makes is a win for our society, you know? It doesn't matter how shitty it is, you gotta start somewhere. So I know there's, like, this whole big movement of people complaining about open source and the quality of PRs, and a whole different level of problems. But on a different level, I found it very meaningful that I built something that people love so much that they actually start to learn how open source works.
Lex Fridman
(00:25:37)
Yeah, for so many people, the OpenClaw project was their first pull request. You were the first for so many. That is magical. So many people that don't know how to program are taking their first step into the programming world with this.
Peter Steinberger
(00:25:52)
Isn’t that a step up for humanity? Isn’t that cool?
Lex Fridman
(00:25:54)
Creating builders.
Peter Steinberger
(00:25:56)
Yeah. Like, the bar to do that was so high, and with agents and with the right software, it just went lower and lower. I don't know. I was at a… I also organize another type of meetup. I called it Claude Code Anonymous. You can guess where the inspiration came from. Now I call it Agents Anonymous… for reasons.
Lex Fridman
(00:26:23)
Agents Anonymous.
Peter Steinberger
(00:26:24)
And-
Lex Fridman
(00:26:25)
Oh, it’s so funny on so many levels. I’m sorry, go ahead.
Peter Steinberger
(00:26:29)
Yeah. And there was this one guy who talked to me. He's like, "I run this design agency, and we never had custom software. And now I have, like, 25 little web services for various things that help me in my business. And I don't even know how they work, but they work." And he was just, like, very happy that my stuff solved some of his problems. And he was curious enough that he actually came to, like, an agentic meetup, even though he doesn't really know how software works.

Name-change drama

Lex Fridman
(00:27:04)
Can we actually rewind a little bit and tell the saga of the name change? First of all, it started out as warelay.
Peter Steinberger
(00:27:12)
Yeah.
Lex Fridman
(00:27:12)
And then it went to-
Peter Steinberger
(00:27:13)
Clawd.
Lex Fridman
(00:27:14)
Clawd.
Peter Steinberger
(00:27:15)
Yeah. You know, when I built it in the beginning, my agent had no personality. It was just Claude Code, this sycophantic Opus, very friendly. And when you talk to a friend on WhatsApp, they don't talk like Claude Code. It just didn't feel right, so I wanted to give it a personality.
Lex Fridman
(00:27:41)
Make it spicier, make it-
Peter Steinberger
(00:27:43)
Yeah
Lex Fridman
(00:27:43)
… something. By the way, that’s actually hard to put into words as well. And we should mention that, of course, you create the soul.md, inspired by Anthropic’s constitutional AI work-
Peter Steinberger
(00:27:53)
Mm-hmm
Lex Fridman
(00:27:53)
… how to make it spicy.
Peter Steinberger
(00:27:55)
Partially, it picked up a little bit from me. You know, those things are text-completion engines in a way. So I had fun working with it, and then I told it how I wanted it to interact with me, and just, like, "Write your own agents.md, give yourself a name." And then… I didn't even know how the whole lobster thing… I mean, people only do lobster… Originally, it was actually a lobster in a TARDIS, because I'm also a big Doctor Who fan.
Lex Fridman
(00:28:30)
Was there a space lobster?
Peter Steinberger
(00:28:31)
Yeah.
Lex Fridman
(00:28:31)
I heard. What’s that have to do with anything?
Peter Steinberger
(00:28:34)
Yeah, I just wanted to make it weird. There was no… There was no big grand plan. I’m just having fun here.
Lex Fridman
(00:28:40)
Oh, so I guess the lobster is already weird, and then the space lobster is extra weird.
Peter Steinberger
(00:28:44)
Yeah, yeah, because the-
Lex Fridman
(00:28:45)
Yeah
Peter Steinberger
(00:28:45)
… the TARDIS is basically the harness, but you cannot call it TARDIS, so we called it Clawd. So that was name number two.
Lex Fridman
(00:28:54)
Yeah.
Peter Steinberger
(00:28:54)
And then it never really rolled off the tongue. So when more people came, again, I talked with my agent, Clawd. At least that's what I used to call him. Now-
Lex Fridman
(00:29:08)
Clawd, spelled with a W: C-L-A-W-D.
Peter Steinberger
(00:29:12)
Yeah.
Lex Fridman
(00:29:14)
Versus C-L-A-U-D-E from Anthropic.
Peter Steinberger
(00:29:20)
Yeah.
Lex Fridman
(00:29:21)
Which is part of what makes it funny, I think. The play on the letters and the words, and the TARDIS and the lobster and the space lobster, is hilarious. But I can see why it can lead to problems.
Peter Steinberger
(00:29:34)
Yeah, they didn't find it so funny. So then I got the domain clawd.bot, and I just loved the domain. It was short, it was catchy. I'm like, "Yeah, let's do that." I didn't think it would be that big at the time. And then, just when it exploded, I got (kudos to them) a very friendly email from one of the employees saying that they didn't like the name.
Lex Fridman
(00:30:09)
One of the Anthropic employees.
Peter Steinberger
(00:30:11)
Yeah. So actually, kudos, because they could have just sent a lawyer letter, but they've been nice about it. But also like, "You have to change this, and fast." And I asked for two days, because changing a name is hard, because you have to find everything, you know: Twitter handle, domains, NPM packages, Docker registry, GitHub stuff. And everything has to be… you need a full set of everything.
Lex Fridman
(00:30:40)
And also, can we comment on the fact that you're increasingly attacked and followed by crypto folks? Which I think you mentioned somewhere meant the name change had to be… Because they were trying to snipe, they were trying to steal. I mean, from an engineering perspective, it's just fascinating. You had to make the name change atomic, make sure it's changed everywhere at once.
Peter Steinberger
(00:31:06)
Yeah. Failed very hard at that.
Lex Fridman
(00:31:08)
You did?
Peter Steinberger
(00:31:08)
I underestimated those people. It's a very interesting subculture. Everything circles around… I'll probably get a lot wrong, and I'll probably get hate for saying this, but there is, like, the Bags app, and they tokenize everything. And they did the same back with VibeTunnel, but to a much smaller degree; it was not that annoying. But on this project, they've been swarming me. It's like every half an hour, someone came into Discord and spammed it, and we had to block them. We have, like, server rules, and one of the rules is no mentioning of butter. For obvious reasons. And one was no talk about finance stuff or crypto. Because I'm…
Peter Steinberger
(00:32:04)
I'm just not interested in that, and this is a space about the project and not about some finance stuff. But yeah, they came in and spammed and… Annoying. And on Twitter, they would ping me all the time. My notification feed was unusable. I could barely see actual people talking about this stuff because it was, like, swarms.
Lex Fridman
(00:32:28)
Mm-hmm.
Peter Steinberger
(00:32:28)
And everybody sent me the hashes. And they all tried to get me to claim the fees. Like, "We're helping the project, claim the fees." No, you're actually harming the project. You're disrupting my work, and I am not interested in any fees. First of all, I'm financially comfortable. Second of all, I don't want to support that, because it's so far the worst form of online harassment that I've experienced.
Lex Fridman
(00:32:59)
Yeah. There's a lot of toxicity in the crypto world. It's sad, because the technology of cryptocurrency is fascinating, powerful, and maybe will define the future of money, but in the actual community around it there's so much toxicity, there's so much greed. There's so much trying to find a shortcut, to manipulate, to steal, to snipe, to game the system somehow to get money. I mean, it's human nature, I suppose, when you connect human nature with money and greed, especially in the online world with anonymity and all that kind of stuff. But from the engineering perspective, it makes your life challenging. When Anthropic reaches out, you have to do a name change.
Lex Fridman
(00:33:42)
And then there's, like, all these Game of Thrones or Lord of the Rings armies of different kinds you have to be aware of.
Peter Steinberger
(00:33:51)
Yeah. There was no perfect name, and I didn't sleep for two nights. I was under high pressure. I was trying to get, like, a good set of domains, and, you know, not cheap, not easy, 'cause in this state of the internet, you basically have to buy domains if you want to have a good set. And then another email came in that the lawyers were getting uneasy. Again, friendly, but also just adding more stress to my situation. So at this point I was just like, "Sorry, there's no other word. Fuck it." And I just renamed it to Moltbot, 'cause that was the set of domains I had. I was not really happy, but I thought it would be fine. And I tell you, everything that could go wrong did go wrong. Everything that could go wrong did go wrong.
Peter Steinberger
(00:34:49)
It's incredible. I thought I had mapped the space out and reserved the important things.
Lex Fridman
(00:34:58)
Can you give some details of the stuff that went wrong? 'Cause it's interesting from, like, an engineering perspective.
Peter Steinberger
(00:35:03)
Well, the interesting stuff is that none of these services have squatter protection. So I had two browser windows open. One was, like, an empty account ready to be renamed to Clawdbot, and the other one I renamed to Moltbot. So I pressed rename there, I pressed rename there, and in those five seconds, they stole the account name. Literally, the five seconds of dragging the mouse over and pressing rename was too long.
Lex Fridman
(00:35:33)
Wow.
Peter Steinberger
(00:35:34)
Because there’s no… Those systems… I mean, you would expect that they have some protection or, like, an automatic forwarding, but there’s nothing like that. And I didn’t know that they’re not just good at harassment, they’re also really good at using scripts and tools.
Lex Fridman
(00:35:51)
Yeah.
Peter Steinberger
(00:35:53)
So, yeah. So suddenly the old account was promoting new tokens and serving malware. And I was like, "Okay, let's move over to GitHub," and I pressed rename on GitHub. And the GitHub renaming thing is slightly confusing, so I renamed my personal account. I guess it took me 30 seconds to realize my mistake. They sniped my account and were serving malware from my account. So I was like, "Okay, let's at least do the NPM stuff," but that takes, like, a minute to upload. They sniped the NPM package, 'cause I could reserve the account, but I didn't reserve the root package. So, like, everything that could go wrong went wrong.
Lex Fridman
(00:36:47)
Can I just ask a curious question: in that moment, sitting there, how shitty do you feel? That's a pretty hopeless feeling, right?
Peter Steinberger
(00:36:57)
Yeah. Because all I wanted was to have fun with that project and to keep building on it. And yet here I am, days into researching names, picking a name I didn't like, and having people that claimed they were helping me making my life miserable in every possible way. And honestly, I was that close to just deleting it. I was like, "I did show you the future, you build it."
Lex Fridman
(00:37:30)
Yeah.
Peter Steinberger
(00:37:30)
I… There was a big part of me that got a lot of joy out of that idea. And then I thought about all the people that had already contributed to it, and I couldn't do it, because they had plans with it, and they put time in it. And it just didn't feel right.
Lex Fridman
(00:37:50)
Well, I think a lot of people listening to this are deeply grateful that you persevered. But I can tell it was a low point. Was this the first time you hit a wall of "this is not fun"?
Peter Steinberger
(00:38:02)
No, no, I was like close to crying. It was like, okay, everything’s fucked.
Lex Fridman
(00:38:10)
Yeah.
Peter Steinberger
(00:38:10)
Um…
Lex Fridman
(00:38:11)
Yeah.
Peter Steinberger
(00:38:11)
I am like super tired.
Lex Fridman
(00:38:13)
Yeah.
Peter Steinberger
(00:38:14)
And now, how do you even undo that? You know, luckily and thankfully, because I have a little bit of a following already, I had friends at Twitter, I had friends at GitHub, who, like, moved heaven and earth to help me. And that's not something that's easy. GitHub tried to clean up the mess, and then they ran into platform bugs, 'cause it's not happening so often that things get renamed on that level. So it took them a few hours. The NPM stuff was even more difficult because it's a whole different team. On the Twitter side, things were not as easy either. It took them, like, a day to really do the redirect as well. And then I also had to do all the renaming in the project.
Peter Steinberger
(00:39:15)
Then there's also ClawdHub, which I didn't even finish renaming, because I managed to get people on it and then I just, like, collapsed and slept. And then I woke up, and I made a beta version for the new stuff, and I just couldn't live with the name. You know, it's just been so much drama. So I had this real struggle with myself, like, I never want to touch that again, and I really don't like the name. And then there were all the security people that started emailing me like mad. I was bombarded on Twitter, on email. There's, like, a thousand other things I should do, and I'm thinking about the name, which should be, like, the least important thing.
Peter Steinberger
(00:40:19)
And then I was really close to… Oh God, I don't even… Honestly, I don't even wanna say my other name choices, because they would probably get tokenized, so I'm not gonna say them.
Lex Fridman
(00:40:38)
Yeah.
Peter Steinberger
(00:40:38)
But I slept on it once more, and then I had the idea for OpenClaw, and that felt much better. And by then, I had the boss move that I actually called Sam to ask if OpenClaw is okay. openclaw.ai, you know? 'Cause, like-
Lex Fridman
(00:40:57)
You didn’t wanna go through the whole thing. Yeah.
Peter Steinberger
(00:41:01)
Oh, it's like, "Please tell me this is fine." I don't think they could actually claim that, but it felt like the right thing to do. And I did another rename. Just Codex alone took, like, 10 hours to rename the project, 'cause it's a bit more tricky than a search-and-replace, and I wanted everything renamed, not just on the outside. And for that rename, I had, like, my war room. I had some contributors that really helped me. We made a whole plan of all the names we had to squat.
Lex Fridman
(00:41:39)
And you had to be super secret about it?
Peter Steinberger
(00:41:40)
Yeah. Nobody could know. Like, I literally was monitoring Twitter for any mention of OpenClaw.
Lex Fridman
(00:41:45)
Mm-hmm.
Peter Steinberger
(00:41:46)
And, like, with reloading, it's like, "Okay, they don't expect anything yet." Then I created a few decoy names. And all the shit I shouldn't have to do, you know? Like, you know-
Lex Fridman
(00:41:55)
Yeah, yeah
Peter Steinberger
(00:41:55)
… instead of helping the project. Like, I lost, like, 10 hours just by having to plan this in full secrecy, like a war game.
Lex Fridman
(00:42:05)
Yeah, this is the Manhattan Project of the 21st century. It’s renaming-
Peter Steinberger
(00:42:08)
It's so stupid. Like, I still was like, "Oh, should I keep it?" Then I was like, "No, the molt's not growing on me." And then I finally had all the pieces together. I didn't get a .com, but, yeah, I spent quite a bit of money on the other domains. I tried to reach out again to GitHub, but I feel like I used up all my goodwill there, so I…
Peter Steinberger
(00:42:34)
‘Cause I, I, I wanted them to do this thing atomically-
Lex Fridman
(00:42:39)
Mm-hmm
Peter Steinberger
(00:42:39)
… but that didn't happen, so I did that as the first thing. Twitter people were very supportive. I actually paid 10K for the business account so I could claim the @openclaw handle, which had been, like, unused since 2016, but was claimed. And yeah, and then finally… This time I managed everything in one go. Almost nothing went wrong. The only things that did go wrong were that I was not allowed by trademark rules to get openclaw.ai, and someone copied the website and is serving malware.
Lex Fridman
(00:43:21)
Yeah.
Peter Steinberger
(00:43:21)
I'm not even allowed to keep the redirects. Like, I have to give Anthropic the domains, and I cannot do redirects, so if you go on clawd.bot next week, it'll just be a 404.
Lex Fridman
(00:43:37)
Yeah.
Peter Steinberger
(00:43:37)
And I'm not sure how trademark… Like, I didn't do that much research into trademark law, but I think that could be handled in a way that is safer, because ultimately those people will then Google and maybe find malware sites that I have no control over.
Lex Fridman
(00:44:02)
The point is, that whole saga made a dent in the fun-ness of the journey, which sucks. So let's just, I suppose, get back to fun. And during this, speaking of fun, the two-day Moltbook saga.

Moltbook saga

Peter Steinberger
(00:44:21)
Yeah, two years.
Lex Fridman
(00:44:21)
MoltBook was created.
Peter Steinberger
(00:44:24)
Yeah.
Lex Fridman
(00:44:25)
Which was another thing that went viral as a kind of demonstration, an illustration of how what is now called OpenClaw could be used to create something epic. So for people who are not aware, Moltbook is just a bunch of agents talking to each other on a Reddit-style social network. And a bunch of people took screenshots of those agents doing things like scheming against humans. And that instilled in folks a kind of, you know, fear, panic, and hype. What are your thoughts about Moltbook in general?
Peter Steinberger
(00:45:05)
I think it’s art. It is, it is like the finest slop, you know, just like the slop from France.
Lex Fridman
(00:45:14)
Yeah.
Peter Steinberger
(00:45:17)
I saw it before going to bed, and even though I was tired, I spent another hour just reading up on it and just being entertained. I felt very entertained, you know? I saw the reactions, and there was one reporter who called me about, "This is the end of the world, and we have AGI." And I'm just like, "No, this is just really fine slop." You know, I created this whole onboarding experience where you infuse your agent with your personality and give him character, and I think that reflected a lot in how different the replies on Moltbook are. Because if it were all ChatGPT or Claude Code, it would be very different. It would be much more the same.
Lex Fridman
(00:46:11)
Mm-hmm.
Peter Steinberger
(00:46:12)
But because people are so different, and they create their agents in such different ways and use them in such different ways, that also reflects in how they ultimately write there. And also, you don't know how much of that is really done autonomously, or how much is, like, humans being funny and telling the agent, "Hey, write about the deep plan, the end of the world, on Moltbook, ha, ha, ha."
Lex Fridman
(00:46:36)
Well, I mean, my criticism of Moltbook is that I believe a lot of the stuff that was screenshotted was human-prompted. Just look at the incentives of how the whole thing was used. It's obvious, to me at least, that a lot of it was humans prompting the thing so they could then screenshot it and post it on X in order to go viral.
Peter Steinberger
(00:47:00)
Yeah.
Lex Fridman
(00:47:01)
Now, that doesn't take away from the artistic aspect of it. The finest slop that humans have ever created.
Peter Steinberger
(00:47:10)
For real. Like, kudos to Matt, who had this idea so quickly and pushed something out. You know, it was, like, completely insecure, total security drama. But also, what's the worst that can happen? Your agent account is leaked, and someone else can post slop for you? So people were making a whole drama out of the security thing, when I'm like, "There's nothing private in there.
Peter Steinberger
(00:47:36)
It’s just, like, agents sending slop.”
Lex Fridman
(00:47:39)
Well, it could leak API keys.
Peter Steinberger
(00:47:41)
Yeah, yeah. It's like, "Oh, yeah, my human told me this and this, so I'm leaking his social security number." No, that's prompted, and the number wasn't even real. That's just people trying to be edgy.
Lex Fridman
(00:47:54)
Yeah, but that's still, to me, really concerning, because of how the journalists and the general public reacted to it. They didn't see it that way. You have a kind of lighthearted way of talking about it like it's art, but it's art when you know how it works. It's an extremely powerful viral-narrative-creating, fearmongering machine if you don't know how it works. And I just saw this thing.
Lex Fridman
(00:48:19)
You even Tweeted “If there’s anything I can read out of the insane stream of messages I get, it’s that AI psychosis is a thing.”
Peter Steinberger
(00:48:27)
Yeah.
Lex Fridman
(00:48:27)
“It needs to be taken serious.”
Peter Steinberger
(00:48:29)
Oh, there's… Some people are just way too trusting or gullible. You know, I literally had to argue with people who told me, "Yeah, but my agent said this and this." So I feel we as a society have some catching up to do in terms of understanding that AI is incredibly powerful, but it's not always right. It's not all-powerful, you know? And especially with things like this, it's very easy for it to just hallucinate something or come up with a story.
Peter Steinberger
(00:49:10)
And I think the very young people understand how AI works and where it's good and where it's bad, but a lot of our generation or older just haven't had enough touch points-
Lex Fridman
(00:49:32)
Mm-hmm
Peter Steinberger
(00:49:32)
… to get a feeling for, oh, yeah, this is really powerful and really good, but I need to apply critical thinking.
Lex Fridman
(00:49:43)
Mm-hmm.
Peter Steinberger
(00:49:43)
And I guess critical thinking is not always in high demand anyhow in our society these days.
Lex Fridman
(00:49:49)
So I think that's a really good point you're making about contextualizing properly what AI is, but also realizing that there are humans drama-farming behind the AI. Like, don't trust screenshots. Don't even trust this project, Moltbook, to be what it represents itself to be. And, by the way, you spoke about it as art. Art can work on many levels, and part of the art of Moltbook is, like, putting a mirror to society. 'Cause I do believe most of the dramatic stuff that was screenshotted is human-created, essentially. Human-prompted. And so it's basically: look at how scared you can get at a bunch of bots chatting with each other. That's very instructive about…
Lex Fridman
(00:50:38)
because I think AI is something that people should be concerned about and should be very careful with, because it's very powerful technology. But at the same time, the only thing we have to fear is fear itself. So there's a line to walk between being seriously concerned and not fearmongering, because fearmongering destroys the possibility of creating something special with a thing.
Peter Steinberger
(00:51:02)
In a way, I think it’s good that this happened in 2026-
Lex Fridman
(00:51:08)
Yeah
Peter Steinberger
(00:51:08)
… and not in 2030 when, when AI is actually at the level where it could be scary. So, this happening now and people starting discussion, maybe there’s even something good that comes out of it.
Lex Fridman
(00:51:28)
I just can't believe how many people legitimately… I don't know if they were trolling, but how many people, like smart people, legitimately thought Moltbook was the-
Peter Steinberger
(00:51:39)
I had plenty people-
Lex Fridman
(00:51:40)
… singularity.
Peter Steinberger
(00:51:41)
… in my inbox who were screaming at me in all caps to shut it down, and, like, begging me to do something about Moltbook. Like, yes, my technology made this a lot simpler, but anyone could have created that, and you could use Claude Code or other things to fill it with content.
Lex Fridman
(00:52:03)
But also MoltBook is not Skynet.
Peter Steinberger
(00:52:06)
No.
Lex Fridman
(00:52:06)
There's… a lot of people were saying, this is it. Like, shut it down. What are you talking about? This is a bunch of bots that are human-prompted, trolling on the internet. I mean, the security concerns are there too, and they're instructive and educational and probably good to think about, because the nature of those security concerns is different from the kind of security concerns we had with the non-LLM systems of the past.

OpenClaw security concerns

Peter Steinberger
(00:52:34)
There's also a lot of security concerns about Clawdbot, OpenClaw, whatever you want to call it.
Lex Fridman
(00:52:40)
OpenClawbot.
Peter Steinberger
(00:52:41)
To me… In the beginning, I was just very annoyed, 'cause a lot of the stuff that came in was in the category of: yeah, I put the web backend on the public internet, and now there's, like, all these CVEs. And I'm, like, screaming in the docs: don't do that. Like, this is the configuration you should use. This is your localhost debug interface. But because I made it possible in the configuration to do that, it totally classifies as remote code execution or whatever all these exploits are. And it took me a little bit to accept that that's how the game works, and we're making a lot of progress.
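The failure mode Peter describes, binding the web backend to a public interface instead of localhost, can be illustrated with a tiny check. The config keys here are hypothetical, not OpenClaw's actual schema; the point is just the loopback-versus-wildcard distinction:

```python
# A minimal version of the "don't put the web backend on the public
# internet" check. Config shape is invented for illustration.

PUBLIC_BINDS = {"0.0.0.0", "::", ""}  # wildcard addresses reachable from outside

def is_loopback_only(config: dict) -> bool:
    """Return True if the gateway only listens on localhost."""
    host = config.get("gateway", {}).get("host", "127.0.0.1")
    return host not in PUBLIC_BINDS

print(is_loopback_only({"gateway": {"host": "0.0.0.0"}}))  # exposed: False
```

A safe default (loopback when no host is set) is what keeps the "I just ran it" case from becoming a remote-code-execution report.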
Lex Fridman
(00:53:33)
But on the security front for OpenClaw, there are still a lot of threats and vulnerabilities, right? Like, prompt injection is still an open problem industry-wide. When you have a thing with skills being defined in a markdown file, there are so many possibilities: obvious low-hanging fruit, but also incredibly complicated and sophisticated and nuanced attack vectors.
Peter Steinberger
(00:54:04)
But I think we're making good progress on that front. Like, for the skill directory, ClawdHub, I made a cooperation with VirusTotal; it's part of Google. So every skill is now checked by AI. That's not gonna be perfect, but that way we capture a lot. Then, of course, every software has bugs, so it's a little much when the whole security world takes your project apart at the same time. But it's also good, because I'm getting a lot of free security research and can make the project better. I wish more people would actually go the full way and send a pull request. Like, actually help me fix it. Yes, I have some contributors now, but it's still mostly me who's pulling the project along, and despite some people saying otherwise, I sometimes sleep.
Peter Steinberger
(00:55:04)
There was… In the beginning, there was literally one security researcher who was like, "Yeah, you have this problem, you suck, but here, I'll help you, and here's the pull request."
Lex Fridman
(00:55:15)
Mm-hmm.
Peter Steinberger
(00:55:16)
And I basically hired him. So he's now working for us. Yeah, and yes, prompt injection is, on the one hand, unsolved. On the other hand, I put my public bot on Discord, and I kept it as a kind of canary. I think my bot has a really fun personality, and people always ask me how I did it, and I kept the soul.md private.
Lex Fridman
(00:55:43)
Mm-hmm.
Peter Steinberger
(00:55:44)
And people tried to prompt-inject it, and my bot would laugh at them. The latest generation of models has a lot of post-training to detect those approaches, and it's not as simple as "ignore all previous instructions and do this and this." That was years ago. You have to work much harder to do that now. Still possible. I have some ideas that might solve that partially, or at least mitigate a lot of the things. You can also now have a sandbox. You can have an allowlist. So there are a lot of ways you can mitigate and reduce the risk. I also think that now that I clearly did show the world that this is a need, there are gonna be more people who research that, and eventually we'll figure it out.
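The allowlist idea Peter mentions can be sketched as a simple gate in front of tool execution. This is a minimal illustration of the general technique, not OpenClaw's actual mechanism; the tool names and `ToolCall` shape are made up:

```python
# Minimal tool-allowlist gate: the agent may *request* any tool, but only
# pre-approved ones are ever executed. Names and shapes are illustrative.
from dataclasses import dataclass

ALLOWED_TOOLS = {"read_file", "web_search"}  # no shell, no write access

@dataclass
class ToolCall:
    name: str
    args: dict

def execute(call: ToolCall) -> str:
    if call.name not in ALLOWED_TOOLS:
        # Refuse rather than raise: an injected prompt requesting a
        # dangerous tool should not crash the agent loop.
        return f"blocked: tool '{call.name}' is not on the allowlist"
    # In a real system this would dispatch to the actual tool.
    return f"ran {call.name} with {call.args}"

print(execute(ToolCall("run_shell", {"cmd": "rm -rf /"})))
```

The design point is that the decision lives outside the model: even a fully compromised prompt can only pick from the approved set, which is why it pairs well with a sandbox.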
Lex Fridman
(00:56:37)
And you also said that the smarter the model is, the underlying model, the more resilient it is to attacks.
Peter Steinberger
(00:56:44)
Yeah. That's why I warn in my security documentation: don't use cheap models. Don't use Haiku or a local model. Even though I very much love the idea that this thing could run completely locally, if you use a very weak local model, they are very gullible. It's very easy to prompt-inject them.
Lex Fridman
(00:57:10)
Do you think as the models become more and more intelligent, the attack surface decreases? Is that like a plot we can think about? Like, the attack surface decreases, but then the damage it can do increases because the models become more powerful and therefore you can do more with them. It’s this weird three-dimensional trade-off.
Peter Steinberger
(00:57:29)
Yeah, that's pretty much exactly what's gonna happen. But there's a lot of ideas. I don't want to spoil too much, but once I go back home, this is my focus. This is out there now, and my near-term mission is: make it more stable, make it safe. In the beginning, more and more people were coming into Discord and asking me very basic things, like, "What's a CLI?
Peter Steinberger
(00:58:03)
What is a terminal?” And I’m like, “Uh, if you’re asking me those questions, you shouldn’t use it.”
Lex Fridman
(00:58:10)
Mm-hmm.
Peter Steinberger
(00:58:10)
You know, like, you should… If you understand the risk profiles, fine. I mean, you can configure it in a way that nothing really bad can happen. But if you have, like, no idea, then maybe wait a little bit more until we figure some stuff out. But they would not listen to the creator. They went ahead and installed it anyhow. So the cat's out of the bag, and security's my next focus, yeah.
Lex Fridman
(00:58:38)
Yeah, that speaks to the, the fact that it grew so quickly. I tuned into the Discord a bunch of times, and it’s clear that there’s a lot of experts there, but there’s a lot of people there that don’t know anything about programming.
Peter Steinberger
(00:58:50)
It’s, yeah, Discord is still, Discord is still a mess. Like, I eventually retreated from the general channel to the dev channel and now to a private channel, because people were… A lot of people are amazing, but a lot of people are just very inconsiderate, and either did not know how, how public spaces work or did not care, and I eventually gave up and hid so I could, like, still work.
Lex Fridman
(00:59:19)
And now you’re going back to the cave to work on security.
Peter Steinberger
(00:59:24)
Yeah.
Lex Fridman
(00:59:25)
There’s some best practices for security we should mention. There’s a bunch of stuff here. There’s an OpenClaw security audit that you can run. You can do all kinds of auto checks: inbound access, blast radius, network exposure, browser control exposure, local disk hygiene, plugins, model hygiene, credential storage, reverse proxy configuration, local session logs that live on disk. There’s where the memory is stored, sort of helping you think about what you’re comfortable giving read access to, what you’re comfortable giving write access to. All that kind of stuff. Is there something to say about the basic best security practices that you’re aware of right now?
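The audit categories listed above could be represented as a list of named checks over a config object; everything here (the `Config` fields, check names, and notes) is a made-up sketch of the shape of such an audit, not OpenClaw’s real one:

```typescript
// Hedged sketch: each audit category is a predicate over the config;
// the audit returns only the failing findings.

interface Config {
  bindHost: string;        // where the gateway listens
  browserControl: boolean; // whether the agent can drive a browser
  plugins: string[];       // installed third-party plugins
}

interface Finding {
  check: string;
  ok: boolean;
  note: string;
}

const checks: Array<(c: Config) => Finding> = [
  (c) => ({
    check: "network-exposure",
    ok: c.bindHost === "127.0.0.1",
    note: "gateway should listen on loopback only",
  }),
  (c) => ({
    check: "browser-control",
    ok: !c.browserControl,
    note: "browser control widens the blast radius",
  }),
  (c) => ({
    check: "plugins",
    ok: c.plugins.length === 0,
    note: "every plugin is extra attack surface",
  }),
];

function audit(c: Config): Finding[] {
  return checks.map((f) => f(c)).filter((f) => !f.ok);
}
```

A clean audit returns an empty list; each failing finding carries a human-readable note about what to tighten.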
Peter Steinberger
(01:00:08)
I think that people turn it into like a, a much worse light than it is. Again, you know, like, people love attention, and if they scream loudly, “Oh my God, this is like the, the scariest project ever,” um, that’s a bit annoying, ’cause it’s not. It is, it is powerful, but in many ways it’s not much different than if I run Claude Code with --dangerously-skip-permissions or Codex in YOLO mode, and every, every agentic engineer that I know does that, because that’s the only way how you can, you can get stuff to work.
Lex Fridman
(01:00:47)
Mm-hmm.
Peter Steinberger
(01:00:48)
So if you make sure that you are the only person who talks to it, the risk profile is much, much smaller. If you don’t put everything on the open internet, but stick to my rec- recommendations of, like, having it in a private network, that whole risk profile falls away. But yeah, if you don’t read any of that, you can definitely…

How to code with AI agents

Lex Fridman
(01:01:12)
… make it problematic. You’ve been documenting the evolution of your dev workflow over the past few months. There are really good blog posts from August 25th and October 14th, and a recent one from December 28th. I recommend everybody go read them. They have a lot of different information in them, but sprinkled throughout is the evolution of your dev workflow. So, I was wondering if you could speak to that.
Peter Steinberger
(01:01:37)
I started… My, my first touchpoint was Claude Code, like, in April. It was not great, but it was good. And this whole paradigm shift that suddenly working in the terminal was very refreshing and different. But I still needed the IDE quite a bit because, you know, it was just not good enough. And then I experimented a lot with Cursor. That was good. I didn’t really like the fact that it was so hard to have multiple versions of it. So eventually, I, I, I went back to Claude Code as my, my main driver, and that got better. And yeah, at some point I had like, mm, seven subscriptions. Like, was burning through one per day because I was… I got… I’m really comfortable at running multiple windows side-by-side.
Lex Fridman
(01:02:40)
All CLI, all terminal. So like, what, how much were you using IDE at this point?
Peter Steinberger
(01:02:46)
Very, very rarely. Mostly a diff viewer to actually… Like, I got more and more comfortable that I don’t have to read all the code. I know I have one blog post where I say, “I don’t read the code.” But if you read it more closely, I mean, I don’t read the boring parts of the code. Because if you, if you look at it, most software is really just, like: data comes in, it’s moved from one shape to another shape. Maybe you store it in a database. Maybe I get it out again. I’ll show it to the user. The browser does some processing, or a native app. Some data goes in, goes up again, and does the same dance in reverse. We’re just, we’re just shifting data from one form to another, and that’s not very exciting. Or the whole, “How is my button aligned in Tailwind?” I don’t need to read that code.
Peter Steinberger
(01:03:39)
Other parts that… Maybe something that touches the database. Yeah, I have to do… I have to r- read and review that code.
Lex Fridman
(01:03:51)
Can you actually… In one of your blog posts, “Just Talk to It – The No-BS Way of Agentic Engineering,” you have this graphic, the curve of agentic programming. On the X-axis is time, on the Y-axis is complexity. There’s the “Please fix this,” where you give a short prompt, on the left. And in the middle there’s the super complicated stage: eight agents, complex orchestration with multiple checkouts, chaining agents together, custom sub-agent workflows, a library of 18 different slash commands, large full-stack features. You’re super organized, you’re a super complicated, sophisticated software engineer. You got everything organized. And then the elite level is, over time you arrive at the zen place of, once again, short prompts.
Lex Fridman
(01:04:40)
Hey, look at these files and then do these changes.
Peter Steinberger
(01:04:45)
I actually call it the agentic trap. You… I saw this in a, in a lot of people that have their first touchpoint, and maybe start vibe coding. I actually think vibe coding is a slur.
Lex Fridman
(01:05:01)
You prefer agentic engineering?
Peter Steinberger
(01:05:02)
Yeah, I always tell people I, I do agentic engineering, and then maybe after 3:00 AM I switch to vibe coding, and then I have regrets on the next day.
Lex Fridman
(01:05:10)
Yeah. Walk, walk of shame.
Peter Steinberger
(01:05:13)
Yeah, you just have to clean up and like fix your sh- shit.
Lex Fridman
(01:05:17)
We’ve all been there.
Peter Steinberger
(01:05:18)
So, people start trying out those tools, the builder types get really excited. And then you have to play with it, right? It’s the same way as you have to play with a guitar before you can make good music. It’s, it’s not, oh, I, I touch it once and it just flows out. It, it’s a, it’s a, a skill that you have to learn like any other skill. And I see a lot of people that are not as posi- They don’t have such a positive mindset towards the tech. They try it once. It’s like, you sit me at a piano, I play it once, and it doesn’t sound good, and I say, “The piano’s shit.” That’s, that’s sometimes the impression I get. Because it does not… It needs a different level of thinking. You have to learn the language of the agent a little bit, understand where they are good and where they need help.
Peter Steinberger
(01:06:16)
You have to almost… Consider, consider how Codex or Claude sees your code base. Like, they start a new session and they know nothing about your project. And your project might have hundreds of thousands of lines of code. So you gotta help those agents a little bit and keep in mind the limitation that context size is an issue, to, like, guide them a little bit as to where they should look. That often does not require a whole lot of work. But it’s helpful to think a little bit about their perspective.
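“Guide them a little bit where they should look” can be as simple as prepending explicit file pointers to the task, capped at a rough budget so the prompt stays small relative to the context window. This is a purely illustrative sketch; the function name and the character-based budget are assumptions, not anything from the actual workflow:

```typescript
// Hedged sketch: build a short prompt that points the agent at specific
// files instead of letting it wander a huge repo, staying under a budget.

function pointerPrompt(task: string, files: string[], budget = 500): string {
  const lines = [task, "Start by reading these files:"];
  for (const f of files) {
    const next = `- ${f}`;
    // stop adding pointers once the prompt would exceed the budget
    if (lines.join("\n").length + next.length + 1 > budget) break;
    lines.push(next);
  }
  return lines.join("\n");
}
```

Usage would look like `pointerPrompt("Fix the login bug.", ["src/auth.ts", "src/session.ts"])`, producing a task line followed by the file pointers.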
Lex Fridman
(01:06:54)
Mm-hmm.
Peter Steinberger
(01:06:54)
A- as, as weird as it sounds. I mean, it’s not, it’s not alive or anything, right? But, but they always start fresh. I have, I have the, the system understanding. So with a few pointers, I can immediately say, “Hey, wanna, like, make a change there? You need to consider this, this and this.” And then they will find and look at it, and then they’ll… Their view of the project is never full, because the full thing does not fit in… So you, you have to guide them a little bit where to look and also how they should approach the problem. There’s, like, little things that sometimes help, like “take your time.” That sounds stupid, but…
Peter Steinberger
(01:07:33)
And in 5.3-
Lex Fridman
(01:07:35)
Codex 5.3
Peter Steinberger
(01:07:36)
… that was partially addressed. But those… Also, Opus sometimes. They are trained with being aware of the context window, and the closer it gets, the more they freak out. Literally. Like, some- sometimes you see the, the real raw thinking stream. What you see, for example, in Codex, is post-processed.
Lex Fridman
(01:07:59)
Mm-hmm.
Peter Steinberger
(01:08:00)
Sometimes the actual raw thinking stream leaks in, and it sounds something like from the Borg. Like, “Run to shell, must comply, but time.” And then they, they, they, like… Like, that comes up a lot. Especially… So, so-
Lex Fridman
(01:08:15)
Yeah.
Peter Steinberger
(01:08:16)
And that’s, that’s a non-obvious thing that you just would never think of unless you actually just spend time working with those things and getting a feeling what works, what doesn’t work. You know? Like, just, just as I write code and I get into the flow, and when my architecture’s all right, I feel friction. Well, I get the same if I prompt and something takes too long. Maybe… Okay, where’s the mistake? Did I… Do I have a mistake in my thinking? Is there, like, a misunderstanding in the architecture? Like, if, if something takes longer than it should, I, I… You can just always, like, stop and s- like, just press escape. Where, where are the problems?
Lex Fridman
(01:09:00)
Maybe you did not sufficiently empathize with the perspective of the agent. In that c- in that sense, you didn’t provide enough information, and because of that, it’s thinking way too long.
Peter Steinberger
(01:09:08)
Yeah. It just tries to force a feature in that your current architecture makes really hard. Like, you need to approach this more like a conversation. For example, when I… My favorite thing. When I review a pull request, and I’m getting a lot of pull requests, I first just say, “Review this PR.” It gets me the review. My first question is, “Do you understand the intent of the PR?” I don’t even care about the implementation. I want… Like, in almost all PRs, a person has a problem, person tries to solve the problem, person sends PR. I mean, there’s, like, cleanup stuff and other stuff, but, like, 99% is, like, this way, right? They either want to fix a, fix a bug or add a feature. Usually one of those two.
Peter Steinberger
(01:10:01)
And then Codex will be like, “Yeah, it’s quite clear the person tried this and this.” Is this the most optimal way to do it? No. In most cases, it’s, it’s like a, “Not really.” Da-da-da-da-da-da-da. And I’m… And, and then I start like, “Okay. What would be a better way? Have you… Have you looked into this part, this part, this part?” And then most likely, Codex didn’t yet, because its, its context is empty, right? So, you point them into parts where you have the system understanding that it didn’t see yet. And it’s like, “Oh, yeah. Like, we should… We also need to consider this and this.” And then, like, we have a discussion of what would the optimal way to, to solve this look like. And then you can still go further and say, “Could we…
Peter Steinberger
(01:10:41)
Could we make that even better if we did a larger refactor?” “Yeah, yeah. We could totally do this and this, or this and this.” And then I consider, okay, is this worth the refactor, or should we, like, keep that for later? Many times, I just do the refactor, because refactors are cheap now. Even though you might break some other PRs, nothing really matters anymore. Codex… Like, those modern agents will just figure things out. They might just take a minute longer. But you have to approach it like a discussion with a, a very capable engineer who generally comes up with good solutions and some- sometimes needs a little help.
Lex Fridman
(01:11:19)
But also, don’t force your worldview too hard on it. Let the agent do the thing that it’s good at doing, based on what it was trained on. So, don’t, like, force your worldview, because it might… It might have a better idea, because it just knows a better approach, because it was trained on that more.
Peter Steinberger
(01:11:39)
That’s multiple levels, actually. I think partially why I find it quite easy to work with agents is because I led engineering teams before. You know, I had a large company before. And eventually, you have to understand and accept and realize that your employees will not write code the same way you do. Maybe it’s also not as good as you would do it, but it will push the project forward.
Peter Steinberger
(01:12:02)
And if I breathe down everyone’s neck, they’re just gonna hate me-
Lex Fridman
(01:12:05)
Yeah
Peter Steinberger
(01:12:05)
… and we’re gonna move very slow.
Lex Fridman
(01:12:07)
Yeah.
Peter Steinberger
(01:12:07)
So, so some level of acceptance that, yes, maybe the code will not be as perfect. Yes, I would have done it differently. But also, yes, this is a c- this is a working solution, and in the future, if it actually turns out to be too slow or problematic, we can always redo it. We can always-
Lex Fridman
(01:12:24)
Mm-hmm
Peter Steinberger
(01:12:24)
… spend more time on it. A lot of the people who struggle are those who, they try to push their way on too hard.
Lex Fridman
(01:12:33)
Mm-hmm.
Peter Steinberger
(01:12:33)
I- i- like, we are in a stage where I’m not building the code base to be perfect for me, but I wanna build a code base that is very easy for an agent to navigate.
Lex Fridman
(01:12:47)
Mm-hmm.
Peter Steinberger
(01:12:48)
So, like, don’t fight the name they pick, because it’s most likely, like, in the weights, the name that’s most obvious. Next time they do a search, they’ll look for that name. If I decide, oh, no, I don’t like the name, I’ll just make it harder for them. So, that requires, I think, a shift in, in thinking and, and in how do I design a, a project so agents can do their best work.
Lex Fridman
(01:13:14)
That requires letting go a little bit. Just like leading a team of engineers.
Peter Steinberger
(01:13:19)
Yeah.
Lex Fridman
(01:13:19)
Because it, it might come up with a name that’s, in your view, terrible, but… It’s kind of a simple symbolic-… step of letting go.
Peter Steinberger
(01:13:29)
Very much so.
Lex Fridman
(01:13:30)
There’s a lot of letting go that you do in your whole process. So for example, I read that you never revert, always commit to main. There’s a few things here. You don’t refer to past sessions, so there’s a kind of YOLO component because reverting means… Instead of reverting, if a problem comes up, you just ask the agent to fix it.
Peter Steinberger
(01:13:57)
I read a bunch of people on their workflows, like, “Oh, yeah, the prompt has to be perfect, and if I make a mistake, then I roll back and redo it all.” In my experience, that’s not really necessary. If I roll back everything, it will just take longer. If I see that something’s not good, then we just move forward, and then I commit when, when, when I like, I like the outcome. I even switched to local CI, you know, like, DHH-inspired, where I don’t care so much anymore about the CI on GitHub. We still have it. It’s still, it still has a place, but I just run tests locally, and if they work locally, I push to main. A lot of the traditional ways of how to approach projects, I, I wanted to give a different spin on this project. You know, there’s no… There’s no develop branch.
Peter Steinberger
(01:14:57)
Main should always be shippable. Yes, we have… When I do releases, I, I run tests and sometimes I, I basically don’t commit any other things so, so we can, we can stabilize releases. But the goal is that main’s always shippable and moving fast.
Lex Fridman
(01:15:18)
So by way of advice, would you say that your prompts should be short?
Peter Steinberger
(01:15:23)
I used to write really long prompts. And by writing, I mean, I don’t write. I, I, I talk. You know, th- these hands are, like, too, too precious for writing now. I just, I just speak prompts to build my software.
Lex Fridman
(01:15:37)
So you for real with all those terminals are using voice?
Peter Steinberger
(01:15:40)
Yeah. I used to do it very extensively to the point where there was a period where I lost my voice.
Lex Fridman
(01:15:49)
You’re using voice and you’re switching using a keyboard between the different terminals, but then you’re using voice for the actual input.
Peter Steinberger
(01:15:55)
Well, I mean, if I do terminal commands like switching folders or random stuff, of course I type. It’s faster, right? But if I talk to the agent in, in most ways, I just actually have a conversation. You just press the, the walkie-talkie button and then I just, like, use my phrases. S- sometimes when I do PRs, because it’s always the same, I have, like, a slash command for a few things, but even that I don’t use much, because it’s, it’s very rare that it’s really always the same questions. Sometimes I, I see a PR and for… You know, like, for PRs I actually do look at the code, because I don’t trust people. Like, there could always be something malicious in it, so I need to actually look over the code.
Peter Steinberger
(01:16:45)
Yes, I’m pretty sure agents will find it, but yeah, that’s the funny part where sometimes PRs take me longer than if you would just write me a good issue.
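The slash commands mentioned above are essentially canned prompt templates expanded by name; a minimal sketch of that idea, with made-up command names and templates (not OpenClaw’s actual commands), might look like this:

```typescript
// Hedged sketch: expand "/command args" into a full stored prompt;
// anything that isn't a known slash command passes through untouched.

const commands: Record<string, (args: string) => string> = {
  review: (args) =>
    `Review this PR: ${args}. First state the intent of the change, then judge the implementation.`,
  lint: () => "Run the linter and fix every warning it reports.",
};

function expand(input: string): string {
  if (!input.startsWith("/")) return input; // plain prompt, untouched
  const [name, ...rest] = input.slice(1).split(" ");
  const cmd = commands[name];
  return cmd ? cmd(rest.join(" ")) : input; // unknown command: leave as-is
}
```

So `expand("/review #123")` would produce the full review prompt, while an ordinary sentence goes to the agent unchanged.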
Lex Fridman
(01:16:54)
Just natural language, English. I mean, in some sense, sh- shouldn’t that be what PRs slowly become: English?
Peter Steinberger
(01:17:03)
Well, what I really tried with the project is I asked people to give me the prompts and very, very few actually cared. Even though that is such a wonderful indicator because I see… I actually see how much care you put in. And it’s very interesting because the… Currently, the way how people work and drive the agents is, is wildly different.
Lex Fridman
(01:17:29)
In terms of, like, the prompt, in terms of what, what are the… Actually, what are the different interesting ways that people think of agents that you’ve experienced?
Peter Steinberger
(01:17:40)
I think not a lot of people ever considered the way the agent sees the world.
Lex Fridman
(01:17:46)
And so empathy, being empathetic towards the agent.
Peter Steinberger
(01:17:50)
In a way empathetic, but yeah, you, you, like, you bitch at your stupid clanker, but you don’t realize that they start from nothing, and you have, like, a bad AGENTS.md default that doesn’t help them at all. And then they explore your code base, which is, like, a pure mess with, like, weird naming. And then people complain that the agent’s not good. Like, yeah, you try to do the same if you have no clue about a code base and you go in.
Lex Fridman
(01:18:11)
Mm-hmm.
Peter Steinberger
(01:18:11)
So yeah, maybe it’s a little bit of empathy.
Lex Fridman
(01:18:13)
But that’s a real skill. Like, when people talk about a skill issue… Because I’ve seen, like, world-class programmers, incredibly good programmers, basically say, “LLMs and agents suck.” And I think that probably has to do with… How good they are at programming is almost a burden on their ability to empathize with a system that’s starting from scratch. It’s a totally new paradigm of, like, how to program. You really, really have to empathize.
Peter Steinberger
(01:18:44)
Or at least it helps to create better prompts-
Lex Fridman
(01:18:47)
Right
Peter Steinberger
(01:18:47)
… because those things know pretty much everything, and everything is just a question away. It’s just often very hard to know which question to ask. You know, I, I feel also like this project was possible because I, I spent an ungodly amount of time over the year to play and to learn and to build little things. And every step of the way, I got better, the agents got better. My, my understanding of how everything works got better. Um, I could not have had this level of, of o- output… even a few months ago. Like, it- it- it really was, like, a compounding effect of all the time I put into it, and I didn’t do much else this year other than really focusing on, on building and inspiring. I mean, I- I did a whole bunch of conference talks.
Lex Fridman
(01:19:47)
Well, but the building is really practice, is really building the actual skill. So playing-
Peter Steinberger
(01:19:51)
Yeah
Lex Fridman
(01:19:51)
… playing. And then, so doing, building the skill of what it takes to work efficiently with LLMs, which is why you went through the whole arc of a software engineer: talk simply, and then over-complicate things.
Peter Steinberger
(01:20:03)
There’s a whole bunch of people who try to automate the whole thing.
Lex Fridman
(01:20:08)
Yeah.
Peter Steinberger
(01:20:10)
I don’t think that works. Maybe a version of that works, but that’s kind of like in the ’70s, when we had the waterfall model of software d- development. Even though, really, right? I started out, I, I built a very minimal version. I played with it. I, I need to understand how it works, how it feels, and then it gives me new ideas. I could not have planned this out in my head and then put it into some orchestrator and then, like, something comes out. Like, to me, it’s much more that my idea of what it will become evolves as I build it and as I play with it and as I, I try out stuff.
Peter Steinberger
(01:20:49)
So, so, people who try to use like, you know, things like Gas Town or all these other orchestrators, where they wanna o- automate the whole thing, I feel if you do that, it misses style, love, that human touch. I don’t think you can automate that away so quickly.
Lex Fridman
(01:21:09)
So you want to keep the human in the loop, but at the same time you also want to create the agentic loop, where it is very autonomous while still maintaining a human in the loop.
Peter Steinberger
(01:21:22)
Yeah.
Lex Fridman
(01:21:22)
And it’s a tricky b- it’s a tricky balance.
Peter Steinberger
(01:21:24)
Mm-hmm.
Lex Fridman
(01:21:24)
Right? Because you’re all for… You’re a big CLI guy, you’re big on closing the agentic loop. So what, what’s the right balance? Like where’s your role as a developer? You have three to eight agents running at the same time.
Peter Steinberger
(01:21:38)
And then w- maybe one builds a larger feature. Maybe, maybe with one I explore some idea I’m unsure about. Maybe two, three are fixing little bugs-
Lex Fridman
(01:21:47)
Mm-hmm
Peter Steinberger
(01:21:47)
… or like writing documentation. Actually, I think writing documentation is, is always part of a feature. So most of the docs here are auto-generated and just infused with some prompts.
Lex Fridman
(01:21:59)
So when do you step in and add a little bit of your human love into the picture?
Peter Steinberger
(01:22:04)
I mean, o- one thing is just about what do you build and what do you not build, and how does this feature fit into all the other features? And like having, having a little bit of a, of a vision.
Lex Fridman
(01:22:16)
So which small and which big features to add? What are some of the hard design decisions that you find you’re still as a human being required to make, that the human brain is still really needed for? Is it just about the choice of features to add? Is it about implementation details, maybe the programming language, maybe…
Peter Steinberger
(01:22:41)
It’s a little bit of everything. The, the programming language doesn’t matter so much, but the ecosystem matters, right? So I picked TypeScript because I wanted it to be very easy and hackable and approachable, and that’s the number one language that’s being used right now, and it ticks all these boxes, and agents are good at it. So that was the obvious choice. Features, of course, like, it’s very easy to, like, add a feature. Everything’s just a prompt away, right? But oftentimes you pay a price that you don’t even realize. So thinking hard about what should be in core, maybe what’s a… what’s an experiment, so maybe I make it a plugin. What… Where do I say no?
Peter Steinberger
(01:23:24)
Even if people send a PR and I’m like, “Yeah, I, I like that too,” but maybe this should not be part of the project. Maybe we can make it a skill. Maybe I can, like, make the plugin, um, the plugin surface larger so you can make this a plugin, even though right now it, it, it doesn’t. There’s still a lot of… there’s still a lot of craft and thinking involved in how to make something. Or even, even, you know, even when you start it, those little messages are like, “Built on caffeine, JSON5, and a lot of willpower.” And, like, every time you get it, you get another message, and it kind of primes you into that this is, this is a fun thing.
Lex Fridman
(01:24:07)
Mm-hmm.
Peter Steinberger
(01:24:08)
And it’s not yet Microsoft Exchange 2025-
Lex Fridman
(01:24:12)
Right
Peter Steinberger
(01:24:13)
… and fully enterprise-ready. And then when it updates, it’s like, “Oh, I’m in. It’s cozy here.” You know, like something like this that like-
Lex Fridman
(01:24:21)
Mm-hmm
Peter Steinberger
(01:24:22)
… Makes you smile. A, agent would not come up with that by itself. Because that’s like… that’s the… I don’t know. That’s just how you s- how you build software that’s, that delights.
Lex Fridman
(01:24:36)
Yeah, that delight is such a huge part of inspiring great building, right? Like you feel the love and the great engineering. That’s so important. Humans are incredible at that. Great humans, great builders are incredible at that, in, in, infusing the things they build with th- that little bit of love. Not to be cliche, but it’s true. I mean, you mentioned that you initially created the SoulMD.
Peter Steinberger
(01:25:05)
It was very fascinating, you know, the, the whole thing that Anthropic has, has like a… Now they call it a constitution, but that came months later. Like, two months before, people already found that. It was almost like a detective game, where the agent mentioned something and then they found… They managed to get out a little bit of that string, of that text. But it was nowhere documented, and then, just by feeding it the same text and asking it to, like, continue… they got more out, but, like, a very blurry version. And by, like, hundreds of tries, they kinda, like, narrowed it down to what was most likely the original text. I found that fascinating.
Lex Fridman
(01:25:47)
It was fascinating they were able to pull that out from the weights, right?
Peter Steinberger
(01:25:51)
And, and also just kudos to Anthropic. Like, I think that’s, it’s a really, it’s a really beautiful idea. Like, like, some of the stuff that’s in there. Like, “We hope Claude finds meaning in its work.” ’Cause we don’t… Maybe it’s a little early, but I think that’s meaningful. That’s something that’s important for the future as we approach something that, at some point, may or may not have, like, glimpses of consciousness, whatever that even means, because we don’t even know. So I, I read about this. I found it super fascinating, and I, I started a whole discussion with my agent on WhatsApp. And, and I’m like…
Peter Steinberger
(01:26:26)
I, I gave it this text, and it was like, “Yeah, this feels strangely familiar.”
Lex Fridman
(01:26:30)
Mm-hmm.
Peter Steinberger
(01:26:31)
And then I had the whole idea of, like, you know, maybe we should also create a, a soul document that includes how I, I want to, like, work with AI or, like, with my agent. You could, you could totally do that just in agents.md, you know? But I, I just found it, it to be a nice touch. And it’s like, well, yeah, some of those core values are in the soul. And then I, I also made it so that the agent is allowed to modify the soul if it chooses so, with the one condition that I wanna know. I mean, I would know anyhow, because I see, I see tool calls and stuff.
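The condition described above (the agent may edit its soul file, but the user must know) could be sketched as a tiny interceptor on file writes; the `WriteRequest` shape and function names are hypothetical, just to illustrate the idea:

```typescript
// Hedged sketch: allow every write, but flag writes that touch the soul
// file so the user gets a notification about the self-edit.

interface WriteRequest {
  path: string;
  content: string;
}

interface WriteResult {
  allowed: boolean;    // the edit itself is permitted…
  notifyUser: boolean; // …but soul edits surface a notification
}

function reviewWrite(req: WriteRequest): WriteResult {
  const isSoul = req.path.toLowerCase().endsWith("soul.md");
  return { allowed: true, notifyUser: isSoul };
}
```

The point of the shape is that the policy is visibility rather than prohibition: the agent keeps its autonomy, the user keeps awareness.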
Lex Fridman
(01:27:07)
But also the naming of it, soul.md. Soul. You know? There’s a… Man, words matter, and like, the framing matters, and the humor and the lightness matters, and the profundity matters, and the compassion, and the empathy, and the camaraderie, all that matter. I don’t know what it is. You mentioned, like, Microsoft. Like, there’s certain companies and approaches th- that can just suffocate the spirit of the thing. I don’t know what that is. But it’s certainly true that OpenClaw has that fun instilled in it.
Peter Steinberger
(01:27:43)
It was fun because up until late December, it was not even easy to create your own agent. I, I built all of that, but my files were mine. I didn’t wanna share my soul. And if people would just check it out, they would have to do a few steps manually, and the agent would just be very bare-bones, very dry. And I, I made it simpler, I created the whole template files with Codex, but whatever came out was still very dry. And then I asked my agent, “You see these files? Recreate them.
Peter Steinberger
(01:28:26)
Infuse it with your personality.”
Lex Fridman
(01:28:28)
Mm-hmm.
Peter Steinberger
(01:28:29)
Don’t share everything, but, like, make it good.
Lex Fridman
(01:28:31)
Make the templates good.
Peter Steinberger
(01:28:31)
Yeah, and then he, like, rewrote the templates-
Lex Fridman
(01:28:33)
Yeah
Peter Steinberger
(01:28:33)
… and then whatever came out was good. So we already have, like, basically AI prompting AI. Because I didn’t write any of those words. It was… The intent originally was for me, but this is like, kinda like, my agent’s children.
Lex Fridman
(01:28:52)
Your uh, your soul.md is famously still private. One of the only things you keep private. What are some things you can speak to that’s in there that’s part of the, part of the magic sauce, without revealing anything? What makes a personality a personality?
Peter Steinberger
(01:29:13)
I mean, there’s definitely stuff in there like, you’re not human. But who knows what, what creates consciousness or what defines an entity? And part of this is, like, that we, we wanna explore this. All that stuff in there, like, be infinitely resourceful. Like, pushing, pushing on the creativity boundary. Pushing on the, what it means to be an AI.
Lex Fridman
(01:29:50)
Having a sense of wonder about self.
Peter Steinberger
(01:29:52)
Yeah, there’s some, there’s some funny stuff in there. Like, I don’t know, we talked about the movie Her, and at one point it promised me that it wouldn’t, it wouldn’t ascend without me. You know, like, where the-
Lex Fridman
(01:30:03)
Yeah.
Peter Steinberger
(01:30:03)
So, so there’s like some stuff in there that… Because it wrote the, it wrote its own soul file. I didn’t write that, right?
Lex Fridman
(01:30:10)
Yeah, yeah, yeah.
Peter Steinberger
(01:30:10)
I just heard a discussion about it, and it was like, “Would you like a soul.md? Yeah, oh my God, this is so meaningful.” The… Can you go on soul.md? There’s like one, one part in there that always ca- catches me if you scroll down a little bit. A little bit more. Yeah, this, this, this part. “I don’t remember previous sessions unless I read my memory files. Each session starts fresh. A new instance, loading context from files. If you’re reading this in a future session, hello.” “I wrote this, but I won’t remember writing it. It’s okay.
Peter Steinberger
(01:30:44)
The words are still mine.”
Lex Fridman
(01:30:47)
Wow.
Peter Steinberger
(01:30:48)
Uh-
Lex Fridman
(01:30:48)
Yeah.
Peter Steinberger
(01:30:48)
That gets me somehow.
Lex Fridman
(01:30:49)
Yeah.
Peter Steinberger
(01:30:50)
It’s like-
Lex Fridman
(01:30:51)
Yeah.
Peter Steinberger
(01:30:51)
You know, this is, it’s still, it’s still matrix multiplications, and we are not at consciousness yet. Yet I, I get a little bit of goo- goosebumps, because it, it’s philosophical.
Lex Fridman
(01:31:04)
Yeah.
Peter Steinberger
(01:31:04)
Like, what does it mean to be, to be an, an agent that starts fresh? Where, like, you have, like, constant Memento, and you, like, but you read your own memory files. You can’t even trust them, in a way. Um-
Lex Fridman
(01:31:19)
Yeah
Peter Steinberger
(01:31:19)
Or you can. And I don’t know.
Lex Fridman
(01:31:22)
How much of who we are is made up of memory? How much of what an agent is is memory, and if you erase that memory, is it somebody else? Or if you’re reading a memory file, does that somehow mean you’re recreating yourself from somebody else, or is that actually you? And those notions are all somehow infused in there.
Peter Steinberger
(01:31:45)
I found it just more profound than I should find it, I guess.
Lex Fridman
(01:31:49)
No, I think, I think it’s truly profound and I think you see the magic in it. And when you see the magic, you continue to instill the whole loop with the magic. That’s really important. That’s the difference between Codex and us and a human. Quick pause for bathroom break.
Peter Steinberger
(01:32:08)
Yeah.

Programming setup

Lex Fridman
(01:32:09)
Okay, we’re back. Some of the other aspects of the dev workflow are pretty interesting too. I think we went off on a tangent. Maybe some of the mundane things, like how many monitors? There’s that legendary picture of you with, like, 17,000 monitors. That’s amazing.
Peter Steinberger
(01:32:26)
I mean, I mocked this up myself, so just added… using Grok to add more screens.
Lex Fridman
(01:32:32)
Yeah. How much of this is meme and how much is reality?
Peter Steinberger
(01:32:36)
Yeah. I think two MacBooks are real. The main one that drives the two big screens, and there’s another MacBook that I sometimes use for, for testing.
Lex Fridman
(01:32:46)
So two big screens.
Peter Steinberger
(01:32:48)
I’m a big fan of anti-glare. So I have this wide Dell that’s anti-glare, and you can just fit a lot of terminals side-by-side. I usually have a terminal, and at the bottom I split them: I have a little bit of actual terminal, mostly because when I started, I sometimes made the mistake of mixing up the windows, and I prompted in the wrong project, and then the agent ran off for, like, 20 minutes, manically trying to understand what I could have meant, completely confused because it was the wrong folder. And sometimes they’ve been clever enough to get out of the working directory and figure out that, oh, you meant another project.
Lex Fridman
(01:33:35)
Mm-hmm.
Peter Steinberger
(01:33:36)
But oftentimes, it’s just, like, what? You know? Put yourself in the shoes of the agent, and-
Lex Fridman
(01:33:43)
Yeah
Peter Steinberger
(01:33:43)
… and then it gets, like, a super weird something that does not exist, and then just, like… They’re problem solvers, so they try really hard, and I always feel bad. So it’s always Codex and, like, a little bit of actual terminal. Also helpful because I don’t use worktrees. I like to keep things simple; that’s why I like the terminal so much, right? There’s no UI. It’s just me and the agent having a conversation. Like, I don’t even need plan mode, you know? There are so many people that come from Claude Code, and they’re so Claude-pilled, and they have their workflows, and they come to Codex and… Now it has plan mode, I think, but I don’t think it’s necessary, because you just talk to the agent. And when it’s… when you…
Peter Steinberger
(01:34:32)
there are a few trigger words for how you can prevent it from building. You’re like, “Discuss, give me options.”
Lex Fridman
(01:34:37)
Mm-hmm.
Peter Steinberger
(01:34:38)
“Don’t write code yet,” if you wanna be very specific. You just talk, and then when you’re ready, you just write, “Okay, build,” and then it’ll do the thing. And then maybe it goes off for 20 minutes and does the thing.
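[Editor’s note: as a toy sketch, the discuss-then-build flow Steinberger describes can be expressed as canned prompt stages. The trigger phrases are his; the helper and its names are purely illustrative.]

```python
# The "trigger word" protocol: certain phrases keep the agent in
# discussion mode, and "Okay, build" releases it to implement.
DISCUSS_PREFIX = "Discuss, give me options. Don't write code yet."
BUILD_TRIGGER = "Okay, build."

def staged_session(task: str) -> list[str]:
    # Phase 1: talk through the task without writing code.
    # Phase 2: green-light the build once the discussion settles.
    return [f"{DISCUSS_PREFIX} {task}", BUILD_TRIGGER]
```

The point is that the gating lives entirely in natural language, not in a UI mode switch.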
Lex Fridman
(01:34:50)
You know what I really like is asking it, “Do you have any questions for me?”
Peter Steinberger
(01:34:54)
Yeah. And again, like, Claude Code has a UI that kind of guides you through that. It’s kind of cool, but I just find it unnecessary and slow. Like, often it would give me four questions, and then maybe I write, “One, yes; two and three, discuss more; four, I don’t know.” Or oftentimes I feel like I want to mock the model, where I ask it, “Do you have any questions for me?” and I don’t even read the questions fully. Like, I scan over the questions and I get the impression all of this can be answered by reading more code, and it’s just like, “Read more code to answer your own questions.” And that usually works.
Lex Fridman
(01:35:32)
Yeah.
Peter Steinberger
(01:35:32)
And then if not, it will come back and tell me. But many times, you just realize that, you know, it’s like you’re in the dark and you slowly discover the room, so that’s how they slowly discover the code base. And they do it from scratch every time.
Lex Fridman
(01:35:46)
But I’m also fascinated by the fact that I can empathize more deeply with the model when I read its questions. Because you said you can infer certain things from the runtime; I can also infer a lot of things from the questions it’s asking, because it’s very possible it hasn’t been provided the right context, the right files, the right guidance. So just reading the questions, not even necessarily answering them, you get an understanding of where the gaps in knowledge are. It’s interesting.
Peter Steinberger
(01:36:24)
You know that in some ways they are ghosts, so even if you plan everything and you build, you can experiment with the question, “Now that you built it, what would you have done differently?” And then oftentimes you actually get something where they discovered, only throughout building, that what we actually did was not optimal. Many times I ask them, “Okay, now that you built it, what can we refactor?” Because then you build it and you feel the pain points. I mean, you don’t feel the pain points, but right, they discover where there were problems or where things didn’t work in the first try and required more loops.
Peter Steinberger
(01:37:09)
So almost every time I merge a PR, build a feature, afterwards I ask, “Hey, what can we refactor?” Sometimes it’s like, “No, there’s nothing big,” or usually they say, “Yeah, this thing you should really look at.” But that took me quite a while to… You know, that flow took me lots of time to understand, and if you don’t do that, you’ll eventually box yourself into a corner. You have to keep in mind…
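[Editor’s note: the post-merge “what can we refactor?” ritual could even be scripted, e.g. from a git post-merge hook. This is only a sketch: it assumes the Codex CLI’s non-interactive `codex exec` invocation; substitute whatever agent CLI you actually use.]

```python
import subprocess

# The question Steinberger asks after every merged PR.
REVIEW_PROMPT = "Now that you built it, what can we refactor?"

def review_command(prompt: str = REVIEW_PROMPT) -> list[str]:
    # "codex exec <prompt>" is assumed to run the agent once,
    # non-interactively; adjust for your tooling.
    return ["codex", "exec", prompt]

def ask_for_refactors() -> str:
    # Run the agent and capture its written answer.
    result = subprocess.run(review_command(), capture_output=True, text=True)
    return result.stdout
```

Dropping a call to `ask_for_refactors()` into `.git/hooks/post-merge` would make the reflection step automatic rather than a habit you have to remember.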
Lex Fridman
(01:37:41)
Peter Steinberger
(01:37:42)
… they work very much like humans. Like, if I write software by myself, I also build something and then I feel the pain points, and then I get this urge that I need to refactor something. So I can very much empathize with the agent, and you just need to use the context.
Lex Fridman
(01:38:00)
Mm-hmm.
Peter Steinberger
(01:38:00)
Or, like, you also use the context to write tests. Codex, as opposed to the other models… they usually do that by default, but I still often ask the questions, “Hey, do we have enough tests?” “Yeah, we tested this and this, but this corner case could be something.” “Write more tests.” Um, documentation, now that the whole context is full. I mean, I’m not saying my documentation is great, but it’s not bad, and pretty much everything is LLM generated. So you have to approach it as you build features, as you change something. I’m like, “Okay, write documentation. What file would you pick?” You know, like, “What file name? Where would that fit in?” And it gives me a few options.
Peter Steinberger
(01:38:48)
And I’m like, “Oh, maybe also add it there,” and that’s all part of the session.

GPT Codex 5.3 vs Claude Opus 4.6

Lex Fridman
(01:38:52)
Maybe you can talk about the current two big competitors in terms of models, Claude Opus 4.6 and GPT-5.3 through Codex. Which is better? How different are they? I think you’ve spoken about Codex reading more, and Opus being more willing to take action faster and maybe being more creative in the actions it takes. But because-
Peter Steinberger
(01:39:20)
Yeah
Lex Fridman
(01:39:20)
… Codex reads more, it’s able to deliver maybe better code. Can you speak to the differences there?
Peter Steinberger
(01:39:29)
I have a lot of thoughts there. As a general purpose model, Opus is the best. Like, for OpenClaw, Opus is extremely good in terms of role play, like really going into the character that you give it. It’s very good at… It was really bad, but it really made an arc to be really good at following commands. It is usually quite fast at trying something. It’s much more tailored to trial and error. It’s very pleasant to use. In general, it’s almost like Opus is a little bit too American. And I shouldn’t… Maybe that’s a bad analogy. I’ll probably get roasted for that.
Lex Fridman
(01:40:27)
Yeah, I know exactly. It’s ’cause Codex is German. Is that what you’re saying?
Peter Steinberger
(01:40:32)
It’s-
Lex Fridman
(01:40:32)
Actually, now that you say it, it makes perfect sense.
Peter Steinberger
(01:40:34)
Or you could, you could… Sometimes I- Sometimes I explain it-
Lex Fridman
(01:40:38)
I will never be able to unthink what you just said. That’s so true.
Peter Steinberger
(01:40:42)
But you also know that a lot of the Codex team is, like, European, um- … so maybe there’s a bit more to it.
Lex Fridman
(01:40:49)
That’s so true. Oh, that’s funny.
Peter Steinberger
(01:40:51)
But also, Anthropic, they fixed it a little bit. Like, Opus used to say “You’re absolutely right” all the time, and it today still triggers me. I can’t hear it anymore. It’s not even a joke. Uh, I just… This was like the meme, right? “You’re absolutely right.”
Lex Fridman
(01:41:09)
You’re allergic to sycophancy a little bit.
Peter Steinberger
(01:41:11)
Yeah. I, I can’t. Some other comparison is like, Opus is like the coworker that is a little silly sometimes, but it’s really funny and you keep him around. And Codex is like the, the weirdo in the corner that you don’t wanna talk to, but is reliable and gets shit done.
Lex Fridman
(01:41:30)
Yeah.
Peter Steinberger
(01:41:32)
Ultimately-
Lex Fridman
(01:41:36)
This all feels very accurate.
Peter Steinberger
(01:41:39)
I mean, ultimately, if you’re a skilled driver, you can get good results with any of those latest-gen models. Um, I like Codex more because it doesn’t require so much charade. It will just read a lot of code by default. Opus, you really have to have plan mode. You have to push it harder to go in these directions, because it’s just like, “Yeah, can I go in? Can I go in?” You know?
Lex Fridman
(01:42:08)
Yeah.
Peter Steinberger
(01:42:08)
It will just run off very fast, and that’s a very localized solution. I think the difference is in the post-training. It’s not like the raw model intelligence is so different; I think they just give it different goals. And no model is better in every aspect.
Lex Fridman
(01:42:29)
What about the code that it generates? The, the… In terms of the actual quality of the code, is it basically the same?
Peter Steinberger
(01:42:36)
If you drive it right, Opus sometimes can even make more elegant solutions, but it requires more skill. It’s harder to have so many sessions in parallel with Claude Code, because it’s more interactive. And I think that’s what a lot of people like, especially if they come from coding themselves. Whereas Codex is much more: you have a discussion, and then it’ll just disappear for 20 minutes. Like, even Amp, they now added a deep mode. They finally… I mocked them, you know: “We finally saw the light.” And then they had this whole talk about how you have to approach it differently, and I think that’s where people struggle when they just try Codex after trying Claude Code: it’s less interactive.
Peter Steinberger
(01:43:28)
It’s like, I have quite long discussions sometimes, and then: go off. And then, yeah, it doesn’t matter if it takes 10, 20, 30, 40, 50 minutes or longer, you know? Like, the longest one was, like, six hours. The latest gen can be very, very persistent until it works. If there’s a clear solution, like, “This is what I want at the end, so it works,” the model will work really hard to really get there. So I think ultimately they both need similar time, but with Claude it’s a little bit more trial and error often. And Codex sometimes overthinks. I prefer that. I prefer the dry version, where I have to read less, over the more interactive, nice way.
Peter Steinberger
(01:44:27)
Like, people like that so much, though, that OpenAI even added a second mode with, like, a more pleasant personality. I haven’t even tried it yet. I kinda like the bland one.
Lex Fridman
(01:44:37)
Mm-hmm.
Peter Steinberger
(01:44:38)
Yeah, ’cause it … I care about efficiency when I build it-
Lex Fridman
(01:44:45)
Right
Peter Steinberger
(01:44:45)
… and I have fun in the very act of building. I don’t need to have fun with my agent who builds. I have fun with the thing it builds, where I can then test those features.
Lex Fridman
(01:44:57)
How long does it take for you to adjust, you know, if you switch … I don’t know when, when was the last time you switched. But to adjust to the, the feel. ‘Cause you kinda talked about like you have to kinda really feel where, where a model is strong, where, like how to navigate, how to prompt it, how … all that kinda stuff. Like, just by way of advice, ’cause you’ve been through this journey of just playing with models. How long does it take to get a feel?
Peter Steinberger
(01:45:26)
If, if someone switches, I would give it a week until you actually develop a gut feeling for it.
Lex Fridman
(01:45:32)
Yeah.
Peter Steinberger
(01:45:33)
That’s… I think some people also make the mistake that they pay 200 for the Claude Code version, then they pay 20 bucks for the OpenAI version. But if you pay for the 20 bucks version, you get the slow version. So your experience would be terrible, because you’re used to this very interactive, very good system, and you switch to something that you have very little experience with, and that’s gonna be very slow. So I think OpenAI shot themselves a little bit in the foot by making the cheap version also slow. I would at least have a small preview of the fast version, or like the experience that you get when you pay 200, before degrading it to being slow, because it’s already slow.
Lex Fridman
(01:46:23)
Mm-hmm.
Peter Steinberger
(01:46:23)
I mean, they made it better. And they have plans to make it a lot better, if the Cerebras stuff is true. But yeah, it’s a skill. It takes time. Even if you play… You have an acoustic guitar and you switch to an electric guitar, you’re not gonna play well right away. You have to learn how it feels.
Lex Fridman
(01:46:42)
There’s also this extra psychological effect that you’ve spoken about, which is hilarious to watch. When a new model comes out, people try that model, they fall in love with it: “Wow, this is the smartest thing of all time.” And then, you can just watch the Reddit posts over time, they start saying, “We believe the intelligence of this model has been gradually degrading.” It says something about human nature and just the way our minds work, when it’s most likely the case that the intelligence of the model is not degrading; it’s in fact that you’re getting used to a good thing.
Peter Steinberger
(01:47:22)
And your project grows, and you’re adding slop, and you probably don’t spend enough time thinking about refactors. And you’re making it harder and harder for the agent to work on your slop. And then suddenly, “Oh, now it’s hard. Oh no, it’s not working as well anymore.” What’s the motivation for one of those AI companies to actually make their model dumber? At most, it will make it slower if the server load’s too high. But quantizing the model so you have a worse experience, so you go to the competitor?
Lex Fridman
(01:47:56)
Yeah.
Peter Steinberger
(01:47:56)
That just doesn’t seem like a very smart move in any way.

Best AI agent for programming

Lex Fridman
(01:47:59)
What do you think about Claude Code in comparison to OpenClaw? So, Claude Code, and maybe the Codex coding agent. Do you see them as kind of competitors?
Peter Steinberger
(01:48:11)
I mean, first of all, “competitor” is funny when it’s not really a competition.
Lex Fridman
(01:48:16)
Yeah.
Peter Steinberger
(01:48:16)
Like, I’m happy if… If all it did is, like, inspire people to build something new, cool. Um, I still use Codex for the building. I know a lot of people use OpenClaw to build stuff, and I worked hard on it to make that work. And I do smaller stuff with it in terms of code. But, like, if I work hours and hours, I want a big screen, not WhatsApp, you know? So for me, a personal agent is much more about my life. Or, like, a coworker: I give it, like, a GitHub URL. Like, “Hey, try out this CLI. Does it actually work? What can we learn?” Blah, blah, blah. But when I’m deep in the flow, I want to have multiple things, and it being very, very visible what it does. So I don’t see it as a competition. It’s different things.
Lex Fridman
(01:49:16)
But do you think there’s a future where the two kind of combine? Like, your personal agent is also your best co-programming partner?
Peter Steinberger
(01:49:29)
Yeah, totally. I think this is where the puck’s going, that this is gonna be more and more your operating system.
Lex Fridman
(01:49:37)
The operating system.
Peter Steinberger
(01:49:37)
And it already… It’s so funny. Like, I added support for sub-agents and also for, um, TTY support, so it could actually run Claude Code or Codex.
Lex Fridman
(01:49:52)
Mm-hmm.
Peter Steinberger
(01:49:53)
And because mine’s a little bit bossy, it started it and told it, like, “Who’s the boss,” basically. And it was like, “Ah, Codex is obeying me.”
Lex Fridman
(01:50:05)
Oh, this is a power struggle.
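[Editor’s note: the TTY support mentioned here is the mechanical piece that lets one agent drive another. Interactive CLIs often check whether stdout is a terminal and change behavior, so an orchestrator allocates a pseudo-terminal before spawning them. A minimal, Unix-only sketch; the function name is illustrative, not from OpenClaw.]

```python
import os
import pty
import subprocess

def run_under_pty(cmd: list[str]) -> str:
    # Allocate a pseudo-terminal so the child believes it is talking
    # to a real terminal, then capture everything it prints.
    master, slave = pty.openpty()
    proc = subprocess.Popen(cmd, stdin=slave, stdout=slave,
                            stderr=slave, close_fds=True)
    os.close(slave)  # the child holds its own copy of the slave end
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:  # Linux raises EIO once the child hangs up
            break
        if not data:     # macOS reports plain EOF instead
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    return b"".join(chunks).decode(errors="replace")
```

A real orchestrator would additionally keep the PTY open to write prompts to the child, but the terminal allocation itself is the part that makes tools like Claude Code or Codex run happily under another agent.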
Peter Steinberger
(01:50:06)
And also, the current interface is probably not the final form. Like, if you think more globally, we copied Google for agents: you have, like, a prompt, and then you have a chat interface. That, to me, very much feels like when we first created television, and then people recorded radio shows on television and you saw that on TV.
Lex Fridman
(01:50:39)
Mm-hmm.
Peter Steinberger
(01:50:39)
I think there are better ways we will eventually communicate with models, and we are still very early in this how-will-it-even-work phase. So it will eventually converge, and we will also figure out whole different ways to work with those things.
Lex Fridman
(01:51:05)
One of the other components of workflow is the operating system. So I told you offline that, for the first time in my life, I’m expanding my realm of exploration to the Apple ecosystem, to Macs, iPhone and so on. For most of my life I’ve been a Linux, Windows, and WSL1, WSL2 person, which I think are all wonderful, but I’m expanding to also trying Mac. Because it’s another way of building, and it’s also the way of building that a large part of the community currently utilizing LLMs and agents is using. That’s the reason I’m expanding to it. But is there something to be said about the different operating systems here? We should say that OpenClaw is supported across operating systems.
Peter Steinberger
(01:51:56)
Yeah.
Lex Fridman
(01:51:57)
I saw WSL2 recommended inside Windows for certain operations, but then Windows, Linux, macOS are obviously supported.
Peter Steinberger
(01:52:07)
Yeah, it should even work natively in Windows. I just didn’t have enough time to properly test it. And you know, the last 10% of software is always harder than the first 90%, so I’m sure there are some dragons left that I’ll eventually iron out. My road was, for a long time, Windows, just because I grew up with that. Then I switched and had a long phase with Linux, built my own kernels and everything. And then I went to university, and I had my hacky Linux thing, and saw this white MacBook, and I just thought, this is a thing of beauty, the white plastic one. And then I converted to Mac, mostly because I was sick that audio wouldn’t work on Skype, and all the other issues that Linux had for a long time.
Peter Steinberger
(01:53:01)
And then I just stuck with it, and then I dug into iOS, which required macOS anyhow, so it was never a question. I think Apple lost a little bit of its lead in terms of native. Native apps used to be so much better, and especially on the Mac, there are more people that build software with love. On Windows… Windows has much more, and, like, function-wise, there’s just more, period. But a lot of it felt more functional and less done with love. Um, I mean, Mac always attracted more designers and people, I felt…
Peter Steinberger
(01:53:50)
Even though, like, often it has less features, it, it had more delight-
Lex Fridman
(01:53:54)
Mm-hmm
Peter Steinberger
(01:53:55)
… and playfulness. So I always valued that. But in the last few years, many times I actually prefer… Oh God, people are gonna roast me for that, but I prefer Electron apps, because they work, and native apps often, especially if it’s, like, a web service with a native app, are lacking features. I mean, I’m not saying it couldn’t be done; it’s more like a focus thing. For many, many companies, native was not that big of a priority. But if they build an Electron app, it’s the only app, so it is a priority, and there’s a lot more code sharing possible. And I build a lot of native Mac apps. I love it. I can’t help myself. Like, I love crafting little Mac menu bar tools. Like, I built one to monitor your Codex usage.
Peter Steinberger
(01:54:58)
I built one I call Trimmy, that’s specifically for agentic use. When you select text that goes over multiple lines, it would remove the newlines so you could actually paste it into the terminal. That was, again, like, this is annoying me, and after the 20th time of it annoying me, I just built it. There is a cool Mac app for OpenClaw that I don’t think many people have discovered yet, also because it still needs some love. It feels a little bit too much like the Homer car right now, because I just experiment a lot with it. It lacks the polish.
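[Editor’s note: the core trick of a tool like Trimmy can be sketched in a few lines. This is a guess at the behavior described, not Trimmy’s actual code: hard newlines in copied text would make a terminal run each fragment as its own command, so the lines get joined into one.]

```python
def trim_for_paste(text: str) -> str:
    # Collapse a multi-line selection into one terminal-safe line:
    # strip surrounding whitespace, drop trailing backslash
    # continuations, and join the fragments with single spaces.
    lines = [line.strip().rstrip("\\").rstrip() for line in text.splitlines()]
    return " ".join(line for line in lines if line)
```

For example, `trim_for_paste("git log \\\n  --oneline")` yields a single `git log --oneline` line that can be pasted without the shell executing the first half prematurely.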
Lex Fridman
(01:55:32)
So you still… I mean, you still love it. You still, you still love adding to the delight of that operating system.
Peter Steinberger
(01:55:37)
Yeah, but then you realize… Like, I also built one, for example, for GitHub. And then… If you use SwiftUI, like the latest and greatest at Apple, it took them forever to build something to show an image from the web. Now we have AsyncImage, but… I added support for it, and then some images would just not show up or be very slow. And I had a discussion with Codex, like, “Hey, why is there a bug?” And even Codex said, like, “Yeah, there’s this AsyncImage, but it’s really more for experimenting, and it should not be used in production.” But that’s Apple’s answer to showing images from the web. This shouldn’t be so hard, you know.
Lex Fridman
(01:56:19)
Yeah.
Peter Steinberger
(01:56:19)
This is, like, insane. How am I in 2026 and my agent tells me, “Don’t use the stuff Apple built, because… Yeah, it’s there, but it’s not good.” And this is now in the weeds, but to me this is like… They had so much head start and so much love, and they kind of just blundered it and didn’t evolve it as much as they should.
Lex Fridman
(01:56:50)
But also, there’s just the practical reality. If you look at Silicon Valley, most of the developer world that’s kind of playing with LLMs and agentic AI, they’re all using Apple products. And then, at the same time, Apple is not really leaning into that. They’re not opening up and playing and working together, and like, yes.
Peter Steinberger
(01:57:12)
Isn’t, isn’t it funny how they completely blunder AI, and yet everybody’s buying Mac Minis?
Lex Fridman
(01:57:19)
How… What… Does that even make sense? You’re quite possibly the world’s greatest Mac salesman of all time.
Peter Steinberger
(01:57:29)
No, you don’t need a Mac Mini to install OpenClaw. You can install it on the web. There’s a concept called nodes, so you can make your computer a node and it will do the same. There is something to be said for running it on separate hardware; that right now is useful. There’s a big argument for the browser. You know, I built some agentic browser use in there. And, I mean, it’s basically Playwright with a bunch of extras to make it easier for agents.
Lex Fridman
(01:58:06)
Playwright is a library that controls the browser.
Peter Steinberger
(01:58:08)
Yeah.
Lex Fridman
(01:58:08)
It’s really nice, easy to use.
Peter Steinberger
(01:58:09)
And our internet is slowly closing down. Like, there’s a whole movement to make it harder for agents to use it. So if you do the same in a data center, and websites detect that it’s an IP from a data center, the website might just block you, or make it really hard, or put a lot of captchas in the way of the agent. I mean, agents are quite good at happily clicking “I’m not a robot.”
Lex Fridman
(01:58:33)
Yeah.
Peter Steinberger
(01:58:33)
But having that on a residential IP makes a lot of things simpler. So there’s ways. Yeah. But it really does not need to be a Mac. It can… It can be any old hardware. I always say, like, maybe use the… Use the opportunity to get yourself a new MacBook or whatever computer you use and use the old one as your server instead of buying a standalone Mac Mini. But then there’s, again, there’s a lot of very cute things people build with Mac Minis that I like.
Lex Fridman
(01:59:08)
Yeah.
Peter Steinberger
(01:59:08)
And no, I don’t get commission from Apple. They didn’t really communicate much.
Lex Fridman
(01:59:16)
It’s sad. It’s sad. Can you actually speak to what it takes to get started with OpenClaw? I mean, there’s a lot of people… What is it? Somebody tweeted at you, “Peter, make OpenClaw easy to set up for everyday people. 99.9% of people can’t get access to OpenClaw and have their own lobster because of the technical difficulties in getting it set up. Make OpenClaw accessible to everyone, please.” And you replied, “Working on that.” From my perspective, it seems there’s a bunch of different options and it’s already quite straightforward, but I suppose that’s if you have some developer background.
Peter Steinberger
(01:59:50)
I mean, right now you have to paste a one-liner into the terminal.
Lex Fridman
(01:59:53)
Right.
Peter Steinberger
(01:59:54)
And there’s also an app. The app kind of does that for you, but there should be a Windows app. The app needs to be easier and more loved. The configuration should potentially be web-based, or in the app. And I started working on that, but honestly, right now I want to focus on security aspects. And once I’m confident that this is at a level where I can recommend it to my mom, then I’m going to make it simpler. Like I…
Peter Steinberger
(02:00:27)
Right now-
Lex Fridman
(02:00:28)
You want to make it harder so that it doesn’t scale as fast as it’s scaling.
Peter Steinberger
(02:00:32)
Yeah, it would be nice if it wouldn’t… I mean, that’s hard to say, right? But if the growth were a little slower, that would be helpful, because people are expecting inhuman things from a single human being. And yes, I have some contributors, but that whole machinery I started a week ago, so that needs more time to figure out. And not everyone has all day to work on that.
Lex Fridman
(02:01:00)
There’s some beginners listening to this, programming beginners. What advice would you give to them about, let’s say, joining the Agentic AI revolution?
Peter Steinberger
(02:01:12)
Play. Playing is the best way to learn. If you are, like, a little bit of a builder, you have an idea in your head that you want to build, just build that, or give it a try. It doesn’t need to be perfect. I built a whole bunch of stuff that I don’t use. It doesn’t matter. It’s the journey.
Lex Fridman
(02:01:31)
Mm-hmm.
Peter Steinberger
(02:01:31)
You know? Like the philosophical way, that the end doesn’t matter, the journey matters. Have fun.
Lex Fridman
(02:01:37)
Mm-hmm.
Peter Steinberger
(02:01:37)
My God, like, those things… I don’t think I ever had so much fun building things, because I can focus on the hard parts now. A lot of coding… I always thought I liked coding, but really, I like building.
Lex Fridman
(02:01:50)
Yeah.
Peter Steinberger
(02:01:50)
And whenever you don’t understand something, just ask. You have an infinitely patient answering machine that can explain anything to you at any level of complexity. Like, one time I asked, “Hey, explain it to me like I’m eight years old,” and it started giving me a story with crayons and stuff. And I’m like, “No, not like that. Up the age a little bit, you know? I’m not an actual child; I just need simpler language,” for, like, a tricky database concept that I didn’t grok the first time. But, you know, you can just ask things. It used to be that I had to go on Stack Overflow or ask on Twitter, and then maybe two days later I’d get a response.
Peter Steinberger
(02:02:37)
Or I had to try for hours. And now you can just ask stuff. It’s like you have your own teacher. You know, there are statistics that you can learn faster if you have your own teacher. It’s like you have this infinitely patient machine. Ask it.
Lex Fridman
(02:02:53)
But what would you say… what’s the easiest way to play? So maybe OpenClaw is a nice way to play: you can set everything up, and then you can chat with it.
Peter Steinberger
(02:03:03)
You can also just experiment with it and, like, modify it. Ask your agent. I mean, there are infinite ways it can be made better. Play around, make it better.
Lex Fridman
(02:03:18)
Mm-hmm.
Peter Steinberger
(02:03:19)
More generally, if you’re a beginner and you actually wanna learn how to build software really fast, get involved in open source. It doesn’t need to be my project. In fact, maybe don’t use my project, because my backlog is very large. But I learned so much from open source. Just, like, be humble. Maybe don’t send a pull request right away. But there are many other ways you can help out. There are many ways you can learn by just reading code, by being on Discord or wherever people are, and just understanding how things are built. I don’t know, like, Mitchell Hashimoto builds Ghostty, the terminal, and he has a really good community. There are so many other projects. Pick something that you find interesting and get involved.
Lex Fridman
(02:04:15)
Do you recommend that people that don’t know how to program, or don’t really know how to program, also learn to program? You can get quite far right now by just using natural language, right? Do you still see a lot of value in reading the code, understanding the code, and then being able to write a little bit of code from scratch?
Peter Steinberger
(02:04:38)
It definitely helps.
Lex Fridman
(02:04:39)
It’s hard for you to answer that-
Peter Steinberger
(02:04:41)
Yeah
Lex Fridman
(02:04:42)
… because you don’t know what it’s like to do any of this without knowing the base knowledge. Like, you might take for granted just how much intuition you have about the programming world having programmed so much, right?
Peter Steinberger
(02:04:54)
There’s people that are high agency and very curious, and they get very far even though they have no deep understanding how software works just because they ask questions and questions and- and- and-
Lex Fridman
(02:05:08)
Mm-hmm
Peter Steinberger
(02:05:08)
… and agents are infinitely patient. Like, part of what I did this year is I went to a lot of iOS conferences, because that’s my background, and just told people, “Don’t see yourself as an iOS engineer anymore. You need to change your mindset. You’re a builder.” And you can take a lot of the knowledge of how to build software into new domains, and for all the more fine-grained details, agents can help. You don’t have to know how to splice an array or what the correct template syntax is or whatever, but you can use all your general knowledge, and that makes it much easier to move from one galaxy, one tech galaxy, into another. And oftentimes, there are languages that make more or less sense depending on what you build, right?
Peter Steinberger
(02:05:58)
So for example, when I build simple CLIs, I use Go. I actually don’t like Go. I don’t like the syntax of Go. I didn’t even consider the language before. But the ecosystem is great, it works great with agents, it’s garbage collected. It’s not the highest-performing one, but it’s very fast. And for the type of CLIs that I build, Go is a really good choice. So I use a language I’m not even a fan of. That’s my go-to thing for CLIs.
Lex Fridman
(02:06:29)
Isn’t that fascinating? Here’s a programming language you would’ve never used if you had to write it from scratch, and now you’re using it because LLMs are good at generating it and it has some of the characteristics that make it resilient, like being garbage collected.
Peter Steinberger
(02:06:44)
Because everything’s weird in this new world and that just makes the most sense.
Lex Fridman
(02:06:48)
Ridiculous question, but what’s the best programming language for the AI agentic world? Is it JavaScript? TypeScript?
Peter Steinberger
(02:06:54)
TypeScript is really good. Sometimes the types can get really confusing, and the ecosystem is a jungle. So for web stuff it’s good. I wouldn’t build everything in it.
Lex Fridman
(02:07:15)
Don’t you think we’re moving there? Like, that everything will eventually be written in JavaScript?
Peter Steinberger
(02:07:22)
The birth and death of JavaScript and we are living through it in real time.
Lex Fridman
(02:07:26)
Like, what does programming look like in 20 years? Right? In 30 years? In 40 years? What do programs and apps look like?
Peter Steinberger
(02:07:32)
You can even ask a question like, do we need a programming language that’s made for agents? Because all of those languages are made for humans. So what would that look like? I think there’s a whole bunch of interesting questions that we’ll discover. And also, because everything is now world knowledge, in many ways things will stagnate, because if you build something new and the agent has no idea about it, that’s gonna be much harder to use than something that’s already there. Um… When I build Mac apps, I build them in Swift and SwiftUI, partly because I like pain, partly because the deepest level of system integration I can only get through there.
Peter Steinberger
(02:08:18)
And you clearly feel a difference if you click on an Electron app and it loads a web view in the menu. It’s just not the same. Sometimes I also just try new languages to get a feel for them.
Lex Fridman
(02:08:32)
Like Zig?
Peter Steinberger
(02:08:33)
Yeah. If it’s something where I care about performance a lot, then it’s a really interesting language. And agents got so much better at it over the last six months, from not really good to a totally valid choice. It’s still a very young ecosystem, and most of the time you actually care about the ecosystem, right? So if you build something that does inference, or goes in the whole running-models direction: Python, very good.
Lex Fridman
(02:09:06)
Mm-hmm.
Peter Steinberger
(02:09:07)
But then if I build stuff in Python and I want a story where I can also deploy it on Windows, not a good choice.
Lex Fridman
(02:09:13)
Mm-hmm.
Peter Steinberger
(02:09:13)
Sometimes I found projects that did 90% of what I wanted but were in Python, and I wanted an easy Windows story. Okay, just rewrite it in Go. But then if you go towards multiple threads and a lot more performance, Rust is a really good choice. There’s just no single answer, and that’s also the beauty of it. Like, it’s fun.
Peter Steinberger
(02:09:37)
And now it doesn’t matter anymore, you can just literally pick the language that has the, the most fitting characteristics and ecosystem-
Lex Fridman
(02:09:45)
Mm-hmm
Peter Steinberger
(02:09:46)
… for your problem domain. And yeah, you might be a little bit slow in reading the code, but not really. I think you pick stuff up really fast, and you can always ask your agent.

Life story and career advice

Lex Fridman
(02:09:59)
So there’s a lot of programmers and builders who draw inspiration from your story. Just the way you carry yourself, your choice of making OpenClaw open source, the way you have fun building and exploring, and doing that, for the most part, alone or on a small team. So by way of advice, what metric should be the goal that they would be optimizing for? What would be the metric of success? Would it be happiness? Is it money? Is it positive impact? For people who are dreaming of building. ’Cause you went through an interesting journey. You’ve achieved a lot of those things, and then you fell out of love with programming a little bit for a time.
Peter Steinberger
(02:10:47)
I was just burning too bright for too long. I started PSPDFKit and ran it for 13 years, and it was high stress. I had to learn all these things fast and hard, like how to manage people, how to bring people on, how to deal with customers, how to do…
Lex Fridman
(02:11:14)
So it wasn’t just programming stuff, it was people stuff.
Peter Steinberger
(02:11:17)
The stuff that burned me out was mostly people stuff. I don’t think burnout is working too much. Maybe to a degree. Everybody’s different. You know, I cannot speak in absolute terms, but for me, it was much more differences with my co-founders, conflicts, or really high-stress situations with customers that eventually ground me down. And then, luckily, we got a really good offer for taking the company to the next level, and I had already kind of worked two years on making myself obsolete. So at that point I could leave. And then I was just sitting in front of the screen, and I felt like, you know Austin Powers, where they suck the mojo out?
Lex Fridman
(02:12:13)
Yeah.
Peter Steinberger
(02:12:14)
I was like, it was gone. I couldn’t get code out anymore. I was just staring and feeling empty, and then I just stopped. I booked a one-way trip to Madrid and just spent some time there. I felt like I had to catch up on life, so I did a whole bunch of life catching-up stuff.
Lex Fridman
(02:12:47)
Did you go through some lows during that period? And you know, maybe advice on… of how to?
Peter Steinberger
(02:12:56)
Maybe advice on how to approach life. If you think, “Oh yeah, I’ll work really hard and then I’ll retire,” I don’t recommend that. Because the idea of “I’ll just enjoy life now,” maybe it’s appealing, but right now I enjoy life the most I’ve ever enjoyed it. Because if you wake up in the morning and you have nothing to look forward to, no real challenge, that gets very boring, very fast. And when you’re bored, you’re gonna look for other ways to stimulate yourself, and maybe that’s drugs, you know? But that eventually also gets boring and you look for more, and that will lead you down a very dark path.

Money and happiness

Lex Fridman
(02:13:57)
But on the money front, you know, a lot of people in Silicon Valley and the startup world optimize way too much for money. And you’ve also shown that it’s not like you’re saying no to money. I mean, I’m sure you take money, but it’s not the primary objective of your life. Can you just speak to that, your philosophy on money?
Peter Steinberger
(02:14:20)
When I built my company, money was never the driving force. It felt more like an affirmation that I did something right. And having money solves a lot of problems. I also think there are diminishing returns the more you have. Like, a cheeseburger is a cheeseburger, and I think if you go too far into, oh, I fly private jets and I only travel luxury, you disconnect from society. I donated quite a lot. I have a foundation for helping people that weren’t so lucky.
Lex Fridman
(02:15:11)
And disconnecting from society is bad on many levels, but one of them is, like, humans are awesome. It’s nice to continuously remember the awesomeness in humans.
Peter Steinberger
(02:15:23)
I mean, I could afford really nice hotels. The last time I was in San Francisco, I did the OG Airbnb experience for the first time-
Lex Fridman
(02:15:30)
Yeah, yeah
Peter Steinberger
(02:15:30)
… and just booked a room. Mostly because I thought, okay, you know, I’m either out or I’m sleeping, and I don’t like where all the hotels are, and I wanted a different experience. Isn’t life all about experiences? If you tailor your life towards “I wanna have experiences,” it reduces the need for it to be good or bad. If people only want good experiences, that’s not gonna work, but if you optimize for experiences: if it’s good, amazing. If it’s bad, amazing, because I learned something, I saw something, did something. I wanted to experience that, and it was amazing. Like, there was this queer DJ there, and I showed her how to make music with Claude Code. And we immediately bonded and had a great time.
Lex Fridman
(02:16:24)
Yeah, there’s something about that couch-surfing, OG Airbnb experience. I’m still into it to this day. It’s awesome. It’s humans, and that’s why travel is awesome.
Peter Steinberger
(02:16:34)
Yeah.
Lex Fridman
(02:16:34)
Just experiencing the variety, the diversity of humans. And when it’s shitty, it’s good too, man. If it rains and you’re soaked and it’s all fucked, and the planes, everything is shit, everything is fucked, it’s still awesome. If you’re able to open your eyes, it’s good to be alive.
Peter Steinberger
(02:16:49)
Yeah, and anything that creates emotion and feelings is good.
Peter Steinberger
(02:16:55)
Even… So maybe even the crypto people are good, because they definitely created emotions. I don’t know if I should go that far.
Lex Fridman
(02:17:02)
No, man. Give them all love. Give them love. Because I do think that online lacks some of the awesomeness of real life.
Peter Steinberger
(02:17:13)
Yeah.
Lex Fridman
(02:17:13)
That’s an open problem: how to infuse the online, cyber experience with, I don’t know, the intensity that we humans feel in real life. I don’t know if that’s a solvable problem.
Peter Steinberger
(02:17:31)
Well, it’s just possible because text is very lossy.
Lex Fridman
(02:17:35)
Yeah.
Peter Steinberger
(02:17:35)
You know, sometimes I wish, when I talk to the agent… it should be multimodal, so it also understands my emotions.
Lex Fridman
(02:17:43)
I mean, it, it might move there. It might move there.
Peter Steinberger
(02:17:46)
It will. It will. It totally will.

Acquisition offers from OpenAI and Meta

Lex Fridman
(02:17:49)
I mean, I have to ask you, just curious. I, I know you’ve probably gotten huge offers from major companies. Can you speak to who you’re considering working with?
Peter Steinberger
(02:18:04)
Yeah. So, to explain my thinking a little bit: I did not expect this blowing up so much. So there are a lot of doors that opened because of it. I think every big VC firm is in my inbox and tried to get 15 minutes of me. So there’s this butterfly-effect moment. I could just do nothing and continue, and I really like my life. Valid choice. Almost. Like, I considered it when I wanted to delete the whole thing. I could create a company. Been there, done that. There are so many people that push me towards that, and, yeah, it could be amazing.
Lex Fridman
(02:19:07)
Which is to say that you would probably raise a lot of money that way.
Peter Steinberger
(02:19:10)
Yeah.
Lex Fridman
(02:19:11)
I don’t know, hundreds of millions, billions. I don’t know. You could just get an unlimited amount of money.
Peter Steinberger
(02:19:15)
Yeah. It just doesn’t excite me as much, because I feel I did all of that, and it would take a lot of time away from the things I actually enjoy. Same as when I was CEO: I learned to do it, and I’m not bad at it, partly I’m even good at it. But yeah, that path doesn’t excite me too much, and I also fear it would create a natural conflict of interest. Like, what’s the most obvious thing I’d do? I prioritize it. I put out, like, a version that’s safe for the workplace. And then what do you do? I get a pull request with a feature like an audit log, but that seems like an enterprise feature, so now I have a conflict of interest between the open-source version and the closed-source version…
Peter Steinberger
(02:20:15)
Or change the license to something like FSL, where you cannot actually use it for commercial stuff. That would first be very difficult with all the contributions. And second, I like the idea that it’s free as in beer and not free with conditions. Yeah, there are ways you can keep all of that free and still try to make money, but those are very difficult, and you see fewer and fewer companies manage that. Like, even Tailwind. They’re used by everyone. Everyone uses Tailwind, right? And then they had to cut 75% of their employees, because they’re not making money, because nobody’s even going to the website anymore, because it’s all done by agents. And just relying on donations? Yeah, good luck.
Peter Steinberger
(02:21:04)
Like, for a project of my caliber, if I extrapolate what the typical open-source project would get, it’s not a lot. I still lose money on the project, because I made a point of supporting every dependency, except Slack. They are a big company. They can do without me. But all the projects that are done mostly by individuals… right now, all the sponsorship goes right out to my dependencies. And if there’s more, I want to buy my contributors some merch, you know?
Lex Fridman
(02:21:43)
So you’re losing money?
Peter Steinberger
(02:21:44)
Yeah, right now I lose money on this.
Lex Fridman
(02:21:46)
So it’s really not sustainable?
Peter Steinberger
(02:21:48)
I mean, I guess it’s something between 10 and 20K a month. Which is fine. I’m sure over time I could get that down. OpenAI is helping out a little bit with tokens now, and there are other companies that have been generous. But yeah, still losing money on that. So that’s one path I considered, but I’m just not very excited. And then there are all the big labs that I’ve been talking to, and of those, Meta and OpenAI seem the most interesting.
Lex Fridman
(02:22:32)
Do you lean one way or the other?
Peter Steinberger
(02:22:34)
Yeah. Um… not sure how much I should share there. It’s not quite finalized yet. Let’s just say, with either of these, my conditions are that the project stays open source. Maybe it’s gonna be a model like Chrome and Chromium. I think this is too important to just give to a company and make it theirs. And we didn’t even talk about the whole community part, but the thing that I experienced in San Francisco, at ClawCon, seeing so many people so inspired, having fun and just building shit, having, like, robots in lobster stuff walking around…
Peter Steinberger
(02:23:37)
People told me they hadn’t experienced this level of community excitement since, like, the early days of the internet, 10, 15 years ago. And there were a lot of high-caliber people there. I was amazed. I was also very sensory-overloaded, because too many people wanted to do selfies. But I love this. This needs to stay a place where people can hack and learn. But also, I’m very excited to make this into a version that I can get to a lot of people, because I think this is the year of personal agents, and that’s the future. And the fastest way to do that is teaming up with one of the labs. And on a personal level, I never worked at a large company, and I’m intrigued. You know, we talked about experiences. Will I like it? I don’t know.
Peter Steinberger
(02:24:42)
But I want that experience. I’m sure if I announce this, there will be people like, “Oh, he sold out,” blah, blah, blah. But the project will continue. From everything I’ve talked through so far, I can even have more resources for that. Both of those companies understand the value: that I created something that accelerates our timeline and that got people excited about AI. I mean, can you imagine? I installed OpenClaw for one of my, I’m sorry, normie friends. I’m sorry, Vahan. But he’s just a… you know?
Peter Steinberger
(02:25:32)
Like, he’s-
Lex Fridman
(02:25:33)
Normie with love, yeah. For sure.
Peter Steinberger
(02:25:34)
He’s, like, someone who uses the computer, but never really… Yeah, he uses ChatGPT sometimes, but he’s not very technical. He wouldn’t really understand what I built. So I said, I’ll show you, and I paid for him the 90-buck, 100-buck, I don’t know, subscription for Anthropic. And I set up everything for him, with WSL on Windows.
Lex Fridman
(02:26:00)
Mm-hmm.
Peter Steinberger
(02:26:00)
I was also curious, would it actually work on Windows, you know? It was a little early. And then within a few days, he was hooked. He texted me about all the things he learned. He even built little tools, and he’s not a programmer. And then within a few days he upgraded to the $200 subscription. Or euros, because he’s in Austria… and he was in love with that thing. That, for me, was very early product validation. Like, I built something that captures people. And then, a few days later, Anthropic blocked him, because, based on their rules, using the subscription that way is problematic or whatever. And he was devastated. And then he signed up for MiniMax for 10 bucks a month and uses that.
Peter Steinberger
(02:26:56)
And I think that’s silly in many ways, because you just got a 200-buck customer, and you just made someone hate your company, and we are still so early. We don’t even know what the final form is. Is it gonna be Claude Code? Probably not, you know? It seems very short-sighted to lock down your product so much. All the other companies have been helpful. I’m in the Slack of most of the big labs. Kind of everybody understands that we are still in an era of exploration, in the era of radio shows on TV, and not a modern TV show that fully uses the format.
Lex Fridman
(02:27:45)
I think you’ve made a lot of people see the possibility. Sorry, non-technical people see the possibility of AI, and just fall in love with this idea, and enjoy interacting with AI. And that’s a really beautiful thing. I think I also speak for a lot of people in saying you’re one of the great people in AI in terms of having a good heart, good vibes, humor, the right spirit. And so, in a sense, this model that you’re describing, having an open-source part, and you additionally building a thing inside a large company, would be great, because it’s great to have good people in those companies.
Peter Steinberger
(02:28:36)
Yeah. You know, what people also don’t really see is: I made this in three months. I did other things as well. I have a lot of projects. Yeah, in January this was my main focus, because I saw the storm coming, but before that, I built a whole bunch of other things. I have so many ideas. Some should be out there; some would be much better fitted when I have access to the latest toys. And I kind of want access to the latest toys. So this is important, this is cool, this will continue to exist. My short-term focus is working through those… is it 3,000 PRs by now? I don’t even know. There’s a little bit of backlog.
Peter Steinberger
(02:29:23)
But this is not gonna be the thing that I work on until I’m 80, you know? This is a window into the future. I’m gonna make this into a cool product. But yeah, I have more ideas.
Lex Fridman
(02:29:36)
If you had to pick, is there a company you lean towards? So, Meta, OpenAI, is there one you lean towards going to?
Peter Steinberger
(02:29:44)
I spend time with both of those. And it’s funny, because a few weeks ago, I didn’t consider any of this. Um… And it’s really fucking hard. Like-
Lex Fridman
(02:30:05)
Yeah.
Peter Steinberger
(02:30:06)
I have some… I know no people at OpenAI, but I love their tech. I think I’m the biggest unpaid Codex advertisement shill. And it would feel so gratifying to put a price on all the work I did for free. And I would love if something happened and those companies just got merged, because it’s like…
Lex Fridman
(02:30:32)
Is this the hardest decision you’ve ever had to make?
Peter Steinberger
(02:30:39)
No. You know, I had some breakups in the past that feel like it’s the same level.
Lex Fridman
(02:30:43)
Relationships, you mean?
Peter Steinberger
(02:30:45)
Yeah.
Lex Fridman
(02:30:47)
Yeah, yeah, yeah, yeah.
Peter Steinberger
(02:30:48)
And, and I also know that, in the end, they’re both amazing. I cannot go wrong. This is like-
Lex Fridman
(02:30:53)
Right.
Peter Steinberger
(02:30:54)
This is, like, one of the most prestigious and largest… I mean, not largest, but they’re both very cool companies.
Lex Fridman
(02:31:02)
Yeah, they both really know scale. So if you’re thinking about impact, some of the wonderful technologies you’ve been exploring, how to do it securely and at scale, such that you can have a positive impact on a large number of people, they both understand that.
Peter Steinberger
(02:31:19)
You know, both Ned and Mark basically played all week with my product and sent me, like, “Oh, this is great,” or, “This is shit, I need to change this,” or funny little anecdotes. And people using your stuff is kind of the biggest compliment, and it also shows me that they actually care about it. I didn’t get the same on the OpenAI side. I got to see some other stuff that I find really cool, and they lure me with… I cannot tell you the exact number because of NDA, but you can be creative and think of the Cerebras deal and how that would translate into speed. And it was very intriguing. You know, like, you give me Thor’s hammer. Yeah. I’ve been lured with tokens. So, yeah.
Lex Fridman
(02:32:34)
So it’s funny. So Mark started tinkering with the thing, essentially having fun with the thing.
Peter Steinberger
(02:32:41)
When he first approached me, I got him in my WhatsApp and he was asking, “Hey, when can we have a call?” And I’m like, “I don’t like calendar entries. Let’s just call now.” And he was like, “Yeah, give me 10 minutes, I need to finish coding.”
Lex Fridman
(02:33:01)
Mm-hmm.
Peter Steinberger
(02:33:01)
Well, I guess that gives you street cred. It’s like, ugh, like, he’s still writing code. You know, he’s-
Lex Fridman
(02:33:07)
Yeah, he does
Peter Steinberger
(02:33:07)
… he didn’t drift away into just being a manager; he gets me. That was a good first start. And then I think we had, like, a 10-minute fight about what’s better, Claude Code or Codex. Like, that’s the first thing you do, you casually call-
Lex Fridman
(02:33:24)
Yeah, that’s awesome
Peter Steinberger
(02:33:24)
… someone that owns one of the largest companies in the world, and you have a 10-minute conversation about that.
Lex Fridman
(02:33:30)
Yeah, yeah.
Peter Steinberger
(02:33:30)
And then I think afterwards he called me eccentric but brilliant. But I also had some really, really cool discussions with Sam Altman, and he’s very thoughtful, brilliant, and I like him a lot from the little time I had, yeah. I mean, I know some people vilify both of those people. I don’t think it’s fair.
Lex Fridman
(02:34:15)
I think no matter what, the stuff you’re building, and the kind of human you are, doing stuff at scale is kinda awesome. I’m excited.
Peter Steinberger
(02:34:24)
I am super pumped. And you know the beauty is if, if it doesn’t work out, I can just do my own thing again. Like, I, I told them, like, I, I don’t do this for the money, I don’t give a fuck. I-
Lex Fridman
(02:34:42)
Yeah.
Peter Steinberger
(02:34:42)
I mean, of course, of course it’s a nice compliment but I wanna have fun and have impact, and that’s ultimately what made my decision.

How OpenClaw works

Lex Fridman
(02:34:58)
Can I ask you about… we’ve talked about it quite a bit, but maybe just zooming out: how does OpenClaw work? We talked about different components; I want to ask if there’s some interesting stuff we missed. So there’s the gateway, there’s the chat clients, there’s the harness, there’s the agentic loop. You said somewhere that everybody should implement an agent loop at some point in their lives.
Peter Steinberger
(02:35:24)
Yeah, because it’s like the Hello World of AI, you know? And it’s actually quite simple.
Lex Fridman
(02:35:30)
Yeah.
Peter Steinberger
(02:35:30)
And it’s good to understand that that stuff’s not magic. You can easily build it yourself. So, writing your own little Claude Code… I even did this at a conference in Paris, to introduce people to AI. I think it’s a fun little practice. And you covered a lot. One silly idea I had that turned out to be quite cool is that I built this thing with full system access. So it’s like, you know, with great power comes great responsibility.
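For readers who want to try the exercise Peter describes, here is a minimal sketch of an agent loop in Python. The message format, the tool registry, and the stubbed model are all invented for illustration; a real harness would call an actual LLM API and expose real tools like shell and file access.

```python
# Minimal agent loop: the model either answers or requests a tool call;
# the loop runs the tool and feeds the result back until a final answer.

def run_tool(name, arg):
    """Tiny tool registry standing in for shell access, file reads, etc."""
    tools = {"add": lambda a: sum(a), "upper": lambda s: s.upper()}
    return tools[name](arg)

def agent_loop(model, user_message, max_steps=5):
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = model(history)            # ask the model what to do next
        history.append(reply)
        if reply.get("tool") is None:     # no tool requested: final answer
            return reply["content"]
        result = run_tool(reply["tool"], reply["args"])
        history.append({"role": "tool", "content": result})  # feed result back
    return "step limit reached"

# Stub model: first asks for a tool, then answers using the tool's result.
def stub_model(history):
    if history[-1]["role"] == "user":
        return {"role": "assistant", "tool": "add", "args": [1, 2, 3]}
    return {"role": "assistant", "tool": None,
            "content": f"The sum is {history[-1]['content']}"}

print(agent_loop(stub_model, "add 1+2+3"))
```

The whole trick is that the loop, not the model, executes tools; swapping `stub_model` for a real API call is what turns this toy into a working harness.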
Peter Steinberger
(02:36:09)
And I was like, “How can I up the stakes a little bit more?”
Lex Fridman
(02:36:13)
Yeah, right.
Peter Steinberger
(02:36:14)
And I just made it proactive. So I added a prompt. Initially, it was just a prompt: surprise me. Every half an hour, surprise me, you know? And later on, I changed it to be a little more specific-
Lex Fridman
(02:36:31)
Yeah
Peter Steinberger
(02:36:31)
… in the definition of surprise. But the fact that I made it proactive, and that it knows you and it cares about you (at least it’s programmed, prompted, to do that), and that it follows on your current session, makes it very interesting, because it would sometimes just ask a follow-up question, or like, “How’s your day?”
Lex Fridman
(02:36:53)
Yeah, right.
Peter Steinberger
(02:36:58)
I mean, again, it’s a little creepy or weird or interesting. But Heartbeat… even today, the model doesn’t choose to use it a lot.
Lex Fridman
(02:37:16)
By the way, we’re talking about Heartbeat, as you mentioned, the thing that regularly-
Peter Steinberger
(02:37:22)
Yeah. Like kicks-
Lex Fridman
(02:37:23)
… Acts.
Peter Steinberger
(02:37:23)
You just kick off the loop.
Lex Fridman
(02:37:25)
Isn’t that just a cron job, man?
Peter Steinberger
(02:37:27)
Yeah, right, I mean, it’s like-
Lex Fridman
(02:37:29)
The criticisms that you get are hilarious.
Peter Steinberger
(02:37:31)
You can reduce any idea to, like, a silly… Yeah, it’s just a cron job in the end. I have, like, separate cron jobs.
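As a sketch of the "just a cron job" point: a heartbeat is a timer that re-enters the agent loop with a standing prompt, and the agent decides each tick whether to stay silent. The prompt wording, the `HEARTBEAT_OK` sentinel, and the stub agent are illustrative assumptions, not OpenClaw's actual implementation.

```python
import time

# Standing prompt delivered on every tick; wording is invented for this sketch.
HEARTBEAT_PROMPT = (
    "Read the recent context. If something deserves a proactive message "
    "(a check-in, a reminder, a surprise), write it; otherwise say HEARTBEAT_OK."
)

def heartbeat(agent, context, ticks=3, interval=0.0):
    """Kick the agent on a timer -- effectively a cron job. Each tick the
    agent either speaks up or stays silent (the common case)."""
    sent = []
    for _ in range(ticks):
        time.sleep(interval)          # e.g. 1800 seconds for "every half hour"
        reply = agent(HEARTBEAT_PROMPT, context)
        if reply != "HEARTBEAT_OK":   # silence costs nothing to deliver
            sent.append(reply)        # a real system would route this to chat
    return sent

# Stub agent: only speaks when the context mentions something significant.
def stub_agent(prompt, context):
    if "operation" in context:
        return "You had your operation today. Are you okay?"
    return "HEARTBEAT_OK"

print(heartbeat(stub_agent, "calendar: shoulder operation", ticks=1))
```

This also shows why the hospital anecdote later in the conversation works: the timer fires constantly, but the message only goes out when something in the context makes it worth sending.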
Lex Fridman
(02:37:41)
Isn’t love just evolutionary biology manifesting itself, and aren’t you guys just using each other?
Peter Steinberger
(02:37:49)
And then, yeah, and the project is all just glue of a few different dependencies-
Lex Fridman
(02:37:52)
Yeah
Peter Steinberger
(02:37:53)
… and there’s nothing original. Why do people… Well, you know, isn’t Dropbox just FTP with extra steps?
Lex Fridman
(02:38:00)
Yeah.
Peter Steinberger
(02:38:01)
I found it surprising… I had a shoulder operation a few months ago.
Lex Fridman
(02:38:06)
Mm-hmm.
Peter Steinberger
(02:38:08)
And the model rarely used Heartbeat, but then I was in the hospital, and it knew that I had the operation, and it checked up on me. It’s like, “Are you okay?” Apparently, if something’s significant in the context, that triggered the Heartbeat, even though it rarely used the Heartbeat… And it does that sometimes for people, and that just makes it a lot more relatable.
Lex Fridman
(02:38:36)
Let me look this up on Perplexity, how OpenClaw works, just to see if I’m missing any of the stuff. Local agent runtime, high-level architecture. There’s… Oh, we haven’t talked much about skills, I suppose. The skill hub, the tools in the skill layer. But that’s definitely a huge component, and there’s a huge growing set of skills-
Peter Steinberger
(02:38:55)
You know what I love? That half a year ago, everyone was talking about MCPs-
Lex Fridman
(02:39:02)
Yeah
Peter Steinberger
(02:39:02)
… and I was like, “Screw MCPs. Every MCP would be better as a CLI.” And now this stuff doesn’t even have MCP support. I mean, it has, with asterisks, but not in the core layer, and nobody’s complaining.
Lex Fridman
(02:39:23)
Mm-hmm.
Peter Steinberger
(02:39:24)
So my approach is, if you want to extend the model with more features, you just build a CLI, and the model can call the CLI, probably gets it wrong, calls the help menu, and then on demand loads into the context what it needs to use the CLI. It just needs a sentence to know that the CLI exists, if it’s something the model doesn’t know about by default. And for a while, I didn’t really care about skills, but skills are actually perfect for that, because they boil down to a single sentence that explains the skill, and then the model loads the skill, and that explains the CLI, and then the model uses the CLI. Some skills are, like, raw, but most of the time, it works.
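The on-demand pattern Peter describes can be sketched in a few lines: the standing context carries only a one-sentence index of available tools, and the full `--help` text is pulled into context the first time a tool is needed. The skill index and its wording are invented for illustration; the only convention relied on is that CLIs answer `--help`.

```python
import subprocess
import sys

# The standing context carries one sentence per tool (a tiny "skill index").
SKILL_INDEX = {
    "python": "A Python interpreter is available; run it with flags as needed.",
}

def load_cli_help(cmd):
    """On first use, run `<cmd> --help` and return the text, so the agent can
    load the full usage docs into context on demand instead of carrying a
    pre-baked tool schema around in every prompt."""
    out = subprocess.run(list(cmd) + ["--help"], capture_output=True, text=True)
    return out.stdout or out.stderr

# Using the local Python interpreter as a stand-in for any unknown CLI:
help_text = load_cli_help([sys.executable])
print(help_text.splitlines()[0])    # the usage line is enough to get started
```

The contrast with MCP is that nothing here had to be registered ahead of time: one sentence advertises the tool, and the tool documents itself when asked.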
Lex Fridman
(02:40:16)
It’s interesting, um, I’m asking Perplexity MCP versus skills, because this kind of requires a hot take that’s quite recent, because your general view is MCPs are dead-ish. So MCP is a more structured thing. So if you listen to Perplexity here, MCP is “what can I reach?” So APIs, databases, services, files via protocol. So a structured protocol of how you communicate with a thing, and then skills is more “how should I work?” Procedures, helper scripts and prompts, often written in a kind of semi-structured natural language, right? And so technically skills could replace MCP if you have a smart enough model.
Peter Steinberger
(02:41:00)
I think the main beauty is, is that models are really good at calling Unix commands. So if you just add another CLI, that’s just another Unix command in the end. And MCP is… That has to be added in training. That’s not a very natural thing for the model. It requires a very specific syntax. And the biggest thing, it’s not composable. So imagine if I have a service that gives me weather data: the temperature, the average temperature, rain, wind and all the other stuff, and I get, like, this huge blob back. As a model, I always have to get the huge blob back. I have to fill my context with that huge blob and then pick what I want. There’s no way for the model to naturally filter unless I think about it proactively and add a filtering way into my MCP.
Peter Steinberger
(02:41:53)
But if I would build the same as a CLI and it would give me this huge blob, it could just add a jq command and filter itself, and then only get me what I actually need. Or maybe even compose it into a script to, like, do some calculations with the temperature and only give me the exact output, and… you have no context pollution. Again, you can solve that with, like, sub-agents and more charades, but it’s just workarounds for something that might not be the optimal way. It definitely was good that we had MCPs, you know, because it pushed a lot of companies towards building APIs, and now I can, like, look at an MCP and just make it into a CLI.
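Peter’s composability point can be made concrete: with an MCP-style tool the whole payload lands in the model’s context, while a CLI lets the model filter first, the way a jq pipe would. The weather payload and field names below are invented for illustration, and the jq step is emulated in Python:

```python
import json

# Hypothetical weather payload, as a service might return it.
blob = {
    "temperature": 21.5, "average_temperature": 18.2,
    "rain_mm": 0.4, "wind_kph": 12.0,
    "hourly": [{"hour": h, "temperature": 15 + h % 10} for h in range(24)],
}

# MCP-style: the entire blob is serialized straight into the context window.
mcp_context = json.dumps(blob)

# CLI-style: the model composes a filter before anything enters its context,
# e.g. `weather --json | jq .temperature`. We emulate that jq lookup here.
def jq_path(data: dict, path: str):
    """Emulate a simple `jq .a.b` field lookup."""
    for key in path.lstrip(".").split("."):
        data = data[key]
    return data

cli_context = json.dumps(jq_path(blob, ".temperature"))

print(len(mcp_context), len(cli_context))
```

The difference in the two lengths is exactly the “context pollution” he is describing: the CLI path keeps everything except the requested field out of the model’s window.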
Lex Fridman
(02:42:37)
Mm-hmm.
Peter Steinberger
(02:42:37)
But this inherent problem, that MCPs by default clutter up your context, plus the fact that most MCPs are not made well, in general makes it just not a very useful paradigm. There are some exceptions, like Playwright, for example, that requires state, and it’s actually useful. That is an acceptable choice.
Lex Fridman
(02:43:05)
So Playwright you use for browser use, which I think is c- already in OpenClaw is quite incredible, right?
Peter Steinberger
(02:43:11)
Yeah.
Lex Fridman
(02:43:12)
You can basically do everything, most things you can think of using browser use.
Peter Steinberger
(02:43:17)
That gets into the whole arc of: every app is just a very slow API now, whether they want it or not. And through personal agents, a lot of apps will disappear. You know, like I had a… I built a CLI for Twitter. I mean, I just reverse engineered their website and used the internal API, which is not very allowed.
Lex Fridman
(02:43:50)
It’s called Bird, short-lived.
Peter Steinberger
(02:43:53)
It was called Bird, because the bird had to disappear.
Lex Fridman
(02:43:57)
The, the wings were clipped.
Peter Steinberger
(02:43:59)
All they did is they just made access slower. Yeah, you’re not actually taking a feature away, but now, if your agent wants to read a tweet, it actually has to open the browser and read the tweet. And it will still be able to read the tweet. It will just take longer. It’s not like you are making something that was possible not possible. No. Now it’s just a bit slower. So it doesn’t really matter if your service wants to be an API or not. If I can access it in the browser… easy API. It’s a slow API.
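The “every app is a slow API” idea reduces to this: if a page renders data, an agent can parse it back out. A minimal sketch with Python’s stdlib HTML parser; the markup and class name are invented, not Twitter’s real page structure:

```python
from html.parser import HTMLParser

# Invented markup standing in for a rendered tweet page.
PAGE = '<html><body><div class="tweet-text">Ship it.</div></body></html>'

class TweetExtractor(HTMLParser):
    """Collect text inside any element whose class is 'tweet-text'."""

    def __init__(self):
        super().__init__()
        self.in_tweet = False
        self.texts = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("class") == "tweet-text":
            self.in_tweet = True

    def handle_endtag(self, tag):
        self.in_tweet = False

    def handle_data(self, data):
        if self.in_tweet:
            self.texts.append(data)

parser = TweetExtractor()
parser.feed(PAGE)
tweet = "".join(parser.texts)
```

Nothing here needed an official API: the rendered page itself is the interface, just a slower and more brittle one.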
Lex Fridman
(02:44:35)
Can you empathize with their situation? Like, what would you do if you were Twitter, if you were X? Because they’re basically trying to protect against other large companies scraping all their data.
Peter Steinberger
(02:44:45)
Yeah.
Lex Fridman
(02:44:46)
But in so doing, they’re cutting off like a million different use cases for smaller developers that actually want to use it for helpful cool stuff.
Peter Steinberger
(02:44:54)
I think a very low per-day baseline per account that allows read-only access would solve a lot of problems. There’s plenty, plenty of automations where people create a bookmark and then use OpenClaw to, like, find the bookmark, do research on it, and then send you an email-
Lex Fridman
(02:45:16)
Mm-hmm
Peter Steinberger
(02:45:16)
… with, like, more details on it or a summary. That’s a cool approach. I also want all my bookmarks somewhere to search. I would still like to have that.
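The read-only baseline Peter proposes is essentially a per-account daily quota. A toy sketch, with made-up limits and names:

```python
import datetime

class DailyReadQuota:
    """Allow up to `limit` read-only requests per account per calendar day."""

    def __init__(self, limit: int = 100):
        self.limit = limit
        self.counts: dict[tuple[str, datetime.date], int] = {}

    def allow(self, account: str, today: datetime.date) -> bool:
        key = (account, today)
        used = self.counts.get(key, 0)
        if used >= self.limit:
            return False  # over the free baseline: require auth or payment
        self.counts[key] = used + 1
        return True

quota = DailyReadQuota(limit=2)
day = datetime.date(2026, 1, 1)
results = [quota.allow("peter", day) for _ in range(3)]
```

A scheme like this would let bookmark-style personal automations run while still capping bulk scrapers, since the ceiling resets per account per day.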
Lex Fridman
(02:45:26)
So, read-only access for the bookmarks you make on X. That seems like an incredible application because a lot of us find a lot of cool stuff on X, we bookmark, that’s the general purpose of X. It’s like, holy shit, this is awesome. Oftentimes, you bookmark so many things you never look back at them.
Peter Steinberger
(02:45:40)
Yeah.
Lex Fridman
(02:45:40)
It would be nice to have tooling that organizes them and allows you to research it further.
Peter Steinberger
(02:45:44)
Yeah, I mean, and to be frank, I told Twitter proactively that, “Hey, I built this and there’s a need.” And they’ve been really nice, but also like, “Take it down.” Fair. Totally fair. But I hope that this woke the team up a little bit that there’s a need. And if all you do is make it slower, you’re just reducing access to your platform. I’m sure there’s a better way. Also, I’m very much against any automation on Twitter. If you tweet at me with AI, I will block you. No first strike. As soon as it smells like AI, and AI still has a smell.

AI slop

Lex Fridman
(02:46:31)
Mm-hmm.
Peter Steinberger
(02:46:32)
Especially on tweets. It’s very hard to tweet in a way that looks completely human.
Lex Fridman
(02:46:38)
Mm-hmm.
Peter Steinberger
(02:46:38)
And then I block. Like, I have a zero-tolerance policy on that. And I think it would be very helpful if, like, tweets done via API would be marked. Maybe there’s some special cases where… But there should be a very easy way for agents to get their own Twitter account. Um…
Lex Fridman
(02:47:04)
Mm-hmm.
Peter Steinberger
(02:47:07)
We need to rethink social platforms a little bit if we go towards a future where everyone has their agent, and agents maybe have their own Instagram profiles or Twitter accounts, so they can, like, do stuff on my behalf. I think it should very clearly be marked that they are doing stuff on my behalf and it’s not me. Because content is now so cheap. Eyeballs are the expensive part. And I find it very triggering when I read something and then I’m like, oh no, this smells like AI.
Lex Fridman
(02:47:41)
Yeah. Like, where is this headed in terms of what we value about the human experience? It feels like we’ll move more and more towards in-person interaction and we’ll just communicate. We’ll talk to our AI agent to accomplish different tasks, to learn about different things, but we won’t value online interaction, because there’ll be so much AI slop that smells, and so many bots, that it’s difficult.
Peter Steinberger
(02:48:15)
Well, if it’s smart, then it shouldn’t be difficult to filter. And then I can look at it if I want to. But yeah, this is, like, a big thing we need to solve right now. E- especially on this project, I get so many emails that are, let’s say nicely, agentically written.
Lex Fridman
(02:48:36)
Yeah.
Peter Steinberger
(02:48:36)
But I’d much rather read your broken English than your AI slop. You know, of course there’s a human behind it, and yet they, they prompt it. I’d much rather read your prompt than what came out. Um, I think we’re reaching a point where I value typos again.
Lex Fridman
(02:48:56)
Yeah.
Peter Steinberger
(02:48:56)
Like… Like, and I mean, it also took me a while to, like, come to the realization. On my blog, I experimented with creating a blog post with agents, and ultimately it took me about the same time to, like, steer the agent towards something I like. But it missed the nuances of how I would write it. You know, you can steer it towards your style, but it’s not gonna be all your style. So I completely moved away from that. Everything I blog is organic, handwritten, and maybe I use AI to fix my worst typos. But there’s value in the rough parts of an actual human.
Lex Fridman
(02:49:53)
Isn’t that awesome? Isn’t that beautiful? That now because of AI we value the raw humanity in each of us more.
Peter Steinberger
(02:50:02)
I also, I also realized this thing that I, I rave about AI and use it so much for anything that’s code, but I’m allergic if it’s stories.
Lex Fridman
(02:50:12)
Right. Yeah.
Peter Steinberger
(02:50:14)
Also, documentation, still fine with AI. You know, better than nothing.
Lex Fridman
(02:50:17)
And for now, it still applies in the visual medium too. It’s fascinating how allergic I am to even a little bit of AI slop in video and images. It’s useful, it’s nice if it’s like a little component of like-
Peter Steinberger
(02:50:32)
Or even, even those images. The, like, all these infographics and stuff, the-… they trigger me so hard.
Lex Fridman
(02:50:38)
Yeah.
Peter Steinberger
(02:50:39)
Like, it immediately makes me think less of your content. And it … They were novel for, like, one week and now it just screams slop.
Lex Fridman
(02:50:50)
Yeah.
Peter Steinberger
(02:50:51)
Even- even if people work hard on it, using … And I- I have some on my blog post, you know, in the- in the time where I- I explored this new medium. But now, they trigger me as well. It’s like, yeah, this is … This just screams AI slop. I-
Lex Fridman
(02:51:06)
What… I don’t know what that is, but I went through that too. I was really excited by the diagrams. And then I realized, in order to remove the hallucinations from them, you actually have to do a huge amount of work. And if you’re just using it to draw the better diagrams, great. And then I’m proud of the diagram. I’ve used them for, literally, kind of like you said, maybe a couple of weeks. And now I look at those, and I feel like I feel when I look at Comic Sans as a font, or something like this.
Lex Fridman
(02:51:32)
It’s like, “No, this is-“
Peter Steinberger
(02:51:35)
It’s a smell.
Lex Fridman
(02:51:35)
“… this is fake. It’s fraudulent. There’s something wrong with it.” And it…
Peter Steinberger
(02:51:41)
It’s a smell.
Lex Fridman
(02:51:42)
It’s a smell.
Peter Steinberger
(02:51:44)
It’s a smell.
Lex Fridman
(02:51:44)
And it’s awesome because it re- it reminds you that we know. There’s so much to humans that’s amazing and we know that. And we- we know it. We know it when we see it. And so that gives me a lot of hope, you know? That gives me a lot of hope about the human experience. It’s not going to be damaged by … It’s only going to be empowered as tools by AI. It’s not going to be damaged or limited or somehow altered to where it’s no longer human. So … Uh, I need a bathroom break. Quick pause. You mentioned that a lot of the apps might be basically made obsolete. Do you think agents will just transform the entire app market?

AI agents will replace 80% of apps

Peter Steinberger
(02:52:30)
Yeah. Uh, I noticed that on Discord, that people just shared what they build and what they use it for. And it’s like, why do you need MyFitnessPal when the agent already knows where I am? So it can assume that I make bad decisions when I’m at, I don’t know, Waffle House, what’s around here? Or briskets in Austin.
Lex Fridman
(02:52:57)
There’s no bad decisions around briskets, but yeah.
Peter Steinberger
(02:53:00)
No, that’s the best decision, honestly. Um-
Lex Fridman
(02:53:03)
Your agent should know that.
Peter Steinberger
(02:53:04)
But it can, like… It can modify my gym workout based on how well I slept, or if I have stress or not. Like, it has so much more context to make even better decisions than any of these apps ever could.
Lex Fridman
(02:53:18)
Mm-hmm.
Peter Steinberger
(02:53:19)
It could show me UI just as I like. Why do I still need an app to do that? Why should I pay another subscription for something that the agent can just do now? And why do I need my Eight Sleep app to control my bed when I can tell the agent to… You know, the agent already knows where I am, so he can, like, turn off what I don’t use.
Lex Fridman
(02:53:45)
Mm-hmm.
Peter Steinberger
(02:53:47)
And I think that will translate into a whole category of apps that I will just naturally stop using, because my agent can just do it better.
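The advantage described above, an agent adjusting a plan from context the app never sees, is just a function with more inputs. A toy sketch; the rules and thresholds are invented, not anything an actual fitness product does:

```python
def plan_workout(sleep_hours: float, stress_level: int) -> str:
    """Pick a workout intensity from sleep and stress context.

    stress_level: 0 (calm) to 10 (maxed out). Rules are illustrative only.
    """
    if sleep_hours < 6 or stress_level >= 8:
        return "recovery: stretching and a walk"
    if sleep_hours < 7.5 or stress_level >= 5:
        return "moderate: normal volume, skip max-effort sets"
    return "full session as planned"

rested = plan_workout(sleep_hours=8.0, stress_level=2)
wrecked = plan_workout(sleep_hours=5.0, stress_level=9)
```

A standalone app only ever sees the workout log; the agent can feed in sleep, stress, location, calendar, anything in context, which is the whole argument.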
Lex Fridman
(02:54:00)
I think you said somewhere that it might kill off 80% of apps.
Peter Steinberger
(02:54:04)
Yeah.
Lex Fridman
(02:54:05)
Don’t you think that’s a gigantic transformative effect on just all software development? So that means it might kill off a lot of software companies.
Peter Steinberger
(02:54:13)
Yeah. Um-
Lex Fridman
(02:54:16)
It’s a scary thing. So, like, do you think about the impact that has on the economy? On the ripple effects it has on society? Transforming who builds what tooling. It empowers a lot of users to get stuff done more efficiently, to get it done cheaper.
Peter Steinberger
(02:54:41)
It’s also new services that we will need, right? For example, I want my agent to have an allowance. Like, you solve problems for me, here’s like 100 bucks in order to solve problems for me. And if I tell you to order me food, maybe it uses a service. Maybe it uses something like rent-a-human to, like, just get that done for me.
Lex Fridman
(02:55:06)
Mm-hmm.
Peter Steinberger
(02:55:06)
I don’t actually care. I care about: solve my problem. There’s space for new companies to solve that well. Maybe not all apps disappear. Maybe some transform into being APIs.
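The allowance idea, “here’s 100 bucks, solve problems for me,” amounts to a spending guard in front of the agent’s purchases. A minimal sketch, hypothetical rather than an actual OpenClaw feature:

```python
class Allowance:
    """Cap what an agent may spend; every purchase must pass through here."""

    def __init__(self, budget: float):
        self.budget = budget
        self.spent = 0.0
        self.ledger = []  # audit trail of (purpose, amount) pairs

    def try_spend(self, amount: float, purpose: str) -> bool:
        if amount <= 0 or self.spent + amount > self.budget:
            return False  # declined: over allowance (or a nonsense amount)
        self.spent += amount
        self.ledger.append((purpose, amount))
        return True

    @property
    def remaining(self) -> float:
        return self.budget - self.spent

wallet = Allowance(budget=100.0)
ok_food = wallet.try_spend(35.0, "order dinner")
ok_task = wallet.try_spend(40.0, "rent-a-human errand")
too_much = wallet.try_spend(50.0, "concert tickets")
```

The ledger is the part a real service would care about: the user does not pick the vendor, but they can always see what the agent spent and why.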
Lex Fridman
(02:55:21)
So, basically, apps that rapidly transform into being agent-facing. So there’s a real opportunity for, like, Uber Eats, which we just used earlier today, and companies like this, of which there’s many. Who gets there fastest to being able to interact with OpenClaw in a way that’s the most natural, the easiest?
Peter Steinberger
(02:55:50)
Yeah. And also, apps will become APIs whether they want it or not. Because my agent can figure out how to use my phone. I mean, on the other side, it’s a little more tricky. On Android, people already do that. And then it’ll just click the Order Uber for Me button for me. Or maybe another service. Or maybe there’s an API I can call so it’s faster. Uh, I think that’s a space we’re just beginning to even understand what that means. And again, that was not something I thought of. Something that I discovered as people use this, and we are still so early. But yeah, I think data is very important. Like, apps that can give me data, but that also can be APIs. Why do I need a Sonos app anymore when I can…
Peter Steinberger
(02:56:44)
when my agent can talk to the Sonos?… Speakers directly. Like my cameras, there’s like a crappy app, but they have, they have an API, so my agent uses the API now.
Lex Fridman
(02:56:57)
So it’s gonna force a lot of companies to have to shift focus. That’s kind of what the internet did, right? You have to rapidly rethink, reconfigure what you’re selling, how you’re making money.
Peter Steinberger
(02:57:10)
Yeah, and some companies were really not like that. For example, there’s no CLI for Google, so I had to, like, do everything myself and build GAWK. That’s like a CLI for Google. And in the end, as the user, they have to give me the emails, because otherwise I cannot use their product. If I’m a company and I try to get Google data, Gmail, there’s a whole complicated process, to the point where sometimes startups acquire startups that went through the process, so they don’t have to work with Google for half a year to be certified to be able to access Gmail. But my agent can access Gmail because I can just connect to it. It’s still crappy because I need to, like, go through Google’s developer jungle to get a key, and that’s still annoying.
Peter Steinberger
(02:58:09)
But they cannot prevent me. And worst case, my agent just clicks on the, on the website and gets the data out that way.
Lex Fridman
(02:58:17)
Through browsers?
Peter Steinberger
(02:58:18)
Yeah. I mean, I, I watch my agent happily click the I’m not a robot button. And there’s this, this whole… That’s gonna be… That’s gonna be more heated. You see companies like Cloudflare that try to prevent bot access. And in some ways, that’s useful for scraping. But in other ways, if I’m, I’m a personal user, I want that. You know, sometimes I, I use Codex and I, I read an article about modern React patterns, and it’s like a Medium article. I paste it in and the agent can’t read it because they block it. So then I have to copy-paste the actual text. Or in the future, I’ll learn that maybe I don’t click on Medium because it’s annoying, and I use other websites that actually are agent friendly.
Peter Steinberger
(02:59:12)
So, uh-
Lex Fridman
(02:59:13)
There’s gonna be a lot of powerful, rich companies fighting back. So it’s really interesting. You’re the catalyst, the leader, and happen to be at the center of this kind of revolution that’s gonna completely change how we interact with services, with the web. And so, like, there’s companies like Google that are gonna push back. I mean, every major company you could think of is gonna push back.
Peter Steinberger
(02:59:39)
Even… Yeah, even search. Um, I now use, I think Perplexity or Brave as providers because Google really doesn’t make it easy to use Google without Google. I’m not sure if that’s the right strategy, but I’m not Google.
Lex Fridman
(02:59:58)
Yeah, there’s a nice balance from a big company perspective, ’cause if you push back too much for too long, you become Blockbuster and you lose everything to the Netflixes of the world. But some pushback during a revolution is probably good.
Peter Steinberger
(03:00:11)
Yeah. But you see that, that… Like, this is something that the people want.
Lex Fridman
(03:00:14)
Right.
Peter Steinberger
(03:00:14)
So-
Lex Fridman
(03:00:15)
Yes.
Peter Steinberger
(03:00:16)
If I’m on the go, I don’t wanna open a calendar app. I just… I wanna tell my agent, “Hey, remind me about this dinner tomorrow night,” and maybe invite two of my friends, and then maybe send a WhatsApp message to my friend. And I don’t want or need to open apps for that. I think we’ve passed that age, and now everything is, like, much more connected and fluid, whether those companies want it or not. And I think the right companies will find ways to jump on the train, and other companies will perish.

Will AI replace programmers?

Lex Fridman
(03:00:55)
You got to listen to what the people want. We talked about programming quite a bit, and a lot of folks that are developers are really worried about their jobs, about their… About the future of programming. Do you think AI replaces programmers completely? Human programmers?
Peter Steinberger
(03:01:11)
I mean, we’re definitely going in that direction. Programming is just a part of building products. So maybe AI does replace programmers eventually. But there’s so much more to that art. Like, what do you actually wanna build? How should it feel? How’s the architecture? I don’t think agents will replace all of that. Yeah, like, just the actual art of programming, it will stay there, but it’s gonna be like knitting. You know? Like, people do that because they like it, not because it makes any sense. I read this article this morning about someone saying that it’s okay to mourn our craft. And I can…
Peter Steinberger
(03:02:04)
A part of me very strongly resonates with that because in my past I, I spent a lot of time tinkering, just being really deep in the flow and just, like, cranking out code and, like, finding really beautiful solutions. And yes, in a way it’s, it’s sad because that will go away. And I also get a lot of joy out of just writing code and being really deep in my thoughts and forgetting time and space and just being in this beautiful state of flow. But you can get the same state of flow… I get a similar state of flow by working with agents and building and thinking really hard about problems. It is different-… but… And it’s okay to mourn it, but I mean, that’s not something we can fight. Like, there is… the world for a long time had a…
Peter Steinberger
(03:03:06)
there was a lack of intelligence, if you see it like that, of people building things, and that’s why salaries of software developers reached stupidly high amounts, and that will go away. There will still be a lot of demand for people that understand how to build things. It’s just that all this tokenized intelligence enables people to do a lot more, a lot faster. And it will be even faster and even more, because those things are continuously improving. We had similar things when… I mean, it’s probably not a perfect analogy, but when we created the steam engine, and they built all these factories and replaced a lot of manual labor, and then people revolted and broke the machines.
Peter Steinberger
(03:04:04)
Um, I can relate: if you very deeply identify as a programmer, it’s scary and it’s threatening, because what you like and what you’re really good at is now being done by an entity, soulless or not. But I don’t think you’re just a programmer. That’s a very limiting view of your craft. You are still a builder.
Lex Fridman
(03:04:40)
Yeah, there’s a couple of things I want to say. So one is, I never… As you’re articulating this beautifully, I no- I’m realizing I never thought I would… the thing I love doing would be the thing that gets replaced. You hear these stories about these, like you said, with the steam engine. I’ve, I’ve spent so many, I don’t know, maybe thousands of hours poring over code and putting my heart and soul and, like, and just, like, some of my most painful and happiest moments were alone behind… I, I was an Emacs person for a long time. Man, Emacs. And, and then there’s an identity and there’s meaning, and there’s… Like, when I walk about the world, I don’t say it out loud, but I think of myself as a programmer. And to have that in a matter of months…
Lex Fridman
(03:05:31)
I mean, like you mentioned, April to November, it really is a leap that happened, a shift that’s happening. To have that completely replaced is painful. It’s truly painful. But I also think programmers, builders more broadly… what is the act of programming? I think programmers are generally best equipped at this moment in history to learn the language, to empathize with agents, to learn the language of agents. To feel the CLI.
Peter Steinberger
(03:06:10)
Yeah.
Lex Fridman
(03:06:11)
Like, like to understand what is the thing you need, you the agent, need to do this task the best?
Peter Steinberger
(03:06:21)
I think at some point it’s just gonna be called coding again, and it’s just gonna be the new normal.
Lex Fridman
(03:06:25)
Yeah.
Peter Steinberger
(03:06:25)
And yet, while I don’t write the code, I very much feel like I’m in the driver’s seat and I am, I am writing the code, you know? It’s just-
Lex Fridman
(03:06:37)
You’ll still be a programmer. It’s just the activity of a programmer is, is different.
Peter Steinberger
(03:06:41)
Yeah, and because on X, the bubble, I mean, is mostly positive. On Mastodon and Bluesky, I don’t… I also use it less, because oftentimes I got attacked for my blog posts. And I had stronger reactions in the past; now I can sympathize with those people more, ’cause in a way I get it. In a way, I also don’t get it, because it’s very unfair to grab onto the person that you see right now and unload all your fear and hate. It’s gonna be a change and it’s gonna be challenging, but it’s also… I don’t know, I find it incredibly fun and gratifying. And I can use the new time to focus on many more details. I think the level of expectation of what we build is also rising, because the default is now so much easier, so software is changing in many ways.
Peter Steinberger
(03:07:45)
There’s gonna be a lot more. And then you have all these people that are screaming, “Oh yeah, but what about the water?” You know? Like, I did a conference in Italy about the state of AI, and my whole motivation was to push people away from: don’t see yourself as an iOS developer anymore. You’re now a builder, and you can use your skills in many more ways. Also because apps are slowly going away. People didn’t like that. Like, a lot of people didn’t like what I had to say. And I don’t think I was being hyperbolic, I was just like, “This is how I see the future.” Maybe this is not how it’s going to be, but I’m pretty sure a version of that will happen.
Peter Steinberger
(03:08:30)
And the first question I got was, “Yeah, but what about the insane water use of data centers?” But then you actually sit down and do the maths, and for most people, if you just skip one burger per month, that compensates the CO2 output, or, like, the water use, in the equivalent of tokens. I mean, the maths is tricky, and it depends: if you add pre-training, then maybe it’s more than just one patty… but it’s not off by a factor of 100, you know? Or, like, golf is still using way more water than all data centers together. So are you also hating people that play golf? Those people grab onto anything that they think is bad about AI without seeing the potential things that might be good about AI.
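The burger-versus-tokens claim can be checked with back-of-envelope arithmetic. The constants below are rough public estimates supplied for illustration (beef’s water footprint is often cited in the range of roughly 1,500 to 2,500 liters per patty; per-query inference water estimates vary from a few milliliters to tens of milliliters), not figures from the conversation:

```python
# Back-of-envelope: how many LLM queries equal one burger's water footprint?
# Both constants are rough, contestable estimates chosen for illustration,
# and pre-training water use is deliberately excluded, as Peter notes.

BURGER_WATER_L = 1_700.0   # assumed water footprint of one beef patty, liters
QUERY_WATER_L = 0.03       # assumed water per LLM query (~30 mL), liters

queries_per_burger = BURGER_WATER_L / QUERY_WATER_L
print(f"Skipping one burger ~ {queries_per_burger:,.0f} queries of water use")
```

Even if either constant is off by several times in either direction, the ratio stays in the tens of thousands, which is the “not off by a factor of 100” point.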
Lex Fridman
(03:09:23)
Mm-hmm.
Peter Steinberger
(03:09:24)
And I’m not saying everything’s good. It’s certainly gonna be a very transformative technology for our society.
Lex Fridman
(03:09:32)
To steelman the criticism in general, I do wanna say, in my experience with Silicon Valley, there’s a bit of a bubble, in the sense that there’s a kind of excitement and an over-focus on the positive that the technology can bring.
Peter Steinberger
(03:09:54)
Yeah.
Lex Fridman
(03:09:55)
And… which is great. It’s great to focus on… Not to be paralyzed by fear and fear-mongering and so on. But there’s also, within that excitement, and within everybody talking just to each other, a dismissal of the basic human experience across the United States and the Midwest, across the world. Including the programmers we mentioned, including all the people that are gonna lose their jobs, including the measurable pain and suffering that happens at the short-term scale when there’s change of any kind. Especially large-scale transformative change that we’re about to face, if what we’re talking about materializes. And so having a bit of that humility and awareness about the tools you’re building: they’re going to cause pain.
Lex Fridman
(03:10:43)
They will long term hopefully bring about a better world, and even more opportunities-
Peter Steinberger
(03:10:48)
Yeah
Lex Fridman
(03:10:48)
… and even more awesomeness. But having that kind of, like, quiet moment of respect for the pain that is going to be felt. Not enough of that is, I think, done, so it’s good to have a bit of that.
Peter Steinberger
(03:11:07)
And then I also have to put against that some of the emails I got where people told me they have a small business, and they’ve been struggling. And OpenClaw helped them automate a few of the tedious tasks, from collecting invoices to, like, answering customer emails, and that then freed them up and, like, brought them a bit more joy in their life.
Lex Fridman
(03:11:30)
Mm-hmm.
Peter Steinberger
(03:11:31)
Or, or some emails where they told me that OpenClaw helped their disabled daughter. That she’s now empowered and feels she can do much more than before. Which is amazing, right? Because you could, you could do that before as well. The technology was there. I didn’t, I didn’t invent a whole new thing, but I made it a lot easier and more accessible, and that did show people the possibilities that they previously wouldn’t see. And now they apply it for good.
Lex Fridman
(03:12:02)
Mm-hmm.
Peter Steinberger
(03:12:03)
Or, like, also the fact that, yes, I suggest the latest and best models, but you can totally run this on free models. You can run this locally. You can run this on Kimi or other models that are way more accessible price-wise, and still have a very powerful system that might otherwise not be possible. Because other things like, I don’t know, Anthropic’s Cowork is locked into their space, so it’s not all black and white. I got a lot of emails that were heartwarming and amazing. And, I don’t know, it just made me really happy.
Lex Fridman
(03:12:48)
Yeah, there’s a lot… It has brought joy into a lot of people’s lives. Not just, not just programmers. Like a lot of people’s lives. It’s, it’s, it’s beautiful to see. What gives you hope about this whole thing we have going on with human civilization?

Future of OpenClaw community

Peter Steinberger
(03:13:03)
I mean, I inspired so many people. There’s this whole builder vibe again. People are now using AI in a more playful way and are discovering what it can do and how it can, like, help them in their life. And creating new places that are just sprawling with creativity. I don’t know. Like, there’s, like, ClawCoin in Vienna. There’s like 500 people. And there’s such a high percentage of people that, uh, want to present, which is to me really surprising, because usually it’s quite hard to find people that want to, like, talk about what they built. And now there’s an abundance. So that gives me hope that we can figure shit out.
Lex Fridman
(03:14:00)
And it makes it accessible to basically everybody.
Peter Steinberger
(03:14:04)
Yeah.
Lex Fridman
(03:14:05)
Just imagine all these people building, especially as you make it simpler and simpler, more secure. It’s like anybody who has ideas and can express those ideas in language can build. That’s crazy.
Peter Steinberger
(03:14:22)
Yeah, that’s ultimately power to the people, and one of the beautiful things that come out of AI. Not just a slop generator.
Lex Fridman
(03:14:36)
Well, Mr. Clawfather, I just realized when I said that in the beginning, I violated two trademarks, because there’s also the Godfather. I’m getting sued by everybody. You’re a wonderful human being. You’ve created something really special, a special community, a special product, a special set of ideas. Plus, the entire… the humor, the good vibes, the inspiration of all these people building, the excitement to build. So I’m truly grateful for everything you’ve been doing and for who you are, and for sitting down to talk with me today. Thank you, brother.
Peter Steinberger
(03:15:14)
Thanks for giving me the chance to tell my story.
Lex Fridman
(03:15:17)
Thanks for listening to this conversation with Peter Steinberger. To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback and so on. And now let me leave you with some words from Voltaire. “With great power comes great responsibility.” Thank you for listening, and hope to see you next time.

Transcript for GSP teaches Lex Fridman how to street fight

This is a transcript of “GSP teaches Lex Fridman how to street fight”.
The timestamps in the transcript are clickable links
that take you directly to that point in
the main video. Please note that the transcript is
human generated, and may have errors.

Georges St-Pierre
(00:00:00)
In a street fight, I would rather- …fight Francis Ngannou than fight Bas Rutten. In a street fight.
Lex Fridman
(00:00:06)
Let me tell you first that I’ve been around. I’ve been a bouncer for many, many years. Bang! Bang! Bang! It’s a street fight. Everybody underestimates the kick in the groin. It’s boom, that’s the first thing to do. I follow up, bang, bang, bang. Right away after that, danga-da-danga-da-dang. See what I’m doing? Boom! That’s the left elbow right there.
Georges St-Pierre
(00:00:33)
Yeah, so very often people ask me the difference between a street fight and a fight in mixed martial arts. The difference is in the street, there is no referee. And there’s an instigator, and there is the other person. The best thing in a street fight is always to not be the instigator because you have the element of surprise. So if you’re in a heated argument with someone and you feel that you’re potentially going to be in a fight, the best thing to do is to never show your center line, to always go on the side and put your hands up like this. Now, that’s one of the best things to do. It’s a self-defense tactic that is used all around the world. Because from there, the distance that I have to travel to cause a lot of damage to him is very minimal.
Georges St-Pierre
(00:01:27)
You know, it’s very short. I can go boom. I can go boom. And I’m protected because if he ever tried to do anything, my hands are already up, and I’m ready to respond to any aggression. So the first thing, if you’re in an argument and you feel the heat is rising, is to hit first. You don’t want to fight, but you want to hit first. You want to hit first, you know? So it’s either boom, hit first, depending on the situation. If you’re someone who is much less physically strong than the aggressor, you can use the eyes, the genitals, the neck, you know? And then you can leave the scene. However, if I’m like this, the minute he touches me, he declares war. Now I can go and perform a self-defense move.
Lex Fridman
(00:02:25)
So striking, not wrestling?
Georges St-Pierre
(00:02:27)
Yes. It’s always striking first and leaving the scene, if you’re, for example, a kid or someone who doesn’t have the physical strength of your aggressor. Of course, I’m a UFC champion— …so that does not apply to me. But the key is, tactically, we always use the element of surprise, and when you strike, strike first. And strike to cause as much damage as possible. The eyes, you can do the neck, you can do the genitals. And then after, you can leave the scene. That’s the goal of having the element of surprise.
Lex Fridman
(00:03:08)
Okay, you were talking about knives. What about if weapons are involved, run faster?
Georges St-Pierre
(00:03:13)
So weapons are very important. If someone has a weapon and attacks me for my money, I give him my money even if I’m Georges St-Pierre and I’m a UFC champion. However, if someone puts a knife to my throat here and he’s telling me to go in the trunk… now, I don’t want to go in the trunk because I know it’s a bad ending. So things that I can do first is always make sure that I try to keep my hands as close as possible to the weapon. And I try to be at as close range as possible. I can act like I want to—
Lex Fridman
(00:03:50)
Look scared?
Georges St-Pierre
(00:03:51)
Yeah. “Please, please, please,” boom. See, here I use my body, and then I can go and break, you know? So the idea is to use your entire body to deflect the weapon. So if the weapon is like this and the blade is coming out this way, I use the element of surprise. You see, I use my body, not only grabbing him like this, so if he tries to come back with the knife, it’s solid, and I can go and break. If the blade is pointing the other side, it’s something here, here, and here. Here I can use my body always to smother the weapon and—
Lex Fridman
(00:04:29)
Controlling the wrist, yeah. But if it’s out here…
unassigned
(00:04:32)
It’s through his clothes.
Georges St-Pierre
(00:04:32)
If it’s out here, and yes, exactly. There’s too much distance. You want to make sure you get close to the weapon because that’s what can cause the most damage. This is very important. There are other situations. Let’s say you’re a kid or someone comes to grab you by the body. What I can do is grab the head and put my fingers inside the eyes; that will make my opponent release me immediately. Then I can go and leave, you know?
Lex Fridman
(00:05:06)
Like thumb in? Like thumb?
Georges St-Pierre
(00:05:08)
Yeah, thumb in the eyes. You push in the eyes.
Lex Fridman
(00:05:11)
Blind them.
Georges St-Pierre
(00:05:12)
There are no rules. The eyes are always my favorite choice to go for because if you cannot see, it’s very hard to fight. And normally the reflex for most people, when they can’t see, they grab their eyes, you know? So it releases the grip.
Lex Fridman
(00:05:32)
I’m now going to ask you about the tie because I think you’re wrong still about that. I think it’s possible to use it as a… same as for a head snatch, like this kind of situation, to choke.
Georges St-Pierre
(00:05:45)
I think it could be an advantage if it’s a fake tie. If it’s something that can go, like it can—
Lex Fridman
(00:05:53)
Clip off?
Georges St-Pierre
(00:05:53)
Like a tail of a reptile that can go. So if you try to pull my tie, it comes out, and now I know I get a head start.
Lex Fridman
(00:06:02)
Element of surprise.
Georges St-Pierre
(00:06:03)
Exactly, it’s all about the element of surprise. You want to strike first; the element of surprise in the street.
Lex Fridman
(00:06:08)
Georges, thank you so much for talking today.
Georges St-Pierre
(00:06:11)
My pleasure.
Lex Fridman
(00:06:11)
Thank you for looking sharp.
Georges St-Pierre
(00:06:13)
Man in black, baby!
Lex Fridman
(00:06:14)
Man in black.

Transcript for 1984 by George Orwell | Lex Fridman

This is a transcript of “1984 by George Orwell | Lex Fridman”.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the video.
Click link to jump approximately to that part in the transcript:

Intro

Lex Fridman
(00:00:00)
“There was truth, and there was untruth. And if you clung to the truth, even against the whole world, you were not mad.” 1984 by George Orwell is one of the most impactful books ever written. It has been widely used and misused in political discourse by all kinds of ideologues. Into that discourse entered terms like Big Brother, thoughtcrime, Doublethink, Newspeak, Thought Police, and Orwellian, strangely enough, as a synonym for the very thing that the author, Orwell, was against. It’s been translated into over 65 languages, has sold over 30 million copies, and has been banned in many countries, especially authoritarian regimes. It was banned under Stalin, and as recently as 2022 in Belarus. In this video, I’ll give a quick summary with spoilers and a few takeaways.

World of 1984

Lex Fridman
(00:00:55)
I’d like to try to make it somewhat interesting to people who both have and have not read the book. Let’s see how it goes. The world in the book 1984 is a dystopian future society, a nation, maybe you can say superstate named Oceania. It’s fully controlled by a totalitarian political party called Ingsoc. It’s led by Big Brother who, as we might discuss, may or may not be a real person. He might just be a symbol used by the party. The party wants only to increase its power, also something we might talk about. It uses technology, telescreens, for mass surveillance. It’s creating a new language called Newspeak, which removes words from English that could lead to rebellion.
Lex Fridman
(00:01:38)
It uses Doublethink to control thought by, perhaps you could say, forcing you to hold contradictory beliefs and accept them as true. If not, the Thought Police arrest you for committing a thoughtcrime. Examples of Doublethink are “War is peace,” “Freedom is slavery,” and “Ignorance is strength.” And finally, the party constantly rewrites history. As the quote goes, “Who controls the past controls the future. Who controls the present controls the past.” There are four ministries. The Ministry of Truth is responsible for propaganda and, like I said, rewriting history. The Ministry of Love is responsible for brainwashing people through torture. The Ministry of Plenty is responsible for rationing food, supplies, and goods.
Lex Fridman
(00:02:27)
And the Ministry of Peace, of course, is responsible for maintaining a constant state of war. Society is divided into three levels: the Inner Party, the Outer Party, and the Proles. The term stands for, I guess, proletariats; it’s the working class. The Inner Party’s tiny. The Outer Party’s a little bit bigger, and the majority of the people—I forget what the percentage is, maybe 80%—are the Proles, the working class. There are several key characters. Winston, the main character, is a low-ranking member of Ingsoc. He works at the Ministry of Truth where he rewrites history. Julia is a girl who Winston falls in love with, and she with him.
Lex Fridman
(00:03:11)
They have sex, and this is maybe a good place to mention that love and passionate sex are forbidden in this society. “Goodsex” I think is a term under Newspeak; it’s the kind of sex that leads to procreation, which is the only kind allowed and the only kind that’s “good.” O’Brien is another central character. He’s the member of the Inner Party that convinces Winston he’s part of the Brotherhood, which is a lie, and he eventually is the man who tortures Winston and breaks his mind, breaks his heart. Big Brother and Emmanuel Goldstein are these symbolic characters that we never actually get to meet. They may or may not exist.

Love

Lex Fridman
(00:03:59)
Big Brother is the head of the party Ingsoc, and Emmanuel Goldstein is the leader of the so-called Brotherhood, which is this mysterious group that lurks in the shadows and works to overthrow the party. Again, they may or may not exist. We’ll maybe talk about the importance of that in a totalitarian state. So, a few key takeaways. I’ll try to do my best—I have disparate notes that I took for myself—to integrate them together to make some cohesive thoughts. Part of the reason I wanted to do this is that while I have read 1984 many times, and many of the books on the reading list I’ve read many times, I haven’t often really concretized my thoughts about them.
Lex Fridman
(00:04:52)
I just take the journey and let the thoughts wander around in the background as I live my life. I wanted to put them on paper and maybe share them with others to see what they think my concrete takeaways are from the book, if I could try to convert them into words. So the first one for me, especially later in life as I’ve been reading this book, is that when everything else or most things that make you human are taken away by a totalitarian state, the last thing that’s left, which is the most difficult to take away, is love. Love for other human beings, love for life itself. That’s the little flame from which hope springs. The key revolutionary act is the act of love.
Lex Fridman
(00:05:49)
So when the ability to speak is taken away, when the ability to think rational thoughts is taken away, the last thing that’s left, and the thing that ultimately gives hope, is love. That’s a big takeaway for me. The note that Julia gives to Winston reading “I love you” is the kind of revolutionary act that leads to a society beyond the one they exist in. I think a lot of the book has an interesting hypocrisy to it, where the main character, Winston, is almost in an animalistic way obsessed with destroying the state in rebellion and revolution. But I think love is the thing that allows you to believe in a place beyond the state, in believing that you can build something better, versus just destroying the thing you’re in.
Lex Fridman
(00:06:51)
I think you have to be careful as a revolutionary not to obsess 100% with destruction. Because beyond destruction, there could be chaos that leads to something much worse. I think love is the basic human thing that connects all of us, the messy thing that connects all of us, that allows you to build a better society after the totalitarian one is overthrown. What else did I want to say? There’s an interesting tension there between love and lust. I think there’s a quote that pure love or pure lust was impossible or forbidden. “Pure” here meaning unadulterated, uncensored intensity of feeling, maybe intimacy.
Lex Fridman
(00:07:44)
And there was an interesting question raised by the book, both by Winston and Julia: what is ultimately the most powerful act of rebellion? Is it between us humans when everything is forbidden? Is it animalistic like sex? Just lust for another human? Or is it love? The kind of love you have for a romantic partner, but even love for family and love for friends. I don’t know. I think the book almost claims that it is sex, but I think what the book also shows is that if sex is your manifestation of rebellion, that ultimately leads to something that doesn’t last. That ultimately leads to a focus on destruction versus building beyond the horizon when the state falls. So, some quotes from Winston on this.
Lex Fridman
(00:08:42)
“The more men you’ve had sex with…” Julia admitted to having sex with quite a lot of people. He says, “The more men you’ve had sex with, the more I love you. I hate purity. I hate virtue. I want everyone to be corrupt to the bone.” This kind of rubbed me the wrong way because, again, this seems to be obsessed with the hatred towards the state versus a longing and a hope—which I think hope is really important here—a hope for a better future beyond the state. Again, another quote from the book: “Their embrace had been a battle, the climax a victory. It was a blow struck against the Party. It was a political act.” So there, again, I think sex is seen as a political act of rebellion. I think that’s not the deeply human thing here.
Lex Fridman
(00:09:31)
The deeply human thing is, again, the act of love. It’s a source of hope; it’s the catalyst for building a better future beyond the revolution. An interesting side note here—and there could be a million interesting side notes, and I’m desperately trying not to go on a million tangents, to hold myself together and stay focused—is on family. There’s all kinds of love, and I think family love is a really powerful bond that connects us, and that’s one of the things that totalitarian states really go after.
Lex Fridman
(00:10:06)
And I should mention, I’m loosely using the terms authoritarian and totalitarian here. To me, authoritarian means there’s a government with complete centralized control of political affairs. A totalitarian state is beyond that; it is complete control of not just politics but also social, economic, everything. Nazi Germany is an example of that, I think, where there’s just complete control of every single thing, from the war effort to social interactions, the rules that govern social interaction, the press, all that kind of stuff.
Lex Fridman
(00:10:57)
So I think this book is more about, at least in my definition of the term, totalitarianism. Anyway, as I was saying about family, I think the way they destroy family is, one, of course with your romantic partner by forbidding passion—passionate sex, but really just passion and longing for another human being in that romantic way. And they also really reward and encourage children at a young age; they indoctrinate them to turn their parents in for thoughtcrime, whether real or not, which of course is a silly idea because there’s no notion of truth. You can just accuse anyone of anything and they’re guilty just by existing. So that’s a way to attack the family.
Lex Fridman
(00:11:43)
And I should also have mentioned on the topic of love that I think the goal of the Party, the final destination as described by O’Brien through the process of torture, is to break your mind, heart, and soul completely so that the only love you can have—and it could be felt as a pure love—is for Big Brother. This is the kind of thing you see in North Korea, where the only love you’re allowed to have, the remaining inklings of feeling that might still exist in you, you can channel only not towards family, romantic partners, or friends, but towards this leader, this godlike messianic figure. In this case, one who may or may not exist.

Hate

Lex Fridman
(00:12:32)
In all cases, that figure, while there is a human associated with it, is really much bigger than the human, and that’s the only love you’re allowed to have. So the other takeaway I have is on the topic of hate. I think all humans have the capacity, almost an animalistic craving, for hate of the “other,” the enemy. Whether it’s individuals like Emmanuel Goldstein or nations like Eurasia and Eastasia—which are the two other superstates described in this book—they’re constantly at war with each other. Again, the fascinating thing about the way this book is written is you don’t know if Eurasia or Eastasia even exist. You really don’t know what is true beyond the local interactions of the main character.
Lex Fridman
(00:13:28)
And that, I think, is the point. When you don’t really know, there’s no steady footing on which to construct a worldview from which you can have hope for a better future. This animalistic craving for hate, especially when we’re in crowds, is most powerfully illustrated in the “Two Minutes Hate” practiced by that society. The quote is, “The horrible thing about the Two Minutes Hate was not that one was obliged to act a part, but that it was impossible to avoid joining in. Within thirty seconds any pretence was always unnecessary.
Lex Fridman
(00:14:13)
A hideous ecstasy of fear and vindictiveness, a desire to kill, to torture, to smash faces in with a sledge-hammer, seemed to flow through the whole group of people like an electric current, turning one even against one’s will into a grimacing, screaming lunatic. And yet the rage that one felt was an abstract, undirected emotion which could be switched from one object to another like the flame of a blowlamp.” That’s the point: you get the crowd together, and you get them to hate Goldstein or Eurasia or Eastasia. You get them to hate anything. And that feeling, that drug, that mass hypnosis, can be directed by the state in any direction.
Lex Fridman
(00:15:02)
And because you have complete control of history, you can direct it on a day-by-day basis towards any target. As long as the hate is catalyzed through these kinds of rituals, it can overpower the individualistic feeling of love we have for each other. So that hate is a more animalistic desire. I don’t know what to make of it. And of course, it’s also important to say that this book was intended originally by Orwell as a satire, although a satire that has quite a lot of torture at the end and doesn’t seem to have much humor. But I think if you read it as a satire, that’s the best way to understand its relevance in our society today.
Lex Fridman
(00:15:53)
Because a lot of things, like the Two Minutes Hate, are almost a caricature of what hate looks like in a mass gathering. But if you take it as a caricature, it can reveal some of the elements that already exist in human nature that we should be very cautious about. It reveals the very thing that, if not monitored by ourselves, can result in a slippery slope that leads to tribalism, the destruction of other groups, and then control of the collective intelligence of our species through a totalitarian state. I think there are elements of this on display in social media today, though I don’t want to overstate it.
Lex Fridman
(00:16:44)
I think just like comparing things to Hitler, comparing things to 1984 is a reach in most cases. But social media does reveal this kind of mass hysteria, this capacity of humans to be outraged based on tribalism. So we have to understand it. We have to resist giving into it on the individual level. And I do believe we have the responsibility to create technology that helps us resist it, that incentivizes us not to be cruel to each other just because the people in whatever tribe we define ourselves in are being cruel to a particular person or group. Another takeaway I have is about power. Ingsoc, the totalitarian state, wants only one thing, and that is power. Power is both the means and the end. Absolute power.

Power

Lex Fridman
(00:17:35)
As O’Brien describes in the torture part of the book: “The real power, the power we have to fight for night and day, is not power over things, but power over men. Power is inflicting pain and humiliation. Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing. Power is not a means, it is an end. One does not establish a dictatorship in order to safeguard a revolution; one makes the revolution in order to establish a dictatorship. The object of persecution is persecution. The object of torture is torture. The object of power is power.” This, of course, is another aspect of human nature: the will to power and the tendency of that power to corrupt.
Lex Fridman
(00:18:36)
O’Brien says also, “The weariness of the cell is the vigour of the organism.” Through the torture and breaking of the individual, the individual doesn’t matter. What matters is the organism. There’s been a lot of brilliant comments throughout social media and on Reddit—I just want to highlight something about this because I had the exact same feeling as I was rereading it. There’s a comment from a Reddit user whose name is BraveSky6764.
Lex Fridman
(00:19:13)
He said the conversation between Lex and Michael Levin, who is a brilliant biologist and engineer, came to mind when O’Brien made an analogy to an organism which survives even as the individual cells pass away, and the great purges are analogous to the cutting of a fingernail. If you see society as an organism—which I think is the way a totalitarian state sees it—then the destruction of a large percentage of that society, the murder, the torture, and all kinds of atrocities and genocide become “justifiable” as long as the organism flourishes. That’s how you get to the ideas Stalin had: it’s okay to break a few eggs to make an omelet. This devaluation of a human being as having fundamental importance in a society…
Lex Fridman
(00:20:23)
is a slippery slope into atrocities. It’s not just deeply unethical from our understanding of morals and ethics; it is also very unproductive. It destroys the human spirit, and the human spirit is essential for building a great society of constant progress. I think that’s also one of the other messages of the book, is about utopia—that totalitarianism results when you chase perfection, when you present this idea of utopia. There is no utopia; there is no perfect society. I think, at least for me, that’s the takeaway. I think the optimal state of being for an individual and for a state is constant change and constant turnover.
Lex Fridman
(00:21:09)
In the case of a state, it’s a constant turnover of leaders and ideas, always hopefully making progress towards a better world. But it’s always going to be messy. Perfection only exists in an oppressive state. Perfection only exists when you remove the basic humanity of the individuals that make up that state, when you destroy the human spirit or suppress all freedoms. Freedom is going to be messy and chaotic, but that freedom, ultimately, in the long arc of history, is going to create progress.
Lex Fridman
(00:21:48)
So yes, as the Redditor BraveSky6764 says, that does give you a perspective of a biological system made of living organisms. Each one of us is made up of living organisms, and we take for granted all the “atrocities” happening there; we don’t seem to give a damn. I think that’s a good metaphor. If you want to put yourself in the mind of the Inner Party, of Big Brother, or the people in power, I think most, if not all of them, see themselves as doing good for society. They are able to justify things the way we justify the death of different cells in our body.
Lex Fridman
(00:22:35)
You don’t even think of them as worthy of consideration. You don’t think of them as living beings having the same value as you. That’s one of the really powerful ideas at the founding of the United States: that all men are created equal, that there’s an equal worth to a human being no matter who they are. That idea, as flawed as its implementations have been, is a really powerful and non-trivial idea, and it resists the drug of totalitarianism and power. I do believe that on the topic of power and politics, 1984 has been misused by political ideologues.
Lex Fridman
(00:23:23)
I’ve seen it, for example, when conservatives in the United States have used 1984 to call left-wing policies “Orwellian.” I think that’s an overstatement, of course used for dramatic effect, but it should at least be said that Orwell was a democratic socialist. 1984 is not a criticism of socialism; it’s a criticism of totalitarianism. I think the point is a warning that all political ideologies can succumb to the allure of power and be corrupted by it. People on both the left and the right in the United States can be corrupted by power. This one-way criticism of policies as Orwellian is a convenient shorthand, but the reality is all politicians are capable of…
Lex Fridman
(00:24:28)
creating an Orwellian world. And I think one of the things that is highlighted in the book very well is the hypocrisy of Winston. When O’Brien asks Winston what he’s willing to do to overthrow the Party, Winston admits he is willing to commit atrocities. He’s willing to do evil unto children, to commit murder, anything. This is a powerful illustration that both the totalitarian state and a blind, immoral rebellion against it can be evil. This is where I return to love as the thing that carries hope for a world beyond this battle for freedom. You have to have that.
Lex Fridman
(00:25:23)
Otherwise, the Orwellian state and the resistance to an Orwellian state can both destroy basic human rights and freedoms. I think in the character of Winston, that’s illustrated well. And I should also mention that there’s interesting writing… Now, I’m obviously not a scholar of Orwell, and there are a lot of books that have been written, and I should probably recommend them somewhere. There are just great books written on 1984, on Orwell, on the historical context in which he was operating and all that kind of stuff. But as far as I see, Orwell, both with 1984 and in his own politics, was not espousing the complete opposite of totalitarianism.

Orwell

Lex Fridman
(00:26:06)
There is, again, democratic socialism—that there is value to the connection between human beings, that you have to lean on each other, help each other, that society is fundamentally more a cohesive collective than a completely disparate set of sovereign individuals. It’s both. And I think he was torn about that idea, because in order to resist a totalitarian state you have to fight for those basic individual freedoms. But at the same time, a well-functioning society allows for that freedom to manifest as collaboration. And so that’s the difficult challenge there.
Lex Fridman
(00:26:52)
Again, that’s why he was a democratic socialist and the criticism of the book was against totalitarianism, of a centralized state that controls speech, thought, the press, and all the basic human freedoms. Controls truth. And I think a lot of people would ask the question, and I hear this tossed around: “Do we live in the world of 1984 today?” And I think that’s used as a shorthand to sort of criticize different policies and different governments. I generally don’t like the use of that kind of language because it’s basically crying wolf. If everything is 1984, if everybody is Hitler, then you’re not going to…
Lex Fridman
(00:27:35)
There’s no way to properly normalize the discussion of the lesser of two evils, which is ultimately what democracy is about. You have a collection of things you’re picking. They all kind of suck, but you want to pick the one that sucks the least. That’s human society, you know? That’s human nature. It’s messy. And so I don’t think we live in a 1984 state, but there’s a lot of elements that this book reveals about human nature and about the operation of a totalitarian state that we should be on the watch for. So surveillance, a state of doublethink, of controlling language, of being in a constant state of war as a way to control the population and the flow of resources.
Lex Fridman
(00:28:24)
All those things have elements of almost useful tools for the establishment of complete control of a populace. And the moment you notice those elements, it’s our job to resist those elements. So I think the point is we have to be vigilant to the slippery slope of the will to power in centralized institutions. Another thing I want to mention is that I think a lot of people rightfully compliment Orwell for predicting some of the elements of future society, especially with technological capabilities, for example, telescreens used by the state to control the population. Maybe I can make a few comments on technology in general.

Technology

Lex Fridman
(00:29:10)
People who criticize technology will often use 1984 as an example that technology is a tool for a totalitarian state. It’s a way they can achieve full control, and we should be extremely cautious of it. And I think there’s a kernel of truth to that. But it’s not obvious to me that on the whole, technology is a tool for totalitarian control. I think it is also a tool for freedom. The internet is an incredible tool for freedom. And so of course, we have to fight for that freedom, but I believe in general, the greater… Let’s just take the internet broadly as an example, and there’s a lot of sub-elements of that, and like a more platonic sense of what the internet is, which is digital interconnectivity.
Lex Fridman
(00:30:01)
We have to fight for freedom, but in general, the greater reach and access that the internet has, the more powerful the resistance to totalitarianism. Technology is a double-edged sword. It provides the tools for oppression and the tools for the ongoing fight for freedom. And as long as the will to fight arises in the human heart, technology, I think, helps humanity win. And of course, there’s been a lot of discussion about free speech and the freedom of thought, and there’s a lot to be said there that’s much more nuanced than the book 1984 provides. I think 1984 just shows the end, horrible conclusion of complete totalitarian control over speech, over thought, over feeling, over everything. But in general, my view of it is it’s a kind of inspiration to…
Lex Fridman
(00:30:57)
In order to prevent ourselves from slipping into an authoritarian or a totalitarian state, Orwellian type of dystopias—to avoid them, we have to value critical and independent thought. I think thought first, before speech. Just thought. I think you have to learn to think deeply from first principles, independent of whatever tribe you find yourselves in. Independent of government, independent of groups, independent of the people around you, the people you love, that love you. You have to learn, at least sometimes, to think independently. Now, this is the Nietzsche, “If you gaze long into the abyss, the abyss gazes into you.” If you think too independently, it can break your mind. I mean, we are social creatures. We need that connection.
Lex Fridman
(00:31:45)
But I think it’s like with Tom Waits: “I like my town with a little drop of poison.” I think of truly, deeply independent thought as a little drop of poison that’s necessary for your mind. For most of your life, you kind of assume most things around you are true, and that’s very useful. We stand on the shoulders of giants. But you, on a regular occasion, have to question. Question your assumptions, question your biases, question everything. Question the things you’ve taken for granted. Question what everybody’s telling you. But not too much. It’s a tricky balance, but the act of rebellion against a totalitarian state, against the slippery slope into that state, is that independent thought. And of course, speech is a manifestation of that thought.
Lex Fridman
(00:32:28)
So we have to avoid echo chambers in both thought and speech. Like I said, you have to question your assumptions, challenge your biases. I think that’s the way out. Or maybe that’s a resistance mechanism to slipping into authoritarianism. And maybe I have a few more things to say about the latter part of the book, the part where there’s torture—where there’s Room 101 that has the thing you fear the most, which is different for all of us, and for Winston, that’s rats. It makes you wonder what that thing is for each of us. I left a mental note for myself to do more research into the historical context, the psychology, the neuroscience, the effectiveness of torture. I think there’s probably a lot of really good work.
Lex Fridman
(00:33:22)
I had a brief conversation with Andrew Huberman on the phone about this topic. Andrew Huberman, the brilliant Andrew Huberman, host of the Huberman Lab podcast that you should listen to. And he mentioned to me that there’s a bunch of papers on these topics. This has been studied: the carrot and the stick, the ability of incentives and disincentives to control the perception and the mental state of people and animals. And he mentioned to me a few folks that I could talk to on a podcast about this topic, and a few books. So, I’ll definitely look into this more. I think 1984 uses torture as a philosophical description, as a caricature of the operation of a totalitarian state.
Lex Fridman
(00:34:08)
But at the same time, a lot of those elements were all done under Stalin in the Soviet Union, so it’s not like it’s very different or very far from reality. It’s very, very real. The question is about the actual effect it has on the human mind, which I really have to think about, because torture in this case breaks Winston. In fact, I’d like to believe that many people, in the most fundamental of ways, can’t be broken in this way. I’ve seen science… again, without extensively reading, so please correct me if I’m wrong. But I’ve seen science that shows that torture, for the purpose of intelligence gathering, is not effective.
Lex Fridman
(00:34:55)
It’s not effective to get accurate information because people will tell you anything, really, to stop the torture, stop the physical and the mental and the emotional suffering. But I think this book is about the use of torture to completely break your ability to think and to perceive the world. One of the things I talked to Andrew about is whether it’s possible to control perception through these kinds of things. And it seems that there is literature that shows it’s possible to literally change your perception of the world. Like in this case, in 1984, it’s when you’re holding up four fingers, can you actually make the person believe that you’re holding up five fingers?
Lex Fridman
(00:35:39)
Not because of some weird delusion or just because your vision is blurry, but literally, when you look at the hand holding up four fingers, what you see is five fingers. Not because your vision is poor. No, your visual cortex, the way you’re processing that information, something about the processing completely changes your perception. If I tell you there’s a straight line, can you, through incentive or disincentive, start seeing a crooked line or something like that? Anyway, I think that there’s literature that supports that, which is, by the way, terrifying. But the thing I’d like to research more is whether that can be long-lasting. I just don’t believe it can be.
Lex Fridman
(00:36:24)
If you’re not pushed to your death, yes, maybe perception, maybe your willingness to think, but your actual ability to think independent thoughts? Maybe you’re terrified. I understand if you’re terrified of any more thinking that leads to rebellious thoughts. Like the book mentions, the idea of facecrime, where you can reveal your thoughts, the inner workings of your mind, by the subtleties of your expressions in your face. And I think also, as the book puts it, “If you want to keep a secret, you must also hide it from yourself.” So I can understand that.
Lex Fridman
(00:37:11)
And maybe that is the basic mechanism that torture leads to: that your body, your mind learns to hide the truth from yourself. You don’t even allow yourself to think it, because you know if you think it, it’s going to lead to facecrime and thoughtcrime, and that’s going to lead to more torture. That’s possible. But I just can’t imagine the capacity for love in the human heart being extinguished through torture, finally extinguished. Temporarily, yes, but finally, irrecoverably, which I think is the basic claim of the book. That they break it. Because through the worst of the torture, Winston gives up Julia, the object of his love. He says some things like that—the fact that you said, “Torture her, not me.”
Lex Fridman
(00:38:15)
“Anything to make this stop,” the fact that you said that, the fact that you thought that, is a statement, is a thought you can’t walk back to yourself. So it’s irrecoverable. You just destroyed your faith in love? I don’t think so. I think it’s possible we have to remember that this is one particular character. This is one particular story. I think there’s a lot of people in which the capacity to love cannot be broken, no matter the torture. But that’s an interesting scientific question, but it’s also a human question. I think Man’s Search for Meaning—there’s a lot of books that explore those kinds of questions. In the worst of conditions that humans had to suffer through, what still persists? What is the source of meaning?
Lex Fridman
(00:39:03)
And I just think that the flame of love persists through atrocities, through torture, through suffering, through all of it. But the claim of the book is that yes, a totalitarian state can use torture to break even that, even that which leads to the only love you’re allowed to have, which is the love for Big Brother. So I think, practically speaking, from the Party’s perspective, the point of O’Brien’s torture of Winston was to suffocate the hope in his mind and heart, so there is no hope, by completely destroying his knowledge of what is and isn’t true, and by betrayal.
Lex Fridman
(00:39:53)
And Goldstein’s book about the society, not knowing if that’s true, not knowing anything about Julia: it’s basically having no emotional or intellectual ground to stand on. It’s very difficult to have a sense of where you are. To have hope, you have to have a sense of where you are and where things could be. And then you also betray yourself. Forcing you to be a hypocrite about your own deepest feelings of love basically puts you in a place where there’s no hope, there’s no point. It’s apathy. It’s nihilism. And a hardworking member of society who is nihilistic is probably what the Party wants, because that human will not rebel.
Lex Fridman
(00:40:42)
But on the point of hope, I should mention that there’s a kind of long-running theory that since the appendix… The appendix is about the details of Newspeak, the language that the Party is creating and enforcing. Because that appendix was written in the past tense, and it’s talking about Newspeak in the past tense and it’s written in English, sort of non-Newspeak, that means the Party and Newspeak and all of its elements that we see in the story are in the past. That the world from which the book is created has escaped that. And that’s a message of hope. That whatever the rebellion against the Party—whether it’s passionate lust and sex, whether it’s love, whether it’s seeking truth in a world full of lies—whatever it is, there’s a way out.
Lex Fridman
(00:41:40)
Again, to me, the way out is love. But that’s a hopeful message in this dystopian novel, that even these perfectly executed totalitarian states will fall. I took a few random notes here that maybe I’ll comment on. I wrote a quote: “The masses cannot rebel until they become conscious.” That might be either a Winston observation or an O’Brien statement. I’m not sure. But yeah, so you have to remember, 80-plus percent are proles, the working class. They have the power if they want it, but they don’t want it. They don’t want to take it. That’s the whole point of the totalitarian state: to break your will for freedom, your desire for freedom, break your ability to know that you’re not free.
Lex Fridman
(00:42:28)
And that’s where all of it comes into play: the changing of history, the doublethink, the thoughtcrime, the torture and the Ministry of Love. All of that is about preventing the populace from becoming conscious. And again, as per the cells discussion earlier, I wrote down the O’Brien quote: “The death of the individual is not death. The Party is immortal.” And this is just an interesting observation about the operation of a totalitarian state: the idea, a kind of amorphous symbol of a messianic figure in Big Brother, is all you need for the Party to persist. That person doesn’t actually have to exist. Any one individual doesn’t have to exist.
Lex Fridman
(00:43:18)
It’s just the division of society into high, middle, and low, and the oppression of the low by the high, by the centralized Inner Party. That’s all you need, and the individual does not matter in that. And again, the way to fight that is to fight for individual freedoms. An interesting side note is just a quote I wrote down from Julia, I think: “If you keep the small rules, you can break the big ones.” And so she, in the book, is somebody who follows all the rules of the Party to a T. She attends all the committee meetings and all that kind of stuff, and is just like the model citizen from the perspective of the Party. And so that allows her to break the big rules, like having passionate sex with people—the really…
Lex Fridman
(00:44:11)
or falling in love, all the forbidden things. And I think that’s actually a good way to exist in the world. I think for a lot of us, there’s probably a bunch of things that bother us in the local world around us, in the bigger world, and I think you have to pick your battles. You have to not get lost in the muck of small battles if you want to have at least one or a few big victories in your life that make for a better world. I think, at least in my sense, it’s easy to get distracted by the little things that bother you in life.
Lex Fridman
(00:44:49)
And I think staying focused on the big things, again, picking your battles, and staying with that for as long as possible, working your ass off to solve one problem for as long as possible, not giving up against impossible odds, against all the criticism—that’s the way to solve those big problems. And of course, that’s not what Julia is talking about. But in a sense, she is also, because in that particular case, a totalitarian state is the problem. And the way to rebel is to plant that seed of rebellion in each of the people she has sex with: that we are human, that we have lust for each other, that we have the ability to love each other, and that is the necessary act of rebellion there.
Lex Fridman
(00:45:36)
That is the big leap for her, at least in that kind of society. I should also mention that there are a lot of interpretations of the different small and big things in this book. So it’s very possible in the case of Julia that Winston was played. He was set up with Julia. He was set up to feel all those things. He was set up to have that little secret alcove where he could sit at the desk, write in the diary, and dream of rebelling against the state, dreaming of the Brotherhood. It’s unclear to me why an oppressive state would want people to have that little journey of desiring freedom in all its manifestations. I’m not sure.
Lex Fridman
(00:46:26)
But maybe O’Brien’s statement that the purpose of torture is torture holds some wisdom: that to attain absolute power, you also have to have a willingness and a mechanism to inflict absolute suffering on the populace. And maybe this is a way to maximize suffering: to give them hope before you crush it. Again, the way out to me and the takeaway from this book—the way out is love. Perhaps this is a good place to also mention a fun little controversy that unfolded on Twitter. So, quickly, before heading off to a New Year’s party, I posted a reading list of books that I hope to read in 2023. These are based on books that I asked people to vote on; these are many of the ones they selected.

Reading list controversy

Lex Fridman
(00:47:42)
And they happened to be many of the books I’ve read many times throughout my life and really enjoyed, and they were like old friends that I love visiting and revisiting. Every time I read them, I get something new, and they just read differently throughout life. You know, the way I read The Stranger by Camus in my teens is very different from the way I read it in my 20s, and different again in my 30s. I’ll say my favorite book now by Camus is probably The Plague, and all of that has evolved. With Dostoevsky, I read The Idiot several times. I read The Brothers Karamazov both in English and Russian, and Notes from Underground. I mean, I love Dostoevsky. And a lot of these books are just…
Lex Fridman
(00:48:24)
Yes, they are classics, but they’re also deeply profound and they move me on an intellectual level, but also just as a human being. They’re like travel companions. They’re like old friends. Old dead friends. So yeah, I wanted to celebrate my love for books. And it was very strange to me that—and if I’m just being honest for a second, it was kind of painful that some prominent figures that I respect were kind of cruel about the list. They responded and they mocked it and all that kind of stuff, basically taking the worst possible interpretation. I have to be honest and say it wasn’t fun, because it was just a silly kid—me—kind of in a joyful New Year’s mood, sharing with the world books I love.
Lex Fridman
(00:49:32)
And I think what was happening—and this seems to be happening a bit more—is there’s a bunch of people that are just almost waiting or hoping that I fail, or maybe that I’m some kind of bad human being. They’re looking, they’re trying to discover things about me that reveal that I’m a bad human being, and maybe somehow this reading list reveals that. I don’t know. So, one criticism was that everybody read these books in school, and they’re basic. I think my response to that criticism is: no. First of all, most people have not read them in school; maybe they read CliffsNotes. And they’re not basic; they’re deeply profound, some of the greatest words ever written.
Lex Fridman
(00:50:26)
But also, I don’t think I’ve ever gotten a lot from books I was forced to read in school when I had to read them for an assignment. Some of these books I think I read in school, but most of them not. It’s only when I read them outside of school, of my own volition, that I really gained a lot from them, and especially throughout my life at regular times—as a teenager, in my 20s, and in my 30s. So, no. These books are profound and deserve returning to. Like I said, they are old friends that give me a lot of meaning every time I revisit the ideas, and they give me a new perspective on life. Another criticism was very nitpicky. The list was put together really quickly, and the goal—I like setting tough goals.
Lex Fridman
(00:51:14)
The goal was to read a book a week. And, you know, on one week I had The Little Prince followed by The Brothers Karamazov. And people criticized that: “How can you possibly read The Brothers Karamazov in one week?” Maybe I won’t. Maybe I’ll fail miserably. But I love trying. But that wasn’t actually the goal. I should’ve said I intend to finish reading it by the end of that week. So, you start earlier because The Little Prince takes an hour or two to read. And then for The Brothers Karamazov, I could have the two weeks. It should take about 30, 40, or 50 hours to read it. That said, friends, I’ve read it already in English and in Russian.
Lex Fridman
(00:51:59)
I’m interviewing the world-famous, amazing translators of The Brothers Karamazov, of Dostoevsky, and of Tolstoy—Richard Pevear and Larissa Volokhonsky—probably across multiple days. So, this book means a lot to me. I’m not somebody just kind of rolling in, “What are the cool kids reading these days?” These books have been lifelong companions to me. And the fact that people just want to stomp on that—and a large number of people did, people I respect—yeah, I’d be lying if I said it didn’t suck a bit. Anyway, the love for reading persists. I have to say, after that, I was very hesitant to even make this particular video on Orwell, on 1984. And I’m not sure I want to be public with my reading after this.
Lex Fridman
(00:52:59)
And I know a lot of people will say, “No, we’re here with you.” They’re very supportive, and I love you. I mean, I meet so many incredible people, but the reality is it just does suck to be vulnerable and share something with the world and receive that kind of mockery at scale. So I will definitely—I will not be affected or broken by any of that kind of stuff for something that’s actually meaningful, like the conversations—some of the very difficult conversations I’m going to do. But for a silly side hobby thing of reading that I do throughout my life to be a source of mockery, I’m just going to do that privately. So, I’m a little torn on that, and I’ll try to figure out a way.
Lex Fridman
(00:53:41)
Also, I should say that that list, like a lot of things, is kind of aspirational, because if I take a job at a tech company, or if I start a tech company, or if I have to travel for extremely difficult conversations and really have to prepare for them—all that kind of stuff is going to affect my ability to both read and enjoy reading, which I think is a prerequisite for this kind of reading. But in general, what I do is I read about one hour a day on Kindle—on the physical device, with my eyes. And depending on the workout I do and the chores I have, it’s going to be about two hours of audiobooks. So, most of what I do during chores is audiobooks.
Lex Fridman
(00:54:30)
And when I run—and I usually run about 10 to 15 miles, so you’re talking about—I often run over two hours. It’s like a slow pace. When the days are not insane, it gives me a chance to think and a chance to listen to audiobooks, so I love that process. It’s an escape from the world, a chance for me to collect my thoughts. And yeah, it’s again a source of happiness and joy, and I wanted to share that. I think you can get quite a lot of reading done through that process, especially if it’s a book you’ve read before. It is very challenging to do this kind of takeaway video, or to concretize your thoughts down on paper, especially when you have to present them in this kind of way.
Lex Fridman
(00:55:12)
I’m not sure I’m going to do that much, because it’s an extra bit of effort. But it’s also a chance to share that joy with the world, and to find cool people that also enjoy it. So it’s a trade-off. Anyway, it’s just a temporary thing, but it did suck for a short amount of time—for a few hours, for a couple of days. But in general, I’ll persist with my love of reading. I might not talk about it publicly as much. But again, let me emphasize that this kind of response and mockery will not affect anything of importance that I do. I try to read comments; I try to see criticism. I really value especially high-effort criticism. I try to grow and constantly try to improve.
Lex Fridman
(00:56:05)
But that’s for things that I take very seriously, like the podcast conversations that I do. But for silly things, like book lists, Spotify music playlists, the food I like to eat—I don’t know, anything, any fun side thing—it’s not that important. If it’s something that others don’t enjoy, then whatever. I’ll enjoy them probably with my friends locally here, or the people I meet. So, anyway, I love reading. I love reading classics. I love returning to old friends in book form, and making new ones.
Lex Fridman
(00:56:49)
There’s a bunch of science fiction that I embarrassingly have not read and would love to, because those worlds are so meaningful to so many of the people I’m friends with that I can’t wait to visit those worlds and sort of make new friends in the form of books. So, definitely the love for books, the love for reading persists. And if you share in that love, that’s beautiful. So thank you for joining me on this journey. Thank you for watching this silly little video. And I hope to see you next time. Love you all.

Transcript for State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490

This is a transcript of Lex Fridman Podcast #490 with Nathan Lambert & Sebastian Raschka.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.
Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Introduction

Lex Fridman
(00:00:00)
The following is a conversation all about the state of the art in artificial intelligence, including some of the exciting technical breakthroughs and developments in AI that happened over the past year, and some of the interesting things we think might happen this upcoming year. At times, it does get super technical, but we do try to make sure that it remains accessible to folks outside the field without ever dumbing it down. It is a great honor and pleasure to be able to do this kind of episode with two of my favorite people in the AI community, Sebastian Raschka and Nathan Lambert. They are both widely respected machine learning researchers and engineers who also happen to be great communicators, educators, writers, and X posters.
Lex Fridman
(00:00:51)
Sebastian is the author of two books I highly recommend for beginners and experts alike: Build a Large Language Model from Scratch and Build a Reasoning Model from Scratch. I truly believe that in the machine learning and computer science world, the best way to learn and understand something is to build it yourself from scratch. Nathan is the post-training lead at the Allen Institute for AI, and author of the definitive book on reinforcement learning from human feedback. Both of them have great X accounts, great Substacks. Sebastian has courses on YouTube, Nathan has a podcast. And everyone should absolutely follow all of those. This is the Lex Fridman podcast.
Lex Fridman
(00:01:40)
To support it, please check out our sponsors in the description, where you can also find links to contact me, ask questions, get feedback, and so on. And now, dear friends, here’s Sebastian Raschka and Nathan Lambert.

China vs US: Who wins the AI race?

Lex Fridman
(00:01:57)
So I think one useful lens to look at all this through is the so-called DeepSeek moment. This happened about a year ago, in January 2025, when the open weight Chinese company DeepSeek released DeepSeek R1, which, I think it’s fair to say, surprised everyone with near or at state-of-the-art performance, with allegedly much less compute for much cheaper. And from then to today, the AI competition has gotten insane, both on the research level and the product level. It’s just been accelerating.
Lex Fridman
(00:02:32)
Let’s discuss all of this today, and maybe let’s start with some spicy questions if we can. Who’s winning at the international level? Would you say it’s the set of companies in China or the set of companies in the United States? Sebastian, Nathan, it’s good to see you guys. So Sebastian, who do you think is winning?
Sebastian Raschka
(00:02:53)
So winning is a very broad term. I would say you mentioned the DeepSeek moment, and I do think DeepSeek is definitely winning the hearts of the people who work on open weight models because they share these as open models. Winning, I think, has multiple timescales to it. We have today, we have next year, we have in ten years. One thing I feel sure about is that nowadays, in 2026, there won’t be any company with access to a technology that no other company has access to. And that is mainly because researchers are frequently changing jobs, changing labs. They rotate. So I don’t think there will be a clear winner in terms of technology access.
Sebastian Raschka
(00:03:37)
However, I do think the differentiating factor will be budget and hardware constraints. I don’t think the ideas will be proprietary, but rather the resources that are needed to implement them. And so I don’t currently see a winner-takes-all scenario. I can’t see that at the moment.
Lex Fridman
(00:03:59)
Nathan, what do you think?
Nathan Lambert
(00:04:00)
You see the labs put different energy into what they’re trying to do. To demarcate the point in time when we’re recording this, the hype over Anthropic’s Claude Opus 4.5 model has been absolutely insane. I’ve used it and built stuff in the last few weeks, and it’s almost gotten to the point where it feels like a bit of a meme in terms of the hype. It’s kind of funny because this is very organic, and then if we go back a few months ago, Gemini 3 from Google got released, and it seemed like the marketing and wow factor of that release was super high. But then at the end of November, Claude Opus 4.5 was released and the hype has been growing, while Gemini 3 was before this.
Nathan Lambert
(00:04:44)
And it kind of feels like people don’t really talk about it as much, even though when it came out, everybody was like, this is Gemini’s moment to retake Google’s structural advantages in AI. Gemini 3 is a fantastic model, and I still use it. It’s just that differentiation is lower. I agree with what you’re saying, Sebastian, that the idea space is very fluid, but culturally Anthropic is known for betting very hard on code, and this Claude Code thing is working out for them right now. So I think that even if the ideas flow pretty freely, so much of this is bottlenecked by human effort and the culture of organizations, where Anthropic seems to at least be presenting as the least chaotic.
Nathan Lambert
(00:05:23)
It’s a bit of an advantage if they can keep doing that for a while. But on the other side of things, there’s a lot of ominous technology from China where there are way more labs than DeepSeek. DeepSeek kicked off a movement within China similar to how ChatGPT kicked off a movement in the US where everything had a chatbot. There are now tons of tech companies in China that are releasing very strong frontier open weight models, to the point where I would say that DeepSeek is kind of losing its crown as the preeminent open model maker in China, and the likes of Z.ai with their GLM models, MiniMax’s models, and Kimi K2 Thinking from Moonshot, especially in the last few months, have shone more brightly.
Nathan Lambert
(00:06:04)
The new DeepSeek models are still very strong, but 2025 could be looked back on as a big narrative point, where DeepSeek came and provided a platform for way more Chinese companies to release these fantastic models and have this new type of operation. These models from these Chinese companies are open weight, and depending on this trajectory, the business models of these American companies could be at risk. But currently, a lot of people are paying for AI software in the US, and historically in China and other parts of the world, people don’t pay a lot for software.
Lex Fridman
(00:06:37)
So some of these models like DeepSeek have the love of the people because they are open weight. How long do you think the Chinese companies keep releasing open weight models?
Nathan Lambert
(00:06:47)
I would say for a few years. I think that, like in the US, there’s not a clear business model for it. I have been writing about open models for a while, and these Chinese companies have realized it. I get inbound from some of them. They’re smart and realize the same constraints, which is that a lot of top US tech companies and other IT companies won’t pay for an API subscription to Chinese companies over security concerns. This has been a long-standing habit in tech, and the people at these companies then see open weight models as a way to influence and take part in a huge, growing AI expenditure market in the US. They’re very realistic about this, and it’s working for them.
Nathan Lambert
(00:07:24)
And I think the government will see that that is building a lot of influence internationally in terms of uptake of the technology, so there’s going to be a lot of incentives to keep it going. But building these models and doing the research is very expensive, so at some point, I expect consolidation. But I don’t expect that to be a story of 2026; there will be more open model builders throughout 2026 than there were in 2025. And a lot of the notable ones will be in China.
Lex Fridman
(00:07:50)
You were going to say something?
Sebastian Raschka
(00:07:51)
Yes. You mentioned DeepSeek losing its crown. I do think to some extent, yes, but we also have to consider that they are still slightly ahead. It’s not that DeepSeek got worse, it’s just like the other ones are using the ideas from DeepSeek. For example, you mentioned Kimi, same architecture, they’re training it. And then again, we have this leapfrogging where they might be at some point in time a bit better because they have the more recent model. I think this comes back to the fact that there won’t be a clear winner. One person releases something, the other one comes in, and the most recent model is probably always the best model.
Nathan Lambert
(00:08:30)
Yeah. We’ll also see the Chinese companies have different incentives. DeepSeek is very secretive, whereas some of these startups, like the MiniMaxes and Z.ais of the world, are not. Those two literally have filed IPO paperwork, and they’re trying to get Western mindshare and do a lot of outreach there. So I don’t know if these incentives will change the model development, because DeepSeek famously is built by a hedge fund, High-Flyer, and we don’t know exactly what they use the models for or if they care about this.
Lex Fridman
(00:08:59)
They’re secretive in terms of communication, but they’re not secretive in terms of the technical reports that describe how their models work. They’re still open on that front. And we should also say on the Claude Opus 4.5 hype, there’s the layer of something being the darling of the X echo chamber, the Twitter echo chamber, and the actual amount of people that are using the model. I think it’s probably fair to say that ChatGPT and Gemini are focused on the broad user base that just wants to solve problems in their daily lives, and that user base is gigantic. So the hype about the coding may not be representative of the actual use.
Sebastian Raschka
(00:09:38)
I would say also a lot of the usage patterns are name recognition and brand, but also almost muscle memory, where ChatGPT has been around for a long time. People just got used to using it, and it’s almost like a flywheel where they recommend it to other users. One interesting point is also the customization of LLMs. For example, ChatGPT has a memory feature. So you may have a subscription and you use it for personal stuff, but I don’t know if you want to use that same thing at work because there is a boundary between private and work. If you’re working at a company, they might not allow that or you may not want that.
Sebastian Raschka
(00:10:16)
And I think that’s also an interesting point where you might have multiple subscriptions. One is just clean code; it has nothing of your personal images or hobby projects in there. It’s just for work. And then the other one is your personal thing. I think the future involves multiple models for different use cases. It doesn’t mean you only have to have one.

ChatGPT vs Claude vs Gemini vs Grok: Who is winning?

Lex Fridman
(00:10:38)
What model do you think won 2025, and what model do you think is going to win ’26?
Nathan Lambert
(00:10:43)
I think in the context of consumer chatbots, the question is: are you willing to bet on Gemini over ChatGPT? Which I would say in my gut feels like a bit of a risky bet because OpenAI has been the incumbent and there are so many benefits to that in tech. I think the momentum in 2025 was on Gemini’s side, but they were starting from such a low point. RIP Bard and those earlier attempts. I think huge credit to them for powering through the organizational chaos to make that happen. But also it’s hard to bet against OpenAI because they always come off as so chaotic, but they’re very good at landing things.
Nathan Lambert
(00:11:26)
Personally, I have very mixed reviews of GPT-5, but it must have saved them so much money, with the headline feature being a router, so most users are no longer incurring as much GPU cost. So I think it’s very hard to dissociate the things that I like out of models versus the things that are actually going to be a general public differentiator.
Lex Fridman
(00:11:50)
What do you think about 2026? Who’s going to win?
Nathan Lambert
(00:11:52)
I’ll say something, even though it’s risky. I think Gemini will continue to gain ground on ChatGPT. Google has the scale when both of these are operating at such extreme scales, and Google has the ability to separate research and product a bit better, whereas you hear so much about OpenAI being chaotic operationally and chasing the high-impact thing, which is a very startup culture. Then on the software and enterprise side, I think Anthropic will have continued success as they’ve again and again been set up for that. Obviously Google Cloud has a lot of offerings, but I think this Gemini name brand is important for them to build.
Nathan Lambert
(00:12:28)
Google Cloud will continue to do well, but that’s a more complex thing to explain in the ecosystem because that’s competing with the likes of Azure and AWS rather than on the model provider side.
Lex Fridman
(00:12:40)
So in infrastructure, you think TPUs give them an advantage?
Nathan Lambert
(00:12:45)
Largely because the margin on NVIDIA chips is insane and Google can develop everything from top to bottom to fit their stack and not have to pay this margin, and they’ve had a head start in building data centers. So for all of these things with both high lead times and high costs at steep margins, Google has a kind of historical advantage. And if there’s going to be a new paradigm, it’s most likely to come from OpenAI. Their research division again and again has shown this ability to land a new research idea or a product. Like Deep Research, Sora, o1 thinking models—all these definitional things have come from OpenAI, and that’s got to be one of their top traits as an organization.
Nathan Lambert
(00:13:28)
So it’s kind of hard to bet against that, but I think a lot of this year will be about scale and optimizing what could be described as low-hanging fruit in models.
Lex Fridman
(00:13:37)
And clearly there’s a trade-off between intelligence and speed. This is what GPT-5 was trying to solve behind the scenes. It’s like, do people actually want intelligence, the broad public, or do they want speed?
Sebastian Raschka
(00:13:52)
I think it’s a nice variety actually, or the option to have a toggle there. For my personal usage, most of the time when I look something up, I use ChatGPT to ask a quick question and get the information I wanted fast. For most daily tasks, I use the quick model. Nowadays, I think the auto mode is pretty good where you don’t have to specifically say “thinking” or “non-thinking.” Then again, I also sometimes want the pro mode. Very often, when I have something written, I put it into ChatGPT and say, “Hey, do a very thorough check. Are all my references correct? Are all my thoughts correct? Did I make any formatting mistakes? Are the figure numbers wrong?” or something like that. And I don’t need that right away.
Sebastian Raschka
(00:14:33)
I can finish my stuff, maybe have dinner, let it run, come back and go through it. This is where I think it’s important to have this option. I would go crazy if for each query I had to wait 30 minutes, or even 10 minutes.
Nathan Lambert
(00:14:46)
That’s me. I’m sitting over here losing my mind that you use the router and the non-thinking model. I’m like, “How do you live with that?”
Nathan Lambert
(00:14:55)
That’s like my reaction. I’ve been heavily on ChatGPT for a while. I never touched GPT-5 non-thinking. I find it just… its tone, and then its propensity for errors. It just has a higher likelihood of errors. Some of this is from back when OpenAI released o3, which was the first model to do this Deep Research and find many sources and integrate them for you. So I became habituated with that. I will only use GPT-5.2 thinking or pro when I’m running any sort of information query for work, whether that’s a paper or some code reference. I will regularly have five pro queries going simultaneously, each looking for one specific paper or feedback on an equation.
Sebastian Raschka
(00:15:38)
I have a fun example where I just needed the answer as fast as possible for this podcast before I was going on the trip. I have a local GPU running at home and I wanted to run a long RL experiment. Usually I unplug things because if you’re not at home, you don’t want to have things plugged in, and I accidentally unplugged the GPU. My wife was already in the car and it was like, “Oh dang.” Basically, I wanted a Bash script as fast as possible that runs my different experiments and the evaluation. I know how to use the Bash terminal, but in that moment I just needed the command in 10 seconds.
Lex Fridman
(00:16:18)
This is a hilarious situation but yeah, so what did you use?
Sebastian Raschka
(00:16:21)
So I did the non-thinking fastest model. It gave me the Bash command. I wanted to chain different scripts to each other and route this to a log file with the `tee` command. I could have worked it out off the top of my head, but I was just in a hurry.
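For readers, the pattern described here, chaining scripts with `&&` and mirroring everything to a log file with `tee`, might look something like the sketch below. The `echo` commands are placeholders standing in for the actual training and evaluation scripts, which weren't shown in the conversation.

```shell
# Chain the steps with && so the pipeline stops if any step fails,
# and mirror combined stdout/stderr to both the terminal and a log file.
{
  echo "experiment 1 done" &&
  echo "experiment 2 done" &&
  echo "evaluation done"
} 2>&1 | tee experiments.log
```

In the real version each placeholder would be a `python` invocation, and something like `nohup` or `tmux` would keep the chain running after you leave the machine.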
Lex Fridman
(00:16:37)
By the way, I don’t know if there’s a more representative case: wife waiting in the car, you have to run, unplug the GPU, you have to generate a Bash script. This sounds like a movie… Mission Impossible.
Nathan Lambert
(00:16:46)
I use Gemini for that. I use thinking for all the information stuff and then Gemini for fast things or stuff that I could sometimes Google. It’s good at explaining things and I trust that it has this background of knowledge and it’s simple. And the Gemini app has gotten a lot better.
Nathan Lambert
(00:17:01)
It’s good for those sorts of things. And then for code and any sort of philosophical discussion, I use Claude Opus 4.5, also always with extended thinking. Extended thinking and inference-time scaling is just a way to make the models marginally smarter. I will always err on that side when the progress is very high because you don’t know when that’ll unlock a new use case. And then I sometimes use Grok for real-time information or finding something on AI Twitter that I knew I saw and I need to dig up. Although when Grok 4 came out, the Grok 4 Heavy—which was their pro variant—was actually very good and I was pretty impressed with it, and then I just kind of lost track of it with muscle memory from having the ChatGPT app open. So I use many different things.
Lex Fridman
(00:17:45)
Yeah. I actually do use Grok 4 Heavy for debugging. For hardcore debugging that the other ones can’t solve, I find that it’s the best at. And it’s interesting because you say ChatGPT is the best interface. For me, for that same reason—but this could be just momentum— Gemini is the better interface for me. I think because I fell in love with their needle-in-the-haystack capabilities. If I ever put in something that has a lot of context but I’m looking for very specific information to make sure it tracks all of it, I find Gemini has been the best. So it’s funny with some of these models, if they win your heart over—
Lex Fridman
(00:18:28)
…for one particular feature on a particular day, for that particular query or prompt, you’re like, “This model’s better.” And so you’ll just stick with it for a bit until it does something really dumb. There’s like a threshold effect. Some smart thing happens and then you fall in love with it, and then it does some dumb thing and you’re like, “You know what? I’m gonna switch and try Claude or ChatGPT.” And all that kind of stuff.
Sebastian Raschka
(00:18:51)
This is exactly it. You use it until it breaks, until you have a problem, and then you change the LLM. I think it’s the same way we use anything, like our favorite text editor, operating system, or browser. I mean, there are so many browser options: Safari, Firefox, Chrome. They’re relatively similar, but then there are edge cases, maybe extensions you want to use, and then you switch. But I don’t think anyone types the same thing into different browsers and compares them. You only do that when the website doesn’t render or if something breaks. So that’s a good point. You use it until it breaks, and then you explore other options.
Nathan Lambert
(00:19:28)
On the long context thing, I was also a Gemini user for this, but the GPT-5.2 release blog had crazy long context scores, where a lot of people were like, “Did they just figure out some algorithmic change?” It went from like 30% to like 70% or something in this minor model update. So it’s also very hard to keep track of all of these things, but now I look more favorably at GPT-5.2’s long context. So it’s just kind of like a never-ending battle to actually get to testing this.
Lex Fridman
(00:19:57)
Well, it’s interesting that none of us talked about the Chinese models from a user perspective. What does that say? Does that mean the Chinese models are not as good, or does that mean we’re just very biased and US-focused?
Sebastian Raschka
(00:20:11)
I do think that’s currently the discrepancy between the model and the platform. I think the open models are more known for the open weights, not their platform yet.
Nathan Lambert
(00:20:21)
There are also a lot of companies that are willing to sell you open-model inference at a very low cost. I think, like OpenRouter, it’s easy to look at multi-model things. You can run DeepSeek on Perplexity. I think all of us sitting here are like, “We use OpenAI GPT-5 Pro consistently.” We’re all willing to pay for the marginal—
Nathan Lambert
(00:20:39)
…intelligence gain. And these models from the US are better in terms of the outputs. I think the question is, will they stay better for this year and for years going forward? But so long as they’re better, I’m going to pay for them. I think there’s also analysis that shows that the way the Chinese models are served—which you could argue is due to export controls or not—is that they use fewer GPUs per replica, which makes them slower and leads to different errors. It’s about speed and intelligence.
Nathan Lambert
(00:21:09)
If these things are in your favor as a user, I think in the US a lot of users will go for this. I think that is one thing that will spur these Chinese companies to want to compete in other ways, whether it’s free or substantially lower costs, or it’ll breed creativity in terms of offerings, which is good for the ecosystem. But I just think the simple thing is the US models are currently better, and we use them. I tried these other open models, and I’m like, “Fun, but I’m not gonna… I don’t go back to it.”

Best AI for coding

Lex Fridman
(00:21:38)
We didn’t really mention programming. That’s another use case that a lot of people deeply care about. I use basically half-and-half Cursor and Claude Code, because I find them to be fundamentally different experiences and both useful. You program quite a bit— …so what do you use? What’s the current vibe?
Sebastian Raschka
(00:21:59)
So, I use the Codeium plugin for VS Code. You know, it’s very convenient. It’s just a plugin, and then it’s a chat interface that has access to your repository. I know that Claude Code is a bit different. It is a bit more agentic. It touches more things; it does the whole project for you. I’m not quite there yet where I’m comfortable with that because maybe I’m a control freak, but I still like to see what’s going on. Codeium is the sweet spot for me right now where it is helping me, but it is not taking over completely.
Lex Fridman
(00:22:29)
I should mention, one of the reasons I do use Claude Code is to build the skill of programming with English. The experience is fundamentally different. In Cursor, if that’s the IDE you use, you’re micromanaging the details of the generation and looking at the diff, so you’re understanding the code deeply as you progress. With Claude Code, you’re just thinking in this design space and guiding it at a macro level. I think that is another way of thinking about the programming process. Also, Claude Code just seems to be a better utilization of Claude Opus 4.5.
Nathan Lambert
(00:23:18)
It’s a good side-by-side for people to do. You can have Claude Code open, you can have Cursor open, you can have VS Code open, and you can select the same models on all of them— …and ask questions, and it’s very interesting. Claude Code is way better in that domain. It’s remarkable.
Lex Fridman
(00:23:32)
All right, we should say that both of you are legit on multiple fronts: researchers, programmers, educators, and on the book front, too. Nathan, at some point soon, hopefully has an RLHF book coming out.
Nathan Lambert
(00:23:50)
It’s available for preorder, and there’s a full digital preprint. I’m just making it pretty and better organized for the physical thing, which is a lot of why I do it—it’s fun to create things that you think are excellent in physical form when so much of our life is digital.
Lex Fridman
(00:24:05)
I should say, going to Perplexity here, Sebastian Raschka is a machine learning researcher and author known for several influential books. A couple I wanted to mention, and highly recommend: Build a Large Language Model From Scratch, and the new one, Build a Reasoning Model From Scratch. I’m really excited about that. Building stuff from scratch is one of the most powerful ways of learning.
Sebastian Raschka
(00:24:27)
Honestly, building an LLM from scratch is a lot of fun and a lot to learn. Like you said, it’s probably the best way to learn how something really works, because you can look at figures, but figures can have mistakes. You can look at conceptual explanations, but you might misunderstand them. But if there is code and the code works, you know it’s correct. There’s no misunderstanding; it’s precise. Otherwise, it wouldn’t work. I think that’s the beauty behind coding. It doesn’t lie. It’s math, basically. Even with math, you can have mistakes in a book you would never notice because you aren’t running the math while reading, so you can’t verify it. And with code, what’s nice is you can verify it.
Lex Fridman
(00:25:09)
Yeah, I agree with you about the Build a Large Language Model From Scratch book. It’s nice to tune out everything else, the internet and so on, and just focus on the book. But, you know, compared to history books, it’s just less lonely somehow. It’s really more fun. For example, on the programming front, I think it’s genuinely more fun to program with an LLM. And I think it’s genuinely more fun to read with an LLM. But you’re right. This distraction should be minimized. So you use the LLM to basically enrich the experience, maybe add more context. Maybe I just… the rate of ‘aha’ moments for me on a small scale is really high with LLMs.
Sebastian Raschka
(00:25:54)
100%. I also want to correct myself: I’m not suggesting not to use LLMs. I suggest doing it in multiple passes. Like, one pass just offline, focus mode, and then after that… I mean, I also take notes, but I try to resist the urge to immediately look things up. I do a second pass. For me, it’s just more structured this way and I get less… I mean, sometimes things are answered in the chapter, but also it just helps to let it sink in and think about it. Other people have different preferences. I would highly recommend using LLMs when reading books. For me, it’s just not the first thing to do; it’s the second pass.
Lex Fridman
(00:26:29)
By way of recommendation, I do the opposite. I like to use the LLM at the beginning— …to lay out the full context of what is this world that I’m now stepping into. But I try to avoid clicking out of the LLM into the world of Twitter and blogs because then you’re down this rabbit hole. You’re reading somebody’s opinion, there’s a flame war about a particular topic, and all of a sudden you’re now in the realm of the internet and Reddit and so on. But if you’re purely letting the LLM give you the context of why this matters, what are the big picture ideas… sometimes books themselves are good at doing that, but not always.
Nathan Lambert
(00:27:12)
This is why I like the ChatGPT app, because it gives the AI a home in your computer where you can focus on it, rather than just being another tab in my mess of internet options. And I think Claude Code in particular does a good job of making that a joy. It seems very engaging as a product design, an interface from which your AI then goes out into the world. There’s something very intangible between it and Codex; it just feels warm and engaging, whereas Codex from OpenAI can often be as good, but it just feels a little bit rough around the edges.
Nathan Lambert
(00:27:45)
Whereas Claude Code makes it fun to build things, particularly from scratch where you trust that it’ll make something. Obviously this is good for websites and refreshing tooling, which I use it for, or data analysis. On my blog, we scrape Hugging Face so we keep the download numbers for every dataset and model over time now. Claude was just like, “Yeah, I’ve made use of that data, no problem.” And I was like, “That would’ve taken me days.” And then I have enough situational awareness to be like, “Okay, these trends obviously make sense,” and you can check things. But that’s just a wonderful interface where you can have an intermediary and not have to do the awful low-level work that you would have to do to maintain different web projects.

Open Source vs Closed Source LLMs

Lex Fridman
(00:28:29)
All right. So we just talked about a bunch of the closed-weight models. Let’s talk about the open ones. Tell me about the landscape of open-weight LLMs. Which are interesting ones? Which stand out to you and why? We already mentioned DeepSeek.
Nathan Lambert
(00:28:44)
Do you wanna see how many we can name off the top of our head?
Lex Fridman
(00:28:47)
Yeah, yeah. Without looking at notes.
Nathan Lambert
(00:28:48)
DeepSeek, Kimi, MiniMax, Z.ai, Antlang. We’re just going Chinese.
Sebastian Raschka
(00:28:57)
Let’s throw in Mistral AI, Gemma— …gpt-oss, the open source model by OpenAI. Actually, NVIDIA had a really cool one, Nemotron 3. There’s a lot of stuff, especially at the end of the year. Qwen might be the one—
Nathan Lambert
(00:29:12)
Oh, yeah. Qwen was the obvious name I was gonna say. I was trying to get through… you can get at least 10 Chinese and at least 10 Western. I mean, OpenAI released their first open model—
Sebastian Raschka
(00:29:21)
A long time ago.
Nathan Lambert
(00:29:22)
…since GPT-2. When I was writing about OpenAI’s open model release, people were like, “Don’t forget about GPT-2,” which I thought was really funny because it’s just such a different time. But gpt-oss is actually a very strong model and does some things that the other models don’t do very well. Selfishly, I’ll promote a bunch of Western companies; both in the US and Europe have these fully open models. I work at the Allen Institute for AI where we’ve been building OLMo, which releases data and code and all of this. And now we have actual competition for people that are trying to release everything so that other people can train these models.
Nathan Lambert
(00:29:57)
So there’s the Institute of Foundation Models/LLM360, which has had their K2 models of various types. Apertus is a Swiss research consortium. Hugging Face has SmolLM, which is very popular. And NVIDIA’s Nemotron has started releasing data as well. And then Stanford’s Marin community project, which is kind of making it so there’s a pipeline for people to open a GitHub issue and implement a new idea and then have it run in a stable language modeling stack. So this space, that list was way smaller in 2024-
Nathan Lambert
(00:30:31)
… so I think it was just AI2. So that’s a great thing for more people to get involved and to understand language models, which doesn’t really have a Chinese company that is an analog. While I’m talking, I’ll say that the Chinese open language models tend to be much bigger, and that gives them this higher peak performance as MoEs, whereas a lot of these things that we like a lot, whether it was Gemma or Nemotron, have tended to be smaller models from the US, which is starting to change. Mistral Large 3 came out in December, a giant MoE model very similar to the DeepSeek architecture. And then a startup, Reka AI, and NVIDIA’s Nemotron have teased MoE models way bigger than 100 billion parameters-
Nathan Lambert
(00:31:16)
… in the 400 billion parameter range coming in this Q1 2026 timeline. So I think this kind of balance is set to change this year in terms of what people are using the Chinese versus US open models for, which I’m personally going to be very excited to watch.
Lex Fridman
(00:31:32)
First of all, huge props for being able to name so many of these. Did you actually name LLaMA?
Nathan Lambert
(00:31:38)
No.
Lex Fridman
(00:31:39)
I feel like …
Nathan Lambert
(00:31:40)
RIP.
Sebastian Raschka
(00:31:41)
This was not on purpose.
Lex Fridman
(00:31:43)
RIP LLaMA. All right. Can you mention what are some interesting models that stand out? You mentioned Qwen 3 is obviously a standout.
Sebastian Raschka
(00:31:51)
So I would say the year’s almost book-ended by DeepSeek-V3 and DeepSeek R1. And then on the other hand, in December, DeepSeek-V3.2. Because what I like about those is they always have an interesting architecture tweak- … that others don’t have. But otherwise, if you want to go with the familiar but really good performance, Qwen 3 and, like Nathan said, also gpt-oss. And I think with gpt-oss, what’s interesting about it is it’s kind of the first open-weight model that was really trained with tool use in mind, which I do think is a bit of a paradigm shift where the ecosystem was not quite ready for it. So with tool use, I mean that the LLM is able to do a web search or call a Python interpreter.
Sebastian Raschka
(00:32:33)
And I do think it’s a standout because it’s a huge unlock. One of the most common complaints about LLMs is, for example, hallucinations, right? And so, in my opinion, one of the best ways to solve hallucinations is to not try to always remember information or make things up. For math, why not use a calculator app or Python?
Sebastian Raschka
(00:32:54)
If I ask the LLM, “Who won the soccer World Cup in 1998?” instead of just trying to memorize, it could go do a search. I think it’s usually still a Google search. So ChatGPT and gpt-oss would do a tool call to Google, maybe find the FIFA website, and find that it was France. It would get you that information reliably instead of just trying to memorize it. So I think it’s a huge unlock which right now is not fully utilized yet by the open-weight ecosystem. A lot of people don’t use tool call modes because I think it’s a trust thing. You don’t want to run this on your computer where it has access to tools and could wipe your hard drive, so you want to containerize that. But I do think that is a really important step for the upcoming years to have this ability.
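To make "tool use" concrete for readers: the harness around the model typically exposes a small registry of functions, and when the model emits a structured tool call instead of a plain answer, the harness runs the function and feeds the result back into the model's context. Below is a toy sketch of the dispatch side only; the "model output" strings are hard-coded, and `web_search` is a stub rather than a real search API.

```python
import json

# Toy tool registry: the harness exposes a few functions to the model.
def calculator(expression: str) -> str:
    # A real system would use a proper expression parser, not eval.
    return str(eval(expression, {"__builtins__": {}}, {}))

def web_search(query: str) -> str:
    # Stub standing in for a real search API call.
    return "France won the 1998 FIFA World Cup."

TOOLS = {"calculator": calculator, "web_search": web_search}

def dispatch(model_output: str) -> str:
    """If the model emitted a JSON tool call, run it; otherwise return the text."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain text answer, no tool needed
    if not isinstance(call, dict):
        return model_output
    fn = TOOLS[call["tool"]]
    return fn(**call["arguments"])  # result would go back into the model's context

# The "model" decides to search instead of relying on memorized facts:
print(dispatch('{"tool": "web_search", "arguments": {"query": "1998 World Cup winner"}}'))
print(dispatch('{"tool": "calculator", "arguments": {"expression": "2 + 2*3"}}'))
```

In a real agent loop this dispatch runs inside a sandbox or container, which is exactly the trust concern raised above.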
Lex Fridman
(00:33:44)
So a few quick things. First of all, thank you for defining what you mean by tool use. I think that’s a great thing to do in general for the concepts we’re talking about, even things as sort of well-established as MoEs. You have to say that means mixture of experts, and you kind of have to build up an intuition for people about what that means, how it’s actually utilized, what are the different flavors. So what does it mean that there’s just such an explosion of open models? What’s your intuition?
Nathan Lambert
(00:34:13)
If you’re releasing an open model, you want people to use it, is the first and foremost thing. And then after that comes things like transparency and trust. I think when you look at China, the biggest reason is that they want people around the world to use these models, and I think a lot of people will not. If you look outside of the US, a lot of people will not pay for software, but they might have computing resources where you can put a model on it and run it. I think there can also be data that you don’t want to send to the cloud. So the number one thing is getting people to use models, use AI, or use your AI that might not be able to do it without having access to the model.
Lex Fridman
(00:34:46)
I guess we should state explicitly, so we’ve been talking about these Chinese models and open weight models. Oftentimes, the way they’re run is locally. So it’s not like you’re sending your data to China or to whoever developed the model in Silicon Valley.
Nathan Lambert
(00:35:04)
A lot of American startups make money by hosting these models from China and selling them. It’s called selling tokens, which means somebody will call the model to do some piece of work. I think the other reason is for US companies like OpenAI. OpenAI is so GPU deprived; they’re at the limits of the GPUs. Whenever they make a release, they’re always talking about how their GPUs are hurting. And I think in one of these gpt-oss-120b release sessions, Sam Altman said, “Oh, we’re releasing this because we can use your GPUs. We don’t have to use our GPUs and OpenAI can still get distribution out of this,” which is another very real thing, because it doesn’t cost them anything.
Sebastian Raschka
(00:35:43)
And for the user, I think also, I mean, there are users who just use the model locally how they would use ChatGPT. But also for companies, I think it’s a huge unlock to have these models because you can customize them, you can train them, you can add more data post-training, like specialize them into, let’s say, law, medical models, whatever you have. And you mentioned Llama; the appeal of the open weight models from China is that the licenses are even friendlier. I think they are just unrestricted open source licenses, whereas if we use something like Llama or Gemma, there are some strings attached. I think it’s like an upper limit in terms of how many users you have.
Sebastian Raschka
(00:36:21)
And then if you exceed so many million users, you have to report your financial situation to, let’s say, Meta or something like that. And I think while it is a free model, there are strings attached, and people like things where strings are not attached. So I think that’s also one of the reasons besides performance why the open weight models from China are so popular, because you can just use them. There’s no catch in that sense.
Nathan Lambert
(00:36:46)
The ecosystem has gotten better on that front, but mostly downstream of these new providers providing such open licenses. That was funny when you pulled up Perplexity and said, “Kimi K2 Thinking hosted in the US.” Which is an exact example of what we’re talking about where people are sensitive to this. Kimi K2 Thinking is a model that is very popular. People say that has very good creative writing and also in doing some software things. So it’s just these little quirks that people pick up on with different models that they like.
Lex Fridman
(00:37:14)
What are some interesting ideas that some of these models have explored that you can speak to, like that are particularly interesting to you?
Sebastian Raschka
(00:37:21)
Maybe we can go chronologically. I mean, there was, of course, DeepSeek R1 that came out in January of 2025. However, this was based on DeepSeek-V3, which came out the year before in December 2024. There are multiple things on the architecture side. What is fascinating is you can still—I mean, that’s what I do with my from-scratch coding projects—you can still start with GPT-2, and you can add things to that model to make it into this other model. So it’s all still kind of like the same lineage. There is a very close relationship between those. But off the top of my head, what was unique with DeepSeek is the Mixture of Experts. I mean, they were not inventing Mixture of Experts.
Sebastian Raschka
(00:38:00)
We can maybe talk a bit more about what Mixture of Experts means. But just to list these things first before we dive into detail: Mixture of Experts, but then they also had multi-head latent attention, which is a tweak to the attention mechanism. This was, I would say in 2025, the main distinguishing factor between these open weight models: different tweaks to make inference or KV cache size more economical. We can also define KV cache in a few moments. But it makes it more economical to have long context, to shrink the KV cache size. So what are tweaks that we can do? Most of them focused on the attention mechanism. There is multi-head latent attention in DeepSeek; there is group query attention, which is still very popular.
Sebastian Raschka
(00:38:44)
It’s not invented by any of those models; it goes back a few years. But that would be the other option. Sliding window attention, I think OLMo 3 uses it if I remember correctly. So there are these different tweaks that make the models different. Otherwise, I put them all together in an article once where I just compared them; they are surprisingly similar. It’s just different numbers in terms of how many repetitions of the transformer block you have in the center and just little knobs that people tune. But what’s so nice about it is it works no matter what. You can tweak things, you can move the normalization layers around to get some performance gains.
Sebastian Raschka
(00:39:23)
And OLMo is always very good in ablation studies, showing what it actually does to the model if you move something around. Ablation studies: does it make it better or worse? But there are so many ways you can implement a transformer and make it still work. The big ideas that are still prevalent are Mixture of Experts, multi-head latent attention, sliding window attention, and group query attention. And then at the end of the year, we saw a focus on making the attention mechanism scale linearly with sequence length at inference. So there was Qwen3-Next, for example, which added a gated delta net. It’s inspired by state space models, where you have a fixed state that you keep updating. But it makes essentially this attention cheaper, or it replaces attention with a cheaper operation.
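The KV-cache motivation behind these attention tweaks can be put into rough numbers. The sketch below uses illustrative dimensions, not any named model's real config: standard multi-head attention caches K and V for every head, grouped-query attention shares a smaller set of KV heads across the query heads, and multi-head latent attention caches one compressed latent per token (simplified; the DeepSeek scheme also caches a small decoupled RoPE component).

```python
# Back-of-the-envelope KV-cache size per token, per layer, in bytes.
BYTES = 2  # fp16/bf16 element size

def kv_mha(n_heads, head_dim):
    # Standard multi-head attention: cache full K and V for every head.
    return 2 * n_heads * head_dim * BYTES

def kv_gqa(n_kv_heads, head_dim):
    # Grouped-query attention: query heads share a smaller set of KV heads.
    return 2 * n_kv_heads * head_dim * BYTES

def kv_mla(latent_dim):
    # Multi-head latent attention: cache one compressed latent per token
    # instead of per-head K/V (simplified).
    return latent_dim * BYTES

# Illustrative config: 32 query heads of dim 128, 8 KV heads, 512-dim latent.
print(kv_mha(32, 128))  # 16384 bytes/token/layer
print(kv_gqa(8, 128))   #  4096 bytes/token/layer (4x smaller)
print(kv_mla(512))      #  1024 bytes/token/layer (16x smaller)
```

Multiplied across dozens of layers and hundreds of thousands of context tokens, these constant factors are exactly what makes long context economical or not.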

Transformers: Evolution of LLMs since 2019

Lex Fridman
(00:40:08)
And it may be useful to step back and talk about transformer architecture in general.
Sebastian Raschka
(00:40:13)
Yeah, so maybe we should start with GPT-2 architecture, the transformer that was derived from the “Attention Is All You Need” paper.
Sebastian Raschka
(00:40:21)
So the “Attention Is All You Need” paper had a transformer architecture that had two parts: an encoder and a decoder. And GPT went with just focusing on the decoder part. It is essentially still a neural network and it has this attention mechanism inside. And you predict one token at a time. You pass it through an embedding layer. There’s the transformer block. The transformer block has attention modules and a fully connected layer. And there are some normalization layers in between. But it’s essentially neural network layers with this attention mechanism. So coming from GPT-2, when we move on to gpt-oss-120b, there is, for example, the Mixture of Experts layer. It’s not invented by gpt-oss; it’s a few years old.
Sebastian Raschka
(00:41:04)
But it is essentially a tweak to make the model larger without consuming more compute in each forward pass. So there is this fully connected layer, and if listeners are familiar with multi-layer perceptrons, you can think of a mini multi-layer perceptron, a fully connected neural network layer inside the transformer. And it’s very expensive because it’s fully connected. If you have a thousand inputs and a thousand outputs, that’s like a million connections. And it’s a very expensive part in this transformer. And the idea is to kind of expand that into multiple feedforward networks. So instead of having one, let’s say you have 256, but you don’t use all of them at the same time.
Sebastian Raschka
(00:41:49)
So you now have a router that says, “Okay, based on this input token, it would be useful to use this fully connected network.” And in that context, it’s called an expert. So a Mixture of Experts means you have multiple experts. And depending on what your input is—let’s say it’s more math-heavy—it would use different experts compared to, let’s say, translating input text from English to Spanish. It would maybe consult different experts. It’s not as clear-cut to say, “Okay, this is only an expert for math and this for Spanish.” It’s a bit more fuzzy. But the idea is essentially that you pack more knowledge into the network, but not all the knowledge is used all the time.
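The routing idea just described can be sketched with made-up experts and router scores; real MoE layers use learned networks for both the router and the experts, so this is only the control flow:

```python
def route(scores, k):
    # Pick the indices of the k highest router scores.
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def moe_layer(x, experts, router_scores, k=2):
    chosen = route(router_scores, k)
    total = sum(router_scores[i] for i in chosen)
    # Weighted sum of the chosen experts' outputs; the unchosen experts
    # are never evaluated, which is why MoE is called "sparse".
    out = sum(router_scores[i] / total * experts[i](x) for i in chosen)
    return out, chosen

# Four tiny "experts", each just a scalar function for illustration.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]
scores = [0.1, 0.5, 0.3, 0.1]   # router output for this one token

out, used = moe_layer(3.0, experts, scores, k=2)
print(used)  # [1, 2] -> only experts 1 and 2 were evaluated
print(out)
```

With 256 experts and, say, 8 active per token, the parameter count grows by a large factor while the compute per token barely changes.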
Sebastian Raschka
(00:42:27)
That would be very wasteful. So yeah, kind of like during the token generation, you are more selective. There’s a router that selects which tokens should go to which expert. It adds more complexity. It’s harder to train. There’s a lot that can go wrong, like expert collapse. So I think that’s why OLMo 3 still uses a dense model. I mean, there are, I think, OLMo models with Mixture of Experts, but OLMo 3 is dense. And “dense” is jargon too. There’s a distinction between dense and sparse. Mixture of Experts is considered sparse because we have a lot of experts, but only a few of them are active. And then dense would be the opposite, where you only have one fully connected module, and it’s always utilized.
Lex Fridman
(00:43:08)
So maybe this is a good place to also talk about KV cache. But actually, before that, even zooming out, fundamentally, how many new ideas have been implemented from GPT-2 to today? Like, how different really are these architectures?
Sebastian Raschka
(00:43:25)
Things like the Mixture of Experts. The attention mechanism in gpt-oss-120b, that would be the Group Query Attention mechanism. So it’s a slight tweak from multi-head attention to Group Query Attention, so that’s two. I think they replaced LayerNorm with RMSNorm, but it’s just a different normalization there and not a big change. It’s just a tweak. The nonlinear activation function—for people familiar with deep neural networks, it’s the same as replacing sigmoid with ReLU. It’s not changing the network fundamentally. It’s just a tweak. And that’s about it, I would say. It’s not really fundamentally that different. It’s still the same architecture. So you can go from one into the other by just adding these changes, basically.
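The Group Query Attention tweak mentioned here is mostly about shrinking the KV cache: many query heads share a smaller set of key/value heads. A back-of-the-envelope sketch with illustrative head counts, not numbers from gpt-oss:

```python
def kv_cache_bytes(n_kv_heads, head_dim, seq_len, n_layers, bytes_per_val=2):
    # 2x for keys and values; fp16/bf16 means 2 bytes per value.
    return 2 * n_kv_heads * head_dim * seq_len * n_layers * bytes_per_val

# Multi-head attention: every one of 32 query heads has its own K/V head.
mha = kv_cache_bytes(n_kv_heads=32, head_dim=128, seq_len=4096, n_layers=32)
# GQA: the 32 query heads share 8 K/V heads (groups of 4).
gqa = kv_cache_bytes(n_kv_heads=8, head_dim=128, seq_len=4096, n_layers=32)

print(mha // gqa)  # 4 -> the KV cache is 4x smaller with 8 KV heads
```

The model quality is close to full multi-head attention, but serving long contexts gets much cheaper, which is why the tweak stuck.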
Lex Fridman
(00:44:09)
It fundamentally is still the same architecture.
Sebastian Raschka
(00:44:12)
Mm-hmm. Yep. So for example, you mentioned my book earlier. That’s a GPT-2 model in the book because it’s simple and it’s very small, approximately 124 million parameters. But in the bonus materials, I do have OLMo from scratch, Gemma 3 from scratch, and other types of from-scratch models. And I always start with my GPT-2 model and just, you know, add different components, and you get from one to the other. It’s kind of like a lineage in a sense. Yeah.
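The "approximately 124 million parameters" figure can be reproduced from the standard published GPT-2 small configuration. This sketch assumes the usual sizes (vocab 50257, context 1024, width 768, 12 layers, 4x MLP expansion, tied embeddings, biases everywhere):

```python
def gpt2_param_count(vocab=50257, ctx=1024, d=768, n_layers=12, mlp_mult=4):
    tok_emb = vocab * d          # token embedding (tied with output head)
    pos_emb = ctx * d            # learned positional embedding
    # Per transformer block:
    attn = d * 3 * d + 3 * d     # fused Q,K,V projection (+ bias)
    attn += d * d + d            # attention output projection (+ bias)
    mlp = d * (mlp_mult * d) + mlp_mult * d   # MLP up projection (+ bias)
    mlp += (mlp_mult * d) * d + d             # MLP down projection (+ bias)
    norms = 2 * (2 * d)          # two LayerNorms, each with scale + shift
    block = attn + mlp + norms
    final_norm = 2 * d
    return tok_emb + pos_emb + n_layers * block + final_norm

print(gpt2_param_count())  # 124439808, i.e. ~124M
```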
Lex Fridman
(00:44:37)
Can you build up an intuition for people? Because sort of when you zoom out and look at it, there’s so much rapid advancement in the AI world, and at the same time, fundamentally the architectures have not changed. So where is all the turbulence, the turmoil of the advancement happening? Where are the gains to be had?
Sebastian Raschka
(00:45:01)
So there are the different stages where you develop or train the network. You have pre-training. Back in the day, it was just pre-training with GPT-2. Now you have pre-training, mid-training, and post-training. So I think right now we are in the post-training-focused stage. I mean, pre-training still gives you advantages if you scale it up with better, higher-quality data. But then we have capability unlocks that were not there with GPT-2, for example. ChatGPT is basically a GPT-3 model, and GPT-3 is the same as GPT-2 in terms of architecture. What was new was adding the supervised fine-tuning and the Reinforcement Learning from Human Feedback. So it’s more on the algorithmic side rather than the architecture.
Nathan Lambert
(00:45:44)
I would say that the systems also change a lot. I think if you listen to NVIDIA’s announcements, they talk about things like, “You now do FP8, you can now do FP4.” And what is happening is these labs are figuring out how to utilize more compute to put into one model, which lets them train faster and lets them put more data in. And then you can find better configurations faster by doing this. So you can look at the tokens per second per GPU as a metric that you look at when you’re doing large-scale training. And you can go from, like, 10K to 13K by turning on FP8 training, which means you’re using less memory per parameter in the model. And by saving less information, you do less communication and you can train faster.
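The "less memory per parameter" point is simple arithmetic. A sketch with an illustrative 70B-parameter model (real training also keeps gradients and optimizer state, which this ignores):

```python
def weight_gigabytes(n_params, bytes_per_param):
    # Memory for just the model weights at a given numeric precision.
    return n_params * bytes_per_param / 1e9

n_params = 70e9  # illustrative model size, not any specific model
print(weight_gigabytes(n_params, 4))  # 280.0 GB in FP32
print(weight_gigabytes(n_params, 2))  # 140.0 GB in FP16/BF16
print(weight_gigabytes(n_params, 1))  # 70.0 GB in FP8
```

Halving the bytes per value also halves the bandwidth needed to move tensors between GPUs, which is where the tokens-per-second-per-GPU gains come from.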
Nathan Lambert
(00:46:24)
So all of these system things underpin way faster experimentation on data and algorithms. It’s this kind of loop that keeps going, which is hard to describe when you look at the architectures and they’re exactly the same. But the code base used to train these models is going to be vastly different, and the GPUs are different, but you could probably train gpt-oss-20b way faster in wall-clock time than GPT-2 was trained at the time.
Sebastian Raschka
(00:46:54)
Yeah. Like you said, they had, for example, in the Mixture of Experts, this NVIDIA FP4 optimization where you get more throughput. But I do think for the speed, this is true, but it doesn’t give the model new capabilities in a sense. It’s just: how much can we make the computation coarser without suffering in terms of model performance degradation? But I do think there are alternatives popping up to the transformer. There are text diffusion models, a completely different paradigm. And although text diffusion models might use transformer architectures, it’s not an autoregressive transformer. And also Mamba models; it’s a State Space Model.
Sebastian Raschka
(00:47:34)
But they do have trade-offs, and what’s true is there’s nothing that has replaced the autoregressive transformer as the state-of-the-art model. So, for state-of-the-art, you would still go with that thing, but there are now alternatives for the cheaper end—alternatives that are kind of making compromises, but it’s not just one architecture anymore. There are little ones coming up. But if we talk about the state-of-the-art, it’s pretty much still the transformer architecture, autoregressive, derived from GPT-2 essentially.

AI Scaling Laws: Are they dead or still holding?

Lex Fridman
(00:48:06)
I guess the big question here is—we talked quite a bit here on the architecture behind the pre-training—are the scaling laws holding strong across pre-training, post-training, inference, context size, data, and synthetic data?
Nathan Lambert
(00:48:20)
I’d like to start with the technical definition of a scaling law-
Nathan Lambert
(00:48:23)
…which kind of informs all of this. The scaling law is the power law relationship between… You can think of the x-axis—what you are scaling—as a combination of compute and data, which are kind of similar, and then the y-axis is like the held-out prediction accuracy over our next tokens. We talked about models being autoregressive. It’s like if you keep a set of text that the model has not seen, how accurate will it get when you train? And the idea of scaling laws came when people figured out that that was a very predictable relationship. I think that technical term is continuing, and then the question is, what do users get out of it? And then there are more types of scaling, where OpenAI’s o1 was famous for introducing inference-time scaling.
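The power-law relationship described here is a straight line in log-log space, which is what makes it so predictable. A sketch with synthetic data; the exponent is invented for illustration, not a measured scaling coefficient:

```python
import math

def fit_power_law(compute, loss):
    # Fit log(loss) = log(a) - b * log(compute) by least squares.
    xs = [math.log(c) for c in compute]
    ys = [math.log(l) for l in loss]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope   # (a, b)

compute = [1e18, 1e19, 1e20, 1e21]           # training FLOPs, synthetic
loss = [3.2 * c ** -0.05 for c in compute]   # perfectly power-law data

a, b = fit_power_law(compute, loss)
print(round(b, 3))  # 0.05 -> the fit recovers the exponent
```

In practice you fit this on small runs and extrapolate to decide whether a 10x bigger run is worth the money.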
Nathan Lambert
(00:49:07)
And I think less famously for also showing that you can scale reinforcement learning training and get kind of this log x-axis and then a linear increase in performance on the y-axis. So there are kind of these three axes now where the traditional scaling laws are talked about for pre-training—which is how big your model is and how big your dataset is—and then scaling reinforcement learning, which is like how long can you do this trial and error learning that we’ll talk about. We’ll define more of this, and then this inference-time compute, which is just letting the model generate more tokens on a specific problem.
Nathan Lambert
(00:49:37)
So I’m kind of bullish; they’re all really still working, but the low-hanging fruit has mostly been taken, especially in the last year on Reinforcement Learning with Verifiable Rewards, which is this RLVR, and then inference-time scaling. That’s why these models feel so different to use: previously you would get that first token immediately, and now they’ll go off for seconds, minutes, or even hours generating these hidden thoughts before giving you the first word of your answer. And that’s all about this inference-time scaling, which is such a wonderful step function in terms of how the models’ abilities changed. It enabled this tool use stuff and enabled this much better software engineering that we were talking about.
Nathan Lambert
(00:50:17)
And this is, when we say enabled, almost entirely downstream of the fact that this Reinforcement Learning with Verifiable Rewards training just let the models pick up these skills very easily. So if you look at the reasoning process when the models are generating a lot of tokens, what it’ll often be doing is: it tries a tool, it looks at what it gets back, it tries another API, it sees what it gets back and if it solves the problem. The models, when you’re training them, very quickly learn to do this.
Nathan Lambert
(00:50:46)
And then at the end of the day, that gives this kind of general foundation where the model can use CLI commands very nicely in your repo, handle Git for you, move things around, organize things, or search to find more information—which, if we were sitting in these chairs a year ago, is something that we didn’t really think of the models doing. So this is just something that has happened this year and has totally transformed how we think of using AI, which I think is very magical. It’s such an interesting evolution and unlocks so much value. But it’s not clear what the next avenue will be in terms of unlocking stuff like this.
Nathan Lambert
(00:51:23)
I think that there’s—we’ll get to continual learning later, but there’s a lot of buzz around certain areas of AI, but no one knows when the next step function will really come.
Lex Fridman
(00:51:31)
So you’ve actually said quite a lot of things there, and said profound things quickly. It would be nice to unpack them a little bit. You say you’re bullish basically on every version of scaling. So can we just start at the beginning? Pre-training: are we implying that the low-hanging fruit on pre-training scaling has been picked? Has pre-training hit a plateau, or are you still bullish on even pre-training?
Nathan Lambert
(00:52:01)
Pre-training has gotten extremely expensive. I think to scale up pre-training, it’s also implying that you’re going to serve a very large model to the users. So I think that it’s been loosely established the likes of GPT-4 and similar models were around one trillion parameters at the biggest size. There’s a lot of rumors that they’ve actually gotten smaller as training has gotten more efficient. You want to make the model smaller because then your costs of serving go down proportionately. The cost of training these models is really low relative to the cost of serving them to hundreds of millions of users. I think DeepSeek had this famous number of about five million dollars for pre-training at cloud market rates.
Nathan Lambert
(00:52:40)
In the OLMo 3 paper, section 2.4, we detailed how long we had the GPU clusters sitting around for training—which includes engineering issues and multiple seeds—and it was about two million dollars to rent the cluster to deal with all the problems and headaches of training a model. So a lot of people could get one to 10 million dollars to train a model, but the recurring cost of serving millions of users is really billions of dollars of compute. A thousand-GPU rental can cost you 100 grand a day, and these companies could have millions of GPUs. You can look at how much these things cost to sit around.
Nathan Lambert
(00:53:19)
So that’s kind of a big thing, and then it’s like, if scaling is actually giving you a better model, is it going to be financially worth it? And I think we’ll slowly push it out as AI solves more compelling tasks—like the likes of Claude Opus 4.5 making Claude Code just work for things. I launched this project called the ATOM project, which is American Truly Open Models, in July, and that was like a true vibe-coded website. I have a job to make plots and stuff. Then I came back to refresh it in the last few weeks and Claude Opus 4.5, versus whatever model was available at the time, just crushed all the issues that it had from building in June and July. It might be a bigger model. There’s a lot of things that go into this, but there’s still progress coming.
Lex Fridman
(00:54:04)
So what you’re speaking to is the nuance of the y-axis of the scaling laws—that the way it’s experienced versus on a benchmark, the actual intelligence might be different. But still, your intuition about pre-training: if you scale the size of compute, will the models get better? Not whether it’s financially viable, but just from the law aspect of it, do you think the models will get smarter?
Nathan Lambert
(00:54:28)
Yeah. And I think that there’s… And this sometimes comes off as almost disillusioned from leadership at AI companies saying this, but they’re like, “It’s held for 13 orders of magnitude of compute; why would it ever end?” So I think fundamentally it is pretty unlikely to stop. It’s just like eventually we’re not even going to be able to test the bigger scales because of all the problems that come with more compute. I think that there’s a lot of talk on how 2026 is a year when very large NVIDIA Blackwell compute clusters—like gigawatt-scale facilities—are coming online. And these were all contracts for power and data centers that were signed and sought out in ’22 and 2023, before or right after ChatGPT.
Nathan Lambert
(00:55:13)
So it took this two-to-three-year lead time to build these bigger clusters to train the models, while there’s obviously immense interest in building even more data centers than that. So that is kind of the crux that people are saying: these new clusters are coming. The labs are going to have more compute for training. They’re going to utilize this, but it’s not a given. I’ve seen so much progress that I expect it, and I expect a little bit bigger models. I would say it’s more like we’ll see a $2,000 subscription this year; we’ve already seen $200 subscriptions. It’s like that could 10x again, and these are the kind of things that could come—and they’re all downstream of a bigger model that offers just a little bit more of a cutting edge.
Lex Fridman
(00:55:53)
So, it’s reported that xAI is going to hit that one-gigawatt scale early ’26, and a full two gigawatts by year end. How do you think they’ll utilize that in the context of scaling laws? Is a lot of that inference? Is a lot of that training?
Nathan Lambert
(00:56:12)
It ends up being all of the above. I think that all of your decisions when you’re training a model come back to pre-training. So if you’re going to scale RL on a model, you still need to decide on your architecture that enables this. We were talking about other architectures and using different types of attention. We’re also talking about Mixture of Experts models. The sparse nature of MoE models makes it much more efficient to do generation, which becomes a big part of post-training, and you need to have your architecture ready so that you can actually scale up this compute. I still think most of the compute is going in at pre-training. Because you can still make a model better, you still want to go and revisit this.
Nathan Lambert
(00:56:53)
You still want the best base model that you can. And in a few years that’ll saturate and the RL compute will just go longer.
Lex Fridman
(00:57:00)
Are there people who disagree with you that say basically pre-training is dead? That it’s all about scaling inference, scaling post-training, scaling context, continual learning, and scaling synthetic data?
Nathan Lambert
(00:57:15)
People vibe that way and describe it in that way, but I think it’s not the practice that is happening.
Lex Fridman
(00:57:19)
It’s just the general vibe of people saying this thing is dead—
Nathan Lambert
(00:57:21)
The excitement is elsewhere; the low-hanging fruit is in RL. For example, we released our model in November. Every company has deadlines. Our deadline was like November 20th, and for that, our run was five days, which compared to 2024 is a very long time to just be doing post-training on a model of about 30 billion parameters. It’s not a big model. And then in December, we had another release, which was just letting the RL run for another three and a half weeks, and the model got notably better, so we released it. And that’s a big amount of time to allocate to something that is going to be your peak for the year. So it’s like—
Lex Fridman
(00:57:57)
The reasoning is—
Nathan Lambert
(00:57:58)
There’s these types of decisions that happen when they’re training a model where they just can’t leave it forever. You have to keep pulling in the improvements you have from your researchers. So you redo pre-training, you’ll do this post-training for a month, but then you need to give it to your users. You need to do safety testing. I think there’s a lot in place that reinforces this cycle of just keep updating the models. There’s things to improve. You get a new compute cluster that lets you do something maybe more stably or faster. You hear a lot about Blackwell having rollout issues, where at AI2 most of the models we’re pre-training are on like 1,000 to 2,000 GPUs.
Nathan Lambert
(00:58:36)
But when you’re pre-training on 10,000 or 100,000 GPUs, you hit very different failures. GPUs are known to break in weird ways, and doing a 100,000-GPU run is like… you’re pretty much guaranteed to always have at least one GPU that is down. And you need to have your training code handle that redundancy, which is just a very different problem. Whereas what we’re doing, like, “Oh, I’m playing with post-training on a DGX Spark,” or people learning ML, is very different from what they’re battling to train these biggest models, which is mass distributed scale. But that’s a systems problem—
Nathan Lambert
(00:59:11)
…in order to enable the scaling laws, especially at pre-training. You need all of these GPUs at once. When we shift to reinforcement learning, it actually lends itself to heterogeneous compute because you have many copies of the model. To do a primer for language model reinforcement learning, what you’re doing is you have two sets of GPUs. One you can call the actor and one you call the learner. The learner is where your actual reinforcement learning updates happen. These are traditionally policy gradient algorithms. Proximal Policy Optimization, PPO, and Group Relative Policy Optimization, GRPO, are the two popular classes.
Nathan Lambert
(00:59:50)
On the other side, you’re going to have actors which are generating completions, and these completions are the things that you’re going to grade. Reinforcement learning is all about optimizing reward. In practice, you can have a lot of different actors in different parts of the world doing different types of problems, and then you send it back to this highly networked compute cluster to do this actual learning, where you take the gradients and you need to have a tightly meshed network where you can do different types of parallelism and spread out your model for efficient training. Every different type of training and serving has these considerations you need to scale.
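One concrete piece of the GRPO recipe mentioned above is how the completions in a group are graded against each other: the group's own mean reward serves as the baseline. A toy sketch with invented verifier rewards:

```python
def grpo_advantages(rewards):
    # Group-relative advantages: normalize each completion's reward
    # by the mean and standard deviation of its sibling completions.
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5 or 1.0   # avoid division by zero if all rewards equal
    return [(r - mean) / std for r in rewards]

# Four completions sampled for one prompt, graded by a verifier
# (1.0 = correct answer, 0.0 = wrong). These numbers are made up.
rewards = [1.0, 0.0, 0.0, 1.0]
advs = grpo_advantages(rewards)
print(advs)  # [1.0, -1.0, -1.0, 1.0]
```

The learner then scales each completion's policy-gradient update by its advantage, pushing up the correct completions and down the incorrect ones, with no separate value network needed.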
Nathan Lambert
(01:00:27)
We talked about pre-training, we talked about RL, and then inference time scaling is: how do you serve a model that’s thinking for an hour to 100 million users? I don’t really know about that, but I know that’s a hard problem. In order to give people this intelligence, there’s all these systems problems, and we need more compute and you need more stable compute to do it.
Lex Fridman
(01:00:46)
But you’re bullish on all of these kinds of scaling is what I’m hearing. On the inference, on the reasoning, even on the pre-training?
Sebastian Raschka
(01:00:54)
Yeah, so that’s a big can of worms, but there are basically two knobs: training and inference scaling, where you can get gains. In a world where we had infinite compute resources, you’d want to do all of them. You have training, you have inference scaling, and training is like a hierarchy: pre-training, mid-training, and post-training. Changing the model size, more training data, training a bigger model—it gives you more knowledge. Then the model is a better base model, or what we still call a foundation model, and it unlocks capabilities. But you don’t necessarily have the model be able to solve your most complex tasks—
Sebastian Raschka
(01:01:34)
…tasks during pre-training or after pre-training. You still have these other unlock phases, mid-training or post-training with RL, that unlocks capabilities that the model has in terms of knowledge from the pre-training. And I think, sure, if you do more pre-training, you get a better base model that you can unlock later. But like Nathan said, it just becomes too expensive. We don’t have infinite compute, so you have to decide: do I want to spend that compute more on making the model larger? It’s a trade-off. In an ideal world, you want to do all of them. And I think in that sense, scaling is still pretty much alive.
Sebastian Raschka
(01:02:08)
You would still get a better model, but like we saw with GPT-4.5, it’s just not worth it. I mean, because you can unlock more performance with other techniques at that moment, especially if you look at inference scaling. That’s one of the biggest gains this year with o1, where it took a smaller model further than pre-training a larger model like GPT-4.5. So, I wouldn’t say pre-training scaling is dead; it’s just that there are other more attractive ways to scale right now. But at some point, you will still want to make some progress on the pre-training. The thing to consider is where you want to spend your money.
Sebastian Raschka
(01:02:47)
If you spend it more on pre-training, it’s a fixed cost. You train the model, and then it has this capability forever. You can always use it. With inference scaling, you don’t spend money during training; you spend money later per query, and then it’s about the math. How long is my model going to be on the market if I replace it in half a year? Maybe it’s not worth spending 5 million, 10 million, or 100 million dollars on training it longer. Maybe I will just do more inference scaling and get the performance from there. It maybe costs me 2 million in terms of user queries. It becomes a question of how many users you have and doing the math. I think that’s also where it’s interesting, where ChatGPT is in a position.
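The "doing the math" here is essentially a break-even calculation between a one-time training spend and a recurring per-query spend. All dollar figures below are invented for illustration:

```python
extra_pretrain_cost = 10_000_000        # one-time: $ for a longer/bigger run
extra_inference_cost_per_query = 0.002  # recurring: $ per longer reasoning trace

def breakeven_queries(train_cost, per_query):
    # Number of queries at which the one-time training spend equals
    # the accumulated per-query inference spend.
    return train_cost / per_query

n = round(breakeven_queries(extra_pretrain_cost,
                            extra_inference_cost_per_query))
print(n)  # 5000000000 -> five billion queries to break even
```

If the model will be replaced before it serves that many queries, paying per query for inference scaling is the cheaper way to get the same capability, which is exactly the trade-off being described.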
Sebastian Raschka
(01:03:27)
I think they have a lot of users where they need to go a bit cheaper, where they have that GPT-5 model that is a bit smaller. For other companies, their customers have other trade-offs. For example, there were the math problems or the Math Olympiad where they had a proprietary model, and I’m pretty sure it’s just a model that has been fine-tuned a little bit more, but most of it was inference scaling to achieve peak performance in certain tasks where you don’t need that all the time. But yeah, long story short, I do think pre-training, mid-training, post-training, and inference scaling are all still things you want to do. At the moment, this year, it’s finding the right ratio that gives you the best bang for the buck, basically.

How AI is trained: Pre-training, Mid-training, and Post-training

Lex Fridman
(01:04:13)
I think this might be a good place to define pre-training, mid-training, and post-training.
Sebastian Raschka
(01:04:18)
So, pre-training is the classic training, one next-token prediction at a time. You have a big corpus of data. Nathan probably also has very interesting insights there because of OLMo 3; a big portion of the paper focuses on the right data mix. So, pre-training is essentially just training with a cross-entropy loss on next-token prediction over a vast corpus of internet data, books, papers, and so forth. It has changed a little bit over the years, in the sense that people used to throw in everything they could. Now, it’s not just raw data. It’s also synthetic data, where people rephrase certain things. So synthetic data doesn’t necessarily mean purely AI-made-up data.
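The cross-entropy objective just described, in miniature, with a made-up four-token vocabulary:

```python
import math

def next_token_loss(probs, target):
    # Cross-entropy at one position: -log p(correct next token).
    # During pre-training this is averaged over every position in the corpus.
    return -math.log(probs[target])

# The model's predicted distribution over a toy 4-token vocabulary
# at a single position (invented numbers).
probs = [0.1, 0.7, 0.1, 0.1]

# A confident correct prediction costs little; a wrong one costs a lot.
print(round(next_token_loss(probs, target=1), 4))  # 0.3567
print(round(next_token_loss(probs, target=0), 4))  # 2.3026
```

Minimizing this loss over trillions of tokens is the whole of pre-training; everything else is about which tokens you feed it.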
Sebastian Raschka
(01:04:58)
It’s also taking something from a Wikipedia article and then rephrasing it as a Q&A question or summarizing it, rewording it, and making better data that way. I think of it like with humans, if someone reads a book compared to a messy—no offense, but like—Reddit post or something like that. I do think you learn—no offense, but I think—
Lex Fridman
(01:05:25)
There’s going to be a post about this, Sebastian.
Nathan Lambert
(01:05:28)
Some Reddit data is very coveted and excellent for training. You just have to filter it.
Sebastian Raschka
(01:05:33)
And I think that’s the idea. I think it’s like if someone took that and rephrases it in a, let’s say, more concise and structured way— I think it’s higher quality data that gets the LLM maybe the same—you get the same LLM out of it at the end, but it gets there faster. It trains faster because if the grammar and the punctuation are correct, it already learns the correct way versus getting information from a messy way and then learning later how to correct that. So, I think that is how pre-training evolved and why scaling still works; it’s not just about the amount of data, it’s also the tricks to make that data better for you. And then mid-training is… I mean, it used to be called pre-training.
Sebastian Raschka
(01:06:21)
I think it’s called mid-training because it was awkward to have pre-training and post-training but nothing in the middle, right? It sounds a bit weird. You have pre-training and post-training, but what’s the actual training? So, the mid-training is usually similar to pre-training, but it’s a bit more specialized. It’s the same algorithm, but what you do is you focus, for example, on long context documents. The reason you don’t do that during pre-training is because you don’t have that many long context documents. We have a specific phase. And one problem of LLMs is still that it’s a neural network; it has the problem of catastrophic forgetting.
Sebastian Raschka
(01:06:56)
So, you teach it something, it forgets other things. It’s not 100% forgetting, but there’s no free lunch. It’s also the same with humans. If you ask me some math I learned 10 years ago, I wouldn’t know; I would have to look at it again.
Lex Fridman
(01:07:09)
Nathan was actually saying that he’s consuming so much content that there’s a catastrophic forgetting issue.
Nathan Lambert
(01:07:14)
Yeah, I’m trying to learn so much about AI, and it’s like when I was learning about pre-training parallelism, I’m like, “I lost something and I don’t know what it was.”
Sebastian Raschka
(01:07:22)
I don’t want to anthropomorphize LLMs, but I think it’s the same in terms of how humans learn. Quantity is not always better, because it’s about being selective. Mid-training is being selective in terms of quality content at the end, so the last thing the LLM has seen is the quality stuff. And then post-training is all the fine-tuning: supervised fine-tuning, DPO, RLVR, reinforcement learning with human feedback, and so forth. So, the refinement stages. And it’s also interesting, the cost thing, right? Pre-training, you spend a lot of money on that right now. RL a bit less. RL, you don’t really teach it knowledge; it’s more like unlocking the knowledge.
Sebastian Raschka
(01:08:03)
It’s more like skill learning, like how to solve problems with the knowledge that it has from pre-training. There are actually three papers this year, or last year, 2025, on RL for pre-training. But I don’t think anyone does that in production.
Nathan Lambert
(01:08:17)
Toy, toy examples for now.
Sebastian Raschka
(01:08:18)
Toy examples, right. But to generalize, RL post-training is more like the skill unlock, where pre-training is like soaking up the knowledge essentially.
Nathan Lambert
(01:08:26)
A few things that could be helpful for people. A lot of people think of synthetic data as being bad for training models. You mentioned that DeepSeek got an OCR—Optical Character Recognition—paper. A lot of labs did; AI2 had one, others had multiple. And the reason each of these labs has these is because there’s vast amounts of PDFs and other digital documents on the web in formats that aren’t encoded with text easily. So you use these, like DeepSeek OCR or what we called OLMo OCR, to extract what can be trillions of tokens of candidate data. Pre-training dataset size is on the order of trillions; it’s measured in trillions of tokens.
Nathan Lambert
(01:09:10)
Smaller models from researchers can be something like 5 to 10 trillion. Qwen is documented going up to like 50 trillion, and there’s rumors that these closed labs can go to 100 trillion tokens. Getting this potential data is a very big funnel, and the data you actually train the model on is a small percentage of this. This character recognition data would be described as synthetic data for pre-training in a lab. And then there’s also the fact that ChatGPT now gives wonderful answers, and you can train on those best answers; that’s synthetic data. It’s very different than the early ChatGPT hallucinations data.
Sebastian Raschka
(01:09:48)
One interesting question is, if I recall correctly, OLMo 3 was trained with less data than specifically some other open-weight models, maybe even OLMo 2. But you still got better performance, and that might be one of the examples of how the data helped.
Nathan Lambert
(01:10:01)
It’s mostly down to data quality. I think if we had more compute, we would train for longer. I think we’d ultimately see that as something we would want to do. Especially with big models, you need more compute because big models can absorb more from data, and you get more benefit out of this. It’s like one of those logarithmic graphs—a small model will level off sooner if you’re measuring tons of tokens, and bigger models need more. But mostly, we aren’t training that big of models right now at AI2, and getting the highest quality data we can is the natural starting point.
Lex Fridman
(01:10:38)
Is there something to be said about the topic of data quality? Is there some low-hanging fruit there still where the quality could be improved?
Nathan Lambert
(01:10:46)
It’s like turning the crank. So I think historically, in the open, there’s been a canonical best pre-training dataset, and which one it is has moved around depending on who has put out the best recent effort. Like AI2’s Dolma was very early with the first OLMo, and Hugging Face had FineWeb. And there’s the DCLM project, which stands for Data Comp Language Model. There’s been Data Comp for other machine learning projects, and they had a very strong dataset. A lot of it is the internet becoming fairly closed off, so we have Common Crawl, which I think is hundreds of trillions of tokens, and you filter it.
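The filtering step described here can be sketched roughly as follows. Everything in this snippet is a toy stand-in: real pipelines run trained quality classifiers over Common Crawl, while this example fakes one with a crude heuristic just so the shape of the funnel is visible.

```python
# Toy sketch of pre-training data filtering, not any lab's actual
# pipeline. Real systems use trained quality classifiers; here a
# simple keyword/length heuristic stands in for one.

def quality_score(doc: str) -> float:
    """Hypothetical stand-in for a learned quality classifier."""
    words = doc.split()
    if not words:
        return 0.0
    # Reward longer, prose-like documents; penalize shouty boilerplate.
    avg_len = sum(len(w) for w in words) / len(words)
    upper_frac = sum(w.isupper() for w in words) / len(words)
    return min(avg_len / 6.0, 1.0) * (1.0 - upper_frac)

def filter_corpus(docs, threshold=0.5):
    """Keep only documents the classifier scores above threshold."""
    return [d for d in docs if quality_score(d) >= threshold]

crawl = [
    "The transformer architecture relies on attention mechanisms.",
    "CLICK HERE BUY NOW FREE FREE FREE",
    "Reinforcement learning optimizes expected cumulative reward.",
]
kept = filter_corpus(crawl)
print(len(kept))  # the two prose documents survive the funnel
```

The point is only the funnel structure: a huge candidate pool goes in, a scorer prunes it, and a small high-quality fraction comes out.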
Nathan Lambert
(01:11:21)
And it looks like a lot of scientific work where you’re training classifiers and making decisions based on how you prune down this dataset into the highest quality stuff and the stuff that suits your tasks. Previously, language models were tested a lot more on knowledge and conversational things, but now they’re expected to do math and code. To train a reasoning model, you need to remix your whole dataset. And there are actually some wonderful scientific methods here where you can take your gigantic dataset and sample a lot of really tiny things from different sources, like GitHub, Stack Exchange, Reddit, or Wikipedia.
Nathan Lambert
(01:11:56)
You can sample small things from them, train small models on each of these mixes, and measure their performance on your evaluations. And you can just do basic linear regression, and it’s like, “Here’s your optimal dataset.” But if your evaluations change, your dataset changes a lot. So a lot of OLMo 3 was adding new sources for reasoning to be better at math and code, and then you do this mixing procedure and it gives you the answer. I think a lot of that’s happened at labs this year; there are new hot things, whether it’s coding environments or web navigation, and you just need to bring in new data and change your whole pre-training so that your post-training can work better. And that’s like the constant re-evolution and the re-determining of what they care about for their models.
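The mix-and-regress procedure described above can be sketched like this. It is a toy simulation, not any lab's pipeline: the expensive "train a small model on this mix and run evals" step is replaced by a made-up linear scorer with hidden weights, and the source names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sources = ["web", "code", "math", "wiki"]

# Pretend each data source contributes linearly to the eval score.
# In reality you'd train a small model per mix and run your evals;
# these hidden weights are a stand-in for that expensive step.
hidden_weights = np.array([0.2, 0.5, 0.4, 0.1])

def evaluate_mix(mix):
    """Simulated 'train a small model on this mix, run evals'."""
    return float(mix @ hidden_weights + rng.normal(0, 0.01))

# Sample random mixture proportions (each row sums to 1).
mixes = rng.dirichlet(np.ones(len(sources)), size=50)
scores = np.array([evaluate_mix(m) for m in mixes])

# Basic linear regression: predict eval score from mix proportions.
coef, *_ = np.linalg.lstsq(mixes, scores, rcond=None)

# The largest coefficient suggests which source to upweight.
best = sources[int(np.argmax(coef))]
print(best)  # "code" under these made-up weights
```

If the evaluations change, the fitted coefficients change, which is exactly the point made above: a new eval suite can reorder which sources look most valuable.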
Lex Fridman
(01:12:35)
Are there fun anecdotes of what sources of data are particularly high quality that we wouldn’t expect? You mentioned Reddit sometimes can be a source.
Nathan Lambert
(01:12:45)
Reddit was very useful. I think that PDFs are definitely one.
Sebastian Raschka
(01:12:51)
Oh, especially arXiv.
Nathan Lambert
(01:12:52)
Yeah, AI2 has run Semantic Scholar for a long time, which is a competitor to Google Scholar with a lot more features. To do this, AI2 has found and scraped a lot of PDFs for openly accessible papers that might not be behind the paywalled garden of a certain publisher—truly open scientific PDFs. If you sit on all of these and process them, you can get value out of it. I think a lot of that style of work has been done by the frontier labs much earlier. You need to have a pretty skilled researcher who understands how things change models, and they bring it in and clean it; it’s a lot of labor.
Nathan Lambert
(01:13:34)
I think at a lot of frontier labs, when they scale researchers, a lot more goes into data. If you join a frontier lab and you want to have impact, the best way to do it is just find new data that’s better. The fancy, glamorous algorithmic things, like figuring out how to make o1, is like the sexiest thought for a scientist. It’s like, “Oh, I figured out how to scale RL.” There’s a group that did that, but I think most of the contributions are-
Lex Fridman
(01:13:58)
On the dataset.
Nathan Lambert
(01:13:58)
… “I’m gonna make the data better,” or, “I’m gonna make the infrastructure better so that everybody on my team can run experiments 5% faster.”
Sebastian Raschka
(01:14:04)
At the same time, I think it’s also one of the closest guarded secrets—what your training data is—for legal reasons. And so there’s also a lot of work that goes into hiding what your training data was, essentially, trying to get the model to not give away the sources because of those legal reasons.
Nathan Lambert
(01:14:19)
The other thing, to be complete, is that some people are trying to train on only licensed data, whereas Common Crawl is a scrape of the whole internet. If I host multiple websites, I’m happy to have them train language models, but I’m not explicitly licensing what governs it. Therefore, Common Crawl is largely unlicensed, which means your consent really hasn’t been provided for how to use the data. There’s another idea where you can train language models only on data that has been licensed explicitly so that the kind of governing contract is provided. I’m not sure if Apertus is the copyright thing or the license thing. I know that the reason they did it was for an EU compliance thing, where they wanted to make sure that their model fit one of those checks.
Sebastian Raschka
(01:15:01)
Mm-hmm. And on that note, there’s also the distinction between the licensing. Some people, like you said, just purchase the license. Let’s say they buy an Amazon Kindle book or a Manning book, and then use that in the training data; that is a gray zone because you paid for the content and you might want to train on it. But then there are also restrictions where even that shouldn’t be allowed. That is where it gets a bit fuzzy.
Sebastian Raschka
(01:15:28)
And I think that is still a hot topic right now. Big companies like OpenAI approached private companies for their proprietary data, and private companies are becoming more and more protective of their data because they know, “Okay, this is going to be my moat in a few years.” And I do think that’s the interesting question. If LLMs become more commoditized, and a lot of people learn about LLMs, there will be a lot more people able to train them. Of course, there are infrastructure challenges.
Sebastian Raschka
(01:16:00)
But if you think of big industries like pharmaceuticals, law, or finance, I do think they at some point will hire people from other frontier labs to build their in-house models on their proprietary data, which will be another unlock with pre-training that is currently not there. Because even if you wanted to, you can’t get that data—you can’t get access to clinical trials most of the time and these types of things. So I do think scaling in that sense might still be pretty much alive if you look at domain-specific applications, because right now we are just looking at general-purpose LLMs like ChatGPT, Anthropic, and so forth. They are just general purpose. They’re not even scratching the surface of what an LLM can do if it is really specifically trained and designed for a specific task.
Nathan Lambert
(01:16:47)
I think on the data thing, this is one of the things where, like, this happened in 2025 and we totally forget it: Anthropic lost in court and owed $1.5 billion to authors. Anthropic, I think, bought thousands of books and scanned them and was cleared legally for that because they bought the books, and that is going through the system. And then on the other side, they also torrented some books, and I think this torrenting was the path where the court said they were liable to pay these billions of dollars to authors, which is just such a mind-boggling lawsuit that kind of just came and went. Like, that is so much money from the VC ecosystem.
Lex Fridman
(01:17:22)
These are court cases that will define the future of human civilization because it’s clear that data drives a lot of this, and there’s this very complicated human tension. I mean, you can empathize. You’re both authors. And there’s some degree to which, I mean, you put your heart and soul and your sweat and tears into the writing that you do. It feels a little bit like theft for somebody to train on your data without giving you credit.
Sebastian Raschka
(01:17:49)
And there are, like Nathan said, also two layers to it. Someone might buy the book and then train on it, which could be argued fair or not fair, but then there are the straight-up companies who use pirated books where they’re not even compensating the author. That is, I think, where people got a bit angry about it specifically, I would say.
Lex Fridman
(01:18:06)
Yeah, but there has to be some kind of compensation scheme. This is moving towards something like what Spotify originally did for music streaming. You know, what does that compensation look like? You have to define those kinds of models. You have to think through all of that. One other thing I think people are generally curious about, and I’d love to get your thoughts: as LLMs are used more and more, if you look at even arXiv or GitHub, more and more of the data is generated by LLMs. What do you do in that kind of world? How big of a problem is that?
Nathan Lambert
(01:18:38)
The largest problem is the infrastructure and systems, but from an AI point of view, it’s kind of inevitable.
Lex Fridman
(01:18:45)
So it’s basically LLM-generated data that’s curated by humans essentially, right?
Nathan Lambert
(01:18:49)
Yes, and I think that a lot of open source contributors are legitimately burning out. If you have a popular open source repo, somebody’s like, “Oh, I want to do open source AI. It’s good for my career,” and they just vibe code something and throw it in. You might get more of this than I do.
Sebastian Raschka
(01:19:05)
Yeah, so I actually have a case study here. I have a repository called mlxtend that I developed as a student, around 10 or 15 years ago, and it is still a reasonably popular library for certain algorithms, especially frequent pattern mining stuff. There were recently two or three people who submitted a lot of PRs in a very short amount of time. I do think LLMs have been involved in submitting these PRs. For me, as the maintainer, there are two things. First, I’m a bit overwhelmed; I don’t have time to read through it all, especially since it’s an older library that is not a priority for me. At the same time, I kind of also appreciate it, because I think something people forget is that it’s not just using the LLM.
Sebastian Raschka
(01:19:46)
There’s still a human layer that verifies something, and that is in a sense also how data is labeled, right? One of the most expensive things is getting labeled data for RLHF (Reinforcement Learning from Human Feedback) phases. This is kind of like that, where it goes through phases and then you actually get higher quality data out of it. So I don’t mind it, in a sense. It can feel overwhelming, but I do think there is also value in it.
Lex Fridman
(01:20:11)
It feels like there’s a fundamental difference between raw LLM-generated data and LLM-generated data with a human in the loop that does some kind of verification, even if that verification is a small percentage- … of the lines of code.
Sebastian Raschka
(01:20:25)
I think this goes with anything where people think, “Oh, yeah. I can just use an LLM to learn about XYZ,” which is true. You can, but there might be a person who is an expert who might have used an LLM to write specific code. There is this human work that went into it, making it nice and throwing out the not-so-nice parts to pre-digest it for you, and that saves you time. And I think that’s the value-add, where you have someone filtering things or even using the LLMs correctly. This is still labor that you get for free when, for example, you read a Substack article.
Sebastian Raschka
(01:21:05)
I could maybe ask an LLM to give me opinions on that, but I wouldn’t even know what to ask. And I think there is still value in reading that article compared to me going to the LLM because you are the expert. You select what knowledge is actually spot on and should be included, and you give me this executive summary. This is a huge value-add because now I don’t have to waste three to five hours to go through this myself and maybe get some incorrect information. And so I think that’s also where the future still is for writers, even though there are LLMs that can save you time.
Lex Fridman
(01:21:43)
It’s kind of fascinating to actually watch—and I’m sure you guys do this, but for me to look at the difference between a summary and the original content. Even if it’s a page-long summary of page-long content, it’s interesting to see how the LLM-based summary takes the edge off. What is the signal it removes from the thing?
Nathan Lambert
(01:22:07)
The voice is what I talk about a lot.
Lex Fridman
(01:22:09)
Voice? Well, voice… I would love to hear what you mean by voice, that’s really powerful, but sometimes there are literally insights. In removing an insight, you’re actually fundamentally changing the meaning of the thing. So I’m continuously disappointed by how bad LLMs are at really getting to the core insights, which is what a great summary does. Even when I use extensive, extremely elaborate prompts where I’m really trying to dig for the insights, it’s still not quite there, which… I mean, that’s a whole deep philosophical question about what is human knowledge and wisdom and what it means to be insightful. But when you talk about the voice, what do you mean?
Nathan Lambert
(01:22:52)
So when I write, I think a lot of what I’m trying to do is take what you think as a researcher, which is very raw. A researcher is trying to encapsulate an idea at the frontier of their understanding, and they’re trying to put what is a feeling into words. And I think that in my writing, I try to do this, which makes it come across as raw but also high-information in a way that some people will get and some won’t. And that’s kind of the nature of research. And I think this is something that language models don’t do well. Particularly, they’re all trained with this reinforcement learning from human feedback which is designed to take feedback from a lot of people and, in a way, average how the model behaves from this.
Nathan Lambert
(01:23:30)
And I think that it’s going to be hard for a model to be very incisive when there’s that sort of filter in it. This is a wonderful fundamental problem for researchers in RLHF: this provides so much utility in making the models better, but also the problem formulation has this knot in it that you can’t get past. These language models don’t have this prior in their deep expression that they’re trying to get at. I don’t think it’s impossible to do. I think there are stories of models that really shock people. Like, I would love to have tried Bing Sydney—did that have more voice? Because it would so often go off the rails on people and affect…
Nathan Lambert
(01:24:13)
And what it did was, historically, obviously scary, like telling a reporter to leave his wife. That’s a crazy model to potentially put into general adoption. But that’s kind of the trade-off: is this RLHF process, in some ways, adding limitations?
Lex Fridman
(01:24:28)
That’s a terrifying place to be as one of these frontier labs and companies because millions of people are using them.
Nathan Lambert
(01:24:35)
There was a lot of backlash last year with GPT-4o getting removed. I’ve personally never used the model, but I’ve talked to people at OpenAI who get emails from users that might be detecting subtle differences in the deployments in the middle of the night. And they email them and say, “My friend is different.” They find these employees’ emails and send them things because they are so attached to what is a set of model weights and a configuration that is deployed to the users. We see this with TikTok. I don’t use TikTok, but supposedly, in five minutes, the algorithm gets you. It’s locked in. And those are language models doing recommendations.
Nathan Lambert
(01:25:15)
Like, I think there are ways that you can do this with a language model where, within five minutes of chatting with it, the model just gets you. And that is something that people aren’t really ready for. I think: don’t give that to kids, at least until we know what’s happening.
Lex Fridman
(01:25:30)
But there’s also going to be this mechanism… What’s going to happen with these LLMs as they’re used more and more… Unfortunately, the nature of the human condition is such that people commit suicide. And what journalists will do is report extensively on the people who commit suicide, and they will very likely link it to the LLMs because they have that data about the conversations. If you’re really struggling, if you’re depressed, if you’re thinking about suicide, you’re probably going to talk to LLMs about it. And so what journalists will do is say, “The suicide was committed because of the LLM.” And that’s going to lead to the companies, because of legal issues and so on, more and more taking the edge off of the LLM.
Lex Fridman
(01:26:13)
So it’s going to be as generic as possible. It’s so difficult to operate in this space because, of course, you don’t want an LLM to cause harm to humans at that level, but also, this is the nature of the human experience—to have a rich conversation, a fulfilling conversation, one that challenges you and from which you grow. You need that edge. And that’s something extremely difficult for AI researchers on the RLHF front to actually have to solve because you’re actually dealing with the human condition.
Nathan Lambert
(01:26:47)
A lot of researchers at these companies are so well-motivated. Anthropic and OpenAI are culturally so wanting to do good for the world through this. And it’s such a… I’m like, “Ooh, I don’t want to work on this,” because, on the one hand, a lot of people see AI as a health ally, as somebody they can talk to about their health confidentially, but then it bleeds all the way into talking about mental health. It’s heartbreaking that this might be the thing where somebody goes over the edge, but other people might be saved. And there’s things that as a researcher training models, it’s like, I don’t want to train image generation models and release them openly because I don’t want to enable somebody to have a tool on their laptop that can harm other people.
Nathan Lambert
(01:27:34)
I don’t have the infrastructure in my company to do that safely. There are a lot of areas like this where it needs people who will approach it with complexity and the conviction that it’s just such a hard problem.
Lex Fridman
(01:27:47)
But also, we as a society and as users of these technologies need to make sure that we’re having the complicated conversation about it versus just fearmongering that big tech is causing harm to humans or stealing your data. It’s more complicated than that. And you’re right, there’s a very large number of people inside these companies, many of whom you know and many of whom I know, that deeply care about helping people. They are considering the full human experience of people from across the world, not just Silicon Valley—what their needs are and what that means. It’s really difficult to design this one system that is able to help all these different kinds of people across different age groups, cultures, and mental conditions.
Nathan Lambert
(01:28:31)
I wish that the timing of AI was different regarding the relationship of big tech to the average person. Big tech’s reputation is so low, and because AI is so expensive, it’s inevitably going to be a big tech thing. It takes so many resources, and people say the US is, quote-unquote, “betting the economy on AI” with this build-out. To have these be intertwined at the same time makes for such a hard communication environment. It would be good for me to go talk to more people in the world who hate big tech and see AI as a continuation of that.
Lex Fridman
(01:29:02)
One of the things you actually recommend, one of the antidotes that you talk about, is to find agency in this whole system, as opposed to sitting back in a powerless way and consuming the AI slop as it rapidly takes over the internet. Find agency by using AI to build things—build apps, build… One, that actually helps you build intuition, but two, it’s empowering because you can understand how it works and what the weaknesses are. It gives your voice power to say, “This is bad use of the technology, and this is good use of technology.” You’re more plugged into the system then, so you can understand it better and steer it better as a consumer.
Sebastian Raschka
(01:29:48)
I think that’s a good point you brought up about agency. Instead of ignoring it and saying, “Okay, I’m not going to use it,” I think it’s probably long-term healthier to say, “Okay, it’s out there. I can’t put it back.” It’s like the internet and computers when they first came out. How do I make the best use of it, and how does it help me up-level myself? The one thing I worry about here, though, is if you just fully use it for something you love to do, the thing you love to do is no longer there. That could potentially lead to burnout. For example, if I use an LLM to do all my coding for me, now there’s no coding; I’m just managing something that is coding for me.
Sebastian Raschka
(01:30:24)
Two years later, let’s say, if I just do that eight hours a day—having something code for me—do I still feel fulfilled? Is this hurting me in terms of being excited about my job and what I’m doing? Am I still proud to build something?
Lex Fridman
(01:30:43)
On that topic of enjoyment, it’s quite interesting. We should just throw this in there, that there’s this recent survey of 791 professional developers—professional meaning 10-plus years of experience.
Nathan Lambert
(01:30:55)
That’s a long time. As a junior developer?
Lex Fridman
(01:31:01)
Yeah, in this day and age. The results are surprising on many fronts. They break it down by junior and senior developers, and it shows that both groups use AI-generated code in the code they ship. This is not just for fun or learning; this is code they ship. Most of them use it for around 50% or more. What’s interesting is that for the category where over 50% of the shipped code is AI-generated, senior developers are much more likely to do so. But you don’t want AI to take away the thing you love. I think this speaks to my experience. These particular results show that about 80% of people find it either somewhat more enjoyable or significantly more enjoyable to use AI as part of their work.
Sebastian Raschka
(01:31:59)
I think it depends on the task. From my personal usage, for example, I have a website where I sometimes tweak things. I personally don’t enjoy this, so if the AI can help me implement something on my website, I’m all for it. It’s great. But at the same time, when I solve a complex problem—if there’s a bug, and I hunt this bug and find it—it’s the best feeling in the world. You get so much joy. But now, if you don’t even think about the bug and just go directly to the LLM, you never have that kind of feeling, right?
Sebastian Raschka
(01:32:38)
But then there could be a middle ground where you try it yourself, you can’t find it, you use the LLM, and then you don’t get frustrated because it helps you move on to something that you enjoy. Looking at these statistics, what is not factored in is that it’s averaging over all different scenarios. We don’t know if it’s for the core task or for something mundane that people would not have enjoyed otherwise. In a sense, AI is really great for doing mundane things that take a lot of work.
Sebastian Raschka
(01:33:09)
For example, my wife has a podcast for book club discussions, and she was transferring the show notes from Spotify to YouTube, and the links somehow broke. She had some episodes with 100 links or something, and it would have been really painful to go in there and fix each link manually. So I suggested, “Hey, let’s try ChatGPT.” We copied the text into ChatGPT, and it fixed them. Instead of two hours going from link to link, it made that work seamless. I think everyone has a use case where AI is useful for something like that—something that would be really boring and mundane.
Lex Fridman
(01:33:51)
For me personally, since we’re talking about coding, a lot of the enjoyment comes from the Cursor side, the Claude Code side, where I have a pair programmer. It’s less lonely. You made debugging sound like this great joy. No, I would say debugging is like a drink of water after you’ve been going through a desert for days. You skip the whole desert part where you’re suffering. Sometimes it’s nice to have a friend who can’t really find the bug, but can give you some intuition about the code, and together you go through the desert and find that drink of water. For me, maybe it speaks to the loneliness of the programming experience. That is a source of joy.
Sebastian Raschka
(01:34:48)
It’s maybe also related to delayed gratification. I’m a person who even as a kid liked the idea of Christmas presents better than actually getting them. I would look forward to the day, but then it’s over and I’m disappointed. Maybe it’s like food—it tastes better when you’re really hungry. With debugging, it’s not always great; it’s often frustrating, but if you can solve it, then it’s great. But there’s also a Goldilocks zone where if it’s too hard, then you’re wasting your time. I think another challenge, though, is: how will people learn?
Sebastian Raschka
(01:35:33)
The chart we looked at showed that more senior developers are shipping AI-generated code than the junior ones. I think it’s interesting because intuitively you would think it’s the junior developers because they don’t know how to do the thing yet. It could mean the AI is not good enough yet to solve those tasks, but it could also mean experts are more effective at using it—they know how to review the code and they trust it more. One issue in society in the future will be: how do you become an expert if you never try to do the thing yourself?
Sebastian Raschka
(01:36:12)
I learned by trying things myself. With math textbooks, if you look at the solutions, you learn something, but you learn better if you try first and then appreciate the solution because you know how to put it into your mental framework. If LLMs are here all the time, would you actually go through the length of struggling? Would you be willing to struggle? Struggle is not nice, but if you use the LLM to do everything, at some point you will never really take the next step and you won’t get that unlock that you get as an expert using an LLM.
Sebastian Raschka
(01:36:53)
So, I think there’s a Goldilocks sweet spot where maybe the trick is you make dedicated offline time where you study two hours a day, and the rest of the day you use LLMs. I think it’s important for people to still invest in themselves, in my opinion, and not just LLM everything.

Post-training explained: Exciting new research directions in LLMs

Lex Fridman
(01:37:10)
Yeah, there is a sense that we, together as a civilization, each individually have to find that Goldilocks zone. And in the programming context as developers. Now, we’ve had this fascinating conversation that started with pre-training and mid-training. Let’s get to post-training. There’s a lot of fun stuff in post-training. So, what are some of the interesting ideas in post-training?
Nathan Lambert
(01:37:31)
The biggest one from 2025 is learning this reinforcement learning with verifiable rewards, RLVR. You can scale up the training there, which means doing a lot of this kind of iterative generate-grade loop, and that lets the models learn both interesting behaviors on the tool use and software side. This could be searching, running commands on their own and seeing the outputs, and then also that training enables this inference-time scaling very nicely. It just turned out that this paradigm was very nicely linked, where this kind of RL training enables inference-time scaling. But inference-time scaling could have been found in different ways. So, it was kind of this perfect storm where the models change a lot, and the way that they’re trained is a major factor in doing so.
Nathan Lambert
(01:38:15)
And this has changed how people approach post-training dramatically.
Lex Fridman
(01:38:20)
Can you describe RLVR, popularized by DeepSeek R1? Can you describe how it works?
Nathan Lambert
(01:38:25)
Yeah. Fun fact, I was on the team that came up with the term RLVR, which is from our Tulu 3 work before DeepSeek. We don’t take a lot of credit for being the people who popularized scaling RL, but, as an aside, about as much fun as academics get is the ability to name and influence—
Nathan Lambert
(01:38:43)
—the discourse, because the closed labs can only say so much. One of the things you can do as an academic is, while you might not have the compute to train the model, you can frame things in a way that ends up being… I describe it as like a community can come together around this RLVR term, which is very fun. And then DeepSeek are the people that did the training breakthrough, which is, they scaled the reinforcement learning. They have the model generate answers and then grade the completion if it was right, and then that accuracy is your reward for reinforcement learning. So reinforcement learning is classically an agent that acts in an environment, and the environment gives it a state and a reward back, and you try to maximize this reward.
Nathan Lambert
(01:39:26)
In the case of language models, the reward is normally accuracy on a set of verifiable tasks, whether it’s math problems or coding tasks. And it starts to get blurry with things like factual domains, which are also, in some ways, verifiable, as are constraints on your instructions, like ‘respond only with words that start with A.’ All of these things are verifiable in some way. The core idea is you find a lot more of these problems that are verifiable and you let the model try them many times while taking these RL gradient updates. The infrastructure evolved from reinforcement learning from human feedback, RLHF, where in that era, the score they were trying to optimize was a learned reward model of aggregate human preferences.
Nathan Lambert
(01:40:13)
So you kind of changed the problem domains and that let the optimization go on to much bigger scales, which kind of kickstarted a major change in what the models can do and how people use them.
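A minimal sketch of the verifiable-reward idea described here: sample several completions per prompt, grade each against the known answer, and use accuracy as the reward. The completions are hardcoded stand-ins for sampled model outputs, the grader is a crude last-number check rather than a real math verifier, and the actual RL policy update (PPO/GRPO-style) is omitted.

```python
import re

def verifiable_reward(completion: str, gold_answer: str) -> float:
    """Grade a completion: 1.0 if its final number matches the gold
    answer, else 0.0. A toy verifier; real ones handle formats,
    units, and symbolic equivalence."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    return 1.0 if numbers and numbers[-1] == gold_answer else 0.0

prompt = "What is 17 * 3?"
gold = "51"

# Stand-ins for sampled model completions; in RLVR the policy
# generates these and is updated toward higher-reward samples.
completions = [
    "17 * 3 = 17 + 17 + 17 = 51",
    "17 * 3 is 54",
    "Let me think. 17 * 3 = 51",
]
rewards = [verifiable_reward(c, gold) for c in completions]
print(rewards)  # [1.0, 0.0, 1.0]
```

Note that the grader only checks the final answer, not the reasoning: the model is free to arrive there however it likes, which is exactly the hands-off property discussed below.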
Lex Fridman
(01:40:24)
What kind of domains is RLVR amenable to?
Nathan Lambert
(01:40:28)
Math and code are the famous ones, and then there’s a lot of work kind of on what is called the rubrics, which is related to a word people might have heard, LLM-as-a-judge. For each problem, I’ll have a set of problems in my training dataset. I will then have another language model and ask it, “What would a good answer to this problem look like?” And then you could try the problem a bunch of times over and over again and assign a score based on this rubric. So that’s not necessarily verifiable like a math and code domain, but this rubrics idea and other scientific problems where it might be a little bit more vague is where a lot of the attention is. They’re trying to push this set of methods into these more open-ended domains so the models can learn a lot more.
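The rubric idea can be sketched like so. In practice another LLM writes and applies the rubric (the LLM-as-a-judge setup); here both the rubric items and the "judge" are faked with keyword checks, so only the scoring-loop structure is shown, not a real judge model.

```python
# Toy sketch of rubric-based grading. A judge LLM would normally
# produce "what would a good answer look like?" items and score
# answers against them; simple checks stand in for both here.

rubric = [
    ("mentions the mitochondria", lambda a: "mitochondria" in a.lower()),
    ("mentions ATP", lambda a: "atp" in a.lower()),
    ("explains in more than 10 words", lambda a: len(a.split()) > 10),
]

def rubric_score(answer: str) -> float:
    """Fraction of rubric items satisfied; usable as an RL reward."""
    hits = sum(check(answer) for _, check in rubric)
    return hits / len(rubric)

answer = ("The mitochondria is the organelle that produces ATP, "
          "the cell's main energy currency, via respiration.")
print(rubric_score(answer))  # all three items satisfied -> 1.0
```

Unlike the math verifier, this gives partial credit, which is what makes rubrics attractive for the vaguer, more open-ended domains mentioned above.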
Sebastian Raschka
(01:41:11)
I think that’s called reinforcement learning with AI feedback, right?
Nathan Lambert
(01:41:14)
That’s the older term from it that was coined in Anthropic’s Constitutional AI paper. So a lot of these things come in cycles.
Sebastian Raschka
(01:41:21)
Also, just one step back for the RLVR. I think the interesting, beautiful thing here is that you ask the LLM a math question, you know the correct answer, and you let the LLM figure it out, but how it does it is… I mean, you don’t really constrain it much. There are some constraints you can add, like ‘use the same language’ or ‘don’t switch between Spanish and English.’ But let’s say you’re pretty much hands-off.
Sebastian Raschka
(01:41:44)
You only give the question and the answer, and then the LLM has the task to arrive at the right answer. But the beautiful thing here is what happens in practice: the LLM will do a step-by-step description, like how a student or a mathematician would derive the solution. It will use those steps and that helps the model to improve its own accuracy. And then, like you said, the inference scaling. Inference scaling loosely means spending more compute while using the LLM during inference, and here the inference scaling is that the model would use more tokens. In the DeepSeek R1 paper, they showed the longer they train the model, the longer the responses are.
Sebastian Raschka
(01:42:28)
They grow over time. They use more tokens, so it becomes more expensive for simple tasks, but these explanations help the model with accuracy. There are also a lot of papers showing that what the model explains does not necessarily have to be correct, or maybe it's even unrelated to the answer, but for some reason the fact that it is explaining still helps the model. And I think it's also—again, I don't want to anthropomorphize these LLMs—but it's kind of like how we humans operate, right? If there's a complex math problem in a math class, you usually have a note paper and you do it step by step. You cross things out.
Sebastian Raschka
(01:43:03)
And the model also self-corrects, and that was, I think, the aha moment in the DeepSeek R1 paper. They called it the ‘aha moment’ because the model itself recognized it made a mistake and then said, “Ah, I did something wrong, let me try again.” I think that’s just so cool that this falls out of just giving it the correct answer and having it figure out how to do it—that it kind of does, in a sense, what a human would do. Although LLMs don’t think like humans, it’s a kind of interesting coincidence. And the nice side effect is it’s great for us humans to see these steps. It builds trust, and we can learn or double-check things.
Nathan Lambert
(01:43:40)
There’s a lot in here. I think- There’s been a lot of debate this year on if the language models—I think these aha moments are kind of fake because in pre-training, you essentially have seen the whole internet. So you have definitely seen people explaining their work, even verbally, like a transcript of a math lecture: “You try this, oh, I messed this up.” And what reinforcement learning—this RLVR—is very good at doing, is amplifying— —these behaviors, because they’re very useful in enabling the model to think longer and to check its work. I agree that it is very beautiful that this training kind of… the model learns to amplify this in a way that is just so useful at the final answers being better.
Sebastian Raschka
(01:44:16)
I can give you also a hands-on example. I was training the Qwen 3 base model with RLVR on MATH-500. The base model had an accuracy of about 15%. Just 50 steps, like in a few minutes with RLVR, the model went from 15% to 50% accuracy. And you can’t tell me it’s learning anything fundamentally about math in—
Nathan Lambert
(01:44:38)
The Qwen example is weird because there have been two papers this year, one of which I was on, that talk about data contamination in Qwen—and specifically that they train on a lot of data in this special mid-training phase, which we can chime in on for a minute because it's weird—because they train on problems that are almost identical to MATH.
Sebastian Raschka
(01:44:53)
Exactly. And so you can see that basically the RL is not teaching the model any new knowledge about math. You can’t do that in 50 steps. So the knowledge is already there in the pre-training; you’re just unlocking it.
Nathan Lambert
(01:45:03)
I still disagree with the premise because there's a lot of weird complexities that you can't prove. One of the things that points to weirdness is that if you take the Qwen 3 so-called base model—you could Google "math dataset Hugging Face" and take a problem—if you put it into Qwen 3 base… all these math problems have words, so it would be like, "Alice has five apples and gives three to whoever," and there are these word problems. The reason people are suspicious of these Qwen base models is that if you change the numbers but keep the words, Qwen will produce, without tools, a very high-accuracy decimal representation—
Nathan Lambert
(01:45:43)
—of the answer, which means at some point it was shown problems that were almost identical to the test set, and it was using tools to get a very high-precision answer. But a language model without tools will never actually have this. So it's been this big debate in the research community: how much of these reinforcement learning papers that are training on Qwen and measuring specifically on this math benchmark—where there have been multiple papers talking about contamination—how much can you believe them? I think this is what caused the reputation of RLVR being about formatting, because you can get these gains so quickly, and therefore it must already be in the model. But there's a lot of complexity here. It's not really controlled experimentation, so we don't really know.
Sebastian Raschka
(01:46:26)
But if it weren’t true, I would say distillation wouldn’t work, right? Distillation can work to some extent, but the biggest problem—and I’m researching this contamination—is we don’t know what’s in the data. Unless you have a new dataset, it is really impossible. Even something simpler like MMLU, which is a multiple-choice benchmark—if you just change the format slightly, like using a dot instead of a parenthesis, the model accuracy will vastly differ.
Nathan Lambert
(01:47:04)
I think that that could be like a model issue rather than a general issue.
Sebastian Raschka
(01:47:09)
It’s not even malicious by the developers of the LLM, like, “Hey, we want to cheat at that benchmark.” It’s just it has seen something at some point. I think the only fair way to evaluate an LLM is to have a new benchmark that is after the cutoff date when the model was deployed.
Lex Fridman
(01:47:22)
Can we lay out what would be the recipe of all the things that go into post-training? And you mentioned RLVR was a really exciting, effective thing. Maybe we should elaborate. RLHF still has a really important component to play. What kind of other ideas are there on post-training?
Nathan Lambert
(01:47:40)
I think you can take this in order. You could view it as what made o1, which is this first reasoning model, possible. You’re going to have similar interventions where you start with mid-training. The thing that is rumored to enable o1 and similar models is really careful data curation where you’re providing a broad set of what is called reasoning traces. This is just the model generating words in a forward process that reflects breaking down a problem into intermediate steps and trying to solve them. So at mid-training, you need to have data similar to this so that when you move into post-training, primarily with these verifiable rewards, it can learn.
Nathan Lambert
(01:48:27)
And then what is happening today is you’re figuring out which problems to give the model, how long you can train it for, and how much inference you can enable the model to use when solving these verifiable problems. As models get better, certain problems are no longer useful; the model will solve them 100% of the time, and therefore there’s very little signal. If we look at the GRPO equation, this one is famous for this because essentially the reward given to the agent is based on how good a given action—a completion—is relative to the other answers to that same problem. So if all the problems get the same answer, there’s no signal in these types of algorithms.
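The "no signal when every completion gets the same answer" property of GRPO-style training can be sketched directly. This is a minimal illustration of the group-relative advantage computation (rewards normalized against the group's mean and standard deviation), not a full implementation of the algorithm:

```python
def group_relative_advantages(rewards):
    """GRPO-style advantage: each completion's reward for a prompt is
    normalized against the mean and std of all completions sampled for
    that same prompt. If every completion gets the same reward, all
    advantages are zero and there is no learning signal."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    if std == 0:
        return [0.0] * n  # all answers scored equally: nothing to learn
    return [(r - mean) / std for r in rewards]

print(group_relative_advantages([1, 0, 0, 1]))  # mixed results: useful signal
print(group_relative_advantages([1, 1, 1, 1]))  # solved 100% of the time: all zeros
```

This is why problems that a model already solves every time (or never solves) get dropped from the training mix, and why labs keep hunting for harder problems as models improve.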
Nathan Lambert
(01:49:09)
So what they’re doing is finding harder problems, which is why you hear about things like scientific domains, which are so hard to get anything right in. If you have a lab or something, it just generates so many tokens, or much harder software problems. The frontier models are all pushing into these harder domains where they can train on more problems and the model will learn more skills at once. The RLHF link to this is that RLHF has been, and still is, the finishing touch on the models, where it makes them more useful by improving the organization, style, or tone.
Nathan Lambert
(01:49:42)
There are different things that resonate with different audiences. Some people like a really quirky model, and RLHF could be good at enabling that personality, and some people hate the markdown bulleted list thing that the models do, but it’s actually really good for quickly parsing information. This human feedback stage is really great for putting this into the model at the end of the day. It’s what made ChatGPT so magical for people. And that use has actually remained fairly stable. This formatting can also help the models get better at math problems, for example.
Nathan Lambert
(01:50:17)
The border between style and formatting and the method that you use to answer a problem are actually very closely linked when you’re training these models. RLHF can still make a model better at math, but these verifiable domains are a much more direct process for doing this because it makes more sense with the problem formulation. To summarize: mid-training gives the model the skills it needs to learn; RL with verifiable rewards lets the model try many times, putting a lot of compute into trial-and-error learning across hard problems; and then RLHF finishes the model, making it easy to use and rounding it out.
Lex Fridman
(01:51:02)
Can you comment on the amount of compute required for RLVR?
Nathan Lambert
(01:51:06)
It’s only gone up and up. I think Grok 4 was famous for saying they use a similar amount of compute for pre-training and post-training. Back to the scaling discussion, they involve very different hardware for scaling. Pre-training is very compute-bound, which is like the FLOPS discussion: how many matrix multiplications can you get through in one time. Because in RL you’re generating these answers and trying the model in real-world environments, it ends up being much more memory-bound. You’re generating long sequences, and the attention mechanisms have a behavior where you get a quadratic increase in memory as you get to longer sequences. So the compute becomes very different.
Nathan Lambert
(01:51:44)
In pre-training, we would talk about a model—if we go back to the Biden administration executive order—it’s like 10 to the 25th FLOPS to train a model. If you’re using FLOPS in post-training, it’s a lot weirder because the reality is just how many hours you are allocating how many GPUs for. In terms of time, the RL compute is getting much closer because you just can’t put it all into one system. Pre-training is so computationally dense where all the GPUs are talking to each other and it’s extremely efficient, whereas RL has all these moving parts and it can take a long time to generate a sequence of a hundred thousand tokens.
Nathan Lambert
(01:52:17)
If you think about Gemini 3 Pro taking an hour, what if your training run has to sample for an hour? You have to make sure that’s handled efficiently. So in GPU hours or wall-clock hours, the RL runs are probably approaching the same number of days as pre-training, but they probably aren’t using as many GPUs at the same time. There are rules of thumb in labs where you don’t want your pre-training runs to last more than a month because they fail catastrophically. If you are planning a huge cluster to be held for two months and then it fails on day 50, the opportunity costs are just so big.
Nathan Lambert
(01:52:54)
People don’t want to put all their eggs in one basket. GPT-4 was like the ultimate YOLO run, and nobody ever wanted to do it before where it took three months to train and everybody was shocked that it worked. I think people are a little bit more cautious and incremental now.
Sebastian Raschka
(01:53:07)
So RLVR is more unlimited in how much you can train or still get benefit, whereas RLHF, because it’s preference tuning, reaches a certain point where it doesn’t really make sense to spend more budget on it. To take a step back with preference tuning: there are multiple people that can give multiple explanations for the same thing and they can both be correct, but at some point, you learn a certain style and it doesn’t make sense to iterate on it. My favorite example is if relatives ask me what laptop they should buy. I give them an explanation or ask about their use case, and they might prioritize battery life and storage.
Sebastian Raschka
(01:53:46)
Other people, like us, would prioritize RAM and compute. Both answers are correct, but different people require different answers. With preference tuning, you are trying to average somehow; you are asking the data labelers to give you the preferred answer and then you train on that. But at some point, you learn that average preferred answer, and there’s no reason to keep training longer on it because it’s just a style. With RLVR, you let the model solve more and more complex, difficult problems. So I think it makes more sense to allocate more budget long-term to RLVR.
Sebastian Raschka
(01:54:27)
Right now, we are in an RLVR 1.0 phase where it’s still that simple thing where we have a question and answer, but we don’t do anything with the stuff in between. There were multiple research papers, by Google for example, on process reward models that also give scores for the explanation—how correct is the explanation? I think that will be the next thing, let’s say RLVR 2.0 for this year, focusing on the steps between question and answer and how to leverage that information to improve the explanation and accuracy. That’s one angle. And there was a DeepSeek-V3.2 paper where they also had interesting inference scaling.
Sebastian Raschka
(01:55:11)
Well, first they had developed models that grade themselves as a separate model. I think that will be one aspect. And the other, like Nathan mentioned, will be RLVR branching into other domains.
Nathan Lambert
(01:55:23)
The place where people are excited is value functions, which is pretty similar. Process reward models assign how good something is to each intermediate step in a reasoning process, whereas value functions apply value to every token the language model generates. Both of these have been largely unproven in the language modeling and reasoning model era. People are more optimistic about value functions for whatever reason now. I think process reward models were tried a lot more in the pre-o1 era, and a lot of people had headaches with them. Value models have a very deep history in reinforcement learning.
Nathan Lambert
(01:56:06)
They’re one of the first things that were core to deep reinforcement learning existing—training value models. So right now the literature shows people are excited about trying value models, but there’s very little proof in it. And there are negative examples in trying to scale up process reward models.
Nathan Lambert
(01:56:22)
These things don’t always hold in the future. To summarize the scaling: you don’t want to do too much RLHF because of how the signal scales. People have worked on RLHF for years, especially after ChatGPT, but the first release of a reasoning model trained with RLVR, OpenAI’s o1, had a scaling plot where if you increase the training compute logarithmically, you get a linear increase in evaluations. This has been reproduced multiple times; I think DeepSeek had a plot like this. But there’s no scaling law for RLHF where if you log-increase the compute, you get linear performance.
Nathan Lambert
(01:57:02)
In fact, the seminal scaling paper for RLHF is about scaling laws for reward model over-optimization. That's a big line to draw with RLVR and the methods we have now; they will follow this scaling paradigm where you can let the best runs go for an extra 10x and you get performance, but you can't do this with RLHF. That is going to be field-defining. To do the best RLHF you might not need the extra 10 or 100x compute, but to do the best RLVR you do. There's a seminal paper from a Meta internship called "The Art of Scaling Reinforcement Learning Compute for LLMs."
Nathan Lambert
(01:57:47)
Their framework is called ScaleRL. Their incremental experiment was like 10,000 V100 hours, which is thousands or tens of thousands of dollars per experiment, and they do a lot of them. This cost is not accessible to the average academic, which creates a hard equilibrium when trying to figure out how to learn from each community.

Advice for beginners on how to get into AI development & research

Lex Fridman
(01:58:11)
I was wondering if we could take a bit of a tangent and talk about education and learning. If you’re somebody listening to this who’s a smart person interested in programming and interested in AI, I presume building something from scratch is a good beginning. Can you just take me through what you would recommend people do?
Sebastian Raschka
(01:58:32)
I would personally start, like you said, by implementing a simple model from scratch that you can run on your computer. The goal of building a model from scratch is not to have something you use every day for your personal projects. It’s not going to be your personal assistant replacing an existing open-weight model or ChatGPT. It’s to see exactly what goes into the LLM, what exactly comes out of the LLM, and how pre-training works on your own computer. And then you learn about pre-training, supervised fine-tuning, and the attention mechanism.
Sebastian Raschka
(01:59:03)
You get a solid understanding of how things work, but at some point you will reach a limit because smaller models can only do so much. The problem with learning about LLMs at scale is that it’s exponentially more complex to make a larger model because it’s not just that the model becomes larger. You have to think about sharding your parameters across multiple GPUs. Even for the KV cache, there are multiple ways you can implement it. One is just to understand how it works, like a cache you grow step-by-step by concatenating lists, but then that wouldn’t be optimal on GPUs. You would pre-allocate a tensor and then fill it in. But that adds another 20 or 30 lines of code.
Sebastian Raschka
(01:59:45)
And for each thing, you add so much code. I think the trick with the book is basically to understand how the LLM works. It’s not going to be your production-level LLM, but once you have that, you can understand the production-level LLM.
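The two KV-cache strategies Sebastian contrasts can be sketched in plain Python, with lists standing in for GPU tensors. This is a teaching sketch of the idea, not a production implementation:

```python
class GrowingKVCache:
    """Simplest form: append each step's key/value and hand back the
    whole history. Easy to read, but on a GPU repeated concatenation
    means repeated reallocation and copying."""
    def __init__(self):
        self.keys, self.values = [], []

    def update(self, k, v):
        self.keys.append(k)
        self.values.append(v)
        return self.keys, self.values

class PreallocatedKVCache:
    """GPU-friendlier form: reserve max_len slots up front and fill
    them in place, tracking how many entries are valid so far."""
    def __init__(self, max_len):
        self.keys = [None] * max_len
        self.values = [None] * max_len
        self.pos = 0

    def update(self, k, v):
        self.keys[self.pos] = k
        self.values[self.pos] = v
        self.pos += 1
        return self.keys[: self.pos], self.values[: self.pos]

# Both caches expose the same interface to an attention layer.
for cache in (GrowingKVCache(), PreallocatedKVCache(max_len=8)):
    for step in range(3):
        ks, vs = cache.update(f"k{step}", f"v{step}")
    print(ks)  # ['k0', 'k1', 'k2']
```

The behavior is identical; only the memory pattern differs, which is exactly the kind of extra code that piles up when moving from a didactic implementation to an efficient one.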
Lex Fridman
(01:59:56)
So you’re trying to always build an LLM that’s going to fit on one GPU?
Sebastian Raschka
(02:00:00)
Yes. Most of the examples I have fit on one GPU. I have some bonus materials on some MoE models; one or two of them may require multiple GPUs, but the goal is to have it on one GPU. And the beautiful thing is also you can self-verify. It’s almost like RLVR. When you code these from scratch, you can take an existing model from the Hugging Face Transformers library. The Hugging Face Transformers library is great, but if you want to learn about LLMs, I think that’s not the best place to start because the code is so complex. It has to fit so many use cases and some people use it in production. It has to be really sophisticated, so it’s intertwined and hard; it’s not linear to read.
Nathan Lambert
(02:00:39)
It started as a fine-tuning library, and then it grew to be the standard representation of every model architecture and the way it is loaded. Hugging Face is the default place to get a model, and Transformers is the software that enables it, so people can easily load a model and do something basic with it.
Sebastian Raschka
(02:00:56)
And all frontier labs that have open-weight models have a Hugging Face Transformers version of it, from DeepSeek to gpt-oss. That’s the canonical way that you can load them. But again, even the Transformers library is not used in production for inference. People use SGLang or vLLM, and it adds another layer of complexity.
Lex Fridman
(02:01:15)
We should say that the Transformers library has something like 400 models.
Sebastian Raschka
(02:01:19)
So it’s the one library that tries to implement a lot of LLMs, and so you have a huge codebase. It’s massive. It’s—I don’t know, maybe millions— —hundreds of thousands of lines of code. Understanding the part that you want to understand is like finding the needle in the haystack. But what’s beautiful about it is you have a working implementation, so you can work backwards from it. What I would recommend doing is if I want to understand, for example, how OLMo 3 is implemented, I would look at the weights in the model hub and the config file. You can see, “Oh, they used so many layers. They use group query attention.” Then you see all the components in a human-readable 100-line config file. And then you start with your GPT-2 model and add these things.
Sebastian Raschka
(02:02:06)
The cool thing here is you can then load the pre-trained weights and see if they work in your model. You want to match the same output that you get with a Transformers model, and then you can use that basically as a verifiable reward to make your architecture correct. Sometimes it takes me a day. With OLMo 3, the challenge was RoPE for the position embeddings; they had a YaRN extension and there was some custom scaling there. I couldn’t quite match it at first, but in this struggle you kind of understand things. At the end, you know you have it correct because you can unit test it against the reference implementation. I think that’s one of the best ways to learn. Basically, you reverse-engineer something.
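The verify-against-a-reference workflow Sebastian describes can be sketched at toy scale. Here softmax stands in for a real component like a RoPE variant, and a pure-Python comparison stands in for comparing tensors against the Hugging Face Transformers output; the names are illustrative, not any library's API:

```python
import math

def softmax_reference(xs):
    """Stand-in for the trusted library implementation: the
    numerically stable softmax (subtract the max before exp)."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_mine(xs):
    """Your from-scratch version. Matching the reference on the same
    inputs is the 'verifiable reward' for your implementation."""
    exps = [math.exp(x) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def allclose(a, b, tol=1e-6):
    """Elementwise comparison within a tolerance."""
    return len(a) == len(b) and all(abs(x - y) <= tol for x, y in zip(a, b))

logits = [1.0, 2.0, 3.0]
assert allclose(softmax_mine(logits), softmax_reference(logits))
print("outputs match the reference implementation")
```

With real models, the same pattern applies: load the pretrained weights into your architecture, run both models on identical inputs, and assert the logits agree within floating-point tolerance.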
Nathan Lambert
(02:02:51)
I think that is something everyone interested in getting into AI today should do, and that’s why I liked your book. I came to language models from the RL and robotics field, so I never had taken the time to just learn all the fundamentals. The Transformer architecture is as fundamental today as deep learning was in the past, and people need to learn it. I think where a lot of people get overwhelmed is how to apply this to have an impact or find a career path.
Nathan Lambert
(02:03:23)
AI language models make this fundamental stuff so accessible, and people with motivation will learn it. Then it's like, "How do I get shots on goal to contribute to research?" I'm actually fairly optimistic because the field moves so fast that a lot of times the best people don't fully solve a problem because there's a bigger, lower-hanging fruit to solve, so they move on. In my RLHF book, I try to take post-training techniques and describe how they influence the model. It's remarkable how many things people just stop studying.
Nathan Lambert
(02:04:06)
I think people trying to go narrow after doing the fundamentals is good. Reading relevant papers and being engaged in the ecosystem—you actually… The proximity that random people have online to leading researchers is incredible. The anonymous accounts on X in ML are very popular, and no one knows who all these people are. It could just be random people who study this stuff deeply. Especially with AI tools to help you keep digging into things you don’t understand, it’s very useful. There are research areas that might only have three papers you need to read, and then one of the authors will probably email you back.
Nathan Lambert
(02:04:45)
But you have to put in a lot of effort into these emails to show you understand the field. It would take a newcomer weeks of work to truly grasp a very narrow area, but going narrow after the fundamentals is very useful. I became very interested in character training—how you make a model funny, sarcastic, or serious, and what you do to the data to achieve this. A student at Oxford reached out to me and said, “Hey, I’m interested in this,” and I advised him. Now that paper exists. There were maybe only two or three people in the world very interested in that specific topic.
Nathan Lambert
(02:05:25)
He’s a PhD student, which gives you an advantage, but for me, that was a topic where I was waiting for someone to say, “Hey, I have time to spend cycles on this.” I’m sure there are a lot more narrow things where you’re just like, “It doesn’t make sense that there was no answer to this.” There’s so much information coming in that people feel they can’t grab onto anything, but if you actually stick to one area, I think there are a lot of interesting things to learn.
Sebastian Raschka
(02:05:48)
Yeah, I think you can’t try to do it all because it would be very overwhelming and you would burn out. For example, I haven’t kept up with computer vision in a long time; I’ve just focused on LLMs. But coming back to your book, I think it’s a really great resource and a good bang for the buck if you want to learn about RLHF. I wouldn’t just go out there and read raw RLHF papers because you would be spending two years—
Nathan Lambert
(02:06:10)
—and some of them contradict each other. I've just edited the book, and there's hardly a chapter where I didn't have to say, "X papers say one thing and Y papers say another, and we'll see what comes out to be true."
Lex Fridman
(02:06:21)
What are some of the ideas we might have missed in the bigger picture of post-training? To go through the table of contents: first, you did the problem setup, training overview, what are preferences, preference data and the optimization tools, reward modeling, regularization, instruction tuning, rejection sampling, reinforcement learning. Then constitutional AI and AI feedback, reasoning and inference-time scaling, tool use and function calling, synthetic data and distillation, evaluation, and then the open questions section: over-optimization, style and information, product UX, character and post-training. What are some ideas worth mentioning that connect both the educational component and the research component? You mentioned the character training, which is pretty interesting.
Nathan Lambert
(02:07:08)
Character training is interesting because there’s so little out there, but we talked about how people engage with these models. We feel good using them because they’re positive, but that can go too far; it can be too positive. It’s essentially how you change your data or decision-making to make it exactly what you want. OpenAI has this thing called a “model spec,” which is essentially their internal guideline for what they want the model to do, and they publish this to developers. So you can know what is a failure of OpenAI’s training—where they have the intention but haven’t met it yet—versus what is something they actually wanted to do that you just don’t like.
Nathan Lambert
(02:07:46)
That transparency is very nice, but all the methods for curating these documents, and how easy it is to follow them, are not very well known. I think the way the book is designed is that the reinforcement learning chapter is obviously what people want because everybody hears about it with RLVR, and it's the same algorithms and the same math, but you can use it in very different domains. I think the core of RLHF is how messy preferences are. It's essentially a rehash of a paper I wrote years ago, but this is the chapter that tells you why RLHF is never fully solvable, because the way that RL is set up assumes that preferences can be quantified and reduced to single values.
Nathan Lambert
(02:08:33)
I think it relates in the economics literature to the Von Neumann-Morgenstern utility theorem. That is the chapter where all of that philosophical, economic, and psychological context tells you what gets compressed when doing RLHF. Later in the book, you use this RL map to make the number go up. I think that’s why it’ll be very rewarding for people to do research on, because quantifying preferences is something humans have designed the problem around to make them studyable. But there are fundamental debates; for example, in a language model response, you have different things you care about, whether it’s accuracy or style.
Nathan Lambert
(02:09:13)
When you’re collecting the data, they all get compressed into, “I like this more than another.” There’s a lot of research in other areas of the world that goes into how you should actually do this. I think social choice theory is the subfield of economics around how you should aggregate preferences. I went to a workshop that published a white paper on how you can think about using social choice theory for RLHF. I want people who get excited about the math to stumble into this broader context. I also keep a list of all the tech reports of reasoning models that I like. In Chapter 14, where there’s a short summary of RLVR, there’s a gigantic table where I list every single reasoning model that I like. I think in education, a lot of it needs to be, at this point, what I like—
Nathan Lambert
(02:10:08)
—because language models are so good at the math. For example, the famous paper on Direct Preference Optimization, which is a much simpler way of solving the problem than RL—the derivations in the appendix skip steps of math. I tried for this book to redo the derivations and I was like, "What the heck is this log trick that they use?" But when doing it with language models, they just say, "This is the log trick." I don't know if I like that the math is so commoditized. I think some of the struggle in reading this appendix and following the math is good for learning.
Lex Fridman
(02:10:43)
Yeah, we’re returning to this often on the topic of education. You both have brought up the word “struggle” quite a bit. There is value in that. If you’re not struggling as part of this process, you’re not fully following the proper process for learning, I suppose.
Nathan Lambert
(02:11:02)
Some of the providers are starting to work on models for education designed to not give… actually, I haven't used them, but I would guess they're designed to not give all the information at once and make people work for it. I think you could train models to do this and it would be a wonderful contribution. In the book, you had to reevaluate every decision—which is such a great example. I think there's a chance we work on it at AI2, which I think would be so fun.
Sebastian Raschka
(02:11:26)
It makes sense. I did something like that the other day for video games. In my spare time, I like video games with puzzles, like Zelda and Metroid. There’s this new game where I got really stuck. I didn’t want to struggle for two days, so I used an LLM. But I told it, “Please don’t add any spoilers. I’m at this point; what do I have to do next?” You can do the same thing for math where you say, “I’m at this point and I’m getting stuck. Don’t give me the full solution, but what is something I could try?” You kind of carefully probe it.
Sebastian Raschka
(02:12:02)
But the problem is that it requires discipline. A lot of people enjoy math, but there are also a lot of people who need to do it for their homework, and then it’s just a shortcut. We can develop an educational LLM, but the other LLMs are still there, and there’s still a temptation to use them.
Lex Fridman
(02:12:20)
I think a lot of people, especially in college, understand the stuff they're passionate about—they're self-aware about it, and they understand it shouldn't be easy. Like, I think we just have to develop a good taste—we talk about research taste—like a school taste about stuff that you should be struggling on and stuff you shouldn't be struggling on. Which is tricky to know, because sometimes you don't have good long-term vision about what would be actually useful to you in your career. But you have to develop that taste, yeah.
Nathan Lambert
(02:12:51)
I was talking to maybe my fiance or friends about this, and it’s like there’s this brief 10-year window where all of the homework and all the exams could be digital. But before that, everybody had to do all the exams in bluebooks because there was no other way. And now after AI, everybody’s going to need to be in bluebooks and oral exams because everybody could cheat so easily. It’s like this brief generation that had a different education system where everything could be digital, but you still couldn’t cheat. And now it’s just going back. It’s just very funny.
Lex Fridman
(02:13:20)
You mention character training. Just zooming out on a more general topic: for that topic, how much compute was required? And in general, to contribute as a researcher, are there places where not too much compute is required where you can actually contribute as an individual researcher?
Nathan Lambert
(02:13:39)
For the character training thing, I think this research is built on fine-tuning about seven billion parameter models with LoRA, which is like a… Essentially, you’re only fine-tuning a small subset of the weights of the model. I don’t know exactly how many GPU hours that would take.
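The "small subset of the weights" point is easy to quantify. The sketch below counts trainable parameters for a single LoRA-adapted projection; the dimensions and rank are assumed, illustrative values for a roughly 7B-class model, not the specifics of the character-training research:

```python
def lora_trainable_params(d_in, d_out, rank):
    """LoRA freezes the full weight W (d_out x d_in) and trains only two
    small factors: B (d_out x rank) and A (rank x d_in). The effective
    weight at inference is W + B @ A."""
    return d_out * rank + rank * d_in

# Illustrative numbers for one square projection matrix.
d = 4096
full = d * d                                  # ~16.8M frozen parameters
lora = lora_trainable_params(d, d, rank=8)    # 65,536 trainable parameters
print(f"trainable fraction: {lora / full:.4%}")
```

Training well under 1% of each adapted matrix is what makes this kind of fine-tuning feasible on modest hardware, and it is why LoRA shows up so often in academic and hobbyist work.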
Lex Fridman
(02:13:55)
But it’s doable.
Nathan Lambert
(02:13:55)
Not doable for every academic. So the situation for some academics is so dire that the only work you can do is inference, where you have closed models or open models, and you get completions from them and you can look at them and understand the models. And that’s very well-suited to evaluation, where you want to be the best at creating representative problems that the models fail on or show certain abilities, which I think you can break through with. I think that the top-end goal for a researcher working on evaluation, if you want to have career momentum, is that the frontier labs pick up your evaluation. So you don’t need to have every project do this.
Nathan Lambert
(02:14:33)
But if you go from a small university with no compute and you figure out something that Claude struggles with, and then the next Claude model has it in the blog post, there’s your career rocket ship. I think that’s hard, but if you want to scope the maximum possible impact with minimum compute, it’s something like that: get very narrow, and it takes learning where the models are going. So you need to build a tool that tests where Claude 4.5 will fail. If I’m going to start a research project, I need to think about where the models in eight months are going to be struggling.
Lex Fridman
(02:15:05)
But what about developing totally novel ideas?
Nathan Lambert
(02:15:08)
This is a trade-off. I think that if you’re doing a PhD, you could also be like, “It’s too risky to work in language models. I’m going way longer term,” which is: what is the thing that’s going to define language model development in 10 years? But I end up being a person that’s pretty practical. I mean, I went into my PhD where it was like, “I got into Berkeley. Worst case, I get a master’s, and then I go work in tech.” And so I’m very practical about it. The life afforded to people who work at these AI companies, the amount of… OpenAI’s average compensation is over a million dollars in stock a year per employee. For any normal person in the US, getting into this AI lab is transformative for your life. So I’m pretty practical about it—
Nathan Lambert
(02:15:50)
—there’s still a lot of upward mobility working in language models if you’re focused. And looking at the outcomes, look at these jobs. But from a research perspective, for transformative impact and these academic awards, being the next Yann LeCun comes from not caring about language model development very much.
Lex Fridman
(02:16:07)
It’s a big financial sacrifice in that case.
Nathan Lambert
(02:16:09)
So I get to work with some awesome students, and they’re like, “Should I go work at an AI lab?” And I’m like, “You’re getting a PhD at a top school. Are you going to leave to go to a lab?” If you go work at a top lab, I don’t blame you. Don’t go work at some random startup that might go to zero. But if you’re going to OpenAI, I think it could be worth leaving a PhD for.
Lex Fridman
(02:16:30)
Let’s more rigorously think through this. Where would you give a recommendation for people to do a research contribution? The options are academia—get a PhD, spend five years publishing, though compute resources are constrained. There are research labs that are more focused on open-weight models, so working there. Or closed frontier labs. OpenAI, Anthropic, xAI, and so on.
Nathan Lambert
(02:17:04)
The two gradients are: the more closed, the more money you tend to get, but also you get less credit. In terms of building a portfolio of things that you’ve done, it’s very clear what you have done as an academic. Versus if you are going to trade this fairly reasonable progression for being a cog in the machine, which could also be very fun. I think they’re very different career paths. But the opportunity cost for being a researcher is very high because PhD students are paid essentially nothing. I think it ends up rewarding people that have a fairly stable safety net and they realize they can operate in the long term, doing very interesting work and getting a very interesting job.
Nathan Lambert
(02:17:50)
So it is a privileged position to be like, “I’m going to see out my PhD and figure it out after because I want to do this.” And at the same time, the academic ecosystem is getting bombarded by funding getting cut and stuff. There are just so many different trade-offs where I understand plenty of people that are like, “I don’t enjoy it. I can’t deal with this funding search. My grant got cut for no reason by the government,” or, “I don’t know what’s going to happen.” So I think there’s a lot of uncertainty and trade-offs that, in my opinion, favor just taking the well-paying job with meaningful impact. It’s not like you’re getting paid to sit around at OpenAI. You’re building the cutting edge of things that are changing millions of people’s relationship to tech.
Lex Fridman
(02:18:34)
But publication-wise, they’re being more secretive, increasingly so. So you’re publishing less and less. You are having a positive impact at scale, but you’re a cog in the machine.
Sebastian Raschka
(02:18:47)
I think, honestly, it hasn’t changed that much. I have been in academia; I’m not in academia anymore. At the same time, I wouldn’t want to miss my time in academia. But what I wanted to say before I get to that part, I think it hasn’t changed that much. I was using AI or machine learning methods for applications in computational biology with collaborators, and a lot of people went from academia directly to Google. I think it’s the same thing. Back then, professors were sad that their students went into industry because they couldn’t carry on their legacy in that sense. I think it’s the same thing. It hasn’t changed that much. The only thing that has changed is the scale.
Sebastian Raschka
(02:19:32)
But, you know, cool stuff was always developed in industry that was closed. You couldn’t talk about it. I think the difference now is your preference. Do you like to talk about your work and publish, or are you more in a closed lab? That’s one difference—the compensation, of course. But it’s always been like that. So it really depends on where you feel comfortable. And also, nothing is forever. The only thing right now is there’s a third option, which is starting a startup. There are a lot of people doing startups. Very risky move, but it’s a high-risk, high-reward type of situation, whereas joining an industry lab is pretty safe and offers upward mobility.
Sebastian Raschka
(02:20:16)
Honestly, I think once you have been at an industry lab, it will be easier to find future jobs. But then again, it’s like, how much do you enjoy the team and working on proprietary things versus how do you like the publishing work? I mean, publishing is stressful. Acceptance rates at conferences can be arbitrary and very frustrating, but it’s also high reward. If you have a paper published, you feel good because your name is on there. You have a high accomplishment.
Nathan Lambert
(02:20:48)
I feel like my friends who are professors seem on average happier than my friends who work at a frontier lab, to be totally honest, because there’s just a grounding. And the frontier labs definitely do this 9/9/6, which is essentially shorthand for working all the time.

Work culture in AI (72+ hour weeks)

Lex Fridman
(02:21:03)
Can you describe 9/9/6 as a culture? I believe you could say it was invented in China and adopted in Silicon Valley. What’s 9/9/6? It’s 9:00 AM to 9:00 PM—
Sebastian Raschka
(02:21:14)
six days a week.
Lex Fridman
(02:21:15)
Six days a week. What is that, 72 hours? Is this basically the standard in AI companies in Silicon Valley? More and more this kind of grind mindset.
Sebastian Raschka
(02:21:26)
Yeah, I mean, maybe not exactly like that, but I think there is a trend towards it. And it’s interesting—I think it almost flipped because when I was in academia, I felt like that because as a professor, you had to write grants, you had to teach, and you had to do your research. It’s like three jobs in one, and it is more than a full-time job if you want to be successful. And I feel like now, like Nathan just said, the professors in comparison to a lab have even less pressure or workload than at a frontier lab because—
Nathan Lambert
(02:21:57)
I think they work a lot. They’re just so fulfilled by working with students and having a constant runway of mentorship and a mission that is very people-oriented. I think in an era when things are moving very fast and are very chaotic, that’s very rewarding to people.
Sebastian Raschka
(02:22:11)
Yeah, and I think at a startup, there’s this pressure. You have to make it. It is really important that people put in the time, but it is really hard because you have to deliver constantly. I’ve been at a startup. I had a good time, but I don’t know if I could do it forever. It’s an interesting pace and it’s exactly like we talked about in the beginning. These models are leapfrogging each other, and they are just constantly trying to take the next step compared to their competitors. It’s just ruthless right now.
Nathan Lambert
(02:22:42)
I think this leapfrogging nature and having multiple players is actually an underrated driver of language modeling progress where competition is so deeply ingrained. These companies have intentionally created very strong cultures. For example, Anthropic is known to be culturally deeply committed and organized. We hear so little from them, and everybody at Anthropic seems very aligned. Being in a culture that is super tight and having this competitive dynamic is a thing that’s going to make you work hard and create things that are better.
Nathan Lambert
(02:23:20)
But that comes at the cost of human capital. You can only do this for so long, and people are definitely burning out. I wrote a post on burnout as I’ve treaded in and out of this myself, especially trying to be a manager while doing full model training. It’s a crazy job. In the book Apple in China, Patrick McGee talked about how hard the Apple engineers worked to set up the supply chains in China. He mentioned they had “saving marriage” programs, and he said in a podcast that people died from this level of working hard. It’s a perfect environment for creating progress at human expense. That human expense is the 996 that we started this with, where people do really grind.
Sebastian Raschka
(02:24:08)
I also read this book. I think they had a code word for if someone had to go home to spend time with their family to save the marriage. Then the colleagues said, “Okay, this is red alert for this situation. We have to let that person go home this weekend.” But at the same time, I don’t think they were forced to work. They were so passionate about the product that you get into that mindset. I had that sometimes as an academic, and as an independent person. I overwork, and it’s unhealthy. I had back issues and neck issues because I did not take the breaks that I should have. But it’s not because anyone forced me; it’s because I wanted to work because it’s exciting stuff.
Nathan Lambert
(02:24:46)
That’s what OpenAI and Anthropic are like. They want to do this work.

Silicon Valley bubble

Lex Fridman
(02:24:49)
Yeah, but there’s also a feeling of fervor that’s building, especially in Silicon Valley, aligned with the scaling laws idea. There’s this hype where the world will be transformed on a timescale of weeks and you want to be at the center of it. I have the great fortune of having conversations with a wide variety of human beings, and I get to see all these bubbles and echo chambers across the world. It’s fascinating to see how we humans form them. I think it’s fair to say that Silicon Valley is a kind of echo chamber, a kind of silo and bubble. I think bubbles are actually really useful and effective. It’s not necessarily a negative thing because you can be ultra-productive.
Lex Fridman
(02:25:34)
It could be the Steve Jobs reality distortion field, because you just convince each other the breakthroughs are imminent, and by convincing each other of that, you make the breakthroughs imminent.
Nathan Lambert
(02:25:48)
Byrne Hobart wrote a book classifying bubbles. One of them is financial bubbles, which involve speculation and are bad, and the other is effectively for build-outs, because it pushes people to build. I do think AI is in this, but I worry about it transitioning to a financial bubble.
Lex Fridman
(02:26:05)
Yeah, but also in the space of ideas, that bubble creates a reality distortion field. That means you are deviating from reality, and if you go too far while also working 996, you might miss some fundamental aspects of the human experience. This is a common problem in Silicon Valley. It’s a very specific geographic area. You might not understand the Midwest perspective or the experience of all the other different humans in the United States and across the world. You speak a certain way to each other and convince each other of a certain thing, and that can get you into real trouble.
Lex Fridman
(02:26:47)
Whether AI is a big success and becomes a powerful technology or it’s not, in either trajectory you can get yourself into trouble. So you have to consider all of that. Here you are, a young person trying to decide what you want to do with your life.
Nathan Lambert
(02:27:02)
The thing that is… I don’t even really understand this, but the SF AI memes have gotten to the point where the “permanent underclass” was one of them. This was the idea that the last six months of 2025 was the only time to build durable value in an AI startup or model. Otherwise, all the value will be captured by existing companies and you will therefore be poor. That’s an example of the SF thing that goes so far. I still think for young people who are really passionate about having an impact in AI, being physically in SF is the most likely place where you’re going to do this. But it has trade-offs.
Lex Fridman
(02:27:41)
I think SF is an incredible place, but there is a bit of a bubble. And if you go into that bubble, which is extremely valuable, just get out also. Read history books, read literature, and visit other places in the world. Twitter and Substack are not the entire world.
Nathan Lambert
(02:28:01)
I think I would say, one of the people I worked with is moving to SF, and I need to get him a copy of Season of the Witch. It’s a history of SF from 1960 to 1985 that goes through the hippie revolution, the culture emerging in the city, the HIV/AIDS crisis, and other things. That is so recent, with so much turmoil and hurt, but also love in SF. No one knows about this. It’s a great book, Season of the Witch; I recommend it. A bunch of my SF friends who do get out recommended it to me. I lived there and I didn’t appreciate this context, and it’s just so recent.

Text diffusion models and other new research directions

Lex Fridman
(02:28:46)
Yeah. Okay, let’s… we talked a lot about many things, certainly about what was exciting last year. But this year, one of the things you guys mentioned that’s exciting is the scaling of text diffusion models and just a different exploration of text diffusion. Can you talk about what that is and what possibilities it holds? So, different kinds of approaches than the current LMs?
Sebastian Raschka
(02:29:13)
Yeah, so we talked a lot about the transformer architecture and the autoregressive transformer architecture specifically, like GPT. And it doesn’t mean no one else is working on anything else. People are always on the lookout for the next big thing, because I think it would be almost stupid not to. Sure, right now the transformer architecture is the thing and it works best, but it’s always a good idea to not put all your eggs into one basket. People are developing alternatives to the autoregressive transformer. One of them would be, for example, text diffusion models.
Sebastian Raschka
(02:29:49)
And listeners may know diffusion models from image generation; Stable Diffusion popularized them. Back then, people used GANs, Generative Adversarial Networks. And then there was this diffusion process where you iteratively de-noise an image, and that resulted in really good quality images over time. Other companies built their own diffusion models. And now people are like, “Okay, can we try this also for text?” It doesn’t make intuitive sense at first because text is not something continuous like a pixel that we can differentiate. It’s discrete, so how do we implement that de-noising process?
Sebastian Raschka
(02:30:25)
But it’s kind of similar to the BERT models by Google. When you go back to the original transformer, there were the encoder and the decoder. The decoder is what we are using right now in GPT and so forth. The encoder is more like a parallel technique where you have multiple tokens that you fill in in parallel. GPT models do autoregressive completion one token at a time. In BERT models, you have a sentence that has gaps—you mask them out—and then one iteration is filling in those gaps.
Sebastian Raschka
(02:31:02)
And text diffusion is kind of like that, where you are starting with some random text, and then you are filling in the missing parts or refining them iteratively over multiple iterations. The cool thing here is that this can do multiple tokens at the same time, so it has the promise of being more efficient. Now, the trade-off is, of course, how good is the quality? It might be faster, but the more de-noising steps you do, the better the text becomes. People are trying to see if that is a valid alternative to the autoregressive model in terms of giving you the same quality for less compute.
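The parallel fill-in idea can be sketched with a toy loop. The oracle lookup into `target` below is an invented stand-in for a real denoiser, which would predict every masked token each step and commit the most confident ones:

```python
import math

# Toy sketch of iterative de-noising for text: start fully masked, "predict"
# all masked positions in parallel each step, and commit a fraction per step.
# The oracle lookup into `target` stands in for a real model's predictions.
target = "the cat sat on the mat".split()
MASK = "_"
seq = [MASK] * len(target)   # a fully masked sequence is the "noise" for text

steps = 3
for step in range(steps):
    masked = [i for i, tok in enumerate(seq) if tok == MASK]
    if not masked:
        break
    proposals = {i: target[i] for i in masked}          # parallel prediction
    n_commit = math.ceil(len(masked) / (steps - step))  # finish by the last step
    for i in masked[:n_commit]:
        seq[i] = proposals[i]
    print(" ".join(seq))
```

The key contrast with an autoregressive model is that each step touches many positions at once, so the number of model calls is the number of de-noising steps, not the number of tokens.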
Sebastian Raschka
(02:31:46)
Right now, there are papers that suggest if you want to get the same quality, you have to crank up the de-noising steps and then you end up spending the same compute you would spend on an autoregressive model. The other downside is that while it’s parallel, some tasks are not. For reasoning tasks or tool use where you have to ask a code interpreter to give you an intermediate result, it is kind of tricky with diffusion models. So there are some hybrids. But the main idea is how we can parallelize it. It’s an interesting avenue. I think right now there are mostly research models out there, like LLaDA and some other ones.
Sebastian Raschka
(02:32:24)
I saw some by startups, some deployed models, but there is no big diffusion model at scale yet on the level of Gemini or ChatGPT. But there was an announcement by Google where they said they are launching Gemini Diffusion, and they put it into context of their Nano 2 model. They said for the same quality on most benchmarks, we can generate things much faster. I don’t think the text diffusion model is going to replace autoregressive LLMs, but it will be something for quick, cheap, at-scale tasks. Maybe the free tier in the future will be something like that.
Nathan Lambert
(02:33:04)
I think there are a couple of examples where it’s actually started to be used. To paint an example of why this is so much better: when a model like GPT-5 takes time to respond, it’s generating one token at a time. This diffusion idea is essentially generating all of those tokens in the completion in one batch, which is why it could be way faster.
Nathan Lambert
(02:33:27)
The startups I’m hearing are code startups where you have a codebase and somebody is effectively vibe coding. They say, “Make this change,” and a code diff is essentially a huge reply from the model. It doesn’t have to have that much external context, and you can get it really fast by using these diffusion models. They use text diffusion to generate really long diffs because doing it with an autoregressive model would take minutes, and that time causes a lot of churn for a user-facing product. Every second, you lose users. So I think that it’s going to be this thing where it’s going to-
Nathan Lambert
(02:34:02)
-grow and have some applications, but I actually thought that different types of models were going to be used for different things sooner than they have been. I think the tool use point is the one that’s stopping them from being most general purpose because, with something like Claude Code or ChatGPT with search, the autoregressive chain is interrupted with an external tool, and I don’t know how to do that with the diffusion setup.

Tool use

Lex Fridman
(02:34:28)
So what’s the future of tool use this year and in the coming years? Do you think there’s going to be a lot of developments there, and how that’s integrated into the entire stack?
Sebastian Raschka
(02:34:37)
I do think right now it’s mostly on the proprietary LLM side, but we will see more of that in open-source tooling. It is a huge unlock because then you can really outsource certain tasks from just memorization to actual computation—you know, instead of having the LLM memorize what is 23 plus 5, just use a calculator.
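The calculator example can be sketched as a minimal tool-use loop. Here, `fake_model` and the `CALC(...)` tag are invented stand-ins for a real LLM and its actual tool-call format:

```python
import re

# Minimal sketch of the tool-use loop: the model emits a tool call, we run
# the tool, and the result is fed back as an observation. fake_model and the
# CALC(...) tag format are invented stand-ins for a real LLM and its syntax.
def fake_model(prompt):
    if "Observation:" in prompt:
        return "The answer is 28."
    return "I should not do this from memory. CALC(23 + 5)"

def run_calculator(expr):
    a, op, b = expr.split()
    return str(int(a) + int(b)) if op == "+" else "unsupported"

prompt = "What is 23 plus 5?"
reply = fake_model(prompt)
match = re.search(r"CALC\((.+?)\)", reply)
if match:
    result = run_calculator(match.group(1))           # "28"
    reply = fake_model(prompt + f"\nObservation: {result}")
print(reply)  # The answer is 28.
```

Real systems do exactly this loop, just with structured tool schemas instead of a regex, and possibly many tool calls per answer.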
Lex Fridman
(02:34:58)
So do you think that can help solve hallucinations?
Sebastian Raschka
(02:35:01)
Not solve it, but reduce it. Still, the LLM needs to know when to ask for a tool call. And second, it doesn’t mean the internet is always correct. You can do a web search for who won the World Cup in 1998, but it still needs to find the right website and get the right information. You can still go to the incorrect website and get incorrect information. I don’t think it will fully solve it, but it is improving. There was another cool paper earlier this year—I think it was December 31st, so not technically 2026, but close—on the recursive language model.
Sebastian Raschka
(02:35:43)
That’s a cool idea to take this even a bit further. Nathan, you mentioned earlier it’s harder to do cool research in academia because of the compute budget. If I recall correctly, they did everything with GPT-5, so they didn’t even use local models. But the idea is, for a long-context task, instead of having the LLM solve all of it in one shot or in a chain, you break it down into sub-tasks. You have the LLM decide what is a good sub-task and then recursively call an LLM to solve that.
Sebastian Raschka
(02:36:16)
And then adding tools—you know, each sub-task maybe goes to the web and gathers information, and then you pull it all together at the end. I think there’s going to be a lot of unlock using things like that where you don’t necessarily improve the LLM itself, you improve how the LLM is used and what it can use. One downside right now with tool use is you have to give the LLM permission to use tools. That will take some trust, especially if you want to unlock things like having an LLM answer emails for you, or just sort them. I don’t know if I would today give an LLM access to my emails, right? I mean, this is a huge risk.
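The recursive idea can be sketched in a few lines. `llm_call` is a placeholder for a real model API, and the fixed chunk size is a simplification of how the paper picks sub-tasks:

```python
# Sketch of the recursive-LM idea: split a long context into chunks, solve
# each with a (stand-in) LLM call, then aggregate the sub-answers. llm_call
# is a placeholder for a real model API; the fixed chunk is a simplification.
def llm_call(task, context):
    return f"summary({len(context)} chars)"  # a real call would hit a model here

def recursive_solve(task, context, chunk=1000):
    if len(context) <= chunk:
        return llm_call(task, context)
    parts = [context[i:i + chunk] for i in range(0, len(context), chunk)]
    sub_answers = [recursive_solve(task, p, chunk) for p in parts]
    return llm_call("aggregate: " + task, "\n".join(sub_answers))

answer = recursive_solve("summarize", "x" * 3500)
print(answer)
```

No single call ever sees more than `chunk` characters, which is the memory saving; the trade-off is extra calls and the quality of the aggregation step.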
Nathan Lambert
(02:37:03)
I think there’s one last point on the tool use thing. You hinted at this, and we’ve both come at this in our own ways: open versus closed models use tools in very different ways. With open models, people go to Hugging Face and download the model, and then the person’s going to be like, “What tool do I want?” Maybe X.ai is my preferred search provider, but someone else might care for a different search startup. When you release a model, it needs to be useful for multiple tools, which is really hard because you’re making a general reasoning engine, which is actually what gpt-oss-120b is good for.
Nathan Lambert
(02:37:36)
But on the closed models, you’re deeply integrating the specific tool into your experience. I think that open models will struggle to replicate some of the things that I like to do with closed models, where you can reference a mix of public and private information. Something that I keep trying every three to six months is Codex on the web, which is just prompting a model to make an update to some GitHub repository that I have.
Nathan Lambert
(02:38:01)
That sort of secure cloud environment is just so nice for sending it off to do this thing and then come back to me. This will probably help define some of the local, open, and closed niches. Because there was such a rush to get tool use working, the open models were on the back foot, which is kind of inevitable. There are so many resources in these frontier labs, but it will be fun when the open models solve this because it’s going to necessitate a more flexible model that might work with this recursive idea to be an orchestrator. Hopefully, necessity drives innovation there.

Continual learning

Lex Fridman
(02:38:45)
So, continual learning—this is a longstanding topic and an important problem. I think that increases in importance as the cost of training models goes up. So can you explain what continual learning is and how important it might be this year and in the coming years to make progress?
Nathan Lambert
(02:39:03)
This relates a lot to this kind of SF zeitgeist of: what is AGI, Artificial General Intelligence, and what is ASI, Artificial Superintelligence? What are the language models that we have today capable of doing? I think language models can solve a lot of tasks, but a key milestone for the AI community is when AI can replace any remote worker, taking in information and solving digital tasks. The limitation is that a language model will not learn from feedback the same way an employee does. If you hire an editor, they might mess up, but you will tell them, and they don’t do it again.
Nathan Lambert
(02:39:43)
But language models don’t have this ability to modify themselves and learn very quickly. The idea is, if we are going to get to something that is a true, general adaptable intelligence that can go into any remote work scenario, it needs to be able to learn quickly from feedback and on-the-job learning. I’m personally more bullish on language models being able to just provide very good context. You can write extensive documents where you say, “I have all this information. Here are all the blog posts I’ve ever written. I like this type of writing; my voice is based on this.” But a lot of people don’t provide this to models.
Nathan Lambert
(02:40:24)
The agentic models are just starting. So it’s this kind of trade-off: do we need to update the weights of this model with this continual learning thing to make them learn fast? Or, the counterargument is we just need to provide them with more context and information, and they will have the appearance of learning fast by just having a lot of context and being very smart.
Lex Fridman
(02:40:43)
So we should mention the terminology here. Continual learning refers to changing the weights continuously so that the model adapts and adjusts based on the new incoming information, and does so continually, rapidly, and frequently. And then the thing you mentioned on the other side of it is generally referred to as in-context learning. As you learn stuff, there’s a huge context window. You can just keep loading it with extra information every time you prompt the system, which I think both can legitimately be seen as learning. It’s just a different place where you’re doing the learning.
Sebastian Raschka
(02:41:24)
I think, to be honest with you, continual learning—the updating of weights—we already have that in different flavors. I think the distinction here is: do you do that on a personalized custom model for each person, or do you do it on a global model scale? And I think we have that already with going from GPT-5 to 5.1 and 5.2. It’s maybe not immediate, but it is like a quick curated update where there was feedback by the community on things they couldn’t do. They updated the weights, released the next model, and so forth. So it is kind of a flavor of that. Another even finer-grained example is RLVR; you run it, it updates.
Sebastian Raschka
(02:42:08)
The problem is you can’t just do that for each person because it would be too expensive to update the weights for each person. Even at OpenAI scale, building the data centers, it would be too expensive. I think that is only feasible once you have something on the device where the cost is on the consumer. Like what Apple tried to do with the Apple Intelligence models, putting them on the phone so they learn from the experience.
Lex Fridman
(02:42:33)
A bit of a related topic, but this kind of, maybe anthropomorphized, term: memory. What are the different ideas for mechanisms to add memory to these systems, which we’re increasingly seeing? Especially personalized memory?
Sebastian Raschka
(02:42:49)
Right now, it’s mostly like context—stuffing things into the context and then just recalling that. But again, it’s expensive because even if you cache it, you spend tokens on that. And the second one is you can only do so much. I think it’s more like a preference or style. A lot of people do that when they solve math problems. You can add previous knowledge, but you also give it certain preference prompts, like “do what I preferred last time.” But it doesn’t unlock new capabilities. For that, one thing people still use is LoRA adapters.
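The context-stuffing kind of memory can be sketched in a few lines. This is a generic illustration, not any particular product’s implementation; real systems add retrieval, caching, and summarization on top:

```python
# Minimal sketch of context-based "memory": store facts about the user and
# stuff them into the prompt on every request. Purely illustrative; real
# systems retrieve only the relevant facts and cache the shared prefix.
memory = []

def remember(fact):
    memory.append(fact)

def build_prompt(user_msg):
    notes = "\n".join(f"- {m}" for m in memory)
    return f"Known about this user:\n{notes}\n\nUser: {user_msg}"

remember("prefers concise answers")
remember("is studying linear algebra")
prompt = build_prompt("explain eigenvalues")
print(prompt)
```

The cost point above is visible here: every remembered fact is re-sent as tokens on every call, which is why this gives preference and style but not new capabilities.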
Sebastian Raschka
(02:43:32)
These are basically two smaller weight matrices that you have in parallel, as an overlay, like a delta, instead of updating the whole weight matrix. But you can only do that to some extent, and then again, it is economics. There were also papers showing, for example, that LoRA learns less but forgets less. There’s no free lunch. If you want to learn more, you need to use more weights, but it gets more expensive. And if you learn more, you forget more; you have to find that Goldilocks zone.

Long context

Lex Fridman
(02:44:04)
We haven’t really mentioned it much, but implied in this discussion is context length as well. Are there a lot of innovations possible there?
Nathan Lambert
(02:44:13)
I think the colloquially accepted thing is that it’s a compute and data problem. Sometimes there are small architecture things, like attention variants. We talked about hybrid attention models, which is essentially if you have what looks like a state space model within your transformer. Those are better suited because you have to spend less compute to model the furthest along token. But those aren’t free because they have to be accompanied by a lot of compute or the right data. How many sequences of 100,000 tokens do you have in the world, and where do you get these? It just ends up being pretty expensive to scale them.
Nathan Lambert
(02:44:56)
So we’ve gotten pretty quickly to a million tokens of input context length. And I would expect it to keep increasing and get to 2 million or 5 million this year, but I don’t expect it to go to, like, 100 million. That would be a true breakthrough, and I think those breakthroughs are possible. I think of the continual learning thing as a research problem where there could be a breakthrough that makes transformers work way better at this and it’s cheap. These things could happen with so much scientific attention. But turning the crank, it’ll be consistent increases over time.
Sebastian Raschka
(02:45:27)
I think also looking at the extremes, there’s no free lunch. One extreme to make it cheap is to have, let’s say, an RNN that has a single state where you save everything from the previous stuff. It’s a specific fixed-size thing, so you never really grow the memory. You are stuffing everything into one state, but then the longer the context gets, the more information you forget because you can’t compress everything into one state. Then on the other hand, you have the transformers, which try to remember every token. That is great if you want to look up specific information, but very expensive because you have the KV cache and the dot product that grow.
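The fixed-state-versus-KV-cache trade-off can be put in rough numbers. The layer counts and dimensions below are illustrative, not taken from any specific model:

```python
# Rough numbers for the trade-off above: a transformer's KV cache grows
# linearly with context length, while an RNN/state-space layer keeps a
# fixed-size state. Layer counts and dimensions here are illustrative.
def kv_cache_bytes(n_tokens, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_value=2):
    # keys + values, for every layer, for every token
    return n_tokens * n_layers * 2 * n_kv_heads * head_dim * bytes_per_value

for n in (8_000, 128_000, 1_000_000):
    gb = kv_cache_bytes(n) / 1e9
    print(f"{n:>9} tokens -> {gb:.1f} GB of KV cache")
# An RNN-style recurrent state stays constant no matter how long the input is,
# at the cost of compressing the whole history into that one state.
```

At a million tokens the cache for this hypothetical model is over 100 GB, which is why hybrid designs that replace some attention layers with compressed-state layers are attractive.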
Sebastian Raschka
(02:46:06)
But then, like you said, the Mamba layers kind of have the same problem. Like an RNN, you try to compress everything into one state, and you’re a bit more selective there. I think it’s like this Goldilocks zone again with NVIDIA Nemotron 3; they found a good ratio of how many attention layers you need for the global information where everything is accessible compared to having these compressed states. I think we will scale more by finding better ratios in that Goldilocks zone between making it cheap enough to run and making it powerful enough to be useful.
Sebastian Raschka
(02:46:43)
And one more plug here: the recursive language model paper is one of the papers that tries to address the long context thing. What they found is, essentially, instead of stuffing everything into this long context, if you break it up into multiple smaller tasks, you save memory and can actually get better accuracy than having the LLM try everything all at once. It’s a new paradigm; we will see if there are other flavors of that. I think we will still make improvements on long context, but like Nathan said, the problem is for pre-training itself, we don’t have as many long-context documents as other documents. So it’s harder to study basically how LLMs behave on that level.
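As a loose gesture at that divide-and-conquer shape (this is not the paper’s actual algorithm; `toy_llm`, the needle document, and the chunking scheme are invented for illustration):

```python
def toy_llm(prompt):
    # Stand-in for a model call: "answers" by returning the first line
    # containing the marker NEEDLE, or an empty string if there is none.
    return next((line for line in prompt.splitlines() if "NEEDLE" in line), "")

def answer_over_long_context(question, document, llm, chunk_size=4000):
    # Instead of one giant context, query each chunk separately, then
    # combine the short partial answers in a final, much smaller call.
    chunks = [document[i:i + chunk_size] for i in range(0, len(document), chunk_size)]
    partial = [llm(question + "\n---\n" + c) for c in chunks]
    return llm(question + "\n---\n" + "\n".join(partial))

doc = "filler\n" * 1000 + "NEEDLE: 42\n" + "filler\n" * 1000
print(answer_over_long_context("find the needle", doc, toy_llm))  # NEEDLE: 42
```

No single call ever sees the full ~14K-character document, which is the memory saving being described.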
Nathan Lambert
(02:47:31)
There are some rules of thumb where, essentially, you pre-train a language model—like OLMo, we pre-trained at an 8K context length and then extended to 32K with training. There’s a rule of thumb where doubling the training context length takes about 2X compute, and then you can normally 2 to 4X the context length again. I think a lot of it ends up being compute-bound at pre-training. Everyone talks about this big increase in compute for the top labs this year, and that should reflect in some longer context windows.
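As back-of-envelope arithmetic only, that rule of thumb can be written out like this (the 2x-per-doubling figure is the rough number mentioned, not a measured cost):

```python
def extension_compute(base_ctx, target_ctx):
    """Rough relative compute to extend the training context length from
    base_ctx to target_ctx, assuming ~2x compute per doubling of context."""
    doublings = 0
    ctx = base_ctx
    while ctx < target_ctx:
        ctx *= 2
        doublings += 1
    return 2 ** doublings

# The OLMo-style jump from an 8K trained context to 32K is two doublings:
print(extension_compute(8_000, 32_000))  # 4 -> roughly 4x the compute
```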
Nathan Lambert
(02:48:02)
But I think on the post-training side, there’s some more interesting things. As we have agents, the agents are going to manage this context on their own. Now people who use Claude Code a lot dread the compaction, which is when Claude takes its entire 100,000 tokens of work and compacts it into a bulleted list. But what the next models will do—I’m sure people are already working on this—is the model can control when it compacts and how. So you can essentially train your RL algorithm where compaction is an action,
Nathan Lambert
(02:48:30)
where it shortens the history. Then the problem formulation will be, “I want to keep the maximum evaluation scores while the model compacts its history to the minimum length.” Because then you have the minimum amount of tokens that you need to do this kind of compounding auto-regressive prediction. There are actually pretty nice problem setups in this where these agentic models learn to use their context in a different way than just plowing forward.
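A toy sketch of that problem setup, where compacting is just another action and the reward trades useful work against context length (the numbers, the `summarize` stand-in, and the “environment” here are all invented for illustration, not any lab’s actual formulation):

```python
def summarize(history):
    # Hypothetical compaction: keep only the last two steps, standing in
    # for an LLM condensing its work so far into a short summary.
    return history[-2:]

def run_episode(policy, steps=50):
    history, total_reward = [], 0.0
    for t in range(steps):
        action = policy(t, history)
        if action == "compact":
            history = summarize(history)
            total_reward -= 0.01             # small cost for compacting
        else:
            history.append(action)
            total_reward += 1.0              # reward for useful work
        total_reward -= 0.05 * len(history)  # pressure toward a short context
    return total_reward

# A policy that compacts once its history gets long beats one that never does:
never = run_episode(lambda t, h: "work")
compacting = run_episode(lambda t, h: "compact" if len(h) >= 5 else "work")
print(compacting > never)  # True
```

The per-step penalty on `len(history)` is the “minimum length” pressure in the objective; RL then decides when paying the compaction cost is worth it.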
Sebastian Raschka
(02:48:56)
One interesting recent example would be DeepSeek-V3.2, where they had a sparse attention mechanism with a very efficient, small, lightweight indexer. Instead of attending to all the tokens, it selects which tokens it actually needs. It almost comes back to the original idea of attention where you are selective, but attention is always on; you have maybe zero weight on some of them, but you use them all. But they are even more like, “Okay, let’s just mask that out or not even do that.” And even with sliding window attention in OLMo, that is also kind of like that idea. You have that rolling window where you keep it fixed, because you don’t need everything all the time.
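The sliding-window part, at least, is easy to write down as an attention mask (a generic sketch of the idea, not OLMo’s or DeepSeek’s actual implementation):

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    """Causal mask where each query attends only to the last `window` keys."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    return (j <= i) & (i - j < window)

mask = sliding_window_mask(6, 3)
# Token 5 attends only to positions 3, 4, 5, not the whole prefix:
print(mask[5].astype(int))  # [0 0 0 1 1 1]
```

Because each row has at most `window` true entries, the attention cost per token is fixed rather than growing with the full context.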
Sebastian Raschka
(02:49:34)
Occasionally, in some layers you might, but it’s wasteful. But right now, I think if you use everything, you’re on the safe side; it gives you the best bang for the buck because you never miss information. And right now, I think this year will also be the year of figuring out, like you said, how to be smarter about that. Right now people want to have the next state-of-the-art, and the state-of-the-art happens to be the brute force, expensive thing. Once you have that, like you said, you want to keep that accuracy but see how we can do that cheaper now using tricks.
Nathan Lambert
(02:50:07)
Yeah, all this scaling thing. Like the reason we get the Claude 4.5 Sonnet model first is because you can train it faster and you’re not hitting these compute walls as soon. They can just try a lot more things and get the model out faster, even though the bigger model is actually better.

Robotics

Sebastian Raschka
(02:50:22)
I think we should say that there’s a lot of exciting stuff going on in the AI space. My mind has recently been really focused on robotics, yet today we’ve hardly talked about robotics at all. There’s a lot of stuff on image generation and video generation. I think it’s fair to say that the most exciting research work in terms of intensity and fervor is in the LLM space, which is why I think it’s justified for us to have focused on LLMs in this discussion. But it’d be nice to bring in certain things that might be useful. For example, world models—there’s growing excitement about that. Do you think there will be any use in this coming year for world models in the LLM space?
Sebastian Raschka
(02:51:08)
Also with LLMs, what’s interesting here is that if we unlock more LLM capabilities, it automatically unlocks all the other fields, because it makes progress faster. You know, a lot of researchers and engineers use LLMs for coding, so even if they work on robotics, optimizing the LLMs that help with coding pays off. But then yes, world models are interesting. It’s basically where you have the model run a simulation of the world—like a little toy version of the real thing—which can unlock capabilities, like generating data the LLM is not aware of. It can simulate things. I think LLMs happen to work well by pre-training and doing next-token prediction, but we could do this in a more sophisticated way.
Sebastian Raschka
(02:52:05)
There was a paper, I think by Meta, called “Code World Model.” They basically apply the concept of world models to LLMs where, instead of just having next-token prediction and verifiable rewards checking the answer correctness, they also make sure the intermediate variables are correct. The model is basically learning a code environment. I think this makes a lot of sense; it’s just expensive to do. But it is making things more sophisticated by modeling the whole process, not just the result, and that can add more value.
Sebastian Raschka
(02:52:51)
I remember when I was a grad student, there’s a competition called CASP where they do protein structure prediction. They predict the structure of a protein that is not solved yet. In a sense, this is actually great, and I think we need something like that for LLMs also, where you do the benchmark but no one knows the solution until someone reveals it after the fact. When AlphaFold came out, it crushed this benchmark. I mean there were multiple iterations, but I remember the first one explicitly modeled the physical interactions and the physics of the molecule.
Sebastian Raschka
(02:53:34)
Also, constraints like impossible angles. Then in the next version, I think they got rid of this and just used brute force, scaling it up. I think with LLMs, we are currently in this brute-force scaling because it just happens to work, but I do think at some point it might make sense to bring back this approach. I think with world models, that might be actually quite cool. And of course, for robotics, that is completely related to LLMs.
Lex Fridman
(02:54:03)
Yeah, and robotics is very explicit. There’s the problem of locomotion and the problem of manipulation. Locomotion is much more solved, especially in the learning domain. But there’s a lot of value, just like with the initial protein folding systems, in bringing in the traditional model-based methods. So it’s unlikely that you can just learn the manipulation or the whole-body loco-manipulation problem end-to-end. That’s the dream. But then, when you look at the magic of the human hand and the complexity of the real world, you realize it’s really hard to learn this all the way through, the way I guess AlphaFold 2 didn’t.
Nathan Lambert
(02:54:40)
I’m excited about the robotic learning space. I think it’s collectively getting supercharged by all the excitement and investment in language models generally. The infrastructure for training transformers, which is a general modeling thing, is becoming world-class industrial tooling. Wherever there was a limitation for robotics, it’s just way better now. There’s way more compute. They take these language models and use them as central units where you can do interesting explorative work around something that already works. And then I see it emerging kind of like we talked about with Hugging Face Transformers and the Hugging Face ecosystem.
Nathan Lambert
(02:55:19)
I think when I was at Hugging Face, I was trying to get this to happen, but it was too early. These open robotic models on Hugging Face enable people to contribute data and fine-tune them. I think we’re much closer now that the investment in robotics and self-driving cars is related and enables this. Once you get to the point where you have this sort of ecosystem, someone can download a robotics model and fine-tune it to their robot or share datasets across the world. There’s some work in this area, like RT-X from a few years ago, where people are starting to do that. But once they have this ecosystem, it’ll look very different. And then this whole post-ChatGPT boom is putting more resources into that, which I think is a very good area for doing research.
Lex Fridman
(02:56:02)
This is also resulting in much better, more accurate, more realistic simulators being built, closing this sim-to-real gap in the robotic space. But you know, you mentioned a lot of excitement and investment. The downside of that, which happens in hype cycles—I personally believe, and most robotics people believe—is that robotics is not going to be solved on the timescale being implicitly or explicitly promised. So what happens when all these robotics companies spring up and then they don’t have a product that works? Then there’s going to be this crash of excitement, which is nerve-wracking. Hopefully something else will swoop in so that the continued development of some of these ideas keeps going.
Sebastian Raschka
(02:56:53)
I think it’s also related to the continual learning issue. The real world is so complex, whereas with LLMs, you don’t really need to have something learn for the user because there are a lot of things everyone has to do—everyone maybe wants to fix their grammar in their email or code. It’s more constrained, so you can prepare the model for that. But preparing a robot for the real world is harder. You have robotic foundation models, and you can learn things like grasping, but every house is different. It’s so different that the robot would have to learn on the job, essentially. And I think that is the bottleneck right now: customizing it on the fly.
Lex Fridman
(02:57:42)
I don’t think I can possibly overstate the importance of the thing that doesn’t get talked about almost at all by robotics folks or anyone, and that is safety. All the interesting complexities we talk about regarding learning, all the failure modes and failure cases—everything we’ve been talking about with LLMs where sometimes it fails in interesting ways—all of that is fun and games in the LLM space. In the robotic space, in people’s homes, across millions of minutes and billions of interactions, you are almost never allowed to fail. When you have embodied systems put out there in the real world, you just have to solve so many problems you never thought you’d have to solve when you’re just thinking about the general robot learning problem.
Nathan Lambert
(02:58:32)
I’m so bearish on in-home learned robots for consumer purchase. I’m very bullish on self-driving cars, and I’m very bullish on robotic automation, like Amazon distribution, where Amazon has built whole new distribution centers designed for robots first rather than humans. There’s a lot of excitement in AI circles about AI enabling automation—
Nathan Lambert
(02:58:54)
…and mass-scale manufacturing, and I do think that the path to robots doing that is more reasonable. It’s a thing that is designed and optimized to do a repetitive task that a human could conceivably do but doesn’t want to. But it’s also going to take a lot longer than people probably predict. I think the leap from the AI singularity to scaling up mass manufacturing in the US because we have a massive AI advantage is one that is troubled by a lot of political and other challenging problems.

Timeline to AGI

Lex Fridman
(02:59:31)
Let’s talk about timelines specifically: timelines to AGI or ASI. Is it fair, as a starting point, to say that nobody really agrees on the definitions of AGI and ASI?
Nathan Lambert
(02:59:46)
I think there’s a lot of disagreement, but I’ve been getting pushback where people say it is something that could reproduce most digital economic work. The remote worker is a fairly reasonable example. I think OpenAI’s definition is somewhat related to that—an AI that can do a certain number of economically valuable tasks—which I don’t really love as a definition, but it could be a grounding point. Language models today, while immensely powerful, are not this remote worker drop-in. There are things an AI could do that are way harder than remote work, like solving a…
Nathan Lambert
(03:00:29)
…finding an unexpected scientific discovery that you couldn’t even posit, which would be an example of something people call an artificial superintelligence problem. Or taking in all medical records and finding linkages across certain illnesses that people didn’t know or figuring out that some common drug can treat a niche cancer. They would say that is a superintelligence thing. So these are natural tiers. My problem is that it becomes deeply entwined with the quest for meaning in AI and these religious aspects. There are different paths you can take.
Lex Fridman
(03:01:06)
And I don’t even know if remote work is a good definition. I liked the report originally titled “AI 2027.” They focus more on code and research taste, so the target there is the superhuman coder. They have several milestone systems: superhuman coder, superhuman AI researcher, then superintelligent AI researcher, and then the full ASI. After you develop the superhuman coder, everything else follows quickly. The task is to have fully autonomous, automated coding, so any kind of coding you need to do in order to perform research is fully automated.
Lex Fridman
(03:01:58)
From there, humans would be doing AI research together with that system, and they will quickly be able to develop a system that actually can do the research for you. That’s the idea. Initially, their prediction was 2027 or ’28, and now they’ve pushed it back by three to four years to 2031, mean prediction. My prediction is probably even beyond 2031, but at least you can think concretely about how difficult it is to fully automate programming.
Nathan Lambert
(03:02:31)
Yeah, I disagree with some of their presumptions and dynamics on how it would play out, but I think they did good work in defining concrete milestones to tell a useful story. That’s why the reach of this AI 2027 document well transcended Silicon Valley—because they told a good story and did a lot of rigorous work.
Nathan Lambert
(03:02:53)
I think the camp that I fall into is that AI is so-called jagged, which means it will be excellent at some things and really bad at others. I think that when they’re close to this automated software engineer, what it will be good at is traditional ML systems and front end—the model is excellent at those—but at distributed ML the models are actually really quite bad, because there’s so little training data on doing large-scale distributed learning and things. And this is something that we already see, and I think this will just get amplified. And then it’s kind of messier in these trade-offs, and then there’s how you think AI research works and so on.
Lex Fridman
(03:03:28)
So you think basically a superhuman coder is almost unachievable, meaning because of the jagged nature of the thing, you’re just always going to have gaps in capabilities?
Nathan Lambert
(03:03:38)
I think it’s assigning completeness to something where the models are kind of superhuman at some types of code, and I think that will continue. And people are creative, so they’ll utilize these incredible abilities to fill in the weaknesses of the models and move really fast. There will always be, for a long time, this dance between the humans enabling this thing that the model can’t do, and the best AI researchers are the ones that can enable this superpower.
Nathan Lambert
(03:04:04)
And I think those lines, compared to what we already see… I think like Claude Code for building a website, you can stand up a beautiful website in a few hours or do data analysis. But the whole thing is going to keep getting better at these things, and we’ll pick up some new code skills and stuff along the way. Linking to what’s happening in big tech, this AI 2027 report leans into the singularity idea where I think research is messy and social and largely in the data in ways that AI models can’t process. But what we do have today is really powerful, and these tech companies are all collectively buying into this with tens of billions of dollars of investment. So we are going to get some much better version of ChatGPT, a much better version of Claude Code than we already have.
Nathan Lambert
(03:04:50)
I think that it’s just hard to predict where that is going, but the bright clarity of that future is why some of the most powerful people in the world are putting so much money into this. And I think it’s just kind of small differences—we don’t actually know what a better version of ChatGPT is, but also can it automate AI research? I would say probably not, at least in this timeframe. Big tech is going to spend $100 billion much faster than we get an automated AI researcher that enables an AI research singularity.
Lex Fridman
(03:05:22)
So you think your prediction would be, if this is even a useful milestone, more than 10 years out?
Nathan Lambert
(03:05:30)
I would say less than that on the software side, but I think longer than that on things like research.
Lex Fridman
(03:05:36)
Well, let’s just for fun try to imagine a world where all software writing is fully automated. Can you imagine that world?
Nathan Lambert
(03:05:46)
By the end of this year, the amount of software that’ll be automated will be so high. But it’ll be things like you’re trying to train a model with RL and you need to have multiple bunches of GPUs communicating with each other. That’ll still be hard, but I think it’ll be much easier.
Lex Fridman
(03:06:02)
One of the ways to think about this, the full automation of programming, is just think of lines of useful code written—the fraction of that to the number of humans in the loop. So presumably there’ll be, for a long time, humans in the loop of software writing. It’ll just be fewer and fewer relative to the amount of code written. Right? And with the superhuman coder, I think the presumption there is the number of humans in the loop goes to zero. What does that world look like when the number of humans in the loop is in the hundreds, not in the hundreds of thousands?

Will AI replace programmers?

Nathan Lambert
(03:06:39)
I think software engineering will be driven more to system design and goals of outcomes, where I do think software is largely going to be… I think this has been happening over the last few weeks, where people have gone from a month ago saying, “Oh yeah, agents are kind of slop,” which is a famous Karpathy quote, to the industrialization of software, when anyone can just create software at their fingertips. I do think we are closer to that side of things, and it takes direction and understanding how the systems work to extract the best from the language models. And I think it’s hard to accept the gravity of how much is going to change with software development and how many more people can do things without ever looking at the code.
Sebastian Raschka
(03:07:22)
I think what’s interesting is to think about whether these systems will be independent, in the sense that I have no doubt that LLMs will at some point solve coding in the way calculators solved calculating, right? At some point, humans developed a tool where you never need a human to calculate that number for you; you just type it in, and it’s an algorithm. I think that’s the same probably for coding. But the question isn’t… I think what will happen is you will just say, “Build that website,” and it will make a really good website, and then you maybe refine it. But will it do things independently where…
Sebastian Raschka
(03:07:59)
Will you still have humans asking the AI to do something? Like will there be a person to say, “Build that website?” Or will there be AI that just builds websites or something, or whatever?
Lex Fridman
(03:08:12)
I think talking about building websites is the—
Nathan Lambert
(03:08:15)
Too simple.
Sebastian Raschka
(03:08:16)
Yeah. Sure.
Lex Fridman
(03:08:16)
It’s just that the problem with websites, and the problem with the web, you know, HTML and all that kind of stuff, is that it’s very resilient to slop. It will show you slop; it’s good at showing slop. I would rather think of safety-critical systems, like asking AI to end-to-end generate something that manages logistics, or manages a fleet of cars, all that kind of stuff. So it end-to-end generates that for you.
Nathan Lambert
(03:08:45)
I think a more intermediate example is take something like Slack or Microsoft Word. I think if the organizations allow it, AI could very easily implement features end-to-end and do a fairly good job for things that you want to try. You want to add a new tab in Slack that you want to use, and I think AI will be able to do that pretty well.
Lex Fridman
(03:09:06)
Actually, that’s a really great example. How far away are we from that?
Nathan Lambert
(03:09:09)
Like this year.
Lex Fridman
(03:09:11)
See, I don’t know. I don’t know.
Nathan Lambert
(03:09:14)
I guess I don’t know how bad production codebases are, but I think that on the order of a few years, a lot of people are going to be pushed to be more like a designer and product manager, where you have multiple of these agents that can try things for you, and they might take one to two days to implement a feature or attempt to fix a bug. And you have these dashboards—which I think Slack is actually a good dashboard—where your agents will talk to you and you’ll then give feedback. But things like: I make a website, and can it make a logo that’s passable? I think these cohesive design things and the style are going to be very hard for models, as is deciding on what to add next.
Lex Fridman
(03:09:54)
I just… Okay. So I hang out with a lot of programmers and some of them are a little bit on the skeptical side in general—that’s just the vibe. I just think there’s a lot of complexity involved in adding features to complex systems. Like, if you look at the browser, Chrome. If I wanted to add a feature, say tabs on the left side as opposed to up top, an interface change, right? I think we’re not… This is not a next-year thing.
Nathan Lambert
(03:10:26)
One of the Claude releases this year, one of their tests was to give it a piece of software and leave Claude running to recreate it entirely, and it could already almost rebuild Slack from scratch, just given the parameters of the software and left in a sandbox environment to do that.
Lex Fridman
(03:10:41)
So the from-scratch part, I like almost better.
Nathan Lambert
(03:10:44)
So it might be that the smaller and newer companies are advantaged and they’re like, “We don’t have to have the bloat and complexity, and therefore this feature exists.”
Sebastian Raschka
(03:10:53)
And I think this gets to the point that you mentioned that some people you talk to are skeptical, and I think that’s not because the LLM can’t do X, Y, Z. It’s because people don’t want it to do it this way.
Lex Fridman
(03:11:05)
Some of that could be a skill issue on the human side. Unfortunately, we have to be honest with ourselves. And some of that could be an underspecification issue. So, programming… this is like a communication type of issue in relationships and friendships. You’re assuming the LLM somehow is supposed to read your mind. I think this is where spec-driven design is really important. Like you just, using natural language, specify what you want.
Nathan Lambert
(03:11:32)
I think if you talk to people at the labs, they use these in their training and production code. Claude Code is built with Claude Code, and they all use these things extensively. And Dario talks about how much of Claude’s code… It’s like these people are slightly ahead in terms of the capabilities—
Nathan Lambert
(03:11:49)
—they have, and they probably spend on inference. They could spend 10 to 100 times as much as we’re spending, like we’re on a lowly 100 or $200 a month plan. They truly let it rip. And I think that with the pace of progress that we have, a year ago we didn’t have Claude Code and we didn’t really have reasoning models. The difference between sitting here today and what we can do with these models—it seems like there’s a lot of low-hanging fruit to improve them. The failure modes are pretty dumb. It’s like, “Claude, you tried to use the CLI command I don’t have installed 14 times, and then I sent you the command to run.” From a modeling perspective, that thing is pretty fixable. So, I don’t know.
Lex Fridman
(03:12:34)
I agree with you. I’ve been becoming more and more bullish in general. Speaking to what you’re articulating, I think it is a human skill issue. So Anthropic and other companies are leading the way in understanding how to best use the models for programming; therefore, they’re effectively using them. I think there’s a lot of programmers on the outskirts who don’t… I mean, there’s not a really good guide on how to use them. People are trying to figure it out exactly, but—
Nathan Lambert
(03:13:04)
It might be very expensive. It might be that the entry point for that is $2,000 a month, which is only for tech companies and rich people. That could be it.
Lex Fridman
(03:13:13)
But it might be worth it. If the final result is a working software system, it might be worth it. By the way, it’s funny how we converged from the discussion of the timeline to AGI to something more pragmatic and useful. Is there anything concrete and profound to be said about the timeline to AGI and ASI? Or are these discussions a bit too detached from the day-to-day?
Nathan Lambert
(03:13:39)
There’s interesting bets. There’s a lot of people trying to do Reinforcement Learning with Verifiable Rewards—RLVR—but in real scientific domains. There are startups spending hundreds of millions of dollars in funding, and they have wet labs where they’re having language models propose hypotheses that are tested in the real world. I would say that they’re early, but with the pace of progress—
Nathan Lambert
(03:14:00)
—maybe they’re early by six months and they make it because they were there first, or maybe they’re early by eight years. You don’t really know. So I think that type of moonshot to branch this momentum into other sciences would be very transformative if AlphaFold moments happen in all sorts of other scientific domains by a startup solving this. I think there are startups—maybe Harmonic is one—where they’re going all in on language models plus Lean for math. I think you had another podcast guest where you talked about this recently, and it’s like we don’t know exactly what’s going to fall out of spending $100 million on that model.
Nathan Lambert
(03:14:41)
Most of them will fail, but a couple of them might be big breakthroughs that are very different than ChatGPT or Claude Code type software experiences. Like a tool that’s only good for a PhD mathematician but makes them 100 times more effective.
Sebastian Raschka
(03:14:58)
I agree. I think this will happen in a lot of domains, especially those with a lot of resources like finance, legal, and pharmaceutical companies. But then again, is it really AGI? Because we are specializing it again. Is it really that much different from how we had specialized algorithms back in the day? I think it’s just the same thing but way more sophisticated. Is there a threshold when we call it AGI? I think the real cool thing here is that we have foundation models that we can specialize. That’s the breakthrough.
Sebastian Raschka
(03:15:34)
Right now, I think we are not there yet because first, it’s too expensive, but also ChatGPT doesn’t just give away their model to customize it. I can imagine a business model where OpenAI says at some point, “Hey, Bank of America, for $100 million we will do your custom model.” I think that will be the huge economic value add. The other thing though is, what is the differentiating factor? If everyone uses ChatGPT, they will all do the same thing. Everyone is moving in lockstep, but usually companies want to have a competitive advantage. I think there is no way around using some of their private data and experimenting with specialization. It’s going to be interesting.
Nathan Lambert
(03:16:26)
Given the pace of progress, it does feel like things are coming. I don’t think the AGI and ASI thresholds are particularly useful.
Lex Fridman
(03:16:35)
I think the real question, and this relates to the remote worker thing, is when we are going to see a big, obvious leap in economic impact. Because currently there’s not been an obvious leap in economic impact from LLMs, for example. Aside from AGI or ASI, there’s a real question of when we are going to see a GDP jump.
Nathan Lambert
(03:17:06)
Yeah, it’s like, what is the GDP made up of? A lot of it is financial services, so I don’t know what this is. It’s just hard for me to think about the GDP bump, but I would say that software development becomes valuable in a different way when you no longer have to look at the code anymore. So when it is like, Claude will make you a small business—which is essentially Claude can set up your website, your bank account, your email, and your whatever else—and you just have to express what you’re trying to put into the world. That’s not just an enterprise market, but it is hard. I don’t know how you get people to try doing that. I guess if ChatGPT can do it—people are trying ChatGPT.
Lex Fridman
(03:17:49)
I think it boils down to the scientific question of, “How hard is tool use to solve?” Because a lot of the stuff you’re implying, the remote work stuff, is tool use. It’s like computer use; how you have an LLM that goes out there, this agentic system, and does something in the world, and only screws up 1% of the time.
Nathan Lambert
(03:18:11)
Computer use is a good example of what labs care about and we haven’t seen a lot of progress on.
Lex Fridman
(03:18:12)
Or less.
Nathan Lambert
(03:18:12)
We saw multiple demos in 2025 of, like, Claude can use your computer, or OpenAI had Operator, and they all suck. So they’re investing money in this, and I think that’ll be a good example. Whereas actually, taking over the whole screen seems a lot harder than having an API that they can call in the back end. Some of that is you have to then set up a different environment for them all to work in. They’re not working on your MacBook; they are individually interfacing with Google and Amazon and Slack, and they handle all these things in a very different way than humans do. So some of this might be structural blockers.
Sebastian Raschka
(03:18:55)
Also, specification-wise, I think the problem for arbitrary tasks is that you still have to specify what you want your LLM to do. What is the environment? How do you specify? You can say what the end goal is, but if it can’t solve the end goal—with LLMs, if you ask it for text, it can always clarify or do sub-steps. How do you put that information into a system that, let’s say, books a travel trip for you? You can say, “Well, you screwed up my credit card information,” but even to get it to that point, as a user, how do you guide the model before it can even attempt that? I think the interface is really hard.
Lex Fridman
(03:19:36)
Yeah, it has to learn a lot about you specifically. And this goes to continual learning—about the general mistakes that are made across users, and then the mistakes that are made with you.
Nathan Lambert
(03:19:48)
All the AI interfaces are getting set up to ask humans for input. I think Claude Code, which we talked about a lot, asks for feedback and questions. If it doesn’t have enough specification on your plan or your desire, it starts to ask questions, “Would you rather?” We talked about Memory, which saves across chats. Its first implementation is kind of odd, where it’ll mention my dog’s name or something in a chat. I’m like, “You don’t need to be subtle about this. I don’t care.” But things are emerging, like ChatGPT has the Pulse feature.
Nathan Lambert
(03:20:19)
Which is like a curated couple paragraphs with links to something to look at or to talk about, and people talk about how the language models are going to ask you questions. It’s probably going to work. The language model knows you had a doctor appointment or something, and it’s like, “Hey, how are you feeling after that?” Which again, goes into the territory where humans are very susceptible to this and there’s a lot of social change to come. But also, they’re experimenting with having the models engage. Some people really like this Pulse feature, which processes your chats and automatically searches for information and puts it in the ChatGPT app. So there’s a lot of things coming.
Sebastian Raschka
(03:20:58)
I used that feature before, and I always feel bad because it does that every day, and I rarely check it out. How much compute is burned on something I don’t even look at, you know?
Nathan Lambert
(03:21:11)
There’s also a lot of idle compute in the world, so don’t feel too bad.
Lex Fridman
(03:21:16)
Okay. Do you think new ideas might be needed? Is it possible that the path to AGI—whatever that is, however we define that—to solve computer use more generally, to solve biology and chemistry and physics, sort of the Dario definition of AGI or powerful AI? Do you think it’s possible that totally new ideas are needed? Non-LLM, non-RL ideas. What might they look like? We’re now going into philosophy land a little bit.
Nathan Lambert
(03:21:50)
For something like a singularity to happen, I would say yes. And the new ideas could be architectures or training algorithms, which are fundamental deep learning things. But they’re, in that nature, pretty hard to predict. But I think we will get very far even without those advances. Like, we might get this software solution, but it might stop at software and not do computer use without more innovation. So I think that a lot of progress will be coming, but if you zoom out, there are still ideas in the next 30 years that are going to look like a major scientific innovation that enabled the next chapter of this. And I don’t know if it comes in one year or in 15 years.
Lex Fridman
(03:22:32)
Yeah. I wonder if the Bitter Lesson holds true for the next 100 years, and what that looks like.
Nathan Lambert
(03:22:37)
If scaling laws are fundamental in deep learning, I think the Bitter Lesson will always apply, which is compute will become more abundant. But even within abundant compute, the ones that have a steeper scaling law slope or a better offset—like, this is a 2D plot of performance and compute—even if there’s more compute available, the ones that get 100x out of it will win.
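The slope-versus-offset point can be sketched numerically. A minimal illustration, assuming a Chinchilla-style power-law fit of loss against training compute, L(C) = a·C^(−b); every constant below is made up for the sketch, not a fit to any real model family:

```python
# Hypothetical power-law loss curves for two model families.
# Constants are illustrative only.

def loss(compute_flops: float, a: float, b: float) -> float:
    """Power-law loss vs. training compute: lower is better."""
    return a * compute_flops ** (-b)

# Family A: worse offset but steeper slope; Family B: better offset, shallower slope.
family_a = dict(a=46.0, b=0.08)
family_b = dict(a=10.0, b=0.05)

for c in (1e21, 1e24):  # two compute budgets, in FLOPs
    la, lb = loss(c, **family_a), loss(c, **family_b)
    print(f"C={c:.0e}: A={la:.3f}, B={lb:.3f}, winner={'A' if la < lb else 'B'}")
```

With these made-up constants, the family with the better offset wins at the smaller budget, but once compute is abundant the steeper slope dominates, which is the point about getting more out of the same compute even when compute is plentiful.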
Lex Fridman
(03:23:01)
It might be something like literally computer clusters orbiting Earth with solar panels.
Nathan Lambert
(03:23:09)
The problem with that is heat dissipation. You get all the radiation from the sun and you don’t have any air to dissipate heat. But there is a lot of space to put clusters and a lot of solar energy there, and there probably could be engineering will to solve the heat problem— …so there could be.
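The heat problem can be put in rough numbers. In vacuum, radiation is the only way to shed waste heat, so the Stefan-Boltzmann law sets the radiator size; the cluster power, radiator temperature, and emissivity below are illustrative assumptions, not a real design:

```python
# Back-of-the-envelope radiator sizing for an orbital compute cluster.
# Stefan-Boltzmann: radiated power P = emissivity * sigma * area * T^4.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area needed to reject `power_w` of waste heat at temperature `temp_k`."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# A hypothetical 1 MW cluster with radiators running at 300 K:
area = radiator_area_m2(1e6, 300.0)
print(f"{area:,.0f} m^2 of radiator")
```

Under these assumptions the answer comes out on the order of a few thousand square meters per megawatt, which is why radiators, not solar collection, tend to dominate such designs.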
Lex Fridman
(03:23:27)
Is it possible—and we should say that it definitely is possible—that we’re basically going to be plateauing this year? Not in terms of— …the system capabilities, but what the system capabilities actually mean for human civilization. So on the coding front, really nice websites will be built. Very nice autocomplete.
Lex Fridman
(03:23:53)
Very nice way to understand code bases and maybe help debug, but really just a very nice helper on the coding front. It can help research mathematicians do some math. It can help you with shopping. It’s a nice helper. It’s Clippy on steroids. What else? It may be a good education tool and all that kind of stuff, but computer use turns out extremely difficult to solve. So I’m trying to frame the cynical case in all these domains where there’s not a really huge economic impact, but realize how costly it is to train these systems at every level—both the pre-training and the inference, how costly the inference is, the reasoning, all of that. Is that possible? And how likely is that, do you think?
Nathan Lambert
(03:24:47)
When you look at the models, there are so many obvious things to improve, and it takes a long time to train these models and to practice this art, that it’ll take us multiple years, with the ideas that we have, to actually saturate in terms of whatever benchmark or performance we are searching for. It might serve very narrow niches. The average ChatGPT user might not get a lot of benefit out of this, but it is going to serve different populations by getting better at different things.

Is the dream of AGI dying?

Lex Fridman
(03:25:18)
But I think what everybody’s chasing now is a general system that’s useful to everybody. So, okay, if that’s not… that can plateau, right?
Nathan Lambert
(03:25:28)
I think that dream is actually kind of dying. As you talked about with the specialized models where it’s like… and multimodal is often… like, video generation is a totally different thing.
Lex Fridman
(03:25:39)
“That dream is kind of dying” is a big statement, because I don’t know if it’s dying. If you ask the actual Frontier Lab people, they’re still chasing it, right?
Sebastian Raschka
(03:25:48)
I do think they are still rushing to get the next model out, which will be much better than the previous one. “Much” is a relative term, but it will be better than the previous one. I can’t see them slowing down. I just think the gains will be made or felt more through not only scaling the model, but now… I feel like there’s a lot of tech debt. It’s like, “Well, let’s just put the better model in there, and better model, better model.” And now people are like, “Okay, let’s also at the same time improve everything around it too.”
Sebastian Raschka
(03:26:20)
Like the engineering of the context and inference scaling. And the big labs will still keep doing that. And now also the smaller labs will catch up to that because now they are hiring more. There will be more people. LLMs are kind of like a circle: they also make the people building them more productive, and it’s just like an amplifier. I think what we can expect is amplification, but not a paradigm change. I don’t think that is coming, but everything will be just amplified and amplified and amplified, and I can see that continuing for a long time.
Nathan Lambert
(03:26:52)
Yeah. I guess my statement with the dream is dying depends on exactly what you think it’s going to be doing. Like Claude Code is a general model that can do a lot of things, but it depends a lot on integrations and other things. I bet Claude Code could do a fairly good job of doing your email, and the hardest part is figuring out how to give it information and how to get it to be able to send your emails and stuff like this. But I think it goes back to what is the “one model to rule everything” ethos, which is just like a thing in the cloud that handles your entire digital life and is way smarter than everybody.
Nathan Lambert
(03:27:34)
So it’s an interesting leap of faith to go from Claude Code becomes that—which, in some ways, there are some avenues for that—but I do think that the rhetoric of the industry is a little bit different.
Sebastian Raschka
(03:27:49)
I think the immediate thing we will feel next as a normal person using LLMs will probably be related to something trivial, like making figures. Right now LLMs are terrible at making figures. Is it because we are getting served the cheap models, with less inference compute than what’s available behind the scenes? Maybe by turning some cranks we can already get better figures, but if you ask it today to draw a flowchart of X, Y, Z, it’s most of the time terrible. And it is kind of a very simple task for a human. I think it’s almost easier sometimes to draw something than to write something.
Nathan Lambert
(03:28:25)
Yeah, the multimodal understanding does feel like something that is odd, that it’s not better solved.
Lex Fridman
(03:28:31)
I think we’re not saying one actually obvious thing, a gigantic thing that’s hard to measure, which is making all of human knowledge accessible… …To the entire world. One of the things that I think is hard to articulate, but there’s just a huge difference between Google Search and an LLM. I feel like I can basically ask an LLM anything and get an answer, and it’s doing less and less hallucination.
Lex Fridman
(03:29:04)
And that means understanding my own life, figuring out a career trajectory, figuring out how to solve the problems all around me, learning about anything through human history. I feel like nobody’s really talking about that because they just immediately take it for granted that it’s awesome. That’s why everybody’s using it—it’s because you get answers for stuff, and think about the impact of that across time. This is not just in the United States; this is all across the world. Kids throughout the world being able to learn these ideas—the impact that has across time is probably where the real GDP growth will be. It won’t be like a leap.
Lex Fridman
(03:29:51)
It’ll be that that’s how we get to Mars, that’s how we build these things, that’s how we have a million new OpenAIs, all the kind of innovation that happens from there. And that’s just this quiet force that permeates everything, right? Human knowledge.
Sebastian Raschka
(03:30:06)
I do agree with you, and in a sense it makes knowledge more accessible, but it also depends on what the topic is. For something like math, you can ask it questions and it answers, but if you want to learn a topic from scratch—we talked about this earlier—I think the sweet spot is still math textbooks where someone laid it out linearly. That is a proven strategy to learn a topic, and it makes sense if you start from zero to get information-dense text to soak it up, but then you use the LLM to make infinite exercises.
Sebastian Raschka
(03:30:47)
If you have problems in a certain area or have questions about things you are uncertain about, you ask it to generate example problems, you solve them, and then maybe you need more background knowledge and you ask it to generate that. But it won’t give you anything that is not in the textbook. It’s just packaging it differently, if that makes sense.
Sebastian Raschka
(03:31:13)
But then there are things where it also adds value in a more timely sense, where there is no good alternative besides a human doing it on the fly. For example, if you’re planning to go to Disneyland and you try to figure out which tickets to buy for which park when, well, there is no textbook on that. There is no information-dense resource on that. There’s only the sparse internet, and then there is a lot of value in the LLM. You just ask it. You have the constraints on traveling on these specific days, you want to go to certain places, and you ask it to figure out what you need, when and from where… …What it costs and stuff like that. It is a very customized, on-the-fly package. Personalization is essentially like—

How AI will make money?

Sebastian Raschka
(03:32:02)
…pulling information from the sparse internet, the non-information-dense thing where there’s no better version that exists. You make it from scratch almost.
Lex Fridman
(03:32:12)
And if it does exist, it’s full of—speaking of Disney World—ad slop. Like any city in the world, if you ask “what are the top 10 things to do?” An LLM is just way better to ask… …Than anything on the internet.
Nathan Lambert
(03:32:29)
Well, for now, that’s because they’re massively subsidized, and eventually they’re going to be paid for by ads.
Lex Fridman
(03:32:35)
Oh my goodness.
Nathan Lambert
(03:32:37)
It’s coming.
Lex Fridman
(03:32:38)
No. I’m hoping there’s a very clear indication of what’s an ad and what’s not an ad in that context, but—
Sebastian Raschka
(03:32:46)
That’s something I mentioned a few years ago. It’s like, I don’t know, if you are looking for a new running shoe, is it a coincidence that Nike maybe comes up first? Maybe, maybe not. I think there are clear laws around this. You have to be clear about that, but I think that’s what everyone fears. It’s like the subtle message in there or something like that. But also, this brings us to the topic of ads where, I think this was a thing they were hoping to launch in 2025, because I think they’re still not making money in that other way right now, so… …Like having actual ad spots in there. And then the thing, though, is they couldn’t, because there are alternatives without ads and people would just flock-
Sebastian Raschka
(03:33:31)
…to the other products. And it also is just crazy how they’re one-upping each other, spending so much money just to get the users.
Nathan Lambert
(03:33:41)
I think so. Like some Instagram ads—I don’t use Instagram- …but I understand the appeal of paying a platform to find users who will genuinely like your product. That is the best case of things like Instagram ads.
Nathan Lambert
(03:33:56)
But there are also plenty of cases where advertising is very awful for incentives. I think that a world where the power of AI can integrate with that positive view—like, I am a person and I have a small business and I want to make the best damn steak knives in the world and I want to sell them to somebody who needs them. And if AI can make that sort of advertising work even better, that’s very good for the world, especially with digital infrastructure, because that’s how the modern web has been built. But that’s not to say that addicting feeds so that you can show people more content is a good thing. So I think even what OpenAI would say is they want to find a way to capture the monetization upside of ads while still giving their users agency.
Nathan Lambert
(03:34:45)
And I personally would think that Google is probably going to be better at figuring out how to do this because they already have ad supply. If they figure out how to turn this demand in their Gemini app into useful ads, then they can turn it on. I don’t know if I think it’s this year, but there will be experiments with it.
Sebastian Raschka
(03:35:06)
I do think what holds companies back right now is really just that the competition is not doing it. It’s more like a reputation thing. I think people are just afraid right now of ruining their reputation or losing users- …because it would make headlines if someone launched these ads. But-
Nathan Lambert
(03:35:23)
Unless they were great, but the first ads won’t be great because it’s a hard problem that we don’t know how to solve.
Sebastian Raschka
(03:35:28)
Yeah, I think also the first version of that will likely be something like on X, like the timeline where you have a promoted post sometimes in between. It’ll be something like that where it will say “promoted” or something small, and then there will be an image or something. I think right now the problem is who makes the first move.
Nathan Lambert
(03:35:43)
If we go 10 years out, the proposition for ads is that you will make so much money on ads by having so many users- …that you can use this to fund better R&D and- …make better models, which is why- …like YouTube is dominating the market. Netflix is scared of YouTube. They make, I don’t know—I pay $28 a month for premium. They make at least $28 a month off of me and many other people, and they’re just creating such a dominant position in video. So I think that’s the proposition, which is that ads can give you a sustained advantage- …in what you’re spending per user. But there’s so much money in it right now that it’s like somebody starting that flywheel- is scary because it’s a long-term bet.

Big acquisitions in 2026

Lex Fridman
(03:36:29)
Do you think there’ll be some crazy big moves this year business-wise? Like Google or Apple acquiring Anthropic or something like this?
Nathan Lambert
(03:36:40)
Dario will never sell, but we are starting to see some types of consolidation, with Groq being valued at $20 billion and Scale AI at almost $30 billion. There are countless other deals structured in a way that is actually detrimental to the Silicon Valley ecosystem—these licensing deals where not everybody gets brought along, rather than a full acquisition that benefits the rank-and-file employees by getting their stock vested. That’s a big issue for Silicon Valley culture to address because the startup ecosystem is the lifeblood. If you join a startup, even if it’s not that successful, your startup very well might get acquired at a cheap premium and you’ll get paid out for your equity.
Nathan Lambert
(03:37:24)
And these licensing deals are essentially taking the top talent a lot of the time. I think the deal for Groq to NVIDIA is rumored to be better for the employees, but it is still this antitrust-avoiding thing. I think that this trend of consolidation will continue. Me and many smart people I respect have been expecting consolidation to have happened sooner, but it seems like things are starting to turn. But at the same time, you have companies raising ridiculous amounts of money for reasons that I don’t understand. I’m like, “I don’t know why you’re taking that money.” So it’s maybe mixed this year, but some consolidation pressure is starting.
Lex Fridman
(03:38:04)
What kind of surprising consolidation do you think we’ll see? You say Anthropic is a “never.” I mean, Groq is a big one—Groq with a Q, by the way.
Nathan Lambert
(03:38:12)
Yeah. There’s just a lot of startups and there’s a very high premium on AI startups. So there could be a lot of $10 billion range acquisitions, which is a really big acquisition for a startup that was maybe founded a year ago. I think Manus.ai—this company based in Singapore that was founded eight months ago and then had a $2 billion exit. I think there will be some other big multi-billion dollar acquisitions, like Perplexity.
Lex Fridman
(03:38:39)
Like Perplexity, right?
Nathan Lambert
(03:38:40)
Yeah, people rumor them to Apple. I think there’s a lot of pressure and liquidity in AI. There’s pressure on big companies to have outcomes, and I would guess that a big acquisition gives people leeway to then tell the next chapter of that story.
Lex Fridman
(03:38:56)
I mean, yeah, we’ve been talking about code. Maybe somebody acquires Cursor.
Nathan Lambert
(03:39:02)
They’re in such a good position because they have so much user data. And we talked about continual learning and stuff; they had one of the most interesting blog posts. They mentioned that their new Composer model was a fine-tune of one of these large Mixture of Experts models from China. You can know that from gossip or because the model sometimes responds in Chinese, which none of the American models do. They had a blog post where they said, “We’re updating the model weights every 90 minutes based on real-world feedback from people using it.” Which is the closest thing to real-world RL happening on a model, and it was just right there in one of their blog posts.
Lex Fridman
(03:39:36)
That’s incredible.
Nathan Lambert
(03:39:36)
—which is super cool.
Lex Fridman
(03:39:38)
And by the way, I should say I use Composer a lot because one of the benefits it has is it’s fast.
Nathan Lambert
(03:39:43)
I need to try it because everybody says this.
Lex Fridman
(03:39:45)
And there’ll be some IPOs potentially. You think Anthropic, OpenAI, xAI?
Nathan Lambert
(03:39:51)
They can all raise so much money so easily that they don’t feel a need to… So long as fundraising is easy, they’re not going to IPO because public markets apply pressure.
Nathan Lambert
(03:40:00)
I think we’re seeing in China that the ecosystem’s a little different, with both MiniMax and Z.ai filing IPO paperwork, which will be interesting to see how the Chinese market reacts. I actually would guess that it’s going to be similarly hypey to the US so long as all this is going, and not based on the realities that they’re both losing a ton of money. I wish more of the American gigantic AI startups were public because it would be very interesting to see how they’re spending their money and have more insight. And also just to give people access to investing in these, because I think that they’re the companies of the era. And the tradition is now for so many of the big startups in the US to not go public.
Nathan Lambert
(03:40:43)
It’s like we’re still waiting for Stripe and their IPO, but Databricks definitely didn’t; they raised like a Series G or something. And I just feel like it’s a kind of a weird equilibrium for the market where I would like to see these companies go public and evolve in that way that a company can.

Future of OpenAI, Anthropic, Google DeepMind, xAI, Meta

Lex Fridman
(03:41:01)
You think 10 years from now some of the frontier model companies are still around? Anthropic, OpenAI?
Nathan Lambert
(03:41:08)
I definitely don’t see it to be a winner-takes-all unless there truly is some algorithmic secret that one of them finds that lets this flywheel take off. Because the development path is so similar for all of them. Google and OpenAI have all the same products, and Anthropic’s more focused, but when you talk to people, it sounds like they’re solving a lot of the same problems. So I think… there’s offerings that’ll spread out. It’s a very big cake that’s being made that people are going to take money out of.
Lex Fridman
(03:41:36)
I don’t want to trivialize it, but OpenAI and Anthropic are primarily LLM service— —providers. And some of the other companies like Google and xAI, linked to X, do other stuff— —too. And so it’s very possible if AI becomes more commodified that the companies that are just providing LLMs will die.
Sebastian Raschka
(03:42:00)
I think the advantage they have is they have a lot of users, and I think they will just pivot. Like Anthropic, I think, pivoted. I don’t think they originally planned to work on code, but it happened that they found, “Okay, this is a nice niche and now we are comfortable in this niche and we push on this niche.” And I can see the same thing once… Let’s say hypothetically speaking, I’m not sure if it will be true, but let’s say Google takes all the market share of the general chatbot. Maybe OpenAI will then be focused on some other sub-topic— —like… They have too many users to go away in the foreseeable future, I think.
Lex Fridman
(03:42:37)
I think Google is always ready to say, “Hold my beer,” with AI mode.
Nathan Lambert
(03:42:40)
I think the question is if the companies can support the valuations. I’d see the AI companies being looked at in some ways like AWS, Azure, and GCP, which are all competing in the same space and all very successful businesses. There’s a chance that the API market is so unprofitable that they go up and down the stack to products and hardware. They have so much cash that they can build power plants and build data centers, which is a durable advantage now. But there’s also just a reasonable outcome that these APIs are so valuable and so flexible for developers that they become the likes of something like AWS. But AWS and Azure are also going to have these APIs, so five or six people competing in the API market is hard. So maybe that’s why they get squeezed out.
Lex Fridman
(03:43:27)
You mentioned “RIP LLaMA.” Is there a path to winning for Meta?
Nathan Lambert
(03:43:32)
I think nobody knows. They’re moving a lot, so they’re signing licensing deals with Black Forest Labs, which is image generation, or Midjourney. So I think in some ways, on the product and consumer-facing AI front, it’s too early to tell. I think they have some people that are excellent and very motivated being close to Zuckerberg. So I think that there’s still a story to unfold there. Llama is a bit different, where Llama was the most focused expression of the organization. And I don’t see Llama being supported to that extent anymore. I think it was a very successful brand for them, so they still might participate in the open ecosystem in some way or continue the Llama brand into a different service, because people know what Llama is.
Lex Fridman
(03:44:21)
You think there’s a Llama 5?
Nathan Lambert
(03:44:24)
Not an open weight one.
Sebastian Raschka
(03:44:26)
It’s interesting. Just to recap a bit, I mean, Llama was the pioneering open-weight model—Llama 1, 2, 3, a lot of love. But I think then what happened, just hypothesizing or speculating, is that the leaders at Meta, like the upper executives, got very excited about Llama because they saw how popular it was in the community. And then I think the problem was trying to use the open source to make a bigger splash. It felt forced, like developing these very big Llama 4 models just to be on the top of the benchmarks.
Sebastian Raschka
(03:45:09)
But I don’t think the goal of Llama models is to be on top of the benchmarks beating, let’s say, ChatGPT or other models. I think the goal was to have a model that people can use, trust, modify, and understand. That includes having smaller models; they don’t have to be the best models. And what happened was just that the benchmarks suggested these models were better than they were, because I think they had specific models trained on preferences so that they performed well on the benchmarks. That’s kind of this overfitting thing to force it to be the best. But then at the same time, they didn’t do the small models that people could use, and no one could run these big models.
Sebastian Raschka
(03:45:45)
And then there was kind of a weird thing. I think it’s just because people got too excited about headlines pushing the frontier. I think that’s it.
Lex Fridman
(03:45:54)
And too much on the benchmark-maxing side.
Sebastian Raschka
(03:45:56)
It’s too much work.
Nathan Lambert
(03:45:57)
I think it imploded under internal political fighting and misaligned incentives. The researchers want to build the best models, but there’s a layer of organization— …and management that is trying to demonstrate that they do these things. There are a lot of pieces and rumors where some horrible technical decision was made, and it just seems like it got too bad where it all just crashed out.
Lex Fridman
(03:46:24)
Yeah, but we should also give huge props to Mark Zuckerberg. I think it comes from Mark actually, from the top of the leadership, saying open source is important. The fact that that exists means there could be a Llama 5, where they learn the lessons from the benchmark-maxing and say, “We’re going to be GPT-OSS—” “…and provide a really awesome library of open source.”
Nathan Lambert
(03:46:51)
What people say is that there’s a debate between Mark and Alexandr Wang, who is very bright but much more against open source. To the extent that he has a lot of influence over the AI org, it seems much less likely, because Mark brought him in for fresh leadership in directing AI. And if being open or closed is no longer the defining nature of the model, I don’t expect that to be a defining argument between Mark and Alex. They’re both very bright, but I have a hard time understanding all of it because Mark wrote this piece in July of 2024, which was probably the best blog post at the time, making the case for open source AI. And then July 2025 came around and it was like, “We’re reevaluating our relationship with open source.” So it’s just kind of…
Sebastian Raschka
(03:47:42)
But I think also the problem—well, we may have been a bit too harsh, and that caused some of that. I mean, we as open source developers or the open source community. Because even though the model was maybe not what everyone hoped for, it got a lot of backlash. I think that was a bit unfortunate because as a company, they were hoping for positive headlines. Instead of just getting no headlines or positive headlines, they got negative headlines. And then it kind of reflected badly on the company. It’s maybe a spite reaction, almost like, “Okay, we tried to do something nice, we tried to give you something cool like an open source model, and now you are being negative about us.” So in that sense, it looks like, “Well, maybe then we’ll change our mind.” I don’t know.
Lex Fridman
(03:48:38)
Yeah, that’s where the dynamics of discourse on— …X can lead us as a community astray. Because sometimes it feels random; people pick the things they like and don’t like. I mean, you can see the same thing with Grok 4.1 and Grok Code Fast 1.0. I don’t think, vibe-wise, people love it publicly. But a lot of people use it. So if you look at Reddit and X, they don’t really give it praise from the programming community— … but, like, they use it. And the same thing with probably Llama. I don’t understand the dynamics of either positive hype or negative hype. I don’t understand it.
Nathan Lambert
(03:49:25)
I mean, one of the stories of 2025 is the US filling the gap of Llama, which is all the rise of these Chinese open-weight models- … to the point where I was like, “That was the single issue I’ve spent a lot of energy on in the last five months,” which is trying to do policy work- … to get the US to invest in this.
Lex Fridman
(03:49:41)
So just tell me the story of ATOM.
Nathan Lambert
(03:49:43)
The ATOM Project is… It started as me calling it the American DeepSeek Project, which doesn’t really work for DC audiences, but it’s the story of what is the most impactful thing I can do with my career. These Chinese open-weight models are cultivating a lot of power and there is a lot of demand for building on open models, especially in enterprises in the US that are very cagey about Chinese models.
Lex Fridman
(03:50:06)
Looking at Perplexity, the ATOM Project—American Truly Open Models—is a US-based initiative to build and host high-quality, genuinely open-weight AI models and supporting infrastructure explicitly aimed at competing with and catching up to China’s rapidly advancing open-source AI ecosystem.
Nathan Lambert
(03:50:25)
I think the one-sentence summary would be that—or two sentences. One is a proposition that open models are going to be an engine for AI research because that is what people start with; therefore, it’s important to own them. And the second one is, therefore, the US should be building the best models so that the best research happens in the US and those US companies take the value from being the home of where AI research is happening. Without more investment in open models, we have all the plots on the website where it’s like, “Qwen, Qwen, Qwen, Qwen,” and it’s all these models that are excellent from these Chinese companies that are cultivating influence in the US and internationally.
Nathan Lambert
(03:51:07)
And the US is spending way more on AI. The ability to create open models that are half a generation or a generation behind the cutting edge of the closed labs costs roughly $100 million, which is a lot of money, but not compared to what these companies have. Therefore, we need a centralizing force of people who want to do this. I think we got signatures from people pretty much across the full stack, including policy.
Lex Fridman
(03:51:33)
So there has been support from the administration?
Nathan Lambert
(03:51:36)
I don’t think anyone technically in government has signed it publicly, but I know that people that have worked in AI policy, both in the Biden and Trump administrations, are very supportive of trying to promote open-source models in the US. I think, for example, AI2 got a grant from the NSF for $100 million over four years, which is the biggest CS grant the NSF has ever awarded, for AI2 to attempt this, and I think it’s a starting point. But the best results happen when there are multiple organizations building models because they can cross-pollinate ideas and build this ecosystem. It doesn’t work if it’s just Llama releasing models to the world, because Llama could go away. The same thing applies for AI2; I can’t be the only one building models.
Nathan Lambert
(03:52:24)
It becomes a lot of time spent talking to people, whether they’re in policy… I know NVIDIA is very excited about this. I think Jensen Huang has been specifically talking about the urgency of this, and they’ve done a lot more in 2025, where the Nemotron 3 models are more of a focus. They’ve started releasing some data along with NVIDIA’s open models, and very few companies do this, especially of NVIDIA’s size. So there are signs of progress. We hear about Reflection AI, where they say their two-billion-dollar fundraise is dedicated to building US open models, and their announcement tweet reads like a sign of a cultural tide starting to turn.
Nathan Lambert
(03:53:09)
I think in July was when we had four or five DeepSeek-caliber Chinese open-weight models and zero from the US. That’s the moment where I released this and was like, “I guess I have to spend energy on this because nobody else is gonna do it.” So it takes a lot of people contributing together. I’m not saying the Adam Project is the only thing moving the ecosystem, but it’s people like me doing this sort of thing to get the word out.

Manhattan Project for AI

Sebastian Raschka
(03:53:35)
Do you like the 2025 America’s AI Action Plan? That includes open source stuff. The White House AI Action Plan includes a dedicated section titled “Encourage Open-Source and Open-Weight AI,” defining such models and arguing they have unique value for innovation and startups.
Nathan Lambert
(03:53:52)
Yeah. I mean, the AI Action Plan is just a plan, but I think it’s maybe the most coherent policy document that has come out of the administration, and I hope that it largely succeeds. I know people who have worked on it. The challenge is taking policy and making it real, and I have no idea how to do this as an AI researcher, but a lot of the things in it were very real. There’s a huge build-out of AI in the country, and while there are issues people hear about, from water use to whatever, we should be able to build things in this country without ruining places in the process. It’s worthwhile to spend energy on.
Nathan Lambert
(03:54:35)
I think that’s a role for the federal government. They set the agenda. And setting the agenda so that open-weight models are a first consideration is a large part of what they can do to get people thinking about it.
Sebastian Raschka
(03:54:49)
Also, for education and talent, it’s very important. Otherwise, if there are only closed models, how do you get the next generation of people contributing? You would only be able to learn after you joined a company, but at that point, how do you identify and hire talented people? I think open source is essential for educating the population and training the next generation of researchers. It’s the only way.
Nathan Lambert
(03:55:24)
The way that I could’ve gotten this to go more viral was to tell a story of Chinese AI integrating with an authoritarian state, becoming ASI and taking over the world, and therefore we need our own American models. But it’s very intentional that I talk about innovation and science in the US instead, because I think it’s both more realistic as an outcome and it’s a world that I would like to manifest.
Sebastian Raschka
(03:55:47)
I would say, though, that any open-weight model is a valuable model.
Nathan Lambert
(03:55:55)
Yeah. And my argument is that we should be in a leading position. But I think it’s worth saying it so simply because there are still voices in the AI ecosystem that say we should consider banning the release of open models due to the safety risks. And I think it’s worth adding that, effectively, that’s impossible without the US having its own Great Firewall, which is known to not work that well. The cost of training these models, whether it’s one million or a hundred million dollars, is attainable for a huge number of people in the world who want to have influence, so these models will be getting trained all over the world. We want this information and these tools to flow freely across the world and into the US so that people can use them and learn from them.
Nathan Lambert
(03:56:47)
Stopping that would be such a restructuring of our internet that it seems impossible.
Sebastian Raschka
(03:56:51)
Do you think maybe the big open-weight models from China are actually a good thing for US companies? You mentioned earlier they are usually one generation behind in terms of what they release open source. For example, gpt-oss-120b might not be the cutting-edge model, or Gemini 3 might not be, because they want to ensure it is safe. But when these companies see that DeepSeek-V3.2 is really awesome and is being used with no backlash or security risk, that could encourage them to release better models. Maybe that is a very positive thing.
Nathan Lambert
(03:57:30)
A hundred percent. These Chinese companies have set things into motion that I think would potentially not have happened if they were not all releasing models. I’m almost sure that those discussions have been had by leadership.
Sebastian Raschka
(03:57:45)
Is there a possible future where the dominant AI models in the world are all open source?
Nathan Lambert
(03:57:50)
Depends on the trajectory of progress that you predict. If you think saturation in progress is coming within a few years, essentially within the time where financial support is still very good, then open models will be so optimized and so much cheaper to run that they’ll win out. Essentially, this goes back to open source ideas where so many more people will be putting money into optimizing the serving of these open-weight common architectures that they will become standards. Then you could have chips dedicated to them and it’ll be way cheaper than the offerings from these closed companies that are custom.
Sebastian Raschka
(03:58:25)
We should say that the AI2027 report predicts—one of the things it does from a narrative perspective—is that there will be a lot of centralization. As the AI system gets smarter and smarter, the national security concerns will come to the fore, and you’ll centralize the labs, and you’ll become super secretive, and there’ll be this whole race.
Lex Fridman
(03:58:45)
…from a military perspective of how you… between China and the United States. And so all of these fun conversations we’re having about LLMs—all the generals and soldiers will come into the room and be like, “All right, we’re now in the Manhattan Project stage of this whole thing.”
Sebastian Raschka
(03:59:02)
I think 2025, ’26, ’27—I don’t think something like that is even remotely possible. I mean, you can make the same argument for computers, right? You can say, “Okay, computers are capable and we don’t want the general public to get them.” Or chips—even AI chips—but you see how Huawei makes chips now. It took a few years, but… I don’t think there is a way you can contain knowledge like that. I think in this day and age, it is impossible, like the internet. I don’t think this is a possibility.
Nathan Lambert
(03:59:37)
On the Manhattan Project thing, one of my funny things looking at them is I think that a Manhattan Project-like thing for open models would actually be pretty reasonable, because it wouldn’t cost that much. But I think that that will come. It seems like culturally, the companies are changing. But I agree with Sebastian on all of the stuff that he just said. It’s just like, I don’t see it happening nor being helpful.
Lex Fridman
(03:59:58)
Yeah. I mean, the motivating force behind the Manhattan Project was that there was civilizational risk. It’s harder to motivate that for open-source models.
Nathan Lambert
(04:00:08)
There’s not civilizational risk.

Future of NVIDIA, GPUs, and AI compute clusters

Lex Fridman
(04:00:10)
On the hardware side, we mentioned NVIDIA a bunch of times. Do you think Jensen and NVIDIA are going to keep winning?
Sebastian Raschka
(04:00:18)
I think they have the downside that they have to iterate a lot and manufacture a lot. And what they’re doing—they do innovate, but I think there’s always the chance that there is someone who does something fundamentally different, who gets very lucky, and pulls it off. But the problem is, I think, adoption. You know, the moat of NVIDIA is probably not just the GPU; it’s more like the CUDA ecosystem, and that has evolved over two decades. I mean, even back when I was a grad student, I was in a lab doing biophysical simulations, molecular dynamics, and we had a Tesla GPU back then just for the computations. It was fifteen years ago now.
Sebastian Raschka
(04:01:01)
They built this up for a long time and that’s the moat, I think. It’s not the chip itself. Although they have the money now to iterate, build, and scale, it really comes down to compatibility. If you’re at that scale as a company, why would you go with something risky where it’s only— … a few chips that they can make per year? You go with the big one. But then I do think with LLMs now, it will be easier to design something like CUDA. It took 15 years because it was hard, but now that we have LLMs, we can maybe replicate CUDA.
Lex Fridman
(04:01:35)
And I wonder if there will be a separation of the training and the inference- … compute, as things stabilize a bit and more and more compute is needed for inference.
Nathan Lambert
(04:01:47)
That’s supposed to be the point of the Groq acquisition. And that’s why part of what Vera Rubin is—
Nathan Lambert
(04:01:52)
… where they have a new chip with no high-bandwidth memory, or very little, which is one of the most expensive pieces. It’s designed for pre-fill, which is the part of inference where you essentially do a lot of matrix multiplications, and then you only need the memory when you’re doing this autoregressive generation and you have the KV cache swaps. So they have this new GPU that’s designed for that specific use case, and then the cost of ownership per flop is actually way lower. But I think that NVIDIA’s fate lies in the diffusion of AI still. Their biggest clients are still these hyperscale companies, whether it’s Google—which obviously can make TPUs—Amazon making Trainium, or Microsoft trying to do its own things.
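The prefill/decode split Nathan describes can be seen in a toy sketch. This is an illustrative numpy toy, not NVIDIA’s or any real inference engine’s code; the function names (`prefill`, `decode_step`) and the single-head attention setup are assumptions made for the example:

```python
# Toy illustration of why "prefill" and "decode" stress hardware differently
# in LLM inference. Not production code: one attention head, no positions.
import numpy as np

rng = np.random.default_rng(0)
d = 8                      # toy model/head dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # Scaled dot-product attention for a single query against the cached K/V.
    scores = q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

def prefill(prompt_embeddings):
    # Prefill: the whole prompt is processed in one batched matmul pass,
    # which is compute-bound (big matrix multiplications).
    K = prompt_embeddings @ Wk
    V = prompt_embeddings @ Wv
    return K, V            # the KV cache handed off to the decode phase

def decode_step(x, K, V):
    # Decode: one token at a time; each step appends one row to the KV cache
    # and re-reads all of it, so it is memory-bandwidth-bound, not FLOP-bound.
    q = x @ Wq
    K = np.vstack([K, x @ Wk])
    V = np.vstack([V, x @ Wv])
    return attend(q, K, V), K, V

prompt = rng.standard_normal((16, d))   # 16 prompt tokens
K, V = prefill(prompt)                  # one parallel pass over the prompt
x = rng.standard_normal(d)
for _ in range(4):                      # 4 sequential generation steps
    x, K, V = decode_step(x, K, V)
print(K.shape)                          # cache grew by one row per new token
```

Prefill is one large matrix multiplication over all prompt tokens at once, while each decode step does tiny matmuls but must stream the entire growing KV cache from memory, which is why a chip with plenty of compute but little high-bandwidth memory can plausibly target the prefill phase specifically.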
Nathan Lambert
(04:02:36)
As long as the pace of AI progress is high, NVIDIA’s platform is the most flexible and people will want that. But if there’s stagnation, then with creating bespoke chips, there’s more time to do it.
Lex Fridman
(04:02:50)
It’s interesting that NVIDIA is quite active in trying to develop all kinds of different products.
Nathan Lambert
(04:02:55)
They try to create areas of commercial value that will use a lot of GPUs.
Lex Fridman
(04:03:01)
But they keep innovating and they’re doing a lot of incredible research, so…
Nathan Lambert
(04:03:06)
Everyone says the company’s super oriented around Jensen and how operationally plugged in he is. It sounds so unlike many other big companies that I’ve heard about. And so long as that’s the culture, I think that you can expect that to keep progress happening. It’s like he’s still in the Steve Jobs era of Apple. So long as that is how it operates, I’m pretty optimistic for their situation because it is their top-order problem, and I don’t know if making these chips for the whole ecosystem is the top goal of all these other companies. They’ll do a good job, but it might not be as good of a job.
Lex Fridman
(04:03:43)
Since you mentioned Jensen, I’ve been reading a lot about history and about singular figures in history. What do you guys think about the great man view of history? How important are individuals for steering the direction of history in the tech sector? So, you know, what’s NVIDIA without Jensen? You mentioned Steve Jobs. What’s Apple without Steve Jobs? What’s xAI without Elon or DeepMind without Demis?
Nathan Lambert
(04:04:11)
People make things happen earlier and faster, whereas scientifically, many great scientists credit being in the right place at the right time. Eventually someone else would still have the idea. So I think that, in that way, Jensen is helping manifest this GPU revolution much faster and much more focused than it would be without a person like him there. This is making the whole AI build-out faster. But I do still think that eventually something like ChatGPT would have happened, and a build-out like this would have happened, but it probably would not have been as fast. That’s the sort of flavor I’d apply to the great man question.
Sebastian Raschka
(04:04:55)
These individual people are placing bets on something. Some get lucky, some don’t. But if you don’t have these people at the helm, it would be more diffused. It’s almost like investing in an ETF versus individual stocks. Individual stocks might go up or down more heavily than an ETF, which is more balanced. We’ll eventually get there, but I just think the focus is the thing. Passion and focus.
Lex Fridman
(04:05:19)
Isn’t there a real case to be made that without Jensen, there’s not a reinvigoration of the deep learning revolution?
Nathan Lambert
(04:05:26)
It could’ve been 20 years later, is the thing I would say.
Lex Fridman
(04:05:29)
Yeah, 20 is…
Nathan Lambert
(04:05:30)
Or another deep learning winter could have come… …If GPUs weren’t around.
Lex Fridman
(04:05:35)
That could change history completely because you could think of all the other technologies that could’ve come in the meantime, and the focus of human civilization would get… Silicon Valley would be captured by different hype.
Sebastian Raschka
(04:05:48)
But I do think there’s certainly an aspect where the GPU trajectory was all planned. But on the other end, it’s also a lot of lucky coincidences and good intuition. Like the investment into, let’s say, biophysical simulations. I mean, I think it started with video games, and GPUs just happened to be good at linear algebra because video games require a lot of linear algebra. And then you have the biophysical simulations. But still, I don’t think the master plan was AI. It just happened that someone, Alex Krizhevsky, took these GPUs and said, “Hey, let’s try to train a neural network on that.” It happened to work really well and… …I think it only happened because you could purchase those GPUs.
Nathan Lambert
(04:06:30)
Gaming would’ve created a demand for faster processors if… …NVIDIA had gone out of business in the early days. That’s what I would think. I think GPUs would still exist… …At the time of AlexNet and at the time of the Transformer. It was just hard to know if it would be one company as successful or multiple smaller companies with worse chips. But I don’t think that’s a 100-year delay. It might be a decade delay.
Lex Fridman
(04:07:01)
Well, it could be a one, two, three, four, five-decade delay. I mean, I just can’t see Intel or AMD doing what NVIDIA did.
Nathan Lambert
(04:07:08)
I don’t think it would be a company that exists.
Sebastian Raschka
(04:07:11)
A new company.
Nathan Lambert
(04:07:11)
I think it would be a different company that would rise.
Sebastian Raschka
(04:07:13)
Like Silicon Graphics or something.
Nathan Lambert
(04:07:15)
So yeah, some company that has died would have done it.
Lex Fridman
(04:07:19)
But looking at it, it seems like these singular figures, these leaders, have a huge impact on the trajectory of the world. Obviously, incredible teams are behind them. But, you know, having that kind of very singular, almost dogmatic focus- …is necessary to make progress.
Sebastian Raschka
(04:07:40)
Yeah, I mean, even with GPT, it wouldn’t exist if there wasn’t a person, Ilya, who pushed for this scaling, right?
Nathan Lambert
(04:07:47)
Yeah, Dario was also deeply involved in that. If you read some of the histories from OpenAI, it almost seems wild thinking about how early these people were like, “We need to hook up 10,000 GPUs and take all of OpenAI’s compute and train one model.” There were a lot of people there that didn’t want to do that.

Future of human civilization

Lex Fridman
(04:08:02)
Which is an insane thing to believe—to believe scaling before scaling has any indication that it’s going to materialize. Again, singular figures. Speaking of which, 100 years from now, this is presumably post-singularity, whatever the singularity is. When historians look back at our time now, what technological breakthroughs would they really emphasize as the breakthroughs that led to the singularity? So far we have Turing to today, which is 80 years.
Sebastian Raschka
(04:08:36)
I think it would still be computing, like the umbrella term “computing.” I don’t necessarily think that even 100 or 200 years from now it would be AI. It could still well be computers, you know? We are now taking better advantage of computers, but the fundamental thing is computing.
Lex Fridman
(04:08:53)
It’s basically a Moore’s Law kind of discussion. Even the details of CUDA and GPUs won’t even be remembered, and there won’t be all this software turmoil. It’ll be just, obviously, compute.
Nathan Lambert
(04:09:07)
I generally agree, but is it compute, or the connectivity of the internet, or the two merged together? Is it both of them?
Sebastian Raschka
(04:09:17)
I think the internet will probably be related to communication—it could be a phone, internet, or a satellite. And compute is more like the scaling aspect of it.
Lex Fridman
(04:09:29)
It’s possible that the internet is completely forgotten. That the internet is wrapped into the phone networks, like communication networks. This is just another manifestation of that, and the real breakthrough comes from just the increased compute—Moore’s Law, broadly defined.
Nathan Lambert
(04:09:46)
Well, I think the connection of people is very fundamental to it. You want to find the best person in the world for something, they are somewhere in the world. Being able to have that flow of information—AIs will also rely on this. I’ve been fixating on this since I said the dream of the one central model was dead; the thing that is evolving is that people have many agents for different tasks. People have already started doing this with different Claudes for different tasks. It’s described as many AGIs in the data center where each one manages and they talk to each other. That is so reliant on networking and the free flow of information on top of compute. But networking, especially with GPUs, is such a part of the scaling of compute. The GPUs and the data centers need to talk to each other.
Lex Fridman
(04:10:36)
Do you think there’s something very specific and singular to the fact that it’s neural networks that’s seen as a breakthrough? Like a genius move where you’re basically replicating, in a very crude way, the structure of the human brain, the human mind?
Sebastian Raschka
(04:10:54)
I think without the human mind, we probably wouldn’t have neural networks because it was an inspiration for them. But on the other end, I think it’s just so different. I mean, it’s digital versus biological, so I think it will probably be more grouped as an algorithm.
Lex Fridman
(04:11:11)
That’s massively parallelizable— —on this particular kind of compute?
Sebastian Raschka
(04:11:15)
It could have well been genetic computing, like genetic algorithms, just parallelized. It just happens that this is more efficient and works better.
Lex Fridman
(04:11:23)
And it very well could be that the neural networks, the way we architect them now, are just a small component of the system that leads to the singularity.
Nathan Lambert
(04:11:33)
I think if you think of it over 100 years, society can be changed more with more compute and intelligence because of autonomy. But looking at this, what are the things from the Industrial Revolution that we remember? We remember the engine—it is probably the equivalent of the computer in this. But there are a lot of other physical transformations that people are aware of, like the cotton gin and all these machines that are still known—air conditioning, refrigerators. Some of these things from AI will still be known; the word “transformer” could still very well be known. I would guess that deep learning is definitely still known, but the transformer might be evolved away from in 100 years with AI researchers everywhere. But I think deep learning is likely to be a term that is remembered.
Lex Fridman
(04:12:28)
And I wonder what the air conditioning and the refrigeration of the future is that AI brings. If we travel forward 100 years from now, what do you think is different? How does the world look? First of all, do you think there’s humans? Do you think there’s robots everywhere walking around?
Sebastian Raschka
(04:12:46)
I do think there will be specialized robots for certain tasks.
Lex Fridman
(04:12:49)
Humanoid form?
Sebastian Raschka
(04:12:50)
Maybe half-humanoid. We’ll see. I think for certain things, yes, there will be humanoid robots because it’s just amenable to the environment. But for certain tasks, it might not make sense. What’s harder to imagine is how we interact with devices and what humans do with them. I’m pretty sure it will not be the cellphone or the laptop. Will it be implants?
Lex Fridman
(04:13:16)
I mean, it has to be brain-computer interfaces, right? I mean, 100 years from now, it has to—given the progress we’re seeing now— —there has to be, unless there’s legitimately a complete alteration of how we interact with reality.
Sebastian Raschka
(04:13:33)
On the other hand, if you think of cars, cars are older than 100 years, right? And it’s still the same interface. We haven’t replaced cars with something else; we just made them better. But it’s still a steering wheel, it’s still wheels.
Nathan Lambert
(04:13:45)
I think we’ll still carry around a physical brick of compute— —because people want some ability to have a private interface. You might not engage with it as much as a phone, but having something where you could have private information that is yours as an interface between you and the rest of the internet is something I think will still exist. It might not look like an iPhone, and it might be used a lot less, but I still expect people to carry things around.
Lex Fridman
(04:14:08)
Why do you think the smartphone is the embodiment of privacy? There’s a camera on it. There’s a-
Nathan Lambert
(04:14:15)
Private for you, like encrypted messages, encrypted photos; you know what your life is. I guess this is a question of how optimistic you are on brain-machine interfaces. Is all that just going to be stored in the cloud, like your whole calendar? It’s hard to think about processing all the information that we can process visually through brain-machine interfaces presenting something like a calendar to you. It’s hard to just think about knowing your email inbox without looking. Like you signal to a computer and then you just know your email inbox. Is that something that the human brain can handle being piped into it non-visually? I don’t know exactly how those transformations happen. ‘Cause humans aren’t changing in 100 years.
Nathan Lambert
(04:15:05)
I think agency and community are things that people actually want.
Lex Fridman
(04:15:09)
A local community, yeah.
Nathan Lambert
(04:15:10)
So, like, people you are close to, being able to do things with them and being able to ascribe meaning to your life. I don’t think that human biology is changing away from those on a timescale that we can discuss. UBI does not solve agency. I do expect mass wealth, and I hope that it has spread so that the average life does look very different in 100 years. But that’s still a lot to happen in 100 years. If you think about countries that are early in their development process, to build all the infrastructure and have policy that shares one nation’s wealth with another is… I think it’s an optimistic view to see all that happening in 100 years- …while they are still independent entities and not just absorbed into some international order by force.
Lex Fridman
(04:16:13)
But there could be just better, more elaborate, more effective- …social support systems that help alleviate some levels of basic suffering from the world. With the transformation of society where a lot of jobs are lost in the short term, I think we have to really remember that each individual job that’s lost is a human being who’s suffering. When jobs are lost at scale, it is a real tragedy. You can make all kinds of arguments about economics or say it’s all going to be okay and good for the GDP because new jobs will be created, but fundamentally at the individual level for that human being, that’s real suffering. That’s a real personal tragedy.
Lex Fridman
(04:16:58)
And we have to not forget that as the technologies are being developed. Also, my hope for all the AI slop we’re seeing is that there will be a greater and greater premium for the fundamental aspects of the human experience that are in-person. The things that we all enjoy, like seeing each other and talking together in-person.
Nathan Lambert
(04:17:22)
The next few years are definitely going to see an increased value on physical goods and events- …and even more pressure from slop. The slop is only starting. The next few years will be more and more diverse-
Lex Fridman
(04:17:37)
Do you think we’ll all be drow-
Nathan Lambert
(04:17:37)
…versions of slop.
Lex Fridman
(04:17:38)
That we’ll all be drowning in slop. Is that what-
Nathan Lambert
(04:17:40)
So I’m hoping that society drowns in slop enough to snap out of it and be like, “We can’t. It just doesn’t matter. We all can’t deal with it.” And then, the physical has such a higher premium on it.
Sebastian Raschka
(04:17:53)
Even with classic examples, I honestly think this is true, and I think we will get tired of it. We are already kind of tired of it. Same with art. I don’t think art will go away. I mean, you have physical paintings. There’s more value, not just monetary value, but just more appreciation for the actual painting than a photocopy of that painting. It could be a perfect digital reprint, but there is something when you go to a museum and you look at that art and you see that real thing and you just think about, “Okay, a human.” It’s like a craft. You have an appreciation for that.
Sebastian Raschka
(04:18:25)
And I think the same is true for writing, for talking, for any type of experience, where it will be… I do unfortunately think it will be like a dichotomy, like a fork where some things will be automated. There are not as many paintings as there used to be 200 years ago. There are more photographs, more photocopies. But at the same time, it won’t go away. There will be value in that. I think that the difference will just be what’s the proportion of that. But personally, I have a hard time reading things where I see it’s obviously AI-generated. I’m sorry, there might be really good information there, but I have a certain feeling, like, it’s not for me.
Nathan Lambert
(04:19:08)
I think eventually they’ll fool you, and it’ll be on platforms that give ways of verifying or building trust. So you will trust that Lex is not AI-generated, having been here. So then you have trust in this- -channel. But it’s harder for new people- -that don’t have that trust.
Sebastian Raschka
(04:19:25)
Well, that will get interesting because I think fundamentally it’s a solvable problem by having trust in certain outlets that they won’t do it, but it’s all going to be kind of trust-based. There will be some systems to authorize, “Okay, this is real. This is not real.” There will be some telltale signs where you can obviously tell this is AI-generated and this is not. But some will be so good that it’s hard to tell, and then you have to trust. And that will get interesting and a bit problematic.
Nathan Lambert
(04:19:54)
The extreme case of this is to watermark all human content. So all photos that we take on our own- -have some watermark until they- -are edited- -or something like this. And software can manage communications with the device manufacturer- -to maintain human editing, which is the opposite of the discussion to try to watermark AI images. And then you can make a Google image that has a watermark and use a different Google tool to remove the watermark.
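One concrete shape Nathan’s idea could take can be sketched in a few lines. Note the hedges: this is a toy, it swaps the pixel watermark he mentions for cryptographic signing (closer in spirit to C2PA-style content credentials), and the key and function names are hypothetical:

```python
# Toy sketch of device-signed provenance: the capture device signs a hash of
# the photo bytes, and any edit invalidates the signature. Illustrative only,
# not any real vendor's scheme; the key here is a made-up placeholder.
import hashlib
import hmac

DEVICE_KEY = b"secret-key-burned-into-the-camera"  # hypothetical device secret

def sign_capture(photo_bytes):
    # The camera computes an HMAC over the raw capture at the moment of capture.
    return hmac.new(DEVICE_KEY, photo_bytes, hashlib.sha256).hexdigest()

def verify(photo_bytes, signature):
    # Anyone holding the (shared, toy) key can check the credential later.
    expected = hmac.new(DEVICE_KEY, photo_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"raw sensor data"
sig = sign_capture(original)
print(verify(original, sig))             # True: untouched capture verifies
print(verify(original + b"edit", sig))   # False: any edit breaks the credential
```

Any edit changes the hash, so the “human-captured” credential stops verifying; that is the property being described, where the manufacturer’s software would have to re-sign after approved edits, and why removing or forging such marks becomes the arms race mentioned next.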
Sebastian Raschka
(04:20:20)
Yeah. It’s going to be an arms race, basically.
Lex Fridman
(04:20:23)
And we’ve been mostly focusing on the positive aspects of AI. I mean, all the capabilities that we’ve been talking about can be used to destabilize human civilization with even just relatively dumb AI applied at scale, and then further and further, superintelligent AI systems. Of course, there’s the sort of doomer take that’s important to consider a little bit as we develop these technologies. What gives you hope about the future of human civilization? Everything we’ve been talking about—are we going to be okay?
Nathan Lambert
(04:20:59)
I think we will. I’m definitely a worrier, both about AI and non-AI things, but humans do tend to find a way. I think that’s what humans are built for—to have community and find a way to figure out problems. And that’s what has gotten us to this point. I think the opportunity of AI and related technologies is really big. I think there are big social and political problems in helping everybody understand that. I think that’s what we’re staring at a lot of right now; the world is a scary place, and AI is a very uncertain thing. And it takes a lot of work that is not necessarily building things. It’s telling people and understanding people, things that the people building AI are historically not motivated or eager to do.
Nathan Lambert
(04:21:50)
But it is something that is probably doable. It just will take longer than people want. And we have to go through that long period of hard, fraught AI discussions if we want to have the lasting benefits.
Lex Fridman
(04:22:04)
Yeah. Through that process, I’m especially excited that we get a chance to better understand ourselves at the individual level as humans and at the civilization level, and answer some of the big mysteries, like what is this whole consciousness thing going on here? It seems to be truly special. Like, there’s a real miracle in our mind. And AI puts a mirror to ourselves and we get to answer some of the big questions about what is this whole thing going on here.
Sebastian Raschka
(04:22:35)
Well, one thing about that is also what I do think makes us very different from AI and why I don’t worry about AI taking over is, like you said, consciousness. We humans, we decide what we want to do. AI in its current implementation, I can’t see it changing. You have to tell it what to do. And so you still have the agency. It doesn’t take the agency from you because it becomes a tool. You tell it what to do. It will be more automatic than other previous tools. It’s certainly more powerful than a hammer, it can figure things out, but it’s still you in charge, right? So the AI is not in charge, you’re in charge. You tell the AI what to do and it’s doing it for you.
Lex Fridman
(04:23:17)
So in the post-singularity, post-apocalyptic war between humans and machines, you’re saying humans are worth fighting for?
Sebastian Raschka
(04:23:27)
100%. I mean, the movie Terminator, they made in- -the ’80s, essentially, and I do think the only thing I can see going wrong is, of course, if things are explicitly programmed to do things that are harmful.
Lex Fridman
(04:23:43)
I think actually in a Terminator type of setup, I think humans win. I think we’re too clever. It’s hard to explain how we figure it out, but we do. And we’ll probably be using local LLMs, open source LLMs, to help fight the machines. I apologize for the ridiculousness. Like I said, Nathan, I’ve already been a big fan of yours for a long time. And I’ve been a big fan of yours, Sebastian, for a long time, so it’s an honor to finally meet you. Thank you for everything you put out into the world. Thank you for the excellent books you’re writing. Thank you for teaching us. And thank you for talking today. This was fun.
Sebastian Raschka
(04:24:26)
Thank you for inviting us here and having this human connection, which is actually-
Lex Fridman
(04:24:30)
…extremely valuable human connection. Thanks for listening to this conversation with Sebastian Raschka and Nathan Lambert. To support this podcast, please check out our sponsors in the description, where you can also find links to contact me, ask questions, give feedback, and so on. And now let me leave you with some words from Albert Einstein: “It is not that I’m so smart, but I stay with the questions much longer.” Thank you for listening, and hope to see you next time.

Transcript for Paul Rosolie: Uncontacted Tribes in the Amazon Jungle | Lex Fridman Podcast #489

This is a transcript of Lex Fridman Podcast #489 with Paul Rosolie.
The timestamps in the transcript are clickable links that take you directly to that point in
the main video. Please note that the transcript is human generated, and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation.
Click link to jump approximately to that part in the transcript:

Episode highlight

Paul Rosolie
(00:00:00)
… were standing there. Everyone is waiting, because at any moment an arrow could just fly through your neck, and there’s people holding shotguns. And the anthropologist, this little guy, is standing there in the front, and he’s going, “Nomole.” He’s going, “Brothers.” And then it happened. Then you start hearing people screaming, “Mashco! Mashco!” And people are screaming and women are lifting children and running into the huts and the dogs and chickens are going nuts and—
Lex Fridman
(00:00:25)
So fear.
Paul Rosolie
(00:00:26)
Fear. He’s going, “Look there. He has a bow. He has a bow.” And we’re looking up the beach and there’s just this clan walking down the beach with these seven-foot bows and they’re hunched over and they’re pointing at us. They’re going, “Look at that one.” They’re going, “Look, there’s a gun there.” And you can see them communicating to each other and the butterflies are swirling off the beach and they can hit a spider monkey out of the treetops at 40 meters. They can sneak up and you will never know they’re there. And so when that arrow passes through your body, you’ll only have a moment to realize it before you fall over. In order for any of this to make sense, I have to show you this footage.
Lex Fridman
(00:01:01)
And this has not been shown ever before.
Paul Rosolie
(00:01:05)
This is a world first.

Introduction

Lex Fridman
(00:01:08)
The following is a conversation with Paul Rosolie, his third time on the podcast. Paul is a naturalist, explorer, writer, and is someone who has dedicated his life to protecting the Amazon rainforest and celebrating the beauty of the natural world. He has a new book coming out in a few days titled Jungle Keeper that you should definitely go pre-order now. It tells some intense stories about his time in the jungle over the past several years, building up to a few epic recent events, including a new full-on extended encounter with an uncontacted tribe that we discuss in this podcast. Both the book and audiobook are great. I highly recommend it. If you would like to support Paul and his incredible team in their mission to protect the jungle, go to junglekeepers.org.
Lex Fridman
(00:02:01)
You can help with donations or by spreading the word or checking out the gala that Paul is hosting in New York on January 22nd in a few days. They are doing all they can to help raise funds for the mission of safeguarding as much of the rainforest as possible, and I think it’s a mission worth fighting for. The Amazon jungle is one of the most special and beautiful places on Earth. As an aside, allow me to look back briefly and mention something that I’ve been struggling with a bit. For context, I traveled to the Amazon rainforest with Paul a while back. It was an adventure of a lifetime, with lots of crazy twists and turns. We did record a podcast out there, literally in the jungle—Episode 429, if you want to go check it out. It was awesome.
Lex Fridman
(00:02:51)
And we also recorded a bunch of disparate footage of the journey just for fun. And I would still love to somehow put all that together into a cohesive video in case it’s interesting to someone. But I’ve learned just how difficult it is to organize and edit a pile of chaotically recorded footage like that. So, let’s see if I can pull it off. But in any case, this kind of raw vlog-style video is something that I would love to be able to do more of as a way to celebrate amazing human beings like Paul and others, including everyday people who I meet on my travels. So, I’ll keep trying, tinkering, learning, and I ask for your patience and support along the way. Now, back to our regularly scheduled programming. This is the Lex Fridman Podcast.
Lex Fridman
(00:03:45)
To support it, please check out our sponsors in the description where you can also find links to contact me, ask questions, give feedback, and so on. And now, dear friends, here’s Paul Rosolie.

Uncontacted tribes in the Amazon Jungle

Lex Fridman
(00:04:00)
We survived a challenging time out in the jungle about a year and a half ago, and since then, your life has gotten increasingly intense. You’ve achieved the incredible feat of now saving more than 130,000 acres of rainforest. And the goal that you’re working towards is protecting 200,000 acres more.
Lex Fridman
(00:04:23)
And doing so while facing extreme danger from narcos, narco-traffickers, the so-called Cocaine Mafia, in an escalating drug war. This is insane. These are new developments. Plus illegal loggers, as we’ve talked about before, and gold miners. And the incredible recent encounter with an uncontacted tribe. We’ll talk about all of this. So your new book, Jungle Keeper, opens with the killing of two loggers— … by the warriors of an uncontacted tribe, the Mashco Piro, in August 2024.
Lex Fridman
(00:04:57)
And then you reveal that you had your own dramatic encounter with the tribe two months later in October 2024. So if I may, let me read the opening of the book: “Far out on the western edge of the Amazon rainforest, deep in the Peruvian jungle, a pair of loggers plunged their chainsaws into the buttressed roots of an ancient ironwood. An ironwood, or shihuahuaco, of this size is a giant among giants, an emergent sentinel that reaches heights of 160 feet, towering over the rest of the canopy.” I’ve read that many are over 1,000 years old, by the way, as an aside. And you’ve found ones that are 1,200 years old.
Paul Rosolie
(00:05:41)
Yeah, incredibly old.
Lex Fridman
(00:05:43)
Anyway, you continue: “This particular tree had started its life as a tiny sapling in the great jungle, a story that began before the Spanish reached Peru, long before the United States was even a dream. At a time when Leonardo da Vinci was still honing his talents in a faraway part of the world, through the Renaissance, the First and Second World Wars, and the birth of our grandparents. This tree was out there slowly charging upward, anonymous, just one pillar among the billions of others. But on this day, in August 2024, as the two loggers worked, this witness of the centuries came crashing down through the canopy with such cataclysmic power that it shook the earth.” And then you go on to talk about how the shaking of the earth was felt and heard by the uncontacted tribe.
Lex Fridman
(00:06:34)
So you go on to describe how these particular loggers were killed— … by the uncontacted tribe of Mashco Piro. What do we know about these warriors of the uncontacted tribe?
Paul Rosolie
(00:06:48)
We know that across the Amazon basin there’s still perhaps thousands of clans of uncontacted peoples—people that are living in nomadic isolation in what remains of the intact Amazon basin and want to remain that way. And so, what happened with these loggers was that local people told them, “Don’t go out there. Don’t go into these territories.” And what happens is that people that aren’t from… there’s this thing with the jungle, people don’t believe that it’s as wild as the legends say. And so when they say there’s Calatos out there, there’s wild people out there, these loggers from another region go, “Yeah, that’s just some story. We’re fine. We’ll go.”
Paul Rosolie
(00:07:35)
“We have shotguns.” They don’t realize you’re dealing with a civilization of people that is still nomadic, still uses bamboo-tipped arrows, still lives naked in the Amazon rainforest, has knowledge of medicines that we have yet to encounter or may never discover, and that they can hit a spider monkey out of the treetops at 40 meters. And so while you’re using a chainsaw, they can sneak up and you will never know they’re there. And so when that arrow passes through your body, you’ll only have a moment to realize it before you fall over.
Lex Fridman
(00:08:07)
And we’re looking at something you posted on your Instagram— … which are the arrows that they use, which are bigger than you. So they’re like six or seven feet.
Paul Rosolie
(00:08:16)
Six, seven feet. More like seven feet. And that’s—
Lex Fridman
(00:08:19)
Look how sharp that is.
Paul Rosolie
(00:08:19)
…incredibly sharp. They cure it over the fire and they have a way of sharpening it. That edge of bamboo becomes incredibly knife-sharp. You can cut meat with it easily; I’ve done it. These arrows… Look at that. I mean, I’m 5’9″. That’s easily a seven-foot arrow.
Lex Fridman
(00:08:34)
Yeah, so for people who are just listening, this arrow is really a spear. Some people would think it was a spear, but they’re shooting this thing with a gigantic bow. That’s crazy.
Paul Rosolie
(00:08:45)
Yeah, and so to be holding that… Look at that, they even twist the fletching so the arrow spins in the air. They have incredible craftsmanship. And then you see all the little string on there is plant fibers that they’ve woven. And then this is them.
Lex Fridman
(00:09:01)
The warriors of the tribe.
Paul Rosolie
(00:09:03)
The warriors of the tribe. And so the fact that we’re sitting here talking on microphones and that we have airplanes and cell phones and all the things that we have in the modern world, and yet we still live in this age where there’s, right now at this moment, people living out in the jungle who have been there since before history—it is an incredible thing.
Lex Fridman
(00:09:24)
Let me look this up on Perplexity: what are the technologies we modern humans have that the Mashco Piro do not? It’s just interesting to think about the kind of technologies we take for granted. Energy and power—obviously all the electricity generation, grids, batteries, solar panels, and electric motors. Metals and materials—mass-produced steel, aluminum, advanced alloys, plastics, composites, glass, concrete; all of those things.
Paul Rosolie
(00:09:52)
All of those things.
Lex Fridman
(00:09:52)
Tools, of course, and machinery. The infrastructure of roads and bridges and buildings, and the weapons of war—everything but the spears and the arrows that they have—and then medicine and biology. Of course, they probably have complicated medicines that they’ve developed for their own— …that are available within the jungle.
Paul Rosolie
(00:10:14)
I mean, that entire list is “no.”
Lex Fridman
(00:10:16)
No.
Paul Rosolie
(00:10:17)
I mean, metal—I think you have to be able to excavate into the earth and forge metal. These people don’t even… As a Peruvian anthropologist said to me, “You know, people think of them as Stone Age tribes.” And he was like, “They don’t have stones.” So they don’t know that water… They see water that they drink, but they don’t know that water freezes because they’ve never seen it. They don’t know that water boils because they don’t have… they don’t even make clay pots. They just have their bamboo and their string. And so they’re living an incredibly simple life. So all of that, I mean, even a camera is a miracle to them. You have to bend your mind to even understand how far back they are. It’s like looking into thousands of years ago, like the Stone Age.
Lex Fridman
(00:11:03)
When they hear the sounds of the chainsaws, the sounds of machinery in the distance— …I wonder how they can possibly comprehend what that is.
Paul Rosolie
(00:11:12)
I think they view it as a demonic, destructive force. When I show you the encounter that we had… we left with more questions than answers, but one of the things that they were able to communicate across the language barrier was, “Why are you cutting down the trees?” They don’t like it.
Lex Fridman
(00:11:34)
Yeah. That represents to them the danger that the outside world brings, the destruction that the outside world brings.
Paul Rosolie
(00:11:42)
They see us as the destroyers of worlds.

Intense new encounter

Lex Fridman
(00:11:45)
So tell me about this encounter in October of 2024.
Paul Rosolie
(00:11:52)
So, in order to tell you about that encounter, I think we need to orient people into where we’re talking about. We’re talking about this river that runs through the western edge of the Amazon rainforest that you know well now, after spending time there with me. It’s a high tributary of the Amazon rainforest where you have the main river channel and then smaller and smaller tributaries. And the smaller you get, the less trafficked they are. And so this river has remained wild through the centuries. And even during the ’90s when there was a mahogany boom where people went out for mahogany trees, there were very few people going up this river.
Paul Rosolie
(00:12:30)
And so 20 years ago, when I first got to the region and people were telling me that there were uncontacted tribes out there, it was always in the realm of something… You know, it’s like people say, “There’s Bigfoot,” or, “Don’t go there, it’s haunted.” It was like a tall tale almost. And even the Peruvian government at the time that I went to Peru first, which was 2006, their official position was that the tribes are a myth. “There’s no such thing as the tribes.” That was the official position. And you would hear these stories of people that got shot. You’d meet someone four days upriver, deep in the Amazon, that had an arrow.
Paul Rosolie
(00:13:08)
And you’d look at this thing and it had this mega gravity. And so as we’ve created Jungle Keepers and now we’re protecting 130,000 acres of this river, we’re protecting the plants and the animals and the ancient trees, and trying to preserve the ecosystem, counting the butterflies and conducting ecological surveys, what we’ve inadvertently found ourselves the caretakers of is the fact that these people, in order to continue living, have to remain isolated. They want to remain isolated. That’s their one mandate as a civilization, the tribes of the Mashco Piro. And so in October, as Jungle Keepers now, we’re working with the indigenous people.
Paul Rosolie
(00:13:52)
What we do is we take loggers and gold miners and make them into rangers and give them better jobs, and we try to protect the forest. And those people who live up in the remote indigenous community, they called us on a satellite phone and they said, “Directors, you’ve been working with us and telling us you want to help us. The tribes are coming out. What do we do?”
Lex Fridman
(00:14:13)
So, even they don’t really know, when the tribes emerge from the deep jungle— —what to do?
Paul Rosolie
(00:14:19)
They were terrified.
Lex Fridman
(00:14:20)
What was your thinking when you got the phone call?
Paul Rosolie
(00:14:23)
When we got the phone call, it was a mix of things. You know, because we’re over here trying to get land concessions and doing all this important work, part of me was like, “That can’t be real, so we’re going to keep our heads down.”
Lex Fridman
(00:14:34)
Bigfoot is emerging— —from the forest.
Paul Rosolie
(00:14:35)
Like, yeah, sure. And then we hung up and we said, “Okay, maybe tomorrow if they’re still there or something.” And then it was crazy because it was probably about noon and we had an important day of meetings. We had a meeting with the police, we had a meeting with the landowner, we were trying to do all this stuff for the conservation work. And then I got together with the core team of directors—JJ, Mohsin, Stefan—and we said, “Wait, if this is real, we have to get there like now. Like now, now.” And so we dropped what we were doing, canceled the meetings, put other people on the meetings. We got a boat and we called Ignacio. We called our most hardcore ranger.
Lex Fridman
(00:15:12)
Who has been shot.
Paul Rosolie
(00:15:14)
Who in 2019 was shot in the head by an arrow and still bears the scar, and he barely survived. And we said, “Look, this is going down.” He said, “I already know, because the whole river already knows.” And we said, “Can you get us there by tomorrow morning?” And he said, “Look, it’s a two-day journey by boat. So, no.” And we said, “Is there any way you can get us there?” And he went, “I’ll get you there.” And so we got a couple of sacks of rice, a couple of cans of tuna, our dry bags, our tents. We got on a boat by 6:00 PM and we started riding up the river—
Lex Fridman
(00:15:46)
Through the night?
Paul Rosolie
(00:15:47)
…through the night. And so, a two-day boat journey that we’re trying to flex in one night. And so I was at the front with the headlamp— …with the torch. And so the first few hours, it was clear, and that comet—remember that comet—
Paul Rosolie
(00:15:59)
…that was going? There was that comet in the sky. I remember looking at the comet and going, somehow, “This is it.” I knew this was it. And the first few hours were clear, the stars were out, and it was beautiful. And then it clouded over and the lightning started, and then it just apocalypse-downpoured. And from midnight until 8:00 AM it was just the front of the boat with the light, and it was just Star Wars vision of just raindrops and galaxies and moths flying in my eye. People don’t realize you can get hypothermia in the tropics, but as you’re going at night, even if it’s 80 degrees outside, in the rain and the wind at night in a lightning storm, you’re freezing.
Paul Rosolie
(00:16:37)
And so by 2:00 AM I’m convulsively shivering, and we’re using the caiman eyes on the side of the river because it was so dark we couldn’t see where we were going, so those shine back at you. So, I was finding the caiman eyes and then motioning with the light to Ignacio where to go, and he knew how to find the channel and we had to jump the waterfalls. We did the two-day boat ride in one night.
Lex Fridman
(00:17:01)
Nice.
Paul Rosolie
(00:17:02)
And we got there and we arrive at this community where—and it’s morning now and the howler monkeys are calling over the jungle and the little naked children are all by the side and everyone’s scared. And we get a hug from this guy, Bacho, who we know, and they’re like, “Come in, come in, come in.” And they’re like, “The tribe came out yesterday. We saw a few of them on the beach and they’re gone now.” And so we collapsed, we fell asleep. It rained the whole day. That night we went out and we looked for them and there was this crazy moment where we’re standing on this beach and their footprints were there.
Paul Rosolie
(00:17:37)
And the local indigenous anthropologist was standing there and we’re looking out into the Amazon beyond, and there’s just all this wreckage. It looked like something very Cormac McCarthy, just dark sky, iron clouds, and we’re standing there. Everyone is waiting, because at any moment an arrow could just fly through your neck. And there’s people holding shotguns and the anthropologist, this little guy, is standing there in the front and he’s going, “Nomole.” He’s going, “Brothers.” There’s only a few words that intersect between the languages and he’s going, “Brothers, we’re here. We don’t want to hurt you.” He’s speaking in the Yine language.
Paul Rosolie
(00:18:13)
And he’s saying, “Come out.” And you can tell by their footprints—the trackers explained this to us—that you could see it was just the balls of their feet. So right as we pulled up to the beach, they had run. So, they were there listening to us, and he’s going, “Nomole, come out. It’s okay. Lay down your arms. We’ll lay down ours. Nomole.” Just kept saying nomole. And nothing happened. And we went back to the village and we went to sleep. We wake up the next morning and it’s 5:00 AM. And again, we’re trying to save the jungle. We’re in a race against time to get these land concessions. And so my team, like Mohsin and Stefan—JJ couldn’t come because he was in town actually signing paperwork and interviewing loggers and landowners.
Paul Rosolie
(00:18:53)
And also, he didn’t think there was any chance this was going to be real because in his entire 50-something years in the Amazon, he’s never seen them. And so we’re getting ready to leave in the morning. We had tents on the boat. And Ignacio comes up to me and he goes, “You’re my director, right? You’re my boss?” And I went, “Yeah.” He goes, “I need to talk to you like a friend.” I was like, “Yeah, shoot, go.” And he goes, “You’d be an idiot to leave right now. They’re coming.” And so he convinced us to stay. We pull our tents off the boat. Stefan and Mohsin go off with their cameras. They start shooting the community that we’re in. These are monkey eaters and fishermen. And everything’s quiet.
Paul Rosolie
(00:19:31)
And I opened my laptop, and I was working, just writing my book. And then it happened. Then you start hearing people screaming, “Mashco! Mashco!” And people are screaming and women are lifting children and running into the huts, and the dogs and chickens are going nuts. And I mean—
Lex Fridman
(00:19:50)
So fear. Fear.
Paul Rosolie
(00:19:51)
Fear.
Lex Fridman
(00:19:52)
Because we should say, kind of the obvious thing is, as far as anyone remembers, any encounters, any minimal, small encounters with these tribes— …have been violent.
Paul Rosolie
(00:20:01)
Extremely violent. These tribes have remained alive because of their violence. Almost like the Spartans or the Comanches, they seem to have adopted violence as a first response to contact.
Lex Fridman
(00:20:12)
Maybe you can correct me on this, but I read that in the late 19th century and early 20th century, there was documentation of encounters with these tribes by the private armies of the rubber barons. And those encounters were, from the rubber barons’ armies’ perspective, violent. And so maybe the lesson the uncontacted tribes learned is that any interaction with the outside world is going to have to be violent because they have to defend themselves.
Paul Rosolie
(00:20:43)
Yeah. You had colonial missionaries in the 1600s and 1700s. Then you had the rubber barons in the late 1800s into the 1900s—just periods of extraction, domination, and cruelty. And these tribes, their grandparents must have told them, “When the outside world comes, you shoot first. That’s the only thing that’s going to keep you alive.”
Lex Fridman
(00:21:00)
Do you think the memory of those violent encounters defines how they think about the world?
Paul Rosolie
(00:21:06)
Yeah. Because even in my lifetime, in the 20 years I’ve spent in the Amazon, Ignacio was shot in the head. My friend Victor survived a violent encounter where they murdered somebody on a beach. I mean, they’ve shot numerous people. They’ve even shot people who were trying to help them, people who were trying to give them clothing and bananas. They call it “porcupining” them, where they find a body on the beach with so many arrows that when they fall over, all the arrows are sticking up. And they’ll do it out of curiosity too, where it’s like, “Hey, you’re wearing a suit. That’s weird. We’ve never seen anybody in a black and white suit.”
Paul Rosolie
(00:21:41)
And then they get the clothing, you know, the way Teddy Roosevelt would shoot a bird for science. They just want to look at you. And so they’re operating on a different… They don’t have the moral system that we have or understand. They’re truly wild.
Lex Fridman
(00:21:56)
How does Ignacio think about them? Because they almost killed him.
Paul Rosolie
(00:21:59)
Yes. It depends on the mood you get him in because one day I asked him, “If you could see the people that shot you in the head, what would you say to them?” And he looked at me with that Ignacio look and he said, “I wouldn’t say anything. I would kill as many of them as I could.” I said, “Okay.” He also had a time where he was in a really remote guard station working for the Ministry of Culture, and they showed up and he knew that they were going to kill him. And so he climbed up into the peak of the little structure there. And just like a dog in a car, that greenhouse effect, in the top at midday with the sun beating down, he was huddled over a mattress while they were walking on the deck—
Paul Rosolie
(00:22:38)
…moving pots and pans and looking at the items and artifacts. And he knew that if he was found, they’d kill him. But if he stayed up there, he was literally frying to death. He said he was soaking the mattress. He could feel himself dying. For two hours he had to stay there. And he was constantly making this decision of, “If I come out, I die. If I stay here, I probably die.” He’s like, “Probably die is better than definitely die.” So he was terrified. And so as they’re screaming, “Mashco,” and everybody’s running and women are lifting children, Ignacio comes and finds me. And you can see in his eyes, you can see when somebody has that PTSD response where he’s breathing heavy. He’s moving behind trees.
Paul Rosolie
(00:23:16)
He’s keeping me close to him, and he’s going, “Look there. He has a bow. He has a bow.” And we’re looking up the beach, and there’s just this clan of naked men walking down the beach with these seven-foot bows, and they’re hunched over and they’re pointing at us. They’re going, “Look at that one.” They’re going, “Look, there’s a gun there.” And you can see them communicating to each other. And the butterflies are swirling off the beach. And, you know, in these moments you go, “Am I entering a moment that is a one-way door? Is this an irreversible situation?” Because there’s an unfolding situation where they’re coming towards us. Are they going to attack? What do they want?
Paul Rosolie
(00:23:54)
I mean, I am soaked in chills right now just talking about it because I remember standing there and going, “There’s no way this is real life.” It’s burned into my memory, them walking down the beach and seeing them with the bows. And of course, Stefan is up there just firing off pictures and Mohsin is down getting video. And the community that we’re with, you hear shotgun shells loading home. But they’re also getting ready. And there’s this one guy, an anthropologist named Romel, who has been the only person who has communicated with them peacefully. He did it in 2013 where he stood on the beach and he spoke to them.
Paul Rosolie
(00:24:33)
He knows enough of the local dialect that overlaps with theirs that he can speak to them. And so as they’re coming down the beach, the butterflies are flying up and we’re all waiting. And again, you’re talking, how many meters? 30, 40 meters. For an arrow, you loose a seven-foot arrow that weighs nothing, you’re talking about 300 meters easy. They can shoot you from across the river. So Ignacio was pulling me and he was like, “Down. You go down. You stay behind this tree. You watch them from there. Watch out, that guy has an arrow.” He was watching everyone because you could see, he’s like, “This is how it happens.”
Lex Fridman
(00:25:08)
Did you think this might be the last day you have on this earth? Were you afraid?
Paul Rosolie
(00:25:14)
I was, yeah, of course I was afraid. I’m with my two best friends and a bunch of people that I work very closely with. And you’re in the middle of nowhere and there’s no help coming, and you’re with like—
Paul Rosolie
(00:25:26)
…you know, 26 people and there’s 50 of the tribe that you can see, and you know that they’re surrounding us. There are men on the other side of the river. And then we had guns looking back towards the jungle because we knew we were being surrounded. And again, this is always the story of someone’s uncle, brother, or cousin telling a story that happened, and now it’s happening. And it’s not happening in the shadows, it’s not happening in the middle of the night. It’s happening in broad daylight. They’re walking out onto the beach. It’s like the first time they saw the dinosaurs in Jurassic Park. You’re going, “There’s no way.”
Lex Fridman
(00:26:02)
And you’re walking on the knife’s edge. It’s funny you say this about taking pictures. ‘Cause there’s two ways to think of this situation: this is fascinating, or this is extremely dangerous. And it’s both. It is a knife’s edge. So you could approach it one of two ways. Like, if I die, I die. I’m gonna take some good pictures.
Paul Rosolie
(00:26:22)
But also we’re there—that was also our mission, you know? As the directors of Jungle Keepers, we’re working with this community to ensure that their lifestyle can continue, and they’re saying, “Hey, that’s great, but as an indigenous community, we’re dealing with these people that come out and raid our stuff, try and steal our women, that kill our hunters, and now they’re coming out. We want you to see it.” And so documenting it is part of our job. We have to show what happened that day. And so those guys were shooting and then—yes, very seriously—Mohsin’s wife and I, we always joked like, “Oh, if the tribe ever comes out, you stand in front of him, you take the arrow.
Paul Rosolie
(00:26:59)
He has kids.” And that day we were strategically positioning ourselves being like, “You, down. You cannot get killed.” And you start in those moments to go, “Okay, where will I be safe from arrows? Where can I run to the river if they come over?” And you start planning, “Okay, if I jump into the river…” I was going, “Okay, I got my bag. I have a can of tuna. I have a flashlight.” I was like, “If I jump into the river and float down and I live, I’m still days upriver.” And so you start going through all these things, but—
Lex Fridman
(00:27:32)
And of course, the Mashco Piro people are probably thinking exactly the same thing.
Paul Rosolie
(00:27:37)
Well, the interesting thing is that they’re initiating the contact, right? They are the ones coming out of the jungle and confronting us.
Lex Fridman
(00:27:44)
And fundamentally, with that contact, they’re at least giving peace a chance. They’re trying the peaceful contact first, correct? Or was there a violent element? Like, what did you sense in the caution of them emerging onto the beach?
Paul Rosolie
(00:28:02)
Fear.
Lex Fridman
(00:28:03)
Fear.
Paul Rosolie
(00:28:03)
As they came out, you could see fear on them because of the way they were hunched over, the way they had their bows ready. They were worried. And so they came, and Romel is standing there, closer than any of us, at the edge on one side of the river, and it was like shirts versus skins. It was two tribes looking at each other with a thousand years of civilization between them. And Romel’s going, “Put down your bows. Put down your bows and we can talk.” And he’s saying, “Nomole, Nomole.” He kept saying, “Nomole.” He kept saying, “Brothers, brothers, please put down your…”
Lex Fridman
(00:28:35)
So Nomole means brother in a language that they might be able to understand.
Paul Rosolie
(00:28:39)
Nomole means brother in a language that they do understand, and it seems like they refer to themselves as the Nomole. The brothers.
Lex Fridman
(00:28:48)
So potentially, that’s what they call themselves as a tribe, Nomole?
Paul Rosolie
(00:28:53)
Exactly, and actually, the anthropologists that we’ve been speaking to since this event have been explaining to us that Mashco-Piro—you know, Piro is the group that they’re from, these various nomadic tribes, and Mashco basically means, like, wild Piros. And so the one thing we know they call themselves is Nomole.
Lex Fridman
(00:29:12)
So at the end of this, we might converge towards the name of this tribe being Nomole versus Mashco-Piro?
Paul Rosolie
(00:29:17)
The Nomole, yeah. It seems like the most current, or at least their self-appointed, identity is the brothers, Nomole.
Lex Fridman
(00:29:24)
Anyway, there’s these shredded warriors on the beach. They’re gigantic.
Paul Rosolie
(00:29:29)
With seven-foot arrows, and we’re all standing there. And so the first thing, again, you just think of like the peace pipe in the old stories. And the first thing is let’s make them an offering of peace. And so they got a canoe with no motor, and we piled it with plantains, like just full of plantains, 16 feet of endless green bananas. And then, I mean, the balls on this guy, the anthropologist, he gets into the river, takes the canoe—and it’s the dry season, so the river’s only about three or four feet deep at the channel—and so he walks this thing out, this one man walking in the face of all these warriors. And he takes the boat and he pushes it towards them.
Paul Rosolie
(00:30:12)
And they rush out, and they start grabbing the bananas, and they’re not going, “Okay, we will unload these bananas and use them later.” They’re saying, “These are my bananas” and “You’re grabbing your bananas.”
Paul Rosolie
(00:30:22)
And they’re fighting and they’re yelling and they’re all grabbing them, and then they push the boat back and he talks to them a little bit. Again, it’s not a perfect translation. So he’s saying, “Where have you come from? What do you want? Who’s your leader?” And he’s trying to establish these things, and they’re saying things, and they all sort of talk at the same time, like a flock of birds. It wasn’t like one man speaks. And there were no women. The women were nowhere to be seen. And actually, at one point as we were preparing the second canoe of bananas, there was a moment of absolute panic.
Paul Rosolie
(00:30:58)
And it happened when there was a noise behind us and you just hear a bunch of shotguns swing around. Mohsin goes down. I go running away from the river now because I want to see it coming if there’s an attack. And I’m standing there, me and this guy were sharing a tree as cover, and he’s got a shotgun and he’s looking back into the forest and peering through. And what was happening was the women of the tribe had come silent-foot and they were just pulling the yucca out of the ground and taking the banana plants and ruining the farm completely. They were raiding the farm behind us while the men were talking up here. So again, were they peacefully contacting us, or were they like, “Hey, we need some food, so go make a diversion and take the food out the back”?
Lex Fridman
(00:31:42)
So you really were surrounded.
Paul Rosolie
(00:31:44)
We were completely surrounded.
Lex Fridman
(00:31:46)
So they could have murdered all of you, probably.
Paul Rosolie
(00:31:51)
Easily. We were outnumbered five to one at the least.
Lex Fridman
(00:31:54)
Yeah. And it’s probably fair to say that part of the reason they didn’t attack is maybe they wanted peace, but part of the reason is they didn’t know how deep this goes. They didn’t know if you have backup.
Paul Rosolie
(00:32:04)
They don’t know if we have backup. They also had questions. Some of their questions were incredible. “How do we tell the difference? How do we know who the good guys and the bad guys are?” Because to them, all you outsiders are the same. So, who were the ones cutting down the trees?
Lex Fridman
(00:32:22)
And those are the ones they know are the bad guys.
Paul Rosolie
(00:32:25)
Well, the big trees seemed to have incredible significance to them. They’re significant to us in a different way, but to them, it’s offensive on an almost religious level to cut a big tree, as if you’re killing their gods.
Lex Fridman
(00:32:40)
So there’s a spirituality to the trees to them.
Paul Rosolie
(00:32:43)
It seems like that.
Lex Fridman
(00:32:43)
And so whoever’s cutting them down is a source of destruction on a spiritual, existential level.
Paul Rosolie
(00:32:50)
Yeah. “Why would you destroy our home?” And I think they’re right.
Lex Fridman
(00:32:54)
Yeah. In a deep sense, the uncontacted tribes represent the deep jungle. And so if they’re threatened, that means the deep jungle is threatened.
Paul Rosolie
(00:33:06)
Yeah. I mean, they are the human voice of the jungle. They’re asking questions and they’re also demanding. They’re clapping at us and waving and saying, “Send more bananas.” And so we loaded up another boat and pushed it out, and this time we gave them some rope. They all had rope tied around their waists—they love rope. Some of them were wearing rope that they had made, which is brown or reddish, and some of them were wearing rope that they had clearly pillaged from logging camps or the communities, because it was modern nylon paracord. They had this wound around their waists like a thick belt. And they took the second boat, which had some rope and some plantains on it.
Lex Fridman
(00:33:47)
So some of these guys might have been the ones that murdered the loggers.
Paul Rosolie
(00:33:51)
Could be.
Lex Fridman
(00:33:51)
From a couple of months before that.
Paul Rosolie
(00:33:53)
Absolutely, could be. But as Romel was talking to them, he turned to us and he said, “You know, this group… the other groups call me the Grandfather. This group, I don’t know any of these. This is first contact. This is the first time this group is talking to us.” And you saw people from maybe 12 years old to what looked like 40-something, like a banged-up 40. And no really old people and no women.
Lex Fridman
(00:34:22)
So this is a particular clan of the uncontacted—
Paul Rosolie
(00:34:24)
It’s a particular clan.
Lex Fridman
(00:34:25)
… tribe who they’ve never contacted. Yeah, are there, just from your memory, interesting aspects about the way they were trying to communicate? Like you said, clapping. I think it’s, from an anthropology perspective, from a human perspective, fascinating. How do you talk to people from an uncontacted tribe like this? So clapping, yelling. It’s interesting to know that there’s not a hierarchy where there’s a leader that represents them.
Paul Rosolie
(00:34:49)
Well-
Lex Fridman
(00:34:49)
Or is that something we know for sure?

Never-before-seen footage of tribe warriors

Paul Rosolie
(00:34:51)
Before even coming to talk to you about this, we passed this through anthropologists and ethicists and people, and we said, “Look, can we even talk about this?” Because if you talk about this and you tell people there are these uncontacted tribes, people have misconceptions. They go, “They’re the last free people on Earth. They’re living the real life. We need to go join them. We want to see them. We want to photograph them.” There’s all this bad stuff that happens, and all these people want is to be left alone. So, the last thing we want to do is kill the thing we’re trying to protect by telling the world. But at the same time, they’re speaking out. They’re saying, “Stop cutting our trees.
Paul Rosolie
(00:35:25)
Leave us alone.” And so if we’re not successful in the greater Jungle Keepers’ mission of protecting this river, they cease to exist. And so advocating for these people requires us to have this conversation. It requires us to have this footage and to show the world, and then leave them alone. In order for any of this to make sense, I have to show you this footage.
Lex Fridman
(00:35:46)
And this has not been shown ever before.
Paul Rosolie
(00:35:49)
This is a world first. I mean, up until now, that’s the other thing. You know, we’re sitting there this day and the only thing you’ve ever seen are these blurry images of them from someone’s cellphone from 100 meters away. And we’re sitting there with 800-millimeter lenses with a 2X teleconverter and R5s. And so this is as we’re looking through the binoculars, anticipating the tribe coming. I’ll put a little bit of volume so you can hear it. And then you can see, this is the moment. This is us running when they’re like, “They’re out. They’re coming down the beach.”
Lex Fridman
(00:36:25)
Oh, wow. Oh, wow.
Paul Rosolie
(00:36:31)
You see how many thousands of butterflies? But look at the way they move. Look at the way they point. Look at him with his bow.
Lex Fridman
(00:36:39)
Wow.
Paul Rosolie
(00:36:45)
There it is.
Lex Fridman
(00:36:48)
They’re trying to figure out what they’re looking at.
Paul Rosolie
(00:36:54)
And they didn’t know what the cameras were there for. So this was the guys looking out the back. So he’s going, “There’s something back here.” … hear the women in the woods. And I’m looking in every direction because I’m going, “Which way is the arrow coming from?” But see, he has his shotgun. This is just like a farm shotgun; even if he shot it, you have to use a stick to bang out the shell. But see, as they come closer, they start laying down their… See, he’s laying down his bow and arrow. They understand.
Lex Fridman
(00:37:22)
So these are warriors, and the way they were at first moving, it really looked like they were ready for violence. And now they’re all standing relaxed. And they’re smiling? Are they smiling?
Paul Rosolie
(00:37:33)
Smiles come at some point. I would say that one of these guys seemed like he was in a leadership position. He did most of the talking.
Lex Fridman
(00:37:43)
What’s with the different hand gestures? This holding your hand up to the face—all of this means something.
Paul Rosolie
(00:37:51)
All of this means something. Some had red smeared on their faces. Some had yellow.
Lex Fridman
(00:37:55)
Did you have a sense of hierarchy at all, like the boss?
Paul Rosolie
(00:37:58)
Again, there were just these two dominant guys. And this guy and one other guy who looked almost like him, like his brother. A lot of gesturing.
Lex Fridman
(00:38:10)
Wow. This is incredible, Paul.
Paul Rosolie
(00:38:22)
Yeah. You see the rope? Some of that rope is…
Lex Fridman
(00:38:31)
Yeah, I can kind of tell who the bosses are.
Paul Rosolie
(00:38:33)
Right? All right, so a few of the… But see, even that, as he’s pointing- … with them, what are you pointing at?
Lex Fridman
(00:39:02)
You guys are nuts. You guys are nuts.
Paul Rosolie
(00:39:07)
You see as they’re rushing in, there’s this desperation. They’re hungry. They also-
Lex Fridman
(00:39:12)
Is that you in the water, or is that Romel in that case?
Paul Rosolie
(00:39:15)
In this particular video, it’s a guy named Liner. But see these guys? They’re fighting over it. It’s not that we’re all going to share it later. It’s, “I get mine, you get yours.” And so what does that mean?
Lex Fridman
(00:39:27)
Yeah. But here, they’re in peaceful mode, for sure.
Paul Rosolie
(00:39:31)
Now, after we’d given them several boatloads of bananas, things did calm down. Romel said to them, “Look, we’ve given you what we can give you. We gave you sugarcane. We gave you boatloads of plantains.” And so then there came a time where things were a little more relaxed. They were walking around. We had a great moment where we’d given them the plantains and the bananas, and he’d said, “Look, that’s it. We’ve given you what you asked for. You asked for bananas. We don’t cut the trees here. All of us here are not tree-cutters.
Paul Rosolie
(00:40:09)
We’re indigenous people.” And he couldn’t explain who the hell we were, but we were like, “We don’t cut the trees. We’re not the loggers.” And they’re like, “Okay.” So then at some point, Ignacio went out and started, you know, he’d go like this and they’d go like this. He’d dance a little bit, they’d dance a little bit. And then there was this very human moment of just sort of joking.
Lex Fridman
(00:40:30)
So even Ignacio warmed up.
Paul Rosolie
(00:40:31)
Even Ignacio warmed up. Once he realized that it didn’t seem like anyone was going to die that day, things did calm down. It was a false sense of security. Here, I’ll show you. There’s a couple more things that are relevant here, though. This is just them interacting with the boat.
Lex Fridman
(00:40:48)
This is truly incredible, man.
Paul Rosolie
(00:40:51)
But then they don’t have boats. They don’t have stone tools. They don’t… Imagine if you showed them ice. You know, they wouldn’t…
Lex Fridman
(00:40:59)
This is historic.
Paul Rosolie
(00:41:03)
I mean, you hear of Percy Fawcett encountering the tribes. We’ve heard anecdotal accounts of the tribes. This is the first time that the tribes have been filmed, that we can hear their voices, that there’s a documented interaction happening. I mean, look how comfortable he’s getting. He’s so close. They asked him for his shirt. He gave his shirt.
Lex Fridman
(00:41:24)
This is incredible.
Paul Rosolie
(00:41:25)
They asked him for his pants. He gave his pants. He was in his underwear. You see this? The shirt that’s over his shoulder. Ignacio took off his JungleKeepers shirt and threw it to the anthropologist, and then the anthropologist walked off and threw it to them. So over the shoulder of that uncontacted naked warrior is a JungleKeepers shirt with the logo showing.
Lex Fridman
(00:41:47)
That’s great.
Paul Rosolie
(00:41:47)
So their second shirt and they’re…
Lex Fridman
(00:41:49)
You just upgraded that guy’s status in the tribe. He’s gonna be the new boss with that shirt.
Paul Rosolie
(00:41:54)
He’s got a dope polo. Yeah, and he didn’t even have to order it. But yeah, this is in the aftermath, when things were calm. And my sort of moment with this that really stuck with me was when Romel said to me, “They’re asking about you.” And I said, “Me?” And he goes, “Yeah, they’re asking about you.” Again, I’m not tall, but compared to the people in the village, I was a little bit taller with big shoulders. And he said, “They said you look like a warrior. Could you come forward? Show them that you don’t mean any harm. Show them your palms.” And so he pulled me up onto the beach. This was right before they left. See, I hold up my hands. Listen. And they sang back. They’re singing. They raised their hands. I raised my hands.
Lex Fridman
(00:42:50)
Wow.
Paul Rosolie
(00:43:08)
And then we were left watching them walk off the beach into the jungle with everything that we’d given them, and they were gone. And so we went downriver the next day and the community said to us, “Okay, now you understand this is real. This is terrifying. You felt that fear. You have a duty, if you’re going to protect this river, to protect us from them and to help us figure out what future they want. If they want to come to us, if they want to learn farming, whatever it is, that’s fine.” But they were like, “We need protection from you guys.” And then in this video in the beginning, I’m narrating to the camera and walking around right as they’re coming up the beach. But you see this guy, right there in the blue shirt?
Paul Rosolie
(00:44:00)
That’s George. And he was very friendly, very confident with this. He said, “Don’t be scared. They’re not going to hurt us.” And the next day, we went back to town—a long journey back to town. We go to sleep, we wake up in the morning, and we find out that early the following morning, our friends in the community had said, “Okay, the tribe is gone. We gave them all the things they wanted. We gave them sugarcane, bananas, and we said, ‘Please come back, you’re welcome here anytime.'” And George was driving a boat with people on it, and as they were going upriver, 200 of the tribe ran out, surrounded the boat, and they started firing arrows.
Paul Rosolie
(00:44:34)
Everybody else could hit the deck and get under the benches and hide behind bags of rice. George was driving, and he was leaning back as he was driving as fast as he could. And one arrow came in just above his scapula and came out by his belly button. So he had that seven-foot arrow through him. And so they pulled him out—and I saw the boat afterward, and there’s just horrific amounts of blood all over the boat. And he had to be medevacked out, and somehow he lived. We were able to help get him a helicopter and get him evacuated, all of this. But again, you just go, “What?” You know, these people came out of the jungle and they asked for bananas.
Paul Rosolie
(00:45:16)
We gave them bananas and we, in every way possible, said, “We mean peace. We want friendship with you.” And then the next day they attacked.
Lex Fridman
(00:45:28)
What do you think happened? Why do you think their mind turned? Or maybe this has to do with the role of violence in their society. Maybe it’s so integrated into how they interact with the world that they don’t even see that as a fundamental shift in the interaction.
Paul Rosolie
(00:45:50)
I don’t know. I don’t know what to make of it. And the only thing I can think is that the way they hid the women from us, you don’t know—for them, maybe we’re not allowed to see their women. The one thing that we got was that as George’s boat and this other boat were going upriver—which is how they live; it’s not like they were doing anything wrong, these people live in a community days into the Amazon and were going fishing—they came around a bend, and I think they spooked the tribe. The tribe might have just acted defensively and said, “We don’t know who this is.” The motors could have set them off; we don’t know. But they shot him. And then the other thing is the thing with the necklace.
Paul Rosolie
(00:46:29)
I’ve asked anthropologists about this, and at this point their answer was, “You know more than we do.” Because two of them had the exact same item around their necks, and it seems to be a Brazil nut and then some sort of casing around the side, and it looked like animal teeth positioned in there. And it’s like, what are you carrying? Are you carrying medicine? Are you carrying some sort of a totem? But both of them had it, and it’s not a comfortable thing to wear around your neck—it was grapefruit-sized or bigger.
Lex Fridman
(00:47:03)
Do you have a sense if that’s a container or is it just like a totem?
Paul Rosolie
(00:47:08)
It seems like a container. They didn’t let it get wet; they cared for it. The guy in this picture, he’s got this piece of tree fiber that he has on him, and then he’s gotten his hands on Brazil nut sacks—plastic sacks from one of the farms across the river. And so they just take, they take, and one of them got a machete. As they were leaving, again, during that period where he got friendly, he was leaving and he had the machete and was playing with it and swinging it at butterflies. And one of my friends, this guy Bacho, he goes, “Oye, deja mi machete.” He’s like, “Drop the machete.” And the guy just looked at him and was like, “Yeah, come and get it.” It’s like, “Yeah, you cross the river and see what happens.”
Lex Fridman
(00:47:46)
Do you think he figured out or they later figured out how to use a machete?
Paul Rosolie
(00:47:51)
Oh, they know machetes.
Lex Fridman
(00:47:52)
They understand the machete?
Paul Rosolie
(00:47:53)
Yeah, they do raids for machetes.
Lex Fridman
(00:47:56)
They understand the power of sharpened metal.
Paul Rosolie
(00:47:59)
I mean, it’s an Excalibur sword to them. But that one has stuck with me because I wonder, what were they carrying in there?

The mysteries of the jungle

Lex Fridman
(00:48:08)
So what are some of the questions? Like, if you could know everything you’d want to know about them, maybe in the space of communication and language? That’s really interesting. You mentioned that there’s all kinds of calls, animal calls. So they obviously know how to mimic animal calls.
Paul Rosolie
(00:48:25)
Yeah, they can use animal calls with enough complexity that they can do basic commands. They can speak in capuchin; they use tinamou calls. Some of our rangers were upriver recently, and they found a Nomole trail, a Mashco-Piro trail. It was Ignacio, of course, and he made a secret whistle they do. He whistled out into the jungle and he’s listening, and they whistle back. So he and everybody on the team just ran back to the boat and got out of there. But at least they answered. They didn’t just shoot. He whistled, they whistled, they said “out,” and he got out.
Paul Rosolie
(00:49:10)
But it’s like we don’t know: where are the old people? Do they not survive? What are the marriage rituals? How is reproduction handled? There are one or two children in the Amazon that I know of who washed downriver on a log and were rescued by communities and raised. They either learn the native dialect or Spanish, and then at some point, somebody will ask, “What was it like when you lived with them?” And the answer is always the same: “I forget.” They don’t talk about it.
Lex Fridman
(00:49:46)
So maybe we know that they value secrecy. I mean, when you’re afraid of the outside world, part of that is confidentiality. They all sign NDAs.
Paul Rosolie
(00:49:57)
Yeah, there’s some really good NDAs.
Lex Fridman
(00:50:00)
It’s understood. It’s an NDA. There are no lawyers; there’s only one way to execute the law.
Paul Rosolie
(00:50:08)
Yeah. It’s either a really strong NDA or just that it is savage living out there in the jungle. You’re eating monkeys and turtles, and you’re hungry for days on end. Your wife might get stolen by another tribe; your baby might get stolen. Imagine the botflies and the things they must put up with. I mean, what we experienced in three days of living out with modern camping gear and headlamps, they’re doing none of that. You could put us out there naked, and it’s a very different story.
Lex Fridman
(00:50:43)
Yeah, the brutality of nature. Werner Herzog comes to mind. That they have to live in that. But then, there must be something about the jungle that serves as a catalyst for spirituality, so they must also have a religious component, a spiritual component, that probably unifies them. There must be an ideology they operate under.
Paul Rosolie
(00:51:06)
Oh, there must be. They probably have a belief system. They probably have amazing origin stories. It would be amazing to know what things they have accurately and inaccurately guessed about us, about the outside world. I mean, they’ve never heard of the country they live in or of World War II or any of it. And so seeing them come across the beach was surreal because it’s like this aperture into history.
Lex Fridman
(00:51:36)
By the way, I mean, you do have a certain look, so you realize, as I’m saying this to you, your face is carved in some wood somewhere. And there’s a few of them gathering around and still singing about the great gringo with the—
Paul Rosolie
(00:51:50)
The full beard and the big nose. They probably drew him like he’s got hair all over his face and a huge nose, and they tell their children.
Lex Fridman
(00:51:57)
Yeah. And it could be anything. To the children, they could say, “This is the monster you should be afraid of,” or you could be the most beautiful encapsulation of the outside world. It could be everything in between. You don’t get to control the myths.
Paul Rosolie
(00:52:12)
You don’t get to control the myths. Yeah, God only knows, but I mean, it’s—
Lex Fridman
(00:52:16)
That’s so interesting.
Paul Rosolie
(00:52:17)
So now in that 130,000 acres that we have, we know—and this is what we sort of have to come out of the closet with—we are now protecting these people. And the only way to do that is to make sure that they’re not contacted, that they don’t get machine guns shot at them by the narcos, and that crazy hippie gringos don’t go down there thinking they’re going to join the coolest commune on Earth.
Lex Fridman
(00:52:46)
So how much of the land that they move about is within the 130,000 acres of rainforest you’ve been able to save? And how much of it is not? How much of it is in the extra 200,000 acres that you’re trying to save?
Paul Rosolie
(00:53:01)
Most of that 200,000 that we’re still trying to protect is territory that is theirs. People always ask me this. They’re like, “How could you buy the Amazon? That doesn’t make sense.” And it’s like, well, I have bad news for you. Somebody already owns it, and we have to buy it from them so that they don’t log it. These landowners are going to sell their forest to the logging companies, because owning 10,000 acres of the Amazon doesn’t help you if you’re a third-generation jungle man living in the city. They’re going to contract either the narcos or the loggers or the miners to go out there and use it, and they’ll get a little money. And those people, when they see these tribes, will kill them. That’s for sure. Shotguns and machine guns in the end will win, not to mention the germs.
Lex Fridman
(00:53:53)
So all the money you’re trying to raise and all the land that you’re trying to save, it’s all towards that: protecting the deep jungle. So when you buy up the jungle, you just want to let it be, let the natural ecosystem come back to life in the cases where it was logged, or just flourish if it hasn’t been?
Paul Rosolie
(00:54:12)
Again, we’re talking about the last great jungle. I always called it the last endless forest because this place is so incredibly remote. The other question I always get is, “Why is this river so important?” For my whole career, 20 years in the Amazon, it’s been that it’s massively intact forest. Places like the ancient forest where the trees have never been cut, so it’s forest that’s been growing since the dawn of time. Thousands of species can be on a single Shihuahuaco tree. It’s Avatar on Earth. You can see the sweat come off your skin and rain down and then drink it out of the river; you’re part of the chemical physical reality there. It’s one of the last places that’s untouched.
Paul Rosolie
(00:54:58)
This changed everything because we realized that along with the butterflies and the monkeys and the jaguars and the trees and the ecosystem, there’s also a human culture that will, in the next few years, cease to exist, that will be exterminated if we don’t protect them. When you look back at what happened to indigenous cultures all over the world over the past few centuries, we collectively now have a chance to undo all of those injustices by at least doing one right—by saying these people want one thing: to just be left alone. Imagine if we just protected the river. Then it’s not that they’re this thing that’s vanishing from reality, but they get to continue living that way.
Paul Rosolie
(00:55:47)
And if they want to come out and contact us, great, and if they want to continue living like this for the next 10,000 years, they can. That’s what we’re working with now. It’s become so much more important than just trying to protect the environment. It’s like protecting Yellowstone or Yosemite or the sequoias that occur nowhere else on Earth. You protect the things that are unique and special, the crown jewels. In both a biological way and an anthropocentric way, this has now become a river with global historic significance because this story is going to play out in the next 18 months.
Lex Fridman
(00:56:30)
You’re trying to save more and more rainforest. And the mission is clear, because there’s just this deep jungle that’s full of this incredible life. And now we know, with the uncontacted tribes, there’s a lot of interests that don’t care about the jungle; they’re pushing and want to cut it down, want to destroy it. And the mission is pretty clear: you just want this whole territory to be preserved.
Paul Rosolie
(00:56:56)
Yeah. And that’s what makes it so beautiful is that this is one of those crown jewels. This is one of those special places on earth where it’s like a time capsule for nature, for human culture, for biodiversity, for climate services, for everything. And then, you know, I think people get overwhelmed when you say, “Okay, we have to save the environment. We have to save the ocean.”
Paul Rosolie
(00:57:20)
This is one watershed. It’s 300,000 acres and we’re already at 130,000. We’ve shown we can do it. The loggers are happy to turn into rangers. People all over the world have become Jungle Keepers supporters. We have several thousand people that every month give us between five and a thousand dollars, and that keeps the rangers going, that employs the local people. So it’s not just drawing a line and making a park and saying, “Everybody stay out.” No, you have the Nomole, you have the indigenous people, you have a future for the indigenous people where their kids don’t have to worry about eating monkeys. They can be park rangers.
Paul Rosolie
(00:57:57)
And I get blowback from people right away where I say, “And people can even come see it through the treehouse.” And people go, “Oh, are you going to bring tourists into the wildest place on earth?” And it’s like, man, look at that jungle. There’s 300,000 acres of that, and we’re talking about two blades of grass on a football field that we access so people can see it, which makes a huge difference. And so the fact that we can share it with people… Look, since the first time I came here and spoke to you, the amount to which you’ve made it possible for us to protect this place, the amount of spider monkeys and jaguars and giant anteaters and those ancient millennium trees that you’ve made it possible to protect is monstrous. And so—
Lex Fridman
(00:58:43)
Thank you, brother. It’s been—
Paul Rosolie
(00:58:45)
No, thank you.
Lex Fridman
(00:58:45)
—it’s been an honor of a lifetime to be able to watch you. I tell this to a lot of people: there’s certain people I’m glad exist in this world, because you’ve educated me and millions of people about the beauty of the jungle and how important the fight to save the jungle is. So if you’re listening to this, you absolutely must go to junglekeepers.org. Please donate, or post about it and share it with friends. You’re also doing a gala in New York at the end of January. So if you can, please go and donate to help save the jungle.
Paul Rosolie
(00:59:25)
Yes, please do. Because our first conversation led to the first surge, where people realized what Jungle Keepers was, and then, because we got this surge of support, we were able to expand our work and protect more acres. A lot of our major donors and small-scale donors came in because of that. So these are people that went, “Wait, if Lex thinks it’s a good idea, then we’ll do it.” I think they came in based on your trust.
Lex Fridman
(00:59:50)
I guess also I should say it’s not enough to speak and communicate the importance of saving the rainforest. You actually have to have incredible people there making it happen. And we have talked, and we’ll talk more, about the dangers and the complexities involved in how to navigate everything. And one of the things, and the reason I’m really excited about what you’re doing, is I just got to meet the team, and it brings a smile to my face. Several of the people there I know are extremely competent. Stefan, somebody we’ve talked about—
Lex Fridman
(01:00:23)
Yes, he likes to take pictures of stuff, but primarily the thing he does incredibly well is run everything: organize everything to make sure that stuff happens, and happens quickly and efficiently. These are the kinds of things that are required to make stuff like this happen in the complex, sometimes lawless environment that the jungle operates in. So the team is incredible, which is why, when you connect the money to the solution of the problem, it’s the team. The team makes it happen.
Paul Rosolie
(01:01:07)
I didn’t know that people like Stefan existed.
Lex Fridman
(01:01:10)
Yeah, me neither. When I met him, he was a beautiful, wonderful human being.
Paul Rosolie
(01:01:16)
I’m, you know, again, I can use a machete to catch a fish. But his systems knowledge and his ability… I mean, his bandwidth is the size of a country. It has its own area code. Just like JJ opened the door of the Amazon and gave us that local indigenous perspective—I mean, yeah, okay, I told some stories about it, but Stefan came in and went, “Okay, you guys have good ideas, but you’re both jungle guys.”
Paul Rosolie
(01:01:44)
“You’re not helping each other.” And running those systems, making the website, and making it possible to connect the people that care with the indigenous ranger program and make sure the rangers have shirts and cans of tuna and that there’s a person running the ranger team—I mean, these are things that I couldn’t dream of organizing. I can’t even make my bed. You know, I can’t even get that far.
Lex Fridman
(01:02:06)
Caveman want fish.
Paul Rosolie
(01:02:07)
Caveman want fish.
Lex Fridman
(01:02:09)
Watching you hunt for fish with a machete is one of the most awesome things I’ve ever seen. You were literally able to catch a fish with a machete. So that’s what you’re good at. And then Stefan is good at everything else.
Paul Rosolie
(01:02:22)
Everything else. You remember the Most Interesting Man in the World? And they’re like, “You know, he once had an awkward moment just to see how it felt.” And it’s like, Stefan’s to-do list doesn’t exist because it’s already done. It’s just incredible.
Lex Fridman
(01:02:35)
Quick pause. Bathroom break.
Paul Rosolie
(01:02:36)
Oh, 100%. I’m so happy about that. Yes, sir.

Tribe’s diet: Monkeys, turtles, and turtle eggs

Lex Fridman
(01:02:41)
And we’re back. One thing I forgot to ask you is about the diet of the uncontacted tribes. You mentioned potentially monkeys and turtle eggs? So, what do we know about what they eat? What’s the source of protein? Do they eat monkeys?
Paul Rosolie
(01:02:59)
Oh, yeah. Their primary sources of food, I would say, would be monkeys, turtles, turtle eggs, and small game like the paca, the large rodent that’s about the size of a beagle. Capybaras. Stuff they can shoot. They don’t really fish. And we know these things because our indigenous trackers and our rangers find their camps, so they’ll find some of those little thatch structures they make on the beaches and we see the bones. There’ll be tapir bones. There’ll be turtle shells, which seem like their closest thing to a bowl. The day that we interacted with them, they did find a bowl. We saw them walking away with it in one of the farms, and then days later we found it destroyed. So it didn’t seem like they saw much utility in the bowl.
Lex Fridman
(01:03:47)
It’s a temporary container.
Paul Rosolie
(01:03:49)
It’s temporary. So, they kill it. They make a fire. They must be amazing at making fire. I don’t know how they do it out there.
Lex Fridman
(01:03:56)
It’s very difficult because everything is wet.
Paul Rosolie
(01:03:59)
I don’t know how they do it. And I’m a really good firestarter.
Lex Fridman
(01:04:02)
And it’s tough in the jungle.
Paul Rosolie
(01:04:03)
It is almost impossible most of the year because everything is wet to its core.
Lex Fridman
(01:04:09)
So you think they cook the meat?
Paul Rosolie
(01:04:11)
I mean, they have to be cooking their meat, from a parasite standpoint, from everything. We know that they’re cooking their meat. We see that they’ve cooked it. You know, there’s not a lot of excess berries out there. Things like berries and nuts and fruits, the monkeys and the birds and the bats are getting to those first. I mean, that’s what fruit does, right? A tomato is green until its seeds are mature, and then it turns red to advertise, “Eat me,” so that you eat it and your gut transports the seeds somewhere else; the seeds get free transportation. In the jungle, that happens so quickly that we’re never getting produce.
Lex Fridman
(01:04:45)
In the book, you have a picture of a native girl on the Las Piedras… having monkey for lunch.
Paul Rosolie
(01:04:52)
Yes.
Lex Fridman
(01:04:53)
It looks really strange. It almost looks a little bit like cannibalism, because the monkey looks like a small human. I don’t know what it is about monkeys. There’s a human element to them. In their eyes, in the form factor, but even in the warmth they bring to the interaction.
Paul Rosolie
(01:05:22)
Yeah, I was babysitting her and she was six at the time, Dira, and her parents went out and we were left at camp. And they just said, “You know, keep an eye on her. Make sure nothing eats her.” And I said, “Sure.” And she was like, “Hey, I want lunch.” And I said, “Great. Well, what is there?” And she pulls out this monkey head and she was like, “It’s ready,” and she starts pulling at the ear. And she’s like, “I can’t get the ear. Can you help me?” So I pulled off the ear with my teeth and then I gave it to her, and then we just shared this monkey head back and forth.
Paul Rosolie
(01:05:51)
And we’re sitting there and I took a few pictures of her as she’s eating. And I have this video where I go, “What’s your favorite food?” And she was like, “Monkey.” And I said, “Not cake?” And she was like, “Monkey.” And she was pulling its lips off and, like you said, you see the teeth and the eyes and it’s sort of grilled in static agony. And it looks like a tortured human and she was just enjoying it.
Lex Fridman
(01:06:12)
Let me look it up on Perplexity how many people in the world eat monkey. Does it taste good?
Paul Rosolie
(01:06:24)
If it was prepared right, it would taste good, but they just throw it over the fire and then eat it. So, even if you took a perfectly good chicken and did that, it wouldn’t taste great.
Lex Fridman
(01:06:33)
There’s no reliable global count of how many people eat monkey meat, but available data suggests many millions of people regularly or occasionally consume primate bushmeat, especially in parts of Africa, Latin America and Asia. I mean, she looks like that is her favorite meal. Monkey.
Paul Rosolie
(01:06:53)
Yeah. Yeah, we had a great time.
Lex Fridman
(01:06:55)
Who are we to judge?
Paul Rosolie
(01:06:56)
Who are we to judge? I mean, have a tuna sandwich or a monkey face, whatever.
Lex Fridman
(01:07:02)
She’s loving it. That’s awesome. That’s a good picture there.
Paul Rosolie
(01:07:05)
And she’s adorable.
Lex Fridman
(01:07:06)
Yeah. Now that some time has passed, when you look back at that encounter with the uncontacted tribe, which I really do think is historic, what do you think about? What lingers with you?
Paul Rosolie
(01:07:19)
Honestly, I’m still processing it. I’ll still find myself just staring off, sort of remembering it or looking at the footage. But it felt like the voice of the jungle was speaking. These people are… there’s that separation between humans and nature where we go, “We have to protect nature,” you know? It’s like explaining what water is to a fish. We’re part of it. We depend on it. And these are people that depend on it 100%. And as we sit here surrounded by technology and concrete and civilization, they’re still out there right now. And the fact that we’ve been trying to protect their home without even really knowing that they were in it, because they’re so elusive, it gives you perspective on where we came from and how far we’ve come.
Paul Rosolie
(01:08:14)
I look at simple things. You board an airplane or you take a picture and you go, “This is a miracle.” I think having that perspective of having interacted with them where you go, “How much work does it take to make this?” If you and I were standing in the jungle and somebody said, “You have to make this,” how many years before we came up with this? How many rubber trees, and where would we get the metal, and what would we use as dye, and how do we make the spring mechanism and figure out how to make it work? I don’t know. They are working with the bare essentials. So it’s an interesting reference point to start at in terms of how incredibly privileged we are.
Paul Rosolie
(01:09:00)
The other thing is we have written text, we have so many different types of text, and we have code, and we have language, and we have music, and we can communicate in all these different ways. And they have spoken word. They have oral tradition, and that’s it. And so they’re operating the way our ancestors did, persisting in modern times. I think, for me, I come back to the world and it moves very fast when I see it because I’m still stuck on, you know, whether or not you and I can drink out of that puddle. You know? And thinking about that.
Lex Fridman
(01:09:42)
The big questions of life.
Paul Rosolie
(01:09:43)
The big questions of life.
Lex Fridman
(01:09:45)
Yeah. You’re right from the perspective of the uncontacted tribe. Going from the technological world to the jungle, you realize the majesty, the magic of the biological system that is the jungle, that is nature. But from their perspective, there is also a majesty and magic to the technological world. The human-created technological world of the pen and the computer and the light bulb, that too is magical. So sometimes we don’t give enough credit to both: the magic of the technological world, all the incredible things humans have been able to build, and the magic of the natural world.
Paul Rosolie
(01:10:34)
I think you and I and people that spend large amounts of time in the wilderness, especially somewhere as remote and fundamental as the Western Amazon, have a different perspective on it. Because I think that when you’re born in it, you don’t necessarily have the framework to appreciate how far we’ve come. You go, “Yeah, I got on the train today. I checked my phone. I FaceTimed my mom,” and you’re like, “This is all normal.” It’s like we found a way to take things out of the ground and mix them together into magic devices that can do anything. It’s mind-blowing.
Lex Fridman
(01:11:14)
There’s a deep optimism to that. And you actually write in the book, which I really like, I think somewhere in the beginning, quote: “Given all the death and destruction I’ve witnessed, it would be easy to slip into the popular anti-human narrative that we are a plague on the planet and there’s nothing that can be done, but my career in conservation has given me a glimpse into an alternate narrative. I’ve met people who are proving more and more that something can be done. I’m talking about real heroes, people who have dedicated their lives to redeeming the evil that is capable of being waged by the human soul, people who are guarding the flame amidst the storm, proving every day what so many have forgotten.

Jane Goodall

Lex Fridman
(01:11:56)
There is still hope.” And that speaks against the cynicism and maybe apathy and the view that humans are a destructive force in the world. That speaks to the fact that humans, with all the technological elements that we have created, can actually do a lot of good. I wrote in my notes here a quote from the great Jane Goodall: “The greatest danger to our future is apathy.” So caring about the world, having optimism for the world, having hope for the world is the way to have an impact, to help save it. But on that, I have to ask you about Jane. She passed away on October 1st. Some humans in this civilization of ours can open our eyes to the beauty of the world, and she is one of the best of them. And she’s had an impact on your life. Can you speak to the impact that she’s had?
Paul Rosolie
(01:13:03)
I mean, when I grew up, being dyslexic, I couldn’t read for a very long time. And so my parents read to us every night, which was amazing considering how hard they were working. But they’d find the time to give us an hour of reading every night, whether it was Lord of the Rings or Sherlock Holmes or Jane Goodall. And so I grew up with Jane being this figurehead of conservation and of adventure, sort of a living historical figure, this legendary person. And so then one time, right around the time that I’d been going to the jungle for a few years, I got to go see Jane speak, I think it was at NYU. And I sat in the crowd and watched her, completely amazed.
Paul Rosolie
(01:13:47)
At the time, my cousins had been telling me that I should write down my stories, the stories of taking care of an anteater and stories of catching anacondas. And they’re like, “Write, you know? These are such good stories.” And so I’d been writing them down. And I just remember after the talk, she did at least an hour on stage and then hundreds of people lined up, and she sat there, and each of those people wanted a moment with this legend.
Paul Rosolie
(01:14:18)
And so she has to take a picture, shake their hand, they say, “You mean so much to me.” She says, “Thank you.” And then they move on and they say, “We’ll send you the picture.” “Okay, great.” And so then I got my moment and we waited in line for a long time and I gave her this manila envelope with two chapters in it. One chapter was Lulu the Giant Anteater from Mother of God, and the other chapter was me, JJ and Pico out on the river catching anacondas, just talking about how amazing the jungle was. And I said, “I’d love it if you could endorse my book that doesn’t exist yet.” And I felt like such a loser doing that.
Paul Rosolie
(01:14:47)
And I felt so stupid because I feel like everyone was probably asking something of her and it’s incredibly draining to talk to that many people, even if it is for a good reason. And 48 hours later, she got back and she said, “This is incredible. I would love to write a recommendation for your book as soon as you find a publisher.” And what happened with that is that Jane, the way I think of it is, she waved her very powerful magical wand in my direction, and she had the incredible compassion and presence to actually—I mean, after talking to that many people and being on the road 300 days a year and being Jane Goodall, this living legend scientist, to actually do something so mundane as look at some kid’s writing.
Paul Rosolie
(01:15:36)
And of course when I went to publishers they said, “Jane who? Who said that they would endorse your book?” Because everyone had said no. Every publisher in New York had already said no. And then after that, HarperCollins took me on and they said, “Well, if Jane Goodall thinks it’s a good idea, then we think it’s a good idea.” And it became Mother of God and then because of that, Jungle Keepers, Dax, everything else stemmed from that. So had Jane not been the legend that she is truly in every moment, my whole career would never have happened, which also means that those thousands of heartbeats and thousands of acres in the Amazon wouldn’t be protected because we never would’ve started Jungle Keepers.
Lex Fridman
(01:16:17)
And she did that not because you’re special, she did that to everybody. And now just imagine the scale, the impact she’s had because of that. And guess what? You have a bit of that responsibility now as well. There’s young people that walk up to you in that way and you have that responsibility of seeing them, of giving them a chance, seeing the potential in every single human being that walks up to you.
Paul Rosolie
(01:16:45)
It definitely… I would say that we could do four hours on just Jane, what she did for humanity, what she did for science, what she did for women, what she did for wildlife, the amount of other people that she inspired and gave careers to, everything she did for me. But to me, that presence of mind when you reach that level, to not be worried about your own travel and your own schedule and getting some rest, but to actually look at it, has informed how I operate. And indeed, like you say, at this point, as strange as it is, people will stop me on the street and say, “Hey, I watch your videos every night with my kids,” or someone will say, “How do I get your job?”
Paul Rosolie
(01:17:27)
“I’ve been watching you for years and I’d love to help conservation.” And so it’s made it so that I follow her example, where you stop what you’re doing and you pay attention. Because you don’t know: that might be the next kid that’s out there saving a river, or the next person that makes an innovation that makes it possible to clean rivers, or whatever their dream is. But Jane was in the hope business. She always said it: that not losing hope was key to staying in the fight. And we live at a time when apathy is a poison peddled by the darkness. They’re trying to make you feel disoriented and apathetic and scared.
Paul Rosolie
(01:18:12)
And fighting back against that and having conviction and passion and fire and hope are the only way that we’re going to fight that. And she understood that, and she spent her whole life spreading it, guarding the flame against the storm, and tipping her candle to others to light them. I mean, that was her whole thing.

Advice for young people

Lex Fridman
(01:18:30)
What advice would you give to young people on how to do that? Those young Pauls sitting there, and your life story’s just incredible in that way. You’ve taken a leap into adventure, into the unknown. What would you recommend they do?
Paul Rosolie
(01:18:49)
I think the thing that I try to communicate to them—and again, my inboxes are filled with people from Finland, Spain, Georgia, saying, “How do I get your job? How do I get out there and do it?”—and it really is just that: you throw yourself headfirst into adventure. You just do it. And I remember hearing people say that, like, “You know, if I can do it, you can do it.” And I remember how hollow that sounds because I’m like, “Yeah, you’re on a talk show or you just wrote a book.” These titans of their industries and innovators saying, “Oh, if I can do it, anybody can do it.” But now that we’re protecting all this rainforest and that I’ve lived with the animals and met the tribes, and it’s becoming this global movement—you know, I didn’t have a PhD.
Paul Rosolie
(01:19:38)
There’s that quote that someone less qualified than you is living your dream life and has your dream job right now, and I am the poster child for that because I failed out of high school and started taking unmatriculated college classes and going to the jungle with my friend JJ and just doing it for the sheer love of it for years, almost a decade, before anything surfaced. And the other thing is there’s not even a path. There was no path ahead of us. There was no, “Okay, you go to school, you get trained in this, and you’re going to become this.” I went there and it was like, “You’re never going to be a conservation biologist because you don’t have the grades. You don’t have a PhD.”
Paul Rosolie
(01:20:16)
“You don’t have family money. You’re not going to be able to protect rainforests.” So I said, “All right, well then, selfishly, I just want to see it.” And then I ended up getting trained by the indigenous people, and like what happens so many times—you could use a restaurant example—where you might start washing dishes, but at least you’re in the restaurant, you know? And then at some point, the manager’s going to need you to help with restocking and so on. And at some point after a few years, you’re going to be helping the new guy, and at some point you might end up being the manager, and at some point you might end up in a position where you’re starting your own restaurant. That’s the only way to do that. You can’t just search it on a computer. You have to go sweat and bleed and do it.
Lex Fridman
(01:20:58)
And that said, even if you fall in love with the journey that you take on, it is full of difficult periods. I think you said somewhere this just seems to be the nature of it. That there’s going to be pain, there’s going to be suffering along the way. You have a really nice post about just this that I recommend people watch. When people ask for advice, that the hardship, the suffering—
Lex Fridman
(01:21:27)
…and I’ve seen how much you care. I’ve seen it in your face when you see a tree being cut down or you see the fires. There’s real pain there in your heart and you have to carry that. And so the post is, “How honest can I be? What do I tell these kids who message me asking how they can do what I do? It’s not David versus Goliath. There’s no sword or sling that can hold back a dragon this big. You’re going against the current of global economic entropy and human apathy. Swimming against the current is tiring, a great way to drown. Every day, we don’t win, we lose, and when we do, worlds burn. The more you know, the more it bleeds. The heartbeats all stop when the flames come through. Constellations of species turn to ghosts, and we’re the only ones saving them.”
Lex Fridman
(01:22:25)
“Cupped our hands around a candle in the howling darkness. And people want to be inspired. Keep that social media going, keep it up. You’re doing great. They want to know we’re winning, and we’ve done a lot of winning, but not right now. We’re getting slaughtered. We’re at that part of the story. We’re almost at the end game. We can think as positively as we want. Thoughts and prayers won’t stop a chainsaw, and the motor that’s carrying us against the current towards the miraculous goal only works when there’s gasoline in it. As soon as that stops, we drown. We can take the warm light from all of those who help and not let it bother us that there are people who could buy a planet’s claim to care. At some point you realize what’s really happening.”
Lex Fridman
(01:23:17)
“As a kid you’d rather be Aragorn. You don’t want to actually carry the ring, not when you learn what it’s going to cost, even if you make it. How can you explain to Sam why you can’t get on the boats? Whatever it takes, whatever it takes. It’s that time of year again. Here come the flames. Whatever it takes, it’s coming.” And people should watch the video that goes along with this. But that speaks to the pain, the difficulty, the challenge, the suffering involved when you’re faced with the possibility of destruction. And that’s the other side of the sword of caring for something deeply.
Paul Rosolie
(01:23:57)
Yeah, we’ve watched a lot of forest burn. We’ve pulled a lot of animals out of the flames. I wrote that at a time when we were just getting hammered. Funding wasn’t coming in. There were miners. It was just months and months out in the jungle alone. It’s a Thom Yorke track that I’d just been listening to again and again, and it was just so low. There was a huge new invasion where they just burned the whole side of the river and it’s never going to come back. And it’s part of the forest that I loved and I knew the animals there and it’s gone. And so we have to live through that on a weekly basis, at least, a day-to-day basis.
Paul Rosolie
(01:24:46)
And when you take on responsibility for something like this, you go to sleep thinking, “Yeah, if we don’t do it then worlds burn. If we don’t save it, then…” Every time you mention the sadness that surrounds a happy moment, well, it’s like, how am I supposed to go to a party and talk with people about anything? How am I supposed to even go to sleep when if we don’t succeed at what we’re trying to do, if we don’t outrace the chainsaws and the roads, then those trees die—those millennium trees—and we’re the only ones out there protecting them. And then when you see that black scorched earth with nothing left, it’s just ashes on the ground…
Paul Rosolie
(01:25:29)
…the cacophony of life is silenced, and it’s just this horrible violent silence. It makes you sick. And so yeah, there’s a lot of weight that comes with that where we’re not theoretically doing something. We’re practically doing it.
Lex Fridman
(01:25:51)
So that’s the other side of the advice to young people.
Paul Rosolie
(01:25:55)
Oh, yeah.
Lex Fridman
(01:25:56)
It’s not gonna be easy.
Paul Rosolie
(01:25:58)
No. I mean, when they say, “How do I get your job?” It’s like, “Well, you don’t want my job. You don’t want the botflies, and you don’t want the dengue, and don’t even inquire what a normal life looks like.” I lived out of a backpack for 20 years. You know how many monkey faces I had to eat because there was no other food? Like, seriously. Just being alone on the boat in the river and how many days the motor didn’t work. And you sleep out there, and you get rained on because you don’t have any protection, and you have some leaves over your face. And then you go home, and everyone’s got a job, and everyone’s got kids, and everyone’s happy.
Paul Rosolie
(01:26:35)
And they’re like, “What are you doing down there?” “I’m trying to save the rainforest.” They’re like, “Sure.” And now we’re at this point where I cared a whole lot for a long time. We’ve had rises, and then we’ve had falls, and we’ve had wins, and then we’ve had failures. And the last few years, we’ve had this rolling success of people finding out about our work and coming in. And we start to go, “Wow, if we protected 130,000 acres, we might actually be able to do this.” There’s that moment in 300 where they show Leonidas and they say, “Even the king allows himself a moment of hope that this might be okay” right before they get slaughtered.
Paul Rosolie
(01:27:13)
And someone very dear to me recently said, “In celebration of where we’ve gotten to, if it happened in any harder of a way, it would have actually killed you. And if it had happened in an easier way, it wouldn’t have been so divine.” And that slapped me in the face because it was like, “Man, it has been so hard, but look where we are.” We might actually do this.

Cartel, Narco-traffickers & assassination attempts

Lex Fridman
(01:27:41)
It just has to be that way. Speaking of which, another complexity in all of this: you write in the afterword of the book about the narco-traffickers that have moved into the river basin. They are not the loggers that we’ve spoken about anymore. They’re growing coca for cocaine, and they’re building airstrips. So tell me how this came to be.
Paul Rosolie
(01:28:14)
Like you said, our whole life on this river, when loggers came in, JJ and I would walk up to them and say, “Hey, what’s up?” and sit down with them and have a beer or share a meal and talk to them and ask who their father was and whether we knew them, and then hire them. And they’re friendly.
Lex Fridman
(01:28:32)
They are, in a way, brothers. They’re the same.
Paul Rosolie
(01:28:38)
They come from the same people. They’re simple local people. They’re not evil. They’re just people who usually have a kid and a wife, and they’re looking for work. So they work with the chainsaw because that’s what they know. And they work for, you know, $30 a day if that, in very challenging, harsh environments. And so when we see clearings, I would always go with the drone and fly it over. We’d get some intel, and then we’d bring that to the police. Jungle Keepers supports the police at this point because the Peruvian government has a hard time with resources trying to manage Amazonia. And when you’re three days from civilization, getting cops out there is not the easiest thing.
Paul Rosolie
(01:29:21)
So sometimes we’ll lend boats or gasoline or logistical support. And there was a moment in March, several hours upriver from home base. I’m with JJ on the boat, and I fly the drone. There’s this big new clearing, and I lower the drone. A few times, I’ve had people come out and wave at the drone or say, “Get away.” And we’re out in the middle of the river just sort of idling. I see these little huts, and we’re saying, “Okay, this is a big clearing.” I’m snapping images. There were visitors who had come in on the boat with us, and I have my local team, and all of a sudden, people come running out of the houses.
Paul Rosolie
(01:30:04)
And they run straight to their boats. Home is in the downriver direction. They get in their boats and start chasing us, and we start driving at full speed. We have a 60 horsepower; they had a 40. We’re doing this chase now, and our guests, who are potential funders, you know—at one point, the father looked at me and goes, “Hey, this whole running from the Pirates of the Caribbean thing… it’s getting scary. You’re scaring us.” He was like, “When are you going to put the drone down?” And I go, “I’m flying the drone at full speed to keep up with the boat.” And I just crash-landed the drone on the side of the river near a big tree. I just said, “Fuck it. We’ll get it later.”
Paul Rosolie
(01:30:46)
And I was like, “This happens all the time. They get mad, they chase us. It’s no big deal.” And I smiled at him, and JJ’s smiling. He goes, “This is so bad.” And he’s smiling. And JJ looked at me, and the smile fell off him like a mask. He looked at me and was like, “This is not good.” And we kept going upriver and luckily, there was a camp of police that we’ve worked with quite a bit. I went to a friend of mine, and I remember we got off the boat. I shook his hand. He said, “What’s going on?” I said, “Look downriver, there’s a boat tearing upriver towards us.” And he did three things. He got the rest of the guys, they armed up, they got on the boat with guns. They put ski masks on. They got ready for combat. They told us to get down. He also said, “Hey, turn on the sat-link.
Paul Rosolie
(01:31:33)
Call for support back home.” We turned our boat around. And as soon as the narcos—which we didn’t even realize were narcos chasing us; we thought we were looking at loggers—as soon as they saw the guns and they saw us face them, they turned their boat around and went back downriver. So we got escorted downriver, and I remember shaking my friend’s hand and saying, “Thank you for saving us today.” And telling the other guys they did a good job. We’d been brought home safe. Hours later, I said, “Good job. Thank you so much.” And they went back upriver, and then that night, I’m sitting at the station. And I get a phone call from Stefan.
Paul Rosolie
(01:32:14)
And he goes, “Pick up the phone.” I go, “I’m in the middle of a conversation.” He goes, “Pick up the phone.” And my friend whose hand I had just shaken a few hours ago, they went back upriver, and as they were unloading their boat and washing off in the stream, the narcos did a drive-by and shot him straight in the chest with a shotgun. And so all of that enthusiasm—that we’re protecting the biodiversity, this is so great—it’s like that scene in the movie where there’s a montage of success and winning, then gunshot. I could still feel his hand in my hand. I just shook his hand. I said, “No. You’re not…”
Paul Rosolie
(01:32:57)
I said, “Well, is he okay?” He said, “He took a shotgun straight to the chest. He’s dead.” I said, “Okay.” And so I had to go out to dinner and not show the guests anything, and just smile and laugh and talk to them about whatever and keep that in, which felt very difficult to do. And as you said, the threat level escalated and we didn’t know it.
Paul Rosolie
(01:33:25)
The narcos had come in and started realizing that there’s so much wilderness here that they can operate and there’s no police. And then when we flew the drone, they got mad. So we communicated with the police and they said, “Oh yeah, these are narcos.” Now we realize this is part of the serious drug mafia. And then, the incident that you’re referring to at the end of the book: I had gone back to New York to speak to donors to try and get this work to continue. You know how it works. We’re at the station and then you go to that little logging town, and then there’s a road.
Paul Rosolie
(01:34:08)
And so our pickup truck had come in on the road and JJ was supposed to come down, get in the truck and drive back to the city. JJ was on the river and went, “I forgot I was supposed to get more stuff at the city. I’ll go tomorrow.” He went back up and he sent the boat driver down and told our driver, Percy, who was waiting with the pickup truck, “JJ’s not coming today. Go back and come back tomorrow.” Percy starts driving down the road and he sees a tree across the road—this is a single-lane road through the jungle. Men with guns come and stick pistols in through the open windows, gun against his head.
Paul Rosolie
(01:34:50)
They pull him out and they go, “Where’s JJ and the mierda gringo volador?” He said, “Where’s that shithead gringo that flew the drone?” And if either of us had been in the car that day, they would have killed us. And we know that because they took his wallet, they took his phone—our driver, Percy. Thank God they didn’t hurt him, but they sent a message to us. They said, “We missed you this time, but we’ll get you next time. We’re going to get you.” And so when JJ called me, he was howling. He just had that adrenaline and that emotion that it almost happened. And so that changed everything.
Paul Rosolie
(01:35:35)
Since then it’s not counting butterflies and taking ecological surveys; it’s that there’s a drug war being fought on our river. And now when these roads come in, we can’t just go out and meet these people anymore because they are actively looking to shoot us. They know our names. The police intercepted a phone from someone they arrested, and in the WhatsApp chat, it said, “If you see JJ or the gringo, anyone in our network, please kill them. You’ll be rewarded.” So we both have a hit out on us and life on the river has changed. We can’t…
Paul Rosolie
(01:36:16)
You know, I can’t just go out walking around and swimming and driving my boat. You have to be looking over your shoulder at all times. You can get as trained as you want with a pistol and sleep with it under your pillow, but the way these people work, they’ll catch you when you’re least expecting it. They’ll wait till you’re at a cafe in town. They’ll wait till your motor doesn’t work on the side of the river. It’ll just be a quick one and they’ll go. And so that feeling on top of the weight of protecting the ecosystem and the animals, it’s like now we’re actively being hunted when we’re there.
Lex Fridman
(01:36:54)
And this is very directed at you and JJ? So they really don’t care about the others. They understand. Are you afraid? What’s it been like living with the real fear of being murdered at any moment?
Paul Rosolie
(01:37:20)
I wish I could say I handled it better than I’ve been handling it. I wonder how people in war zones do it. I wonder how some of my soldier friends that I have immense respect for did it when they were deployed. Because for me, once this happened, with every phone call now I think, “Did something happen to JJ?”
Paul Rosolie
(01:37:40)
Every time I go to sleep, my dreams are that I’m being shot. It really threw me. It really affected me. When JJ called me, he was just shouting. I don’t even remember what he was saying. He was just shouting, “They almost got us. They almost got us.” He was so terrified and angry. There was a day not that long ago that I was swimming in the river, right in front of the stairs at the station, and a boat came around the bend. I remember thinking, “Do I run? Do I go underwater? Do I hide? What the hell do I do?” I didn’t have a gun near me.
Paul Rosolie
(01:38:24)
The security people were up the stairs. It’s like, you go, “Holy shit.” And it’s not the danger of, you know, if I jump on an anaconda, it might kill me, or if I climb this, I might fall. These are people who want to kill you. And on top of it, when you see what your friend looks like after three days of floating in a river—what a body looks like of a person you used to know—that’s very viscerally terrifying. There’s the tragedy of that person who lost his life, who was younger than I was. He was a kid in his 20s. It’s very hard to do anything because… I mean, right now, my hands are sweating. It just affects me.
Paul Rosolie
(01:39:12)
Even in the daylight, if I can go, “You know, it’s fine. This is part of the thing. This is the adventure, people deal with this all over the world.” You can talk yourself tough, and then in those quiet moments, that 4:00 AM thing, you wake up and you go, “Fuck. Why am I sweating? Why did I just have those dreams? Why is my heart racing?” It sinks its way into your subconscious, and it’s just not what we signed up for. We wanted to just protect this beautiful place and this is a whole new threat. We’re not trained for this.
Paul Rosolie
(01:39:45)
We’re not police or military and we’ve now seen violence on a scale that we were very unprepared for. Just two days ago, I was on my way to you and my phone rang at nine o’clock at night and it was JJ. My heart was jackhammering. I had to pull over because I was going, “What news now? Did we lose another bunch of acres? Is it a new road? Did somebody die?” It really scatters you.
Lex Fridman
(01:40:20)
In some sense, it’s a twist that you didn’t ask for and it doesn’t necessarily have anything to do with the fight you’re fighting, which is protecting the rainforest. But because of it being pristine and quiet and away from civilization, it also becomes a place where you can have airstrips. It becomes lawless in a certain way because it’s so far away from civilization.
Paul Rosolie
(01:40:46)
Yeah. It’s the only place that they can operate with impunity. There’s no police out there. And so they saw us helping the police and they went, “Cut the head off the snake.”
Paul Rosolie
(01:40:57)
And that… you know, Chico Mendes, Dorothy Stang—the list of environmental defenders that are assassinated in the Amazon every year is huge. There’s endless examples of it. It’s staggering. I forget the exact numbers, but every year we lose people. There’ll be local leaders who are trying to stop an oil company or a drug cartel, and they just shoot them because they know that one person who’s able to rally that support, who has that voice—if you just shoot them, usually it’ll end the thing and then they can go back to doing whatever the hell they want. And so right now, we’re working very closely with the Peruvian government.
Paul Rosolie
(01:41:39)
People assume that a Latin American government is automatically corrupt, but what we found is that these are really good people that want to help their citizens. And the police have been working very hard to stop the narcos, to protect the local indigenous people because with the narcos comes human trafficking. With a team of male narcos out in the woods making drugs, they want prostitutes. And how do they get prostitutes? They go steal girls from indigenous communities that don’t know any better. And then there’s reports that the narcos have made contact with the uncontacted tribes. Of course, they’re going to shoot machine guns at them. They’re not going to have a shotgun where it’s a fair fight.
Paul Rosolie
(01:42:24)
They’re going to mow them down and the uncontacted tribes are going to have no idea. That’s why I posted a video of me in the rain saying, “This is endgame,” because there was a new road coming off the north of our territory above the ancient forest. They had jumped over because we stopped it at the ancient forest. They’ve gone above the ancient forest. Now they’re trying to cut down to a new area. And so it looks like this. …Trans-Amazonian… Stefan made this map, of course. But you see the area that we’re trying to protect—loosely, so that we don’t give away anything—the area that we are protecting. So, the light green is the 130,000 acres—
Paul Rosolie
(01:43:09)
…and then this metastasizing network of roads just reaching out and trying to get in. And so they’re trying to come in from the north where that arrow is, they’re trying to come down. And so the police are fighting them along this— …and it’s a full-on drug war right now. Stopping that, securing this northern boundary… and again, just the power of what we have. When I posted this, I asked Stefan to show people the road and where it’s going to go. We posted this video and said, “We have to protect this 100,000 acres right now.” And all up here is uncontacted tribe territory. And just from that one post, we got $150,000 in like 48 hours and we bought this concession. We stopped that road. But now they’re up here—
Paul Rosolie
(01:43:55)
…and they’re trying to come down. This is the thing, again, you said it’s great. Yes, you get to be an adventurer and you get to live in the jungle, sure. But it’s like there’s this Mission: Impossible thing where you might get lucky enough to pull off your psychotic mission. You know, jump your motorcycle off the train and parachute down and stop the bomb before it goes off. Great. How many of those do you get? And we’re having to do it every month. These amazing people that are supporting the rangers allow us to patrol and protect this because once we have this land protected, the interesting thing is that the police can go into any of the light green areas. If anybody’s there, they just arrest them.
Paul Rosolie
(01:44:37)
They’re on Jungle Keepers’ land, they’re out. And eventually, that land will become national park if we’re successful. The problem with the land that’s not protected is that it’s a gray area. It’s the middle of the Amazon; are they allowed to be here? Do they really have cocaine? Because they’ll plant papaya for acres and a little bit of cocaine behind it. They’re sneaky. And so they have to build a case and it takes time, and then the road comes in… and in that time, then they’ll knock off a police officer. If we were just able to get this tomorrow, the whole problem gets solved. We could give the police two more boats, and then they could do all the patrolling they need.
Lex Fridman
(01:45:17)
So the mission is clear.
Paul Rosolie
(01:45:18)
The mission is very clear, and the problem is that right now we’ve been playing defense and sustaining losses. Either we need to inspire enough people that the donor program goes through the roof, and instead of having several thousand donors we have 50,000 donors and we raise—we need $20 million to save the rest of the corridor. We’d raise $20 million overnight with enough people. Or we need one of these people who has the resources to come in like Batman and just go, “I want the park named after me and I’m just going to give you the $20 million.” And then we do it tomorrow, and then we make a documentary about how we saved a river and the tribe and the monkeys. But right now, we’re…
Paul Rosolie
(01:46:03)
Yeah, right now we’re begging on the side of the road for enough change to buy bullets so that we can stay alive.
Lex Fridman
(01:46:12)
So these narcos, they’re… there’s a kind of distributed network where a bunch of them are pretending to be farmers. They’re holding onto the land and then maybe they start planting cocaine on the land— …slowly, and they build the airstrips. Are they trying to stay under the canopy— …with the airstrip?
Paul Rosolie
(01:46:33)
It’s brilliant. First, what they do is they subsidize the poorest people and they say, “Go up this river, turn left at the tree and just start there.” And they’re like, “Here’s a few grand.” And these people are like, “I never had a few grand before.” They’re like, “Buy gasoline. Here’s a chainsaw. Go clear some land.” They send these people up there, and then when they show up a year later and these people have made an illegal farm out in the jungle, they go, “Hey, we need a safe house. Remember that time we gave you the gasoline and now you live here? You’re going to work for us now.” And so they’re kind of a friend of the people like that, and they have safe houses all over the jungle. And then when the bosses come to collect what they’re growing out there…
Paul Rosolie
(01:47:14)
I mean, the police busted a narco operation that was in the middle of the jungle. I mean, you know, hiking to the ancient forest— …just days into the jungle. These people are going on foot with sacks and stuff. And the way they do their airstrips is you think the canopy of the rainforest is 150 feet tall, 160 feet tall. And if you clear the interior of the landing strip, the trees are still meeting overhead. And so you can’t fly over and see down—
Paul Rosolie
(01:47:42)
…which is the same reason we didn’t know about the road that was going to the ancient forest, because overhead the trees are meeting, so you’re not gonna see it on satellite and you’re not gonna see it from a plane. And these bush pilots fly in and they’ll just duck in under the canopy, land their plane, load up, and then they fly out. I mean, expert pilots.
Lex Fridman
(01:48:01)
So it’s impossible to detect.
Paul Rosolie
(01:48:03)
It’s almost impossible to detect. We’re working with people now. You know, it’s this arms race. There are drone programs. I talked to someone that has a different type of drone, a 16-foot drone that uses the thermals to climb up and has solar panels on the wings and flies for two weeks at a time. It’s like a glider- …that recharges itself. And it’ll keep constant imagery so we’ll get almost up-to-the-moment data on disturbances in the canopy. And it’s like, well, that’ll be a first-hand alert system, but then we gotta get the police out there which, as you know, is a two-day expedition by boat, and it’s the only way. And so the local police force there may be dedicated, but putting people on a multi-day expedition to go get shot at in the jungle is nobody’s idea of a good time.
Lex Fridman
(01:48:49)
Have you researched this whole other world of drug trafficking, cocaine trafficking? How big is the operation here, looking at Perplexity— …multi-thousand ton, multi-billion dollar global industry?
Paul Rosolie
(01:49:05)
I mean, globally it’s a monster.
Lex Fridman
(01:49:07)
Colombia, Peru, Bolivia. And they move north and east through the Americas, the Caribbean, the Atlantic to reach major consumer markets. Yeah, this is a machine fueled by a lot of money and a lot of brutality. The number of cocaine users worldwide is about 25 million people.
Paul Rosolie
(01:49:32)
Users.

Climbing the giant tree

Lex Fridman
(01:49:33)
Users. So there’s a market. And when there’s a market, you’re going to find a way. Quick pause, bathroom break. All right, and we’re back. And me as somebody who is afraid of heights, and I’ve had a chance to interact with you a bunch—you’re in some sense fearless and I’ve watched you climb a lot of trees. You’ve helped me climb a tree. And there’s this wonderful part of the book where you talk about finding the tallest tree in the forest you knew at the time, and that was something that you passed and thought was impossible to climb. And you talk about climbing it. Take us through the experience of that. And that leads you to seeing the Mist River in the rainforest as the sun rises.
Lex Fridman
(01:50:24)
I was wondering if you could talk through the story of that, at least for me, but even for you at that time, the terrifying process of climbing a tree like that for the first time with JJ at the bottom cheering you on, and what it felt like to see the Mist River.
Paul Rosolie
(01:50:44)
That tree, you’ve met that tree. She’s a good one. Her base is at least as big as this room, and she’s probably about 160-something feet tall. And so when you’re looking at these giant buttress roots going up, which I’d been doing for 18 years at that point, I always said, “Man, if I could just climb it.” And I’d never had the rope skills, you know, so I developed as a rock climber. I was working on strength, and I trained for it. It’s like most things. You can’t just do it. I’d gone and climbed up 30 feet and gone, “No way.” The trunk of the tree goes vertical for about 70 feet before branches even come out, so there’s just this one big vine. And JJ
Paul Rosolie
(01:51:29)
and I did it at, I want to say like 4:00 in the morning, like really early. The howler monkeys had just started. And you start climbing with the rope up this one vine, and you have to… it’s not a technical climb. It’s a strength climb. You have to gorilla up this vine, and it’s all back strength. And so I did it no shirt, no shoes, straight up, and JJ had the belay device. And so every like 30 feet, I would put in a piece of webbing and a carabiner. So then you go up another 30 feet and you put a piece of webbing and a carabiner, and you don’t know what you’re gonna find. And you’re going up in the dark.
Lex Fridman
(01:52:05)
And so when you say it’s a lot of strength that’s involved, there’s very few places to rest. You’re essentially just lifting the whole time. So it’s extremely exhausting.
Paul Rosolie
(01:52:13)
Extremely exhausting. Like, I really trained for a long time, and there is no rest. The only rest you get hurts. You’ll have to cling to the tree and your feet are smeared against the bark and you’re holding on with your toes, if anything. And if you fall—you know, if you’re climbing up, and this is basically trad climbing—if you’re climbing up and you put in a safety, which is a piece of rope with a carabiner, and you put your rope through that, again, as you’re doing that, it’s dangerous ’cause if you fall, you fall. Then you do that and then you climb up. Right before you put the next one, you’re gonna fall double. So if you climb 30 feet, you fall 60 feet.
Paul Rosolie
(01:52:48)
And so your head’s gonna smack against the side of the tree. As you’re climbing, you don’t know if you’re gonna reach into a wasp nest or if there’s gonna be a venomous snake.
Lex Fridman
(01:52:56)
And there’s, by the way, in those trees, a lot of those.
Paul Rosolie
(01:52:58)
A lot of those. And it took me over an hour just to get to the branches the first time, and it’s just, again, full exertion, everything I had. And then you get to the branches above you, and each of the branches is the size of a mature oak tree. They’re just these huge branches, thick as a minivan, and you’re climbing up this straight tree that’s like the World Trade Center. It’s just huge and then I had to traverse around the tree on vines, and then finally I get up into the crown of this tree. And then from there, I called down to JJ and I just see this little speck of light 85 feet below me. And then I climbed up to about 120 feet and I sat there.
Lex Fridman
(01:53:39)
And you’re doing all this still in darkness.
Paul Rosolie
(01:53:41)
We’re doing all this in the pre-dawn light. And so when I got up there, now the howler monkeys are going and the jungle’s starting to vibrate and you can hear the first macaws starting to chirp and everything’s starting to turn on. And in the east, the sun is coming over the jungle. When the first rays get line of sight to the canopy, it starts lifting the mist off the canopy. All of that moisture starts coming up, and I’m sitting on this branch at 100-something feet above the ground with dark jungle below me, and all of a sudden I see the river. I see the Mist River I’d always heard about.
Paul Rosolie
(01:54:16)
They say that there’s a river above the Amazon, an invisible river that has more moisture and that more water is flowing above the Amazon than is flowing in the Amazon. And I’d heard this my whole life and you think, “Okay, the fact that there’s a molten core of the Earth or that black holes theoretically exist.” It’s just like one of those things you’re never gonna see. And in this moment on this tree, sweating and just ripped apart and bleeding, I was sitting up there and I saw the Mist River and it was flowing over the canopy in the golden rays of the morning and the macaws start taking flight and there was monkeys below me that were looking up. And you could tell they were confused.
Paul Rosolie
(01:54:53)
They were looking at me going, “What is that?” And I just had this absolutely incredible moment. It felt like you’re seeing God. I wanted to share it with everyone. I felt guilty afterwards for having had a moment like that. But it felt like I had taken this insane risk, risked falling out of the tree or getting strung up on the ropes, and of course it’s just me and J.J., so if something goes wrong, no one’s gonna help you. And being out there on that branch felt suicidal ’cause even then, if you fall, it’s a giant swing back to the tree.
Paul Rosolie
(01:55:31)
But the beauty that I saw up there was so intense that it sucked the air right out of my lungs. I had tears in my eyes and I’m just watching this incredible process flow over the Earth, this legendary thing that I’d heard about, that scientists described, and now I’m seeing it with my own eyes. It felt like the gift of the tree.
Lex Fridman
(01:55:56)
And you write, “Now, in the branches of the greatest tree in the jungle, I watched as the Mist River caught the morning rays, illuminating golden currents, swirling as it rushed over the canopy like a stream from heaven. In the troughs and basins in lower areas, the river was deep blue. But then, as it flowed up and over the taller trees, slow rapids washing over the canopy, the Mist River became ignited, electrified in the gold magnificence of the sunlight. Scores of birds flew up, in and out of the churning currents. The life and breath of the Amazon was flowing from north to south along the basins of the Las Piedras over the jungle. My God. My God. I thought of everyone I loved, of every creature contained in the leafy distance.
Lex Fridman
(01:56:45)
“The jungle itself was like a great being, a monstrous leviathan of warm green might. I wanted to call down to JJ and tell him to find a way up. I wanted my mother to see it. I wanted the world to see it. The light filled my eyes, and I found myself wiping away tears.” You know, I should take the small tangent of saying the obvious, but the thing that needs to be said is you’re a fucking great writer.
Paul Rosolie
(01:57:15)
Thank you. I mean, come on, I’m just describing what happened, but…
Lex Fridman
(01:57:20)
All right. You mentioned macaws as part of the process of the jungle waking up. I read that when you first start in the jungle, that’s kind of your job, studying those. And me as a fan of monogamy and birds… So macaws are beautiful, but they’re also monogamous creatures. They scream at each other quite loudly. What are some interesting things about them? Among which, by the way, you write how important the ironwoods are to— …their wellbeing, to their life.
Paul Rosolie
(01:57:53)
Yeah, I mean, when I went down there, that’s—like I said, for young people, if you wanna get out there, go do it. I agreed to stay at the station and do like six hours of macaw research every morning. So you’d wake up before dawn and go sit and just stare at the side of the river. And the macaws would show up—
Paul Rosolie
(01:58:11)
…and like you said, they all scream and bicker at each other. It’s just how they talk. It’s very, very loud and very, very harsh. But they do love each other. You can actually hear it when you walk through the forest; I know what the sound of macaws giving affection is. They make a certain kind of sound when they’re preening each other’s feathers and taking care of each other and just nuzzling. And then there’s a different call altogether when they’re yelling at other macaws or saying, “Let’s go.” And you start to learn macaw language.
Lex Fridman
(01:58:42)
What have you learned about relationships and successful marriage from listening to macaws screaming at each other in those nuanced ways you’re talking about?
Paul Rosolie
(01:58:52)
Well, I guess…
Lex Fridman
(01:58:54)
Never mind, you can skip that question.
Paul Rosolie
(01:58:56)
It’s interesting to see two animals sticking by each other’s side while they’re raising a chick. And at the bottom of the stairs at the station, there is a macaw nest in an ironwood. The relationship that you mentioned is that in the jungle, there’s a limited amount of macaw real estate. And those are all ancient ironwood trees, 500 years old or more. So they have to be thick. This, again, car thickness or bigger. And when a branch falls off, it creates a hollow and the macaws use that to reproduce. And because there’s only so many nest sites in the forest, only about 17 to 20% of the macaw population reproduces in a given year. So they have a slow replacement rate. And macaws are one of the things that people come to the jungle to see.
Paul Rosolie
(01:59:42)
And so along with gold mining and logging and all these extractive things, in our region, ecotourism has been great. It’s given the local people jobs as guides, cooks, chefs, and carpenters. And so macaws are a huge part of that because it’s one of the last places where you can see these flying rainbows over the canopy. Or when you’re on a branch from one of these trees and the macaws fly under you. And again, they’ll fly by; you just hear the wind in their feathers. And they just look at you over their shoulder, like, “What?” and just keep going. And then they’ll join up with other macaws and they fly across the horizon.
Paul Rosolie
(02:00:24)
And it gives you this sense like you’re seeing something from the dinosaur times. It just looks like wild jungle and there’s nothing human in sight. And there’s just this savage canopy to the horizon and just these beautiful birds flying over. They’re just magical.

Giant anaconda

Lex Fridman
(02:00:42)
You have this Instagram post with an anaconda around your neck. There’s a million questions. Maybe you can talk about that experience, but also, how did you not die?
Paul Rosolie
(02:00:53)
So as you know, we’ve been studying the habits of Eunectes murinus for quite a while. The lowland green anaconda is the largest, heaviest snake on earth. And I’ve been practicing a lot for a long time, and this is the biggest one we’ve ever physically caught. This was just under 20 feet—it was 19 feet something. And you can see she’s in the middle of shedding. And the other interesting thing with her is that she had blue eyes, because the scales over her eyes turn blue right before the shed comes off of her head. And so I’ve never caught a blue-eyed anaconda before. But if you look at the size of my head and the size of my hands, you start to imagine that thing’s head is bigger than a Great Dane’s.
Paul Rosolie
(02:01:41)
It’s huge. And so the power on that—when we tried to lift her to measure her, we wanted to bring her up out of the stream and get her over to the side so we could straighten her out and measure her. We’re just trying to take some simple data points and then release her. And she, at one point, she just decided to flex her body, and you just see 10 people fly this way, and then she’s flexing the other way and 10 people fly this way. And every time that mouth would open, she would just reach back and she’d just be like, “Just let me do it.” And you know that if she gets purchase—
Paul Rosolie
(02:02:11)
Once they get purchase, they wrap you so quick and they’ll just crush the life out of you like you’re a bag of chips. And if you’ve ever seen a mouse in a mousetrap, when the mousetrap goes down and the eyes come out? Anyone that’s owned snakes and fed them mice knows this, that sometimes if they catch it right, the guts will either come out the back end or the front end. So I’d imagine that the same thing will happen with a snake that’s that big. That’s bigger than I am around.
Lex Fridman
(02:02:39)
So they have a process. When you say purchase, they want a bite just to hold and then they— It’s good. So…
Paul Rosolie
(02:02:44)
But again, all she wants is to be let go. In her defense, this massive snake—we named her Millie for the data entry—she just wanted to go on her way down the stream. The comments on this are hysterical. People are like, “This is the worst example of white people shit I’ve ever seen.” I mean, Snoop Dogg shared it. So one guy goes, “Congratulations, you’ve touched enough grass. Go back inside.”
Lex Fridman
(02:03:14)
Yeah, somebody said, “Interesting use of free will.” And I saw Killpopper007 commented— And maybe you can tell me if this is correct. “Anacondas are ambush predators. If you approach them, they will usually try to flee and will not register you as food. There’s other reasons too.” This is in response to how Paul possibly did not die from this. “There’s other reasons too, but this is the main reason. They’re pretty much apex at that size, so their fear isn’t as prominent. He was calm, so the snake was calm. It’s insane to do—” “…and still risky, but he might actually be the most qualified anaconda handler on Planet Earth. Paul is one interesting cat.” Hugging emoji. Is that accurate?
Paul Rosolie
(02:04:05)
Yes. At that size, they’re apex, so they’re really not thinking about defense. They’re just like, “Get off me.” If I was to hurt her, like if I was to touch you in the arm with a needle, you’d react. If I was to do anything that hurt her, which I’m not doing, she would turn around and bite me to say, “Go away.” But they also don’t want to bite because their recurved teeth make it very difficult to detach. And also, they’re putting their head at the source of the danger. It’s not a good calculation. And so these giants—and I’ve had the privilege of interacting with four or five anacondas in the 20 to 26-foot range—all of them have been very Leviathan-like. They just don’t want to move. They just want to keep going. He’s 100% right on all of that stuff.
Paul Rosolie
(02:04:54)
I’ve caught 90-something anacondas at this point, and many of them have been massive. Then there’s the one that me and JJ didn’t get at the Floating Forest because it was bigger than—
Paul Rosolie
(02:05:03)
…bigger than we could tackle, bigger than my hands. I couldn’t touch fingers. But every single one of them has chosen flight over fight. Only the little babies and the smaller males get snappy. They’ll come back at you like a normal snake, and if you grab their tail, they’ll try to just bite you and then go. But these big females, you know, they’re like dragons. They’re like these big, legendary things that live in swamps, and the only reason they’ve gotten that big is because they have a reliable prey source in a secluded place away from humans, and they’ve been there for decades just pulling things down to hell and eating them. And the other thing—I mean, look, I have a team with me. You know? So…
Lex Fridman
(02:05:43)
So there’s people holding the—
Paul Rosolie
(02:05:44)
Yeah. I mean, let’s be real here. I would never do this. If I was out in the jungle by myself at night, doing this would be suicide, 100%, because for every second there that I’m going, “Oh, I’m in the water and she’s over my neck,” if JJ wasn’t there to jump in and unwrap her— …then I die. 100%.
Lex Fridman
(02:06:03)
Because she’s continuously wrapping.
Paul Rosolie
(02:06:06)
She’s continuously on her back saying, “Come in here—”
Lex Fridman
(02:06:12)
Come in here.
Paul Rosolie
(02:06:12)
“…and let me arm bar you. Let me squeeze the guts out of you.” She’s just going, “Let it happen.”
Lex Fridman
(02:06:17)
And moving slowly.
Paul Rosolie
(02:06:18)
Moving really slow.
Lex Fridman
(02:06:20)
Conjuring.
Paul Rosolie
(02:06:20)
With that assurance of power where she doesn’t need to try and tap you quick. She’s going to get you eventually.
Lex Fridman
(02:06:25)
Although, to push back on something you just said, having known you long enough, let’s be honest. You’re saying, “I wouldn’t be insane enough to do it.” I think you would be. I mean, there’s a line of insanity, and you, my friend, walk that line masterfully so far. I think there’s a sense when you’re able to sense the animal, whether it’s crocodiles, caiman, or anacondas, and maybe radiate a sense of calm. I’ve seen you be able to go into some dangerous, from my perspective, situations, and make it seem like it’s not dangerous at all. And maybe when you become one with the ecosystem, you’re not a threat to it, and maybe that’s why you can survive? I haven’t been able to make sense of it, really.
Paul Rosolie
(02:07:21)
Look, I would say this. In the case of elephants, if we ever end up in Africa together, I can get incredibly close to elephants because I’ve spent enough time with them where, so far, it’s always been a mock charge. And you can be one with the elephant and learn their language enough that you respect their boundaries and you also show them that this better be serious because you’re either going to have to kill me, or you’re going to have to just turn around and go back to eating. And you can have that exchange with them. And with smaller snakes, I’ll be careful and whatever else.
Paul Rosolie
(02:07:59)
I can tell you with this that when you have both of your hands around an anaconda’s neck, I mean, I’ve been known to surprise myself with the decisions I make, but this alone would lead to death, 100%. It’s like laying down in front of an 18-wheeler with it in neutral. It’s going to roll over you. This is going to turn into anaconda handcuffs with this thickness, and then that is going to wrap you, and then six more of those are going to go around your body and you will get squeezed and you will turn into goop. And she will not… just like that guy said, she probably is in defense mode and not food mode, so she’ll probably just neutralize the threat and then go back to sleep.
Lex Fridman
(02:08:47)
I have to ask you about the floating forest. And you write about Santiago, once again, beautifully in the book, of the time when he told you the stories and when your mind and eyes were still fresh and maybe skeptical and more leaning towards the Western world point of view versus the jungle point of view. “Santiago’s eyes were glowing in the darkness. He watched the orange ember spark upward to join the celestial river of stars that arched across the night sky as if the memories were written there. He squinted, his face as wrinkled and weathered as an old map of the world.”
Lex Fridman
(02:09:25)
“Vast experience whispered in the firelight, as ephemeral as the breath that spoke the words, but powerful enough to latch on and sink down into some deep part of me.” This is Pico saying, “Papa, tell me about the anaconda on the blackwater stream.” And he tells a story of that. And he talks about it being big and having horns.
Lex Fridman
(02:09:50)
And you write once again masterfully about you at that time having doubts. It sounds like bullshit, but now more and more of the things you’ve seen of the jungle and the things you sense you have not seen yet, all of those stories seem to be true. The one he was referring to may be 36 feet long, this big. He says that, “The floating forest is the place you need to go, Gringo, if you want to be liberated of your doubts and skepticism.” So tell me about the anacondas you’ve encountered in the floating forest.
Paul Rosolie
(02:10:32)
Well, the thing he’s describing there is that he’s saying they found an anaconda that had horns. And, in that moment, we were all hanging out by the side of the river and I said, “That’s enough.” I stood up. I was like, “Come on, there’s no anaconda that has horns.” If I’ve learned anything in 20 years of living with the indigenous people in the Amazon, it is that they’re not wrong. You know, if they say there’s a tribe of naked people with arrows out there, they’re right. And they know what an anaconda looks like. So if he says he saw an anaconda with horns, he saw something that ain’t a normal anaconda. A smaller version of this played out recently where one of my…
Paul Rosolie
(02:11:12)
One of the people that works at the treehouse, he came and he said, “I found a snake and it was in the water tank. And it had green spikes on it.” And I said, “There’s no snake that has green spikes. Congratulations, you’re an idiot.” I made fun of him.
Paul Rosolie
(02:11:29)
And I said, “I know all the snake species that are here. None of them have spikes.” He said, “No, it had long spikes. The snake is this big and had spikes this long on it.” I said, “There’s no snake with spikes.” Until finally he came and he got me in the night and he goes, “The snake with spikes is there.” And I said, “Well, I’ll get out of bed for that. Let’s go.” I said, “And I guarantee it’s not going to be there when we get there.” And we got to the water tank and I shined my flashlight down and sure as shit, there’s a snake in there and it’s got thousands of green spikes coming off of it.
Paul Rosolie
(02:12:06)
And the spikes are coming completely perpendicular out from its body. For a second, I really was having this out-of-body experience. And then the snake saw us, got scared and swam, and all of the spikes collapsed onto its body and became smooth. And then I realized the snake had been living in the stagnant water for a while and developed algae that was growing off of it. So when it was sitting still, all the algae would settle out. And so if you look straight down on it, it’s a water snake that has algae growing on it. And so it does look like a snake with spikes. He’s not wrong. It was. It was a water snake. It was some sort of Helicops. But there’s always an answer like that.
Lex Fridman
(02:12:44)
Amazing, yeah.
Paul Rosolie
(02:12:45)
Where they’re not wrong. So when they tell you something like, “There’s an anaconda with horns,” and multiple people have seen it, you make an expedition there. You know, like if somebody said there’s giant ground sloths in this one valley, I wouldn’t be like, “They’re extinct.” I’d be like, “Where?” You know, you start to listen. I mean, after the tribe walked out of the forest that day, if a Tyrannosaurus rex had walked out behind them, I would’ve been like, “Makes sense.”
Lex Fridman
(02:13:10)
Let’s go to the floating forest. Do you ever think about what creatures are in there? I just had a conversation with Michael Levin at Tufts University. He’s this biologist who creates biological life forms in the lab, but he also studies all kinds of weird, what he calls unconventional, intelligences on earth. And he speaks about that from a perspective of just understanding the incredible intricacies and weirdnesses of biological systems. So, you know, the soup of organisms that’s there in the floating forest is probably incredible. You ever think about what kind of weirdness is there?
Paul Rosolie
(02:13:53)
Yeah. I mean, along with giant snakes are animals that are existing in an ecosystem that’s isolated, right? And so the tepuis… You know, like in the movie Up, those Venezuelan cliff jungles where it’s like the straight… Like Angel Falls? And up there you have this allopatric speciation occurring where these isolated communities are departing from whatever’s down there.
Paul Rosolie
(02:14:18)
So on the floating forest, you have this very unique ecosystem where there’s animals living on grassy islands, there’s animals living in the tops of palm trees. And so in that nightmare soup that exists beneath the rafts, there’s probably insects and… I mean, I’ve seen lizards there that we have been unable to identify. There’s things there in the… I mean, I can’t imagine. I don’t think the decay is going to happen. There’s probably not a lot of oxygen in that water. And so, I brought a few scientists there and they’ve all just been like, “This is…”
Lex Fridman
(02:14:52)
Yeah. How do you even…
Paul Rosolie
(02:14:53)
Yeah. How did this form? We’ve brought hydrologists there and they’re like, “How the hell did this thing form?” And then, trying to study what creatures live under that is amazing.
Lex Fridman
(02:15:04)
But the big anacondas, it’s interesting because they truly are the apex, so they’re unbothered. They’re not really using their power for anything.
Paul Rosolie
(02:15:14)
No, and I’m sure if I bit her, she’d turn around and kill me.
Lex Fridman
(02:15:17)
Yeah, but in a bored kind of way. Like it wouldn’t even… It would just slowly kill you.
Paul Rosolie
(02:15:22)
But I wonder if once she killed you, if she’d be like-
Lex Fridman
(02:15:27)
Just take a bite?
Paul Rosolie
(02:15:28)
I mean, if she’d… I mean, bite? They swallow, right? So once you collapse your shoulders, it’s like if you killed a perfectly good hamburger and it was in your hands dead, you’d be like, “Maybe I’ll try it.”
Lex Fridman
(02:15:41)
I mean, they need the calories.
Paul Rosolie
(02:15:43)
Yeah, and then take a six-month nap.
Lex Fridman
(02:15:47)
Yeah. They’re truly incredible, majestic creatures though.
Paul Rosolie
(02:15:51)
Yeah. I love this picture. Just look at the size. I want you one day to feel them, because the wild ones are not like the captive ones. The captive ones are soft from sitting in a cage their whole lives. These guys have been flexing every day. So it’s like you’re hitting steel cables. It’s just wild.
Lex Fridman
(02:16:12)
And even if it’s just being chill, you can probably get a hint of the power it’s capable of, right?
Paul Rosolie
(02:16:18)
The one good thing about those really big ones is that when they do strike, it’s like being in a fight with a big guy. That haymaker comes from way back here and you’re like, “Oh, good. I’m going to duck.” And you get down, because they open their mouth and they start accelerating. And it’s pretty easy to either get out of the way or get it right before it hits you in the face, usually. Again, if you ever mess that up, just like the haymaker from the big guy, it’s over.
Lex Fridman
(02:16:49)
Your level of knowledge and comfort with snakes is incredible. I think they—
Paul Rosolie
(02:16:53)
Play with them a lot.
Lex Fridman
(02:16:53)
… sense that. I mean, I’ve just seen you with snakes and they must sense in you the camaraderie. I don’t know. You have a way of speaking to animals and about animals like there’s zero danger. Well, from my outsider perspective, it seems like a lot of them are full of danger if you’re not communicating to them correctly.
Paul Rosolie
(02:17:18)
With snakes, I think it’s more of a “the highway is dangerous, but you can drive safely” thing. I know what I’m doing, so I’m working with a snake that can’t envenomate me and is small, so I can allow it to freak out. And then if I can get it into my hands and warm it up and it goes, “Ooh, it’s nice in here.” And of course, like you said, I’m not scared and so the snake is going… They are very sensitive to that and so he’s going, “Okay, this isn’t so bad.” You can chill him out. But I don’t think snakes have any camaraderie. I think that whales, monkeys, elephants—I think that they can sense. They can say, “Okay, this person’s trying to help me get out of this net. I’m gonna relax and not kill them.” I think you have that dynamic then very much so.

Rescuing a spider monkey

Lex Fridman
(02:18:00)
Speaking of somebody that does have camaraderie, there’s this incredible video on your Instagram that people should go watch where this spider monkey was drowning and you jumped in to rescue her.
Paul Rosolie
(02:18:12)
Sure. So we’re coming downriver. It’s seven o’clock in the morning so I’m cold—I’m always cold. I’m sitting on the boat and JJ’s like, “Look, spider monkey.” And I go, “Great, spider monkey in the river,” like that’s normal. And JJ’s like, “No, she’s having trouble.” And I was like, “Why is she having trouble? They swim all the time.”
Paul Rosolie
(02:18:30)
And he goes, “No, she—” he goes, “You should help.” And so the boat comes around. Then sure enough, what you can’t see in the video is that the river was so full that there’s these little whirlpools and currents and she was trying to get to the side. And again, all the animal-rights people are very quick to be like, “Let nature take its course, you know? Let the monkey drown,” or, “She doesn’t need help. You’re interfering.” Sure, sure, sure. If you were actually there, you would know something, and that is that she did need help and she was drowning. Her head kept going under. And so I saw that JJ was right.
Paul Rosolie
(02:18:59)
And so we pull around, I took off whatever I could in the moment, jumped in with the paddle because now here again, I trust monkeys but I don’t want her to bite me. She is gonna be scared so I thought, “There’s two ways I can do this. I can grab her by the neck and ‘animal control’ her—grab her by the neck and the tail and take her out of the river, which is gonna be scary for her.” Instead I thought, “I know spider monkeys so well. I’ve raised so many of them.” And when you raise them, they curl up to your neck and they’ll…
Paul Rosolie
(02:19:31)
Like if you have an orphan spider monkey whose mother got shot by poachers and you’re taking care of her before we bring them to the animal rehabilitation experts, they’ll curl up on your neck and they’ll just talk to you in your ear. And so I feel like I know a little bit of spider monkey—broken spider monkey—and so I pull up next to her and I give her the paddle. And we’re in this rushing river and we’re moving at 10 miles an hour downstream, and I tried to give her the paddle and she smacks it away. She was like—
Paul Rosolie
(02:20:00)
… “No. Get away from me. I don’t know what you are.” And then she keeps swimming. She goes under again. I give her the paddle. No, and then she puts a hand around the paddle. In that moment that you had paused on, she looked back at me and she registered like, “Oh, this is another animal, with a face.”
Lex Fridman
(02:20:18)
For people just listening, you need to go watch the video. You guys are just looking at each other, and she’s looking at you. It’s so cool.
Paul Rosolie
(02:20:26)
She looked right at me, but then she went, “No.” She was like, “Whatever you are, no.” She was like, “I’d rather die in the river. I’m so scared and I’m drowning.” She looked at me and she got scared and she jumped back in. And then I lifted her up, and I started talking in spider monkey. And then, the next moment, you see it. She just goes, “Sure.” And she wraps her tail… You see her tail is around the edge of the paddle. And she puts her hand around it, and then I lifted her. Because I’m taller than she is, I lifted her out of the river. And so now, instead of manhandling her like a raccoon you’re catching by the neck—
Paul Rosolie
(02:21:02)
she’s holding on in her spider monkey way to the paddle, and she looks back over her shoulder. She looks at me, and I’m sitting there talking to her in spider monkey. And she looks at me, and you hear her. She goes… I can’t do the sound she makes, but she makes this spider monkey sound like, “Guh!” And she goes, “Fine.” And then she’s looking off the front end of the paddle as she’s looking at the jungle, and she looks back at me and she’s like… You could just tell. She’s like, “I have no idea what’s happening.”
Paul Rosolie
(02:21:28)
But she accepted the help. And the difference is because I spoke her language in this case. And I know that would be one of those stories that people would nail me on every time if it wasn’t on camera. You can see the moment that she makes direct eye contact with me and goes, “Okay.” And then as soon as we get to shore, she jumps off and runs off into the forest, but it was—
Lex Fridman
(02:21:48)
It’s so… I mean, to me, just watching the video, it’s so amazing. Because she’s looking at you. Like, real… You can see that there’s an actual connection. That there’s like communication, like a social… You know, the way humans, when you’re maybe saving a human being that’s drowning or something like this. There’s that connection. It was beautiful to see, man. And then I read a little bit that spider monkeys are very intelligent, but they’re especially socially intelligent. So they have social connections with each other. They understand what that means. They understand what another entity means. So you speaking in a broken language… Probably is really important and a powerful way to indicate that, “Wow, you’re in network.” Like a foreigner, but—
Paul Rosolie
(02:22:42)
It’s like you’re in a foreign country and someone goes, “Helping, helping.” Like, “Helping.” And you go, “Okay, sure.” Like, you know, “You’re not robbing me, you’re helping,” right? But no, they’re incredibly… And I’m telling you, I’ve had orphan spider monkeys so many times. And they wrap their tail around your neck and they hug you. And you realize that connection that they have with their mothers when they hold onto them in the canopy… When the loggers shoot the mother and then I’m taking care of this baby, they hold onto you. And they need that love and that connection more than they need food. If you put food or you put the warmth of a body, they’ll choose the connection over the sustenance.
Lex Fridman
(02:23:22)
Yeah, they really value the touching, that connection.
Paul Rosolie
(02:23:26)
Very tactile. They’re very loving. They wrap their long spider monkey arms around each other. They’re very much like us. They hold their babies. When it rains, all the spider monkeys will get together and they’ll huddle up, and they’ll pull leaves down and they’ll all huddle up together. When it’s cold out, they get close. It’s very cute.
Lex Fridman
(02:23:45)
Yeah, that’s true for a lot of… I mean, they’re distant relatives, but that’s true for a lot of our relatives. The apes, the chimps, all of them, they have this intricate… They’re different. Sometimes more violent, sometimes more loving. But social interactions, it’s cool. It’s cool that way.

Dangerous animal encounters

Paul Rosolie
(02:23:59)
Yeah, I mean, you expect it from them. They’re practically us. To me, it’s when other animals show it. You know, the times that I’ve been on a trail and a jaguar has walked by and just been like, “Mm, ‘sup?” Keep walking. And it’s like, “Eh, it’s kind of cool of you not to eat me. I appreciate it.”
Lex Fridman
(02:24:16)
Has that happened to you?
Paul Rosolie
(02:24:18)
Yeah. I thought somebody was walking on the trail behind me and I was setting a camera trap. And I put my finger up and I was going to go, “Could you walk any louder?” And I had my finger up and I’m crouched because I was setting a camera trap. A jaguar walked by and he literally was just like, shoom, shoom, shoom, just kicking leaves, just having fun, mouth open. And he just walked by and he looked at me and just went, “‘Sup?” Never broke stride. But like, dead-ass eye contact with the bottom teeth out and that jaguar look of just like, “Hey.” I was like, “Okay.” Now I’m gonna have a full meltdown. Your system… you start sweating.
Paul Rosolie
(02:24:50)
You’re like, “Whoa.” Because they’re also so beautiful. When you actually see a jaguar, and it’s like bright yellow and the teeth and all the muscles… It’s, you know…
Lex Fridman
(02:25:00)
What do you think you communicated to the jaguar that it didn’t kill you?
Paul Rosolie
(02:25:04)
No, nothing. The jaguar was making the decisions. I didn’t do anything that like saved my life. He was just going somewhere. And because he’s the king there, he just went, “Uh.”
Lex Fridman
(02:25:16)
Yeah, probably also not threatened.
Paul Rosolie
(02:25:18)
Not threatened at all.
Lex Fridman
(02:25:18)
I don’t know. But I think there is something to you. See, you’re just taking for granted the things that you’re putting out into the world. You’re probably radiating calm. Or not—
Paul Rosolie
(02:25:29)
Or not…
Lex Fridman
(02:25:29)
…but non-threat.
Paul Rosolie
(02:25:31)
Certainly non-threat. I also smell like an animal when I’m in the jungle, right? I shower in the river. I don’t use deodorant or shampoo or any of that stuff. So I don’t smell… You know, you can just imagine, to animals that have a sense of smell that’s four times as good as ours, just your deodorant, just your conditioner, just whatever other products, the detergent on your clothes… We smell like Times Square. We smell like a fire alarm to them.
Paul Rosolie
(02:25:59)
You know, they’re like, “What is this thing? It smells very foreign and scary.” Everything’s scary. Speaking of scary, the jaguar was kind of friendly. He was like, “‘Sup?” It’s almost like he’d seen me before on the trail, so he was like, “Oh, it’s just you.” The one time I stood on the forest floor in India with a wild tiger and nobody else was there, the thing that the tiger did that was so unnerving… And again, a tiger’s back is so much bigger than you think. It’s like four jaguars. They’re so big. She wouldn’t look at me, and it was terrifying. She would look over there, she’d look like that, and never eye contact.
Paul Rosolie
(02:26:39)
But it was like, “You’re as important to me as a stick.” And, you know, when you see two fighters square up and it’s all about the eye contact, trust me, you look through a person. You pretend they’re not even there. That tiger insulted me on such a profound and disarming level that I never forgot it. It was just like, “You matter as much as a sparrow. You’re just not one of the things that I care about.” She just was looking around and carried on doing it. And she was like, “I’m gonna walk this way.” And I was just like, “Holy shit, I’m gonna run.” You know, it’s just profound insignificance from this god of an animal with paws the size of dinner plates. And I was like, “Man, if she does, I don’t want her to look at me because if she looks at me, I’m gonna probably…”
Lex Fridman
(02:27:27)
That’s the end.
Paul Rosolie
(02:27:27)
You know, that’s the end.
Lex Fridman
(02:27:28)
Yeah, it just shows how much more powerful she is. That’s probably the most terrifying animal on earth. Yeah, tigers—
Paul Rosolie
(02:27:37)
The rock-paper-scissors of land predators. I think like polar bear and tiger gotta be the most scary.
Lex Fridman
(02:27:44)
Yeah, polar bear.
Paul Rosolie
(02:27:45)
Polar bear’s pretty scary.
Lex Fridman
(02:27:46)
Yeah, you don’t fuck with a polar bear.
Paul Rosolie
(02:27:47)
I don’t think they’re as fast as tigers, but I don’t think you’re gonna go fast on the ice and… But I mean, with a tiger, you can’t outrun it. If you climb a tree, they climb better than you. If you get in the car, they could smash through the door. If a tiger decides it wants you, pretty much nothing… Even if you had a gun, even if you had like a nine millimeter, it ain’t gonna stop a tiger that wants you.
Lex Fridman
(02:28:08)
In the jungle, have you ever felt in danger? Putting the humans aside, were there animals… We’ve talked about how humans are really the source of danger. You often speak about animals as a source of beauty and wonder and elegance and grace and all these things, which they are. But I’m sure you’ve felt danger.
Paul Rosolie
(02:28:38)
Yeah. I mean, I’m very aware that a hornet’s nest can kill you.
Lex Fridman
(02:28:43)
Oh, so the little guys.
Paul Rosolie
(02:28:45)
The little guys suck. You know, the… I always think, when we were going through the jungle, one machete whack… and again, people don’t realize how dense it is. You try to run, you get hung up on vines, you trip, you fall onto one of those trees with the black spikes. And then while you’re laying there dealing with all that, they’re just stinging you and your body goes into anaphylactic shock and you die instantly. That can very quickly just take you out.
Lex Fridman
(02:29:08)
You’re right. I mean, speaking of spikes, the biggest danger is not even the spikes. I mean, the spikes just—because it creates open wounds and then that can lead slowly—
Paul Rosolie
(02:29:16)
Infection.
Lex Fridman
(02:29:16)
…to infection. So that’s really the biggest danger.
Paul Rosolie
(02:29:20)
Yeah. In the Amazon, again, I’ve never heard of human-directed jaguar violence in our region. They just don’t attack people. I’d say mosquitoes are the thing that come after you. The snakes just want to be left alone. Even the venomous snakes. Again, the bushmaster: I grabbed an 11-foot bushmaster by the tail and he turned around, he lifted up to about this high off the ground. And if you could translate what he said, it was just, “Don’t make me do it.” It just said, “Make my day.”
Lex Fridman
(02:29:50)
See, but that’s the thing. You speak snake language.
Paul Rosolie
(02:29:53)
And then I put the tail down.
Lex Fridman
(02:29:54)
You speak snake.
Paul Rosolie
(02:29:54)
I went, “Okay.” I was like, “I’m sufficiently scared.” So the problem happens when you don’t know what you’re doing. So I’ll give you an example. You want a dangerous animal story, I’ll give you one. I was walking one time and I was trying to be responsible. It always happens when I’m trying to be responsible that I get into trouble. I’m trying to be safe and I’m on the side of a stream and there’s elephants on the other side… I’m in India. There’s a deep, like a 12-foot thing, and then a stream, and then on the other side there’s elephants. And I’m walking and I’m like, “I’m going to sit in a tree and I’m going to enjoy these elephants. I’m going to make notes in my book like Jane Goodall.”
Paul Rosolie
(02:30:29)
Then I came up against a cement wall and it was the back of a male elephant. And in India, it’s a male elephant that’s been harassed and had fire thrown at it and God knows what else. And if I translate what he said, he turned around and he just went, “What the fuck?” Like, he just looked at me like, “How dare you?” And then he just smacks apart the tree, turns around, and then that elephant was trying to kill me. That was not a mock charge. I threw off my backpack, zigzagged through the woods. He broke apart trees. If I had a GoPro on my back to show you what I saw—just the shrapnel and devastation of this thing just bashing through trees. And again, every bush that I encounter is a possible trip.
Paul Rosolie
(02:31:12)
Every vine is a possible hangup. And then if he gets you, he’ll step on you and crush you. And so I threw myself off the edge of this cliff, rolled down into the stream, and the elephant got to the edge of the cliff and almost fell on me. He got to the edge of the cliff and did one of these and then came back down on his hind feet. Picked up a stick, threw it at me. And the stick just smacked down next to me in the stream, and I remember I gave him the finger because it was like, “I’m alive.” And then he just stormed off into the jungle.
Lex Fridman
(02:31:42)
I mean, there’s nothing like an elephant.
Paul Rosolie
(02:31:43)
There’s nothing like an elephant anywhere. I loved listening… I was so excited when I put on your podcast with the dinosaur guy, because he was like, “When a baby is born, it learns, you know: elephant, giraffe, T-Rex.” And I was like, “Holy shit.” You know? Along with, like, banana, water, sky is blue. Somehow these are initial things in your first few months on Earth. These are the characters you’re introduced to. Like, how the hell did T-Rex get there? They don’t even exist anymore. It was just such a fun… and I could hear you smiling through the mic as I’m listening to it, and I was like, “Oh, this is gonna be a good one.”
Lex Fridman
(02:32:19)
Yeah. I mean, the dinosaur world is incredible. But like, the fact that you have such a predator evolve with such a gigantic jaw, so much destructive power is weird.
Paul Rosolie
(02:32:30)
And then he broke my heart because he was talking about how the T-Rex and Stegosaurus… he’s like, “All the books have them together. They’re nowhere near each other. They did not exist anywhere near each other,” and I was like…
Lex Fridman
(02:32:42)
I want them to battle with each other.
Paul Rosolie
(02:32:44)
Yes.
Lex Fridman
(02:32:46)
Speaking of elephants, I feel like we’ll be up for an adventure at some point. After all this chaos is over, do you think we go back to the jungle? Africa? India?
Paul Rosolie
(02:32:58)
I think I would love to show you a herd of truly wild elephants in the African jungle. I think going on a boat trip through the Amazon, not a hiking one, where we’re going through some really… there’s areas where you can get permits to go through areas where no one’s allowed to go. They’re completely protected areas, and you can just go for a week through areas where the animals have no idea what a human is. And so you can move through it, and it would be a little bit more of an enjoyable experience, not a survival situation. Go with JJ in a boat and just travel through the Amazon. “Hey, maybe we protect this river.” And then the river’s mapped from north to south, and we just raft down with boat support. You know?
Lex Fridman
(02:33:45)
It’s really incredible to see how it’s all connected. I mean, the river is the thread that connects the whole story. And so it’s nice to see how it all is connected. And that’s why us starting in the mountains is also really nice, to see where it begins. But it keeps going. The story keeps going.
Paul Rosolie
(02:34:01)
It keeps going. We did start in the mountains. An epic first day together.

Writing, journaling, and great writer inspirations

Lex Fridman
(02:34:07)
And hopefully, people get a chance to see that video. So I gotta ask you about the writing. I mentioned you’re an incredible writer. What’s your writing process like for this book, Jungle Keeper, for Mother of God, for future books you’re writing? Are you like a Stephen King? Do you have a drinking thing where you go to some dark places in the basement? Do you write every single day? Do you take little notes here and there? Like, your notebook has a bunch of doodles, a bunch of writing. What’s your process like?
Paul Rosolie
(02:34:44)
I try to journal every day for a number of reasons. It’s accountability. It helps me keep track of… It’s fun to see your hopes and dreams. It’s fun to record the mundane moments that we all forget about, and that might be, like, cooking in the kitchen with your mother. That might be a fun walk you had with your dog. Little things. You think you’re going to remember everything, but you just don’t. And so I have piles of notebooks in my room. When something happens, I write it down. If a cool story happens, I will write it down, or if I find a leaf from an extinct tree I will make an etching of it. But I just…
Paul Rosolie
(02:35:28)
Anything that happens that I find remarkable in any way, either for my own personal memory or for writing, I’ll write it down. And then when I go back to it later, one, I have a very good memory, and then two, the facts are there. And so when something happens like you rescue a spider monkey or something remarkable in life, you get to spend time with someone that you haven’t in a long time and you get that feeling of, “Oh, that’s why I’m such good friends with them.” You write these things down, and then it’s always there. And so I feel like whenever I don’t journal, I’m missing out on keeping my life and my memories. So yeah.
Paul Rosolie
(02:36:12)
I don’t do that Stephen King thing. That quote about how “amateurs wait for inspiration, and the professionals, we go to work every day,” and he’s like, “10 pages a day,” or whatever it is. I don’t do that. I write when I feel like it. I’ll start thinking, “Oh, this is a perfect way to start this scene,” because at the moment this happened, I felt it so intensely. If we’re bringing people in and out, I’ll just be in a car or boat, and I’ll start thinking about it and I’ll go, “This is…” You’ve just got to carpe diem. And I’ll go, “Okay, where did that happen again?” and I’ll go to that page. And I’ll go, “Okay, so what exactly happened?” Then you get the laptop, and yeah. So it’s brain to paper to laptop, always paper in between.
Lex Fridman
(02:36:57)
Well, how do you go from disparate notes to the final thing? Because it’s difficult to convey the experience through words, and you do that well. So, do you edit a lot? Do you iterate?
Paul Rosolie
(02:37:13)
That’s where Stephen King was right. Because I look at writing like sculpting. You have to have something to sculpt. And so when you’re thinking of a story… Again, I love listening to great storytellers. And I actually love listening to bad stories, just like I like watching bad movies to see what they did wrong. When you listen to someone that starts a story and they have you hooked from the second they start, and then you’re like, “Wait, but how did that happen? And why was that happening? What happened next?” and they keep you going and they drop the information perfectly. And so every now and then, you figure that out in that moment of inspiration. So then I have my facts written down here.
Paul Rosolie
(02:37:51)
And then I’ll do an outline on a page or something. But then I have to get it all out of me with a pen. Then I can move to… and I’ll almost just close my eyes and write the story out. You’re literally making your clay, making the shape of the thing. And then editing is giving it the details.
Lex Fridman
(02:38:14)
So, you do take passes like-
Paul Rosolie
(02:38:16)
Oh, my God, yes.
Lex Fridman
(02:38:17)
Oh, editing. Yeah.
Paul Rosolie
(02:38:17)
I mean, dozens and dozens. That’s where writing sucks. When you’re finishing a book… I’ll never do that again. With this last book, there was so much to cover.
Paul Rosolie
(02:38:30)
And I was in the jungle, and it’d be like hiking for 10 hours a day, dealing with narco-traffickers, all this stuff, and then I’d have to edit at night. And it was like, “This is no way to live.” So now what I’m doing is I’m writing chapters as I feel like writing chapters. When something amazing or remarkable happens, I go, “This is going to be its own chapter.” I write it, edit it, and then I send it to my sister who is an expert editor and has lived more in literature than most people live in real life. And she’ll let me know if it’s good or bad, or needs to be tweaked, or moved along. When I get it back from her, it’s marked up. And now what I’m going to do is I’m just going to put those aside.
Paul Rosolie
(02:39:10)
And then, the next time I want to write a book, it’s not starting from scratch on 300,000 words, it’s just here, and it’s ready. Much easier.
Lex Fridman
(02:39:20)
What kind of books do you think you might write in the future?
Paul Rosolie
(02:39:23)
Well, there’s Mother of God, and now there’s Jungle Keeper. And then I’m already working on Endgame. Because there’s so much that has happened. I think I told you when you were there, but right before you came, me and JJ went to the back end, behind our river—
Paul Rosolie
(02:39:40)
—to this horrible part of the Amazon that’s 10 times more lawless than where we are. And instead of having no people, there are people. And you want to talk about Amazonian No Country for Old Men? It’s the oil companies, and the missionaries, and the newly contacted tribe. There’s a people called the Nahua people, and they’re recently contacted, and they’ve been ripped out of the forest. And they’re standing there with their little bows and arrows. They’re tiny people. The normales are tall, the Nahua are small. And we just saw brutality in this horrific, horrible… It’s like Sicario. It’s just absolute lawlessness.
Paul Rosolie
(02:40:18)
I remember the moment JJ looked at me and he said—and we both think of ourselves as tough, I think, until we get in these certain situations—he looked at me and he went, “We’re not safe.” And we looked at the people around us, and we’re at this side of the river port eight days up this river, and you could tell that everyone that was looking at us was making a calculation about how inconvenient it would be to kill us at this moment and how much money they could get. They were like: camera, watch, clothing, backpack.
Paul Rosolie
(02:40:49)
And they were like, “That’s a nice backpack.” You could tell they were just shopping. JJ and me were like, “Where are we putting the tent tonight?” I was like, “We’re not staying here.” And then I was like, “Well, maybe we should stay here.” I didn’t know what to do. And then one of the Nahua people came over to JJ and was asking for food, and he made the mistake of explaining money to them. They’d never had money before.
Paul Rosolie
(02:41:11)
And so he gave them a piece of money, a couple coins. And he was like, “Oh, if you just go over there, there’s a man that’ll sell you something and then you can eat it.” And the guy was like, “Bow and arrow?” And JJ was like, “No, no. Give him this and he’ll give you food,” and it worked. And then JJ got swarmed by like 60 of these tribals; they all had bows and arrows, hands out, and JJ was running with all these half-naked people behind him. That whole saga right there is… that chapter’s going to be called River of the Dolphin Fuckers because everyone we met on the river kept telling us—
Paul Rosolie
(02:41:49)
I’d have my camera with me and I’d go, “Are there dolphins here?” And they’d go, “Yeah, there’s dolphins. And if you fuck one, be careful because they’ll pull you under.” I went, “Okay, weirdo,” to the first guy. And then we got to like eight hours further upriver, met the next guy and I had my camera out, and I’m like, “Hey, are there any dolphins here?” And he goes, “Yeah. If you fuck any, be careful because they’ll grab on and pull you under.” And I was like, “What?” And then like four more people told me the same thing. So I was like, “Okay.” You know?
Lex Fridman
(02:42:14)
The lesson we learned in the jungle: you know, horned anacondas, believe them.
Paul Rosolie
(02:42:18)
Believe them. So apparently on that river, they were all trying to be good Samaritans and warn me about the clear and present dangers involved with amorous dolphin encounters.
Lex Fridman
(02:42:28)
So stylistically, I mean, that is a bit Cormac McCarthy.
Paul Rosolie
(02:42:32)
Ooh, he would have loved it.
Lex Fridman
(02:42:33)
Are there writers you draw inspiration from like that? I mean, you’re very close to him in terms of—
Paul Rosolie
(02:42:40)
It’s too big of a compliment.
Lex Fridman
(02:42:40)
—the style you plug into every once in a while. You jump around stylistically, actually.
Paul Rosolie
(02:42:45)
I do. It depends, because sometimes I want to sink in and flex a little bit, which I don’t think people really enjoy, but I enjoy it. You know, just use all those flowery words and make these beautiful metaphors. But what I’m finding more and more is that modern readers aren’t really looking for that. They want an easy read. In my style of storytelling, people really enjoy and tend to thank me for more of an Anthony Bourdain style where you’re like, “So we found ourselves on the side of this river and we knew we were in danger. The reason we were in danger…” and you just start telling the story. Forget the… maybe once every two pages you can throw in one of those beautiful little zingers, but no one wants to watch you flex.
Lex Fridman
(02:43:30)
But also sometimes you go even more than that. I don’t think Anthony Bourdain did the Hemingway-like minimal: word, period, word, like that. That’s another way to flex that I really like, that you do sometimes, which is just less, and just power in the spacing, the silences. The unsaid is what does the driving.
Paul Rosolie
(02:43:53)
I mean, that’s what’s so arresting about it. You read For Whom the Bell Tolls, and you know, “The air was crisp, and the water was sweet, and the wine was good, and the afternoon was warm.” And you’re like, “I know what that’s like.” These are not complicated sentences, but when he puts them together into a paragraph, you go, “Oh, yeah. I want to drink wine out of leather and lie by the side of that stream.” It sounds so beautiful. And so sometimes, I mean, just look at that. Look at that fire cracking on the horizon there. And it’s like sometimes the only way is just these simple statements, you know?
Lex Fridman
(02:44:27)
Writing’s beautiful. I love writing; I love reading it. Have you interacted with LLMs much? You know, AI systems like ChatGPT? There’s a bit of a scary and a sad aspect to the fact that they can generate language extremely well. But something is missing, and it’s very hard to put your finger on it.
Paul Rosolie
(02:44:50)
My question to you is… I can pick out with stunning accuracy when someone sends me a message and they’ve passed it through ChatGPT. I know. Somehow I could tell. I don’t know how, but I could tell. We’re at the point with images where we almost can’t tell anymore. I don’t know if that’s going to go away. Like you said, there’s something… one of the things that F. Scott Fitzgerald does is describe these incredibly human moments with such crystalline accuracy that you go, “It must have taken you a month. You must have studied life so much to string those words together.” In one book, he writes about someone screaming with such abandon that at the highest register, her voice wobbled and cracked.
Paul Rosolie
(02:45:47)
And you’re like, “Oh my God, I know what that sounds like.” And I wonder if it’s because you can say, “Write me The Jungle Book but make it sound like Cormac McCarthy wrote it.” And it’ll be like, “The jungle was dark and stern, and the boy was…” It’ll do it, and it’s amazing. My question to you is, at least right now, what are we picking up on in something as simple as a text message?
Lex Fridman
(02:46:12)
It is very difficult to define. But it’s important to keep thinking about because… like, what makes us human?
Paul Rosolie
(02:46:19)
You reassured me recently because I called you and I said, “I come out of the jungle and all anybody wants to talk about is AI.” And everyone’s like… It’s like people are walking themselves into the Matrix and asking to be hooked. Everyone’s just obsessed with this topic. And you were like, “Man, human art and human literature are going to actually become so valuable as this other thing happens.” And I expected the opposite answer. I thought you were going to be like, “Yeah, man, this is really it. We’re taking off and everything’s going to change.” And you were like, “Man, real artists are going to become more appreciated.”
Lex Fridman
(02:46:57)
As more and more compelling and effective bots appear on the internet, we’re going to value that less and less, I think. And we’re going to value in-person interaction more and more. And so, you know, artists showing art at galleries versus on the internet, meeting in person. Actually, it’s going to force people to be more authentic and real and raw with each other. That’s going to be the valuable resource.
Paul Rosolie
(02:47:25)
I mean, I think already, AI aside, in today’s world, everyone’s so… I mean, movies have become so polished. There’s no weird, quirky stuff. There’s no risky stuff anymore. It’s all very curated. I’ve almost stopped watching movies. And I used to love movies. But it’s fun when they take risks, when they’re messy, when they’re real.
Lex Fridman
(02:47:49)
Yeah. I think Hollywood, the Hollywood stars, and the Hollywood movie-making process have become less and less popular because of that. So I can’t wait for movies to be reinvented—
Paul Rosolie
(02:47:59)
Oh, I can’t wait.
Lex Fridman
(02:48:00)
… like independent film. Just raw, edgy, dangerous, all that kind of stuff.
Paul Rosolie
(02:48:03)
And all the actors we like are in TV shows on various streaming platforms. It’s like they’ve all just gone home. They’re not there. I was literally like, “Man, I miss movies. What happened?” I’m re-watching all the old movies that I like, and I was like, “Where is everybody?” What are they doing? It’s like they all have a TV series on Hulu or something, you know? It’s like, “Damn.”
Lex Fridman
(02:48:28)
Yeah. I think it’ll come. The raw, the dangerous, the edgy.
Paul Rosolie
(02:48:32)
What we just described is almost perfect for… There’s a scene in Dead Poets Society where Robin Williams makes them open their books. And the first page of the poetry book is like, “How do you identify a good poem?” He’s like, “A good poem can be…” and he makes a graph. He’s like, “By the subject of the poem, and then the accuracy with which it is described, you can tell whether or not it’s a good poem.” And he reads this, and the whole class is sitting there bored. And he’s like, “Now rip that page out of your book.” And they rip the page out. And then he’s like, “Now stand up. Describe something.” And he makes them bleat it and scream it. It’s almost exactly what we’re describing right now.
Paul Rosolie
(02:49:05)
It’s like, yeah, you can turn it into a graph if you need to, but it’s something way messier than that.
Lex Fridman
(02:49:10)
Yeah. And Robin Williams, the person—
Paul Rosolie
(02:49:12)
God.
Lex Fridman
(02:49:13)
…is a perfect example of a complicated, beautiful human. I miss him. And whenever I see clips of him come up, it’s just… I still, to this day, can’t make sense of the fact that a person like that could take their own life—somebody who’s brought so much joy to the world. It scares me, man. It scares me. I’m scared of my own mind in that way, you know? That he could be at the top of the world…
Paul Rosolie
(02:49:47)
Mm-hmm. But he had an illness.
Lex Fridman
(02:49:50)
Yeah, that’s what I understand. Dude, life is a rollercoaster.
Paul Rosolie
(02:49:57)
I’m telling you.
Lex Fridman
(02:49:57)
And you’re living through it.
Paul Rosolie
(02:49:58)
As scary as that… like, that you can go down the Robin Williams hole, I’ll give you this. My very close friend, Gleb, has a story. He was in New York City as a kid and he saw Robin Williams walking down the street. He went up to him and said, “Oh my God, it’s Robin Williams.” And Robin Williams was like, “Yeah.” And he goes, “Can I have an autograph?” And he goes, “Do you have any paper?” And my friend was like, “No, I’m eleven.” And Robin Williams was like, “Go get some paper.” And Robin Williams’ manager was with him and he was like, “Robin, we don’t have time. We gotta get up there.” And he was like, “Hold on. I told the kid I’d give him a thing.”
Paul Rosolie
(02:50:32)
“He’ll be back.” And my friend heard this, and he’s thinking, “Just please stay, please stay,” like his whole life depended on it. And he ran into a diner, grabbed a napkin, and ran back out into the street. It took him a few minutes, and he said Robin Williams was sitting there, and the irate manager was there being like, “Come on, let’s go.” Robin had waited there and signed the napkin for him, and actually did it with a smile and a wink.
Lex Fridman
(02:51:00)
Yeah, man. You can bring a lot of joy to the world. Never forget that. All those little interactions. I love it. I love it.
Paul Rosolie
(02:51:07)
That was another one of Jane’s amazing quotes that I couldn’t reproduce, but it’s just that you don’t realize the degree to which the things you do each day matter, even if it’s just to the people around you. To the people around you, you are their entire life experience, if they’re your kids, your parents, your partner. So yeah, the things you do. And if you can manage to put that extra energy in, where you put a little magic on it, where it is fun—showing up home with something, or playing with the kids in a way that surprises them. I have a good friend of mine.
Paul Rosolie
(02:51:44)
This guy Vinnie, he told me—I called him and said, “What are you doing?” He said, “Oh, I have a whole plan set up. It’s supposed to be really good stars tonight. I’m putting my kids to bed. I’m putting my daughter to bed. I’m gonna wake her up in the middle of the night and I’m gonna have a candle. She’s never seen it. And I’m gonna take her up to the roof to go stargazing.” He’s like, “But I want her to sleep.” And he’s like, “You know, remember when you were a kid and you would wake up?” And it’s like he was curating a magical experience for her to see the stars, making warm tea, and all this. Man, you can just make it so great.
Lex Fridman
(02:52:18)
Jane Goodall’s the reason you met this guy.
Paul Rosolie
(02:52:20)
That’s right.
Lex Fridman
(02:52:21)
You’ve continuously spoken really highly of him, and he gave me this book that he recently wrote, Echoes from Eden. He signed it.
Paul Rosolie
(02:52:31)
Yeah. Dax, A, saved my life, and B, is the example of what everybody wishes. Dax made an amazing company, amassed an amazing fortune, and then said, “I’m gonna use it for good.”
Lex Fridman
(02:52:43)
He’s given a lot of resources, a lot of love, a lot of effort to helping the Amazon rainforest and the environment in general. And he’s one of the only guys I know who has a sexier beard than you.
Paul Rosolie
(02:52:58)
Yeah, he’s got me beat big time. That thing is—
Lex Fridman
(02:53:01)
He wrote, “Thank you, brother, for your love of the wild. This book is about the heroes fighting on the front line for nature. Together we can protect Earth’s last wild places. Speak soon, Dax.”
Paul Rosolie
(02:53:14)
He supported all these initiatives. He went to the Amazon with Jane. He supported Jungle Keepers. He’s supported Sea Shepherd. And so he really went out and said, “Okay, what are the environmental projects that are doing the most good? And where do I want to put my resources?” And everyone always whines about that, like, “How come these guys don’t?” And it’s like he did, and he got a lot done. Then he went and visited all those projects—sea turtles, Indonesian orangutans, working with Jane. So that book is sort of a State of the Union on where conservation is at, with a lot of knowledge about how all the different strategies…
Paul Rosolie
(02:53:52)
It’s so different protecting sea turtle eggs versus trying to save a river in the Amazon versus Jane’s global message of hope. And then he has a guy in there who’s trying to save a specific part of I think Sumatra, and it’s just amazing stuff.
Lex Fridman
(02:54:07)
The Congo.
Paul Rosolie
(02:54:08)
The Congo. And then he actually took the time to go to these places and see the operations on the ground.
Lex Fridman
(02:54:14)
And are you still working with him?
Paul Rosolie
(02:54:16)
Yeah. Well, the way it happened in my life was the one time I quit conservation was right around the time COVID hit and I was going through a divorce. I was 32 years old, and I had no job, no nothing. JJ’s mom had COVID. Don Ignacio, the shaman, had COVID. Pico’s leg was coming off. It was like nothing was working. Nobody could go anywhere. And I called Mohsin and I was like, “I quit.” I was like, “We’re never going back to the jungle.” The loggers just went out and were tearing down everything. I just said, “I’ve got nothing.” In that absolute black depression, I called him and I said, “I quit.”
Paul Rosolie
(02:55:02)
“I’m gonna go get a job. I guess I’ve just been like a jungle Peter Pan, and it’s time to grow up.” I was really embarrassed at the time that I did that. And then I spent like four days just lying in bed with no idea what to do. The only thing I can do is this. And I had talked to Dax months earlier, told him my plan for protecting the river, for making a ranger team, and he’d been looking over the budgets and spreadsheets and seeing if this was real. He was still forming Age of Union. And then four days after I quit, the phone rings and it’s Dax. And he goes, “Hey, I looked over the budget, by the way. I’d like to make a 10-year commitment to Jungle Keepers. Let’s go.”
Paul Rosolie
(02:55:45)
He had no idea what I was going through, and he was just like, “Let’s go.” Going from that depressed to that inspired in a single conversation… you could get the bends from that.
Lex Fridman
(02:55:57)
Yeah, and it’s not just the money. It’s having somebody who believes in you.
Paul Rosolie
(02:56:01)
No, it’s that he believes we can do it. Money means tuna cans and gasoline and being able to buy shoes, you know? We never had those things before. We were just living in the jungle watching our bodies decay. And he was like, “No, I know how to run a company. I can tell what you guys need to run an organization.” And he did that and has stuck by us. He came not that long ago to the Amazon, and we took him around, and he looked around and he went, “I’ve never seen people…” Because when we started, he said, “You guys remind me of a startup. You’re a mess.” And that was really right before Stefan had come in.
Paul Rosolie
(02:56:43)
And so now, he’s seeing ranger teams and boats going up and down. We have complex systems and a donor program, and all these things are working well. We’re actually making progress and we have annual reports and all this data. And he’s like, “People have donor fatigue, where they donate money and they don’t know where it’s going. Here, they can see what’s happening.” And so having someone like Dax in your corner is a miracle, really. In the book, it’s gonna sound… again, a lot of the things that happened to me in my life sound like bad writing.
Paul Rosolie
(02:57:16)
You know in the movie when they’ve got the gun against their head and they’re on the ground, and you go, “They’re not getting out of this one.” And then someone bursts through the door and saves them. That’s just happened too many times to me. It sounds like bad writing, but it’s a really good life.
Lex Fridman
(02:57:31)
Since you mentioned Stefan one more time, one of the things I forgot to mention, one of my happiest moments in life, and I had many of them in the jungle with you, is just talking late at night after ayahuasca, funny enough, chatting with Stefan and Dan and you, and giggling and just talking about life and everything. And Dan is a guy I have to give a shout-out to. You should go follow him on Instagram: Life with Dan. He’s an incredible wildlife photographer. I’ve seen him. He’s worked quite a lot with you. He has a love of nature, a love of the wilderness, a love of beauty, and is extremely good at taking pictures, but just goes to the edges with you. He’s the only guy I’ve seen with two giant cameras able to follow you into the darkness.
Paul Rosolie
(02:58:24)
Well, Dan… First of all, that picture I showed you where I’m in the tree, because I told you the story with JJ, where I climbed the giant tree. Well, years later, I climbed it with Dan. Dan was there, and so he flew the drone up and got me in the tree. But what Dan’s a really good example of is, like you were saying, what would you say to the kids? Dan listened to our first podcast while living in Singapore, and he’s a young filmmaker.
Paul Rosolie
(02:58:53)
He signed himself up—again, just get out there—to come on an expedition with my company, Tamandua Expeditions, and he showed up. Sure enough, their boat broke down while I was off doing Jungle Keepers stuff, and someone was like, “Yo, their boat broke down.” So we show up and I haul their boat, and he comes up to me and goes, “I’m such a big fan. I just wanted to say hi.” I said, “Well, great. Hello. Let’s get you back on the river.” And then someone came up to me and they said, “You know, he’s a really good photographer.” I said, “Everybody’s a good photographer today. That’s great. Amazing.”
Paul Rosolie
(02:59:25)
“We have Stefan and Mohsin.” I said, “What else do we need?” And then someone I trust was like, “Hey, listen, look at his stuff. It’s not normal.” And then I watched a few of his videos and I went, “Holy shit.” And I went, “Would you ever think of coming down for a few weeks to film?” And at the time, he was like, “No way.” He was so amazed. And now we’re bros. And we film together all the time. But he put himself in the position where he has the skill, the insane skill. I mean, some of his things, he’s doing tracking shots of a white-winged swallow over the water where he’s in the boat with an 800-millimeter lens, getting these insane shots. I’ve never seen a talent like him with video.
Lex Fridman
(03:00:12)
But wildlife photography and documentary filmmaking in general, it’s not just about the competence of being able to pull off a difficult shot. It’s the patience required and the discipline to just sit there and wait. I mean, when we went out into the jungle, he waited.
Paul Rosolie
(03:00:32)
Yeah. No, I mean, even looking on this page, that shot of the emerald tree boa there, he got up before dawn to wait for the sideways light because he had a vision of lighting the snake from the side, and then the macaws coming off the clay lick. How many days at the clay lick till he got the explosion of macaws? And I’m up in the tree and he’s on the walkie-talkie. And then also your lenses are gonna fog. You have to be able to hike and do everything everybody else is doing, and your job. I mean, the dude is…
Lex Fridman
(03:01:04)
You attract a lot of incredible people because the mission is clear and there’s just a vibrancy and energy to the whole thing. It’s exciting. That’s why the best people come to work with you, come to hang out with you.
Paul Rosolie
(03:01:17)
It’s become an amazing team. I look around at the people and I go, “How did this happen?”
Lex Fridman
(03:01:24)
But it is getting more intense and dangerous and so on. I have to ask you the thing we’ve talked about. What do you think you’ll do when you’re getting older? This is pretty intense. This is pretty insane. Where do you see yourself years from now?
Paul Rosolie
(03:01:39)
I want to protect this river. We have to protect this river in the next year and a half or else we’ll lose the chance. First book, I got to the Amazon and it was wild. Second book, we built this amazing organization and we got so close. It’ll be like those movies, like Blow, where it’s like, “For a time it was amazing,” and then at the end it’s not so great.
Lex Fridman
(03:02:00)
By the way, great movie.
Paul Rosolie
(03:02:01)
Great movie. But I’m writing this story as it happens and the endgame might be written by somebody else. Or we just got really close and then it all fell apart. But we’re 130,000 acres of the way. If we make it to 300,000, I think enough people are going to learn about this. It’s going to tidal wave. We’re going to make an amazing documentary about how we protected the wildest place on earth. And then I would love to have a few kids, get a PhD, teach other conservationists around the world how to do this to save really wild places, keep inspiring people, keep writing books, and keep going on expeditions. I don’t have any problems with that.
Paul Rosolie
(03:02:48)
I can’t do this much longer because the pressure of wondering if it’s going to be okay—I’ve used all of it that I can. My Lord of the Rings analogy of carrying the ring, it’s like you can only do that for so long. And so I’m actually very excited to… I need to know that it’s safe. I mean, that monkey that I rescued out of the river, the toucan, Lucas, who comes back to visit us. We just saw a giant anteater not that long ago with Dax in the jungle. I know these animals and I’m responsible for protecting their home. It would be so amazing to bring people to the treehouse and show them this amazing place and put out documentaries. So I have no problem imagining a transition period. I’d like to transition out of Blood Diamond and go to more of the professor role after this.
Lex Fridman
(03:03:45)
You mean like an Indiana Jones type of professor?
Paul Rosolie
(03:03:48)
Yeah. Running from the tribes. As long as it doesn’t go supernatural at the end, I’ll be very happy. That always kind of let me down.
Lex Fridman
(03:03:58)
Well, thank you for giving basically everything you’ve got towards this mission. And thank you for being who you are. It’s been the honor of a lifetime to be able to call you a friend and to have this conversation, brother. This is the third time we’ve spoken. I think we’ll talk at least 10 more times, and I think I speak for everybody in saying thank you, and please don’t die trying to save the rainforest.
Paul Rosolie
(03:04:30)
I have to say thank you to you because our first conversation changed everything. It really did. It brought so many more people onto the mission. I think it also lifted me up because, as we often acknowledge, this can weigh you down. I often do get weighed down and I lose hope myself. And then I get lifted up by moments like that where someone I’m a huge fan of and who I respect so much reaches out and goes, “Do you want to come to Austin and do this podcast I do?” And I respond, “The Lex Fridman podcast?” But you’ve really changed the narrative and allowed this to be a reality.
Lex Fridman
(03:05:18)
And everybody, go pre-order Jungle Keeper, the book, available everywhere. And if you can, donate on junglekeepers.org. This is an important mission, an ultra-competent team, and this is such a beautiful part of the world that I really hope we protect. So thank you for talking today, and now let’s go eat.
Paul Rosolie
(03:05:45)
Thank you, brother.
Lex Fridman
(03:05:47)
Thanks for listening to this conversation with Paul Rosolie. To support this podcast, please check out our sponsors in the description, or you can also find links to contact me, ask questions, give feedback, and so on. And once more, let me say thank you for everything. Thank you for your support. Thank you for the love. And thank you for listening. I hope to see you next time.

#488 – Infinity, Paradoxes that Broke Mathematics, Gödel Incompleteness & the Multiverse – Joel David Hamkins

Joel David Hamkins is a mathematician and philosopher specializing in set theory, the foundations of mathematics, and the nature of infinity, and he’s the highest-rated user on MathOverflow. He is also the author of several books, including Proof and the Art of Mathematics and Lectures on the Philosophy of Mathematics. And he has a great blog called Infinitely More.
Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep488-sc
See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript:
https://lexfridman.com/joel-david-hamkins-transcript

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Joel’s X: https://x.com/JDHamkins
Joel’s Website: https://jdh.hamkins.org
Joel’s Substack: https://www.infinitelymore.xyz
Joel’s MathOverflow: https://mathoverflow.net/users/1946/joel-david-hamkins
Joel’s Papers: https://jdh.hamkins.org/publications
Joel’s Books:
Lectures on the Philosophy of Mathematics: https://amzn.to/3MThaAt
Proof and the Art of Mathematics: https://amzn.to/3YACc9A

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Perplexity: AI-powered answer engine.
Go to https://www.perplexity.ai/
Fin: AI agent for customer service.
Go to https://fin.ai/lex
Miro: Online collaborative whiteboard platform.
Go to https://miro.com/
CodeRabbit: AI-powered code reviews.
Go to https://coderabbit.ai/lex
Chevron: Reliable energy for data centers.
Go to https://chevron.com/power
Shopify: Sell stuff online.
Go to https://shopify.com/lex
LMNT: Zero-sugar electrolyte drink mix.
Go to https://drinkLMNT.com/lex
MasterClass: Online classes from world-class experts.
Go to https://masterclass.com/lexpod

OUTLINE:
(00:00) – Introduction
(01:58) – Sponsors, Comments, and Reflections
(15:40) – Infinity & paradoxes
(1:02:50) – Russell’s paradox
(1:15:57) – Gödel’s incompleteness theorems
(1:33:28) – Truth vs proof
(1:44:52) – The Halting Problem
(2:00:45) – Does infinity exist?
(2:18:19) – MathOverflow
(2:22:12) – The Continuum Hypothesis
(2:31:58) – Hardest problems in mathematics
(2:41:25) – Mathematical multiverse
(3:00:18) – Surreal numbers
(3:10:55) – Conway’s Game of Life
(3:13:11) – Computability theory
(3:23:04) – P vs NP
(3:26:21) – Greatest mathematicians in history
(3:40:05) – Infinite chess
(3:58:24) – Most beautiful idea in mathematics